1. Carlini A, Bordeau C, Ambard M. Auditory localization: a comprehensive practical review. Front Psychol 2024; 15:1408073. PMID: 39049946; PMCID: PMC11267622; DOI: 10.3389/fpsyg.2024.1408073.
Abstract
Auditory localization is a fundamental ability that allows us to perceive the spatial location of a sound source in the environment. The present work provides a comprehensive overview of the mechanisms and acoustic cues used by the human perceptual system to achieve accurate auditory localization. Acoustic cues derive from the physical properties of sound waves, and many factors enable and influence auditory localization abilities. This review presents the monaural and binaural perceptual mechanisms involved in auditory localization in three dimensions. Besides the main mechanisms of interaural time difference (ITD), interaural level difference (ILD), and the head-related transfer function (HRTF), secondary but important elements such as reverberation and motion are also analyzed. For each mechanism, the perceptual limits of localization abilities are presented. A section is specifically devoted to reference systems in space and to the pointing methods used in experimental research. Finally, some cases of misperception and auditory illusion are described. More than a simple description of the perceptual mechanisms underlying localization, this paper is also intended to provide practical information for experiments and work in the auditory field.
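The binaural cues at the heart of such reviews lend themselves to a worked example. Below is a minimal sketch computing the ITD with the classic Woodworth spherical-head approximation; the formula is textbook material rather than anything taken from this paper, the helper name is mine, and the head radius and speed of sound are standard assumed values.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (ITD) for a far-field source; valid for |azimuth| <= 90 deg.
    azimuth_deg: 0 = straight ahead, 90 = directly to one side."""
    theta = np.radians(azimuth_deg)
    # Extra path length around the head: r * (theta + sin(theta)).
    return (head_radius_m / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {woodworth_itd(az) * 1e6:5.0f} us")
# 90 deg yields roughly 660 us for an 8.75 cm head radius, close to the
# maximum ITD commonly cited for human listeners.
```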
2. Pardhan S, Raman R, Moore BCJ, Cirstea S, Velu S, Kolarik AJ. Effect of early versus late onset of partial visual loss on judgments of auditory distance. Optom Vis Sci 2024; 101:393-398. PMID: 38990237; DOI: 10.1097/opx.0000000000002125.
Abstract
SIGNIFICANCE: It is important to know whether early-onset and late-onset vision loss are associated with differences in the estimation of the distances of sound sources within the environment. People with vision loss rely heavily on auditory cues for path planning, safe navigation, avoiding collisions, and activities of daily living. PURPOSE: Loss of vision can lead to substantial changes in auditory abilities. It is unclear whether differences in sound distance estimation exist among people with early-onset partial vision loss, late-onset partial vision loss, and normal vision. We investigated distance estimates for a range of sound sources and auditory environments in groups of participants with early- or late-onset partial visual loss and sighted controls. METHODS: Fifty-two participants heard static sounds with virtual distances ranging from 1.2 to 13.8 m within a simulated room. The room simulated either anechoic (no echoes) or reverberant environments. Stimuli were speech, music, or noise. Single sounds were presented, and participants reported the estimated distance of the sound source. Each participant took part in 480 trials. RESULTS: Analysis of variance showed significant main effects of visual status (p<0.05), environment (reverberant vs. anechoic, p<0.05), and stimulus (p<0.05). Significant differences (p<0.05) were found in the estimation of distances of sound sources between early-onset visually impaired participants and sighted controls at closer distances for all conditions except the anechoic speech condition, and at middle distances for all conditions except the reverberant speech and music conditions. Late-onset visually impaired participants and sighted controls showed similar performance (p>0.05). CONCLUSIONS: The findings suggest that early-onset partial vision loss results in significant changes in judged auditory distance in different environments, especially for close and middle distances. Late-onset partial visual loss has less of an impact on the ability to estimate the distance of sound sources. The findings are consistent with the perceptual restructuring hypothesis, a theoretical framework recently proposed to account for the effects of vision loss on audition.
Affiliation(s)
- Saranya Velu
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Eye Hospital, Chennai, India
3. Lavandier M, Heine L, Perrin F. Comparing the Auditory Distance and Externalization of Virtual Sound Sources Simulated Using Nonindividualized Stimuli. Trends Hear 2024; 28:23312165241285695. PMID: 39435499; PMCID: PMC11500226; DOI: 10.1177/23312165241285695.
Abstract
When sounds are reproduced over headphones, the simulated source can be externalized (i.e., perceived outside the head) or internalized (i.e., perceived within the head). Is this because it is perceived as more or less distant? To investigate this question, 18 participants evaluated distance and externalization for three types of sound (speech, piano, helicopter) in 27 conditions using nonindividualized stimuli. Distance and externalization ratings were significantly correlated across conditions and listeners, and when averaged across listeners or conditions. However, they were also decoupled in some circumstances: (1) sound type had different effects on distance and externalization: the helicopter was evaluated as more distant, while speech was judged as less externalized; (2) distance estimates increased with simulated distance even for stimuli judged as internalized; (3) diotic reverberation influenced distance but not externalization. Overall, a source was not rated as externalized simply whenever its perceived distance exceeded a threshold (e.g., the head radius). These results suggest that distance and externalization are correlated but might not be aspects of a single perceptual continuum. In particular, a virtual source might be judged as internalized yet still assigned a distance. Hence, it could be important to avoid using a distance-related scale when evaluating externalization.
Affiliation(s)
- Mathieu Lavandier
- ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, Vaulx-en-Velin, France
- Lizette Heine
- ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, Vaulx-en-Velin, France
- Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, INSERM, U1028, CNRS, UMR5292, Lyon, France
- Fabien Perrin
- Audition Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, INSERM, U1028, CNRS, UMR5292, Lyon, France
4. Sitdikov VM, Gvozdeva AP, Andreeva IG. A quick method for determining the relative minimum audible distance using sound images. Atten Percept Psychophys 2023; 85:2718-2730. PMID: 36949259; DOI: 10.3758/s13414-023-02663-y.
Abstract
Auditory localization plays an essential role in various tasks, including spatial orientation, locomotion, attention, and memory. Optimizing the experimental routine is important for preliminary assessment of a subject's sound localization ability. In the present study, a new quick technique for estimating the relative minimum audible distance (RMAD) using sound images is introduced. Twenty adults with normal hearing took part in six RMAD measurements in the free field. Reference RMAD values were obtained using the method of constant stimuli by physically positioning a real sound source. The same method was then used with stationary sound images created by superposition of signals emitted by two loudspeakers. To optimize the measurements, RMADs were also determined for the sound images using two adaptive psychoacoustic procedures, the one-down, one-up and two-down, one-up staircases. The group-average RMADs obtained by the method of constant stimuli for both types of stimuli and by the two adaptive procedures were similar, at 7% (SD = 2%). Whether subjects were sighted or blindfolded had no significant effect on RMAD measurements with sound images. The average measurement times were 373 s (SD = 20 s) for the method of constant stimuli, 85 s (SD = 9 s) for the one-down, one-up procedure, and 124 s (SD = 14 s) for the two-down, one-up procedure. The results are consistent with previous studies and confirm the validity of RMAD measurements using adaptive procedures with stationary sound images as a quick method.
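For readers unfamiliar with the two adaptive procedures named above, here is a minimal transformed-staircase sketch. The step size, starting level, stopping rule, and the simulated listener are illustrative assumptions, not the authors' parameters; 1-down/1-up tracks the 50% point of the psychometric function, and 2-down/1-up converges near the 70.7% point.

```python
import random

def staircase(n_down, correct_prob, start=20.0, step=2.0, n_reversals=8):
    """Generic n-down/one-up staircase for some discrimination level
    (e.g., a percent distance increment). `correct_prob` simulates the
    listener: it maps the current level to P(correct)."""
    level, run_correct, last_dir, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        correct = random.random() < correct_prob(level)
        if correct:
            run_correct += 1
            if run_correct == n_down:        # n correct in a row -> harder
                run_correct = 0
                if last_dir == +1:           # direction change = reversal
                    reversals.append(level)
                level, last_dir = max(level - step, 0.5), -1
        else:                                # any error -> easier
            run_correct = 0
            if last_dir == -1:
                reversals.append(level)
            level, last_dir = level + step, +1
    return sum(reversals) / len(reversals)   # threshold estimate

# Simulated listener whose performance improves as the increment grows:
p = lambda lvl: 0.5 + 0.5 * min(lvl, 10.0) / 10.0
print("2-down/1-up threshold ~", round(staircase(2, p), 1))
```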
Affiliation(s)
- V M Sitdikov
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia
- A P Gvozdeva
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia
- I G Andreeva
- Laboratory of Comparative Sensory Physiology, Sechenov Institute of Evolutionary Physiology and Biochemistry of Russian Academy of Sciences, Saint Petersburg, Russia.
5. Hüg MX, Bermejo F, Tommasini FC, Di Paolo EA. Effects of guided exploration on reaching measures of auditory peripersonal space. Front Psychol 2022; 13:983189. PMID: 36337523; PMCID: PMC9632294; DOI: 10.3389/fpsyg.2022.983189.
Abstract
Despite the recognized importance of bodily movements in spatial audition, few studies have integrated action-based protocols with spatial hearing in the peripersonal space. Recent work shows that tactile feedback and active exploration allow participants to improve performance in auditory distance perception tasks. However, the role of the different aspects involved in the learning phase, such as voluntary control of movement, proprioceptive cues, and the possibility of self-correcting errors, is still unclear. We studied the effect of guided reaching exploration on perceptual learning of auditory distance in the peripersonal space. We implemented a pretest-posttest experimental design in which blindfolded participants had to reach for a sound source located in this region. They were divided into three groups differentiated by the intermediate training phase: Guided, in which an experimenter guided the participant's arm to contact the sound source; Active, in which the participant freely explored the space until contacting the source; and Control, with no tactile feedback. The effects of exploration feedback on auditory distance perception in the peripersonal space were heterogeneous. Both the Guided and Active groups changed their performance; however, participants in the Guided group tended to overestimate distances more than those in the Active group. The response error of the Guided group corresponds to a generalized calibration criterion over the entire range of reachable distances, whereas the Active group made different adjustments for proximal and distal positions. The results suggest that guided exploration can induce changes in the boundary of the auditory reachable space. We postulate that aspects of agency such as initiation, control, and monitoring of movement assume different degrees of involvement in guided and active tasks, reinforcing a non-binary approach to the question of activity-passivity in perceptual learning and supporting a complex view of the phenomena involved in action-based learning.
Affiliation(s)
- Mercedes X. Hüg
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, Córdoba, Argentina
- Correspondence: Mercedes X. Hüg
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, Córdoba, Argentina
- Fabián C. Tommasini
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Ezequiel A. Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS Research Center for Life, Mind and Society, University of the Basque Country, San Sebastián, Spain
- Department of Informatics, University of Sussex, Brighton, United Kingdom
6. Lohse M, Zimmer-Harwood P, Dahmen JC, King AJ. Integration of somatosensory and motor-related information in the auditory system. Front Neurosci 2022; 16:1010211. PMID: 36330342; PMCID: PMC9622781; DOI: 10.3389/fnins.2022.1010211.
Abstract
An ability to integrate information provided by different sensory modalities is a fundamental feature of neurons in many brain areas. Because visual and auditory inputs often originate from the same external object, which may be located some distance away from the observer, the synthesis of these cues can improve localization accuracy and speed up behavioral responses. By contrast, multisensory interactions occurring close to the body typically involve a combination of tactile stimuli with other sensory modalities. Moreover, most activities involving active touch generate sound, indicating that stimuli in these modalities are frequently experienced together. In this review, we examine the basis for determining sound-source distance and the contribution of auditory inputs to the neural encoding of space around the body. We then consider the perceptual consequences of combining auditory and tactile inputs in humans and discuss recent evidence from animal studies demonstrating how cortical and subcortical areas work together to mediate communication between these senses. This research has shown that somatosensory inputs interface with and modulate sound processing at multiple levels of the auditory pathway, from the cochlear nucleus in the brainstem to the cortex. Circuits involving inputs from the primary somatosensory cortex to the auditory midbrain have been identified that mediate suppressive effects of whisker stimulation on auditory thalamocortical processing, providing a possible basis for prioritizing the processing of tactile cues from nearby objects. Close links also exist between audition and movement, and auditory responses are typically suppressed by locomotion and other actions. These movement-related signals are thought to cancel out self-generated sounds, but they may also affect auditory responses via the associated somatosensory stimulation or as a result of changes in brain state. Together, these studies highlight the importance of considering both multisensory context and movement-related activity in order to understand how the auditory cortex operates during natural behaviors, paving the way for future work to investigate auditory-somatosensory interactions in more ecological situations.
7. Gaveau V, Coudert A, Salemme R, Koun E, Desoche C, Truy E, Farnè A, Pavani F. Benefits of active listening during 3D sound localization. Exp Brain Res 2022; 240:2817-2833. PMID: 36071210; PMCID: PMC9587935; DOI: 10.1007/s00221-022-06456-x.
Abstract
In everyday life, sound localization entails more than just the extraction and processing of auditory cues. When determining sound position in three dimensions, the brain also considers the available visual information (e.g., visual cues to sound position) and resolves perceptual ambiguities through active listening behavior (e.g., spontaneous head movements while listening). Here, we examined to what extent spontaneous head movements improve sound localization in 3D (azimuth, elevation, and depth) by comparing static and active listening postures. To this aim, we developed a novel approach to sound localization based on sounds delivered in the environment, brought into alignment thanks to a VR system. Our system proved effective for delivering sounds at predetermined and repeatable positions in 3D space, without imposing a physically constrained posture and with minimal training. In addition, it allowed participant behavior (hand, head, and eye position) to be measured in real time. We report that active listening improved 3D sound localization, primarily by improving the accuracy and reducing the variability of responses in azimuth and elevation. The more participants made spontaneous head movements, the better their 3D sound localization performance. Thus, we provide proof of concept of a novel approach to the study of spatial hearing, with potential for clinical and industrial applications.
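A small sketch of how 3D localization responses of this kind can be scored: convert target and response positions from head-centered Cartesian coordinates into azimuth, elevation, and distance, then summarize bias (accuracy) and spread (variability) per dimension. The coordinate convention, helper name, and sample numbers below are assumptions for illustration, not data or code from the study.

```python
import numpy as np

def to_spherical(xyz):
    """Head-centered Cartesian (x = ahead, y = left, z = up) -> azimuth,
    elevation (degrees), and distance (m). Conventions are assumed."""
    x, y, z = xyz
    dist = np.linalg.norm(xyz)
    azim = np.degrees(np.arctan2(y, x))   # wrap-around near +/-180 ignored here
    elev = np.degrees(np.arcsin(z / dist))
    return azim, elev, dist

targets   = np.array([[2.0, 0.5, 0.3], [1.5, -0.8, 0.0]])   # made-up positions
responses = np.array([[1.8, 0.7, 0.2], [1.6, -0.5, 0.1]])

errs = [np.subtract(to_spherical(r), to_spherical(t))
        for r, t in zip(responses, targets)]
print("bias   (az, el, dist):", np.round(np.mean(errs, axis=0), 2))  # accuracy
print("spread (az, el, dist):", np.round(np.std(errs, axis=0), 2))   # variability
```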
Affiliation(s)
- V Gaveau
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, 16 Av. Doyen Lépine, BRON cedex, 69500, Lyon, France
- University of Lyon 1, Lyon, France
- A Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- R Salemme
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Koun
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- C Desoche
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- E Truy
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- ENT Departments, Hôpital Femme-Mère-Enfant and Edouard Herriot University Hospitals, Lyon, France
- A Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Neuro-immersion, Lyon, France
- F Pavani
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
8. Zahorik P. Asymmetric visual capture of virtual sound sources in the distance dimension. Front Neurosci 2022; 16:958577. PMID: 36117637; PMCID: PMC9475063; DOI: 10.3389/fnins.2022.958577.
Abstract
Visual capture describes the tendency of a sound to be mislocalized to the location of a plausible visual target. This effect, also known as the ventriloquist effect, has been extensively studied in humans, but primarily for mismatches in the angular direction between auditory and visual targets. Here, visual capture was examined in the distance dimension using a single visual target (an un-energized loudspeaker) and invisible virtual sound sources presented over headphones. The sound sources were synthesized from binaural impulse-response measurements at distances ranging from 1 to 5 m (0.25 m steps) in the semi-reverberant room (7.7 × 4.2 × 2.7 m) in which the experiment was conducted. Listeners (n = 11) were asked whether or not the auditory target appeared to be at the same distance as the visual target. Within a block of trials, the visual target was placed at a fixed distance of 1.5, 3, or 4.5 m, and the auditory target varied randomly from trial to trial over the sample of measurement distances. The resulting psychometric functions were generally consistent with visual capture in distance, but the capture was asymmetric: sound sources behind the visual target were more strongly captured than sources in front of it. This asymmetry is consistent with previous reports in the literature and is shown here to be well predicted by a simple model of sensory integration and decision in which perceived auditory space is compressed logarithmically in distance and has lower resolution than perceived visual space.
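The closing model can be illustrated with a toy computation: if perceived auditory distance lives on a compressed logarithmic scale with coarser resolution than visual distance, then sources metrically behind the visual target sit closer to it in log units than equally offset nearer sources, producing the reported asymmetry. All parameter values below are invented for illustration; this is a sketch of the idea, not the author's fitted model.

```python
import numpy as np
from scipy.stats import norm

def p_same(d_aud, d_vis, sigma_aud=0.35, sigma_vis=0.05, criterion=0.25):
    """Toy log-compressed integration/decision model: both distances are
    represented on a log scale, audition with much coarser resolution;
    'same distance' is reported when the internal difference is likely
    to fall within a fixed criterion. Parameters are illustrative only."""
    sigma = np.hypot(sigma_aud, sigma_vis)
    delta = np.log(d_aud) - np.log(d_vis)
    return norm.cdf((criterion - delta) / sigma) - norm.cdf((-criterion - delta) / sigma)

d_vis = 3.0
for d_aud in (2.0, 2.5, 3.0, 3.5, 4.0):
    print(f"auditory {d_aud:.1f} m vs visual 3 m: P(same) = {p_same(d_aud, d_vis):.2f}")
# Equal metric offsets are unequal on a log scale: log(4/3) = 0.29 while
# |log(2/3)| = 0.41, so sources *behind* the visual target yield higher
# P(same), i.e., the asymmetric capture described in the abstract.
```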
Affiliation(s)
- Pavel Zahorik
- Department of Otolaryngology and Communicative Disorders, Heuser Hearing Institute, University of Louisville, Louisville, KY, United States
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, United States
- Correspondence: Pavel Zahorik
9. Steffens H, Schutte M, Ewert SD. Acoustically driven orientation and navigation in enclosed spaces. J Acoust Soc Am 2022; 152:1767. PMID: 36182293; DOI: 10.1121/10.0013702.
Abstract
Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by "teleporting" in steps on a grid in the invisible environment. A head mounted display showed control elements and the environment in a visual reference condition. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in view or hand direction or in a rotating, "radar"-like fashion in 90° steps to all sides. Time to complete, number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound with little effect of room acoustics and control mode. Underlying acoustic cues were analyzed using an auditory model.
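As a rough picture of the "auditory ray" control scheme described above, the sketch below casts a 2D ray from the listener and returns the nearest wall intersection, where the virtual sound source would be placed. The 2D simplification, corridor dimensions, and helper name are assumptions for illustration; the study itself used full 3D real-time room acoustics.

```python
import math

def ray_hit(pos, angle_deg, walls):
    """Nearest intersection of a 2D ray with axis-aligned wall segments;
    a minimal stand-in for the auditory ray described above.
    walls: list of ((x1, y1), (x2, y2)) segments."""
    dx, dy = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    best = None
    for (x1, y1), (x2, y2) in walls:
        if x1 == x2:                              # vertical wall
            if abs(dx) < 1e-9: continue
            t = (x1 - pos[0]) / dx
            y = pos[1] + t * dy
            ok, hit = min(y1, y2) <= y <= max(y1, y2), (x1, y)
        else:                                     # horizontal wall
            if abs(dy) < 1e-9: continue
            t = (y1 - pos[1]) / dy
            x = pos[0] + t * dx
            ok, hit = min(x1, x2) <= x <= max(x1, x2), (x, y1)
        if ok and t > 1e-9 and (best is None or t < best[0]):
            best = (t, hit)
    return best                                    # (distance, source position)

# I-shaped corridor, 2 m wide and 10 m long; listener at (1, 2).
corridor = [((0, 0), (0, 10)), ((2, 0), (2, 10)), ((0, 0), (2, 0)), ((0, 10), (2, 10))]
for heading in (0, 90, 180, 270):                  # "radar" mode: 90-degree steps
    print(heading, "deg ->", ray_hit((1.0, 2.0), heading, corridor))
```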
Affiliation(s)
- Henning Steffens
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Michael Schutte
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
10. Best Distance Perception in Virtual Audiovisual Environment. Comput Intell Neurosci 2022; 2022:6010667. PMID: 35800684; PMCID: PMC9256371; DOI: 10.1155/2022/6010667.
Abstract
Auditory distance perception is important for virtual sound production and is affected by many factors. To investigate the effect of visual cues on auditory distance perception, and to determine the best auditory distance perception in the presence of a virtual sound source, two experiments were conducted. The first experiment showed no obvious difference between auditory distance perception in the presence and in the absence of a virtual sound source, but visual cues decreased the fluctuation of the perception. In the second experiment, the attenuation in SPL of the initial sound signal that made subjects perceive the best auditory distance in the presence of a virtual sound source was measured at distances of 4, 6, 8, 10, and 12 m, and the measured attenuation was compared with theoretical calculations.
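The theoretical attenuation such a comparison would use is presumably the free-field inverse-square law, under which sound pressure level falls by 20·log10(d/d_ref) dB, i.e., about 6 dB per doubling of distance. The snippet below tabulates it for the distances in the abstract; this is the standard textbook relation, assumed here rather than quoted from the paper.

```python
import math

def spl_attenuation_db(distance_m, ref_distance_m=1.0):
    """Free-field inverse-square-law attenuation relative to a reference
    distance: each doubling of distance lowers SPL by ~6 dB."""
    return 20.0 * math.log10(distance_m / ref_distance_m)

for d in (4, 6, 8, 10, 12):
    print(f"{d:2d} m: attenuate source level by {spl_attenuation_db(d):5.1f} dB")
# 4 m -> 12.0 dB, 8 m -> 18.1 dB, 12 m -> 21.6 dB re the level at 1 m.
```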
11. Loss of audiovisual facilitation with age occurs for vergence eye movements but not for saccades. Sci Rep 2022; 12:4453. PMID: 35292652; PMCID: PMC8924254; DOI: 10.1038/s41598-022-08072-9.
Abstract
Though saccade and vergence eye movements are fundamental for everyday life, the way these movements change as we age has not been sufficiently studied. The present study examines the effect of age on vergence and saccade eye movement characteristics (latency, peak and average velocity, amplitude) and on audiovisual facilitation. We compare the results for horizontal saccades and vergence movements toward visual and audiovisual targets in a young group of 22 participants (mean age 25 ± 2.5) and an elderly group of 45 participants (mean age 65 ± 6.9). The results show that, with increased age, the latency of all eye movements increases, average velocity decreases, the amplitude of vergence decreases, and audiovisual facilitation collapses for vergence eye movements in depth but is preserved for saccades. There is no effect on peak velocity, suggesting that, although the sensory and attentional mechanisms controlling the motor system do age, the motor system itself does not. The loss of audiovisual facilitation along the depth axis can be attributed to a physiological decrease in the capacity for sound localization in depth with age, while left/right sound localization coupled with saccades is preserved. The results bring new insight into the effects of aging on multisensory control and attention.
12. Auditory distance perception in front and rear space. Hear Res 2022; 417:108468. DOI: 10.1016/j.heares.2022.108468.
13. Cui P, Zhang J, Li TT. Research on Acoustic Environment in the Building of Nursing Homes Based on Sound Preference of the Elderly People: A Case Study in Harbin, China. Front Psychol 2021; 12:707457. PMID: 34744868; PMCID: PMC8563576; DOI: 10.3389/fpsyg.2021.707457.
Abstract
Nursing homes are the facilities where the elderly conduct their daily activities, which can produce a complicated acoustic environment that potentially affects their ability to function. In this study, the main indoor public spaces of a nursing home in Harbin were taken as the research object, and field observation, sound measurement, and a questionnaire survey were used to explore the sound perception and preferences of the elderly. The results revealed that, in the temporal and spatial distribution of sound pressure level (SPL), the unit living space had the highest SPL, above 60 dB(A). The reverberation time (RT) of the unit living space and of the medical and health care center corridor was 2.15 and 2.13 s, respectively, at 1,000 Hz, which is within the discomfort range. The results also revealed that acoustic-environment comfort had a strong correlation with humidity and a weak correlation with temperature, while no significant correlation was found with the luminous environment. The elderly were generally willing to accept natural sound sources. Gender and number of offspring had no significant impact on evaluations of acoustic comfort, whereas marital and income status did. This study may help improve the quality of life of the elderly in nursing homes and provide a reference for the construction and design of care facilities.
Affiliation(s)
- Peng Cui
- School of Landscape, Northeast Forestry University, Harbin, China
- Jun Zhang
- School of Landscape, Northeast Forestry University, Harbin, China
- Ting Ting Li
- School of Landscape, Northeast Forestry University, Harbin, China
14. Effect of hearing aids on body balance function in non-reverberant condition: A posturographic study. PLoS One 2021; 16:e0258590. PMID: 34644358; PMCID: PMC8513876; DOI: 10.1371/journal.pone.0258590.
Abstract
Objective: To evaluate the effect of hearing aids on body balance function in a strictly controlled auditory environment. Methods: We tested 10 experienced hearing aid users and 10 normal-hearing participants. All participants were assessed using posturography under eight conditions in an acoustically shielded non-reverberant room: (1) eyes open with sound stimuli, with and without foam rubber; (2) eyes closed with sound stimuli, with and without foam rubber; (3) eyes open without sound stimuli, with and without foam rubber; and (4) eyes closed without sound stimuli, with and without foam rubber. Results: The auditory cue improved the total path area and sway velocity in both the hearing aid users and the normal-hearing participants. Analysis of variance showed a significant interaction among eye condition, sound condition, and the between-group factor for the maximum displacement of the center of pressure in the mediolateral axis (F[1, 18] = 6.19, p = 0.02). This displacement improved with auditory cues in the normal-hearing participants in the eyes-closed condition (5.4 cm vs. 4.7 cm, p < 0.01); in the hearing aid users, the difference was not significant (5.9 cm vs. 5.7 cm, p = 0.45). The maximum displacement of the center of pressure in the anteroposterior axis improved in both groups.
15. Partial visual loss disrupts the relationship between judged room size and sound source distance. Exp Brain Res 2021; 240:81-96. PMID: 34623459; PMCID: PMC8803715; DOI: 10.1007/s00221-021-06235-0.
Abstract
Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. With sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. Results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.
16. Kolarik AJ, Moore BCJ, Cirstea S, Aggius-Vella E, Gori M, Campus C, Pardhan S. Factors Affecting Auditory Estimates of Virtual Room Size: Effects of Stimulus, Level, and Reverberation. Perception 2021; 50:646-663. PMID: 34053354; DOI: 10.1177/03010066211020598.
Abstract
When vision is unavailable, auditory level and reverberation cues provide important spatial information regarding the environment, such as the size of a room. We investigated how room-size estimates were affected by stimulus type, level, and reverberation. In Experiment 1, 15 blindfolded participants estimated room size after performing a distance bisection task in virtual rooms that were either anechoic (with level cues only) or reverberant (with level and reverberation cues) with a relatively short reverberation time of T60 = 400 milliseconds. Speech, noise, or clicks were presented at distances between 1.9 and 7.1 m. The reverberant room was judged to be significantly larger than the anechoic room (p < .05) for all stimuli. In Experiment 2, only the reverberant room was used and the overall level of all sounds was equalized, so only reverberation cues were available. Ten blindfolded participants took part. Room-size estimates were significantly larger for speech than for clicks or noise. The results show that when level and reverberation cues are present, reverberation increases judged room size. Even relatively weak reverberation cues provide room-size information, which could potentially be used by blind or visually impaired individuals encountering novel rooms.
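Why reverberation carries room-size information can be seen from Sabine's classic formula, which ties reverberation time to room volume and total absorption: larger rooms built from the same materials reverberate longer. The snippet below is a textbook illustration under assumed room dimensions and absorption, not the model or stimuli used in the study.

```python
def sabine_t60(volume_m3, surface_m2, mean_absorption):
    """Sabine's formula: T60 = 0.161 * V / (S * a), with V in cubic metres,
    S the total surface area, and a the mean absorption coefficient."""
    return 0.161 * volume_m3 / (surface_m2 * mean_absorption)

# Two rooms with the same materials (a = 0.3): the larger reverberates longer.
small = sabine_t60(volume_m3=4 * 3 * 2.5, surface_m2=2 * (4*3 + 4*2.5 + 3*2.5),
                   mean_absorption=0.3)
large = sabine_t60(volume_m3=12 * 9 * 4, surface_m2=2 * (12*9 + 12*4 + 9*4),
                   mean_absorption=0.3)
print(f"small room T60 ~ {small:.2f} s, large room T60 ~ {large:.2f} s")
# ~0.27 s vs ~0.60 s: longer reverberation signals a larger room.
```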
Affiliation(s)
- Andrew J Kolarik
- Anglia Ruskin University, Cambridge, UK
- Brian C J Moore
- Anglia Ruskin University, Cambridge, UK
- University of Cambridge, Cambridge, UK
- Silvia Cirstea
- Anglia Ruskin University, Cambridge, UK
- Elena Aggius-Vella
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Institute for Mind, Brain and Technology, Herzeliya, Israel
- Anglia Ruskin University, Cambridge, UK
- Claudio Campus
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Anglia Ruskin University, Cambridge, UK
17. Preserving Human Perspectives in Cultural Heritage Acoustics: Distance Cues and Proxemics in Aural Heritage Fieldwork. Acoustics 2021. DOI: 10.3390/acoustics3010012.
Abstract
We examine the praxis implications of our working definition of aural heritage (spatial acoustics as physically experienced by humans in cultural contexts), which aligns with the aims of anthropological archaeology (the study of human life from materials). Here we report on human-centered acoustical data collection strategies from our project “Digital Preservation and Access to Aural Heritage via a Scalable, Extensible Method,” supported by the National Endowment for the Humanities (NEH) in the USA. The documentation and accurate translation of human sensory perspectives is fundamental to the ecological validity of cultural heritage fieldwork and the preservation of heritage acoustics. Auditory distance cues, which enable and constrain sonic communication, relate to proxemics: contextualized understandings of distance relationships that are fundamental to human social interactions. We propose that source–receiver locations in aural heritage measurements should be selected to represent a comprehensive range of proxemics according to site-contextualized spatial-use scenarios, and we identify and compare acoustical metrics for auditory distance cues from acoustical fieldwork conducted using this strategy in three contrasting case-study heritage sites. This conceptual shift from architectural acoustical sampling to aural heritage sampling prioritizes culturally and physically plausible human auditory/sound-sensing perspectives and relates them to spatial proxemics as scaled architecturally.
18. Bahadori M, Barumerli R, Geronazzo M, Cesari P. Action planning and affective states within the auditory peripersonal space in normal hearing and cochlear-implanted listeners. Neuropsychologia 2021; 155:107790. PMID: 33636155; DOI: 10.1016/j.neuropsychologia.2021.107790.
Abstract
Fast reaction to approaching stimuli is vital for survival. When sounds enter the auditory peripersonal space (PPS), sounds perceived as being nearer elicit higher motor cortex activation. There is a close relationship between motor preparation and the perceptual components of sounds, particularly of highly arousing sounds. Here we compared the ability to recognize, evaluate, and react to affective stimuli entering the PPS between 20 normal-hearing (NH, 7 women) and 10 cochlear-implanted (CI, 3 women) subjects. The subjects were asked to quickly flex their arm in reaction to positive (P), negative (N), and neutral (Nu) affective sounds ending virtually at five distances from their body. Pre-motor reaction time (pm-RT) was detected via electromyography from the postural muscles to measure action anticipation at the sound-stopping distance; the sounds were also evaluated for their perceived level of valence and arousal. While both groups were able to localize sound distance, only the NH group modulated their pm-RT based on the perceived sound distance. Furthermore, when the sound carried no affective components, the pm-RT to the Nu sounds was shorter compared to the P and the N sounds for both groups. Only the NH group perceived the closer sounds as more arousing than the distant sounds, whereas both groups perceived sound valence similarly. Our findings underline the role of emotional states in action preparation and describe the perceptual components essential for prompt reaction to sounds approaching the peripersonal space.
Affiliation(s)
- Mehrdad Bahadori
- Department of Neurosciences, Biomedicine & Movement Sciences, University of Verona, 37131, Verona, Italy.
- Roberto Barumerli
- Department of Information Engineering, University of Padova, 35131, Padova, Italy
- Michele Geronazzo
- Dyson School of Design Engineering, Imperial College London, London, SW7 2AZ, United Kingdom
- Paola Cesari
- Department of Neurosciences, Biomedicine & Movement Sciences, University of Verona, 37131, Verona, Italy
19. Bell L, Peng ZE, Pausch F, Reindl V, Neuschaefer-Rube C, Fels J, Konrad K. fNIRS Assessment of Speech Comprehension in Children with Normal Hearing and Children with Hearing Aids in Virtual Acoustic Environments: Pilot Data and Practical Recommendations. Children (Basel) 2020; 7:E219. PMID: 33171753; PMCID: PMC7695031; DOI: 10.3390/children7110219.
Abstract
The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues to investigate behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanism of SIN perception in simulated real-life acoustic scenarios. Here, we present first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the potential applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and target-distractors' spatial separation, particularly when the pitch of the target and the distractors was similar. On the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool to investigate novel research questions in simulated real-life listening. Future modified VAE-fNIRS applications are warranted to replicate the current findings and to validate its application in research and clinical settings.
Affiliation(s)
- Laura Bell
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; (V.R.); (K.K.)
- Z. Ellen Peng
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany; (F.P.); (J.F.)
- Waisman Center, University of Wisconsin-Madison, Madison, WI 53705, USA;
- Florian Pausch
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany; (F.P.); (J.F.)
- Vanessa Reindl
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; (V.R.); (K.K.)
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
- Christiane Neuschaefer-Rube
- Clinic of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany;
- Janina Fels
- Teaching and Research Area of Medical Acoustics, Institute of Technical Acoustics, RWTH Aachen University, 52074 Aachen, Germany; (F.P.); (J.F.)
- Kerstin Konrad
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; (V.R.); (K.K.)
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, 52428 Juelich, Germany
20. Courtois G, Grimaldi V, Lissek H, Estoppey P, Georganti E. Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners. Trends Hear 2020; 23:2331216519887615. PMID: 31774032; PMCID: PMC6887817; DOI: 10.1177/2331216519887615.
Abstract
The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
Affiliation(s)
- Gilles Courtois
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
- Sonova AG, Stäfa, Switzerland
- Vincent Grimaldi
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
- Hervé Lissek
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
21. Prud'homme L, Lavandier M. Do we need two ears to perceive the distance of a virtual frontal sound source? J Acoust Soc Am 2020; 148:1614. PMID: 33003836; DOI: 10.1121/10.0001954.
Abstract
The present study investigated whether the perception of virtual auditory distance is binaural, monaural, or both. Listeners evaluated the distance of a frontal source of pink noise simulated in a room via headphones. Experiment 1 was performed with eyes closed in a soundproof booth; Experiment 2 was performed with eyes open in the room used to create the stimuli. Individualized and non-individualized stimuli were compared, and different conditions for controlling sound level were tested. The amount of binaural information in the stimuli was varied by mixing the left- and right-ear signals in different proportions. Results showed that the use of non-individualized stimuli did not impair distance perception. Binaural information was not used by naive listeners to evaluate distance, either with or without visual information available; however, for some listeners, a complete absence of binaural information could disrupt distance evaluation over headphones. Sound level was a dominant cue used by listeners to judge distance, and some listeners could also reliably use reverberation-related changes in spectral content. In the absence of specific training, artificial manipulation of sound level greatly altered distance judgments.
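One plausible way to "mix the left- and right-ear signals in different proportions," as described above, is symmetric cross-mixing, which leaves the overall signal content intact while scaling down every interaural difference; at a 50/50 mix the stimulus becomes diotic. The sketch below illustrates this idea; the authors' exact mixing rule is not specified here, so treat it as an assumption.

```python
import numpy as np

def mix_binaural(left, right, mix):
    """Reduce binaural information by cross-mixing the two ear signals.
    mix = 0   -> original binaural stimulus (full interaural differences)
    mix = 0.5 -> both ears receive (L + R) / 2, i.e., diotic, no binaural cues."""
    out_l = (1.0 - mix) * left + mix * right
    out_r = (1.0 - mix) * right + mix * left
    return out_l, out_r

rng = np.random.default_rng(0)
left, right = rng.standard_normal(48000), rng.standard_normal(48000)  # dummy ears
for m in (0.0, 0.25, 0.5):
    l, r = mix_binaural(left, right, m)
    print(f"mix={m:.2f}: interaural RMS difference = {np.sqrt(np.mean((l - r)**2)):.2f}")
```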
Affiliation(s)
- Luna Prud'homme
- Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, Vaulx-en-Velin Cedex, 69518, France
- Mathieu Lavandier
- Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, Vaulx-en-Velin Cedex, 69518, France
22. Kolarik AJ, Raman R, Moore BCJ, Cirstea S, Gopalakrishnan S, Pardhan S. The accuracy of auditory spatial judgments in the visually impaired is dependent on sound source distance. Sci Rep 2020; 10:7169. PMID: 32346036; PMCID: PMC7189236; DOI: 10.1038/s41598-020-64306-8.
Abstract
Blindness leads to substantial enhancements in many auditory abilities and deficits in others. It is unknown how severe visual losses need to be before changes in auditory abilities occur, or whether the relationship between the severity of visual loss and changes in auditory abilities is proportional and systematic. Here we show that greater severity of visual loss is associated with larger auditory judgments of distance and room size. On average, participants with severe visual losses perceived sounds to be twice as far away, and rooms to be three times larger, than sighted controls. Distance estimates for sighted controls were most accurate for closer sounds and least accurate for farther sounds. As the severity of visual impairment increased, accuracy decreased for closer sounds and increased for farther sounds. However, it is for closer sounds that accurate judgments are needed to guide rapid motor responses to auditory events, e.g., planning a safe path through a busy street to avoid collisions with other people and falls. Interestingly, greater severity of visual impairment was associated with more accurate room-size estimates. The results support a new hypothesis that crossmodal calibration of audition by vision depends on the severity of visual loss.
Affiliation(s)
- Andrew J Kolarik
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Rajiv Raman
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Eye Hospital, Chennai, India
- Brian C J Moore
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Silvia Cirstea
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
- School of Computing and Information Science, Anglia Ruskin University, Cambridge, United Kingdom
- Sarika Gopalakrishnan
- Faculty of Low Vision Care, Elite School of Optometry, Chennai, India
- Low Vision Care Department, Sankara Nethralaya Eye Hospital, Chennai, India
- Shahina Pardhan
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
23. Ding J, Ke Y, Cheng L, Zheng C, Li X. Joint estimation of binaural distance and azimuth by exploiting deep neural networks. J Acoust Soc Am 2020; 147:2625. PMID: 32359271; DOI: 10.1121/10.0001155.
Abstract
State-of-the-art supervised binaural distance estimation methods often use binaural features that are related to both distance and azimuth, so distance estimation accuracy may degrade considerably when the azimuth varies. To incorporate the azimuth into distance estimation, this paper proposes a supervised method that jointly estimates the azimuth and the distance of binaural signals based on deep neural networks (DNNs). In this method, subband binaural features, including many statistical properties of several subband binaural features and the standard deviation of the binaural spectral magnitude difference, are extracted together as cues to jointly estimate azimuth and distance within a multi-objective DNN framework. In particular, both the azimuth and the distance cues are used during error back-propagation in the multi-objective DNN framework, which improves the generalization of azimuth and distance estimation. Experimental results demonstrate that the proposed method not only achieves high azimuth estimation accuracy but also effectively improves distance estimation accuracy compared with several state-of-the-art supervised binaural distance estimation methods.
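A minimal sketch of such a multi-objective network: a shared trunk feeding separate azimuth and distance heads, trained with a summed cross-entropy loss so that both targets shape the shared representation during back-propagation. Layer sizes, the feature dimension, and the use of classification heads for both outputs are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class JointAzDistNet(nn.Module):
    """Shared trunk with two output heads, so azimuth and distance errors
    both back-propagate through common layers."""
    def __init__(self, n_features=128, n_azimuths=37, n_distances=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.azimuth_head = nn.Linear(256, n_azimuths)    # azimuth classes
        self.distance_head = nn.Linear(256, n_distances)  # distance classes

    def forward(self, x):
        h = self.trunk(x)
        return self.azimuth_head(h), self.distance_head(h)

model = JointAzDistNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random stand-ins for subband binaural features.
x = torch.randn(32, 128)
az_target, dist_target = torch.randint(0, 37, (32,)), torch.randint(0, 10, (32,))
az_logits, dist_logits = model(x)
loss = criterion(az_logits, az_target) + criterion(dist_logits, dist_target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("joint loss:", float(loss))
```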
Affiliation(s)
- Jiance Ding
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, 100190, Beijing, China
- Yuxuan Ke
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, 100190, Beijing, China
- Linjuan Cheng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, 100190, Beijing, China
- Chengshi Zheng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, 100190, Beijing, China
- Xiaodong Li
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, 100190, Beijing, China
24. Maezawa T, Kawahara JI. Distance Estimation by Blindfolded Sighted Participants Using Echolocation. Perception 2019; 48:1235-1251. PMID: 31648599; DOI: 10.1177/0301006619884788.
Abstract
Auditorily perceived distance can be distorted in one's internal representation. The present study examined whether blindfolded sighted participants could reduce this bias and preserve an estimated distance for 5 to 15 s using echolocation. Participants performed a delayed reproduction task consisting of testing sessions on two separate days, in which the target distance was varied from 20 to 50 cm. Participants were blindfolded and asked to reproduce the distance of a target after a temporal delay of several seconds, using click bursts produced by a loudspeaker. The testing session was preceded by a practice session that included training and feedback. The relationship between estimated and actual distances was approximated by a power function, and over- and underestimation of the target distance were assessed on each test day. Although participants showed systematic bias in distance estimation on both days, they changed their bias in the second session by shifting reproduced locations closer to their bodies. The accuracy and consistency of their responses improved across the two days; neither was affected by the retention intervals. These performance enhancements might be due to improved hearing ability or to calibration of internal spatial references through the practice session.
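Power-function fits of this kind are usually obtained by linear regression in log-log coordinates, since reported = k · actual^a implies log(reported) = log(k) + a · log(actual). The sketch below runs such a fit; only the 20-50 cm range comes from the abstract, and the response values are made up for illustration.

```python
import numpy as np

# Power-law fit: reported = k * actual**a  <=>  log(reported) is linear
# in log(actual).
actual   = np.array([20.0, 30.0, 40.0, 50.0])   # cm, range from the abstract
reported = np.array([23.0, 31.5, 38.0, 44.0])   # cm, hypothetical responses

a, log_k = np.polyfit(np.log(actual), np.log(reported), 1)
k = np.exp(log_k)
print(f"fit: reported ~ {k:.2f} * actual^{a:.2f}")
# a < 1 indicates compression: far targets underestimated, near targets
# overestimated, the kind of systematic bias the abstract describes.
```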
Affiliation(s)
- Tomoki Maezawa
- Department of Psychology, Hokkaido University, Sapporo, Hokkaido, Japan
- Jun I Kawahara
- Department of Psychology, Hokkaido University, Sapporo, Hokkaido, Japan
25
Uhomoibhi J, Onime C, Wang H. A study of developments and applications of mixed reality cubicles and their impact on learning. THE INTERNATIONAL JOURNAL OF INFORMATION AND LEARNING TECHNOLOGY 2019. [DOI: 10.1108/ijilt-02-2019-0026] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
PURPOSE The purpose of this paper is to report on developments and applications of mixed reality cubicles and their impact on learning in higher education. The paper presents the cost-effective application of augmented reality (AR) as a mixed reality technology on mobile devices such as head-mounted devices, smartphones, and tablets, and discusses the development of mixed reality applications for mobile devices leading up to the implementation of a mixed reality cubicle for immersive three-dimensional (3D) visualizations. DESIGN/METHODOLOGY/APPROACH The approach adopted was to limit consideration to the application of AR via mobile platforms, including head-mounted devices, with a focus on smartphones and tablets, which contain basic feedback-to-user channels such as speakers and display screens. An AR visualization cubicle was jointly developed and applied by three collaborating institutions. Markers, acting as placeholders, serve as identifiable reference points for objects inserted into the mixed reality world. Hundreds of participants, comprising academics and students from seven countries, took part in the studies and gave feedback on the impact on their learning experience. FINDINGS Fewer than 30 percent of participants had used mixed reality environments, which is lower than expected; about 70 percent were first-time users of mixed reality technologies. This indicates relatively low use of mixed reality technologies in education, consistent with reports that educational use of and research on AR remain uncommon despite their categorization as emerging technologies with great promise for education. RESEARCH LIMITATIONS/IMPLICATIONS Current research has focused mainly on cubicles, which provide an immersive experience when used with head-mounted devices (goggles and smartphones) that are limited by their display sizes. Limited battery lifetime requires the use of rechargeable batteries, the standard dimensions of cubicles do not allow group visualizations, and the current cubicle is limited with respect to complex two-handed gestures and movements, as one hand is needed to hold the mobile phone. PRACTICAL IMPLICATIONS Mixed reality cubicles could allow and enhance real-time visualization of big data without restrictions. There is potential to extend their use to exploring and studying otherwise inaccessible locations such as sea beds and underground caves. SOCIAL IMPLICATIONS Following on from this study, further work could be done on developing and applying mixed reality cubicles in ways that would impact business, health, and entertainment. ORIGINALITY/VALUE The originality of this paper lies in the approach used to study the developments and applications of mixed reality cubicles and their impact on learning. The diverse composition and location of participants, drawn from many countries and comprising both tutors and students, adds value to the present study, as do the results obtained and the scope for future development.
26
Reaching measures and feedback effects in auditory peripersonal space. Sci Rep 2019; 9:9476. [PMID: 31263231 PMCID: PMC6603038 DOI: 10.1038/s41598-019-45755-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Accepted: 06/14/2019] [Indexed: 11/09/2022] Open
Abstract
We analyse the effects of exploration feedback on reaching-based measures of the perceived auditory peripersonal space (APS) boundary and of the auditory distance perception (ADP) of sound sources located within it. We conducted an experiment in which participants had to judge whether a sound source was reachable and to estimate its distance (40 to 150 cm, in 5-cm steps) by reaching toward a small loudspeaker. The stimulus consisted of a train of three bursts of Gaussian broadband noise. Participants were randomly assigned to two groups, Experimental (EG) and Control (CG), and completed three phases in the following order: Pretest, Test, Posttest. The listeners performed the same task in all phases except the EG Test phase, in which participants reached out to touch the sound source. We fitted models to characterise the participants' responses and provide evidence that feedback significantly reduces the response bias of both the perceived APS boundary and the ADP of sound sources located within reach. In the CG, repeating the task did not affect APS and ADP accuracy but improved performance consistency: the reachable-uncertainty zone in the APS was reduced, and there was a tendency toward decreased variability in ADP.
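The perceived APS boundary in tasks like this is usually read off a psychometric function fitted to the proportion of "reachable" responses. A minimal sketch with a logistic function, using simulated proportions in place of real data; the 75-cm boundary and 8-cm slope are arbitrary:

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(d, d50, slope):
        """P('reachable') as a function of source distance d (cm);
        d50 is the perceived boundary of auditory peripersonal space."""
        return 1.0 / (1.0 + np.exp((d - d50) / slope))

    distances = np.arange(40, 155, 5)              # 40-150 cm in 5-cm steps
    rng = np.random.default_rng(0)
    p_reach = psychometric(distances, 75.0, 8.0) + rng.normal(0.0, 0.03, distances.size)

    (d50, slope), _ = curve_fit(psychometric, distances, p_reach, p0=(80.0, 10.0))
    print(f"APS boundary ~ {d50:.1f} cm; uncertainty-zone width ~ {slope:.1f} cm")
    # Feedback would be expected to shift d50 toward the true reach (less bias)
    # and to shrink the slope (a narrower reachable-uncertainty zone).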
27
Pausch F, Aspöck L, Vorländer M, Fels J. An Extended Binaural Real-Time Auralization System With an Interface to Research Hearing Aids for Experiments on Subjects With Hearing Loss. Trends Hear 2019; 22:2331216518800871. [PMID: 30322347 PMCID: PMC6195018 DOI: 10.1177/2331216518800871] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Theory and implementation of acoustic virtual reality have matured and become a powerful tool for the simulation of entirely controllable virtual acoustic environments. Such environments are relevant for various types of auditory experiments on subjects with normal hearing, facilitating flexible virtual scene generation and manipulation. When expanding the investigation group to subjects with hearing loss, choosing a reproduction system that properly integrates hearing aids into the virtual acoustic scene is crucial. Current loudspeaker-based spatial audio reproduction systems rely on different techniques to synthesize a surrounding sound field, providing various possibilities for adaptation and extension to applications in hearing aid-related research. Representing one option, the concept and implementation of an extended binaural real-time auralization system is presented here. This system is capable of generating complex virtual acoustic environments, including room acoustic simulations, which are reproduced through a combination of loudspeakers and research hearing aids. An objective evaluation covers the investigation of different system components, a simulation benchmark analysis for assessing processing performance, and end-to-end latency measurements.
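End-to-end latency of such a chain can be checked with a simple loopback test: play a known click, record the system output, and locate the peak of the cross-correlation. A minimal sketch, assuming the third-party python-sounddevice package and a physical loopback from the reproduction system back to an input:

    import numpy as np
    import sounddevice as sd  # assumed dependency: python-sounddevice

    fs = 48000
    click = np.zeros(int(0.5 * fs), dtype=np.float32)
    click[100] = 1.0                    # single-sample impulse as the test signal

    # Play the click and record the chain's output at the same time.
    recorded = sd.playrec(click, samplerate=fs, channels=1, blocking=True).ravel()

    # The lag of the cross-correlation peak is the end-to-end latency.
    lag = np.argmax(np.correlate(recorded, click, mode="full")) - (len(click) - 1)
    print(f"end-to-end latency ~ {1000 * lag / fs:.2f} ms")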
Affiliation(s)
- Florian Pausch
- Institute of Technical Acoustics, Teaching and Research Area of Medical Acoustics, RWTH Aachen University, Germany
- Lukas Aspöck
- Institute of Technical Acoustics, RWTH Aachen University, Germany
- Janina Fels
- Institute of Technical Acoustics, Teaching and Research Area of Medical Acoustics, RWTH Aachen University, Germany
28
Campos J, Ramkhalawansingh R, Pichora-Fuller MK. Hearing, self-motion perception, mobility, and aging. Hear Res 2018; 369:42-55. [DOI: 10.1016/j.heares.2018.03.025] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Revised: 02/20/2018] [Accepted: 03/29/2018] [Indexed: 11/30/2022]
29
Yamasaki D, Miyoshi K, Altmann CF, Ashida H. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object. Perception 2018; 47:751-771. [PMID: 29783921 DOI: 10.1177/0301006618777708] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Despite accumulating evidence for a spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it remains unclear whether the 3D spatial consistency of stimuli (i.e., front versus rear of the body) also regulates audiovisual interaction. We investigated how sounds with increasing or decreasing intensity (looming or receding sounds), presented from the front or the rear of the body, affect the perceived size of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size-adjustment task (Experiment 3) on visual stimuli with increasing or decreasing diameter while being exposed to a front- or rear-presented sound of increasing or decreasing intensity. Across these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or the receding visual stimulus. The receding sound had no significant effect on vision. Our results reveal that a looming sound alters dynamic visual size perception depending on the consistency of the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Japan
30
Meng Q, Zhao T, Kang J. Influence of Music on the Behaviors of Crowd in Urban Open Public Spaces. Front Psychol 2018; 9:596. [PMID: 29755390 PMCID: PMC5934855 DOI: 10.3389/fpsyg.2018.00596] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2017] [Accepted: 04/09/2018] [Indexed: 11/26/2022] Open
Abstract
The sound environment plays an important role in urban open spaces, yet studies of how perception of the sound environment affects crowd behaviors have been limited. The aim of this study, therefore, is to explore how music, an important soundscape element, affects crowd behaviors in urban open spaces. On-site observations were performed at a 100 m × 70 m urban leisure square in Harbin, China. Typical music was used to study the effects of sound-environment perception on crowd behaviors, which were classified into movement behaviors (passing by and walking around) and non-movement behaviors (sitting). The results show that the paths of people passing through the square were more centralized with music than without: without music, 8.3% of people passing by walked near the edge of the square, whereas with music this percentage was zero. The speed of passing by did not differ significantly with the presence or absence of background music. Regarding walking-around behavior, the mean area and perimeter covered when background music was played were smaller than without it, and the mean walking speed with background music was 0.296 m/s slower than without. For sitting behavior, crowd density showed no variation with distance from the sound source when background music was absent; when music was present, the density of sitting people decreased as the distance from the sound source increased.
Affiliation(s)
- Qi Meng
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin, China; UCL Institute for Environmental Design and Engineering, University College London, London, United Kingdom
- Tingting Zhao
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin, China
- Jian Kang
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin, China; UCL Institute for Environmental Design and Engineering, University College London, London, United Kingdom
31
Risoud M, Hanson JN, Gauvrit F, Renard C, Lemesre PE, Bonne NX, Vincent C. Sound source localization. Eur Ann Otorhinolaryngol Head Neck Dis 2018; 135:259-264. [PMID: 29731298 DOI: 10.1016/j.anorl.2018.04.009] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Sound source localization is essential for comfort in everyday life and consists in determining the position of a sound source in three dimensions: azimuth, height, and distance. It is based on three types of cue: two binaural (the interaural time difference and the interaural level difference) and one monaural spectral cue (the head-related transfer function). These cues are complementary and vary according to the acoustic characteristics of the incident sound. The objective of this report is to update the current state of knowledge on the physical basis of spatial sound localization.
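For a rigid spherical head, the interaural time difference mentioned here is well approximated by the classic Woodworth formula, ITD = (a/c)(theta + sin theta). A small sketch with textbook values for head radius and the speed of sound:

    import numpy as np

    def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
        """Spherical-head (Woodworth) approximation of the interaural time
        difference: ITD = (a / c) * (theta + sin(theta))."""
        theta = np.radians(azimuth_deg)
        return (head_radius / c) * (theta + np.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg -> ITD ~ {1e6 * woodworth_itd(az):.0f} us")
    # ~0 us at the midline, rising to roughly 650 us at 90 degrees, which matches
    # the upper limit usually quoted for human listeners.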
Affiliation(s)
- M Risoud
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
- J-N Hanson
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- F Gauvrit
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- C Renard
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- P-E Lemesre
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- N-X Bonne
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
- C Vincent
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
32
Kates JM, Arehart KH, Muralimanohar RK, Sommerfeldt K. Externalization of remote microphone signals using a structural binaural model of the head and pinna. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 143:2666. [PMID: 29857749 DOI: 10.1121/1.5032326] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
In a remote microphone (RM) system, a talker speaks into a microphone and the signal is transmitted to the hearing aids worn by the hearing-impaired listener. A difficulty with remote microphones, however, is that the signal received at the hearing aid bypasses the head and pinna, so the acoustic cues needed to externalize the sound source are missing. The objective of this paper is to process the RM signal to improve externalization when listening through earphones. The processing is based on a structural binaural model, which uses a cascade of processing modules to simulate the interaural level difference, interaural time difference, pinna reflections, ear-canal resonance, and early room reflections. The externalization results for the structural binaural model are compared to a left-right signal blend, the listener's own anechoic head-related impulse response (HRIR), and the listener's own HRIR with room reverberation. The azimuth is varied from straight ahead to 90° to one side. The results show that the structural binaural model is as effective as the listener's own HRIR plus reverberation in producing an externalized acoustic image, and that there is no significant difference in externalization between hearing-impaired and normal-hearing listeners.
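A structural model of this kind composes simple, physically motivated stages rather than using measured HRIRs. The sketch below is a deliberately crude illustration of the cascade idea, with a Woodworth interaural delay, an ad hoc broadband level difference, and a single delayed copy standing in for pinna reflections; the actual model uses carefully designed frequency-dependent filters for each module.

    import numpy as np

    def structural_binaural(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
        """Toy cascade: interaural delay + broadband ILD + one 'pinna' echo.
        Source on the left; returns [near(left), far(right)] channels."""
        theta = np.radians(azimuth_deg)
        itd = (head_radius / c) * (theta + np.sin(theta))  # Woodworth delay
        shift = int(round(itd * fs))
        ild_db = 10.0 * np.sin(theta)                      # ad hoc broadband ILD

        near = mono.copy()
        far = np.pad(mono, (shift, 0))[: len(mono)] * 10 ** (-ild_db / 20)

        k = max(1, int(round(1e-4 * fs)))                  # ~0.1-ms pinna reflection
        for ear in (near, far):
            ear[k:] += 0.5 * ear[:-k]
        return np.stack([near, far], axis=1)

    fs = 48000
    noise = np.random.default_rng(0).standard_normal(fs).astype(np.float32)
    binaural = structural_binaural(noise, fs, azimuth_deg=60)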
Affiliation(s)
- James M Kates
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Kathryn H Arehart
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Ramesh Kumar Muralimanohar
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
- Kristin Sommerfeldt
- Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, Colorado 80309, USA
33
Meng Q, Sun Y, Kang J. Effect of temporary open-air markets on the sound environment and acoustic perception based on the crowd density characteristics. THE SCIENCE OF THE TOTAL ENVIRONMENT 2017; 601-602:1488-1495. [PMID: 28605866 DOI: 10.1016/j.scitotenv.2017.06.017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2017] [Revised: 05/31/2017] [Accepted: 06/02/2017] [Indexed: 06/07/2023]
Abstract
The sound environment and acoustic perception of open-air markets, which are very common in high-density urban open spaces, play important roles in the urban soundscape. Based on objective and subjective measurements of a typical temporary open-air market in Harbin, China, the effects of the market on the sound environment and acoustic perception were studied under different crowd densities. A temporary open-air market without zoning increased the sound pressure level and subjective loudness by 2.4 dBA and 0.21 dBA, respectively, compared with the absence of a temporary market. Unlike sound pressure level and subjective loudness, the relationship between crowd density and perceived acoustic comfort was parabolic. Regarding the effect of a temporary open-air market with different zones, at equal crowd densities subjective loudness in the fruit and vegetable sales area was always higher than in the food sales area and the clothing sales area. In terms of acoustic comfort, as crowd density increased, acoustic comfort in the fruit and vegetable sales area decreased, while acoustic comfort in the food and clothing sales areas followed a parabolic trend, first increasing and then decreasing. Overall, acoustic comfort can be effectively improved by better planning of temporary open-air markets in high-density urban open spaces.
Affiliation(s)
- Qi Meng
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin 150001, China; School of Architecture, University of Sheffield, Western Bank, Sheffield S10 2TN, UK
- Yang Sun
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin 150001, China
- Jian Kang
- Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin 150001, China; School of Architecture, University of Sheffield, Western Bank, Sheffield S10 2TN, UK
34
Direct-location versus verbal report methods for measuring auditory distance perception in the far field. Behav Res Methods 2017; 50:1234-1247. [PMID: 28786043 DOI: 10.3758/s13428-017-0939-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this study we evaluated whether a direct-location method is appropriate for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located 1 to 6 m away. The first compared perceived distances obtained with the CMDL device and with verbal report (VR), the response method most frequently used for auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants gave visual distance estimates to the visual marker, which proved highly accurate; the same participants then reported VR estimates of auditory distance, and the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same response methods as Experiment 1 but with the methods interleaved, showing a weak but complex mutual influence; the estimates obtained with each method nevertheless remained statistically different. Our results show that auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation of distances over 2 m.
35
Spiousas I, Etchemendy PE, Eguia MC, Calcagno ER, Abregú E, Vergara RO. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment. Front Psychol 2017; 8:969. [PMID: 28690556 PMCID: PMC5479918 DOI: 10.3389/fpsyg.2017.00969] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Accepted: 05/26/2017] [Indexed: 12/03/2022] Open
Abstract
Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1-6 m) influences the perceived distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) at frequencies of 0.5, 1, 2, and 4 kHz. We then performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (center frequencies of 0.5, 1.5, and 4 kHz; bandwidths of 1/12, 1/3, and 1.5 octaves) pink-noise clips. Our results showed an effect of stimulus frequency on perceived distance for both pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results of the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, analysis of the acoustical cues suggests that listeners judged source distance mainly from changes in the overall intensity of the auditory stimulus with distance rather than from the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). These results show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it.
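The direct-to-reverberant energy ratio weighed against overall intensity here can be computed directly from a measured room impulse response: take the energy up to a short window after the direct-path arrival and compare it with the energy of the tail. A minimal sketch with a toy impulse response; the 2.5-ms window is a common convention, not a value taken from this paper.

    import numpy as np

    def direct_to_reverberant_ratio(rir, fs, direct_window_ms=2.5):
        """DRR in dB: energy up to a short window after the direct-path peak
        versus all later (reverberant) energy."""
        onset = np.argmax(np.abs(rir))                   # direct-sound arrival
        split = onset + int(direct_window_ms * 1e-3 * fs)
        direct = np.sum(rir[:split] ** 2)
        reverberant = np.sum(rir[split:] ** 2)
        return 10.0 * np.log10(direct / reverberant)

    # Toy RIR: unit direct path plus an exponentially decaying noise tail.
    fs = 48000
    rng = np.random.default_rng(1)
    rir = np.zeros(fs // 2)
    rir[0] = 1.0
    tail = np.arange(200, rir.size)
    rir[200:] = 0.05 * rng.standard_normal(tail.size) * np.exp(-(tail - 200) / (0.3 * fs))
    print(f"DRR ~ {direct_to_reverberant_ratio(rir, fs):.1f} dB")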
Affiliation(s)
- Ignacio Spiousas
- Laboratorio de Dinámica Sensomotora, Departamento de Ciencia y Tecnología, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
- Pablo E Etchemendy
- Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
- Manuel C Eguia
- Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
- Esteban R Calcagno
- Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
- Ezequiel Abregú
- Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
- Ramiro O Vergara
- Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, Bernal, Argentina
36
Díaz-Andreu M, Atiénzar GG, Benito CG, Mattioli T. Do You Hear What I See? Analyzing Visibility and Audibility in the Rock Art Landscape of the Alicante Mountains of Spain. JOURNAL OF ANTHROPOLOGICAL RESEARCH 2017. [DOI: 10.1086/692103] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
37
Brattico P, Brattico E, Vuust P. Global Sensory Qualities and Aesthetic Experience in Music. Front Neurosci 2017; 11:159. [PMID: 28424573 PMCID: PMC5380758 DOI: 10.3389/fnins.2017.00159] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2016] [Accepted: 03/13/2017] [Indexed: 11/13/2022] Open
Abstract
A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example, scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of the cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we consider that the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, also global sonic properties could be viewed as contributing toward creating an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in the light of recent advances in neuroaesthetics.
Affiliation(s)
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
38
Rungta A, Rewkowski N, Klatzky R, Lin M, Manocha D. Effects of virtual acoustics on dynamic auditory distance perception. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:EL427. [PMID: 28464642 DOI: 10.1121/1.4981234] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Sound propagation encompasses various acoustic phenomena, including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. Here, the effects on acoustic distance perception of reverberant sounds generated using different propagation algorithms are investigated. In particular, two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and on ray tracing, are evaluated. The study shows that ray tracing yields more accurate distance judgments than the approximate, filter-based method, suggesting that accurate reverberation in virtual reality results in better reproduction of acoustic distances.
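The two algorithm classes compared here differ in how reflections are obtained. The geometric idea behind ray-tracing-style methods can be illustrated with the closely related image-source construction: each wall of a shoebox room mirrors the source, and every image contributes a delayed, attenuated copy of the signal. A minimal first-order sketch (wall absorption ignored; this is not the paper's implementation):

    import numpy as np

    def first_order_image_sources(src, lis, room, c=343.0):
        """Delays (s) and 1/r gains for the direct path plus the six
        first-order reflections of a shoebox room via image sources."""
        src, lis, room = map(np.asarray, (src, lis, room))
        images = [src]
        for axis in range(3):
            for wall in (0.0, room[axis]):           # mirror across each wall
                img = src.astype(float)
                img[axis] = 2.0 * wall - src[axis]
                images.append(img)
        dists = [np.linalg.norm(img - lis) for img in images]
        return [(d / c, 1.0 / d) for d in dists]

    for delay, gain in first_order_image_sources(src=(2.0, 3.0, 1.5),
                                                 lis=(4.0, 1.0, 1.5),
                                                 room=(6.0, 5.0, 3.0)):
        print(f"delay {1000 * delay:6.2f} ms, gain {gain:.3f}")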
Affiliation(s)
- Atul Rungta
- Department of Computer Science, University of North Carolina, Chapel Hill, 201 South Columbia Street, Chapel Hill, North Carolina 27599-3175, USA
- Nicholas Rewkowski
- Department of Computer Science, University of North Carolina, Chapel Hill, 201 South Columbia Street, Chapel Hill, North Carolina 27599-3175, USA
- Roberta Klatzky
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213, USA
- Ming Lin
- Department of Computer Science, University of North Carolina, Chapel Hill, 201 South Columbia Street, Chapel Hill, North Carolina 27599-3175, USA
- Dinesh Manocha
- Department of Computer Science, University of North Carolina, Chapel Hill, 201 South Columbia Street, Chapel Hill, North Carolina 27599-3175, USA
39
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation. eNeuro 2017; 4:eN-NWR-0007-17. [PMID: 28451630 PMCID: PMC5394928 DOI: 10.1523/eneuro.0007-17.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Revised: 02/03/2017] [Accepted: 02/06/2017] [Indexed: 11/21/2022] Open
Abstract
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
40
Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. [PMID: 28235893 PMCID: PMC5373131 DOI: 10.1523/jneurosci.2220-16.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2016] [Revised: 01/08/2017] [Accepted: 01/13/2017] [Indexed: 11/21/2022] Open
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio.
41
Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults. Atten Percept Psychophys 2017; 79:929-944. [DOI: 10.3758/s13414-016-1270-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
42
Meng Q, Kang J. Effect of sound-related activities on human behaviours and acoustic comfort in urban open spaces. THE SCIENCE OF THE TOTAL ENVIRONMENT 2016; 573:481-493. [PMID: 27572540 DOI: 10.1016/j.scitotenv.2016.08.130] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Revised: 08/17/2016] [Accepted: 08/18/2016] [Indexed: 06/06/2023]
Abstract
Human activities are important to landscape design and urban planning; however, the effect of sound-related activities on human behaviours and acoustic comfort has received little attention. The objective of this study is to explore how human behaviours and acoustic comfort in urban open spaces can be changed by sound-related activities. On-site measurements were performed at a case study site in Harbin, China, and an acoustic comfort survey was conducted simultaneously. Regarding the effect of sound-related activities on human behaviours, music-related activities caused 5.1-21.5% of passers-by to stand and watch the activity, while there was little effect on the number of people who exercised during the activity. Human activities generally had little effect on the behaviour of pedestrians when only one to three persons were involved, but a pronounced effect when more than six persons were involved. Regarding the effect of activities on acoustic comfort, music-related activities increased the sound level by 10.8 to 16.4 dBA, while human activities such as RS and PC increased the sound level by 9.6 to 12.8 dBA; however, they led to very different acoustic comfort. Acoustic comfort can differ with activity: the acoustic comfort of people standing and watching can be increased by music-related activities, while that of people sitting and watching can be decreased by human sound-related activities. Some sound-related activities showed opposite trends in acoustic comfort between visitors and citizens. Persons with higher income preferred music-related activities, while those with lower income preferred human sound-related activities.
Affiliation(s)
- Qi Meng
- School of Architecture, Harbin Institute of Technology, Harbin 150001, China
- Jian Kang
- School of Architecture, Harbin Institute of Technology, Harbin 150001, China; Heilongjiang Cold Region Architectural Science Key Laboratory, School of Architecture, Harbin Institute of Technology, Harbin 150001, China
43
Mendonça C, Mandelli P, Pulkki V. Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models. PLoS One 2016; 11:e0165391. [PMID: 27959919 PMCID: PMC5154506 DOI: 10.1371/journal.pone.0165391] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2016] [Accepted: 10/11/2016] [Indexed: 11/23/2022] Open
Abstract
Studies of audiovisual perception of distance are rare. Here, interactions between visual and auditory distance cues are tested against several multisensory models, including a modified causal inference model that predicts full distributions of estimates. In our study, audiovisual distance perception was overall better explained by Bayesian causal inference than by traditional models such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference; analysing these weights yields windows within which the audiovisual stimuli interact. We find that the visual stimulus always contributes more than 80% to the perception of visual distance. The visual stimulus also contributes more than 50% to the perception of auditory distance, but only within a mobile window of interaction ranging from 1 to 4 m.
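The causal inference computation itself is short: weigh the likelihood that both cues came from one source against the likelihood of two sources, then combine the structure-conditioned estimates. A single-trial sketch following Koerding et al. (2007), with illustrative (not fitted) noise and prior parameters, and with probability matching as the decision rule, since that variant fit best here:

    import numpy as np

    def causal_inference_estimate(x_v, x_a, sig_v=0.3, sig_a=1.0,
                                  mu_p=2.0, sig_p=2.0, p_common=0.5,
                                  rng=np.random.default_rng(0)):
        """Auditory-distance estimate from noisy visual/auditory measurements
        (x_v, x_a, in metres) under Bayesian causal inference."""
        # Likelihood of the pair under a common cause (C = 1)...
        var_sum = (sig_v * sig_a) ** 2 + (sig_v * sig_p) ** 2 + (sig_a * sig_p) ** 2
        like_c1 = np.exp(-((x_v - x_a) ** 2 * sig_p ** 2
                           + (x_v - mu_p) ** 2 * sig_a ** 2
                           + (x_a - mu_p) ** 2 * sig_v ** 2) / (2 * var_sum)) \
                  / (2 * np.pi * np.sqrt(var_sum))
        # ...and under independent causes (C = 2).
        def like_single(x, sig):
            v = sig ** 2 + sig_p ** 2
            return np.exp(-(x - mu_p) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
        like_c2 = like_single(x_v, sig_v) * like_single(x_a, sig_a)

        post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

        # Precision-weighted estimates under each causal structure.
        w = np.array([1 / sig_v ** 2, 1 / sig_a ** 2, 1 / sig_p ** 2])
        fused = np.dot(w, [x_v, x_a, mu_p]) / w.sum()
        aud_only = (x_a / sig_a ** 2 + mu_p / sig_p ** 2) / (1 / sig_a ** 2 + 1 / sig_p ** 2)

        # Probability matching: commit to one structure with P = posterior.
        return fused if rng.random() < post_c1 else aud_only

    print(causal_inference_estimate(x_v=2.0, x_a=3.0))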
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
- Pietro Mandelli
- School of Industrial and Information Engineering, Polytechnic University of Milan, Milan, Italy
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
44
Auditory spatial representations of the world are compressed in blind humans. Exp Brain Res 2016; 235:597-606. [PMID: 27837259 PMCID: PMC5272902 DOI: 10.1007/s00221-016-4823-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2016] [Accepted: 11/05/2016] [Indexed: 11/30/2022]
Abstract
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
45
Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss. Atten Percept Psychophys 2016; 78:373-95. [PMID: 26590050 PMCID: PMC4744263 DOI: 10.3758/s13414-015-1015-1] [Citation(s) in RCA: 96] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
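The primary level cue obeys a simple free-field rule worth keeping in mind: level falls by 20*log10(d/d0) dB, i.e. about 6 dB per doubling of distance. A tiny sketch (reference level and distance are arbitrary):

    import numpy as np

    def level_at_distance(d, ref_level_db=70.0, ref_distance=1.0):
        """Inverse-square (free-field) level cue: -6 dB per doubling."""
        return ref_level_db - 20.0 * np.log10(d / ref_distance)

    for d in (1, 2, 4, 8):
        print(f"{d} m: {level_at_distance(d):.1f} dB SPL")
    # Reverberation makes the real-world decay shallower than 6 dB per doubling,
    # one reason level alone tends to yield compressed distance estimates.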
46
DeLucia PR, Preddy D, Oberfeld D. Audiovisual Integration of Time-to-Contact Information for Approaching Objects. Multisens Res 2016; 29:365-95. [DOI: 10.1163/22134808-00002520] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Previous studies of time-to-collision (TTC) judgments of approaching objects focused on the effectiveness of visual TTC information in the optical expansion pattern (e.g., visual tau, disparity). Fewer studies have examined the effectiveness of auditory TTC information in the pattern of increasing intensity (auditory tau), or measured the integration of auditory and visual TTC information. Here, participants judged the TTC of an approaching object presented in the visual or auditory modality, or in both concurrently. The TTC information provided by the two modalities was jittered slightly against each other, so that auditory and visual TTC were not perfectly correlated. A psychophysical reverse-correlation approach was used to estimate the influence of auditory and visual cues on TTC estimates. TTC estimates were shorter in the auditory than in the visual condition. On average, TTC judgments in the audiovisual condition were not significantly different from judgments in the visual condition, but multiple regression analyses showed that TTC estimates were based on both auditory and visual information. Although heuristic cues (final sound pressure level, final optical size) and more reliable information (relative rate of change in acoustic intensity, optical expansion) contributed to both auditory and visual judgments, the effect of heuristics was greater in the auditory condition. Although auditory and visual information both influenced judgments, concurrent presentation of the two did not lower response variability compared with either alone; there was no multimodal advantage. The relative weightings of heuristics and more reliable information differed between auditory and visual TTC judgments, and when both were available, visual information was weighted more heavily.
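The two tau variables contrasted here have a simple form. Visual tau is the optical angle divided by its rate of expansion; for a source whose intensity follows the inverse-square law, the acoustic analogue needs a factor of two, TTC = 2I/I', because intensity grows with 1/d^2 rather than 1/d. A worked numeric sketch:

    def visual_tau(theta, dtheta_dt):
        """Visual tau: TTC = theta / theta' for optical angle theta."""
        return theta / dtheta_dt

    def auditory_tau(intensity, dintensity_dt):
        """Acoustic analogue for I ~ 1/d^2: TTC = 2 * I / I'."""
        return 2.0 * intensity / dintensity_dt

    # Object 10 m away approaching at 5 m/s: true TTC = 2 s.
    d, v, size = 10.0, 5.0, 0.5
    theta = size / d                   # small-angle optical size (rad)
    dtheta = size * v / d ** 2         # its rate of expansion
    intensity = 1.0 / d ** 2           # intensity up to a constant
    dintensity = 2.0 * v / d ** 3
    print(visual_tau(theta, dtheta), auditory_tau(intensity, dintensity))  # both 2.0 s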
Affiliation(s)
- Patricia R. DeLucia
- Department of Psychological Sciences, MS 2051, Texas Tech University, Lubbock, TX 79409-2051, USA
- Doug Preddy
- Department of Psychological Sciences, MS 2051, Texas Tech University, Lubbock, TX 79409-2051, USA
- Daniel Oberfeld
- Department of Psychology, Johannes Gutenberg-Universität, 55099 Mainz, Germany
47
Best V, Keidser G, Buchholz JM, Freeston K. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment. Int J Audiol 2015; 54:682-90. [PMID: 25853616 DOI: 10.3109/14992027.2015.1028656] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. DESIGN Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. STUDY SAMPLE The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. RESULTS Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. CONCLUSIONS The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.
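Speech reception thresholds of this kind are typically tracked adaptively: the SNR goes down after a correct response and up after an error, converging near 50% intelligibility. A minimal simulation sketch; the listener is modeled with a logistic psychometric function, and all parameter values are illustrative, not taken from this study.

    import numpy as np

    def simulate_srt(true_srt=-2.0, slope=1.5, start_snr=10.0,
                     step=2.0, n_trials=30, rng=np.random.default_rng(0)):
        """1-up/1-down adaptive track converging on ~50% correct (the SRT)."""
        snr, track = start_snr, []
        for _ in range(n_trials):
            p_correct = 1.0 / (1.0 + np.exp(-(snr - true_srt) / slope))
            correct = rng.random() < p_correct
            track.append(snr)
            snr += -step if correct else step
        return np.mean(track[-10:])   # average the late, converged part of the track

    print(f"estimated SRT ~ {simulate_srt():.1f} dB SNR")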
Affiliation(s)
- Virginia Best
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia; Department of Speech, Language and Hearing Sciences, Boston University, Boston, USA
- Gitte Keidser
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia
- Jörg M Buchholz
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia; Audiology Section, Department of Linguistics, Australian Hearing Hub, Macquarie University, Australia
- Katrina Freeston
- National Acoustic Laboratories and the HEARing Cooperative Research Centre, Australian Hearing Hub, Macquarie University, Australia
48
Spiousas I, Etchemendy PE, Vergara RO, Calcagno ER, Eguia MC. An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals. PLoS One 2015. [PMID: 26222281 PMCID: PMC4519286 DOI: 10.1371/journal.pone.0133271] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems at first paradoxical to naïve listeners, since the sonic crystal is an obstacle formed by closely packed cylindrical scatterers. Although the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), their possible perceptual effects have remained unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The acoustical measurements showed that, within the frequency range and spatial region where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio, and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source.
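The interaural cross-correlation cue cited here is the peak of the normalized cross-correlation between the two ear signals within roughly +/-1 ms of lag. A minimal sketch with synthetic ear signals; a mostly shared waveform gives an IACC near 1, as in the focused region behind the crystal:

    import numpy as np

    def iacc(left, right, fs, max_lag_ms=1.0):
        """Peak of the normalized interaural cross-correlation within +/- max_lag_ms."""
        max_lag = int(max_lag_ms * 1e-3 * fs)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        full = np.correlate(left, right, mode="full") / norm
        center = len(left) - 1                        # zero-lag index
        return np.max(full[center - max_lag: center + max_lag + 1])

    fs = 48000
    rng = np.random.default_rng(0)
    common = rng.standard_normal(fs)
    left = common + 0.1 * rng.standard_normal(fs)     # highly correlated ears
    right = common + 0.1 * rng.standard_normal(fs)
    print(f"IACC ~ {iacc(left, right, fs):.2f}")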
Affiliation(s)
- Ignacio Spiousas
- Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
- Pablo E. Etchemendy
- Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
- Ramiro O. Vergara
- Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
- Esteban R. Calcagno
- Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
- Manuel C. Eguia
- Laboratorio de Acústica y Percepción Sonora, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
49
Jones HG, Brown AD, Koka K, Thornton JL, Tollin DJ. Sound frequency-invariant neural coding of a frequency-dependent cue to sound source location. J Neurophysiol 2015; 114:531-9. [PMID: 25972580 PMCID: PMC4509402 DOI: 10.1152/jn.00062.2015] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2015] [Accepted: 05/11/2015] [Indexed: 11/22/2022] Open
Abstract
The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal "low-frequency" and "high-frequency" ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed.
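The Fisher-information logic behind the neural thresholds is compact: for Poisson-spiking neurons with rate tuning f(ILD), each cell contributes f'(ILD)^2 / f(ILD), contributions sum across the population, and the discrimination threshold scales as 1/sqrt(FI). A sketch with a single illustrative sigmoidal tuning curve; all tuning parameters are invented, not taken from the paper.

    import numpy as np

    def ild_threshold(ild, gains=(40.0,), centers=(0.0,), widths=(10.0,)):
        """Threshold ~ 1/sqrt(FI), with FI = sum_i f_i'(ILD)^2 / f_i(ILD)
        for sigmoidal tuning curves and Poisson spiking."""
        fi = 0.0
        for g, c, w in zip(gains, centers, widths):
            sig = 1.0 / (1.0 + np.exp(-(ild - c) / w))   # sigmoid in [0, 1]
            f = g * sig + 1.0                            # firing rate with a 1 sp/s floor
            fprime = g * sig * (1.0 - sig) / w           # slope of the tuning curve
            fi += fprime ** 2 / f
        return 1.0 / np.sqrt(fi)

    for ild in (0.0, 10.0, 20.0):
        print(f"ILD {ild:4.1f} dB -> threshold ~ {ild_threshold(ild):.2f} dB")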
Affiliation(s)
- Heath G Jones
- Neuroscience Training Program, University of Colorado School of Medicine, Aurora, Colorado; and Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Andrew D Brown
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Kanthaiah Koka
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Jennifer L Thornton
- Neuroscience Training Program, University of Colorado School of Medicine, Aurora, Colorado; and Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Daniel J Tollin
- Neuroscience Training Program, University of Colorado School of Medicine, Aurora, Colorado; and Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
50
Sunder K, Gan WS, Tan EL. Modeling distance-dependent individual head-related transfer functions in the horizontal plane using frontal projection headphones. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:150-171. [PMID: 26233016 DOI: 10.1121/1.4919347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
The veracity of virtual audio is degraded by the use of non-individualized head-related transfer functions (HRTFs) owing to the introduction of front-back and elevation confusions and timbral coloration. Hence, accurate reproduction of spatial sound demands individualized HRTFs. Measuring distance-dependent individualized HRTFs can be extremely tedious, since it requires precise measurements at several distances in the proximal region (<1 m) for each individual. This paper proposes a technique to model distance-dependent individualized HRTFs in the horizontal plane using frontal-projection headphone playback that does not require individualized measurements. The frontal projection headphones [Sunder, Tan, and Gan (2013). J. Audio Eng. Soc. 61, 989-1000] project the sound directly onto the pinnae from the front, and thus inherently create the listener's idiosyncratic pinna cues at the eardrum. Perceptual experiments were conducted to investigate cues (auditory parallax and interaural level differences) that aid distance perception in anechoic conditions. Interaural level differences were identified as the prominent cue for distance perception, and a spherical head model was used to model these distance-dependent features. Detailed psychophysical experiments revealed that the modeled distance-dependent individualized HRTFs yielded localization performance close to that of measured distance-dependent individualized HRTFs for all subjects.
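Why ILD varies with distance in the proximal region can be seen even without head-diffraction modeling: treat the ears as two points offset from the head center and let level fall as 1/r to each. This crude geometric sketch understates true ILDs (no head shadow), but it shows the steep growth below 1 m that the paper's spherical head model captures properly:

    import numpy as np

    def near_field_ild(distance, azimuth_deg, ear_offset=0.0875):
        """Distance-dependent ILD from two-point ear geometry and 1/r decay.
        Ignores diffraction, so it is a lower bound on the real ILD."""
        theta = np.radians(azimuth_deg)
        src = distance * np.array([np.sin(theta), np.cos(theta)])
        r_near = np.linalg.norm(src - np.array([ear_offset, 0.0]))
        r_far = np.linalg.norm(src - np.array([-ear_offset, 0.0]))
        return 20.0 * np.log10(r_far / r_near)

    for r in (0.25, 0.5, 1.0, 2.0):
        print(f"{r:4.2f} m at 90 deg: ILD ~ {near_field_ild(r, 90):.1f} dB")
    # The geometric ILD shrinks quickly with distance, which is what makes
    # ILD an informative distance cue mainly in the proximal region.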
Affiliation(s)
- Kaushik Sunder
- Digital Signal Processing Lab, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Woon-Seng Gan
- Digital Signal Processing Lab, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Ee-Leng Tan
- Digital Signal Processing Lab, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798, Singapore