1. Slomianka V, Dau T, Ahrens A. Acoustic scene complexity affects motion behavior during speech perception in audio-visual multi-talker virtual environments. Sci Rep 2024; 14:19028. PMID: 39152193; PMCID: PMC11329770; DOI: 10.1038/s41598-024-70026-0.
Abstract
In real-world listening situations, individuals typically utilize head and eye movements to receive and enhance sensory information while exploring acoustic scenes. However, the specific patterns of such movements have not yet been fully characterized. Here, we studied how movement behavior is influenced by scene complexity, varied in terms of reverberation and the number of concurrent talkers. Thirteen normal-hearing participants engaged in a speech comprehension and localization task, requiring them to indicate the spatial location of a spoken story in the presence of other stories in virtual audio-visual scenes. We observed delayed initial head movements when more simultaneous talkers were present in the scene. Both reverberation and a higher number of talkers extended the search period, increased the number of fixated source locations, and resulted in more gaze jumps. The period preceding the participants' responses was prolonged when more concurrent talkers were present, and listeners continued to move their eyes in the proximity of the target talker. In scenes with more reverberation, the final head position when making the decision was farther away from the target. These findings demonstrate that the complexity of the acoustic scene influences listener behavior during speech comprehension and localization in audio-visual scenes.
Affiliation(s)
- Valeska Slomianka
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark.
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
- Axel Ahrens
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
2. Carlini A, Bordeau C, Ambard M. Auditory localization: a comprehensive practical review. Front Psychol 2024; 15:1408073. PMID: 39049946; PMCID: PMC11267622; DOI: 10.3389/fpsyg.2024.1408073.
Abstract
Auditory localization is a fundamental ability that allows listeners to perceive the spatial location of a sound source in the environment. The present work aims to provide a comprehensive overview of the mechanisms and acoustic cues used by the human perceptual system to achieve accurate auditory localization. Acoustic cues are derived from the physical properties of sound waves, and many factors enable and influence auditory localization abilities. This review presents the monaural and binaural perceptual mechanisms involved in auditory localization in three dimensions. Besides the main mechanisms of interaural time difference, interaural level difference, and the head-related transfer function, secondary but important elements, such as reverberation and motion, are also analyzed. For each mechanism, the perceptual limits of localization abilities are presented. A section is specifically devoted to reference systems in space and to the pointing methods used in experimental research. Finally, some cases of misperception and auditory illusion are described. More than a simple description of the perceptual mechanisms underlying localization, this paper is also intended to provide practical information useful for experiments and work in the auditory field.
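As a concrete anchor for the binaural cues this review surveys, the classic Woodworth spherical-head approximation (standard in this literature, though not quoted in the abstract) relates the interaural time difference to source azimuth; a worked form, assuming a typical head radius:

```latex
% Woodworth approximation for a distant source at azimuth \theta (radians),
% head radius a \approx 0.0875 m, speed of sound c \approx 343 m/s:
\mathrm{ITD}(\theta) \;\approx\; \frac{a}{c}\,\bigl(\theta + \sin\theta\bigr),
\qquad 0 \le \theta \le \tfrac{\pi}{2}
% At \theta = \pi/2 this gives \mathrm{ITD} \approx 0.66\ \mathrm{ms}, the
% familiar maximum interaural delay for an average adult head.
```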
3. Alemu RZ, Papsin BC, Harrison RV, Blakeman A, Gordon KA. Head and Eye Movements Reveal Compensatory Strategies for Acute Binaural Deficits During Sound Localization. Trends Hear 2024; 28:23312165231217910. PMID: 38297817; PMCID: PMC10832417; DOI: 10.1177/23312165231217910.
Abstract
The present study aimed to define the use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (mean age = 12.9 years) and seventeen adults (mean age = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ~4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate the L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening, and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear, and relatively stable eye positions likely reflect normal vestibulo-ocular reflexes.
Affiliation(s)
- Robel Z. Alemu
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Blake C. Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Robert V. Harrison
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Al Blakeman
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Karen A. Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Medical Science, The University of Toronto, Toronto, ON, Canada
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada
- Program in Neuroscience and Mental Health, Research Institute, Toronto, ON, Canada
- Department of Communication Disorders, The Hospital for Sick Children, Toronto, ON, Canada
4. Audiovisual Training in Virtual Reality Improves Auditory Spatial Adaptation in Unilateral Hearing Loss Patients. J Clin Med 2023; 12:2357. PMID: 36983357; PMCID: PMC10058351; DOI: 10.3390/jcm12062357.
Abstract
Unilateral hearing loss (UHL) leads to an alteration of binaural cues, resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: the first group (n = 9) received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first session); the second group (n = 10) received the same training in the opposite order (non-spatial, then spatial). A sound localization test using head-pointing (LOCATEST) was completed prior to and following each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training for the 19 participants did not change (p = 0.79); nonetheless, the hand-pointing errors and reaction times significantly decreased at the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can improve and induce spatial adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments.
5. Nisha KV, Uppunda AK, Kumar RT. Spatial rehabilitation using virtual auditory space training paradigm in individuals with sensorineural hearing impairment. Front Neurosci 2023; 16:1080398. PMID: 36733923; PMCID: PMC9887142; DOI: 10.3389/fnins.2022.1080398.
Abstract
Purpose: The present study aimed to quantify the effects of spatial training using virtual sources on a battery of spatial acuity measures in listeners with sensorineural hearing impairment (SNHI).
Methods: An intervention-based time-series comparison design involving 82 participants divided into three groups was adopted. Group I (n = 27, SNHI, spatially trained) and group II (n = 25, SNHI, untrained) consisted of SNHI listeners, while group III (n = 30) had listeners with normal hearing (NH). The study was conducted in three phases. In the pre-training phase, all participants underwent a comprehensive assessment of their spatial processing abilities using a battery of tests, including spatial acuity in free-field and closed-field scenarios, tests of binaural processing abilities (interaural time difference [ITD] and interaural level difference [ILD] thresholds), and subjective ratings. While spatial acuity in the free field was assessed using a loudspeaker-based localization test, the closed-field source identification test was performed using virtual stimuli delivered through headphones. The ITD and ILD thresholds were obtained using a MATLAB psychoacoustic toolbox, while participant ratings on the spatial subsection of the speech, spatial, and qualities questionnaire in Kannada were used for the subjective ratings. Group I listeners underwent virtual auditory spatial training (VAST) following the pre-evaluation assessments. All tests were re-administered on the group I listeners halfway through training (mid-training evaluation phase) and after training completion (post-training evaluation phase), whereas group II underwent these tests without any training at the same time intervals.
Results and discussion: Statistical analysis showed a main effect of group in all tests at the pre-training evaluation phase, with post hoc comparisons revealing group equivalency in spatial performance of both SNHI groups (groups I and II). The effect of VAST in group I was evident on all tests, with the localization test showing the highest predictive power for capturing VAST-related changes on Fisher discriminant analysis (FDA). In contrast, group II demonstrated no changes in spatial acuity across the measurement timelines. FDA revealed increased errors in the categorization of NH as SNHI-trained at the post-training evaluation compared to the pre-training evaluation, as the spatial performance of the latter improved with VAST in the post-training phase.
Conclusion: The study demonstrated positive outcomes of spatial training using VAST in listeners with SNHI. The utility of this training program can be extended to other clinical populations with spatial auditory processing deficits, such as auditory neuropathy spectrum disorder, cochlear implant use, and central auditory processing disorders.
6. Pastore MT, Yost WA. Spatial Release from Masking for Tones and Noises in a Soundfield under Conditions Where Targets and Maskers Are Stationary or Moving. Audiol Res 2022; 12:99-112. PMID: 35314608; PMCID: PMC8938785; DOI: 10.3390/audiolres12020013.
Abstract
Stationary visual targets often become far more salient when they move against an otherwise static background: the so-called "pop-out" effect. In two experiments conducted over loudspeakers, we tested for a similar pop-out effect in the auditory domain. Tone-in-noise and noise-in-noise detection thresholds were measured using a 2-up, 1-down adaptive procedure under conditions where the target and masker(s) were presented from the same or different locations and when the target was stationary or moved via amplitude panning. In the first experiment, target tones of 0.5 kHz and 4 kHz were tested, maskers (2-4, depending on the condition) were independent Gaussian noises, and all stimuli were of 500-ms duration. In the second experiment, a single pink-noise masker (0.3-12 kHz) was presented with a single target at one of four bandwidths (0.3-0.6 kHz, 3-6 kHz, 6-12 kHz, 0.3-12 kHz) under conditions where the target and masker were presented from the same or different locations and where the target moved or not. The results of both experiments failed to show a decrease in detection thresholds resulting from movement of the target.
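The abstract states that the target "moved via amplitude-panning" without specifying the panning law; the following minimal Python sketch, assuming a constant-power (sine/cosine) law between two adjacent loudspeakers, illustrates how such a moving target can be generated (names and parameters are illustrative, not the authors' code):

```python
import numpy as np

def constant_power_pan(signal, pan):
    """Pan a mono signal between two adjacent loudspeakers.

    pan: per-sample position in [0, 1] (0 = speaker A only, 1 = speaker B).
    The sine/cosine law keeps summed power roughly constant across pan
    positions, so loudness cues do not covary with location.
    """
    theta = pan * np.pi / 2.0
    return signal * np.cos(theta), signal * np.sin(theta)

fs = 44100
dur = 0.5                                      # 500-ms burst, as in experiment 1
noise = np.random.randn(int(fs * dur))
pan_traj = np.linspace(0.0, 1.0, noise.size)   # target sweeps from A to B
feed_a, feed_b = constant_power_pan(noise, pan_traj)   # loudspeaker feeds
```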
7. Russell MK. Age and Auditory Spatial Perception in Humans: Review of Behavioral Findings and Suggestions for Future Research. Front Psychol 2022; 13:831670. PMID: 35250777; PMCID: PMC8888835; DOI: 10.3389/fpsyg.2022.831670.
Abstract
It has been well documented, and is fairly well known, that an increase in chronological age is accompanied by a corresponding increase in sensory impairment. As most people realize, our hearing suffers as we get older; hence the increased need for hearing aids. The first portion of the present paper addresses how age affects auditory judgments of sound source position. A summary of the literature evaluating changes in the perception of sound source location and the perception of sound source motion as a function of chronological age is presented. The review is limited to empirical studies with behavioral findings involving humans. It is the view of the author that we have a severely limited understanding of how chronological age affects the perception of space based on sound. The latter part of the paper discusses how auditory spatial perception research is traditionally conducted in the laboratory. Theoretically, beneficial reasons exist for conducting research in the manner it has been. Nonetheless, from an ecological perspective, the vast majority of previous research can be considered unnatural and greatly lacking in ecological validity. Suggestions for an alternative and more ecologically valid approach to the investigation of auditory spatial perception are proposed. It is believed that an ecological approach to auditory spatial perception will enhance our understanding of the extent to which individuals perceive sound source location and how those perceptual judgments change with increasing chronological age.
8. Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield. J Speech Lang Hear Res 2021; 64:2811-2824. PMID: 34100627; PMCID: PMC8632503; DOI: 10.1044/2021_jslhr-20-00493.
Abstract
Purpose: For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads.
Method: Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ±30° during sound presentation.
Results: In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs.
Conclusion: Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental material: https://doi.org/10.23641/asha.14681412.
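To make the linked-versus-unlinked distinction concrete, here is a minimal Python sketch of the idea (a toy compressor, not the clinical CI processing chain; the threshold, ratio, and instantaneous envelope are simplifying assumptions): independent AGCs compress each ear against its own level and thereby shrink the ILD, whereas synchronized AGCs apply one shared gain and leave the ILD intact.

```python
import numpy as np

def agc_gain_db(env, thresh_db=60.0, ratio=3.0):
    """Static compression: gain reduction (dB) above threshold."""
    level_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(level_db - thresh_db, 0.0)
    return -over * (1.0 - 1.0 / ratio)

def apply_agc(left, right, linked=False):
    env_l, env_r = np.abs(left), np.abs(right)   # crude instantaneous envelopes
    if linked:
        # Synchronized: both ears get the gain dictated by the louder ear,
        # so the interaural level difference (ILD) survives compression.
        g = agc_gain_db(np.maximum(env_l, env_r))
        g_l = g_r = g
    else:
        # Independent: each ear is compressed separately; above threshold
        # the ILD is divided by roughly the compression ratio.
        g_l, g_r = agc_gain_db(env_l), agc_gain_db(env_r)
    return left * 10.0 ** (g_l / 20.0), right * 10.0 ** (g_r / 20.0)
```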
9. Mieda T, Kokubu M. Blind footballers direct their head towards an approaching ball during ball trapping. Sci Rep 2020; 10:20246. PMID: 33219244; PMCID: PMC7679380; DOI: 10.1038/s41598-020-77049-3.
Abstract
In blind football, players predict the sound location of a ball to underpin successful ball trapping. It is currently unknown whether blind footballers use head movements as a strategy for trapping a moving ball. This study investigated the characteristics of head rotation in blind footballers during ball trapping, compared with sighted nonathletes. Participants trapped an approaching ball using their right foot. Head and trunk rotation angles in the sagittal plane and head rotation angles in the horizontal plane were measured during ball trapping. The blind footballers showed a larger downward head rotation angle, as well as higher performance at the time of ball trapping, than did the sighted nonathletes. However, no significant differences between the groups were found with regard to the horizontal head rotation angle and the downward trunk rotation angle. The blind footballers consistently showed a larger relative angle of downward head rotation from an early time point after ball launching to the moment of ball trapping. These results suggest that blind footballers couple downward head rotation with the movement of an approaching ball, ensuring that the ball is kept in a consistent egocentric direction relative to the head throughout ball trapping.
Affiliation(s)
- Takumi Mieda
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8574, Japan.
- Masahiro Kokubu
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, 305-8574, Japan
10. St George BV, Cone B. Perceptual and Electrophysiological Correlates of Fixed Versus Moving Sound Source Lateralization. J Speech Lang Hear Res 2020; 63:3176-3194. PMID: 32812839; DOI: 10.1044/2020_jslhr-19-00289.
Abstract
Purpose: The aims of the study were (a) to evaluate the effects of systematically varied stimulus duration, interaural level difference (ILD), and direction on perceptual and electrophysiological metrics of lateralization for fixed versus moving targets and (b) to evaluate the hemispheric activity underlying perception of fixed versus moving auditory targets.
Method: Twelve normal-hearing, young adult listeners were evaluated using perceptual and P300 tests of lateralization. Both tests used stimuli that varied in type (fixed and moving), direction (right and left), duration (100 and 500 ms), and magnitude of ILD (9 and 18 dB). Listeners provided laterality judgments and stimulus-type discrimination (fixed vs. moving) judgments for all combinations of acoustic factors. During P300 recordings, listeners discriminated between left- versus right-directed targets as the other acoustic parameters were varied.
Results: ILD magnitude and stimulus type had statistically significant effects on laterality ratings, with larger-magnitude ILDs and the fixed type resulting in greater lateralization. Discriminability between fixed versus moving targets depended on stimulus duration and ILD magnitude. ILD magnitude was a significant predictor of P300 amplitude. There was a statistically significant inverse relationship between the perceived velocity of targets and P300 latency. Lateralized targets evoked contralateral hemispheric P300 activity. Moreover, a right-hemisphere enhancement was observed for fixed-type lateralized deviant stimuli.
Conclusions: Perceptual and P300 findings indicate that lateralization of auditory movement is highly dependent on temporal integration. Both the behavioral and physiological findings suggest that moving auditory targets with ecologically valid velocities are processed by the central auditory nervous system within a window of temporal integration greater than that for fixed auditory targets. Furthermore, these findings lend support for a left-hemispatial perceptual bias and right-hemispheric dominance for spatial listening.
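A sketch of how ILD-only fixed and moving targets of this kind might be constructed (the study's exact stimulus synthesis is not given in the abstract; the linear ILD ramp and the symmetric per-ear split are assumptions):

```python
import numpy as np

fs = 44100

def ild_stimulus(dur_ms, ild_db, moving):
    """Noise burst lateralized purely by an interaural level difference.

    Fixed target: the full ILD is applied for the whole burst.
    Moving target: the ILD ramps linearly from 0 dB to ild_db, gliding the
    image from the midline toward one ear.
    """
    n = int(fs * dur_ms / 1000)
    noise = np.random.randn(n)
    ild = np.linspace(0.0, ild_db, n) if moving else np.full(n, ild_db)
    left = noise * 10.0 ** (+ild / 40.0)    # +/- ild/2 dB per ear
    right = noise * 10.0 ** (-ild / 40.0)
    return np.stack([left, right])

fixed_short = ild_stimulus(100, 18.0, moving=False)   # 100 ms, 18-dB ILD
moving_long = ild_stimulus(500, 9.0, moving=True)     # 500 ms, 9-dB ramp
```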
Affiliation(s)
- Barbara Cone
- Department of Speech, Language, and Hearing Sciences, The University of Arizona, Tucson
11. Warnecke M, Peng ZE, Litovsky RY. The impact of temporal fine structure and signal envelope on auditory motion perception. PLoS One 2020; 15:e0238125. PMID: 32822439; PMCID: PMC7446836; DOI: 10.1371/journal.pone.0238125.
Abstract
The majority of psychoacoustic research investigating sound localization has utilized stationary sources, yet most naturally occurring sounds are in motion, either because the sound source itself moves or the listener does. In normal hearing (NH) listeners, previous research showed the extent to which sound duration and velocity impact the ability of listeners to detect sound movement. By contrast, little is known about how listeners with hearing impairments perceive moving sounds; the only study to date comparing the performance of NH and bilateral cochlear implant (BiCI) listeners demonstrated significantly poorer performance on motion detection tasks in BiCI listeners. Cochlear implants, auditory prostheses offered to profoundly deaf individuals for access to spoken language, retain the signal envelope (ENV) while discarding the temporal fine structure (TFS) of the original acoustic input. As a result, BiCI users do not have access to low-frequency TFS cues, which have previously been shown to be crucial for sound localization in NH listeners. Instead, BiCI listeners seem to rely on ENV cues for sound localization, especially level cues. Given that NH and BiCI listeners differentially utilize ENV and TFS information, the present study aimed to investigate the usefulness of these cues for auditory motion perception. We created acoustic chimaera stimuli, which allowed us to test the relative contributions of ENV and TFS to auditory motion perception. Stimuli were either moving or stationary, presented to NH listeners in free field. The task was to track the perceived sound location. We found that removing low-frequency TFS reduces sensitivity to sound motion, and fluctuating speech envelopes strongly biased the judgment of sounds to be stationary. Our findings yield a possible explanation as to why BiCI users struggle to identify sound motion, and provide a first account of cues important to the functional aspect of auditory motion perception.
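The chimaera construction follows the general recipe of Smith, Delgutte, and Oxenham (2002): filter two sounds into bands, then put the envelope (ENV) of one on the temporal fine structure (TFS) of the other. A minimal Python sketch of that recipe (the band count, filter order, and edge frequencies here are illustrative, not the paper's exact parameters):

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def bandpass(x, lo, hi, fs):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def chimera(env_src, tfs_src, fs, n_bands=16, f_lo=80.0, f_hi=8000.0):
    """ENV of env_src on the TFS of tfs_src, band by band.

    Both inputs must have the same length; fs must comfortably exceed
    2 * f_hi for the top band to be valid.
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros(len(env_src))
    for lo, hi in zip(edges[:-1], edges[1:]):
        a = hilbert(bandpass(env_src, lo, hi, fs))  # analytic signals
        b = hilbert(bandpass(tfs_src, lo, hi, fs))
        env = np.abs(a)                  # band envelope from source 1
        tfs = np.cos(np.angle(b))        # unit-amplitude carrier from source 2
        out += env * tfs
    return out
```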
Affiliation(s)
- Michaela Warnecke
- University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
- Z. Ellen Peng
- University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
- Ruth Y. Litovsky
- University of Wisconsin-Madison, Waisman Center, Madison, WI, United States of America
12. Nisha KV, Kumar UA. Pre-Attentive Neural Signatures of Auditory Spatial Processing in Listeners With Normal Hearing and Sensorineural Hearing Impairment: A Comparative Study. Am J Audiol 2019; 28:437-449. PMID: 31461328; DOI: 10.1044/2018_aja-ind50-18-0099.
Abstract
Purpose: This study was carried out to understand the neural intricacies of auditory spatial processing in listeners with sensorineural hearing impairment (SNHI) and to compare them with normal hearing (NH) listeners using both local and global measures of waveform analysis.
Method: A standard group comparison research design was adopted. Participants were assigned to two groups. Group I consisted of 13 participants with mild-moderate flat or sloping SNHI, while group II consisted of 13 participants with NH sensitivity. Electroencephalographic data using virtual acoustic stimuli (spatially loaded stimuli played in the center, right, and left hemifields) were recorded from 64 electrode sites in a passive oddball paradigm. Both local (electrode-wise waveform analysis) and global (dissimilarity index, electric field strength, and topographic pattern analyses) measures were performed on the electroencephalographic data.
Results: The local waveform analyses marked the appearance of mismatch negativity in an earlier time window, relative to those reported conventionally, in both groups. The global measures of electric field strength and topographic modulation (dissimilarity index) revealed differences between the two groups in different time periods, indicating multiple phases (integration and consolidation) of spatial processing. Further, the topographic pattern analysis showed the emergence of different scalp maps for SNHI and NH in the time window corresponding to mismatch negativity (78-150 ms), suggestive of differential spatial processing between the groups at the cortical level.
Conclusions: The findings of this study highlight the differential allotment of neural generators, denoting variations in spatial processing between SNHI and NH individuals.
Affiliation(s)
- K. V. Nisha
- Department of Audiology, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysore-570006, Karnataka State, India
- U. Ajith Kumar
- Department of Audiology, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysore-570006, Karnataka State, India
13. Moua K, Kan A, Jones HG, Misurelli SM, Litovsky RY. Auditory motion tracking ability of adults with normal hearing and with bilateral cochlear implants. J Acoust Soc Am 2019; 145:2498. PMID: 31046310; PMCID: PMC6491347; DOI: 10.1121/1.5094775.
Abstract
Adults with bilateral cochlear implants (BiCIs) receive benefits in localizing stationary sounds when listening with two implants compared with one; however, sound localization ability is significantly poorer when compared to normal hearing (NH) listeners. Little is known about localizing sound sources in motion, which occurs in typical everyday listening situations. The authors considered the possibility that sound motion may improve sound localization in BiCI users by providing multiple places of information. Alternatively, the ability to compare multiple spatial locations may be compromised in BiCI users due to degradation of binaural cues, and thus result in poorer performance relative to NH adults. In this study, the authors assessed listeners' abilities to distinguish between sounds that appear to be moving vs stationary, and track the angular range and direction of moving sounds. Stimuli were bandpass-filtered (150-6000 Hz) noise bursts of different durations, panned over an array of loudspeakers. Overall, the results showed that BiCI users were poorer than NH adults in (i) distinguishing between a moving vs stationary sound, (ii) correctly identifying the direction of movement, and (iii) tracking the range of movement. These findings suggest that conventional cochlear implant processors are not able to fully provide the cues necessary for perceiving auditory motion correctly.
Affiliation(s)
- Keng Moua
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
- Alan Kan
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
- Heath G Jones
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
- Sara M Misurelli
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
- Ruth Y Litovsky
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
14. Rummukainen OS, Schlecht SJ, Habets EAP. Self-translation induced minimum audible angle. J Acoust Soc Am 2018; 144:EL340. PMID: 30404470; DOI: 10.1121/1.5064957.
Abstract
The minimum audible angle has been studied with a stationary listener and a stationary or a moving sound source. The study at hand focuses on a scenario where the angle is induced by listener self-translation in relation to a stationary sound source. First, the classic stationary listener minimum audible angle experiment is replicated using a headphone-based reproduction system. This experiment confirms that the reproduction system is able to produce a localization cue resolution comparable to loudspeaker reproduction. Next, the self-translation minimum audible angle is shown to be 3.3° in the horizontal plane in front of the listener.
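The geometry behind a self-translation minimum audible angle is worth spelling out; under the simplifying assumption of a source initially straight ahead at distance r, a lateral step d rotates the source direction by arctan(d/r):

```latex
% Lateral translation d needed to produce an angular change \theta for a
% stationary source initially straight ahead at distance r:
\theta = \arctan\!\left(\frac{d}{r}\right)
\quad\Longrightarrow\quad
d = r \tan\theta .
% For the reported self-translation MAA of 3.3^{\circ}:
%   d = r \tan(3.3^{\circ}) \approx 0.058\, r,
% i.e. roughly a 12-cm sideways step for a source 2 m away.
```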
15. Lundbeck M, Hartog L, Grimm G, Hohmann V, Bramsløw L, Neher T. Influence of Multi-microphone Signal Enhancement Algorithms on the Acoustics and Detectability of Angular and Radial Source Movements. Trends Hear 2018; 22:2331216518779719. PMID: 29900799; PMCID: PMC6024528; DOI: 10.1177/2331216518779719.
Abstract
Hearing-impaired listeners are known to have difficulties not only with understanding speech in noise but also with judging source distance and movement, and these deficits are related to perceived handicap. It is possible that the perception of spatially dynamic sounds can be improved with hearing aids (HAs), but so far this has not been investigated. In a previous study, older hearing-impaired listeners showed poorer detectability for virtual left-right (angular) and near-far (radial) source movements due to lateral interfering sounds and reverberation, respectively. In the current study, potential ways of improving these deficits with HAs were explored. Using stimuli very similar to before, detailed acoustic analyses were carried out to examine the influence of different HA algorithms for suppressing noise and reverberation on the acoustic cues previously shown to be associated with source movement detectability. For an algorithm that combined unilateral directional microphones with binaural coherence-based noise reduction and for a bilateral beamformer with binaural cue preservation, movement-induced changes in spectral coloration, signal-to-noise ratio, and direct-to-reverberant energy ratio were greater compared with no HA processing. To evaluate these two algorithms perceptually, aided measurements of angular and radial source movement detectability were performed with 20 older hearing-impaired listeners. The analyses showed that, in the presence of concurrent interfering sounds and reverberation, the bilateral beamformer could restore source movement detectability in both spatial dimensions, whereas the other algorithm only improved detectability in the near-far dimension. Together, these results provide a basis for improving the detectability of spatially dynamic sounds with HAs.
Affiliation(s)
- Micha Lundbeck
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
- Laura Hartog
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
- Giso Grimm
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
- Volker Hohmann
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
- Lars Bramsløw
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Tobias Neher
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
16.
Abstract
By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments.
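To illustrate the kind of warping the abstract describes (this is an illustrative toy, not the paper's fitted model): if the local perceptual gain falls from 1 at the front to 0.5 at the side, a side source must physically move twice as far as a front source to appear to move the same amount, and perceived azimuth is the integral of that local gain.

```python
import numpy as np

def perceived_azimuth(phys_az_deg):
    """Map physical azimuth to perceived azimuth under a toy warp.

    Local gain g(phi) = 1 / (1 + sin phi) equals 1 at 0 deg (front) and
    0.5 at 90 deg (side), consistent with side sources needing to move
    about twice as much to be judged as moving equally.
    """
    phi = np.radians(abs(phys_az_deg))
    t = np.linspace(0.0, phi, 1001)
    g = 1.0 / (1.0 + np.sin(t))
    dt = t[1] - t[0]
    psi = np.degrees(dt * 0.5 * np.sum(g[:-1] + g[1:]))   # trapezoid rule
    return float(np.copysign(psi, phys_az_deg))

# The inverse of such a map, as the abstract notes, could pre-warp source
# positions in a virtual acoustic display to stabilize perceived space.
```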
Affiliation(s)
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, UK
17. Auditory motion parallax. Proc Natl Acad Sci U S A 2018; 115:3998-4000. PMID: 29622682; DOI: 10.1073/pnas.1803547115.
18.
Abstract
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
19. Brimijoin WO, Akeroyd MA. The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location. J Am Acad Audiol 2018; 27:588-600. PMID: 27406664; DOI: 10.3766/jaaa.15101.
Abstract
Background: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids.
Purpose: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues.
Research design: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener's head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids.
Study sample: We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment.
Data collection and analysis: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided.
Results: Hearing-impaired listeners were less accurate at front/back discrimination in both static and illusory conditions. Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but independent of other factors, on average, listeners wearing aids exhibited a spectrally dependent increase in "front" responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front.
Conclusions: Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion-related cues with sufficient fidelity to allow reliable front/back discrimination.
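The rotation-gain construction behind the front/back illusion is compact enough to state in code; a minimal Python sketch of the paradigm as described (variable names are mine, not the authors'):

```python
import numpy as np

def illusory_source_azimuth(head_az_deg, start_az_deg=0.0, gain=2.0):
    """Yoke the physical source to head motion at `gain` times the head's
    angular excursion. At gain = 2, the source's head-relative azimuth
    changes exactly as it would for a static source mirrored across the
    interaural axis, so self-motion cues place it in the opposite
    front/back hemifield from its physical location.
    """
    return start_az_deg + gain * np.asarray(head_az_deg, dtype=float)

# Head oscillates +/-30 deg at 0.5 Hz; the signal is panned to follow:
t = np.linspace(0.0, 4.0, 400)
head = 30.0 * np.sin(2.0 * np.pi * 0.5 * t)
speaker_az = illusory_source_azimuth(head)      # drives the loudspeaker ring
```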
Affiliation(s)
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, Glasgow, UK
20. Letter to the Editor: Johnson, J. A., Xu, J., Cox, R. M. (2017). Impact of Hearing Aid Technology on Outcomes in Daily Life III: Localization, Ear Hear, 38, 746-759. Ear Hear 2018; 39:398-399. PMID: 29298165; DOI: 10.1097/aud.0000000000000511.
21. Town SM, Brimijoin WO, Bizley JK. Egocentric and allocentric representations in auditory cortex. PLoS Biol 2017; 15:e2001878. PMID: 28617796; PMCID: PMC5472254; DOI: 10.1371/journal.pbio.2001878.
Abstract
A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
Affiliation(s)
- Stephen M. Town
- Ear Institute, University College London, London, United Kingdom
- W. Owen Brimijoin
- MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, United Kingdom
22. Freeman TCA, Culling JF, Akeroyd MA, Brimijoin WO. Auditory compensation for head rotation is incomplete. J Exp Psychol Hum Percept Perform 2017; 43:371-380. PMID: 27841453; PMCID: PMC5289217; DOI: 10.1037/xhp0000321.
Abstract
Hearing is confronted by a similar problem to vision when the observer moves. The image motion that is created remains ambiguous until the observer knows the velocity of eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information. These "extraretinal signals" compensate for self-movement, converting image motion into head-centered coordinates, although not always perfectly. We investigated whether the auditory system also transforms coordinates by examining the degree of compensation for head rotation when judging a moving sound. Real-time recordings of head motion were used to change the "movement gain" relating head movement to source movement across a loudspeaker array. We then determined psychophysically the gain that corresponded to a perceptually stationary source. Experiment 1 showed that the gain was small and positive for a wide range of trained head speeds. Hence, listeners perceived a stationary source as moving slightly opposite to the head rotation, in much the same way that observers see stationary visual objects move against a smooth pursuit eye movement. Experiment 2 showed the degree of compensation remained the same for sounds presented at different azimuths, although the precision of performance declined when the sound was eccentric. We discuss two possible explanations for incomplete compensation, one based on differences in the accuracy of signals encoding image motion and self-movement and one concerning statistical optimization that sacrifices accuracy for precision. We then consider the degree to which such explanations can be applied to auditory motion perception in moving listeners.
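The "movement gain" manipulation and the perceptually stationary gain it yields can be sketched in a few lines (the response data below are invented purely to show the computation):

```python
import numpy as np

def source_azimuth(head_az_deg, movement_gain):
    """World-centric source azimuth yoked to head azimuth.

    Gain 0 = physically stationary source; a positive gain moves the
    source with the head. The gain judged 'stationary' was found to be
    slightly positive, i.e. compensation for head rotation is incomplete.
    """
    return movement_gain * np.asarray(head_az_deg, dtype=float)

# Estimate the perceptually stationary gain from the proportion of
# "moved with my head" responses at each tested gain (made-up numbers):
gains = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
p_with_head = np.array([0.05, 0.15, 0.40, 0.75, 0.95])
stationary_gain = np.interp(0.5, p_with_head, gains)   # ~ +0.03 here
```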
Affiliation(s)
- Michael A Akeroyd
- Medical Research Council Institute of Hearing Research, University of Nottingham
- W Owen Brimijoin
- Medical Research Council/Chief Scientist Office Institute of Hearing Research-Scottish Section, Glasgow Royal Infirmary
23. Honda A, Ohba K, Iwaya Y, Suzuki Y. Detection of Sound Image Movement During Horizontal Head Rotation. i-Perception 2016; 7:2041669516669614. PMID: 27698993; PMCID: PMC5030746; DOI: 10.1177/2041669516669614.
Abstract
Movement detection for a virtual sound source was measured during the listener's horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. Each trial consisted of two intervals. In each interval, a virtual sound source was presented 60° to the right or left of the listener, who rotated the head to face the sound image position. In one of the two intervals, the sound position was shifted slightly in the middle of the rotation. Listeners were asked to judge in which interval of a trial the sound stimulus moved. The results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of rotation velocity.
Affiliation(s)
- Akio Honda
- Yamanashi Eiwa College, Yamanashi, Japan
24. Early multisensory integration of self and source motion in the auditory system. Proc Natl Acad Sci U S A 2016; 113:8308-13. PMID: 27357667; DOI: 10.1073/pnas.1522615113.
Abstract
Discriminating external from self-produced sensory inputs is a major challenge for brains. In the auditory system, sound localization must account for movements of the head and ears, a computation likely to involve multimodal integration. Principal neurons (PNs) of the dorsal cochlear nucleus (DCN) are known to be spatially selective and to receive multimodal sensory information. We studied the responses of PNs to body rotation with or without sound stimulation, as well as to sound source rotation with stationary body. We demonstrated that PNs are sensitive to head direction, and, in the presence of sound, they differentiate between body and sound source movement. Thus, the output of the DCN provides the brain with enough information to disambiguate the movement of a sound source from an acoustically identical relative movement produced by motion of the animal.
25. Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016; 20:2331216516644254. PMID: 27094029; PMCID: PMC4871213; DOI: 10.1177/2331216516644254.
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile
- School of Medical Sciences, University of Sydney, NSW, Australia; Starkey Hearing Research Center, Berkeley, CA, USA
- Johahn Leung
- School of Medical Sciences, University of Sydney, NSW, Australia
26. Yost WA, Zhong X, Najam A. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process. J Acoust Soc Am 2015; 138:3293-310. PMID: 26627802; DOI: 10.1121/1.4935091.
Abstract
In four experiments, listeners were rotated or were stationary while sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change; yet in the everyday world, listeners perceive sound rotation only when sounds rotate, not when they themselves rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that computing the world-centric location of sound sources requires the auditory system to combine information about the auditory cues used for sound source location with cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone; it is a multisystem process.
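The bookkeeping in the paper's general hypothesis reduces to a coordinate change: acoustic cues give direction relative to the head, and adding head position (from vision and/or vestibular input) recovers direction relative to the world. A minimal sketch:

```python
import numpy as np

def world_centric_azimuth(head_centric_az_deg, head_pos_deg):
    """Combine the head-centric direction given by auditory cues with the
    head's orientation in the room to obtain a world-centric direction,
    wrapped to [-180, 180) degrees."""
    az = np.asarray(head_centric_az_deg, dtype=float) + head_pos_deg
    return (az + 180.0) % 360.0 - 180.0

# A listener rotated to +90 deg hearing a source 30 deg left of the nose:
print(world_centric_azimuth(-30.0, 90.0))   # -> 60.0 (world coordinates)
```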
Affiliation(s)
- William A Yost
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Xuan Zhong
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA
- Anbar Najam
- Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287, USA