1
Gonan S, Vallortigara G, Chiandetti C. When sounds come alive: animacy in the auditory sense. Front Psychol 2024; 15:1498702. [PMID: 39526129] [PMCID: PMC11543492] [DOI: 10.3389/fpsyg.2024.1498702]
Abstract
Despite the interest in animacy perception, few studies have considered sensory modalities other than vision. However, even everyday experience suggests that the auditory sense can also contribute to the recognition of animate beings, for example through the identification of voice-like sounds or through the perception of sounds that are the by-products of locomotion. Here we review the studies that have investigated the responses of humans and other animals to different acoustic features that may indicate the presence of a living entity, with particular attention to the neurophysiological mechanisms underlying such perception. Specifically, we identified three auditory animacy cues in the existing literature: voicelikeness, consonance, and acoustic motion. The first two cues are exclusive to the auditory sense and indicate the presence of an animate being capable of producing vocalizations or harmonic sounds (the adaptive value of consonance is also exploited in musical compositions in which the musician wants to convey certain meanings). Acoustic motion, on the other hand, is closely linked to the perception of animacy in the visual sense, in particular to self-propelled and biological motion stimuli. The results presented here support the existence of a multifaceted auditory sense of animacy that is shared by distantly related species and probably represents an innate predisposition. They also suggest that the mechanisms underlying the perception of living things may all be part of an integrated network involving different sensory modalities.
Affiliation(s)
- Stefano Gonan
- Department of Life Sciences, University of Trieste, Trieste, Italy
2
Schleich P, Wirtz C, Schatzer R, Nopp P. Similar performance in sound localisation with unsynchronised and synchronised automatic gain controls in bilateral cochlear implant recipients. Int J Audiol 2024:1-7. [PMID: 39075948] [DOI: 10.1080/14992027.2024.2383700]
Abstract
OBJECTIVE: One proposed method to improve sound localisation for bilateral cochlear implant (BiCI) users is to synchronise the automatic gain control (AGC) of both audio processors. In this study we tested whether AGC synchronisation in a dual-loop front-end processing scheme with a 3:1 compression ratio improves sound localisation acuity. DESIGN: Source identification in the frontal hemifield was tested in an anechoic chamber as a function of (roving) presentation level. Three different methods of AGC synchronisation were compared to the standard unsynchronised approach. Both root mean square error (RMSE) and signed bias were calculated to evaluate sound localisation in the horizontal plane. STUDY SAMPLE: Six BiCI users. RESULTS: None of the three AGC synchronisation methods yielded significant improvements in either localisation error or bias, neither across presentation levels nor for individual presentation levels. For synchronised AGC, the pooled mean (standard deviation) localisation error across the three synchronisation methods was 24.7 (5.8) degrees RMSE; for unsynchronised AGC it was 27.4 (7.5) degrees. The localisation bias was 5.1 (5.5) degrees for synchronised AGC and 5.0 (3.8) degrees for unsynchronised AGC. CONCLUSIONS: These findings do not support the hypothesis that the tested AGC synchronisation configurations improve localisation acuity in bilateral users of MED-EL cochlear implants.
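As a concrete illustration of the two outcome measures named above, the following sketch computes RMSE and signed bias from paired target and response azimuths. It is not the study's analysis code, and the sign convention for the bias (positive = responses shifted to the right of the target) is an assumption.

```python
# Illustrative sketch, not the study's analysis code: RMSE and signed bias
# for horizontal-plane localisation from paired target/response azimuths (deg).
import numpy as np

def localisation_errors(target_az, response_az):
    """Return (rmse, signed_bias) in degrees."""
    error = np.asarray(response_az, float) - np.asarray(target_az, float)
    rmse = np.sqrt(np.mean(error ** 2))   # overall localisation error
    bias = np.mean(error)                 # assumed convention: positive = shifted rightward
    return rmse, bias

# Example: five trials, targets vs. a listener's responses
rmse, bias = localisation_errors([-60, -30, 0, 30, 60], [-45, -20, 5, 20, 40])
print(f"RMSE = {rmse:.1f} deg, signed bias = {bias:.1f} deg")
```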
Affiliation(s)
- Peter Nopp
- MED-EL Medical Electronics, Innsbruck, Austria
3
Dietze A, Clapp SW, Seeber BU. Static and moving minimum audible angle: Independent contributions of reverberation and position. JASA Express Lett 2024; 4:054404. [PMID: 38742997] [DOI: 10.1121/10.0025992]
Abstract
Two measures of auditory spatial resolution, the minimum audible angle and the minimum audible movement angle, have been obtained in a simulated acoustic environment using Ambisonics sound field reproduction. Trajectories were designed to provide no reliable cues for the spatial discrimination task. Larger threshold angles were found in reverberant compared to anechoic conditions, for stimuli on the side compared to the front, and for moving compared to static stimuli. The effect of reverberation appeared to be independent of the position of the sound source (same relative threshold increase) and was independently present for static and moving sound sources.
Affiliation(s)
- Anna Dietze
- Audio Information Processing, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Physiology and Modelling of Auditory Perception, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Samuel W Clapp
- Audio Information Processing, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Cruise LLC, 333 Brannan Street, San Francisco, California 94107, USA
- Bernhard U Seeber
- Audio Information Processing, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
4
Tolentino-Castro JW, Schroeger A, Cañal-Bruland R, Raab M. Increasing auditory intensity enhances temporal but deteriorates spatial accuracy in a virtual interception task. Exp Brain Res 2024. [PMID: 38334793] [DOI: 10.1007/s00221-024-06787-x]
Abstract
Humans are quite accurate and precise in interception performance. So far, it is unclear what role auditory information plays in spatiotemporal accuracy and consistency during interception. In the current study, interception performance was measured, based on auditory information alone, as the spatiotemporal accuracy and consistency of when and where a virtual ball was intercepted on a visible line displayed on a screen. We predicted that participants would indicate more accurately when the ball would cross the target line than where it would cross it, because human hearing is particularly sensitive to temporal parameters. In a within-subject design, we manipulated auditory intensity (52, 61, 70, 79, and 88 dB) using a sound stimulus programmed to be perceived as moving across the screen along an inverted C-shaped trajectory. Results showed that the louder the sound, the better the temporal accuracy but the worse the spatial accuracy. We argue that louder sounds increased attention toward auditory information when performing interception judgments. How balls are intercepted, and how sound intensity may in practice add to temporal accuracy and consistency, is discussed from a theoretical perspective of modality-specific interception behavior.
Affiliation(s)
- J Walter Tolentino-Castro
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933, Cologne, Germany
- Anna Schroeger
- Department for General Psychology, Justus Liebig University Giessen, Giessen, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Jena, Germany
- Markus Raab
- Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933, Cologne, Germany
- School of Applied Sciences, London South Bank University, London, England
5
Kreyenmeier P, Schroeger A, Cañal-Bruland R, Raab M, Spering M. Rapid Audiovisual Integration Guides Predictive Actions. eNeuro 2023; 10:ENEURO.0134-23.2023. [PMID: 37591732] [PMCID: PMC10464656] [DOI: 10.1523/eneuro.0134-23.2023]
Abstract
Natural movements, such as catching a ball or capturing prey, typically involve multiple senses. Yet, laboratory studies on human movements commonly focus solely on vision and ignore sound. Here, we ask how visual and auditory signals are integrated to guide interceptive movements. Human observers tracked the brief launch of a simulated baseball, randomly paired with batting sounds of varying intensities, and made a quick pointing movement at the ball. Movement end points revealed systematic overestimation of target speed when the ball launch was paired with a loud versus a quiet sound, although sound was never informative. This effect was modulated by the availability of visual information; sounds biased interception when the visual presentation duration of the ball was short. Amplitude of the first catch-up saccade, occurring ∼125 ms after target launch, revealed early integration of audiovisual information for trajectory estimation. This sound-induced bias was reversed during later predictive saccades when more visual information was available. Our findings suggest that auditory and visual signals are integrated to guide interception and that this integration process must occur early at a neural site that receives auditory and visual signals within an ultrashort time span.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Anna Schroeger
- Department of Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Friedrich Schiller University Jena, 07743 Jena, Germany
- Markus Raab
- Department of Performance Psychology, German Sport University Cologne, 50933 Cologne, Germany
- School of Applied Sciences, London South Bank University, London SE1 0AA, United Kingdom
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z2, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
6
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. [PMID: 37448716] [PMCID: PMC10338176] [DOI: 10.3389/fpsyg.2023.1183303]
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns reflecting listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. A better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
7
McLachlan G, Majdak P, Reijniers J, Mihocic M, Peremans H. Dynamic spectral cues do not affect human sound localization during small head movements. Front Neurosci 2023; 17:1027827. [PMID: 36816108] [PMCID: PMC9936143] [DOI: 10.3389/fnins.2023.1027827]
Abstract
Natural listening involves constant deployment of small head movements. Spatial listening is facilitated by head movements, especially when resolving front-back confusions, an otherwise common issue during sound localization under head-still conditions. The present study investigated which acoustic cues are utilized by human listeners to localize sounds using small head movements (below ±10° around the center). Seven normal-hearing subjects participated in a sound localization experiment in a virtual reality environment. Four acoustic cue stimulus conditions were presented (full spectrum, flattened spectrum, frozen spectrum, free field) under three movement conditions (no movement, head rotations over the yaw axis, and head rotations over the pitch axis). Localization performance was assessed using three metrics: lateral precision error, polar precision error, and front-back confusion rate. Analysis with mixed-effects models showed that even small yaw rotations provide a remarkable decrease in front-back confusion rate, whereas pitch rotations did not show much of an effect. Furthermore, monaural spectral shape (MSS) cues improved localization performance even in the presence of dynamic interaural time difference (dITD) cues. However, performance was similar between stimuli with and without dynamic MSS (dMSS) cues. This indicates that human listeners utilize MSS cues before the head moves, but do not rely on dMSS cues to localize sounds when making small head movements.
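The role of dynamic ITD cues in resolving front-back confusion can be made concrete with a toy calculation. The sketch below uses the Woodworth spherical-head approximation (an assumption chosen for illustration, not the model used in the study): a front and a back source at mirrored azimuths produce the same static ITD, but a small yaw rotation changes the ITD in opposite directions for the two sources.

```python
# Minimal illustration (assumption: Woodworth spherical-head formula) of how a
# small yaw rotation disambiguates front from back via the sign of the ITD change.
import numpy as np

HEAD_RADIUS = 0.0875    # m, assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s

def itd(azimuth_deg):
    """Woodworth ITD (s) for a distant source at the given head-relative azimuth."""
    az = np.deg2rad(azimuth_deg)
    lateral = np.arcsin(np.sin(az))    # front-back mirrored sources share this lateral angle
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (np.sin(lateral) + lateral)

for source_az, label in [(10.0, "front"), (170.0, "back ")]:
    static = itd(source_az)
    after_turn = itd(source_az - 5.0)  # head yawed 5 deg toward the source side
    print(f"{label} source: static ITD {static*1e6:5.0f} us, "
          f"change after 5 deg yaw {1e6*(after_turn - static):+5.1f} us")
```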
Affiliation(s)
- Glen McLachlan
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Jonas Reijniers
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
- Michael Mihocic
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Herbert Peremans
- Department of Engineering Management, University of Antwerp, Antwerp, Belgium
8
Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023; 378:20210460. [PMID: 36511418] [PMCID: PMC9745882] [DOI: 10.1098/rstb.2021.0460]
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
9
Cho AY, Kidd G. Auditory motion as a cue for source segregation and selection in a "cocktail party" listening environment. J Acoust Soc Am 2022; 152:1684. [PMID: 36182296] [PMCID: PMC9489258] [DOI: 10.1121/10.0013990]
Abstract
Source motion was examined as a cue for segregating concurrent speech or noise sources. In two different headphone-based tasks, motion detection (MD) and speech-on-speech masking (SI), one source among three was designated as the target only by imposing sinusoidal variation in azimuth during the stimulus presentation. For MD, the listener was asked which of the three concurrent sources was in motion during the trial. For SI, the listener was asked to report the words spoken by the moving speech source. MD performance improved as the amplitude of the sinusoidal motion (i.e., displacement in azimuth) increased over the range of values tested (±5° to ±30°) for both modulated noise and speech targets, with better performance found for speech. SI performance also improved as the amplitude of target motion increased. Furthermore, SI performance improved as word position progressed throughout the sentence. Performance on the MD task was correlated with performance on the SI task across individual subjects. For the SI conditions tested here, these findings are consistent with the proposition that listeners first detect the moving target source, then focus attention on the target location as the target sentence unfolds.
Affiliation(s)
- Adrian Y Cho
- Speech and Hearing Bioscience and Technology Program, Harvard University, Cambridge, Massachusetts 02138, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
10
Battal C, Gurtubay-Antolin A, Rezk M, Mattioni S, Bertonati G, Occelli V, Bottini R, Targher S, Maffei C, Jovicich J, Collignon O. Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision. J Neurosci 2022; 42:4652-4668. [PMID: 35501150] [PMCID: PMC9186796] [DOI: 10.1523/jneurosci.1554-21.2022]
Abstract
hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions and whether the functional enhancement observed in the blind is motion specific, or also involves sound source location, remains unsolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of occipitotemporal networks supporting spatial hearing in the sighted.
SIGNIFICANCE STATEMENT: Spatial hearing helps living organisms navigate their environment. This is certainly even more true in people born blind. How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in human planum temporale in blind relative to sighted people; and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
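For readers unfamiliar with the multivariate decoding approach mentioned above, the sketch below shows the general idea on simulated data. It does not reproduce the authors' pipeline or parameters: a cross-validated linear classifier simply tests whether voxel patterns in a region carry information about auditory motion direction.

```python
# Illustrative sketch of direction decoding from voxel patterns (simulated data,
# not the authors' analysis pipeline).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_dir, n_voxels = 40, 120
directions = np.repeat([0, 1, 2, 3], n_trials_per_dir)   # four motion directions

# Fake response patterns: a weak direction-specific signal embedded in noise
signal = rng.normal(size=(4, n_voxels))
patterns = signal[directions] * 0.3 + rng.normal(size=(len(directions), n_voxels))

accuracy = cross_val_score(SVC(kernel="linear"), patterns, directions, cv=5)
print(f"decoding accuracy: {accuracy.mean():.2f} (chance = 0.25)")
```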
Affiliation(s)
- Ceren Battal
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Ane Gurtubay-Antolin
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, 20009 Donostia-San Sebastián, Spain
- Mohamed Rezk
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefania Mattioni
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Giorgia Bertonati
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Valeria Occelli
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Department of Psychology, Edge Hill University, Ormskirk L39 4QP, United Kingdom
- Roberto Bottini
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefano Targher
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Chiara Maffei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts 01129
- Jorge Jovicich
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Olivier Collignon
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- School of Health Sciences, HES-SO Valais-Wallis, 1950 Sion, Switzerland
- The Sense Innovation and Research Center, CH-1011 Lausanne, Switzerland
11
Pastore MT, Yost WA. Spatial Release from Masking for Tones and Noises in a Soundfield under Conditions Where Targets and Maskers Are Stationary or Moving. Audiol Res 2022; 12:99-112. [PMID: 35314608] [PMCID: PMC8938785] [DOI: 10.3390/audiolres12020013]
Abstract
Stationary visual targets often become far more salient when they begin to move against an otherwise static background, the so-called "pop-out" effect. In two experiments conducted over loudspeakers, we tested for a similar pop-out effect in the auditory domain. Tone-in-noise and noise-in-noise detection thresholds were measured using a 2-up, 1-down adaptive procedure under conditions where the target and masker(s) were presented from the same or different locations and when the target was stationary or moved via amplitude panning. In the first experiment, target tones of 0.5 kHz and 4 kHz were tested, maskers (2-4, depending on the condition) were independent Gaussian noises, and all stimuli were 500 ms in duration. In the second experiment, a single pink-noise masker (0.3-12 kHz) was presented with a single target at one of four bandwidths (0.3-0.6 kHz, 3-6 kHz, 6-12 kHz, 0.3-12 kHz) under conditions where the target and masker were presented from the same or different locations and where the target moved or not. The results of both experiments failed to show a decrease in detection thresholds resulting from movement of the target.
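The "moved via amplitude panning" manipulation can be sketched as constant-power panning between two loudspeakers. The snippet below is illustrative only, with assumed signal parameters rather than those of the experiments.

```python
# Sketch of constant-power amplitude panning between two loudspeakers
# (illustrative; parameter values are assumed, not taken from the study).
import numpy as np

fs = 44100
dur = 0.5                                   # 500-ms stimulus
t = np.arange(int(fs * dur)) / fs
target = np.sin(2 * np.pi * 500 * t)        # 0.5-kHz target tone

# Pan angle sweeps from speaker A (0) to speaker B (pi/2) over the stimulus
pan = np.linspace(0, np.pi / 2, t.size)
gain_a, gain_b = np.cos(pan), np.sin(pan)   # gains satisfy g_a^2 + g_b^2 = 1
speaker_a, speaker_b = gain_a * target, gain_b * target

print("summed power stays constant:", np.allclose(gain_a**2 + gain_b**2, 1.0))
```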
12
Deep neural network models of sound localization reveal how perception is adapted to real-world environments. Nat Hum Behav 2022; 6:111-133. [PMID: 35087192] [PMCID: PMC8830739] [DOI: 10.1038/s41562-021-01244-z]
Abstract
Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information, and noises mask parts of target sounds. To better understand real-world localization we equipped a deep neural network with human ears and trained it to localize sounds in a virtual environment. The resulting model localized accurately in realistic conditions with noise and reverberation. In simulated experiments, the model exhibited many features of human spatial hearing: sensitivity to monaural spectral cues and interaural time and level differences, integration across frequency, biases for sound onsets, and limits on localization of concurrent sources. But when trained in unnatural environments without either reverberation, noise, or natural sounds, these performance characteristics deviated from those of humans. The results show how biological hearing is adapted to the challenges of real-world environments and illustrate how artificial neural networks can reveal the real-world constraints that shape perception.
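The published model's architecture is not reproduced here, but the sketch below illustrates the basic interaural cues any two-eared localization system has to exploit, estimating an ITD by cross-correlation and an ILD from the level ratio of a simulated binaural signal.

```python
# Illustrative sketch (not the published network): interaural time and level
# differences estimated from a simulated two-ear signal.
import numpy as np

fs = 44100
rng = np.random.default_rng(1)
source = rng.normal(size=int(0.05 * fs))                   # 50-ms broadband burst

delay = 15                                                  # samples; ~340 us, left ear leads
left = np.concatenate([source, np.zeros(delay)])
right = 0.7 * np.concatenate([np.zeros(delay), source])     # right ear delayed and quieter

lags = np.arange(-40, 41)
xcorr = [np.dot(left[40:-40], right[40 + k: (-40 + k) or None]) for k in lags]
itd_est = lags[int(np.argmax(xcorr))] / fs                  # positive lag = right ear lags the left
ild_est = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
print(f"estimated ITD = {itd_est * 1e6:.0f} us, estimated ILD = {ild_est:.1f} dB")
```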
13
Dahl K, Andersen M, Henriksen TB. Association between auditory system pathology and sudden infant death syndrome (SIDS): a systematic review. BMJ Open 2021; 11:e055318. [PMID: 34911724] [PMCID: PMC8679124] [DOI: 10.1136/bmjopen-2021-055318]
Abstract
OBJECTIVE: A theory has emerged suggesting that abnormalities in the auditory system may be associated with sudden infant death syndrome (SIDS). However, current clinical evidence has never been systematically reviewed. DESIGN: A systematic review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. DATA SOURCES: PubMed, Embase and Web of Science were systematically searched through 7 September 2020. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Only human studies with a reference group were included. Studies were eligible for inclusion if they examined infants assessed with otoacoustic emissions (OAEs) or auditory brainstem response (ABR), or infants who had autopsies with brainstem histology of the auditory system. SIDS was the primary outcome, while the secondary outcome was near-miss sudden infant death syndrome episodes. DATA EXTRACTION AND SYNTHESIS: Two independent reviewers extracted data and assessed risk of bias and the quality of evidence. Due to high heterogeneity, a narrative synthesis was conducted. Risk of bias and quality of evidence were assessed using the Newcastle-Ottawa Scale and Grading of Recommendations Assessment, Development and Evaluation. RESULTS: Twelve case-control studies were included. Seven studies on OAEs or ABR had a high degree of inconsistency. Contrarily, four out of five studies reporting on brainstem histology found that auditory brainstem abnormalities were more prevalent in SIDS cases than in controls. However, the quality of evidence across all studies was very low. CONCLUSION: This systematic review found no clear association between auditory system pathology and SIDS. The higher prevalence of histological abnormalities in the auditory system of SIDS cases may indicate an association. However, further studies of higher quality and with larger study populations are needed to determine whether these findings are valid. PROSPERO REGISTRATION NUMBER: CRD42020208045.
Affiliation(s)
- Katrine Dahl
- Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- Mads Andersen
- Paediatrics and Adolescent Medicine, Aarhus University Hospital, Aarhus, Denmark
- Tine Brink Henriksen
- Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- Paediatrics and Adolescent Medicine, Aarhus University Hospital, Aarhus, Denmark
14
Effect of hearing aids on body balance function in non-reverberant condition: A posturographic study. PLoS One 2021; 16:e0258590. [PMID: 34644358] [PMCID: PMC8513876] [DOI: 10.1371/journal.pone.0258590]
Abstract
Objective: The purpose of this study was to evaluate the effect of hearing aids on body balance function in a strictly controlled auditory environment. Methods: We assessed 10 experienced hearing aid users and 10 normal-hearing participants. All the participants were tested using posturography under eight conditions in an acoustically shielded non-reverberant room: (1) eyes open with sound stimuli, with and without foam rubber, (2) eyes closed with sound stimuli, with and without foam rubber, (3) eyes open without sound stimuli, with and without foam rubber, and (4) eyes closed without sound stimuli, with and without foam rubber. Results: The auditory cue improved the total path area and sway velocity in both the hearing aid users and the normal-hearing participants. The analysis of variance showed that the interaction among eye condition, sound condition, and the between-group factor was significant for the maximum displacement of the center-of-pressure in the mediolateral axis (F [1, 18] = 6.19, p = 0.02). The maximum displacement of the center-of-pressure in the mediolateral axis improved with the auditory cues in the normal-hearing participants in the eyes-closed condition (5.4 cm and 4.7 cm, p < 0.01). In the hearing aid users, this difference was not significant (5.9 cm and 5.7 cm, p = 0.45). The maximum displacement of the center-of-pressure in the anteroposterior axis improved in both the hearing aid users and the normal-hearing participants.
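To make the posturographic measures concrete, the sketch below computes a sway path length, mean sway velocity, and maximum mediolateral/anteroposterior displacement from a simulated centre-of-pressure trace. The definitions here are generic assumptions; the study's exact measures (e.g., "total path area") may be defined differently.

```python
# Generic sketch with assumed definitions (the study's exact measures may differ):
# common posturographic indices from a centre-of-pressure (CoP) trace.
import numpy as np

fs = 100                                           # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)                       # 30-s quiet-stance recording
rng = np.random.default_rng(2)
cop_ml = np.cumsum(rng.normal(0, 0.02, t.size))    # mediolateral CoP, cm (simulated)
cop_ap = np.cumsum(rng.normal(0, 0.02, t.size))    # anteroposterior CoP, cm (simulated)

step = np.hypot(np.diff(cop_ml), np.diff(cop_ap))
path_length = step.sum()                           # total sway path, cm
sway_velocity = path_length / t[-1]                # mean sway velocity, cm/s
max_ml = cop_ml.max() - cop_ml.min()               # max mediolateral displacement, cm
max_ap = cop_ap.max() - cop_ap.min()               # max anteroposterior displacement, cm
print(f"path {path_length:.1f} cm, velocity {sway_velocity:.2f} cm/s, "
      f"ML range {max_ml:.1f} cm, AP range {max_ap:.1f} cm")
```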
15
Bertonati G, Amadeo MB, Campus C, Gori M. Auditory speed processing in sighted and blind individuals. PLoS One 2021; 16:e0257676. [PMID: 34551010] [PMCID: PMC8457492] [DOI: 10.1371/journal.pone.0257676]
Abstract
Multisensory experience is crucial for developing a coherent perception of the world. In this context, vision and audition are essential tools to scaffold spatial and temporal representations, respectively. Since speed encompasses both space and time, investigating this dimension in blindness allows us to deepen our understanding of the relationship between sensory modalities and the two representation domains. In the present study, we hypothesized that visual deprivation influences the use of the spatial and temporal cues underlying acoustic speed perception. To this end, ten early blind and ten blindfolded sighted participants performed a speed discrimination task in which spatial, temporal, or both cues were available to infer the velocity of moving sounds. The results indicated that both sighted and early blind participants preferentially relied on temporal cues to determine stimulus speed, following an assumption that identified sounds of shorter duration as faster. However, in some cases, this temporal assumption produced a misperception of stimulus speed that negatively affected participants' performance. Interestingly, early blind participants were more influenced by this misleading temporal assumption than sighted controls, resulting in a stronger impairment in speed discrimination performance. These findings demonstrate that the absence of visual experience in early life increases the auditory system's preference for the time domain and, consequently, affects the perception of speed through audition.
Affiliation(s)
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genova, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genova, Italy
16
Scotto CR, Moscatelli A, Pfeiffer T, Ernst MO. Visual pursuit biases tactile velocity perception. J Neurophysiol 2021; 126:540-549. [PMID: 34259048] [DOI: 10.1152/jn.00541.2020]
Abstract
During a smooth pursuit eye movement of a target stimulus, a briefly flashed stationary background appears to move in the direction opposite to the eye's motion, an effect known as the Filehne illusion. Similar illusions occur in audition, in the vestibular system, and in touch. Recently, we found that the movement of a surface perceived from tactile slip was biased if this surface was sensed with the moving hand. The analogy between these two illusions suggests similar mechanisms of motion processing between vision and touch. In the present study, we further assessed the interplay between these two sensory channels by investigating a novel paradigm that associated an eye pursuit of a visual target with a tactile motion over the skin of the fingertip. We showed that smooth pursuit eye movements can bias the perceived direction of motion in touch. Similarly to the classical report of the Filehne illusion in vision, a static tactile surface was perceived as moving rightward with a leftward eye pursuit movement, and vice versa. However, this time the direction of surface motion was perceived from touch. The biasing effects of eye pursuit on tactile motion were modulated by the reliability of the tactile and visual stimuli, consistent with a Bayesian model of motion perception. Overall, these results support a modality- and effector-independent process with common representations for motion perception.
NEW & NOTEWORTHY: The study showed that smooth pursuit eye movement produces a bias in tactile motion perception. This phenomenon is modulated by the reliability of the tactile estimate and by the presence of a visual background, in line with the predictions of the Bayesian framework of motion perception. Overall, these results support the hypothesis of shared representations for motion perception.
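The reliability-dependent biasing reported above follows the usual inverse-variance weighting of Bayesian cue-combination models. The sketch below illustrates that weighting with made-up numbers (it is not fitted to the study's data): as the tactile estimate becomes noisier, the combined percept is pulled further toward a conflicting pursuit-related signal.

```python
# Minimal sketch of inverse-variance (Bayesian) cue combination; the numbers
# are illustrative and not fitted to the study's data.
import numpy as np

def combine(mu_a, sigma_a, mu_b, sigma_b):
    """Optimally combine two independent Gaussian velocity estimates."""
    w_a = sigma_b**2 / (sigma_a**2 + sigma_b**2)   # weight grows as the other cue gets noisier
    mu = w_a * mu_a + (1 - w_a) * mu_b
    sigma = np.sqrt((sigma_a**2 * sigma_b**2) / (sigma_a**2 + sigma_b**2))
    return mu, sigma

# Tactile slip suggests 0 cm/s; a pursuit-related signal suggests 2 cm/s.
for tactile_noise in (0.5, 2.0):                   # reliable vs. unreliable touch
    mu, sigma = combine(0.0, tactile_noise, 2.0, 1.0)
    print(f"tactile sigma {tactile_noise}: combined estimate {mu:.2f} +/- {sigma:.2f} cm/s")
```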
Affiliation(s)
- Cécile R Scotto
- Centre de Recherches sur la Cognition et l'Apprentissage, Université de Poitiers, Université François Rabelais de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Alessandro Moscatelli
- Department of Systems Medicine and Centre of Space Bio-Medicine, University of Rome "Tor Vergata", Rome, Italy
- Laboratory of Neuromotor Physiology, Istituto di Ricovero e Cura a Carattere Scientifico Santa Lucia Foundation, Rome, Italy
- Thies Pfeiffer
- Faculty of Technology and Cognitive Interaction Technology-Center of Excellence, Bielefeld University, Bielefeld, Germany
- Marc O Ernst
- Applied Cognitive Systems, Ulm University, Ulm, Germany
17
Adaptive Response Behavior in the Pursuit of Unpredictably Moving Sounds. eNeuro 2021; 8:ENEURO.0556-20.2021. [PMID: 33875456] [PMCID: PMC8116108] [DOI: 10.1523/eneuro.0556-20.2021]
Abstract
Although moving sound-sources abound in natural auditory scenes, it is not clear how the human brain processes auditory motion. Previous studies have indicated that, although ocular localization responses to stationary sounds are quite accurate, ocular smooth pursuit of moving sounds is very poor. We here demonstrate that human subjects faithfully track a sound’s unpredictable movements in the horizontal plane with smooth-pursuit responses of the head. Our analysis revealed that the stimulus–response relation was well described by an under-damped passive, second-order low-pass filter in series with an idiosyncratic, fixed, pure delay. The model contained only two free parameters: the system’s damping coefficient, and its central (resonance) frequency. We found that the latter remained constant at ∼0.6 Hz throughout the experiment for all subjects. Interestingly, the damping coefficient systematically increased with trial number, suggesting the presence of an adaptive mechanism in the auditory pursuit system (APS). This mechanism functions even for unpredictable sound-motion trajectories endowed with fixed, but covert, frequency characteristics in open-loop tracking conditions. We conjecture that the APS optimizes a trade-off between response speed and effort. Taken together, our data support the existence of a pursuit system for auditory head-tracking, which would suggest the presence of a neural representation of a spatial auditory fovea (AF).
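The descriptive model named in the abstract, a second-order low-pass filter in series with a pure delay, can be simulated in a few lines. The parameter values below (a resonance frequency near the reported ~0.6 Hz, plus an assumed damping coefficient and delay) are illustrative rather than the fitted values.

```python
# Sketch of the stated model class: second-order low-pass filter plus pure delay.
# Parameter values are illustrative, not the fitted ones.
import numpy as np
from scipy.signal import TransferFunction, lsim

f0, zeta, delay = 0.6, 0.5, 0.25            # Hz, damping coefficient, seconds (assumed)
w0 = 2 * np.pi * f0
system = TransferFunction([w0**2], [1.0, 2 * zeta * w0, w0**2])

fs = 100
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
target_az = np.cumsum(rng.normal(0, 0.5, t.size))          # unpredictable trajectory (deg)
delayed = np.concatenate([np.zeros(int(delay * fs)), target_az])[: t.size]

_, head_az, _ = lsim(system, U=delayed, T=t)                # simulated head-orientation response
print("head response is a delayed, smoothed copy of the target:",
      np.round(head_az[-3:], 1), "vs", np.round(target_az[-3:], 1))
```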
18
Haywood NR, Undurraga JA, McAlpine D. The influence of envelope shape on the lateralization of amplitude-modulated, low-frequency sound. J Acoust Soc Am 2021; 149:3133. [PMID: 34241105] [DOI: 10.1121/10.0004788]
Abstract
For abruptly gated sound, interaural time difference (ITD) cues at onset carry greater perceptual weight than those following. This research explored how envelope shape influences such carrier ITD weighting. Experiment 1 assessed the perceived lateralization of a tonal binaural beat that transitioned through ITD (diotic envelope, mean carrier frequency of 500 Hz). Listeners' left/right lateralization judgments were compared to those for static-ITD tones. For an 8 Hz sinusoidally amplitude-modulated envelope, ITD cues 24 ms after onset well-predicted reported sidedness. For an equivalent-duration "abrupt" envelope, which was unmodulated besides 20-ms onset/offset ramps, reported sidedness corresponded to ITDs near onset (e.g., 6 ms). However, unlike for sinusoidal amplitude modulation, ITDs toward offset seemingly also influenced perceived sidedness. Experiment 2 adjusted the duration of the offset ramp (25-75 ms) and found evidence for such offset weighting only for the most abrupt ramp tested. In experiment 3, an ITD was imposed on a brief segment of otherwise diotic filtered noise. Listeners discriminated right- from left-leading ITDs. In sinusoidal amplitude modulation, thresholds were lowest when the ITD segment occurred during rising amplitude. For the abrupt envelope, the lowest thresholds were observed when the segment occurred at either onset or offset. These experiments demonstrate the influence of envelope profile on carrier ITD sensitivity.
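A stimulus of the general kind described, a 500-Hz binaural beat whose interaural phase sweeps through the presentation, gated by a diotic 8-Hz sinusoidal amplitude modulation, can be sketched as below. The beat rate and other values are assumed for illustration and are not the paper's exact parameters.

```python
# Illustrative stimulus sketch (assumed parameters, not the paper's exact values):
# a 500-Hz binaural beat with a diotic 8-Hz sinusoidal amplitude modulation.
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
beat_rate = 1.0                                     # Hz offset between the two carriers
left = np.sin(2 * np.pi * (500 - beat_rate / 2) * t)
right = np.sin(2 * np.pi * (500 + beat_rate / 2) * t)

envelope = 0.5 * (1 - np.cos(2 * np.pi * 8 * t))    # 8-Hz SAM, identical at both ears
stimulus = np.stack([left * envelope, right * envelope], axis=1)

# The interaural phase difference grows linearly, IPD(t) = 2*pi*beat_rate*t, so the
# equivalent ITD sweeps through a full carrier cycle (wrapping at +/-1 ms for 500 Hz).
itd_ms = (beat_rate * t / 500) * 1e3
print(f"stimulus shape {stimulus.shape}; unwrapped ITD reaches {itd_ms[-1]:.1f} ms")
```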
Affiliation(s)
- Nicholas R Haywood
- Department of Linguistics, Faculty of Medicine, Health and Human Sciences, Macquarie Hearing, Macquarie University, Sydney, New South Wales 2109, Australia
- Jaime A Undurraga
- Department of Linguistics, Faculty of Medicine, Health and Human Sciences, Macquarie Hearing, Macquarie University, Sydney, New South Wales 2109, Australia
- David McAlpine
- Department of Linguistics, Faculty of Medicine, Health and Human Sciences, Macquarie Hearing, Macquarie University, Sydney, New South Wales 2109, Australia
19
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Brain oscillations evoked by sound motion. Brain Res 2020; 1752:147232. [PMID: 33385379] [DOI: 10.1016/j.brainres.2020.147232]
Abstract
The present study investigates the event-related oscillations underlying the motion-onset response (MOR) evoked by sounds moving at different velocities. EEG was recorded for stationary sounds and for three patterns of sound motion produced by changes in interaural time differences. We explored the effect of motion velocity on the MOR potential, and also on the event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) calculated from the time-frequency decomposition of EEG signals. The phase coherence of slow oscillations increased with an increase in motion velocity similarly to the magnitude of cN1 and cP2 components of the MOR response. The delta-to-alpha inter-trial spectral power remained at the same level up to, but not including, the highest velocity, suggesting that gradual spatial changes within the sound did not induce non-coherent activity. Conversely, the abrupt sound displacement induced theta-alpha oscillations which had low phase consistency. The findings suggest that the MOR potential could be mainly generated by the phase resetting of slow oscillations, and the degree of phase coherence may be considered as a neurophysiological indicator of sound motion processing.
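For readers unfamiliar with the inter-trial phase coherence measure, the sketch below computes ITC at a single frequency from a complex Morlet wavelet decomposition of simulated trials. The decomposition parameters are assumed and do not reproduce the study's analysis.

```python
# Illustrative ITC computation (assumed wavelet parameters, not the study's analysis).
import numpy as np

fs, n_trials, n_samples = 250, 60, 500
t = np.arange(n_samples) / fs
rng = np.random.default_rng(4)

# Simulated EEG: a 4-Hz component that is phase-locked in roughly half of the trials
trials = np.array([
    np.sin(2 * np.pi * 4 * t + (0.0 if locked else rng.uniform(0, 2 * np.pi)))
    for locked in rng.uniform(0, 1, n_trials) > 0.5
]) + rng.normal(0, 1, (n_trials, n_samples))

freq, n_cycles = 4.0, 5
wt = np.arange(-1, 1, 1 / fs)
gauss = np.exp(-wt**2 * (2 * np.pi * freq / n_cycles) ** 2 / 2)
wavelet = np.exp(2j * np.pi * freq * wt) * gauss            # complex Morlet wavelet

analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))  # 1 = perfect phase locking
print(f"ITC at {freq:.0f} Hz: mean {itc.mean():.2f} (0 = random phase, 1 = fully locked)")
```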
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
- Nikolay I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, 199034 Saint Petersburg, Russia.
20
St George BV, Cone B. Perceptual and Electrophysiological Correlates of Fixed Versus Moving Sound Source Lateralization. J Speech Lang Hear Res 2020; 63:3176-3194. [PMID: 32812839] [DOI: 10.1044/2020_jslhr-19-00289]
Abstract
Purpose: The aims of the study were (a) to evaluate the effects of the systematically varied factors of stimulus duration, interaural level difference (ILD), and direction on perceptual and electrophysiological metrics of lateralization for fixed versus moving targets and (b) to evaluate the hemispheric activity underlying perception of fixed versus moving auditory targets. Method: Twelve normal-hearing, young adult listeners were evaluated using perceptual and P300 tests of lateralization. Both perceptual and P300 tests utilized stimuli that varied in type (fixed and moving), direction (right and left), duration (100 and 500 ms), and magnitude of ILD (9 and 18 dB). Listeners provided laterality judgments and stimulus-type discrimination (fixed vs. moving) judgments for all combinations of acoustic factors. During P300 recordings, listeners discriminated between left- versus right-directed targets as the other acoustic parameters were varied. Results: ILD magnitude and stimulus type had statistically significant effects on laterality ratings, with larger-magnitude ILDs and the fixed type resulting in greater lateralization. Discriminability between fixed versus moving targets was dependent on stimulus duration and ILD magnitude. ILD magnitude was a significant predictor of P300 amplitude. There was a statistically significant inverse relationship between the perceived velocity of targets and P300 latency. Lateralized targets evoked contralateral hemispheric P300 activity. Moreover, a right-hemisphere enhancement was observed for fixed-type lateralized deviant stimuli. Conclusions: Perceptual and P300 findings indicate that lateralization of auditory movement is highly dependent on temporal integration. Both the behavioral and physiological findings of this study suggest that moving auditory targets with ecologically valid velocities are processed by the central auditory nervous system within a window of temporal integration that is greater than that for fixed auditory targets. Furthermore, these findings lend support to a left hemispatial perceptual bias and right hemispheric dominance for spatial listening.
Affiliation(s)
- Barbara Cone
- Department of Speech, Language, and Hearing Sciences, The University of Arizona, Tucson
21
Jenny C, Reuter C. Usability of Individualized Head-Related Transfer Functions in Virtual Reality: Empirical Study With Perceptual Attributes in Sagittal Plane Sound Localization. JMIR Serious Games 2020; 8:e17576. [PMID: 32897232] [PMCID: PMC7509635] [DOI: 10.2196/17576]
Abstract
BACKGROUND: To present virtual sound sources spatially via headphones, head-related transfer functions (HRTFs) can be applied to audio signals. In this so-called binaural virtual acoustics, spatial perception may be degraded if the HRTFs deviate from the true HRTFs of the listener. OBJECTIVE: In this study, participants wearing virtual reality (VR) headsets performed a listening test on the 3D audio perception of virtual audiovisual scenes, enabling us to investigate the necessity and influence of HRTF individualization. Two hypotheses were investigated: first, that general HRTFs lead to limitations of 3D audio perception in VR, and second, that the localization model for stationary localization errors is transferable to nonindividualized HRTFs in more complex environments such as VR. METHODS: For the evaluation, 39 subjects rated individualized and nonindividualized HRTFs in an audiovisual virtual scene on the basis of 5 perceptual qualities: localizability, front-back position, externalization, tone color, and realism. The VR listening experiment consisted of 2 tests: in the first test, subjects evaluated their own HRTF and the general HRTF from the Massachusetts Institute of Technology Knowles Electronics Manikin for Acoustic Research database, and in the second test, their own and 2 other nonindividualized HRTFs from the Acoustics Research Institute HRTF database. For the experiment, 2 subject-specific, nonindividualized HRTFs with a minimal and a maximal localization error deviation were selected according to the localization model in sagittal planes. RESULTS: With the Wilcoxon signed-rank test for the first test, analysis of variance for the second test, and a sample size of 78, the results were significant for all perceptual qualities, except for the front-back position between the listener's own and the minimally deviant nonindividualized HRTF (P=.06). CONCLUSIONS: Both hypotheses were accepted. Sounds filtered by individualized HRTFs are considered easier to localize, easier to externalize, more natural in timbre, and thus more realistic compared to sounds filtered by nonindividualized HRTFs.
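As background for the HRTF filtering discussed above, the sketch below shows generic binaural rendering by convolving a mono signal with a left/right head-related impulse response pair. The two impulse responses here are crude placeholders standing in for measured filters from databases such as those named in the abstract.

```python
# Generic binaural-rendering sketch; the HRIRs below are crude placeholders,
# not measured filters from any HRTF database.
import numpy as np

fs = 44100
mono = np.random.default_rng(5).normal(size=fs // 2)        # half a second of noise

# Placeholder HRIRs for a source on the left: right ear delayed and attenuated
hrir_left = np.zeros(256); hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[30] = 0.5             # ~0.68-ms delay, -6 dB

binaural = np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=1)
print("binaural buffer shape:", binaural.shape)               # samples x 2 ears
```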
Affiliation(s)
- Claudia Jenny
- Musicological Department, University of Vienna, Vienna, Austria
22
Rummukainen OS, Schlecht SJ, Habets EAP. No dynamic visual capture for self-translation minimum audible angle. J Acoust Soc Am 2020; 148:EL77. [PMID: 32752782] [DOI: 10.1121/10.0001588]
Abstract
Auditory localization is affected by visual cues. The study at hand focuses on a scenario in which dynamic sound localization cues are induced by lateral listener self-translation in relation to a stationary sound source, with matching or mismatching dynamic visual cues. The audio-only self-translation minimum audible angle (ST-MAA) has previously been shown to be 3.3° in the horizontal plane in front of the listener. The present study found that the addition of visual cues has no significant effect on the ST-MAA.
Affiliation(s)
- Olli S Rummukainen
- International Audio Laboratories Erlangen, A Joint Institution of the Friedrich-Alexander-University Erlangen-Nürnberg and Fraunhofer Institute for Integrated Circuits, Erlangen, Germany
- Sebastian J Schlecht
- Department of Signal Processing and Acoustics and Department of Media, Aalto University, Espoo, Finland
- Emanuël A P Habets
- International Audio Laboratories Erlangen, A Joint Institution of the Friedrich-Alexander-University Erlangen-Nürnberg and Fraunhofer Institute for Integrated Circuits, Erlangen, Germany
23
Honda S, Ishikawa Y, Konno R, Imai E, Nomiyama N, Sakurada K, Koumura T, Kondo HM, Furukawa S, Fujii S, Nakatani M. Proximal Binaural Sound Can Induce Subjective Frisson. Front Psychol 2020; 11:316. [PMID: 32194479] [PMCID: PMC7062710] [DOI: 10.3389/fpsyg.2020.00316]
Abstract
Auditory frisson is the experience of a feeling of cold or shivering related to sound in the absence of a physical cold stimulus. Multiple examples of frisson-inducing sounds have been reported, but the mechanism of auditory frisson remains elusive. Typical frisson-inducing sounds may contain a looming effect, in which a sound appears to approach the listener's peripersonal space. Previous studies on sound in peripersonal space have provided objective measurements of sound-induced effects, but few have investigated the subjective experience of frisson-inducing sounds. Here we explored whether it is possible to produce subjective feelings of frisson by moving a noise stimulus (white noise, rolling-beads noise, or frictional noise produced by rubbing a plastic bag) around a listener's head. Our results demonstrated that sound-induced frisson is experienced more strongly when auditory stimuli are rotated around the head (binaural moving sounds) than when they are presented without rotation (monaural static sounds), regardless of the source of the noise. Pearson's correlation analysis showed that several acoustic features of the auditory stimuli, such as the variance of the interaural level difference (ILD), loudness, and sharpness, were correlated with the magnitude of subjective frisson. We also observed that subjective feelings of frisson increased for a moving musical sound compared with a static musical sound.
Affiliation(s)
- Shiori Honda
- Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Yuri Ishikawa
- Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Rei Konno
- Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Eiko Imai
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Natsumi Nomiyama
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Kazuki Sakurada
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Masashi Nakatani
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Precursory Research for Embryonic Science and Technology, Japan Science and Technology Agency (JST PRESTO), Saitama, Japan
24
Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020; 10:4562. [PMID: 32165690] [PMCID: PMC7067813] [DOI: 10.1038/s41598-020-61332-4]
Abstract
Adaptation to systematic visual distortions is well-documented but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (very short duration stimulus not permitting active exploration) and dynamic conditions (continuous stimulus allowing participants time to freely move their heads or remain still). We analyze head movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition with static sound sources displaying apparent movement. This effect is reduced after a short training period and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions where motor patterns are more restricted. Strategies become less exploratory and more direct with training. Results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
Collapse
Affiliation(s)
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina.
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina.
| | - Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
| | - L Guillermo Gilberto
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
| | - Valentín Lunati
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
| | - M Virginia Barrios
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
| |
Collapse
|
25
|
Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020; 123:936-944. [PMID: 31940239 DOI: 10.1152/jn.00307.2019] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings, indicating its significance in tasks requiring self-orientation.NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
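A standard benchmark for the kind of multisensory gain reported above is maximum-likelihood (inverse-variance) cue combination, in which the predicted bimodal threshold satisfies 1/T_av^2 = 1/T_a^2 + 1/T_v^2. The sketch below applies that rule to the unisensory numbers quoted in the abstract; it is an assumed benchmark for illustration, not necessarily the model the authors fitted.

```python
# MLE cue-combination benchmark applied to the thresholds quoted above.
import math

def mle_combined_threshold(t_a, t_v):
    """Predicted bimodal threshold under inverse-variance combination."""
    return 1.0 / math.sqrt(1.0 / t_a ** 2 + 1.0 / t_v ** 2)

# (frequency Hz, auditory-only, vestibular-only, measured bimodal) in deg/s.
for freq, t_aud, t_vest, measured in [(0.1, 0.54, 2.00, 0.39),
                                      (1.0, 2.42, 0.75, 0.95)]:
    pred = mle_combined_threshold(t_aud, t_vest)
    print(f"{freq} Hz: MLE prediction {pred:.2f} deg/s vs measured {measured} deg/s")
```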
Collapse
Affiliation(s)
- Corey S Shayman
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; School of Medicine, University of Utah, Salt Lake City, Utah
| | - Robert J Peterka
- Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
| | - Frederick J Gallun
- National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon; Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
| | - Yonghee Oh
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
| | - Nai-Yuan N Chang
- Department of Preventive and Restorative Dental Sciences-Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
| | - Timothy E Hullar
- Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
| |
Collapse
|
26
|
Auditory motion perception emerges from successive sound localizations integrated over time. Sci Rep 2019; 9:16437. [PMID: 31712688 PMCID: PMC6848124 DOI: 10.1038/s41598-019-52742-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2019] [Accepted: 10/11/2019] [Indexed: 11/18/2022] Open
Abstract
Humans rely on auditory information to estimate the path of moving sound sources. But unlike in vision, the existence of motion-sensitive mechanisms in audition is still open to debate. Psychophysical studies indicate that auditory motion perception emerges from successive localization, but existing models, which do not account for temporal integration, fail to predict experimental results. We propose a new model that tracks motion from successive localization snapshots integrated over time. This model is derived from psychophysical experiments on the upper limit for circular auditory motion perception (UL), defined as the speed above which humans no longer identify the direction of sounds spinning around them. Our model predicts ULs measured with different stimuli using solely static localization cues. The temporal integration blurs these localization cues, rendering them unreliable at high speeds, which results in the UL. Our findings indicate that auditory motion perception does not require motion-sensitive mechanisms.
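To make the snapshot-plus-integration idea concrete, the toy simulation below averages noisy instantaneous localization estimates within a sliding window and checks how often the direction of the integrated trajectory matches the true spin direction; at high speeds the window spans a wide arc and direction identification collapses toward chance. This is my own schematic implementation with assumed parameter values (window length, localization noise, speeds), not the published model.

```python
# Schematic snapshot-integration simulation of an upper limit for rotation.
import numpy as np

def direction_identification_rate(speed_deg_s, window_s=0.3, loc_noise_deg=5.0,
                                  duration_s=2.0, fs=200, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(0, duration_s, 1 / fs)
    true_az = speed_deg_s * t                                   # source spins at constant speed
    noisy_az = true_az + rng.normal(0, loc_noise_deg, t.size)   # static-localization noise
    phasors = np.exp(1j * np.deg2rad(noisy_az))                 # handle azimuth wrap-around
    win = max(1, int(window_s * fs))
    integrated = np.convolve(phasors, np.ones(win) / win, mode="valid")
    est_az = np.unwrap(np.angle(integrated))
    correct = np.sign(np.diff(est_az)) == np.sign(speed_deg_s)
    return correct.mean()

for speed in (90, 360, 720, 1440):                              # deg/s
    print(f"{speed:5d} deg/s -> direction correct {direction_identification_rate(speed):.2f}")
```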
Collapse
|
27
|
Zuk NJ, Delgutte B. Neural coding and perception of auditory motion direction based on interaural time differences. J Neurophysiol 2019; 122:1821-1842. [PMID: 31461376 DOI: 10.1152/jn.00081.2019] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
While motion is important for parsing a complex auditory scene into perceptual objects, how it is encoded in the auditory system is unclear. Perceptual studies suggest that the ability to identify the direction of motion is limited by the duration of the moving sound, yet we can detect changes in interaural differences at even shorter durations. To understand the source of these distinct temporal limits, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbits in response to noise stimuli containing a brief segment with linearly time-varying interaural time difference ("ITD sweep") temporally embedded in interaurally uncorrelated noise. We also tested the ability of human listeners to either detect the ITD sweeps or identify the motion direction. Using a point-process model to separate the contributions of stimulus dependence and spiking history to single-neuron responses, we found that the neurons respond primarily by following the instantaneous ITD rather than exhibiting true direction selectivity. Furthermore, using an optimal classifier to decode the single-neuron responses, we found that neural threshold durations of ITD sweeps for both direction identification and detection overlapped with human threshold durations even though the average response of the neurons could track the instantaneous ITD beyond psychophysical limits. Our results suggest that the IC does not explicitly encode motion direction, but internal neural noise may limit the speed at which we can identify the direction of motion. NEW & NOTEWORTHY: Recognizing motion and identifying an object's trajectory are important for parsing a complex auditory scene, but how we do so is unclear. We show that neurons in the auditory midbrain do not exhibit direction selectivity as found in the visual system but instead follow the trajectory of the motion in their temporal firing patterns. Our results suggest that the inherent variability in neural firings may limit our ability to identify motion direction at short durations.
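The stimulus class described above, an "ITD sweep" embedded in uncorrelated noise, can be sketched as follows. The function name, durations, ITD range, and the use of linear interpolation for fractional delays are assumptions made for this illustration, not the authors' stimulus code.

```python
# Sketch: broadband noise with a linearly varying ITD segment embedded in
# interaurally uncorrelated noise.
import numpy as np

def itd_sweep_stimulus(fs=48000, total_s=1.0, sweep_s=0.064,
                       itd_start_us=-500, itd_end_us=500, rng=None):
    rng = rng or np.random.default_rng(0)
    n_total, n_sweep = int(fs * total_s), int(fs * sweep_s)
    start = (n_total - n_sweep) // 2
    # Uncorrelated noise in both ears outside the sweep segment.
    left = rng.standard_normal(n_total)
    right = rng.standard_normal(n_total)
    # Inside the sweep, the right ear is a time-shifted copy of the left,
    # with the shift (ITD) varying linearly from itd_start_us to itd_end_us.
    t_idx = np.arange(n_sweep)
    itd_samples = np.linspace(itd_start_us, itd_end_us, n_sweep) * 1e-6 * fs
    src = left[start:start + n_sweep]
    right[start:start + n_sweep] = np.interp(t_idx - itd_samples, t_idx, src)
    return left, right

left, right = itd_sweep_stimulus()
print(left.shape, right.shape)
```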
Collapse
Affiliation(s)
- Nathaniel J Zuk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
| | - Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
28
|
Moua K, Kan A, Jones HG, Misurelli SM, Litovsky RY. Auditory motion tracking ability of adults with normal hearing and with bilateral cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2498. [PMID: 31046310 PMCID: PMC6491347 DOI: 10.1121/1.5094775] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 01/31/2019] [Accepted: 03/04/2019] [Indexed: 06/09/2023]
Abstract
Adults with bilateral cochlear implants (BiCIs) receive benefits in localizing stationary sounds when listening with two implants compared with one; however, sound localization ability is significantly poorer when compared to normal hearing (NH) listeners. Little is known about localizing sound sources in motion, which occurs in typical everyday listening situations. The authors considered the possibility that sound motion may improve sound localization in BiCI users by providing information from multiple spatial locations. Alternatively, the ability to compare multiple spatial locations may be compromised in BiCI users due to degradation of binaural cues, resulting in poorer performance relative to NH adults. In this study, the authors assessed listeners' abilities to distinguish between sounds that appear to be moving vs stationary and to track the angular range and direction of moving sounds. Stimuli were bandpass-filtered (150-6000 Hz) noise bursts of different durations, panned over an array of loudspeakers. Overall, the results showed that BiCI users were poorer than NH adults in (i) distinguishing between a moving vs stationary sound, (ii) correctly identifying the direction of movement, and (iii) tracking the range of movement. These findings suggest that conventional cochlear implant processors are not able to fully provide the cues necessary for perceiving auditory motion correctly.
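The stimulus generation described above, a band-pass noise burst panned across a loudspeaker arc, can be approximated as in the sketch below. The array geometry, trajectory, and simple pairwise linear panning law are assumptions for illustration; the study's exact panning method may differ.

```python
# Sketch: 150-6000 Hz noise burst panned across an assumed loudspeaker arc.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
speakers_deg = np.linspace(-60, 60, 9)          # assumed loudspeaker azimuths (deg)

# Band-pass noise burst (150-6000 Hz), 1 s long.
sos = butter(4, [150, 6000], btype="bandpass", fs=fs, output="sos")
noise = sosfiltfilt(sos, np.random.default_rng(0).standard_normal(fs))

# Angular trajectory: sweep from -40 deg to +40 deg over the burst duration.
traj = np.linspace(-40, 40, noise.size)

# Pairwise linear panning between the two speakers bracketing each sample.
gains = np.zeros((noise.size, speakers_deg.size))
for n, az in enumerate(traj):
    j = int(np.clip(np.searchsorted(speakers_deg, az), 1, speakers_deg.size - 1))
    lo, hi = speakers_deg[j - 1], speakers_deg[j]
    frac = (az - lo) / (hi - lo)
    gains[n, j - 1], gains[n, j] = 1 - frac, frac

speaker_signals = gains * noise[:, None]        # one output channel per loudspeaker
print(speaker_signals.shape)
```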
Collapse
Affiliation(s)
- Keng Moua
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
| | - Alan Kan
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
| | - Heath G Jones
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
| | - Sara M Misurelli
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
| | - Ruth Y Litovsky
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53706, USA
| |
Collapse
|
29
|
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. [PMID: 30651333 DOI: 10.1523/jneurosci.2289-18.2018] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 12/20/2018] [Accepted: 12/21/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill to efficiently interact with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis of motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location. SIGNIFICANCE STATEMENT: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study, therefore, sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
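For readers unfamiliar with the cross-condition decoding logic mentioned above, the sketch below trains a linear classifier on patterns from one condition (motion direction) and tests it on another (static location) using entirely synthetic voxel data. It is a generic MVPA illustration with made-up signal parameters, not the authors' analysis pipeline.

```python
# Generic cross-condition decoding illustration on synthetic "voxel" patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 80
shared_axis = rng.standard_normal(n_voxels)      # assumed shared left/right pattern axis

def make_patterns(labels, signal_scale):
    """Synthetic patterns: label (+1/-1) times a shared axis plus noise."""
    return (labels[:, None] * shared_axis * signal_scale
            + rng.standard_normal((labels.size, n_voxels)))

labels_motion = rng.choice([-1, 1], n_trials)    # leftward vs rightward motion
labels_static = rng.choice([-1, 1], n_trials)    # left vs right static location
X_motion = make_patterns(labels_motion, signal_scale=0.5)
X_static = make_patterns(labels_static, signal_scale=0.3)

clf = LinearSVC(max_iter=5000).fit(X_motion, labels_motion)   # train on motion
print("cross-condition accuracy:", clf.score(X_static, labels_static))
```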
Collapse
|
30
|
Joris PX. Neural binaural sensitivity at high sound speeds: Single cell responses in cat midbrain to fast-changing interaural time differences of broadband sounds. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:EL45. [PMID: 30710960 PMCID: PMC7112706 DOI: 10.1121/1.5087524] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Revised: 12/13/2018] [Accepted: 12/18/2018] [Indexed: 06/09/2023]
Abstract
Relative motion between the body and the outside world is a rich source of information. Neural selectivity to motion is well-established in several sensory systems, but is controversial in hearing. This study examines neural sensitivity to changes in the instantaneous interaural time difference of sounds at the two ears. Midbrain neurons track such changes up to extremely high speeds, show only a coarse dependence of firing rate on speed, and lack directional selectivity. These results argue against the presence of selectivity to auditory motion at the level of the midbrain, but reveal an acuity which enables coding of fast-fluctuating binaural cues in realistic sound environments.
Collapse
Affiliation(s)
- Philip X Joris
- Laboratory of Auditory Neurophysiology, KU Leuven, Herestraat 49, B-3000 Leuven, Belgium
| |
Collapse
|
31
|
Campos J, Ramkhalawansingh R, Pichora-Fuller MK. Hearing, self-motion perception, mobility, and aging. Hear Res 2018; 369:42-55. [DOI: 10.1016/j.heares.2018.03.025] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Revised: 02/20/2018] [Accepted: 03/29/2018] [Indexed: 11/30/2022]
|
32
|
Rummukainen OS, Schlecht SJ, Habets EAP. Self-translation induced minimum audible angle. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:EL340. [PMID: 30404470 DOI: 10.1121/1.5064957] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2018] [Accepted: 10/02/2018] [Indexed: 06/08/2023]
Abstract
The minimum audible angle has been studied with a stationary listener and a stationary or a moving sound source. The study at hand focuses on a scenario where the angle is induced by listener self-translation in relation to a stationary sound source. First, the classic stationary listener minimum audible angle experiment is replicated using a headphone-based reproduction system. This experiment confirms that the reproduction system is able to produce a localization cue resolution comparable to loudspeaker reproduction. Next, the self-translation minimum audible angle is shown to be 3.3° in the horizontal plane in front of the listener.
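A rough geometric reading of the self-translation result: if a stationary source sits straight ahead at distance r and the listener translates sideways by d, the induced angle is theta = atan(d / r), so the 3.3° threshold corresponds to a distance-dependent amount of self-motion. The configuration assumed in this back-of-the-envelope sketch (source straight ahead, purely lateral translation) is my simplification, not necessarily the experimental geometry.

```python
# Lateral translation needed to induce a given angular change for a frontal source.
import math

def translation_for_angle(angle_deg, source_distance_m):
    """Self-translation (m) that induces angle_deg for a source straight ahead."""
    return source_distance_m * math.tan(math.radians(angle_deg))

for r in (0.5, 1.0, 2.0, 4.0):
    d = translation_for_angle(3.3, r)      # 3.3 deg threshold from the abstract
    print(f"source at {r} m -> detectable after {d * 100:.1f} cm of translation")
```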
Collapse
|
33
|
Lundbeck M, Hartog L, Grimm G, Hohmann V, Bramsløw L, Neher T. Influence of Multi-microphone Signal Enhancement Algorithms on the Acoustics and Detectability of Angular and Radial Source Movements. Trends Hear 2018; 22:2331216518779719. [PMID: 29900799 PMCID: PMC6024528 DOI: 10.1177/2331216518779719] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Hearing-impaired listeners are known to have difficulties not only with understanding speech in noise but also with judging source distance and movement, and these deficits are related to perceived handicap. It is possible that the perception of spatially dynamic sounds can be improved with hearing aids (HAs), but so far this has not been investigated. In a previous study, older hearing-impaired listeners showed poorer detectability for virtual left-right (angular) and near-far (radial) source movements due to lateral interfering sounds and reverberation, respectively. In the current study, potential ways of improving these deficits with HAs were explored. Using stimuli very similar to before, detailed acoustic analyses were carried out to examine the influence of different HA algorithms for suppressing noise and reverberation on the acoustic cues previously shown to be associated with source movement detectability. For an algorithm that combined unilateral directional microphones with binaural coherence-based noise reduction and for a bilateral beamformer with binaural cue preservation, movement-induced changes in spectral coloration, signal-to-noise ratio, and direct-to-reverberant energy ratio were greater compared with no HA processing. To evaluate these two algorithms perceptually, aided measurements of angular and radial source movement detectability were performed with 20 older hearing-impaired listeners. The analyses showed that, in the presence of concurrent interfering sounds and reverberation, the bilateral beamformer could restore source movement detectability in both spatial dimensions, whereas the other algorithm only improved detectability in the near-far dimension. Together, these results provide a basis for improving the detectability of spatially dynamic sounds with HAs.
Collapse
Affiliation(s)
- Micha Lundbeck
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
| | - Laura Hartog
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
| | - Giso Grimm
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
| | - Volker Hohmann
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; HörTech gGmbH, Oldenburg, Germany
| | - Lars Bramsløw
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
| | - Tobias Neher
- Medizinische Physik and Cluster of Excellence "Hearing4all", Oldenburg University, Germany; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
34
|
Zuk N, Delgutte B. Neural coding of time-varying interaural time differences and time-varying amplitude in the inferior colliculus. J Neurophysiol 2017; 118:544-563. [PMID: 28381487 DOI: 10.1152/jn.00797.2016] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2016] [Revised: 03/29/2017] [Accepted: 03/31/2017] [Indexed: 11/22/2022] Open
Abstract
Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of "binaural sluggishness," the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY: Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct.
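The two stimulus classes compared above can be sketched as follows: noise with a sinusoidally modulated amplitude envelope (SAM) versus noise whose ITD varies sinusoidally at the same modulation frequency. Parameters and the linear-interpolation fractional delay are assumptions for this illustration, not the authors' stimulus code.

```python
# Sketch: SAM noise vs. noise with a sinusoidally time-varying ITD.
import numpy as np

def sam_noise(fs=48000, dur=1.0, fm=16.0, depth=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(int(fs * dur)) / fs
    carrier = rng.standard_normal(t.size)
    env = 1 + depth * np.sin(2 * np.pi * fm * t)
    return carrier * env, carrier * env                   # identical envelope in both ears

def dynamic_itd_noise(fs=48000, dur=1.0, fm=16.0, itd_peak_us=400, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(int(fs * dur)) / fs
    left = rng.standard_normal(t.size)
    itd_samp = itd_peak_us * 1e-6 * fs * np.sin(2 * np.pi * fm * t)
    right = np.interp(np.arange(t.size) - itd_samp, np.arange(t.size), left)
    return left, right

l, r = dynamic_itd_noise()
print(l.shape, r.shape)
```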
Collapse
Affiliation(s)
- Nathaniel Zuk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Speech and Hearing Bioscience and Technology Program, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts
| | - Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Speech and Hearing Bioscience and Technology Program, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
35
|
Shaikh D, Manoonpong P. An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity. Front Neurorobot 2017; 11:11. [PMID: 28337137 PMCID: PMC5343069 DOI: 10.3389/fnbot.2017.00011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2016] [Accepted: 02/20/2017] [Indexed: 11/14/2022] Open
Abstract
Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by objects in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step, and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.
Collapse
Affiliation(s)
- Danish Shaikh
- Embodied AI and Neurorobotics Laboratory, Centre for BioRobotics, Maersk Mc-Kinney Moeller Institute, University of Southern Denmark, Odense, Denmark
| | | |
Collapse
|
36
|
Hendrickx E, Stitt P, Messonnier JC, Lyzwa JM, Katz BF, de Boishéraud C. Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:2011. [PMID: 28372109 DOI: 10.1121/1.4978612] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Binaural reproduction aims at recreating a realistic audio scene at the ears of the listener using headphones. In the real acoustic world, sound sources tend to be externalized (that is, perceived to be emanating from a source out in the world) rather than internalized (that is, perceived to be emanating from inside the head). Unfortunately, several studies report a collapse of externalization, especially with frontal and rear virtual sources, when listening to binaural content using non-individualized Head-Related Transfer Functions (HRTFs). The present study examines whether or not head movements coupled with a head tracking device can compensate for this collapse. For each presentation, a speech stimulus was presented over headphones at different azimuths, using several intermixed sets of non-individualized HRTFs for the binaural rendering. The head tracker could either be active or inactive, and the subjects could either be asked to rotate their heads or to keep them as stationary as possible. After each presentation, subjects reported to what extent the stimulus had been externalized. In contrast to several previous studies, results showed that head movements can substantially enhance externalization, especially for frontal and rear sources, and that externalization can persist once the subject has stopped moving his/her head.
Collapse
Affiliation(s)
- Etienne Hendrickx
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
| | - Peter Stitt
- Audio Acoustics Group, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur, CNRS, Université Paris-Saclay, 91405 Orsay, France
| | - Jean-Christophe Messonnier
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
| | - Jean-Marc Lyzwa
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
| | - Brian Fg Katz
- Sorbonne Universités, Université Pierre et Marie Curie Univ Paris 06, CNRS, Institut d'Alembert, 75005 Paris, France
| | - Catherine de Boishéraud
- Conservatoire National Supérieur de Musique et de Danse de Paris, 209, Avenue Jean-Jaurès, 75019 Paris, France
| |
Collapse
|
37
|
Berger CC, Ehrsson HH. Auditory Motion Elicits a Visual Motion Aftereffect. Front Neurosci 2016; 10:559. [PMID: 27994538 PMCID: PMC5136551 DOI: 10.3389/fnins.2016.00559] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2016] [Accepted: 11/21/2016] [Indexed: 11/18/2022] Open
Abstract
The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.
Collapse
Affiliation(s)
| | - H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
38
|
Neuhoff JG. Looming sounds are perceived as faster than receding sounds. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2016; 1:15. [PMID: 28180166 PMCID: PMC5256440 DOI: 10.1186/s41235-016-0017-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2016] [Accepted: 09/23/2016] [Indexed: 11/17/2022]
Abstract
Each year thousands of people are killed by looming motor vehicles. Throughout our evolutionary history looming objects have posed a threat to survival and perceptual systems have evolved unique solutions to confront these environmental challenges. Vision provides an accurate representation of time-to-contact with a looming object and usually allows us to interact successfully with the object if required. However, audition functions as a warning system and yields an anticipatory representation of arrival time, indicating that the object has arrived when it is still some distance away. The bias provides a temporal margin of safety that allows more time to initiate defensive actions. In two studies this bias was shown to influence the perception of the speed of looming and receding sound sources. Listeners heard looming and receding sound sources and judged how fast they were moving. Listeners perceived the speed of looming sounds as faster than that of equivalent receding sounds. Listeners also showed better discrimination of the speed of looming sounds than receding sounds. Finally, close sounds were perceived as faster than distant sounds. The results suggest a prioritization of the perception of the speed of looming and receding sounds that mirrors the level of threat posed by moving objects in the environment.
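Looming and receding stimuli of the kind described above are often approximated in the lab by intensity trajectories that follow the inverse-square law as a virtual source approaches or recedes at constant speed. The sketch below computes such level trajectories; the distances, speeds, and synthesis shortcut are assumptions for illustration and may differ from the study's actual stimuli.

```python
# Level trajectories for approaching (looming) vs. receding virtual sources.
import numpy as np

def level_trajectory_db(start_dist_m, speed_m_s, dur_s=1.0, fs=1000):
    """Level change (dB re start) for a source moving at constant radial speed.
    Negative speed = approaching (looming), positive = receding."""
    t = np.arange(int(fs * dur_s)) / fs
    dist = np.maximum(start_dist_m + speed_m_s * t, 0.1)   # keep distance positive
    return 20 * np.log10(start_dist_m / dist)              # inverse-square law in dB

looming = level_trajectory_db(5.0, -4.0)      # approaches from 5 m to 1 m
receding = level_trajectory_db(1.0, 4.0)      # recedes from 1 m to 5 m
print(f"looming level change: {looming[-1]:+.1f} dB, receding: {receding[-1]:+.1f} dB")
```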
Collapse
Affiliation(s)
- John G Neuhoff
- Department of Psychology, The College of Wooster, Wooster, OH 44691 USA
| |
Collapse
|
39
|
A “looming bias” in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization. Atten Percept Psychophys 2016; 79:352-362. [DOI: 10.3758/s13414-016-1201-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
40
|
Locke SM, Leung J, Carlile S. Sensitivity to Auditory Velocity Contrast. Sci Rep 2016; 6:27725. [PMID: 27291488 PMCID: PMC4904411 DOI: 10.1038/srep27725] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2015] [Accepted: 05/23/2016] [Indexed: 02/02/2023] Open
Abstract
A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-s inter-stimulus interval (ISI). We found that thresholds were threefold larger for velocity increases in the instantaneous velocity-change condition than for instantaneous velocity decreases or for the delayed velocity-transition condition. This result cannot be explained by the current static "snapshot" model of auditory motion perception and suggests a continuous process in which the percept of velocity is influenced by the previous history of stimulation.
Collapse
Affiliation(s)
- Shannon M. Locke
- School of Medical Sciences, University of Sydney, NSW 2006 Australia
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
| | - Johahn Leung
- School of Medical Sciences, University of Sydney, NSW 2006 Australia
| | - Simon Carlile
- School of Medical Sciences, University of Sydney, NSW 2006 Australia
- Starkey Hearing Research Center, 2110 Shattuck st#408, Berkeley, CA 94704 USA
| |
Collapse
|