1
Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Localization abilities with a visual-to-auditory substitution device are modulated by the spatial arrangement of the scene. Atten Percept Psychophys 2025. [PMID: 40281272] [DOI: 10.3758/s13414-025-03065-y]
Abstract
Visual-to-auditory substitution devices convert visual images into soundscapes. They are intended for use by blind people in everyday situations that contain various obstacles that need to be localized simultaneously, as well as irrelevant objects that must be ignored. It is therefore important to establish the extent to which substitution devices make it possible to localize obstacles in complex scenes. In this study, we used a substitution device that combines spatial acoustic cues and pitch modulation to convey spatial information. Nineteen blindfolded sighted participants had to point at a virtual target that was displayed alone or among distractors, to evaluate their ability to perform a localization task in minimalist and complex virtual scenes. The spatial configuration of the scene was manipulated by varying the number of distractors and their spatial arrangement relative to the target. While elevation localization abilities were not impaired by the presence of distractors, the ability to localize the azimuth of the target was modulated when a large number of distractors were displayed at the same elevation as the target. The elevation localization performance tends to confirm that pitch modulation is effective for conveying elevation information with the device in various spatial configurations. Conversely, the impairment to azimuth localization seems to result from segregation difficulties that arise when the spatial configuration of the objects does not allow pitch-based segregation. This must be considered in the design of substitution devices in order to help blind people correctly evaluate the risks posed by different situations.
Affiliation(s)
- Camille Bordeau
- University of Burgundy, CNRS, LEAD UMR5022, 21000 Dijon, France
- Aix Marseille University, CNRS, CRPN, Marseille, France
- Florian Scalvini
- ImViA UR 7535, University of Burgundy, Dijon, France
- IMT Atlantique, LaTIM U1101 INSERM, Brest, France
- Maxime Ambard
- University of Burgundy, CNRS, LEAD UMR5022, 21000 Dijon, France
2
Meyer J, Picinali L. On the generalization of accommodation to head-related transfer functions. J Acoust Soc Am 2025;157:420-432. [PMID: 39841036] [DOI: 10.1121/10.0034858]
Abstract
To date, there is strong evidence that humans with normal hearing can adapt to non-individual head-related transfer functions (HRTFs). However, less attention has been given to the generalization of this adaptation to untrained conditions. This study investigated how adaptation to one set of HRTFs generalizes to another set of HRTFs. Participants were divided into two groups and trained to localize a speech stimulus reproduced binaurally using either individual or non-individual HRTFs. Training led to improved localization performance with the trained HRTFs in both groups of participants. Results also showed no difference in the localization improvement between the trained and untrained HRTFs in either group, indicating a generalization of adaptation to HRTFs. The findings did not make it possible to determine precisely which type of learning (procedural or perceptual) primarily contributed to the generalization, highlighting the potential need to expose participants to longer training protocols.
Affiliation(s)
- Julie Meyer
- Dyson School of Design Engineering, Imperial College London, SW7 2DB London, United Kingdom
- Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, SW7 2DB London, United Kingdom
3
Parise C, Gori M, Finocchietti S, Ernst M, Esposito D, Tonelli A. Happy new ears: Rapid adaptation to novel spectral cues in vertical sound localization. iScience 2024;27:111308. [PMID: 39640573] [PMCID: PMC11617380] [DOI: 10.1016/j.isci.2024.111308]
Abstract
Humans can adapt to changes in the acoustic properties of the head and exploit the resulting novel spectral cues for sound source localization. However, the adaptation rate varies across studies and is not associated with the aftereffects commonly found after adaptation in other sensory domains. To investigate the adaptation rate and measure potential aftereffects, our participants wore "new ears" that altered the spectral cues for sound localization and underwent sensorimotor training to induce rapid adaptation. Within 20 min, the sensorimotor training induced full adaptation to the new ears, as demonstrated by changes in several performance indices, including localization gain, bias, and precision. Once the new ears were removed, participants displayed systematic aftereffects, evident as a drop in localization precision lasting only a few trials. These results highlight the short-term plasticity of human spatial hearing, which can quickly adapt to spectral perturbations while inducing large, yet short-lived, aftereffects.
Affiliation(s)
- Cesare Parise
- Department of Psychology, University of Liverpool, Liverpool, UK
- Monica Gori
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Sara Finocchietti
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Marc Ernst
- Department of Psychology, University of Ulm, Ulm, Germany
- Davide Esposito
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Alessia Tonelli
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- School of Psychology, University of Sydney, Sydney, Australia
|
4
Colas T, Farrugia N, Hendrickx E, Paquier M. Sound externalization in dynamic binaural listening: A comparative behavioral and EEG study. Hear Res 2023;440:108912. [PMID: 37952369] [DOI: 10.1016/j.heares.2023.108912]
Abstract
Binaural reproduction aims at recreating a realistic sound scene at the ears of the listener using headphones. Unfortunately, externalization of frontal and rear sources is often poor (virtual sources are perceived inside the head instead of outside it). Nevertheless, previous studies have shown that large head-tracked movements can substantially improve externalization and that this improvement persists once the subject has stopped moving their head. The present study investigates the relation between externalization and event-related potentials (ERPs) by performing behavioral and EEG measurements under the same experimental conditions. Different degrees of externalization were achieved by preceding the measurements with (1) head-tracked movements, (2) untracked head movements, and (3) no head movement. Results showed that performing a head movement, whether head tracking was active or not, increased the amplitude of ERP components after 100 ms, which suggests that preceding head movements alter auditory processing. Moreover, untracked head movements produced a larger N1 amplitude, which might be a marker of a consistency break with respect to the real world. While externalization scores were higher after head-tracked movements in the behavioral experiment, no marker of externalization could be found in the EEG results.
Affiliation(s)
- Tom Colas
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Nicolas Farrugia
- IMT Atlantique, CNRS Lab-STICC UMR 6285, 655 avenue du Technopole, 29280 Plouzane, France
- Etienne Hendrickx
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
- Mathieu Paquier
- University of Brest, CNRS Lab-STICC UMR 6285, 6 avenue Victor Le Gorgeu, CS 93837, 29238 Brest Cedex 3, France
5
Shim L, Lee J, Han JH, Jeon H, Hong SK, Lee HJ. Feasibility of Virtual Reality-Based Auditory Localization Training With Binaurally Recorded Auditory Stimuli for Patients With Single-Sided Deafness. Clin Exp Otorhinolaryngol 2023;16:217-224. [PMID: 37080730] [PMCID: PMC10471910] [DOI: 10.21053/ceo.2023.00206]
Abstract
OBJECTIVES To train participants to localize sound using virtual reality (VR) technology, appropriate auditory stimuli that contain accurate spatial cues are essential. The generic head-related transfer function that grounds the programmed spatial audio in VR does not reflect individual variation in monaural spatial cues, which is critical for auditory spatial perception in patients with single-sided deafness (SSD). As binaural difference cues are unavailable, impaired auditory spatial perception is typical in the SSD population and warrants intervention. This study assessed the applicability of binaurally recorded auditory stimuli in VR-based training for sound localization in SSD patients. METHODS Sixteen subjects with SSD and 38 normal-hearing (NH) controls underwent VR-based training for sound localization and were assessed 3 weeks after completing training. The VR program incorporated prerecorded auditory stimuli created individually in the SSD group and over an anthropometric model in the NH group. RESULTS Sound localization performance revealed significant improvements in both groups after training, with benefits retained for an additional 3 weeks. Subjective improvements in spatial hearing were confirmed in the SSD group. CONCLUSION In both the SSD and NH groups, VR-based training for sound localization using individually recorded binaural stimuli was found to be effective and beneficial. Furthermore, VR-based training does not require sophisticated instruments or setups. These results suggest that this technique represents a new therapeutic treatment for impaired sound localization.
Affiliation(s)
- Leeseul Shim
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Jihyun Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Ji-Hye Han
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Hanjae Jeon
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Sung-Kwang Hong
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Korea
- Hyo-Jeong Lee
- Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Korea
6
Sanchez Jimenez A, Willard KJ, Bajo VM, King AJ, Nodal FR. Persistence and generalization of adaptive changes in auditory localization behavior following unilateral conductive hearing loss. Front Neurosci 2023;17:1067937. [PMID: 36816127] [PMCID: PMC9929551] [DOI: 10.3389/fnins.2023.1067937]
Abstract
Introduction Sound localization relies on the neural processing of binaural and monaural spatial cues generated by the physical properties of the head and body. Hearing loss in one ear compromises binaural computations, impairing the ability to localize sounds in the horizontal plane. With appropriate training, adult individuals can adapt to this binaural imbalance and largely recover their localization accuracy. However, it remains unclear how long this learning is retained or whether it generalizes to other stimuli. Methods We trained ferrets to localize broadband noise bursts in quiet conditions and measured their initial head orienting responses and approach-to-target behavior. To evaluate the persistence of auditory spatial learning, we tested the sound localization performance of the animals over repeated periods of monaural earplugging that were interleaved with short or long periods of normal binaural hearing. To explore learning generalization to other stimulus types, we measured the localization accuracy before and after adaptation using different bandwidth stimuli presented against constant or amplitude-modulated background noise. Results Retention of learning resulted in a smaller initial deficit when the same ear was occluded on subsequent occasions. Each time, the animals' performance recovered with training to near pre-plug levels of localization accuracy. By contrast, switching the earplug to the contralateral ear resulted in less adaptation, indicating that the capacity to learn a new strategy for localizing sound is more limited if the animals have previously adapted to conductive hearing loss in the opposite ear. Moreover, the degree of adaptation to the training stimulus for individual animals was significantly correlated with the extent to which learning extended to untrained octave band target sounds presented in silence and to broadband targets presented in background noise, suggesting that adaptation and generalization go hand in hand. 
Conclusions Together, these findings provide further evidence for plasticity in the weighting of monaural and binaural cues during adaptation to unilateral conductive hearing loss, and show that the training-dependent recovery in spatial hearing can generalize to more naturalistic listening conditions, so long as the target sounds provide sufficient spatial information.
7
Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Cross-modal correspondence enhances elevation localization in visual-to-auditory sensory substitution. Front Psychol 2023;14:1079998. [PMID: 36777233] [PMCID: PMC9909421] [DOI: 10.3389/fpsyg.2023.1079998]
Abstract
Introduction Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a Virtual Acoustic Space (VAS) based on Head-Related Transfer Functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, elevation perception is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method used to convey elevation information is based on the audiovisual cross-modal correspondence between pitch and visual elevation. The main drawback of this second method is that the narrow spectral band of the sounds limits the ability to perceive elevation through HRTFs. Method In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device in which elevation is conveyed either by a spatialization-only method (Noise encoding) or by pitch-based methods with different spectral complexities (Monotonic and Harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings. Results Participants localized elevation more accurately with the pitch-based encodings than with the spatialization-only method. Only slight differences in azimuth localization performance were found between the encodings. Discussion This study suggests the intuitiveness of a pitch-based encoding, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
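A pitch-based elevation encoding of the kind compared in this study can be sketched as a simple mapping from image row to tone frequency. The function name, image height, frequency range, and logarithmic spacing below are illustrative assumptions, not the actual parameters of the device used in the experiments.

```python
def elevation_to_pitch(row, n_rows=64, f_min=200.0, f_max=4000.0):
    """Map an image row (0 = top row) to a pure-tone frequency in Hz.

    Higher visual elevation is rendered as higher pitch, following the
    pitch-elevation cross-modal correspondence. Logarithmic spacing keeps
    the frequency steps roughly uniform on a perceptual scale. All
    parameter values here are hypothetical.
    """
    # Normalised elevation: 1.0 at the top row, 0.0 at the bottom row.
    elevation = 1.0 - row / (n_rows - 1)
    return f_min * (f_max / f_min) ** elevation
```

Under this sketch the top row maps to `f_max` and the bottom row to `f_min`; a spatialization-only encoding would instead leave elevation entirely to the HRTF spectrum.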
Affiliation(s)
- Camille Bordeau
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
- Julien Dubois
- ImViA EA 7535, Université de Bourgogne, Dijon, France
- Maxime Ambard
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
8
Intensive Training of Spatial Hearing Promotes Auditory Abilities of Bilateral Cochlear Implant Adults: A Pilot Study. Ear Hear 2023;44:61-76. [PMID: 35943235] [DOI: 10.1097/aud.0000000000001256]
Abstract
OBJECTIVE The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN Twelve bilateral CI adults aged between 19 and 69 followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that received feedback only in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and Other Qualities of Hearing questionnaire. RESULTS The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving an immersive sensory environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.
9
Development of Sound Localization in Infants and Young Children with Cochlear Implants. J Clin Med 2022;11:6758. [PMID: 36431235] [PMCID: PMC9694519] [DOI: 10.3390/jcm11226758]
Abstract
Cochlear implantation as a treatment for severe-to-profound hearing loss allows children to develop hearing, speech, and language in many cases. However, cochlear implants are generally provided beyond the infant period and outcomes are assessed after years of implant use, making comparison with normal development difficult. The aim was to study whether the rate of improvement of horizontal localization accuracy in children with bilateral implants is similar to that of children with normal hearing. A convenience sample of 20 children with a median age at simultaneous bilateral implantation of 0.58 years (0.42-2.3 years) participated in this cohort study. Longitudinal follow-up of sound localization accuracy for an average of ≈1 year generated 42 observations at a mean age of 1.5 years (0.58-3.6 years). The rate of development was compared to historical control groups, including children with normal hearing and children with relatively late bilateral implantation (≈4 years of age). There was a significant main effect of time with bilateral implants on localization accuracy (slope = 0.21/year, R2 = 0.25, F = 13.6, p < 0.001, n = 42). No differences between slopes (F = 0.30, p = 0.58) or correlation coefficients (Cohen's q = 0.28, p = 0.45) existed when comparing children with implants and normal hearing (slope = 0.16/year since birth, p = 0.015, n = 12). The rate of development was identical to that of children implanted late. Results suggest that early bilateral implantation in children with severe-to-profound hearing loss allows development of sound localization at a similar age to children with normal hearing. Similar rates in children with early and late implantation and normal hearing suggest an intrinsic mechanism for the development of horizontal sound localization abilities.
10
Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022;17:e0263509. [PMID: 35421095] [PMCID: PMC9009652] [DOI: 10.1371/journal.pone.0263509]
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in participants with one ear plugged improved more after the reaching-to-sounds training than after the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
|
11
Ananthabhotla I, Ithapu VK, Brimijoin WO. A framework for designing head-related transfer function distance metrics that capture localization perception. JASA Express Lett 2021;1:044401. [PMID: 36154203] [DOI: 10.1121/10.0003983]
Abstract
Linear comparisons can fail to describe perceptual differences between head-related transfer functions (HRTFs), reducing their utility for perceptual tests, HRTF selection methods, and prediction algorithms. This work introduces a machine learning framework for constructing a perceptual error metric that is aligned with human sound localization performance. A neural network is first trained to predict measurement locations from a large database of HRTFs and then fine-tuned with perceptual data. The resulting model performs robustly compared with a standard spectral difference error metric. A statistical test is employed to quantify the information gain from the perceptual observations as a function of space.
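The spectral difference baseline mentioned in the abstract is conventionally computed as a root-mean-square log-magnitude difference between two HRTF spectra. The sketch below shows that baseline under the assumption of matched frequency bins; the function name and inputs are illustrative, not the paper's API.

```python
import math

def spectral_difference_db(hrtf_a, hrtf_b):
    """RMS log-magnitude difference (in dB) between two HRTF magnitude
    spectra sampled at the same frequency bins.

    This is the conventional spectral-distortion baseline against which
    a learned perceptual metric can be compared; it weights every
    frequency bin equally and therefore ignores perceptual salience,
    which is exactly the limitation a learned metric tries to address.
    """
    if len(hrtf_a) != len(hrtf_b):
        raise ValueError("spectra must be sampled at the same frequency bins")
    diffs_db = [20.0 * math.log10(a / b) for a, b in zip(hrtf_a, hrtf_b)]
    return math.sqrt(sum(d * d for d in diffs_db) / len(diffs_db))
```

Identical spectra give 0 dB; a uniform factor-of-ten magnitude offset gives 20 dB regardless of which bins carry localization-relevant cues.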
Affiliation(s)
- W Owen Brimijoin
- Facebook Reality Labs, 9845 Willows Road, Redmond, Washington 98052, USA
12
Best V, Baumgartner R, Lavandier M, Majdak P, Kopčo N. Sound Externalization: A Review of Recent Research. Trends Hear 2020;24:2331216520948390. [PMID: 32914708] [PMCID: PMC7488874] [DOI: 10.1177/2331216520948390]
Abstract
Sound externalization, or the perception that a sound source is outside the head, is an intriguing phenomenon that has long interested psychoacousticians. While previous reviews are available, the past few decades have produced a substantial amount of new data. In this review, we aim to synthesize those data and to summarize advances in our understanding of the phenomenon. We also discuss issues related to the definition and measurement of sound externalization and describe quantitative approaches that have been taken to predict the outcomes of externalization experiments. Last, sound externalization is of practical importance for many kinds of hearing technologies. Here, we touch on two examples, discussing the role of sound externalization in augmented/virtual reality systems and bringing attention to the somewhat overlooked issue of sound externalization in wearers of hearing aids.
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Robert Baumgartner
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Mathieu Lavandier
- Univ Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Vaulx-en-Velin, France
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Norbert Kopčo
- Institute of Computer Science, Faculty of Science, Pavol Jozef Šafárik University, Košice, Slovakia
13
Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020;149:107665. [PMID: 33130161] [DOI: 10.1016/j.neuropsychologia.2020.107665]
Abstract
When localising sounds in space the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements than just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy.
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
14
Valzolgher C, Campus C, Rabini G, Gori M, Pavani F. Updating spatial hearing abilities through multisensory and motor cues. Cognition 2020; 204:104409. [PMID: 32717425 DOI: 10.1016/j.cognition.2020.104409] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2019] [Revised: 07/09/2020] [Accepted: 07/09/2020] [Indexed: 10/23/2022]
Abstract
Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, as well as other animals, can update the way they exploit auditory cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffling one ear, to assess the effects of active or passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist, while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory tasks: single sound localization, minimum audible angle (MAA), and spatial and temporal bisection. We also performed the tests twice in an untrained group, which completed the same auditory tasks but received no training. Results showed that participants significantly improved in single sound localization across 3 consecutive days, and more so in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of paying attention to the position of sounds and/or seeing their positions when updating spatial hearing. The observed adaptation did not generalize to other auditory spatial tasks (space bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy; IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France.
- Giuseppe Rabini
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy
- Monica Gori
- Italian Institute of Technology (IIT), Italy
- Francesco Pavani
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy; IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France; Department of Psychology and Cognitive Science, University of Trento, Italy
15
Steadman MA, Kim C, Lestang JH, Goodman DFM, Picinali L. Short-term effects of sound localization training in virtual reality. Sci Rep 2019; 9:18284. [PMID: 31798004 PMCID: PMC6893038 DOI: 10.1038/s41598-019-54811-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Accepted: 11/18/2019] [Indexed: 11/08/2022] Open
Abstract
Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain's ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated: one provided simple visual positional confirmation of sound source location, a second introduced game design elements ("gamification"), and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion ("active listening"). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.
Affiliation(s)
- Mark A Steadman
- Dyson School of Design Engineering, Imperial College London, London, UK.
- Department of Bioengineering, Imperial College London, London, UK.
- Chungeun Kim
- Dyson School of Design Engineering, Imperial College London, London, UK
- Jean-Hugues Lestang
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Dan F M Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, London, UK
16
Kumpik DP, Campbell C, Schnupp JWH, King AJ. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. [PMID: 31802997 PMCID: PMC6873890 DOI: 10.3389/fnins.2019.01164] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Accepted: 10/15/2019] [Indexed: 11/28/2022] Open
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects' sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias, and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli.
Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Connor Campbell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
17
Differential Adaptation in Azimuth and Elevation to Acute Monaural Spatial Hearing after Training with Visual Feedback. eNeuro 2019; 6:ENEURO.0219-19.2019. [PMID: 31601632 PMCID: PMC6825955 DOI: 10.1523/eneuro.0219-19.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 08/31/2019] [Accepted: 09/04/2019] [Indexed: 11/21/2022] Open
Abstract
Sound localization in the horizontal plane (azimuth) relies mainly on binaural difference cues in sound level and arrival time. Blocking one ear will perturb these cues and may strongly affect the azimuth performance of the listener. However, single-sided deaf listeners, as well as acutely single-sided plugged normal-hearing subjects, often use a combination of (ambiguous) monaural head-shadow cues, impoverished binaural level-difference cues, and (veridical, but limited) pinna- and head-related spectral cues to estimate source azimuth. To what extent listeners can adjust the relative contributions of these different cues is unknown, as the mechanisms underlying adaptive processes to acute monauralization are still unclear. By providing visual feedback during a brief training session with a high-pass (HP) filtered sound at a fixed sound level, we investigated the ability of listeners to adapt to their erroneous sound-localization percepts. We show that acutely plugged listeners rapidly adjusted the relative contributions of perceived sound level and the spectral and distorted binaural cues to improve their localization performance in azimuth, also for sound levels and locations other than those experienced during training. Interestingly, our results also show that this acute cue-reweighting led to poorer localization performance in elevation, in line with the acoustic–spatial information provided during training. We conclude that the human auditory system rapidly readjusts the weighting of all relevant localization cues to respond adequately to the demands of the current acoustic environment, even if the adjustments may hamper veridical localization performance in the real world.
18
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. [PMID: 31133688 PMCID: PMC6536515 DOI: 10.1038/s41598-019-44267-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 05/08/2019] [Indexed: 11/09/2022] Open
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous works, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear-plug. This unexpected finding highlights the importance of including a no-training group when studying sound localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound localisation re-learning.
19
Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning. Sci Rep 2019; 9:1063. [PMID: 30705332 PMCID: PMC6355836 DOI: 10.1038/s41598-018-37873-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Accepted: 12/17/2018] [Indexed: 12/05/2022] Open
Abstract
This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance, carried out over 10 sessions. Sessions 1–4, performed by all subjects, occurred at 1-week intervals. During these initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed further improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained the performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier that could predict learning ability was observed.
20
Kumpik DP, King AJ. A review of the effects of unilateral hearing loss on spatial hearing. Hear Res 2018; 372:17-28. [PMID: 30143248 PMCID: PMC6341410 DOI: 10.1016/j.heares.2018.08.003] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 07/05/2018] [Accepted: 08/09/2018] [Indexed: 12/13/2022]
Abstract
The capacity of the auditory system to extract spatial information relies principally on the detection and interpretation of binaural cues, i.e., differences in the time of arrival or level of the sound between the two ears. In this review, we consider the effects of unilateral or asymmetric hearing loss on spatial hearing, with a focus on the adaptive changes in the brain that may help to compensate for an imbalance in input between the ears. Unilateral hearing loss during development weakens the brain's representation of the deprived ear, and this may outlast the restoration of function in that ear and therefore impair performance on tasks such as sound localization and spatial release from masking that rely on binaural processing. However, loss of hearing in one ear also triggers a reweighting of the cues used for sound localization, resulting in increased dependence on the spectral cues provided by the other ear for localization in azimuth, as well as adjustments in binaural sensitivity that help to offset the imbalance in inputs between the two ears. These adaptive strategies enable the developing auditory system to compensate to a large degree for asymmetric hearing loss, thereby maintaining accurate sound localization. They can also be leveraged by training following hearing loss in adulthood. Although further research is needed to determine whether this plasticity can generalize to more realistic listening conditions and to other tasks, such as spatial unmasking, the capacity of the auditory system to undergo these adaptive changes has important implications for rehabilitation strategies in the hearing impaired.
- Unilateral hearing loss in infancy can disrupt spatial hearing, even after binaural inputs are restored.
- Plasticity in the developing brain enables substantial recovery in sound localization accuracy.
- Adaptation to unilateral hearing loss is based on reweighting of monaural spectral cues and binaural plasticity.
- Training on auditory tasks can partially compensate for unilateral hearing loss, highlighting potential therapies.
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, Parks Road, Oxford, OX1 3PT, UK
- Andrew J King
- Department of Physiology, Anatomy and Genetics, Parks Road, Oxford, OX1 3PT, UK.
21
Bălan O, Moldoveanu A, Moldoveanu F, Nagy H, Wersényi G, Unnórsson R. Improving the Audio Game–Playing Performances of People with Visual Impairments through Multimodal Training. J Vis Impair Blind 2017. [DOI: 10.1177/0145482x1711100206] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Introduction
As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory and haptic cues have been shown to be an effective approach towards creating a rich spatial representation of the environment, so they are considered for inclusion in the development of assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency through a sensory substitution device requires extensive training for visually impaired users to learn how to process the artificial auditory cues and convert them into spatial information.
Methods
Considering all the potential advantages game-based learning can provide, we propose a new method for training the sound localization and virtual navigational skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure is focused on a multimodal (auditory and haptic) learning approach in which the subjects were asked to listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband corresponding to the direction of the sound source in space.
Results
The results we obtained in a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy resulted in significant improvements in the auditory performance and navigation skills of the subjects, thus ensuring behavioral gains in the spatial perception of the environment.
Affiliation(s)
- Oana Bălan
- University Politehnica of Bucharest, Splaiul Independentei, 313, Bucharest, Romania
- Hunor Nagy
- Széchenyi István University, Egyetem tér 1., Hungary
- György Wersényi
- Széchenyi István University, Győr, Egyetem tér 1., 9026 Hungary
- Rúnar Unnórsson
- University of Iceland, School of Engineering and Natural Sciences—Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, VR-2/V02-237, Reykjavik, Iceland
22
Jóhannesson ÓI, Balan O, Unnthorsson R, Moldoveanu A, Kristjánsson Á. The Sound of Vision Project: On the Feasibility of an Audio-Haptic Representation of the Environment, for the Visually Impaired. Brain Sci 2016; 6:brainsci6030020. [PMID: 27355966 PMCID: PMC5039449 DOI: 10.3390/brainsci6030020] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2016] [Revised: 06/18/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022] Open
Abstract
The Sound of Vision project involves developing a sensory substitution device aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. However, the feasibility of such an approach is strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain, with an emphasis on functional changes in the visually impaired compared to sighted people. We discuss the effects of adaptation on brain activity, in particular the short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence for the kind of sensory substitution that Sound of Vision involves, and finally evidence for adaptation to changes in the auditory environment. We conclude that sensory substitution enterprises such as Sound of Vision are quite feasible in light of the available evidence, which is encouraging for such projects.
Affiliation(s)
- Ómar I Jóhannesson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland.
- Oana Balan
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania.
- Runar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, University of Iceland, Reykjavik 101, Iceland.
- Alin Moldoveanu
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania.
- Árni Kristjánsson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland.
23
Sound localization in a changing world. Curr Opin Neurobiol 2015; 35:35-43. [PMID: 26126152 DOI: 10.1016/j.conb.2015.06.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2015] [Revised: 06/04/2015] [Accepted: 06/15/2015] [Indexed: 12/11/2022]
Abstract
In natural environments, neural systems must be continuously updated to reflect changes in sensory inputs and behavioral goals. Recent studies of sound localization have shown that adaptation and learning involve multiple mechanisms that operate at different timescales and stages of processing, with other sensory and motor-related inputs playing a key role. We are only just beginning to understand, however, how these processes interact with one another to produce adaptive changes at the level of neuronal populations and behavior. Because there is no explicit map of auditory space in the cortex, studies of sound localization may also provide much broader insight into the plasticity of complex neural representations that are not topographically organized.
24
Mendonça C. A review on auditory space adaptations to altered head-related cues. Front Neurosci 2014; 8:219. [PMID: 25120422 PMCID: PMC4110508 DOI: 10.3389/fnins.2014.00219] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Accepted: 07/05/2014] [Indexed: 11/23/2022] Open
Abstract
In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices, and altered head-related transfer functions (HRTFs). Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject's own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex (AC) may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting.
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University, Espoo, Finland
25
Lega C, Cattaneo Z, Merabet LB, Vecchi T, Cucchi S. The effect of musical expertise on the representation of space. Front Hum Neurosci 2014; 8:250. [PMID: 24795605 PMCID: PMC4006044 DOI: 10.3389/fnhum.2014.00250] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2014] [Accepted: 04/04/2014] [Indexed: 11/13/2022] Open
Abstract
Consistent evidence suggests that pitch height may be represented in a spatial format, having both a vertical and a horizontal representation. The spatial representation of pitch height results in response compatibility effects whereby high pitch tones are preferentially associated with up-right responses and low pitch tones are preferentially associated with down-left responses (i.e., the Spatial-Musical Association of Response Codes (SMARC) effect), with the strength of these associations depending on individuals' musical skills. In this study we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low and high pitch tones affected the bisection performance of musicians differently, both when pitch was relevant and when it was irrelevant for the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data also show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.
Affiliation(s)
- Carlotta Lega
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; Brain Connectivity Center, National Neurological Institute C. Mondino, Pavia, Italy
- Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Tomaso Vecchi
- Brain Connectivity Center, National Neurological Institute C. Mondino, Pavia, Italy; Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
- Silvia Cucchi
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy