1
Teng S, Danforth C, Paternoster N, Ezeana M, Puri A. Object recognition via echoes: quantifying the crossmodal transfer of three-dimensional shape information between echolocation, vision, and haptics. Front Neurosci 2024; 18:1288635. PMID: 38440393; PMCID: PMC10909950; DOI: 10.3389/fnins.2024.1288635.
Abstract
Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not novel objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. 
In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
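As a rough illustration of what a Gaussian blur kernel σ of ~2.5° of visual angle means in pixel terms, degrees can be converted to pixels for a given display geometry. This is a minimal sketch only; the viewing distance and screen dimensions below are hypothetical examples, not values from the study:

```python
import math

def pixels_per_degree(viewing_distance_cm: float, screen_width_cm: float,
                      screen_width_px: int) -> float:
    """Number of pixels subtended by one degree of visual angle.

    Uses the width of a 1-degree arc centered on the line of sight:
    cm_per_degree = 2 * d * tan(0.5 deg).
    """
    cm_per_degree = 2.0 * viewing_distance_cm * math.tan(math.radians(0.5))
    return cm_per_degree * screen_width_px / screen_width_cm

def blur_sigma_px(sigma_deg: float, ppd: float) -> float:
    """Gaussian blur sigma in pixels for a sigma specified in degrees."""
    return sigma_deg * ppd

# Hypothetical setup: 57 cm viewing distance, 30 cm wide screen, 1024 px wide.
ppd = pixels_per_degree(viewing_distance_cm=57.0,
                        screen_width_cm=30.0, screen_width_px=1024)
sigma = blur_sigma_px(2.5, ppd)  # sigma ~2.5 deg, as in the abstract
```

At the classic 57 cm viewing distance, 1° of visual angle subtends almost exactly 1 cm on the screen, which makes the conversion easy to sanity-check before applying the blur with any image-processing library.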
Affiliation(s)
- Santani Teng
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, United States
- Caroline Danforth
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Nickolas Paternoster
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Michael Ezeana
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Georgetown University School of Medicine, Washington, DC, United States
- Amrita Puri
- Department of Biology, University of Central Arkansas, Conway, AR, United States
2
Goicke S, Denk F, Jürgens T. Auditory Spatial Bisection of Blind and Normally Sighted Individuals in Free Field and Virtual Acoustics. Trends Hear 2024; 28:23312165241230947. PMID: 38361245; PMCID: PMC10874137; DOI: 10.1177/23312165241230947.
Abstract
Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
Affiliation(s)
- Stefanie Goicke
- Institute of Acoustics, Technische Hochschule Lübeck (University of Applied Sciences Lübeck), Lübeck, Germany
- Research Unit for ORL—Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Florian Denk
- German Institute of Hearing Aids, Lübeck, Germany
- Tim Jürgens
- Institute of Acoustics, Technische Hochschule Lübeck (University of Applied Sciences Lübeck), Lübeck, Germany
3
Finocchietti S, Esposito D, Gori M. Monaural auditory spatial abilities in early blind individuals. Iperception 2023; 14:20416695221149638. PMID: 36861104; PMCID: PMC9969445; DOI: 10.1177/20416695221149638.
Abstract
Early blind individuals can localize single sound sources better than sighted participants, even under monaural conditions. Yet, in binaural listening, they struggle to judge the relative distances among three different sounds. The latter ability has never been tested under monaural conditions. We investigated the performance of eight early blind and eight blindfolded healthy individuals in monaural and binaural listening during two audio-spatial tasks. In the localization task, a single sound was played in front of participants, who had to localize it. In the auditory bisection task, three consecutive sounds were played from different spatial positions, and participants reported whether the second sound was closer to the first or to the third. Only early blind individuals improved their performance in monaural bisection, while no statistical difference was present for the localization task. We conclude that early blind individuals show a superior ability to use spectral cues under monaural conditions.
Affiliation(s)
- Davide Esposito
- Unit for Visually Impaired People, Italian Institute of Technology, 16131, Genoa, Italy
4
Sabourin CJ, Merrikhi Y, Lomber SG. Do blind people hear better? Trends Cogn Sci 2022; 26:999-1012. PMID: 36207258; DOI: 10.1016/j.tics.2022.08.016.
Abstract
For centuries, anecdotal evidence such as the perfect pitch of the blind piano tuner or blind musician has supported the notion that individuals who have lost their sight early in life have superior hearing abilities compared with sighted people. Recently, auditory psychophysical and functional imaging studies have identified that specific auditory enhancements in the early blind can be linked to activation in extrastriate visual cortex, suggesting crossmodal plasticity. Furthermore, the nature of the sensory reorganization in occipital cortex supports the concept of a task-based functional cartography for the cerebral cortex rather than a sensory-based organization. In total, studies of early-blind individuals provide valuable insights into mechanisms of cortical plasticity and principles of cerebral organization.
Affiliation(s)
- Carina J Sabourin
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Yaser Merrikhi
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Psychology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3G 1Y6, Canada
5
Objective Evaluation of Obstacle Perception Using Spontaneous Body Movements of Blind People Evoked by Movements of Acoustic Virtual Wall. Hum Behav Emerg Technol 2022. DOI: 10.1155/2022/9475983.
Abstract
Obstacle perception using sound is the ability to detect silent objects, such as walls and poles. It is very important for blind people, who rely on acoustic information to recognize their environment when walking or carrying out various daily activities. In this paper, to develop an objective method for evaluating the degree of obstacle-perception acquisition in the education and rehabilitation of the blind, the authors measured the spontaneous body movements evoked by the approach of an acoustic virtual wall. Ten blind persons who had experienced obstacle perception in their daily life and seven sighted persons with no such experience participated in the experiment. The reciprocal (approaching and receding) movements of the virtual wall were presented using simulated reflected sound, and the spontaneous body movements of the subjects were measured. Eight of the ten blind participants showed large maximum values of the correlation function between the wall movements and their body movements, whereas six of the seven sighted participants showed small maximum values. These results indicate that body movements can be used for an objective evaluation of obstacle perception. In particular, the maximum value of the correlation function is the most appropriate measure for such an evaluation, because it does not depend on the subject's physique.
6
Arcos K, Harhen N, Loiotile R, Bedny M. Superior verbal but not nonverbal memory in congenital blindness. Exp Brain Res 2022; 240:897-908. PMID: 35076724; PMCID: PMC9204649; DOI: 10.1007/s00221-021-06304-4.
Abstract
Previous studies suggest that people who are congenitally blind outperform sighted people on some memory tasks. Whether blindness-associated memory advantages are specific to verbal materials or are also observed with nonverbal sounds has not been determined. Congenitally blind individuals (n = 20) and age and education matched blindfolded sighted controls (n = 22) performed a series of auditory memory tasks. These included: verbal forward and backward letter spans, a complex letter span with intervening equations, as well as two matched recognition tasks: one with verbal stimuli (i.e., letters) and one with nonverbal complex meaningless sounds. Replicating previously observed findings, blind participants outperformed sighted people on forward and backward letter span tasks. Blind participants also recalled more letters on the complex letter span task despite the interference of intervening equations. Critically, the same blind participants showed larger advantages on the verbal as compared to the nonverbal recognition task. These results suggest that blindness selectively enhances memory for verbal material. Possible explanations for blindness-related verbal memory advantages include blindness-induced memory practice and 'visual' cortex recruitment for verbal processing.
Affiliation(s)
- Karen Arcos
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA.
- Nora Harhen
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Rita Loiotile
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, United States
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, United States
7
Auditory distance perception in front and rear space. Hear Res 2022; 417:108468. DOI: 10.1016/j.heares.2022.108468.
8
Thaler L, Norman LJ. No effect of 10-week training in click-based echolocation on auditory localization in people who are blind. Exp Brain Res 2021; 239:3625-3633. PMID: 34609546; PMCID: PMC8599323; DOI: 10.1007/s00221-021-06230-5.
Abstract
What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet, it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities superior to those of people who are blind and do not use click-based echolocation (Vercillo et al., Neuropsychologia 67:35-40, 2015). Based on this hypothesis, we might predict that training in click-based echolocation leads to improvement in auditory localization tasks in people who are blind. Here, we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence for an improvement in auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Given the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK.
- Liam J Norman
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
9
Kolarik AJ, Moore BCJ, Cirstea S, Aggius-Vella E, Gori M, Campus C, Pardhan S. Factors Affecting Auditory Estimates of Virtual Room Size: Effects of Stimulus, Level, and Reverberation. Perception 2021; 50:646-663. PMID: 34053354; DOI: 10.1177/03010066211020598.
Abstract
When vision is unavailable, auditory level and reverberation cues provide important spatial information regarding the environment, such as the size of a room. We investigated how room-size estimates were affected by stimulus type, level, and reverberation. In Experiment 1, 15 blindfolded participants estimated room size after performing a distance bisection task in virtual rooms that were either anechoic (with level cues only) or reverberant (with level and reverberation cues) with a relatively short reverberation time of T60 = 400 milliseconds. Speech, noise, or clicks were presented at distances between 1.9 and 7.1 m. The reverberant room was judged to be significantly larger than the anechoic room (p < .05) for all stimuli. In Experiment 2, only the reverberant room was used and the overall level of all sounds was equalized, so only reverberation cues were available. Ten blindfolded participants took part. Room-size estimates were significantly larger for speech than for clicks or noise. The results show that when level and reverberation cues are present, reverberation increases judged room size. Even relatively weak reverberation cues provide room-size information, which could potentially be used by blind or visually impaired individuals encountering novel rooms.
Affiliation(s)
- Andrew J Kolarik
- Anglia Ruskin University, Cambridge, UK
- Brian C J Moore
- Anglia Ruskin University, Cambridge, UK; University of Cambridge, Cambridge, UK
- Silvia Cirstea
- Anglia Ruskin University, Cambridge, UK
- Elena Aggius-Vella
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Institute for Mind, Brain and Technology, Herzeliya, Israel; Anglia Ruskin University, Cambridge, UK
- Claudio Campus
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Anglia Ruskin University, Cambridge, UK
10
Aprile G, Cappagli G, Morelli F, Gori M, Signorini S. Standardized and Experimental Tools to Assess Spatial Cognition in Visually Impaired Children: A Mini-Review. Front Neurosci 2020; 14:562589. PMID: 33041760; PMCID: PMC7525087; DOI: 10.3389/fnins.2020.562589.
Abstract
The acquisition of spatial cognition is essential for both everyday functioning (e.g., navigation) and more specific goals (e.g., mathematics); therefore, being able to assess and monitor spatial cognition from the first years of life would be essential to predict developmental outcomes and to intervene promptly whenever spatial development is compromised. Several lines of evidence indicate that spatial development can be compromised when development occurs with atypical sensory experience, such as blindness. Despite the massive importance of spatial abilities for the development of psychomotor competencies across childhood, only a few standardized and experimental methods have been developed to assess them in visually impaired children. In this review, we give a short overview of current formal (standardized) and informal (experimental) methods to assess spatial cognition in visually impaired children, demonstrating that very few validated tools have been proposed to date. The main contribution of the current work is to highlight the need for ad hoc studies to create and validate clinical measures for assessing spatial cognition in visually impaired individuals and to identify potential future developments in this area of research.
Affiliation(s)
- Giorgia Aprile
- Centre of Child Neurophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Giulia Cappagli
- Centre of Child Neurophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Federica Morelli
- Centre of Child Neurophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Sabrina Signorini
- Centre of Child Neurophthalmology, IRCCS Mondino Foundation, Pavia, Italy
11
Tonelli A, Campus C, Gori M. Early visual cortex response for sound in expert blind echolocators, but not in early blind non-echolocators. Neuropsychologia 2020; 147:107617. PMID: 32896527; DOI: 10.1016/j.neuropsychologia.2020.107617.
Abstract
Echolocation is a perceptual and navigational skill that can be acquired by some individuals. For blind people, this skill can help them "see" the environment around them via a new form of auditory information based on echoes. Expert human echolocators benefit from using this technique not only in controlled environments but also in their everyday lives. In the current study, we investigated the effect of echolocation on blind people's auditory spatial abilities at the cortical level. In an auditory spatial bisection task, we tested early blind individuals and early blind expert echolocators, along with sighted people. Our results showed similar early activation (50-90 ms) in the posterior area of the scalp for both early blind expert echolocators and sighted participants, but not for the early blind group. This activation was related to sound stimulation and was contralateral to the position of the sound in space. These findings indicate that echolocation is a good substitute for the visual modality, enabling the development of auditory spatial representations when vision is not available.
Affiliation(s)
- Alessia Tonelli
- UVIP, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy; Department of Translational Research of New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy.
- Claudio Campus
- UVIP, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori
- UVIP, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
12
Morelli F, Aprile G, Cappagli G, Luparia A, Decortes F, Gori M, Signorini S. A Multidimensional, Multisensory and Comprehensive Rehabilitation Intervention to Improve Spatial Functioning in the Visually Impaired Child: A Community Case Study. Front Neurosci 2020; 14:768. PMID: 32792904; PMCID: PMC7393219; DOI: 10.3389/fnins.2020.00768.
Abstract
Congenital visual impairment may have a negative impact on spatial abilities and result in severe delays in perceptual, social, motor, and cognitive skills across the life span. Although several lines of evidence have highlighted the need for an early introduction of re-habilitation interventions, such interventions are rarely adapted to children's visual capabilities, and very few studies have been conducted to assess their long-term efficacy. In this work, we present a case study of a visually impaired child enrolled in a newly developed re-habilitation intervention aimed at improving overall development through the diversification of re-habilitation activities based on visual potential and developmental profile, with a focus on spatial functioning. We argue that interventions for visually impaired children should be (a) adapted to their visual capabilities, in order to increase re-habilitation outcomes, (b) multidisciplinary and multidimensional, to improve adaptive abilities across development, and (c) multisensory, to promote the integration of different perceptual information coming from the environment.
Affiliation(s)
- Federica Morelli
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
- Giorgia Aprile
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
- Giulia Cappagli
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
- Antonella Luparia
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
- Francesco Decortes
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
13
Scheller M, Proulx MJ, Haan M, Dahlmann-Noor A, Petrini K. Late- but not early-onset blindness impairs the development of audio-haptic multisensory integration. Dev Sci 2020; 24:e13001. DOI: 10.1111/desc.13001.
Affiliation(s)
- Michelle Haan
- Developmental Neurosciences Programme, University College London, London, UK
- Annegret Dahlmann-Noor
- NIHR Biomedical Research Centre, Moorfields, London, UK
- Paediatric Service, Moorfields Eye Hospital, London, UK
- Karin Petrini
- Department of Psychology, University of Bath, Bath, UK
14
Gori M, Ober KM, Tinelli F, Coubard OA. Temporal representation impairment in developmental dyslexia for unisensory and multisensory stimuli. Dev Sci 2020; 23:e12977. PMID: 32333455; PMCID: PMC7507191; DOI: 10.1111/desc.12977.
Abstract
Dyslexia has been associated with a problem in visual-audio integration mechanisms. Here, we investigate for the first time the contribution of unisensory cues on multisensory audio and visual integration in 32 dyslexic children by modelling results using the Bayesian approach. Non-linguistic stimuli were used. Children performed a temporal task: they had to report whether the middle of three stimuli was closer in time to the first one or to the last one presented. Children with dyslexia, compared with typical children, exhibited poorer unimodal thresholds, requiring greater temporal distance between items for correct judgements, while multisensory thresholds were well predicted by the Bayesian model. This result suggests that the multisensory deficit in dyslexia is due to impaired audio and visual inputs rather than impaired multisensory processing per se. We also observed that poorer temporal skills correlated with lower reading skills in dyslexic children, suggesting that this temporal capability can be linked to reading abilities.
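For context, the "Bayesian model" referenced in the abstract above is, in its standard form, maximum-likelihood integration of independent Gaussian cues, under which the predicted multisensory threshold is sigma_AV = sqrt(sigma_A^2 * sigma_V^2 / (sigma_A^2 + sigma_V^2)). A minimal sketch of that prediction follows; the function name and example thresholds are illustrative, not data from the paper:

```python
import math

def mle_multisensory_sigma(sigma_a: float, sigma_v: float) -> float:
    """Predicted audio-visual threshold (sigma) under maximum-likelihood
    integration of two independent Gaussian cues: the variances combine
    as the inverse of the sum of inverse variances."""
    return math.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

# Illustrative unimodal thresholds (arbitrary units):
sigma_av = mle_multisensory_sigma(3.0, 4.0)  # -> 2.4
```

Under this model the combined threshold is never worse than the better unimodal threshold, which is why impaired unimodal inputs alone can account for the poorer multisensory performance reported here, without any deficit in the integration step itself.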
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Kinga M Ober
- Faculty of Educational Studies, Adam Mickiewicz University, Poznan, Poland
- Francesca Tinelli
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy
15
Aggius-Vella E, Kolarik AJ, Gori M, Cirstea S, Campus C, Moore BCJ, Pardhan S. Comparison of auditory spatial bisection and minimum audible angle in front, lateral, and back space. Sci Rep 2020; 10:6279. PMID: 32286362; PMCID: PMC7156409; DOI: 10.1038/s41598-020-62983-z.
Abstract
Although vision is important for calibrating auditory spatial perception, it only provides information about frontal sound sources. Previous studies of blind and sighted people support the idea that azimuthal spatial bisection in frontal space requires visual calibration, while detection of a change in azimuth (minimum audible angle, MAA) does not. The influence of vision on the ability to map frontal, lateral and back space has not been investigated. Performance in spatial bisection and MAA tasks was assessed for normally sighted blindfolded subjects using bursts of white noise presented frontally, laterally, or from the back relative to the subjects. Thresholds for both tasks were similar in frontal space, lower for the MAA task than for the bisection task in back space, and higher for the MAA task in lateral space. Two interpretations of the results are discussed, one in terms of visual calibration and the use of internal representations of source location and the other based on comparison of the magnitude or direction of change of the available binaural cues. That bisection thresholds were increased in back space relative to front space, where visual calibration information is unavailable, suggests that an internal representation of source location was used for the bisection task.
Affiliation(s)
- Elena Aggius-Vella
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy. .,Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS) Department, University of Genoa, Genoa, Italy.
| | - Andrew J Kolarik
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.,Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
| | - Silvia Cirstea
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.,School of Computing and Information Science, Anglia Ruskin University, Cambridge, United Kingdom
| | - Claudio Campus
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
| | - Brian C J Moore
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.,Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Shahina Pardhan
- Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom
| |
16
The Cross-Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research since 2000. Neural Plast 2019; 2019:9603469. PMID: 31885540; PMCID: PMC6914961; DOI: 10.1155/2019/9603469.
Abstract
One of the most significant effects of neural plasticity manifests in the case of sensory deprivation when cortical areas that were originally specialized for the functions of the deprived sense take over the processing of another modality. Vision and audition represent two important senses needed to navigate through space and time. Therefore, the current systematic review discusses the cross-modal behavioral and neural consequences of deafness and blindness by focusing on spatial and temporal processing abilities, respectively. In addition, movement processing is evaluated as compiling both spatial and temporal information. We examine whether the sense that is not primarily affected changes in its own properties or in the properties of the deprived modality (i.e., temporal processing as the main specialization of audition and spatial processing as the main specialization of vision). References to the metamodal organization, supramodal functioning, and the revised neural recycling theory are made to address global brain organization and plasticity principles. Generally, according to the reviewed studies, behavioral performance is enhanced in those aspects for which both the deprived and the overtaking senses provide adequate processing resources. Furthermore, the behavioral enhancements observed in the overtaking sense (i.e., vision in the case of deafness and audition in the case of blindness) are clearly limited by the processing resources of the overtaking modality. Thus, the brain regions that were previously recruited during the behavioral performance of the deprived sense now support a similar behavioral performance for the overtaking sense. This finding suggests a more input-unspecific and processing principle-based organization of the brain. Finally, we highlight the importance of controlling for and stating factors that might impact neural plasticity and the need for further research into visual temporal processing in deaf subjects.
17
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: Click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019; 46:21-35. PMID: 31556685; PMCID: PMC6936248; DOI: 10.1037/xhp0000697.
Abstract
People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted echolocation beginners, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head level, but not at ground level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system's ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. Vision loss has negative consequences for people's mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking.
Importantly, the report also highlights that echolocation and long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
Affiliation(s)
- Xinyu Zhang
- School of Information and Electronics, Beijing Institute of Technology
- Michail Antoniou
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, University of Birmingham
18
Ahmad H, Setti W, Campus C, Capris E, Facchini V, Sandini G, Gori M. The Sound of Scotoma: Audio Space Representation Reorganization in Individuals With Macular Degeneration. Front Integr Neurosci 2019; 13:44. PMID: 31481884; PMCID: PMC6710446; DOI: 10.3389/fnint.2019.00044.
Abstract
Blindness is an ideal condition in which to study the role of visual input in the development of spatial representation, as studies have shown how audio space representation reorganizes in blindness. However, how this spatial reorganization works is still unclear. A limitation of studying blindness is that it is a "stable" condition, which does not allow the mechanisms underlying the progress of this reorganization to be studied. To overcome this problem, here we study, for the first time, audio spatial reorganization in 18 adults with macular degeneration (MD), for whom the loss of vision due to scotoma is an ongoing progressive process. Our results show that the loss of vision produces immediate changes in the processing of spatial audio signals. In individuals with MD, lateral sounds are "attracted" toward the central scotoma position, resulting in a strong bias in the spatial auditory percept. This result suggests that the reorganization of audio space representation is a fast and plastic process that also occurs later in life, after vision loss.
Affiliation(s)
- Hafsah Ahmad
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Walter Setti
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
19
Volpe G, Gori M. Multisensory Interactive Technologies for Primary Education: From Science to Technology. Front Psychol 2019; 10:1076. PMID: 31316410; PMCID: PMC6611336; DOI: 10.3389/fpsyg.2019.01076.
Abstract
While technology is increasingly used in the classroom, getting teachers and students to accept it has proved more difficult than expected. In this work, we focus on multisensory technologies, and we argue that the intersection between current challenges in pedagogical practice and recent scientific evidence opens novel opportunities for these technologies to bring a significant benefit to the learning process. In our view, multisensory technologies are ideal for effectively supporting an embodied and enactive pedagogical approach that exploits the best-suited sensory modality for teaching a concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing this opportunity and fully exploiting it.
Affiliation(s)
- Gualtiero Volpe
- Casa Paganini-InfoMus, DIBRIS, University of Genoa, Genoa, Italy
- Monica Gori
- U-Vip Unit, Istituto Italiano di Tecnologia, Genoa, Italy
20
Dooley JC, Krubitzer LA. Alterations in cortical and thalamic connections of somatosensory cortex following early loss of vision. J Comp Neurol 2018; 527:1675-1688. PMID: 30444542; DOI: 10.1002/cne.24582.
Abstract
Early loss of vision produces dramatic changes in the functional organization and connectivity of the neocortex in cortical areas that normally process visual inputs, such as the primary and second visual area. This loss also results in alterations in the size, functional organization, and neural response properties of the primary somatosensory area, S1. However, the anatomical substrate for these functional changes in S1 has never been described. In the present investigation, we quantified the cortical and subcortical connections of S1 in animals that were bilaterally enucleated very early in development, prior to the formation of retino-geniculate and thalamocortical pathways. We found that S1 receives dense inputs from novel cortical fields, and that the density of existing cortical and thalamocortical connections was altered. Our results demonstrate that sensory systems develop in tandem and that alterations in sensory input in one system can affect the connections and organization of other sensory systems. Thus, therapeutic intervention following early loss of vision should focus not only on restoring vision, but also on augmenting the natural plasticity of the spared systems.
Affiliation(s)
- James C Dooley
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa
- Leah A Krubitzer
- Center for Neuroscience, University of California, Davis, California; Department of Psychology, University of California, Davis, California
21
Aggius-Vella E, Campus C, Gori M. Different audio spatial metric representation around the body. Sci Rep 2018; 8:9383. PMID: 29925849; PMCID: PMC6010478; DOI: 10.1038/s41598-018-27370-9.
Abstract
Vision seems to have a pivotal role in developing spatial cognition. A recent approach, based on sensory calibration, has highlighted the role of vision in calibrating hearing in spatial tasks. It was shown that blind individuals have specific impairments in audio spatial bisection tasks. Vision is available only in the frontal space, leading to a "natural" blindness in the back. If vision is important for audio space calibration, then the frontal auditory space should be better represented than the back auditory space. In this study, we investigated this point by comparing frontal and back audio spatial metric representations. We measured precision in the spatial bisection task, for which vision seems to be fundamental to calibrate audition, in twenty-three sighted subjects. Two control tasks, a minimum audible angle (MAA) task and a temporal bisection task, were employed in order to evaluate auditory precision in the different regions considered. While no differences were observed between frontal and back space in the MAA and temporal bisection tasks, a significant difference was found in the spatial bisection task, where subjects performed better in the frontal space. Our results are in agreement with the idea that vision is important in developing auditory spatial metric representation in sighted individuals.
Affiliation(s)
- Elena Aggius-Vella
- U-VIP: Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus
- U-VIP: Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- U-VIP: Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
22
Nelson JS, Kuling IA. Spatial Representation of the Workspace in Blind, Low Vision, and Sighted Human Participants. Iperception 2018; 9:2041669518781877. PMID: 29977492; PMCID: PMC6024533; DOI: 10.1177/2041669518781877.
Abstract
It has been proposed that haptic spatial perception depends on one's visual abilities. We tested spatial perception in the workspace using a combination of haptic matching and line drawing tasks. There were 132 participants with varying degrees of visual ability ranging from congenitally blind to normally sighted. Each participant was blindfolded and asked to match a haptic target position felt under a table with their nondominant hand using a pen in their dominant hand. Once the pen was in position on the tabletop, they had to draw a line of equal length to a previously felt reference object by moving the pen laterally. We used targets at three different locations to evaluate whether different starting positions relative to the body give rise to different matching errors, drawn line lengths, or drawn line angles. We found no influence of visual ability on matching error, drawn line length, or line angle, but we found that early-blind participants are slightly less consistent in their matching errors across space. We conclude that the elementary haptic abilities tested in these tasks do not depend on visual experience.
Affiliation(s)
- Jacob S. Nelson
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Irene A. Kuling
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
23
Vercillo T, Tonelli A, Gori M. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization. Cognition 2018; 170:263-269. DOI: 10.1016/j.cognition.2017.10.013.
24
Cuturi LF, Gori M. The Effect of Visual Experience on Perceived Haptic Verticality When Tilted in the Roll Plane. Front Neurosci 2017; 11:687. PMID: 29270109; PMCID: PMC5723665; DOI: 10.3389/fnins.2017.00687.
Abstract
The orientation of the body in space can influence perception of verticality leading sometimes to biases consistent with priors peaked at the most common head and body orientation, that is upright. In this study, we investigate haptic perception of verticality in sighted individuals and early and late blind adults when tilted counterclockwise in the roll plane. Participants were asked to perform a stimulus orientation discrimination task with their body tilted to their left ear side 90° relative to gravity. Stimuli were presented by using a motorized haptic bar. In order to test whether different reference frames relative to the head influenced perception of verticality, we varied the position of the stimulus on the body longitudinal axis. Depending on the stimulus position sighted participants tended to have biases away or toward their body tilt. Visually impaired individuals instead show a different pattern of verticality estimations. A bias toward head and body tilt (i.e., Aubert effect) was observed in late blind individuals. Interestingly, no strong biases were observed in early blind individuals. Overall, these results posit visual sensory information to be fundamental in influencing the haptic readout of proprioceptive and vestibular information about body orientation relative to gravity. The acquisition of an idiotropic vector signaling the upright might take place through vision during development. Regarding early blind individuals, independent spatial navigation experience likely enhanced by echolocation behavior might have a role in such acquisition. In the case of participants with late onset blindness, early experience of vision might lead them to anchor their visually acquired priors to the haptic modality with no disambiguation between head and body references as observed in sighted individuals (Fraser et al., 2015). 
With our study, we investigated haptic perception of the direction of gravity at unusual body tilts when vision is absent due to visual impairment. Our findings thus shed light on the influence of proprioceptive/vestibular sensory information on haptically perceived verticality in blind individuals, showing how this phenomenon is affected by visual experience.
Affiliation(s)
- Luigi F Cuturi
- Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genoa, Italy
25
Thaler L, Foresteire D. Visual sensory stimulation interferes with people's ability to echolocate object size. Sci Rep 2017; 7:13069. PMID: 29026115; PMCID: PMC5638915; DOI: 10.1038/s41598-017-12967-3.
Abstract
Echolocation is the ability to use sound-echoes to infer spatial information about the environment. People can echolocate for example by making mouth clicks. Previous research suggests that echolocation in blind people activates brain areas that process light in sighted people. Research has also shown that echolocation in blind people may replace vision for calibration of external space. In the current study we investigated if echolocation may also draw on ‘visual’ resources in the sighted brain. To this end, we paired a sensory interference paradigm with an echolocation task. We found that exposure to an uninformative visual stimulus (i.e. white light) while simultaneously echolocating significantly reduced participants’ ability to accurately judge object size. In contrast, a tactile stimulus (i.e. vibration on the skin) did not lead to a significant change in performance (neither in sighted, nor blind echo expert participants). Furthermore, we found that the same visual stimulus did not affect performance in auditory control tasks that required detection of changes in sound intensity, sound frequency or sound location. The results suggest that processing of visual and echo-acoustic information draw on common neural resources.
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
- D Foresteire
- Department of Psychology, Durham University, Durham, United Kingdom
26
Abstract
Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds, and their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a smaller bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions.
27
Kolarik AJ, Scarfe AC, Moore BCJ, Pardhan S. Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation. PLoS One 2017; 12:e0175750. PMID: 28407000; PMCID: PMC5391114; DOI: 10.1371/journal.pone.0175750.
Abstract
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that for the other groups using audition, but was comparable to that for the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
Affiliation(s)
- Andrew J. Kolarik
- Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom; Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Centre for the Study of the Senses, Institute of Philosophy, University of London, London, United Kingdom
- Amy C. Scarfe
- Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom; Department of Clinical Engineering, Medical Imaging and Medical Physics Directorate, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, United Kingdom
- Brian C. J. Moore
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Shahina Pardhan
- Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom
28
Gori M, Chilosi A, Forli F, Burr D. Audio-visual temporal perception in children with restored hearing. Neuropsychologia 2017; 99:350-359. PMID: 28365363; DOI: 10.1016/j.neuropsychologia.2017.03.025.
Abstract
It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar audio and visual thresholds and some evidence of gain in multisensory audio-visual temporal conditions. Interestingly, we found a strong correlation between the auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Anna Chilosi
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy
- Francesca Forli
- Azienda Ospedaliero-Universitaria Pisana, Pisa, Italy; Dipartimento di Psicologia, Università Degli Studi di Firenze, Via S. Salvi 12, 50125, Florence, Italy
- David Burr
- School of Psychology, University of Western Australia, Perth, WA, Australia
29
Finocchietti S, Cappagli G, Gori M. Auditory Spatial Recalibration in Congenital Blind Individuals. Front Neurosci 2017; 11:76. PMID: 28261053; PMCID: PMC5309234; DOI: 10.3389/fnins.2017.00076.
Abstract
Blind individuals show impairments in auditory spatial skills that require a complex spatial representation of the environment. We suggest that this is partially due to the egocentric frame of reference used by blind individuals. Here we investigate the possibility of reducing these auditory spatial impairments with an audio-motor training. Our hypothesis is that the association between a motor command and the corresponding movement's sensory feedback can provide an allocentric frame of reference and consequently help blind individuals to understand complex spatial relationships. Subjects were required to localize the end point of a moving sound before and after either 2 min of audio-motor training or a complete rest. During the training, subjects were asked to move their hand, and consequently the sound source, to freely explore the space around the setup and the body. Both congenitally blind individuals (N = 20) and blindfolded healthy controls (N = 28) participated in the study. Results suggest that the audio-motor training was effective in improving space perception in blind individuals. The improvement was not observed in subjects who did not perform the training. This study demonstrates that it is possible to recalibrate auditory spatial representation in congenitally blind individuals with a short audio-motor training, and it provides new insights for rehabilitation protocols for blind people.
Affiliation(s)
- Sara Finocchietti
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
30
Gori M, Cappagli G, Baud-Bovy G, Finocchietti S. Shape Perception and Navigation in Blind Adults. Front Psychol 2017; 8:10. PMID: 28144226; PMCID: PMC5240028; DOI: 10.3389/fpsyg.2017.00010.
Abstract
Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available, and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills; however, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. In the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development.
Affiliation(s)
- Monica Gori: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Gabriel Baud-Bovy: Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genoa, Italy; Unit of Experimental Psychology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Sara Finocchietti: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy

31
Abstract
Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. Several anecdotal accounts associate complete blindness with exceptional hearing abilities, but the experimental evidence supporting such claims is matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings that support either enhancements or deficits in spatial hearing following visual loss, and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence is examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane; potential explanations for these contrasting findings are discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits.
Affiliation(s)
- Patrice Voss: Cognitive Neuroscience Unit, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada

32
Cuturi LF, Aggius-Vella E, Campus C, Parmiggiani A, Gori M. From science to technology: Orientation and mobility in blind children and adults. Neurosci Biobehav Rev 2016; 71:240-251. [DOI: 10.1016/j.neubiorev.2016.08.019]
33
Learning to echolocate in sighted people: a correlational study on attention, working memory and spatial abilities. Exp Brain Res 2016; 235:809-818. [PMID: 27888324] [PMCID: PMC5315722] [DOI: 10.1007/s00221-016-4833-z]
Abstract
Echolocation can be beneficial for the orientation and mobility of visually impaired people. Research has shown considerable individual differences in acquiring this skill; however, the individual characteristics that affect the learning of echolocation are largely unknown. In the present study, we examined individual factors that are likely to affect learning to echolocate: sustained and divided attention, working memory, and spatial abilities. To that end, sighted participants with normal hearing performed an echolocation task adapted from a previously reported size-discrimination task. In line with existing studies, we found large individual differences in echolocation ability. We also found indications that participants were able to improve their echolocation ability. Furthermore, we found a significant positive correlation between improvement in echolocation and sustained and divided attention, as measured with the PASAT (Paced Auditory Serial Addition Test). No significant correlations were found with our tests of working memory and spatial abilities. These findings may have implications for the development of echolocation training guidelines tailored to the individual with a visual impairment.
34
Auditory spatial representations of the world are compressed in blind humans. Exp Brain Res 2016; 235:597-606. [PMID: 27837259] [PMCID: PMC5272902] [DOI: 10.1007/s00221-016-4823-1]
Abstract
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
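The compressive power-function fit described in this abstract can be sketched with a short, stdlib-only example. The distances and judgments below are hypothetical (not the study's data); the point is only to show the fitting step: a power law judged = k · actual^a, where an exponent a < 1 captures the reported compression (nearby sources overestimated, remote sources underestimated).

```python
import math

# Hypothetical data (not from the study): actual vs. judged virtual
# source distance in metres, illustrating a compressed mapping.
actual = [1.0, 2.0, 4.0, 8.0, 16.0]
judged = [1.4, 2.3, 3.6, 5.8, 9.1]

# A power law judged = k * actual**a is linear in log-log coordinates:
# log(judged) = log(k) + a * log(actual), so fit by least squares there.
x = [math.log(d) for d in actual]
y = [math.log(j) for j in judged]
mx, my = sum(x) / len(x), sum(y) / len(y)
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
k = math.exp(my - a * mx)

print(f"exponent a = {a:.2f}, gain k = {k:.2f}")  # a < 1 indicates compression
```

With these made-up numbers the fitted exponent comes out below 1, so judged distance grows more slowly than actual distance, which is the signature of the compression the abstract describes.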
35
Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children. Neurosci Biobehav Rev 2016; 69:79-88. [DOI: 10.1016/j.neubiorev.2016.06.043]
36
Thaler L, Goodale MA. Echolocation in humans: an overview. Wiley Interdiscip Rev Cogn Sci 2016; 7:382-393. [PMID: 27538733] [DOI: 10.1002/wcs.1408]
Abstract
Bats and dolphins are known for their ability to use echolocation. They emit bursts of sound and listen to the echoes that bounce back to detect objects in their environment. What is not as well known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the reflected echoes from mouth clicks. Echolocation may enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is.
Affiliation(s)
- Lore Thaler: Department of Psychology, Durham University, Durham, UK
- Melvyn A Goodale: The Brain and Mind Institute, Department of Psychology, University of Western Ontario, Ontario, Canada

37
Tonelli A, Brayda L, Gori M. Depth Echolocation Learnt by Novice Sighted People. PLoS One 2016; 11:e0156654. [PMID: 27257689] [PMCID: PMC4892586] [DOI: 10.1371/journal.pone.0156654]
Abstract
Some blind people have developed a unique technique, called echolocation, to orient themselves in unknown environments. By self-generating a clicking noise with the tongue, echolocators gain knowledge about the external environment, perceiving detailed object features. It is not clear to date whether sighted individuals can also develop this extremely useful technique. To investigate this, we tested the ability of novice sighted participants to perform a depth echolocation task. Moreover, to evaluate whether the type of room (anechoic or reverberant) and the type of clicking sound (made with the tongue or with the hands) influences the learning of this technique, we divided the sample into four groups: half of the participants produced the clicking sound with their tongue, the other half with their hands; half performed the task in an anechoic chamber, the other half in a reverberant room. Subjects stood in front of five bars, each of a different size and at five different distances from the subject. The dimensions of the bars ensured a constant subtended angle across the five distances considered. The task was to identify the correct distance of the bar. We found that, even by the second session, participants were able to judge the correct depth of the bar at a rate greater than chance. Improvements in both precision and accuracy were observed in all experimental sessions. More interestingly, we found significantly better performance in the reverberant room than in the anechoic chamber. The type of clicking did not modulate our results. This suggests that the echolocation technique can also be learned by sighted individuals and that room reverberation can influence this learning process. More generally, this study shows that total loss of sight is not a prerequisite for echolocation skills; this has potentially important implications for rehabilitation settings for persons with residual vision.
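The "greater than chance" criterion used in this five-distance task can be made concrete with a one-sided exact binomial test. The trial counts below are hypothetical, chosen only to illustrate the calculation; with five equally likely distances, chance accuracy is 1/5 = 0.2.

```python
import math

# Hypothetical illustration (numbers are not from the paper): with five
# possible bar distances, chance accuracy is 1/5 = 0.2. Suppose a
# participant gets 9 of 20 trials correct; is that above chance?
n_trials, n_correct, p_chance = 20, 9, 0.2

# One-sided exact binomial test: probability of n_correct or more
# successes under the chance rate, P(X >= n_correct).
p_value = sum(
    math.comb(n_trials, i) * p_chance**i * (1 - p_chance) ** (n_trials - i)
    for i in range(n_correct, n_trials + 1)
)

print(f"p = {p_value:.4f}")  # comes out below 0.05, i.e. above chance
```

The same test, applied per participant or per session, is the standard way to decide whether discrimination performance exceeds the guessing rate.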
Affiliation(s)
- Alessia Tonelli: U-VIP: Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy; Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova, Italy
- Luca Brayda: Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova, Italy
- Monica Gori: U-VIP: Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Via Morego 30, Genova, Italy; Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova, Italy

38
|
Thaler L, Castillo-Serrano J. People's Ability to Detect Objects Using Click-Based Echolocation: A Direct Comparison between Mouth-Clicks and Clicks Made by a Loudspeaker. PLoS One 2016; 11:e0154868. [PMID: 27135407] [PMCID: PMC4852930] [DOI: 10.1371/journal.pone.0154868]
Abstract
Echolocation is the ability to use reflected sound to obtain information about the spatial environment. It is an active process that requires both the production of the emission and the sensory processing of the resultant sound. Appreciating the general usefulness of echo-acoustic cues for people, in particular those with vision impairments, various devices have been built that exploit the principle of echolocation to obtain and provide information about the environment. Common to all these devices is that they do not require the person to make a sound; instead, the device produces the emission autonomously and feeds the resultant sound back to the user. Here we tested whether echolocation performance in a simple object detection task was affected by the use of a head-mounted loudspeaker as compared to active clicking. We found that 27 sighted participants new to echolocation generally did better when they used the loudspeaker as compared to mouth-clicks, and that two blind participants with experience in echolocation did equally well with mouth-clicks and the speaker. Importantly, performance of sighted participants was not statistically different from performance of blind experts when they used the speaker. Based on acoustic click data collected from a subset of our participants, those participants whose mouth-clicks were more similar to the speaker clicks, and thus had higher peak frequencies and sound intensity, did better. We conclude that our results are encouraging for the consideration and development of assistive devices that exploit the principle of echolocation.
Affiliation(s)
- Lore Thaler: Department of Psychology, Durham University, Durham, United Kingdom

39
An assessment of auditory-guided locomotion in an obstacle circumvention task. Exp Brain Res 2016; 234:1725-35. [PMID: 26879767] [PMCID: PMC4851710] [DOI: 10.1007/s00221-016-4567-y]
Abstract
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded, normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth-click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision, without generating sound, was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than with visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike under visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
40
Fiehler K, Schütz I, Meller T, Thaler L. Neural Correlates of Human Echolocation of Path Direction During Walking. Multisens Res 2015; 28:195-226. [PMID: 26152058] [DOI: 10.1163/22134808-00002491]
Abstract
Echolocation can be used by blind and sighted humans to navigate their environment. The current study investigated the neural activity underlying processing of path direction during walking. Brain activity was measured with fMRI in three blind echolocation experts, and in three blind and three sighted novices. During scanning, participants listened to binaural recordings that had been made prior to scanning while echolocation experts echolocated during walking along a corridor that could continue to the left, right, or straight ahead. Participants also listened to control sounds that contained ambient sounds and clicks, but no echoes. The task was to decide whether the corridor in the recording continued to the left, right, or straight ahead, or whether they were listening to a control sound. All participants successfully dissociated echo from no-echo sounds; however, echolocation experts were superior at direction detection. We found brain activations associated with processing of path direction (contrast: echo vs. no echo) in the superior parietal lobule (SPL) and inferior frontal cortex (IFC) in each group. In sighted novices, additional activation occurred in the inferior parietal lobule (IPL) and in middle and superior frontal areas. Within the framework of the dorso-dorsal and ventro-dorsal pathways proposed by Rizzolatti and Matelli (2003), our results suggest that blind participants may automatically assign directional meaning to the echoes, while sighted participants may apply more conscious, high-level spatial processes. The high similarity of SPL and IFC activations across all three groups, in combination with previous research, also suggests that all participants recruited a multimodal spatial processing system for action (here: locomotion).
41
Tonelli A, Brayda L, Gori M. Task-dependent calibration of auditory spatial perception through environmental visual observation. Front Syst Neurosci 2015; 9:84. [PMID: 26082692] [PMCID: PMC4451354] [DOI: 10.3389/fnsys.2015.00084]
Abstract
Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test this, we investigated possible improvements of acoustic precision in sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task, but not in the MAA task, after observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration; echoes may mediate the transfer of information from the visual to the auditory system.
Affiliation(s)
- Alessia Tonelli: Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Luca Brayda: Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori: Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy