1. Böing S, Van der Stigchel S, Van der Stoep N. The impact of acute asymmetric hearing loss on multisensory integration. Eur J Neurosci 2024;59:2373-2390. PMID: 38303554. DOI: 10.1111/ejn.16263.
Abstract
Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
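The statistical models of optimal cue integration mentioned in this abstract typically take the maximum-likelihood form, in which each sensory estimate is weighted by its relative reliability (inverse variance). A minimal sketch with illustrative values (not this paper's data):

```python
def mle_integrate(x_a, var_a, x_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of an
    auditory and a visual location estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    x_av = w_a * x_a + w_v * x_v                 # combined estimate
    var_av = 1 / (1 / var_a + 1 / var_v)         # combined variance
    return x_av, var_av

# Illustrative numbers: with hearing loss the auditory variance grows,
# so the visual weight rises and the combined estimate is pulled toward
# the visual location, while combined variance stays below either cue's.
x_av, var_av = mle_integrate(x_a=10.0, var_a=25.0, x_v=0.0, var_v=1.0)
```

Under this model, the visual dominance reported above follows directly from degraded auditory reliability; what the model does not predict is the latency cost the authors observe.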
Affiliation(s)
- Sanne Böing
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nathan Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
2. Xiong Y, Nemargut JP, Bradley C, Wittich W, Legge GE. Development and validation of a questionnaire for assessing visual and auditory spatial localization abilities in dual sensory impairment. Sci Rep 2024;14:7911. PMID: 38575713. PMCID: PMC10994906. DOI: 10.1038/s41598-024-58363-6.
Abstract
Spatial localization is important for social interaction and safe mobility, and relies heavily on vision and hearing. While people with vision or hearing impairment compensate with their intact sense, people with dual sensory impairment (DSI) may require rehabilitation strategies that take both impairments into account. There is currently no tool for assessing the joint effect of vision and hearing impairment on spatial localization in this large and increasing population. To this end, we developed a novel Dual Sensory Spatial Localization Questionnaire (DS-SLQ) that consists of 35 everyday spatial localization tasks. The DS-SLQ asks participants about their difficulty completing different tasks using only vision or hearing, as well as the primary sense they rely on for each task. We administered the DS-SLQ to 104 participants with heterogeneous vision and hearing status. Rasch analysis confirmed the psychometric validity of the DS-SLQ and the feasibility of comparing vision and hearing spatial abilities in a unified framework. Vision and hearing impairment were associated with decreased visual and auditory spatial abilities. Differences between vision and hearing abilities predicted overall sensory reliance patterns. In DSI rehabilitation, the DS-SLQ may be useful for measuring vision and hearing spatial localization abilities and predicting the better sense for completing different spatial localization tasks.
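The Rasch analysis used to validate the DS-SLQ models the probability of succeeding at (or endorsing) an item as a logistic function of the difference between person ability and item difficulty, placing both on a common logit scale. A dichotomous sketch with made-up ability and difficulty values:

```python
import math

def rasch_p(ability, difficulty):
    """Dichotomous Rasch model: probability of success on an item,
    given person ability and item difficulty (both in logits)."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# When ability equals difficulty the success probability is exactly 0.5;
# easier items (lower difficulty) yield higher success probabilities.
p_equal = rasch_p(ability=0.0, difficulty=0.0)
p_easy = rasch_p(ability=0.0, difficulty=-2.0)
```

Because vision-based and hearing-based item difficulties sit on the same logit scale after Rasch calibration, the two spatial abilities can be compared within one framework, which is the key property the questionnaire exploits.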
Affiliation(s)
- Yingzi Xiong
- Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, USA
- Chris Bradley
- Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
- Walter Wittich
- School of Optometry, Université de Montréal, Montreal, Canada
- Gordon E Legge
- Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, USA
3. Bruns P, Thun C, Röder B. Quantifying accuracy and precision from continuous response data in studies of spatial perception and crossmodal recalibration. Behav Res Methods 2024;56:3814-3830. PMID: 38684625. PMCID: PMC11133116. DOI: 10.3758/s13428-024-02416-1.
Abstract
The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by unspecific test repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
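The two metric families contrasted in this abstract can be sketched as follows: error-based metrics summarize single-trial signed errors (constant error as accuracy, variable error as precision), while regression-based metrics fit responses on true target locations (intercept as bias, slope as over- or underestimation of eccentricity). The data below are hypothetical, for illustration only:

```python
import statistics

def error_metrics(targets, responses):
    """Error-based metrics: constant error (mean signed error) and
    variable error (SD of signed errors)."""
    errors = [r - t for t, r in zip(targets, responses)]
    return statistics.mean(errors), statistics.stdev(errors)

def regression_metrics(targets, responses):
    """Regression-based metrics: ordinary least-squares slope and
    intercept of responses regressed on true target locations."""
    mt, mr = statistics.mean(targets), statistics.mean(responses)
    slope = (sum((t - mt) * (r - mr) for t, r in zip(targets, responses))
             / sum((t - mt) ** 2 for t in targets))
    return slope, mr - slope * mt

targets = [-20, -10, 0, 10, 20]    # true azimuths (degrees)
responses = [-24, -11, 1, 13, 23]  # hypothetical localizations

ce, ve = error_metrics(targets, responses)           # constant, variable error
slope, intercept = regression_metrics(targets, responses)
```

With targets centred on zero, the constant error equals the regression intercept, while a slope above 1 reflects overestimated eccentricity; the paper's point is that this slope and the variable error can be highly correlated and should not be treated as independent measures.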
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Caroline Thun
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
4. Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023;44:5972-5981. PMID: 37811869. PMCID: PMC10619374. DOI: 10.1002/hbm.26489.
Abstract
To solve spatial tasks, the human brain recruits the visual cortices. How spatial information is represented, however, is not fixed: it depends on the reference frames in which the spatial inputs are encoded. The present study investigates how the type of spatial representation influences the recruitment of visual areas during multisensory spatial tasks. We tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that the spatial tasks specifically modulated the occipital event-related potentials (ERPs) after the onset of the stimuli. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection, and a more robust later occipital response (110-160 ms) when they processed the spatial localization. This observation suggests that the different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
Affiliation(s)
- G. Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- M. B. Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- C. Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- M. Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
5. Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023;27:961-973. PMID: 37208286. DOI: 10.1016/j.tics.2023.04.012.
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early, coarse multisensory integration abilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
6. Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022;176:108391. DOI: 10.1016/j.neuropsychologia.2022.108391.
7. Senna I, Piller S, Gori M, Ernst M. The power of vision: calibration of auditory space after sight restoration from congenital cataracts. Proc Biol Sci 2022;289:20220768. PMID: 36196538. PMCID: PMC9532985. DOI: 10.1098/rspb.2022.0768.
Abstract
Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used for calibrating space in the other senses. Here we investigated whether sight restoration after prolonged early-onset visual impairment can lead to the development of more accurate auditory space perception. We tested participants who were surgically treated for congenital dense bilateral cataracts several years after birth. In Experiment 1 we assessed participants' ability to understand spatial relationships among sounds, by asking them to spatially bisect three consecutive, laterally separated sounds. Participants performed better after surgery than participants tested before surgery. However, they still performed worse than sighted controls. In Experiment 2, we demonstrated that single sound localization in the two-dimensional frontal plane improves quickly after surgery, approaching performance levels of sighted controls. Such recovery seems to be mediated by visual acuity, as participants gaining higher post-surgical visual acuity performed better in both experiments. These findings provide strong support for the hypothesis that vision calibrates auditory space perception. Importantly, this also demonstrates that this process can occur even when vision is restored after years of visual deprivation.
Affiliation(s)
- Irene Senna
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Sophia Piller
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Marc Ernst
- Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
8. Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022;44:656-667. PMID: 36169038. PMCID: PMC9842891. DOI: 10.1002/hbm.26090.
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this aspect also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
9. Lombera EN, Guevara MA, Vergara RO. Is source elevation an auditory distance cue? A preliminary study. Perception 2022;51. PMID: 35989643. DOI: 10.1177/03010066221114589.
Abstract
The aim of this work was to evaluate whether the angular elevation of a sound source could generate auditory cues that improve auditory distance perception (ADP), in a similar way to that previously reported for the visual modality. For this purpose, we compared ADP curves obtained with sources located both at the listeners' ears and at ground level. Our hypothesis was that participants can interpret the relation between the elevation and distance of ground-level sources (which are linked geometrically), so we expected them to perceive their distances more accurately than those at ear level. However, the responses obtained with sources located at ground level were almost identical to those obtained at the height of the listeners' ears, showing that, under the conditions of our experiment, auditory elevation cues do not influence auditory distance perception.
Affiliation(s)
- Esteban N Lombera
- Laboratorio de Acústica y Percepción Sonora, CONICET, Universidad Nacional de Quilmes, Argentina
- Departamento de Ciencia y Tecnología, Universidad Nacional de Tres de Febrero, Argentina
- Manuel A Guevara
- Departamento de Ciencia y Tecnología, Universidad Nacional de Tres de Febrero, Argentina
- Ramiro O Vergara
- Laboratorio de Acústica y Percepción Sonora, CONICET, Universidad Nacional de Quilmes, Argentina
10. Xiong YZ, Addleman DA, Nguyen NA, Nelson PB, Legge GE. Visual and Auditory Spatial Localization in Younger and Older Adults. Front Aging Neurosci 2022;14:838194. PMID: 35493928. PMCID: PMC9043801. DOI: 10.3389/fnagi.2022.838194.
Abstract
Visual and auditory localization abilities are crucial in real-life tasks such as navigation and social interaction. Aging is frequently accompanied by vision and hearing loss, affecting spatial localization. The purpose of the current study is to elucidate the effect of typical aging on spatial localization and to establish a baseline for older individuals with pathological sensory impairment. Using a verbal report paradigm, we investigated how typical aging affects visual and auditory localization performance, the reliance on vision during sound localization, and sensory integration strategies when localizing audiovisual targets. Fifteen younger adults (mean age = 26 years) and thirteen older adults (mean age = 68 years) participated in this study, all with age-adjusted normal vision and hearing based on clinical standards. There were significant localization differences between younger and older adults, with the older group missing peripheral visual stimuli at significantly higher rates, localizing central stimuli as more peripheral, and being less precise in localizing sounds from central locations when compared to younger subjects. Both groups localized auditory targets better when the test space was visible compared to auditory localization when blindfolded. The two groups also exhibited similar patterns of audiovisual integration, showing optimal integration in central locations that was consistent with a Maximum-Likelihood Estimation model, but non-optimal integration in peripheral locations. These findings suggest that, despite the age-related changes in auditory and visual localization, the interactions between vision and hearing are largely preserved in older individuals without pathological sensory impairments.
Affiliation(s)
- Ying-Zi Xiong
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
- Douglas A. Addleman
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States
- Nam Anh Nguyen
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Peggy B. Nelson
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, United States
- Gordon E. Legge
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, MN, United States
11. Bottini R, Nava E, De Cuntis I, Benetti S, Collignon O. Synesthesia in a congenitally blind individual. Neuropsychologia 2022;170:108226. DOI: 10.1016/j.neuropsychologia.2022.108226.
12. Auditory distance perception in front and rear space. Hear Res 2022;417:108468. DOI: 10.1016/j.heares.2022.108468.
13. Coudert A, Gaveau V, Gatel J, Verdelet G, Salemme R, Farne A, Pavani F, Truy E. Spatial Hearing Difficulties in Reaching Space in Bilateral Cochlear Implant Children Improve With Head Movements. Ear Hear 2021;43:192-205. PMID: 34225320. PMCID: PMC8694251. DOI: 10.1097/aud.0000000000001090.
Abstract
The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in the reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities.
Affiliation(s)
- Aurélie Coudert
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, Lyon, France
- Department of Pediatric Otolaryngology-Head & Neck Surgery, Femme Mere Enfant Hospital, Hospices Civils de Lyon, Lyon, France
- Department of Otolaryngology-Head & Neck Surgery, Edouard Herriot Hospital, Hospices Civils de Lyon, Lyon, France
- University of Lyon 1, Lyon, France
- Hospices Civils de Lyon, Neuro-immersion Platform, Lyon, France
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
14. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021;22:365-386. PMID: 34014416. PMCID: PMC8329114. DOI: 10.1007/s10162-021-00789-0.
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions derived from this combination of information and that shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding at this point in time regarding this topic. Following a general introduction, the review is divided into 5 sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence in audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception, scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
15. Gourgou E, Adiga K, Goettemoeller A, Chen C, Hsu AL. Caenorhabditis elegans learning in a structured maze is a multisensory behavior. iScience 2021;24:102284. PMID: 33889812. PMCID: PMC8050377. DOI: 10.1016/j.isci.2021.102284.
Abstract
We show that C. elegans nematodes learn to associate food with a combination of proprioceptive cues and information on the structure of their surroundings (maze), perceived through mechanosensation. By using the custom-made Worm-Maze platform, we demonstrate that C. elegans young adults can locate food in T-shaped mazes and, following that experience, learn to reach a specific maze arm. C. elegans learning inside the maze is possible after a single training session, it resembles working memory, and it prevails over conflicting environmental cues. We provide evidence that the observed learning is a food-triggered multisensory behavior, which requires mechanosensory and proprioceptive input, and utilizes cues about the structural features of nematodes' environment and their body actions. The CREB-like transcription factor and dopamine signaling are also involved in maze performance. Lastly, we show that the observed aging-driven decline of C. elegans learning ability in the maze can be reversed by starvation.
- C. elegans can be trained to reach a target arm in a T-shaped maze
- Learning requires the contribution of tactile and proprioceptive cues
- C. elegans follow a kind of response learning strategy in the maze environment
- Learning is short-term and sensitive to distraction
Affiliation(s)
- Eleni Gourgou
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Institute of Gerontology, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Kavya Adiga
- Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Anne Goettemoeller
- Neuroscience Program, College of Literature, Science, and the Arts, University of Michigan, Ann Arbor, MI 48109, USA
- Chieh Chen
- Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei 112, Taiwan
- Ao-Lin Hsu
- Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei 112, Taiwan
- Research Center for Healthy Aging and Institute of New Drug Development, China Medical University, Taichung 404, Taiwan
16. Van der Stoep N, Van der Smagt MJ, Notaro C, Spock Z, Naber M. The additive nature of the human multisensory evoked pupil response. Sci Rep 2021;11:707. PMID: 33436889. PMCID: PMC7803952. DOI: 10.1038/s41598-020-80286-1.
Abstract
Pupillometry has received increased interest for its usefulness in measuring various sensory processes as an alternative to behavioural assessments. This is also apparent for multisensory investigations. Studies of the multisensory pupil response, however, have produced conflicting results. Some studies observed super-additive multisensory pupil responses, indicative of multisensory integration (MSI). Others observed additive multisensory pupil responses even though reaction time (RT) measures were indicative of MSI. Therefore, in the present study, we investigated the nature of the multisensory pupil response by combining methodological approaches of previous studies while using supra-threshold stimuli only. In two experiments we presented auditory and visual stimuli to observers that evoked an onset pupil response (be it constriction or dilation) in a simple detection task and a change detection task. In both experiments, the RT data indicated MSI as shown by race model inequality violation. Still, the multisensory pupil response in both experiments could best be explained by linear summation of the unisensory pupil responses. We conclude that the multisensory pupil response for supra-threshold stimuli is additive in nature and cannot be used as a measure of MSI, as only a departure from additivity can unequivocally demonstrate an interaction between the senses.
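The race model inequality test used in this study has a compact form: the audiovisual RT distribution is compared against Miller's bound, F_AV(t) <= F_A(t) + F_V(t). The sketch below is illustrative only; the function name and the simulated reaction times are hypothetical, not data or code from the study.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.linspace(0.05, 0.95, 19)):
    """Check Miller's race model inequality F_AV(t) <= F_A(t) + F_V(t).

    Positive return values mean the audiovisual CDF exceeds the race
    model bound at that quantile, i.e. evidence for multisensory
    integration beyond statistical facilitation.
    """
    t = np.quantile(rt_av, quantiles)          # probe times from AV quantiles
    f_av = np.array([np.mean(rt_av <= x) for x in t])
    bound = np.array([np.mean(rt_a <= x) + np.mean(rt_v <= x) for x in t])
    return f_av - np.minimum(bound, 1.0)       # the bound is capped at 1

# Hypothetical simulated reaction times (ms): audiovisual responses
# are faster than either unisensory condition.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)
rt_v = rng.normal(300, 40, 500)
rt_av = rng.normal(245, 35, 500)

diff = race_model_violation(rt_a, rt_v, rt_av)
print("violation at any quantile:", bool(np.max(diff) > 0))
```

With these made-up parameters the fast quantiles violate the bound; if AV responses arose from pure statistical facilitation (each trial taking the faster of an independent A and V draw), the differences would stay near or below zero.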
Affiliation(s)
- Nathan Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M J Van der Smagt
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- C Notaro
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Z Spock
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M Naber
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
17
Gu L, Mei X, Wu Q, Huang Y, Wu X. Temporal recalibration in vision requires location-based binding. Cognition 2020; 207:104510. [PMID: 33187640] [DOI: 10.1016/j.cognition.2020.104510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/08/2020] [Revised: 10/19/2020] [Accepted: 11/02/2020] [Indexed: 10/23/2022]
Abstract
Occupying the same location and occurring at the same time are the essential spatial and temporal factors for different features of a natural event or object to be integrated. Audio-visual temporal recalibration, as a temporal integration mechanism, refers to the brain's capacity to perceive simultaneity by adjusting for differential delays in the transmission of auditory and visual signals. Co-localization of auditory and visual information, however, is found not to be necessary for audio-visual temporal recalibration to occur. Here, we show that after exposure to a time lag between a visual flash and a visual collision, simultaneity responses were shifted toward an adapt lag in a bound condition where the flash and collision belonged to the same object but not in a separate condition where the flash and collision belonged to spatially separated objects. The results demonstrate that location-based binding is a requisite for temporal recalibration within the visual modality. Our finding suggests that the brain takes the modality difference in object localization into consideration when integrating temporally asynchronous signals.
Affiliation(s)
- Li Gu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Xiaolin Mei
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Qian Wu
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Yingyu Huang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Xiang Wu
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China
18
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. [PMID: 32848575] [PMCID: PMC7406645] [DOI: 10.3389/fnins.2020.00815] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 05/12/2020] [Accepted: 07/10/2020] [Indexed: 12/22/2022]
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. This spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their accessibility to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
- Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
19
Yamasaki D, Ashida H. Size-Distance Scaling With Absolute and Relative Auditory Distance Information. Multisens Res 2020; 33:109-126. [PMID: 31648194] [DOI: 10.1163/22134808-20191467] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/02/2019] [Accepted: 08/10/2019] [Indexed: 11/19/2022]
Abstract
In dynamic 3D space, it is critical for survival to perceive the size of an object and rescale it with its distance from the observer. Humans can perceive distance not only via vision but also via audition, which plays an important role in the localization of objects, especially in visually ambiguous environments. However, whether and how auditory distance information contributes to visual size perception is not well understood. To address this issue, we investigated the efficiency of size-distance scaling by using auditory distance information that was conveyed by binaurally recorded auditory stimuli. We examined the effects of absolute distance information of a single sound sequence (Experiment 1) and relative distance information between two sound sequences (Experiment 2) on visual size estimation performance in darkened and well-lit environments. We demonstrated that humans could perform size-distance disambiguation by using auditory distance information even in darkness. Curiously, relative distance information was more efficient in size-distance scaling than absolute distance information, suggesting a high reliance on relative auditory distance information in our visual spatial experiences. The results highlight a benefit of audiovisual interaction for size-distance processing and calibration of external events under visually degraded situations.
Affiliation(s)
- Daiki Yamasaki
- Graduate School of Letters, Kyoto University, Japan; Japan Society for the Promotion of Science, Japan
20
Gori M, Amadeo MB, Campus C. Temporal cues trick the visual and auditory cortices mimicking spatial cues in blind individuals. Hum Brain Mapp 2020; 41:2077-2091. [PMID: 32048380] [PMCID: PMC7267917] [DOI: 10.1002/hbm.24931] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 09/03/2019] [Revised: 01/03/2020] [Accepted: 01/07/2020] [Indexed: 11/05/2022]
Abstract
In the absence of vision, spatial representation may be altered. When asked to compare the relative distances between three sounds (i.e., auditory spatial bisection task), blind individuals demonstrate significant deficits and do not show an event-related potential response mimicking the visual C1 reported in sighted people. However, we have recently demonstrated that the spatial deficit disappears if coherent time and space cues are presented to blind people, suggesting that they may use time information to infer spatial maps. In this study, we examined whether the modification of temporal cues during space evaluation altered the recruitment of the visual and auditory cortices in blind individuals. We demonstrated that the early (50-90 ms) occipital response, mimicking the visual C1, is not elicited by the physical position of the sound, but by its virtual position suggested by its temporal delay. Even more impressively, in the same time window, the auditory cortex also showed this pattern and responded to temporal instead of spatial coordinates.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
21
Chikara RK, Lo WC, Ko LW. Exploration of Brain Connectivity during Human Inhibitory Control Using Inter-Trial Coherence. Sensors 2020; 20:1722. [PMID: 32204504] [PMCID: PMC7147711] [DOI: 10.3390/s20061722] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 02/11/2020] [Revised: 03/11/2020] [Accepted: 03/16/2020] [Indexed: 11/16/2022]
Abstract
Inhibitory control is a cognitive process that inhibits a response. It is used in everyday activities, such as driving a motorcycle, driving a car and playing a game; its effect can be compared to a red traffic light in the real world. In this study, we investigated brain connectivity under human inhibitory control using the phase lag index and inter-trial coherence (ITC). Human brain connectivity gives a more accurate representation of the functional neural network. Electroencephalography (EEG) data sets, generated from twelve healthy subjects during left- and right-hand inhibition in an auditory stop-signal task, showed that inter-trial coherence in the delta (1-4 Hz) and theta (4-7 Hz) bands increased over the frontal and temporal lobes of the brain. These EEG delta and theta band activities are neural markers that have been related to human inhibition in the frontal lobe. In addition, inter-trial coherence in the delta-theta and alpha (8-12 Hz) bands increased at the occipital lobe through visual stimulation. Moreover, the highest brain connectivity under inhibitory control was observed in the frontal lobe, between the F3-F4 channels, compared with the temporal and occipital lobes. Greater EEG coherence and phase lag index in the frontal lobe are associated with human response inhibition. These findings reveal new insights into the neural network of brain connectivity and the underlying mechanisms of human response inhibition.
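The two connectivity measures named in this abstract have compact definitions: inter-trial coherence is the length of the mean phase unit vector across trials, and the phase lag index captures the consistency of the sign of the phase difference between two channels. A minimal sketch on toy phase data (hypothetical values, not the study's EEG):

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC: length of the mean phase unit vector across trials.

    phases: array (n_trials, n_times) of instantaneous phase in radians.
    Returns one value in [0, 1] per time point; 1 means the phase is
    identical on every trial, 0 means phases are uniformly scattered.
    """
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

def phase_lag_index(phase_a, phase_b):
    """PLI between two channels: consistency of the sign of the phase
    difference, which makes it insensitive to zero-lag coupling."""
    return np.abs(np.mean(np.sign(np.sin(phase_a - phase_b))))

# Toy phases: one condition phase-locked to stimulus onset,
# one with uniformly random phase across trials.
rng = np.random.default_rng(1)
n_trials, n_times = 100, 200
locked = rng.normal(0.0, 0.3, (n_trials, n_times))
scattered = rng.uniform(-np.pi, np.pi, (n_trials, n_times))

print(inter_trial_coherence(locked).mean())         # high, near 1
print(inter_trial_coherence(scattered).mean())      # low, near 0
print(phase_lag_index(locked[0], locked[0] - 0.5))  # constant lag -> 1.0
```

In practice the instantaneous phases would come from a time-frequency decomposition (e.g. a wavelet or Hilbert transform) of band-filtered EEG; the sketch only shows the statistics computed on top of them.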
Affiliation(s)
- Rupesh Kumar Chikara
- Department of Biological Science and Technology, College of Biological Science and Technology, National Chiao Tung University, Hsinchu 300, Taiwan
- Center for Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao Tung University, Hsinchu 300, Taiwan
- Wei-Cheng Lo
- Department of Biological Science and Technology, College of Biological Science and Technology, National Chiao Tung University, Hsinchu 300, Taiwan
- Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Hsinchu 300, Taiwan
- Correspondence: (W.-C.L.); (L.-W.K.)
- Li-Wei Ko
- Department of Biological Science and Technology, College of Biological Science and Technology, National Chiao Tung University, Hsinchu 300, Taiwan
- Center for Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao Tung University, Hsinchu 300, Taiwan
- Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Hsinchu 300, Taiwan
- The Drug Development and Value Creation Research Center, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
- Correspondence: (W.-C.L.); (L.-W.K.)
22
Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020; 10:4562. [PMID: 32165690] [PMCID: PMC7067813] [DOI: 10.1038/s41598-020-61332-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 07/16/2019] [Accepted: 02/24/2020] [Indexed: 11/29/2022]
Abstract
Adaptation to systematic visual distortions is well-documented but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (very short duration stimulus not permitting active exploration) and dynamic conditions (continuous stimulus allowing participants time to freely move their heads or remain still). We analyze head movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition with static sound sources displaying apparent movement. This effect is reduced after a short training period and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions where motor patterns are more restricted. Strategies become less exploratory and more direct with training. Results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
Affiliation(s)
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
- Ezequiel A Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
- L Guillermo Gilberto
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Valentín Lunati
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- M Virginia Barrios
- Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
23
Kramer A, Röder B, Bruns P. Feedback Modulates Audio-Visual Spatial Recalibration. Front Integr Neurosci 2020; 13:74. [PMID: 32009913] [PMCID: PMC6979315] [DOI: 10.3389/fnint.2019.00074] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 09/20/2019] [Accepted: 12/10/2019] [Indexed: 11/13/2022]
Abstract
In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus (VS) and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a threshold that was based on participants' performance in the pretest. As expected, when error feedback was based on the position of the VS, auditory localization during adaptation trials shifted toward the position of the VS. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the VS position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either. As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top-down influences. Such top-down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.
Affiliation(s)
- Alexander Kramer
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
24
Alaçam Ö, Li X, Menzel W, Staron T. Crossmodal Language Comprehension: Psycholinguistic Insights and Computational Approaches. Front Neurorobot 2020; 14:2. [PMID: 32116634] [PMCID: PMC7025497] [DOI: 10.3389/fnbot.2020.00002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/31/2019] [Accepted: 01/13/2020] [Indexed: 11/13/2022]
Abstract
Crossmodal interaction in situated language comprehension is important for effective and efficient communication. The relationship between linguistic and visual stimuli provides mutual benefit: While vision contributes, for instance, information to improve language understanding, language in turn plays a role in driving the focus of attention in the visual environment. However, language and vision are two different representational modalities, which accommodate different aspects and granularities of conceptualizations. To integrate them into a single, coherent system solution is still a challenge, which could profit from inspiration by human crossmodal processing. Based on fundamental psycholinguistic insights into the nature of situated language comprehension, we derive a set of performance characteristics facilitating the robustness of language understanding, such as crossmodal reference resolution, attention guidance, or predictive processing. Artificial systems for language comprehension should meet these characteristics in order to be able to perform in a natural and smooth manner. We discuss how empirical findings on the crossmodal support of language comprehension in humans can be applied in computational solutions for situated language comprehension and how they can help to mitigate the shortcomings of current approaches.
Affiliation(s)
- Özge Alaçam
- Natural Language Systems Group, Department of Informatics, University of Hamburg, Hamburg, Germany
- Xingshan Li
- Reading and Visual Cognition Lab, Institute of Psychology, Chinese Academy of Science, Beijing, China
- Wolfgang Menzel
- Natural Language Systems Group, Department of Informatics, University of Hamburg, Hamburg, Germany
- Tobias Staron
- Natural Language Systems Group, Department of Informatics, University of Hamburg, Hamburg, Germany
25
Gori M, Amadeo MB, Campus C. Spatial metric in blindness: behavioural and cortical processing. Neurosci Biobehav Rev 2020; 109:54-62. [PMID: 31899299] [DOI: 10.1016/j.neubiorev.2019.12.031] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 06/15/2019] [Revised: 11/30/2019] [Accepted: 12/29/2019] [Indexed: 11/29/2022]
Abstract
Visual modality dominates spatial perception and, in lack of vision, space representation might be altered. Here we review our work showing that blind individuals have a strong deficit when performing spatial bisection tasks (Gori et al., 2014). We also describe the neural correlates associated with this deficit, as blind individuals do not show the same ERP response mimicking the visual C1 reported in sighted people during spatial bisection (Campus et al., 2019). Interestingly, the deficit is not always evident in late blind individuals, and it is dependent on blindness duration. We report that the deficit disappears when one presents coherent temporal and spatial cues to blind people. This suggests that they may use time information to infer spatial maps (Gori et al., 2018). Finally, we propose a model to explain why blind individuals are impaired in this task, speculating that a lack of vision drives the construction of a multi-sensory cortical network that codes space based on temporal, rather than spatial, coordinates.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
- Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, via all'Opera Pia, 13, 16145 Genova, Italy
- Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
26
Amadeo MB, Campus C, Gori M. Time attracts auditory space representation during development. Behav Brain Res 2019; 376:112185. [PMID: 31472192] [DOI: 10.1016/j.bbr.2019.112185] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Received: 06/21/2019] [Revised: 08/16/2019] [Accepted: 08/28/2019] [Indexed: 10/26/2022]
Abstract
Vision is the most accurate sense for spatial representation, whereas audition is for temporal representation. However, how different sensory modalities shape the development of spatial and temporal representations is still unclear. Here, 45 children aged 11-13 years were tested to investigate their ability to evaluate spatial features of auditory stimuli during bisection tasks, while conflicting or non-conflicting spatial and temporal information was delivered. Since audition is fundamental for temporal representation, the hypothesis was that temporal information could influence the development of auditory spatial representation. Results show a strong interaction between the temporal and the spatial domain. Younger children are not able to build complex spatial representations when the temporal domain is uninformative about space. However, when the spatial information is coherent with the temporal information, children of all ages are able to decode complex spatial relationships. When spatial and temporal cues are conflicting, younger children are strongly attracted by the temporal rather than the spatial information, while older participants are unaffected by the cross-domain conflict. These findings suggest that during development the temporal representation of events is used to infer the spatial coordinates of the environment, offering important opportunities for new teaching and rehabilitation strategies.
Affiliation(s)
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy; Università degli Studi di Genova, Department of Informatics, Bioengineering, Robotics and Systems Engineering, Via all'Opera Pia, 13, 16145 Genova, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
27
Bruns P. The Ventriloquist Illusion as a Tool to Study Multisensory Processing: An Update. Front Integr Neurosci 2019; 13:51. [PMID: 31572136] [PMCID: PMC6751356] [DOI: 10.3389/fnint.2019.00051] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Received: 03/19/2019] [Accepted: 08/22/2019] [Indexed: 12/02/2022]
Abstract
Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.
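The claim that the visual bias reflects the relative reliability of the two inputs is usually formalized as maximum-likelihood cue integration, where each cue is weighted by its inverse variance. A minimal sketch; the numbers are made-up illustration values, not parameters from any cited study:

```python
import numpy as np

def mle_fusion(loc_v, sigma_v, loc_a, sigma_a):
    """Maximum-likelihood (reliability-weighted) audiovisual fusion.

    Each cue is weighted by its inverse variance; the fused estimate is
    pulled toward the more reliable cue, and its variance is never
    larger than that of either unisensory estimate.
    """
    w_v, w_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2
    loc_hat = (w_v * loc_v + w_a * loc_a) / (w_v + w_a)
    sigma_hat = np.sqrt(1.0 / (w_v + w_a))
    return loc_hat, sigma_hat

# Made-up illustration values: vision (sigma = 1 deg) is far more reliable
# than audition (sigma = 5 deg), so a sound 10 deg away from the flash is
# perceived close to the flash: the ventriloquism effect.
loc_hat, sigma_hat = mle_fusion(loc_v=0.0, sigma_v=1.0, loc_a=10.0, sigma_a=5.0)
print(loc_hat, sigma_hat)  # about 0.385 deg, 0.981 deg
```

This captures only the full-fusion limit; the prior expectation of a common source mentioned in the abstract is modeled separately (causal inference), which lets the bias fall off for large discrepancies.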
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
28
Chikara RK, Ko LW. Modulation of the Visual to Auditory Human Inhibitory Brain Network: An EEG Dipole Source Localization Study. Brain Sci 2019; 9:E216. [PMID: 31461954] [PMCID: PMC6770157] [DOI: 10.3390/brainsci9090216] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Received: 07/01/2019] [Revised: 08/15/2019] [Accepted: 08/23/2019] [Indexed: 12/21/2022]
Abstract
Auditory alarms are used to direct people's attention to critical events in complicated environments. The capacity to identify auditory alarms and take the right action in daily life is critical. In this work, we investigated how auditory alarms affect the neural networks of human inhibition. We used a well-known stop-signal (go/no-go) task to measure the effect of visual stimuli and auditory alarms on the human brain. In this experiment, go trials used visual stimulation, via a square or circle symbol, and stop trials used auditory stimulation, via an auditory alarm. Electroencephalography (EEG) signals from twelve subjects were acquired and analyzed using an advanced EEG dipole source localization method via independent component analysis (ICA) and EEG coherence analysis. Behaviorally, the visual stimulus elicited a significantly higher accuracy rate (96.35%) than the auditory stimulus (57.07%) during inhibitory control. EEG theta and beta band power increases in the right middle frontal gyrus (rMFG) were associated with human inhibitory control. In addition, delta, theta, alpha, and beta band increases in the right cingulate gyrus (rCG) and delta band increases in both the right superior temporal gyrus (rSTG) and left superior temporal gyrus (lSTG) were associated with the network changes induced by auditory alarms. We further observed that the theta-alpha and beta bands between the lSTG-rMFG and lSTG-rSTG pathways showed higher connectivity magnitudes when the task changed from performing the visual task to responding to the auditory alarms. These findings could be useful for further understanding the human brain in realistic environments.
Affiliations
- Rupesh Kumar Chikara: Department of Biological Science and Technology, College of Biological Science and Technology, National Chiao Tung University, Hsinchu 300, Taiwan; Center for Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao Tung University, Hsinchu 300, Taiwan
- Li-Wei Ko: Department of Biological Science and Technology, College of Biological Science and Technology, National Chiao Tung University, Hsinchu 300, Taiwan; Center for Intelligent Drug Systems and Smart Bio-devices (IDS2B), National Chiao Tung University, Hsinchu 300, Taiwan; Institute of Bioinformatics and Systems Biology, National Chiao Tung University, Hsinchu 300, Taiwan; Swartz Center for Computational Neuroscience, University of California San Diego, San Diego, CA 92093, USA
29. Ahmad H, Setti W, Campus C, Capris E, Facchini V, Sandini G, Gori M. The Sound of Scotoma: Audio Space Representation Reorganization in Individuals With Macular Degeneration. Front Integr Neurosci 2019; 13:44. PMID: 31481884; PMCID: PMC6710446; DOI: 10.3389/fnint.2019.00044.
Abstract
Blindness is an ideal condition in which to study the role of visual input in the development of spatial representation, as studies have shown how audio space representation reorganizes in blindness. However, how this spatial reorganization works is still unclear. A limitation of studying blindness is that it is a "stable" condition, and so it does not allow the mechanisms underlying the progression of this reorganization to be studied. To overcome this problem, here we study, for the first time, audio spatial reorganization in 18 adults with macular degeneration (MD), for whom the loss of vision due to scotoma is an ongoing, progressive process. Our results show that the loss of vision produces immediate changes in the processing of spatial audio signals. In individuals with MD, lateral sounds are "attracted" toward the central scotoma position, resulting in a strong bias in the spatial auditory percept. This result suggests that the reorganization of audio space representation is a fast and plastic process that also occurs later in life, after vision loss.
Affiliations
- Hafsah Ahmad: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Walter Setti: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Claudio Campus: Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Giulio Sandini: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Monica Gori: Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
30. Reaching measures and feedback effects in auditory peripersonal space. Sci Rep 2019; 9:9476. PMID: 31263231; PMCID: PMC6603038; DOI: 10.1038/s41598-019-45755-2.
Abstract
We analyse the effects of exploration feedback on reaching-based measures of the perceived auditory peripersonal space (APS) boundary and of the auditory distance perception (ADP) of sound sources located within it. We conducted an experiment in which participants had to estimate whether or not a sound source was reachable, and to estimate its distance (40 to 150 cm in 5-cm steps) by reaching toward a small loudspeaker. The stimulus consisted of a train of three bursts of Gaussian broadband noise. Participants were randomly assigned to two groups, Experimental (EG) and Control (CG), and completed three phases in the following order: Pretest, Test, Posttest. In all phases the listeners performed the same task, except in the EG Test phase, where participants reached out to touch the sound source. We applied models to characterise the participants' responses, and these provide evidence that feedback significantly reduces the response bias of both the perceived boundary of the APS and the ADP of sound sources located within reach. In the CG, repetition of the task did not affect APS and ADP accuracy, but it improved performance consistency: the reachable uncertainty zone in APS was reduced, and there was a tendency toward decreased variability in ADP.
31. Richards MD, Goltz HC, Wong AM. Audiovisual perception in amblyopia: A review and synthesis. Exp Eye Res 2019; 183:68-75. DOI: 10.1016/j.exer.2018.04.017.
32. Richards MD, Goltz HC, Wong AMF. Impaired Spatial Hearing in Amblyopia: Evidence for Calibration of Auditory Maps by Retinocollicular Input in Humans. Invest Ophthalmol Vis Sci 2019; 60:944-953. PMID: 30849170; DOI: 10.1167/iovs.18-24908.
Abstract
Purpose: Evidence from animals and blind humans suggests that early visual experience influences the developmental calibration of auditory localization. Hypothesizing that unilateral amblyopia may involve cross-modal deficits in spatial hearing, we measured the precision and accuracy of sound localization in humans with amblyopia.
Methods: All participants passed a standard hearing test. Experiment 1 measured sound localization precision for click stimuli in 10 adults with amblyopia and 10 controls using a minimum audible angle (MAA) task. Experiment 2 measured sound localization error (i.e., accuracy) for click train stimuli in 14 adults with amblyopia and 16 controls using an absolute sound localization task.
Results: In Experiment 1, the MAA (mean ± SEM) was significantly greater in the amblyopia group than in controls (2.75 ± 0.30° vs. 1.69 ± 0.09°, P = 0.006). In Experiment 2, overall sound localization error was significantly greater in the amblyopia group than in controls (P = 0.047). The amblyopia group also showed significantly greater sound localization error in the auditory hemispace ipsilateral to the amblyopic eye (P = 0.036). At a location within this hemispace, the magnitude of sound localization error correlated significantly with deficits in stereoacuity (P = 0.036).
Conclusions: The precision and accuracy of sound localization are impaired in unilateral amblyopia. The asymmetric pattern of sound localization error suggests that amblyopic vision may interfere with the development of spatial hearing via the retinocollicular pathway.
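A minimum audible angle is typically read off as the angular separation at which a listener reaches a criterion proportion of correct left/right judgments. A minimal sketch of that threshold read-out (the proportions and criterion below are hypothetical, not data from the study) is:

```python
import numpy as np

def maa_threshold(angles_deg, p_correct, criterion=0.75):
    """Estimate the minimum audible angle: the separation at which the
    proportion of correct left/right judgments crosses the criterion,
    by linear interpolation between the tested separations."""
    # np.interp expects the x-coordinates (here p_correct) to be increasing.
    return float(np.interp(criterion, p_correct, angles_deg))

# Hypothetical proportions correct at each tested angular separation (deg):
angles = [0.5, 1.0, 2.0, 4.0, 8.0]
p = [0.52, 0.60, 0.78, 0.92, 0.99]
print(maa_threshold(angles, p))  # threshold falls between 1° and 2°
```

In practice the criterion crossing is usually estimated from a fitted psychometric function rather than linear interpolation, but the logic is the same.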
Affiliations
- Michael D Richards: Institute of Medical Science, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Herbert C Goltz: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada
- Agnes M F Wong: Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada
33. Amadeo MB, Campus C, Gori M. Impact of years of blindness on neural circuits underlying auditory spatial representation. Neuroimage 2019; 191:140-149. PMID: 30710679; DOI: 10.1016/j.neuroimage.2019.01.073.
Abstract
Early visual deprivation negatively affects spatial bisection abilities. Recently, an early (50-90 ms) ERP response, selective for sound position in space, was observed in the visual cortex of sighted individuals during a spatial, but not a temporal, bisection task. Here, we clarify the role of vision in spatial bisection abilities and their neural correlates by studying late blind individuals. Results highlight that a shorter period of blindness is linked to stronger contralateral activation in the visual cortex and better performance during the spatial bisection task. In contrast, non-lateralized visual activation and lower performance were observed in individuals with a longer period of blindness. In conclusion, the amount of time spent without vision may gradually affect the neural circuits underlying the construction of spatial representations in late blind participants. These findings suggest a key relationship between visual deprivation and auditory spatial abilities in humans.
Affiliations
- Maria Bianca Amadeo: Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, Via all'Opera Pia 13, 16145 Genova, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy
- Monica Gori: Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen 83, 16152 Genova, Italy
34. Chillemi G, Calamuneri A, Quartarone A, Terranova C, Salatino A, Cacciola A, Milardi D, Ricci R. Endogenous orientation of visual attention in auditory space. J Adv Res 2019; 18:95-100. PMID: 30828479; PMCID: PMC6383076; DOI: 10.1016/j.jare.2019.01.010.
Abstract
Highlights
- Facilitation was observed for right-sided auditory stimuli in a new visuo-audio task.
- Auditory space has a dynamic nature, which adapts to changes in visual space.
- Sound localization was enhanced by visual cues.
- Crossmodal links in spatial attention were found between audition and vision.
- These findings have theoretical and translational implications for future studies.
Visuospatial attention is asymmetrically distributed with a leftward bias (i.e. pseudoneglect), while evidence for asymmetries in auditory spatial attention is still controversial. In the present study, we investigated putative asymmetries in the distribution of auditory spatial attention and the influence that visual information might have on its deployment. A modified version of the Posner task (i.e. the visuo-audio spatial task [VAST]) was used to investigate spatial processing of auditory targets when endogenous orientation of spatial attention was mediated by visual cues in healthy adults. A line bisection task (LBT) was also administered to assess the presence of a leftward bias in deployment of visuospatial attention. Overall, participants showed rightward and leftward biases in the VAST and the LBT, respectively. In the VAST, sound localization was enhanced by visual cues. Altogether, these findings support the existence of a facilitation effect for auditory targets originating from the right side of space and provide new evidence for crossmodal links in endogenous spatial attention between vision and audition.
Affiliations
- Gaetana Chillemi: IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy
- Angelo Quartarone: IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Carmen Terranova: Department of Clinical and Experimental Medicine, Endocrinology, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Adriana Salatino: Department of Psychology, University of Torino, Torino 10123, Italy
- Alberto Cacciola: IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Demetrio Milardi: IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Raffaella Ricci: Department of Psychology, University of Torino, Torino 10123, Italy
35. Temporal Cues Influence Space Estimations in Visually Impaired Individuals. iScience 2018; 6:319-326. PMID: 30240622; PMCID: PMC6137691; DOI: 10.1016/j.isci.2018.07.003.
Abstract
Many works have highlighted enhanced auditory processing in blind individuals, suggesting that they compensate for the lack of vision with greater sensitivity in the other senses. A few years ago, we demonstrated severely impaired auditory precision in congenitally blind individuals performing an auditory spatial metric task: their thresholds for bisecting three consecutive, spatially distributed sounds were seriously compromised, ranging from three times typical thresholds to total randomness. Here, we show that the deficit disappears if blind individuals are presented with coherent temporal and spatial cues. More interestingly, when the audio information contains conflicting spatial and temporal cues, sighted individuals are unaffected by the perturbation, whereas blind individuals are strongly attracted by the temporal cue. These results highlight that temporal cues influence space estimations in blind participants, suggesting for the first time that blind individuals use temporal information to infer spatial environmental coordinates.
Highlights
- Blind individuals are not able to perform auditory spatial metric tasks.
- Their deficit disappears when coherent temporal and spatial cues are presented.
- In some cases, blind people use temporal cues to infer spatial coordinates.
36. Normal temporal binding window but no sound-induced flash illusion in people with one eye. Exp Brain Res 2018; 236:1825-1834. DOI: 10.1007/s00221-018-5263-x.
37. Berger CC, Gonzalez-Franco M, Tajadura-Jiménez A, Florencio D, Zhang Z. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Front Neurosci 2018; 12:21. PMID: 29456486; PMCID: PMC5801410; DOI: 10.3389/fnins.2018.00021.
Abstract
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
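Rendering a spatialized sound with an HRTF amounts to convolving the mono source with a left/right pair of head-related impulse responses (HRIRs). The sketch below is a toy illustration, not a measured generic HRTF: the "HRIR" pair is just a delay-and-attenuate filter encoding an interaural time and level difference, whereas real HRIRs also carry the spectral cues mentioned above.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by convolving with an HRIR pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

fs = 44100
# Toy "HRIR" for a source on the right: the left ear receives the sound
# slightly later (interaural time difference) and attenuated (interaural
# level difference). Values are illustrative assumptions.
itd_samples = 29                      # ~0.66 ms ITD at 44.1 kHz
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left = np.zeros(64);  hrir_left[itd_samples] = 0.5

t = np.arange(0, 0.01, 1 / fs)
burst = np.sin(2 * np.pi * 440 * t)   # 10 ms, 440 Hz test tone
stereo = spatialize(burst, hrir_left, hrir_right)
print(stereo.shape)                   # (2, len(burst) + 63)
```

With an individualized or generic HRTF set, the same convolution is performed with measured filters selected per source direction (and interpolated as the head moves in VR).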
Affiliations
- Christopher C. Berger: Microsoft Research, Redmond, WA, United States; Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Ana Tajadura-Jiménez: UCL Interaction Centre, University College London, London, United Kingdom; Interactive Systems DEI-Lab, Universidad Carlos III de Madrid, Madrid, Spain
- Zhengyou Zhang: Microsoft Research, Redmond, WA, United States; Department of Electrical Engineering, University of Washington, Seattle, WA, United States
38. Chotsrisuparat C, Koning A, Jacobs R, van Lier R. Effects of Auditory Patterns on Judged Displacements of an Occluded Moving Object. Multisens Res 2018; 31:623-643. PMID: 31264610; DOI: 10.1163/22134808-18001294.
Abstract
Using displays in which a moving disk disappeared behind an occluder, we examined whether an accompanying auditory rhythm influenced the perceived displacement of the disk during occlusion. We manipulated a baseline rhythm comprising a relatively fast alternation of equal sound and pause durations, using two different manipulations to create auditory sequences with a slower rhythm: either the pause durations or the sound durations were increased. In each trial, a disk moved at a constant speed and, at a certain point, passed behind an occluder, during which an auditory rhythm was played. Participants were instructed to track the occluded disk and to judge the expected position of the disk at the moment the auditory rhythm ended by touching that position on a touch screen. We investigated the influence of the auditory rhythm (i.e., the ratio of sound to pause duration) and of auditory density (i.e., the number of sound onsets per time unit) on the judged distance. The results showed that these temporal characteristics affected the spatial judgments. Overall, we found that in the current paradigm relatively slow rhythms led to shorter judged distances than relatively fast rhythms, for both pause and sound variations. There was no main effect of auditory density on the judged distance of an expected visual event. That is, whereas the speed of the auditory rhythm appears crucial, the number of sound onsets per time unit as such, i.e., the auditory density, appears to be a much weaker factor.
Affiliations
- Chayada Chotsrisuparat: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Arno Koning: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Richard Jacobs: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Rob van Lier: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
39. Bruns P, Röder B. Spatial and frequency specificity of the ventriloquism aftereffect revisited. Psychol Res 2017; 83:1400-1415. PMID: 29285647; DOI: 10.1007/s00426-017-0965-4.
Abstract
Exposure to audiovisual stimuli with a consistent spatial misalignment appears to result in a recalibration of unisensory auditory spatial representations. Previous studies have suggested that this so-called ventriloquism aftereffect is confined to the trained region of space, but have yielded inconsistent results as to whether recalibration generalizes to untrained sound frequencies. Here, we reassessed the spatial and frequency specificity of the ventriloquism aftereffect by testing whether auditory spatial perception can be independently recalibrated for two different sound frequencies and/or at two different spatial locations. Recalibration was confined to locations within the trained hemifield, suggesting that spatial representations were independently adjusted for the two hemifields. The frequency specificity of the ventriloquism aftereffect depended on the presence or absence of conflicting audiovisual adaptation stimuli within the same hemifield. Moreover, adaptation of two different sound frequencies in opposite directions (leftward vs. rightward) resulted in a selective suppression of leftward recalibration, even when the adapting stimuli were presented in different hemifields. Thus, rather than being a fixed stimulus-driven process, cross-modal recalibration seems to depend critically on the sensory context and takes into account inconsistencies in the cross-modal input.
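The recalibration idea behind the ventriloquism aftereffect can be illustrated with a simple incremental model: each exposure to a spatially misaligned audiovisual pair shifts the auditory spatial map a small fraction of the remaining discrepancy toward the visual location. This is only a sketch; the learning rate and trial count below are assumptions, not parameters fitted by the study.

```python
def recalibrate(initial_shift_deg, discrepancy_deg, rate=0.05, trials=100):
    """Incrementally shift the auditory spatial map toward the visual cue.

    Each exposure trial moves the map a fixed fraction of the remaining
    audiovisual discrepancy (a simple exponential-approach model).
    """
    shift = initial_shift_deg
    for _ in range(trials):
        shift += rate * (discrepancy_deg - shift)
    return shift

# Adapting to a visual cue displaced 10 degrees to the right of the sound:
aftereffect = recalibrate(0.0, discrepancy_deg=10.0)
print(round(aftereffect, 2))  # approaches the 10-degree discrepancy
```

A context-sensitive account of the kind the authors argue for would make `rate` (or the stored shift itself) depend on hemifield and sound frequency rather than being a single global value.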
Affiliations
- Patrick Bruns: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany; Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI 02912, USA
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146 Hamburg, Germany
40. Calzolari E, Albini F, Bolognini N, Vallar G. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms. Front Hum Neurosci 2017; 11:568. PMID: 29213233; PMCID: PMC5702769; DOI: 10.3389/fnhum.2017.00568.
Abstract
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointing to visual targets while wearing optical prisms that displace the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared with the standard 92-pointing procedure. Finally, pointing to auditory targets caused AEs, although PA with a reduced number of pointings (24) to auditory targets brought about smaller AEs than the 92-pointing procedure (Experiment 3). Together, the results of the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears to be characterized by less accurate pointing and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits.
Affiliations
- Elena Calzolari: Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy; Neuro-Otology Unit, Division of Brain Sciences, Imperial College London, London, United Kingdom
- Federica Albini: Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy
- Nadia Bolognini: Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy; Neuropsychological Laboratory, Istituto Auxologico Italiano, Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Giuseppe Vallar: Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy; Neuropsychological Laboratory, Istituto Auxologico Italiano, Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
41. Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. PMID: 28973789; DOI: 10.1111/ejn.13733.
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliations
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
42. Etchemendy PE, Abregú E, Calcagno ER, Eguia MC, Vechiatti N, Iasi F, Vergara RO. Auditory environmental context affects visual distance perception. Sci Rep 2017; 7:7189. PMID: 28775372; PMCID: PMC5543138; DOI: 10.1038/s41598-017-06495-3.
Abstract
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets as farther away than subjects assigned to the anechoic chamber. We also found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We then performed a second experiment in which the subjects of Experiment 1 were swapped between rooms. We found that subjects preserved their responses from the previous experiment provided these were compatible with their current perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. The results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
Affiliations
- Pablo E Etchemendy: Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD Bernal, Buenos Aires, Argentina
- Ezequiel Abregú: Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD Bernal, Buenos Aires, Argentina
- Esteban R Calcagno: Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD Bernal, Buenos Aires, Argentina
- Manuel C Eguia: Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD Bernal, Buenos Aires, Argentina
- Nilda Vechiatti: Laboratorio de Acústica y Luminotecnia, Comisión de Investigaciones Científicas de la Provincia de Buenos Aires, Cno. Centenario e/505 y 508, M. B. Gonnet, Buenos Aires, Argentina
- Federico Iasi: Laboratorio de Acústica y Luminotecnia, Comisión de Investigaciones Científicas de la Provincia de Buenos Aires, Cno. Centenario e/505 y 508, M. B. Gonnet, Buenos Aires, Argentina
- Ramiro O Vergara: Laboratorio de Acústica y Percepción Sonora, Escuela Universitaria de Artes, CONICET, Universidad Nacional de Quilmes, B1876BXD Bernal, Buenos Aires, Argentina
43
Cambi J, Livi L, Livi W. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study. Sultan Qaboos Univ Med J 2017; 17:e168-e173. [PMID: 28690888 DOI: 10.18295/squmj.2016.17.02.006]
Abstract
OBJECTIVES Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources compared with sighted subjects. However, this has not yet been confirmed with regard to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. METHODS This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. RESULTS The congenitally blind divers demonstrated significantly better underwater sound localisation compared with the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). CONCLUSION Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.
Affiliation(s)
- Jacopo Cambi: Department of Ear, Nose & Throat, University of Siena, Siena, Italy
- Ludovica Livi: Department of Ear, Nose & Throat, University of Siena, Siena, Italy
- Walter Livi: Department of Ear, Nose & Throat, University of Siena, Siena, Italy
44
Easwar V, Yamazaki H, Deighton M, Papsin B, Gordon K. Simultaneous bilateral cochlear implants: Developmental advances do not yet achieve normal cortical processing. Brain Behav 2017; 7:e00638. [PMID: 28413698 PMCID: PMC5390830 DOI: 10.1002/brb3.638]
Abstract
BACKGROUND Simultaneous bilateral cochlear implantation promotes symmetric development of the bilateral auditory pathways, but binaural hearing remains abnormal. To evaluate whether bilateral cortical processing remains impaired in such children, cortical activity to unilateral and bilateral stimuli was assessed in a unique cohort of 16 children who received bilateral cochlear implants (CIs) simultaneously at 1.97 ± 0.86 years of age and had ~4 years of CI experience. This cohort provided the first opportunity to assess electrically driven cortical development in the absence of the reorganized asymmetries that follow sequential implantation. METHODS Cortical activity to unilateral and bilateral stimuli was measured using multichannel electroencephalography. Cortical processing in children with bilateral CIs was compared with click-elicited activity in 13 normal-hearing children matched for time-in-sound. Source activity was localized using the Time Restricted, Artefact and Coherence source Suppression (TRACS) beamformer method. RESULTS Consistent with dominant crossed auditory pathways, normal P1 activity (~100 ms) was weaker for ipsilateral stimuli than for contralateral and bilateral stimuli, and both auditory cortices preferentially responded to the contralateral ear. Right-hemisphere dominance was evident overall. Children with bilateral CIs maintained the expected right dominance, but differences from normal included: (i) minimal changes between ipsilateral, contralateral and bilateral stimuli, (ii) weaker than normal contralateral stimulus preference, (iii) symmetric activity to bilateral stimuli, and (iv) increased occipital lobe recruitment during bilateral relative to unilateral stimulation. Between-group contrasts demonstrated lower than normal activity in the inferior parieto-occipital lobe (suggesting deficits in sensory integration) and greater than normal left frontal lobe activity (suggesting increased attention), even during passive listening.
CONCLUSIONS Together, these findings suggest that early simultaneous bilateral cochlear implantation promotes normal-like auditory symmetry, but that abnormalities in cortical processing consequent to deafness and/or electrical stimulation through two independent speech processors persist.
Affiliation(s)
- Vijayalakshmi Easwar: Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Collaborative Program in Neuroscience, The University of Toronto, Toronto, ON, Canada
- Hiroshi Yamazaki: Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Michael Deighton: Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Blake Papsin: Otolaryngology, The University of Toronto, Toronto, ON, Canada; Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada
- Karen Gordon: Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Otolaryngology, The University of Toronto, Toronto, ON, Canada
45
Chillemi G, Calamuneri A, Morgante F, Terranova C, Rizzo V, Girlanda P, Ghilardi MF, Quartarone A. Spatial and Temporal High Processing of Visual and Auditory Stimuli in Cervical Dystonia. Front Neurol 2017; 8:66. [PMID: 28316586 PMCID: PMC5334342 DOI: 10.3389/fneur.2017.00066]
Abstract
OBJECTIVE To investigate spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. BACKGROUND Previous psychophysiological studies have investigated the temporal and spatial characteristics of the neural processing of sensory stimuli (mainly somatosensory and visual), whereas such processing at a higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction; however, other cortical and subcortical areas, including the cerebellum, may also be involved. METHODS We tested 21 subjects with CD and 22 age-matched healthy controls with four recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to head movement pattern type (lateral: Laterocollis; rotation: Torticollis) and the presence of tremor (Tremor). RESULTS We found a significant alteration of spatial processing in the Laterocollis subgroup compared with controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared with controls. CONCLUSION Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different cognitive-level dysfunctions within the dystonic population.
Affiliation(s)
- Gaetana Chillemi: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Alessandro Calamuneri: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Francesca Morgante: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Carmen Terranova: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Vincenzo Rizzo: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Paolo Girlanda: Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Maria Felice Ghilardi: Department of Physiology, Pharmacology and Neuroscience, City University of New York Medical School, New York, NY, USA
- Angelo Quartarone: Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Centro "Bonino Pulejo", Messina, Italy; Department of Biomedical Science and Morphological and Functional Images, University of Messina, Messina, Italy
46
Gori M, Cappagli G, Baud-Bovy G, Finocchietti S. Shape Perception and Navigation in Blind Adults. Front Psychol 2017; 8:10. [PMID: 28144226 PMCID: PMC5240028 DOI: 10.3389/fpsyg.2017.00010]
Abstract
Different sensory systems interact to generate a representation of space and to support navigation. Vision plays a critical role in the development of spatial representation, and during navigation it is integrated with auditory and mobility cues. In blind individuals, visual experience is not available, and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills, but the limitations of these mechanisms are not completely clear: both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we developed a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind, and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals: first, a tendency to compress the shapes reproduced during navigation; second, difficulty in recognizing complex audio stimuli; and third, difficulty in reproducing the desired shape — early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development.
Affiliation(s)
- Monica Gori: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Gabriel Baud-Bovy: Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genoa, Italy; Unit of Experimental Psychology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Sara Finocchietti: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
47
Pasqualotto A, Esenkaya T. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes. Front Behav Neurosci 2016; 10:79. [PMID: 27148000 PMCID: PMC4838627 DOI: 10.3389/fnbeh.2016.00079]
Abstract
Visual-to-auditory sensory substitution is used to convey visual information through audition. Initially created to compensate for blindness, it consists of software that converts the visual images captured by a video camera into equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial positions of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, whereas it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the positions of six images by using sensory substitution, and then a judgment of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, we surprisingly found that performance in the JRD task was better for allocentric perspectives, suggesting that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, supporting the notion that different sensory modalities produce equivalent spatial representations. Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs).
Affiliation(s)
- Tayfun Esenkaya: Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Turkey; Department of Psychology, University of Bath, Bath, UK
48
van der Stoep N, Serino A, Farnè A, Di Luca M, Spence C. Depth: the Forgotten Dimension in Multisensory Research. Multisens Res 2016. [DOI: 10.1163/22134808-00002525]
Abstract
The last quarter of a century has seen a dramatic rise of interest in the spatial constraints on multisensory integration. However, until recently, the majority of this research has investigated integration in the space directly in front of the observer. Yet the space around us extends in three dimensions, both in front of and behind the observer, well beyond this limited area. The question addressed in this review is whether multisensory integration operates according to the same rules throughout the whole of three-dimensional space. The results reviewed here not only show that the space around us seems to be divided into distinct functional regions, but also suggest that multisensory interactions are modulated by the region of space in which stimuli happen to be presented. We highlight a number of key limitations of previous research in this area, including: (1) the focus on only a very narrow region of two-dimensional space in front of the observer; (2) the use of static stimuli in most research; (3) the study of observers who have themselves been mostly static; and (4) the study of isolated observers. All of these factors may change the way in which the senses interact at any given distance, as can the emotional state and personality of the observer. In summarizing these salient issues, we hope to encourage researchers to consider these factors in their own research in order to gain a better understanding of the spatial constraints on multisensory integration as they affect us in our everyday life.
Affiliation(s)
- N. van der Stoep: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- A. Serino: Center for Neuroprosthetics, EPFL, Lausanne, Switzerland
- A. Farnè: ImpAct Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, 69000 Lyon, France
- M. Di Luca: School of Psychology, CNCR, University of Birmingham, Birmingham, United Kingdom
- C. Spence: Department of Experimental Psychology, Oxford University, Oxford, United Kingdom
49
Koper N, Leston L, Baker TM, Curry C, Rosa P. Effects of ambient noise on detectability and localization of avian songs and tones by observers in grasslands. Ecol Evol 2015; 6:245-55. [PMID: 26811789 PMCID: PMC4716498 DOI: 10.1002/ece3.1847]
Abstract
Probability of detection and accuracy of distance estimates in aural avian surveys may be affected by the presence of anthropogenic noise, and this may lead to inaccurate evaluations of the effects of noisy infrastructure on wildlife. We used arrays of speakers broadcasting recordings of grassland bird songs and pure tones to assess the probability of detection, and localization accuracy, by observers at sites with and without noisy oil and gas infrastructure in south-central Alberta from 2012 to 2014. Probability of detection varied with species and with speaker distance from the transect line, but there were few effects of noisy infrastructure. Accuracy of distance estimates for songs and tones decreased as distance to the observer increased, and distance estimation error was higher for tones at sites with infrastructure noise. Our results suggest that quiet to moderately loud anthropogenic noise may not mask detection of bird songs; however, errors in distance estimates during aural surveys may lead to inaccurate estimates of avian densities calculated using distance sampling. We recommend caution when applying distance sampling if most birds are unseen and where ambient noise varies among treatments.
Affiliation(s)
- Nicola Koper: Natural Resources Institute, University of Manitoba, 70 Dysart Road, Winnipeg, Manitoba R3T 2M7, Canada
- Lionel Leston: Natural Resources Institute, University of Manitoba, 70 Dysart Road, Winnipeg, Manitoba R3T 2M7, Canada; Department of Biological Sciences, University of Alberta, CW 405 Biological Sciences Building, Edmonton, Alberta T6G 2E9, Canada
- Tyne M Baker: Natural Resources Institute, University of Manitoba, 70 Dysart Road, Winnipeg, Manitoba R3T 2M7, Canada; TERA Environmental Consultants, 815 8 Ave SW, Calgary, Alberta T2M 2M8, Canada
- Claire Curry: Natural Resources Institute, University of Manitoba, 70 Dysart Road, Winnipeg, Manitoba R3T 2M7, Canada
- Patricia Rosa: Natural Resources Institute, University of Manitoba, 70 Dysart Road, Winnipeg, Manitoba R3T 2M7, Canada
50
Finocchietti S, Cappagli G, Gori M. Encoding audio motion: spatial impairment in early blind individuals. Front Psychol 2015; 6:1357. [PMID: 26441733 PMCID: PMC4561343 DOI: 10.3389/fpsyg.2015.01357]
Abstract
The consequence of blindness for auditory spatial localization has been an active topic of research over the last decade, with mixed results. Enhanced auditory spatial skills in individuals with visual impairment have been reported by multiple studies, while some aspects of spatial hearing seem to be impaired in the absence of vision. In this study, the ability to encode the trajectory of a two-dimensional sound motion, reproducing the complete movement and reaching the correct end-point sound position, was evaluated in 12 early blind (EB) individuals, 8 late blind (LB) individuals, and 20 age-matched sighted blindfolded controls. EB individuals correctly determined the direction of the sound motion on the horizontal axis but showed a clear deficit in encoding sound motion in the lower side of the plane. In contrast, LB individuals and blindfolded controls performed much better, with no deficit in the lower side of the plane: the mean localization error was 271 ± 10 mm for EB individuals, 65 ± 4 mm for LB individuals, and 68 ± 2 mm for sighted blindfolded controls. These results support the hypotheses that (i) there is a trade-off between the development of enhanced perceptual abilities and the role of vision in the sound localization abilities of EB individuals, and (ii) visual information is fundamental for calibrating some aspects of the representation of auditory space in the brain.
Affiliation(s)
- Sara Finocchietti: Science and Technology for Visually Impaired Children and Adults Group, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli: Science and Technology for Visually Impaired Children and Adults Group, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori: Science and Technology for Visually Impaired Children and Adults Group, Istituto Italiano di Tecnologia, Genoa, Italy