1. Maimon A, Wald IY, Snir A, Ben Oz M, Amedi A. Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted. PLoS One 2024; 19:e0310033. PMID: 39321152; PMCID: PMC11423994; DOI: 10.1371/journal.pone.0310033.
Abstract
This study explores spatial perception of depth with a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While verticality was represented using a previously tested correspondence with pitch, depth was represented by an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic this ecologically inspired mapping of auditory depth cues is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm after a very brief period of training, with the blind and visually impaired participants learning it as successfully as their sighted counterparts. A significant baseline difference between the two conditions indicated the intuitiveness of the original ecologically inspired mapping; nevertheless, participants achieved similar success rates in both conditions after training. The findings indicate that both intrinsic and learned cues come into play in depth perception, and they suggest that through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that with training their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
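The mapping described above lends itself to a compact illustration. Below is a minimal, hypothetical sketch of such an elevation-to-pitch and depth-to-gain/low-pass scheme; the frequency range, rolloff constants, and harmonic count are illustrative assumptions, not the authors' published algorithm.

```python
# A minimal sketch of the kind of mapping the abstract describes: elevation is
# rendered as pitch, and depth as distance-dependent gain loss plus attenuation
# of higher frequencies. All parameter choices here are illustrative assumptions.
import numpy as np

SR = 44100  # sample rate in Hz

def render_cue(elevation, depth, duration=0.5):
    """Sonify a point: elevation in [0, 1] (low->high), depth in meters."""
    t = np.arange(int(SR * duration)) / SR
    # Elevation -> pitch: map [0, 1] onto an assumed 200-1600 Hz range.
    f0 = 200.0 * (8.0 ** elevation)
    # Depth -> overall gain: inverse-distance attenuation beyond 1 m.
    gain = 1.0 / max(depth, 1.0)
    # Depth -> spectral tilt: harmonics above f0 lose energy with distance,
    # mimicking air absorption of high frequencies (simple exponential rolloff).
    tone = np.zeros_like(t)
    for k in range(1, 6):  # fundamental plus four harmonics
        rolloff = np.exp(-0.3 * depth * (k - 1))
        tone += rolloff * np.sin(2 * np.pi * k * f0 * t) / k
    return gain * tone / np.abs(tone).max()

near_low = render_cue(elevation=0.1, depth=1.0)   # low pitch, bright and loud
far_high = render_cue(elevation=0.9, depth=8.0)   # high pitch, muffled and quiet
```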
Affiliation(s)
- Amber Maimon
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Digital Media Lab, University of Bremen, Bremen, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
2. Wenger M, Maimon A, Yizhar O, Snir A, Sasson Y, Amedi A. Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition. Front Psychol 2024; 15:1353490. PMID: 39156805; PMCID: PMC11327021; DOI: 10.3389/fpsyg.2024.1353490.
Abstract
People can use their sense of hearing to discern thermal properties, though they are for the most part unaware that they can do so. While people unequivocally claim that they cannot perceive the temperature of pouring water from the sound of it being poured, our research further strengthens the understanding that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to the differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans classify the auditory properties of pouring water in practice, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
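To make such a classification pipeline concrete, here is a rough stand-in sketch. The paper probes a pre-trained deep neural network; this sketch substitutes a simpler classical baseline (mean MFCC features with an SVM), and the directory layout and labels are hypothetical.

```python
# A rough stand-in for the paper's pipeline. The authors use a pre-trained deep
# network; MFCC features plus an SVM are a simpler but standard audio baseline.
# The "pouring_sounds/hot" and "pouring_sounds/cold" layout is hypothetical.
import glob
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mfcc_features(path):
    """Summarize a pouring-water recording as its mean MFCCs over time."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one 20-dimensional vector per clip

X, labels = [], []
for label in ("hot", "cold"):
    for path in glob.glob(f"pouring_sounds/{label}/*.wav"):
        X.append(mfcc_features(path))
        labels.append(label)

# Cross-validated accuracy well above 50% would indicate that the acoustic
# signal carries temperature information, mirroring the paper's claim.
scores = cross_val_score(SVC(kernel="rbf"), np.array(X), labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```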
Affiliation(s)
- Mohr Wenger
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amber Maimon
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Department of Brain and Cognitive Sciences, Ben Gurion University, Be’er Sheva, Israel
- Or Yizhar
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Yonatan Sasson
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
3. Bang JW, Hamilton-Fletcher G, Chan KC. Visual Plasticity in Adulthood: Perspectives from Hebbian and Homeostatic Plasticity. Neuroscientist 2023; 29:117-138. PMID: 34382456; PMCID: PMC9356772; DOI: 10.1177/10738584211037619.
Abstract
The visual system retains profound plastic potential in adulthood. In the current review, we summarize the evidence of preserved plasticity in the adult visual system during visual perceptual learning as well as both monocular and binocular visual deprivation. In each condition, we discuss how such evidence reflects two major cellular mechanisms of plasticity: Hebbian and homeostatic processes. We focus on how these two mechanisms work together to shape plasticity in the visual system. In addition, we discuss how these two mechanisms could be further revealed in future studies investigating cross-modal plasticity in the visual system.
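The interplay of the two mechanisms the review contrasts can be illustrated with a toy simulation: a Hebbian term that potentiates correlated activity (and would run away on its own) combined with multiplicative homeostatic scaling toward a target rate. All rates and constants below are arbitrary assumptions.

```python
# Toy illustration of Hebbian plasticity stabilized by homeostatic synaptic
# scaling. Hebbian updates strengthen synapses when pre- and postsynaptic
# activity coincide; homeostatic scaling multiplicatively renormalizes weights
# to hold the neuron near a target firing rate. Constants are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.1, 0.5, size=100)  # synaptic weights from 100 inputs
target_rate = 1.0                    # desired mean postsynaptic activity
eta_hebb, eta_home = 0.01, 0.1

for step in range(1000):
    pre = rng.poisson(1.0, size=100).astype(float)  # presynaptic activity
    post = w @ pre / len(w)                         # linear postsynaptic response
    # Hebbian plasticity: correlated activity potentiates (unstable alone).
    w += eta_hebb * pre * post
    # Homeostatic scaling: pull activity back toward the target rate.
    w *= 1.0 + eta_home * (target_rate - post)
    w = np.clip(w, 0.0, None)

print(f"final mean rate: {(w @ rng.poisson(1.0, 100)) / len(w):.2f}")
```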
Affiliation(s)
- Ji Won Bang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
4. Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. PMID: 36936618; PMCID: PMC10017858; DOI: 10.3389/fnhum.2023.1058617.
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this initial proof-of-concept study, we instead apply it to the areas outside the visual field of sighted individuals, testing their ability to combine natural visual information with surrounding auditory sonification of visual information. Participants were tasked with recognizing stimuli and placing them correctly, using sound to represent the areas outside the standard human visual field. Specifically, they were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory (the remaining 270°) input for successful performance; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance after a brief 1-h online training session and one on-site training session averaging 20 min, and in some cases they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information to expand the natural fields of sensory perception.
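The image-to-sound principle behind EyeMusic-like SSDs can be sketched in a few lines: sweep the image column by column over time, map row to pitch, and map brightness to amplitude. The EyeMusic itself also encodes color through timbre, which is omitted here; all parameters are illustrative, not the device's actual specification.

```python
# Minimal sketch of the image-to-sound principle behind EyeMusic-like SSDs:
# columns are swept left to right over time, each row maps to a pitch (higher
# rows -> higher frequencies), and pixel brightness sets amplitude. Color-to-
# timbre encoding is omitted; frequencies and timing are assumptions.
import numpy as np

SR = 22050

def sonify(image, sweep_time=2.0):
    """image: 2D array in [0, 1], shape (rows, cols); returns mono audio."""
    rows, cols = image.shape
    col_len = int(SR * sweep_time / cols)
    t = np.arange(col_len) / SR
    # Assumed log-spaced pitches from 220 Hz upward, top row highest.
    freqs = 220.0 * 2.0 ** np.linspace(2, 0, rows)
    audio = []
    for c in range(cols):
        frame = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                    for r in range(rows))
        audio.append(frame)
    out = np.concatenate(audio)
    return out / (np.abs(out).max() + 1e-9)

# A diagonal line rises in pitch as the sweep moves left to right.
img = np.eye(16)[::-1]
audio = sonify(img)
```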
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
5. Himmelberg MM, Gardner JL, Winawer J. What has vision science taught us about functional MRI? Neuroimage 2022; 261:119536. PMID: 35931310; PMCID: PMC9756767; DOI: 10.1016/j.neuroimage.2022.119536.
Abstract
In the domain of human neuroimaging, much attention has been paid to the question of whether and how the development of functional magnetic resonance imaging (fMRI) has advanced our scientific knowledge of the human brain. However, the opposite question is also important: how has our knowledge of the brain advanced our understanding of fMRI? Here, we discuss how and why scientific knowledge about the human and animal visual systems has been used to answer fundamental questions about fMRI as a brain measurement tool, and how these answers have contributed to scientific discoveries beyond vision science.
Affiliation(s)
- Marc M Himmelberg
- Department of Psychology, New York University, NY, USA
- Center for Neural Science, New York University, NY, USA
- Jonathan Winawer
- Department of Psychology, New York University, NY, USA
- Center for Neural Science, New York University, NY, USA
6. Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: An activation likelihood estimation meta-analysis. Front Neurosci 2022; 16:1010354. PMID: 36340755; PMCID: PMC9630591; DOI: 10.3389/fnins.2022.1010354.
Abstract
Even though vision is considered the sensory modality best suited to acquiring spatial information, blind individuals can form spatial representations to navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations, since sensory modalities other than vision contribute to the formation of spatial representations independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, such as touch and audition, in individuals with early- and late-onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks in the processing of non-visual spatial information and navigation, including the posterior parietal cortex, frontal eye fields, insula, and hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis of neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
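The core ALE computation underlying such a meta-analysis can be sketched compactly: each study's activation foci are blurred with a Gaussian to produce a modeled activation (MA) map, and the maps combine through a probabilistic union, ALE = 1 - prod(1 - MA_i). The grid, kernel width, and foci below are toy assumptions; real ALE uses sample-size-dependent kernels and permutation-based significance testing.

```python
# Compact sketch of the ALE union over modeled activation maps. The grid size,
# Gaussian width, peak normalization, and foci are toy choices; actual ALE uses
# sample-size-dependent kernels and a permutation-derived null distribution.
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (40, 48, 40)  # toy voxel grid

def ma_map(foci, sigma=2.0):
    """Modeled activation map: Gaussian-blurred foci, peak scaled to 1 (toy)."""
    vol = np.zeros(GRID)
    for x, y, z in foci:
        vol[x, y, z] = 1.0
    blurred = gaussian_filter(vol, sigma)
    return blurred / blurred.max()

studies = [  # hypothetical foci (voxel coordinates) from three studies
    [(20, 24, 20), (10, 30, 15)],
    [(21, 23, 19)],
    [(19, 25, 21), (30, 10, 25)],
]
ma = [ma_map(f) for f in studies]
ale = 1.0 - np.prod([1.0 - m for m in ma], axis=0)  # probabilistic union
print("peak ALE value:", ale.max().round(3))
```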
Affiliation(s)
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
7. Pesnot Lerousseau J, Arnold G, Auvray M. Training-induced plasticity enables visualizing sounds with a visual-to-auditory conversion device. Sci Rep 2021; 11:14762. PMID: 34285265; PMCID: PMC8292401; DOI: 10.1038/s41598-021-94133-4.
Abstract
Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, over the past decade the idea has emerged that it reflects a mixture of both. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants identified sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
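A Stroop-like crossmodal interference effect of the kind this paradigm measures is typically quantified as the reaction-time cost of incongruent relative to congruent visual distractors, compared before and after training. The sketch below uses mock numbers purely to illustrate the arithmetic of that index.

```python
# Sketch of the interference index behind a Stroop-like crossmodal paradigm:
# incongruent minus congruent mean reaction time, pre vs. post training.
# The reaction times below are mock data, not results from the paper.
import numpy as np

rt = {  # hypothetical reaction times (ms) for one participant
    ("pre", "congruent"): np.array([612, 598, 640, 605]),
    ("pre", "incongruent"): np.array([618, 601, 636, 611]),
    ("post", "congruent"): np.array([540, 552, 528, 561]),
    ("post", "incongruent"): np.array([602, 615, 590, 608]),
}

def interference(phase):
    """Incongruent minus congruent mean RT; larger = more visual intrusion."""
    return rt[(phase, "incongruent")].mean() - rt[(phase, "congruent")].mean()

# Interference emerging only after training would indicate that trained sound
# identification has come to share processes with vision.
print(f"pre: {interference('pre'):.0f} ms, post: {interference('post'):.0f} ms")
```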
Affiliation(s)
- Malika Auvray
- Sorbonne Université, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005, Paris, France.