1. Maimon A, Wald IY, Snir A, Ben Oz M, Amedi A. Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted. PLoS One 2024; 19:e0310033. PMID: 39321152; PMCID: PMC11423994; DOI: 10.1371/journal.pone.0310033.
Abstract
This study explores spatial perception of depth by employing a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While the representation of verticality used a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the attenuation of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (n = 5) or visually impaired (n = 2), investigates how intrinsic the ecologically inspired mapping of auditory cues for depth is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm after a very brief period of training, with the blind and visually impaired participants learning to use the algorithm as successfully as their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants achieved similar success rates in both conditions following training. The findings indicate that both intrinsic and learned cues come into play in depth perception. Moreover, they suggest that, through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which holds that, with training, their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
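The two axis mappings described in this abstract lend themselves to a compact illustration. The sketch below uses illustrative constants and function names that are not taken from the paper: elevation is mapped to pitch, while depth reduces gain and lowers a low-pass cutoff, mimicking how distance attenuates and filters high-frequency content.

```python
def elevation_to_pitch(y, y_max, f_min=220.0, f_max=1760.0):
    """Map elevation (0 = bottom, y_max = top) to a pitch in Hz, following
    the common cross-modal correspondence between height and pitch."""
    t = y / y_max
    # interpolate on a log scale so equal spatial steps sound like equal intervals
    return f_min * (f_max / f_min) ** t

def depth_to_gain_and_cutoff(d, d_max, c_max=8000.0, c_min=500.0):
    """Map depth d to (gain, low-pass cutoff in Hz): farther objects are
    quieter and lose high-frequency content, as sounds do over distance."""
    t = min(d / d_max, 1.0)
    gain = 1.0 - 0.9 * t                  # gain falls with distance
    cutoff = c_max - (c_max - c_min) * t  # cutoff drops toward c_min
    return gain, cutoff
```

The interchanged condition of the study would correspond to swapping which axis drives which function.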
Affiliation(s)
- Amber Maimon: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Iddo Yehoshua Wald: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Digital Media Lab, University of Bremen, Bremen, Germany
- Adi Snir: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Meshi Ben Oz: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
2. Goral O, Wald IY, Maimon A, Snir A, Golland Y, Goral A, Amedi A. Enhancing interoceptive sensibility through exteroceptive-interoceptive sensory substitution. Sci Rep 2024; 14:14855. PMID: 38937475; PMCID: PMC11211335; DOI: 10.1038/s41598-024-63231-4.
Abstract
Exploring a novel approach to mental health technology, this study illuminates the interplay between exteroception (perception of the external world) and interoception (perception of the internal world). Drawing on principles of sensory substitution, we investigated how interoceptive signals, particularly respiration, could be conveyed through exteroceptive modalities, namely vision and hearing. To this end, we developed an immersive multisensory environment that translates respiratory signals in real time into dynamic visual and auditory stimuli. The system was evaluated with a battery of psychological assessments; the findings indicate a significant increase in participants' interoceptive sensibility and an enhanced state of flow, signifying immersive and positive engagement with the experience. Furthermore, these two variables were correlated, suggesting a mutually reinforcing relationship between the state of flow and interoceptive sensibility. To our knowledge, this is the first sensory substitution approach to substitute between interoceptive and exteroceptive senses, and specifically to propose such substitution as a method for mental health interventions, paving the way for future research.
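The real-time translation described here can be sketched minimally. The mapping constants and parameter names below are illustrative assumptions, not the study's actual implementation: each normalized respiration sample drives visual and auditory stimulus parameters on every update.

```python
def breath_to_stimuli(expansion, f_base=110.0):
    """Map a normalized respiration sample (0 = full exhale, 1 = full inhale)
    to illustrative visual and auditory parameters for one update frame."""
    expansion = max(0.0, min(1.0, expansion))   # clamp sensor noise
    brightness = 0.2 + 0.8 * expansion          # visual: brighter on inhale
    pitch = f_base * (1.0 + 0.5 * expansion)    # audio: pitch rises on inhale
    volume = 0.3 + 0.7 * expansion              # audio: louder on inhale
    return {"brightness": brightness, "pitch_hz": pitch, "volume": volume}
```

Called on each sample of a breathing belt or similar sensor, this makes the interoceptive signal continuously audible and visible.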
Affiliation(s)
- Oran Goral: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Digital Media Lab, University of Bremen, Bremen, Germany
- Amber Maimon: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Adi Snir: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Yulia Golland: Sagol Center for Brain and Mind, Reichman University, Herzliya, Israel
- Aviva Goral: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
3. Zhang B, Zhang R, Zhao J, Yang J, Xu S. The mechanism of human color vision and potential implanted devices for artificial color vision. Front Neurosci 2024; 18:1408087. PMID: 38962178; PMCID: PMC11221215; DOI: 10.3389/fnins.2024.1408087.
Abstract
Vision plays a major role in perceiving external stimuli and information in our daily lives. The neural mechanism of color vision is complicated, involving the coordinated functions of a variety of cells, such as retinal cells and lateral geniculate nucleus cells, as well as multiple levels of the visual cortex. In this work, we review the history of experimental and theoretical studies on this issue, from the fundamental functions of individual cells of the visual system to the coding involved in the transmission of neural signals and the sophisticated brain processes at different levels. We discuss various hypotheses, models, and theories related to the mechanism of color vision and present suggestions for developing novel implanted devices that may help restore color vision in visually impaired people or introduce artificial color vision to those who need it.
Affiliation(s)
- Bingao Zhang: Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Rong Zhang: Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Jingjin Zhao: Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
- Jiarui Yang: Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Shengyong Xu: Key Laboratory for the Physics and Chemistry of Nanodevices, Institute of Physical Electronics, Department of Electronics, Peking University, Beijing, China
4. Jin R, Petoe MA, McCarthy CD, Stefopoulos S, Battiwalla X, McGinley J, Ayton LN. Functional performance of a vibrotactile sensory substitution device in people with profound vision loss. Optom Vis Sci 2024; 101:358-367. PMID: 38990235; DOI: 10.1097/opx.0000000000002151.
Abstract
SIGNIFICANCE This study shows that a vibrotactile sensory substitution device (SSD) prototype, VibroSight, has the potential to improve functional outcomes (i.e., obstacle avoidance, face detection) for people with profound vision loss, even after brief familiarization (<20 minutes).
PURPOSE Mobility aids such as long canes remain the mainstay of support for most people with vision loss, but they have limitations. Emerging technologies such as SSDs are gaining widespread interest in the low vision community. The aim of this project was to assess the efficacy of a prototype vibrotactile SSD for people with profound vision loss in face detection and obstacle avoidance tasks.
METHODS The VibroSight device was tested in a movement laboratory setting. The first task involved obstacle avoidance, in which participants were asked to walk through an obstacle course. The second was a face detection task, in which participants were asked to step toward the first face they detected. Exit interviews were also conducted to gather user experience data. Both people with low vision (n = 7) and orientation and mobility instructors (n = 4) completed the tasks.
RESULTS In the obstacle avoidance task, participants used the device to detect (p < 0.001) and avoid (p < 0.001) obstacles at a significantly larger range, but were slower (p < 0.001), compared with performing the task without the device. In the face detection task, participants demonstrated high accuracy, precision, and sensitivity when using the device. Interviews revealed a positive user experience, although participants indicated that a lighter, more compact design would be required for real-world use.
CONCLUSIONS Overall, the results verified the functionality of the vibrotactile SSD prototype. Further research is warranted to evaluate user performance after an extended training program and to incorporate new features, such as object recognition algorithms, into the device.
Affiliation(s)
- Rui Jin: Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, Victoria, Australia
- Chris D McCarthy: School of Software and Electrical Engineering, Faculty of Science, Engineering & Technology, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Jennifer McGinley: Department of Physiotherapy, University of Melbourne, Melbourne, Victoria, Australia
5. Várkuti B, Halász L, Hagh Gooie S, Miklós G, Smits Serena R, van Elswijk G, McIntyre CC, Lempka SF, Lozano AM, Erőss L. Conversion of a medical implant into a versatile computer-brain interface. Brain Stimul 2024; 17:39-48. PMID: 38145752; DOI: 10.1016/j.brs.2023.12.011.
Abstract
BACKGROUND Information transmission into the human nervous system is the basis for a variety of prosthetic applications. Spinal cord stimulation (SCS) systems are widely available, have a well documented safety record, can be implanted minimally invasively, and are known to stimulate afferent pathways. Nonetheless, SCS devices are not yet used for computer-brain-interfacing applications.
OBJECTIVE Here we aimed to establish computer-to-brain communication via medical SCS implants in a group of 20 individuals who had undergone SCS implantation for the treatment of chronic neuropathic pain.
METHODS In the initial phase, we conducted interface calibration to determine personalized stimulation settings that yielded distinct and reproducible sensations. These settings were subsequently used to generate inputs for a range of behavioral tasks. We evaluated the required calibration time, the task training duration, and the subsequent performance in each task.
RESULTS We established a stable spinal computer-brain interface in 18 of the 20 participants. Each of the 18 then performed one or more of the following tasks: a rhythm-discrimination task (n = 13), a Morse-decoding task (n = 3), and/or two different balance/body-posture tasks (n = 18; n = 5). The median calibration time was 79 min. The median training time for learning to use the interface in a subsequent task was 1 min 40 s. In each task, every participant performed above chance level.
CONCLUSION The results constitute the first proof of concept of a general-purpose computer-brain interface paradigm that could be deployed on present-day medical SCS platforms.
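The Morse-decoding task illustrates how symbolic input can become a stimulation schedule. The paper does not publish its encoding, so the following is a generic sketch using standard Morse timing; the dictionary, names, and durations are illustrative.

```python
MORSE = {"E": ".", "T": "-", "S": "...", "O": "---"}  # illustrative subset

def text_to_pulses(text, unit_ms=200):
    """Convert text into (on_ms, off_ms) stimulation pulses using standard
    Morse timing: dot = 1 unit, dash = 3 units, gap inside a letter = 1 unit,
    gap between letters = 3 units."""
    pulses = []
    for i, ch in enumerate(text.upper()):
        code = MORSE[ch]
        for j, sym in enumerate(code):
            on = unit_ms if sym == "." else 3 * unit_ms
            last_sym = j == len(code) - 1
            last_ch = i == len(text) - 1
            off = 0 if last_sym and last_ch else (3 * unit_ms if last_sym else unit_ms)
            pulses.append((on, off))
    return pulses
```

Each (on, off) pair would gate a participant's calibrated stimulation setting; the participant then decodes the rhythm back into text.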
Affiliation(s)
- László Halász: Albert-Szentgyörgyi Medical School, Doctoral School of Clinical Medicine, Clinical and Experimental Research for Reconstructive and Organ-Sparing Surgery, University of Szeged, Szeged, Hungary
- Gabriella Miklós: CereGate GmbH, München, Germany; National Institute of Mental Health, Neurology, and Neurosurgery, Budapest, Hungary; János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Ricardo Smits Serena: CereGate GmbH, München, Germany; Department of Orthopaedics and Sports Orthopaedics, Klinikum Rechts der Isar, Technical University of Munich, München, Germany
- Cameron C McIntyre: Department of Biomedical Engineering and Department of Neurosurgery, Duke University, Durham, NC, USA
- Scott F Lempka: Department of Biomedical Engineering, Department of Anesthesiology and the Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
- Andres M Lozano: Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Loránd Erőss: National Institute of Mental Health, Neurology, and Neurosurgery, Budapest, Hungary
6. de Paz C, Travieso D. A direct comparison of sound and vibration as sources of stimulation for a sensory substitution glove. Cogn Res Princ Implic 2023; 8:41. PMID: 37402032; DOI: 10.1186/s41235-023-00495-w.
Abstract
Sensory substitution devices (SSDs) facilitate the detection of environmental information through enhancement of touch and/or hearing capabilities. Research has demonstrated that several tasks can be successfully completed using acoustic, vibrotactile, and multimodal devices. The suitability of a substituting modality is also mediated by the type of information required to perform the specific task. The present study tested the adequacy of touch and hearing in a grasping task by utilizing a sensory substitution glove. The substituting modalities convey the distance between the fingers and the object through increases in stimulation intensity. A psychophysical magnitude-estimation experiment was conducted: 40 blindfolded sighted participants discriminated the intensity of vibrotactile and acoustic stimulation equally well, although they experienced some difficulty with the more intense stimuli. Additionally, a grasping task involving cylindrical objects of varying diameters, distances, and orientations was performed. Thirty blindfolded sighted participants were divided into vibration, sound, and multimodal groups. High performance was achieved (84% correct grasps), with equivalent success rates across groups. Movement variables showed more precision and confidence in the multimodal condition. In a questionnaire, the multimodal group indicated their preference for using a multimodal SSD in daily life and identified vibration as their primary source of stimulation. These results demonstrate that performance improves with specific-purpose SSDs when the information necessary for a task is identified and coupled with the delivered stimulation. Furthermore, the results suggest that functional equivalence between substituting modalities is achievable when these steps are met.
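The distance-to-intensity coupling at the heart of the glove can be sketched in a few lines. The direction of the mapping (closer = stronger) and the working range are assumptions for illustration; the abstract does not specify these values.

```python
def distance_to_intensity(d_cm, d_max_cm=30.0):
    """Map finger-to-object distance to a normalized stimulation intensity
    in [0, 1]. Assumed direction: closer objects yield stronger stimulation;
    beyond d_max_cm no stimulation is delivered."""
    if d_cm >= d_max_cm:
        return 0.0
    return 1.0 - d_cm / d_max_cm
```

The same scalar can drive either a vibration amplitude or a sound level, which is what makes the substituting modalities interchangeable for this task.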
Affiliation(s)
- Carlos de Paz: Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- David Travieso: Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
7. Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. PMID: 36990952; PMCID: PMC10121905; DOI: 10.1016/j.tins.2023.02.004.
Abstract
Crossmodal plasticity is a textbook example of the brain's ability to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, is dependent on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and that crossmodal plasticity instead represents a dynamically adaptable neuronal process. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which begin as early as mild-to-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited to improve clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral: Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma: Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
8. Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. PMID: 36776219; PMCID: PMC9909096; DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. It conveys spatial information, customarily acquired through vision, through the auditory channel, combining sensory (auditory) features with symbolic language (spoken) features. Topo-Speech sweeps the visual scene or image and represents each object's identity by naming it in a spoken word, while simultaneously conveying its location: the x-axis of the scene is mapped to the time at which the object is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind participants showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, and convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the view that the blind are capable of some aspects of spatial representation, as conveyed by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
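The x-to-time and y-to-pitch scheme described above can be sketched as follows. The sweep duration, frequency range, and function names are illustrative, not taken from the paper.

```python
def topo_speech_schedule(objects, scene_w, scene_h, sweep_s=2.0,
                         f_min=200.0, f_max=800.0):
    """Illustrative Topo-Speech-style scheduling: each (name, x, y) object is
    spoken at a time proportional to its x position within a left-to-right
    sweep, at a pitch proportional to its y position (higher = higher pitch)."""
    events = []
    for name, x, y in objects:
        onset = sweep_s * (x / scene_w)                     # x axis -> time
        pitch = f_min + (f_max - f_min) * (y / scene_h)     # y axis -> pitch
        events.append({"name": name, "onset_s": onset, "pitch_hz": pitch})
    # play events in sweep order, left to right
    return sorted(events, key=lambda e: e["onset_s"])
```

A text-to-speech engine would then utter each name at its scheduled onset and pitch.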
Affiliation(s)
- Amber Maimon: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer: Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler: Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
9. Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Cross-modal correspondence enhances elevation localization in visual-to-auditory sensory substitution. Front Psychol 2023; 14:1079998. PMID: 36777233; PMCID: PMC9909421; DOI: 10.3389/fpsyg.2023.1079998.
Abstract
Introduction Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a Virtual Acoustic Space (VAS), synthesizing the natural acoustic cues used for sound localization via Head Related Transfer Functions (HRTFs). However, elevation perception is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method used to convey elevation information is based on the audiovisual cross-modal correspondence between pitch and visual elevation. The main drawback of this second method is that the narrow spectral band of the sounds limits the ability to perceive elevation through HRTFs.
Method In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device in which elevation is conveyed either by a spatialization-based method alone (Noise encoding) or by pitch-based methods with different spectral complexities (Monotonic and Harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings.
Results Participants localized elevation more accurately with the pitch-based encodings than with the spatialization-based method alone. Only slight differences in azimuth localization performance were found between the encodings.
Discussion This study suggests the intuitiveness of a pitch-based encoding, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
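The contrast between the Monotonic and Harmonic encodings can be sketched as follows (the frequency and elevation ranges are illustrative, not the study's values): both tie the fundamental to elevation, but the harmonic variant widens the spectrum so that HRTF spatialization cues remain usable.

```python
def monotonic_encoding(elevation_deg, f_min=250.0, f_max=4000.0,
                       e_min=-30.0, e_max=30.0):
    """Single pure tone whose frequency rises monotonically with elevation."""
    t = (elevation_deg - e_min) / (e_max - e_min)
    # log-scale interpolation: equal elevation steps = equal musical intervals
    return [f_min * (f_max / f_min) ** t]

def harmonic_encoding(elevation_deg, n_harmonics=4, **kw):
    """Same elevation-driven fundamental, plus harmonics that broaden the
    spectrum so binaural/HRTF cues are better preserved."""
    f0 = monotonic_encoding(elevation_deg, **kw)[0]
    return [f0 * k for k in range(1, n_harmonics + 1)]
```

Each returned list is the set of partials to synthesize for one pixel or object before spatialization.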
Affiliation(s)
- Camille Bordeau: LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
- Julien Dubois: ImViA EA 7535, Université de Bourgogne, Dijon, France
- Maxime Ambard: LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
10. Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. PMID: 36936618; PMCID: PMC10017858; DOI: 10.3389/fnhum.2023.1058617.
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. Here, we aimed to use the EyeMusic to cover the areas outside the visual field of sighted individuals. In this initial proof-of-concept study, we test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and adequately placing stimuli, using sound to represent the areas outside the standard human visual field. They were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory (the remaining 270°) input for successful performance of the task; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance level after a brief 1-h online training session and one on-site training session averaging 20 min. In some cases, they could even draw a 2D representation of the perceived image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
Affiliation(s)
- Shira Shvadron: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel; Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany; Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel; Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
11. Samara M, Deriche M, Al-Sadah J, Osais Y. Design and implementation of a real-time color recognition system for the visually impaired. Arab J Sci Eng 2022. DOI: 10.1007/s13369-022-07506-w.
12. Steffens H, Schutte M, Ewert SD. Acoustically driven orientation and navigation in enclosed spaces. J Acoust Soc Am 2022; 152:1767. PMID: 36182293; DOI: 10.1121/10.0013702.
Abstract
Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by "teleporting" in steps on a grid in the invisible environment. A head mounted display showed control elements and the environment in a visual reference condition. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in view or hand direction or in a rotating, "radar"-like fashion in 90° steps to all sides. Time to complete, number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound with little effect of room acoustics and control mode. Underlying acoustic cues were analyzed using an auditory model.
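The ray-casting principle described above reduces to a simple geometric computation. Below is a 2D sketch for a rectangular room; the actual study used full 3D real-time virtual acoustics, and the names here are illustrative.

```python
def cast_ray_in_room(pos, direction, room_w, room_h):
    """Cast a 2D ray from pos = (x, y) inside a room of size room_w x room_h
    (direction must be non-zero) and return the collision point with the
    walls, where the virtual sound source would be placed."""
    x, y = pos
    dx, dy = direction
    ts = []  # candidate travel distances to each wall the ray can reach
    if dx > 0: ts.append((room_w - x) / dx)
    if dx < 0: ts.append(-x / dx)
    if dy > 0: ts.append((room_h - y) / dy)
    if dy < 0: ts.append(-y / dy)
    t = min(ts)  # first wall hit along the ray
    return (x + t * dx, y + t * dy)
```

In the "radar"-like control mode, this computation would simply be repeated for rays cast at 90° intervals around the listener.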
Affiliation(s)
- Henning Steffens
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Michael Schutte
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
|
13
|
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268 PMCID: PMC9297294 DOI: 10.1016/j.neuropsychologia.2022.108305] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 04/30/2022] [Accepted: 06/13/2022] [Indexed: 11/26/2022]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent an Argus II retinal prosthesis implant with its accompanying training, as well as extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD required extensive training from the onset. Yet following the extensive training program with the EyeMusic SSD, our subject reported that the sensory substitution device allowed him a richer, more complex perceptual experience that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices on the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
- Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
- Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
|
14
|
Romeo K, Pissaloux E, Gay SL, Truong NT, Djoussouf L. The MAPS: Toward a Novel Mobility Assistance System for Visually Impaired People. SENSORS 2022; 22:s22093316. [PMID: 35591005 PMCID: PMC9131141 DOI: 10.3390/s22093316] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 04/12/2022] [Accepted: 04/21/2022] [Indexed: 11/16/2022]
Abstract
This paper introduces the design of a novel indoor and outdoor mobility assistance system for visually impaired people. The system is named the MAPS (Mobility Assistance Path Planning and orientation in Space), and it is based on the theoretical frameworks of mobility and spatial cognition. Its originality comes from its assistance of the two main functions of navigation: locomotion and wayfinding. Locomotion involves the ability to avoid obstacles, while wayfinding involves orientation in space and ad hoc path planning in an (unknown) environment. The MAPS architecture proposes a new low-cost system for indoor–outdoor cognitive mobility assistance, relying on two cooperating hardware feedbacks: the Force Feedback Tablet (F2T) and the TactiBelt. The F2T is an electromechanical tablet using haptic effects that allow the exploration of images and maps. It is used to assist with map learning, the emergence of space awareness, path planning, wayfinding and effective journey completion, and it helps a VIP construct a mental map of their environment. The TactiBelt is a vibrotactile belt providing active support for the path integration strategy while navigating; it helps the VIP localize the nearest obstacles in real time and provides the ego-directions needed to reach the destination. Information about the surrounding space is acquired through vision (cameras) and combined with localization on a map. The preliminary evaluations of the MAPS focused on the interaction with the environment and on feedback from the users (blindfolded participants) to confirm its effectiveness in a simulated environment (a labyrinth). These lead users easily interpreted the data provided by the system and considered it relevant for effective independent navigation.
Affiliation(s)
- Katerine Romeo
- LITIS Lab, University of Rouen Normandy, 76800 St-Etienne-du-Rouvray, France
- Correspondence:
- Edwige Pissaloux
- LITIS Lab, University of Rouen Normandy, 76800 St-Etienne-du-Rouvray, France
- Simon L. Gay
- LCIS Lab, University of Grenoble Alpes, 26000 Valence, France
- Ngoc-Tan Truong
- LITIS Lab, University of Rouen Normandy, 76800 St-Etienne-du-Rouvray, France
- Lilia Djoussouf
- LITIS Lab, University of Rouen Normandy, 76800 St-Etienne-du-Rouvray, France
|
15
|
Trepkowski C, Marquardt A, Eibich TD, Shikanai Y, Maiero J, Kiyokawa K, Kruijff E, Schoning J, Konig P. Multisensory Proximity and Transition Cues for Improving Target Awareness in Narrow Field of View Augmented Reality Displays. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1342-1362. [PMID: 34591771 DOI: 10.1109/tvcg.2021.3116673] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Augmented reality applications allow users to enrich their real surroundings with additional digital content. However, due to the limited field of view of augmented reality devices, it can sometimes be difficult to become aware of newly emerging information inside or outside the field of view. Typical visual conflicts like clutter and occlusion of augmentations occur and can be further aggravated especially in the context of dense information spaces. In this article, we evaluate how multisensory cue combinations can improve the awareness for moving out-of-view objects in narrow field of view augmented reality displays. We distinguish between proximity and transition cues in either visual, auditory or tactile manner. Proximity cues are intended to enhance spatial awareness of approaching out-of-view objects while transition cues inform the user that the object just entered the field of view. In study 1, user preference was determined for 6 different cue combinations via forced-choice decisions. In study 2, the 3 most preferred modes were then evaluated with respect to performance and awareness measures in a divided attention reaction task. Both studies were conducted under varying noise levels. We show that on average the Visual-Tactile combination leads to 63% and Audio-Tactile to 65% faster reactions to incoming out-of-view augmentations than their Visual-Audio counterpart, indicating a high usefulness of tactile transition cues. We further show a detrimental effect of visual and audio noise on performance when feedback included visual proximity cues. Based on these results, we make recommendations to determine which cue combination is appropriate for which application.
|
16
|
Longin L, Deroy O. Augmenting perception: How artificial intelligence transforms sensory substitution. Conscious Cogn 2022; 99:103280. [PMID: 35114632 DOI: 10.1016/j.concog.2022.103280] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 11/26/2021] [Accepted: 01/12/2022] [Indexed: 01/28/2023]
Abstract
What happens when artificial sensors are coupled with the human senses? Using technology to extend the senses is an old human dream, on which sensory substitution and other augmentation technologies have already delivered. Laser tactile canes, corneal implants and magnetic belts can correct or extend what individuals could otherwise perceive. Here we show why accommodating intelligent sensory augmentation devices not only improves but also changes the way we think about and classify earlier sensory augmentation devices. We review the benefits in terms of signal processing and show why non-linear transformation is more than a mere improvement over classical linear transformation.
Affiliation(s)
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany.
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany; Munich Center for Neurosciences-Brain & Mind, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany; Institute of Philosophy, School of Advanced Study, University of London, London WC1E 7HU, United Kingdom
|
17
|
Chundury P, Patnaik B, Reyazuddin Y, Tang C, Lazar J, Elmqvist N. Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1084-1094. [PMID: 34587061 DOI: 10.1109/tvcg.2021.3114829] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
For all its potential in supporting data analysis, particularly in exploratory situations, visualization also creates accessibility barriers for blind and visually impaired individuals. Regardless of how effective a visualization is, providing equal access for blind users requires a paradigm shift in the visualization research community. To enact such a shift, it is not sufficient to treat visualization accessibility as merely another technical problem to overcome. Instead, supporting the millions of blind and visually impaired users around the world, whose needs for data analysis are just as valid as those of sighted individuals, requires a respectful, equitable, and holistic approach that includes all users from the onset. In this paper, we draw on accessibility research methodologies to make inroads towards such an approach. We first identify the people who have specific insight into how blind people perceive the world: orientation and mobility (O&M) experts, instructors who teach blind individuals how to navigate the physical world using non-visual senses. We interview 10 O&M experts, all of them blind, to understand how best to use senses other than vision for conveying spatial layouts. We then investigate our qualitative findings using thematic analysis. While blind people in general tend to use both sound and touch to understand their surroundings, we focused on auditory affordances and how they can be used to make data visualizations accessible, using sonification and auralization. However, our experts recommended supporting a combination of senses, sound and touch, to make charts accessible, as blind individuals may be more familiar with exploring tactile charts. We report results on both sound and touch affordances, and conclude by discussing implications for accessible visualization for blind individuals.
|
18
|
Favela LH, Amon MJ, Lobo L, Chemero A. Empirical Evidence for Extended Cognitive Systems. Cogn Sci 2021; 45:e13060. [PMID: 34762738 PMCID: PMC9285798 DOI: 10.1111/cogs.13060] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 08/28/2021] [Accepted: 09/30/2021] [Indexed: 11/27/2022]
Abstract
We present an empirically supported theoretical and methodological framework for quantifying the system-level properties of person-plus-tool interactions in order to answer the question: “Are person-plus-tool systems extended cognitive systems?” Nineteen participants provided perceptual judgments regarding their ability to pass through apertures of various widths while using visual information, blindfolded wielding a rod, or blindfolded wielding an Enactive Torch, a vibrotactile sensory-substitution device for detecting distance. Monofractal, multifractal, and recurrence quantification analyses were conducted to assess features of person-plus-tool movement dynamics. Trials where people utilized the rod or Enactive Torch demonstrated stable “self-similarity”, or indices of healthy and adaptive single systems, regardless of aperture width, trial order, features of the participants’ judgments, and participant characteristics. Enactive Torch trials exhibited a somewhat greater range of dynamic fluctuations than the rod trials, as well as less movement recurrence, suggesting that the Enactive Torch allowed for more exploratory movements. Findings provide support for the notion that person-plus-tool systems can be classified as extended cognitive systems, and offer a framework for quantifying the system-level properties of these systems. Implications concerning future research on extended cognition are discussed.
Affiliation(s)
- Luis H Favela
- Department of Philosophy, University of Central Florida; Cognitive Sciences Program, University of Central Florida
- Mary Jean Amon
- School of Modeling, Simulation, and Training, University of Central Florida
- Lorena Lobo
- Departamento de Psicología, Universidad a Distancia de Madrid
- Anthony Chemero
- Department of Philosophy, University of Cincinnati; Department of Psychology, University of Cincinnati
|
19
|
Impact of a Vibrotactile Belt on Emotionally Challenging Everyday Situations of the Blind. SENSORS 2021; 21:s21217384. [PMID: 34770689 PMCID: PMC8587958 DOI: 10.3390/s21217384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 10/31/2021] [Accepted: 11/03/2021] [Indexed: 11/16/2022]
Abstract
Spatial orientation and navigation depend primarily on vision. Blind people lack this critical source of information. To facilitate wayfinding and to increase the feeling of safety for these people, the "feelSpace belt" was developed. The belt signals magnetic north as a fixed reference frame via vibrotactile stimulation. This study investigates the effect of the belt on typical orientation and navigation tasks and evaluates the emotional impact. Eleven blind subjects wore the belt daily for seven weeks. Before, during and after the study period, they filled in questionnaires to document their experiences. A small sub-group of the subjects took part in behavioural experiments before and after four weeks of training, i.e., a straight-line walking task to evaluate the belt's effect on keeping a straight heading, an angular rotation task to examine effects on egocentric orientation, and a triangle completion navigation task to test the ability to take shortcuts. The belt reduced subjective discomfort and increased confidence during navigation. Additionally, the participants felt safer wearing the belt in various outdoor situations. Furthermore, the behavioural tasks point towards an intuitive comprehension of the belt. Altogether, the blind participants benefited from the vibrotactile belt as an assistive technology in challenging everyday situations.
|
20
|
Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes-Design, Implementation, and Usability Audit. SENSORS 2021; 21:s21217351. [PMID: 34770658 PMCID: PMC8587929 DOI: 10.3390/s21217351] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 10/29/2021] [Accepted: 11/01/2021] [Indexed: 11/20/2022]
Abstract
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
|
21
|
Thaler L, Norman LJ. No effect of 10-week training in click-based echolocation on auditory localization in people who are blind. Exp Brain Res 2021; 239:3625-3633. [PMID: 34609546 PMCID: PMC8599323 DOI: 10.1007/s00221-021-06230-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 09/18/2021] [Indexed: 11/16/2022]
Abstract
What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet, it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities superior to those of people who are blind and do not use click-based echolocation (Vercillo et al., Neuropsychologia 67: 35–40, 2015). Based on this hypothesis, we might predict that training in click-based echolocation leads to improved performance in auditory localization tasks in people who are blind. Here we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence for an improvement in auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Considering the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK.
- Liam J Norman
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
|
22
|
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18126216. [PMID: 34201269 PMCID: PMC8228544 DOI: 10.3390/ijerph18126216] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/03/2021] [Accepted: 06/03/2021] [Indexed: 11/20/2022]
Abstract
Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind groups to recognize objects and to perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of the successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained only laboratory-scale research or pilot demonstrations. This high latency for data conversion leads to challenges in perceiving fast-moving objects or rapid environmental changes. To reduce this latency, prior analysis of auditory sensitivity is necessary. However, existing auditory sensitivity analyses are subjective because they were conducted using human behavioral analysis. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity to reduce transmission latency in visual-auditory sensory substitution, which is related to the perception of visual information. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in human behavioral experiments. We conducted experiments with three participant groups, including sighted users (SU), congenitally blind (CB) and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal for sensory substitution could be reduced by 50%. This result indicates the possibility of improving the performance of the conventional vOICe method by up to two times. We confirmed that our experimental results are consistent with human assessment through behavioral experiments. Analyzing auditory sensitivity with deep learning models has the potential to improve the efficiency of sensory substitution.
|
23
|
Netzer O, Heimler B, Shur A, Behor T, Amedi A. Backward spatial perception can be augmented through a novel visual-to-auditory sensory substitution algorithm. Sci Rep 2021; 11:11944. [PMID: 34099756 PMCID: PMC8184900 DOI: 10.1038/s41598-021-88595-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 02/08/2021] [Indexed: 11/23/2022] Open
Abstract
Can humans extend and augment their natural perceptions during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the “Topo-Speech”, which conveys the identity of objects through language and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm in conveying the location of objects in the forward or backward spatial fields following ~10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition) or behind their back (backward condition). Crucially, performance in the two conditions was entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perception. The implications of these results are discussed in relation to spatial perception, sensory augmentation and sensory rehabilitation.
Affiliation(s)
- Ophir Netzer
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel; Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Shur
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Tomer Behor
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel; Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
|
24
|
Norman LJ, Dodsworth C, Foresteire D, Thaler L. Human click-based echolocation: Effects of blindness and age, and real-life implications in a 10-week training program. PLoS One 2021; 16:e0252330. [PMID: 34077457 PMCID: PMC8171922 DOI: 10.1371/journal.pone.0252330] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/13/2021] [Indexed: 01/19/2023] Open
Abstract
Understanding the factors that determine if a person can successfully learn a novel sensory skill is essential for understanding how the brain adapts to change, and for providing rehabilitative support for people with sensory loss. We report a training study investigating the effects of blindness and age on the learning of a complex auditory skill: click-based echolocation. Blind and sighted participants of various ages (21-79 yrs; median blind: 45 yrs; median sighted: 26 yrs) trained in 20 sessions over the course of 10 weeks in various practical and virtual navigation tasks. Blind participants also took part in a 3-month follow-up survey assessing the effects of the training on their daily life. We found that both sighted and blind people improved considerably on all measures, and in some cases performed comparably to expert echolocators at the end of training. Somewhat surprisingly, sighted people performed better than those who were blind in some cases, although our analyses suggest that this might be better explained by the younger age (or superior binaural hearing) of the sighted group. Importantly, however, neither age nor blindness was a limiting factor in participants' rate of learning (i.e., the difference in performance from the first to the final session) or in their ability to apply their echolocation skills to novel, untrained tasks. Furthermore, in the follow-up survey, all participants who were blind reported improved mobility, and 83% reported better independence and wellbeing. Overall, our results suggest that the ability to learn click-based echolocation is not strongly limited by age or level of vision. This has positive implications for the rehabilitation of people with vision loss or in the early stages of progressive vision loss.
Affiliation(s)
- Liam J. Norman
- Department of Psychology, Durham University, Durham, United Kingdom
- Lore Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
|
25
|
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446 PMCID: PMC8078811 DOI: 10.1371/journal.pone.0250281] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/01/2021] [Indexed: 11/30/2022] Open
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck towards adopting SSDs in everyday life by blind users, is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group which had no training. At baseline, before any EyeMusic training, participants SSD objects’ identification was significantly above chance, highlighting the algorithm’s intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training, suggest a trend for an advantage of multisensory vs. unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given SSD algorithms are based on such correspondences. 
Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which can easily be implemented for blind and visually impaired individuals as well, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
Collapse
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- * E-mail: (AA); (GB)
| | - Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
| | - Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
| | - Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
| | - Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- * E-mail: (AA); (GB)
| |
Collapse
|
26
|
Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane. SENSORS 2021; 21:s21082700. [PMID: 33921202 PMCID: PMC8070041 DOI: 10.3390/s21082700] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 04/07/2021] [Accepted: 04/09/2021] [Indexed: 11/17/2022]
Abstract
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to their orientation and mobility. Many devices are available to help blind people navigate in their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing for the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Subjects were requested to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant had to perform 12 runs with 12 different obstacle configurations. All participants were able to learn quickly to use the EyeCane and successfully complete all trials. Amongst the various obstacles, the step proved the hardest to detect and resulted in more collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect those on the ground, rendering downward obstacles more hazardous for navigation.
Collapse
|
27
|
Paré S, Bleau M, Djerourou I, Malotaux V, Kupers R, Ptito M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS One 2021; 16:e0247448. [PMID: 33635892 PMCID: PMC7909643 DOI: 10.1371/journal.pone.0247448] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 02/07/2021] [Indexed: 12/02/2022] Open
Abstract
Blind individuals often report difficulties navigating and detecting objects placed outside their peri-personal space. Although classical sensory substitution devices could be helpful in this respect, these devices often give a complex signal that requires intensive training to analyze. New devices that provide a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors to either detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance mode, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind and 24 blindfolded-sighted participants for their ability to detect obstacles and to navigate in an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early blind and late blind participants were faster than their sighted counterparts to navigate through the obstacle course. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.
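The horizontal spatialization this abstract describes can be illustrated with a minimal stereo-panning sketch. This is a generic constant-power panning law, offered only as an illustration of the principle; it is not the device's actual implementation, and the angle range and gain law are assumptions.

```python
import math

def azimuth_to_stereo_gains(azimuth_deg: float) -> tuple[float, float]:
    """Map an obstacle's horizontal angle (-90 = far left, +90 = far right)
    to left/right channel gains using a constant-power panning law."""
    # Clamp the azimuth and normalize it to a pan position in [0, 1].
    pan = (max(-90.0, min(90.0, azimuth_deg)) + 90.0) / 180.0
    # Constant-power law keeps perceived loudness stable across positions.
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right

# An obstacle straight ahead (0 degrees) yields equal gains in both ears,
# while one at -90 degrees plays only in the left channel.
l, r = azimuth_to_stereo_gains(0.0)
```

Applying such a gain pair to a warning tone lets the listener localize an obstacle by interaural level difference alone, which is consistent with the "strictly spatial cues" approach the abstract emphasizes.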
Collapse
Affiliation(s)
- Samuel Paré
- École d’Optométrie, Université de Montréal, Québec, Canada
| | - Maxime Bleau
- École d’Optométrie, Université de Montréal, Québec, Canada
| | | | - Vincent Malotaux
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
| | - Ron Kupers
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
| | - Maurice Ptito
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
28
|
Ptito M, Bleau M, Djerourou I, Paré S, Schneider FC, Chebat DR. Brain-Machine Interfaces to Assist the Blind. Front Hum Neurosci 2021; 15:638887. [PMID: 33633557 PMCID: PMC7901898 DOI: 10.3389/fnhum.2021.638887] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 01/19/2021] [Indexed: 12/31/2022] Open
Abstract
The loss or absence of vision is probably one of the most incapacitating events that can befall a human being. The importance of vision for humans is also reflected in brain anatomy, as approximately one third of the human brain is devoted to vision. It is therefore unsurprising that throughout history many attempts have been undertaken to develop devices aiming at substituting for a missing visual capacity. In this review, we present two concepts that have been prevalent over the last two decades. The first concept is sensory substitution, which refers to the use of another sensory modality to perform a task that is normally primarily subserved by the lost sense. The second concept is cross-modal plasticity, which occurs when loss of input in one sensory modality leads to reorganization in brain representation of other sensory modalities. Both phenomena are training-dependent. We also briefly describe the history of blindness from ancient times to modernity, and then proceed to address the means that have been used to help blind individuals, with an emphasis on modern technologies, both invasive (various types of surgical implants) and non-invasive devices. With the advent of brain imaging, it has become possible to peer into the neural substrates of sensory substitution and highlight the magnitude of the plastic processes that lead to a rewired brain. Finally, we address the important question of the value and practicality of the available technologies and future directions.
Collapse
Affiliation(s)
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
| | - Maxime Bleau
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Ismaël Djerourou
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Samuel Paré
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Fabien C. Schneider
- TAPE EA7423 University of Lyon-Saint Etienne, Saint Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
| | - Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israël
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israël
| |
Collapse
|
29
|
Zilbershtain-Kra Y, Graffi S, Ahissar E, Arieli A. Active sensory substitution allows fast learning via effective motor-sensory strategies. iScience 2021; 24:101918. [PMID: 33392481 PMCID: PMC7773576 DOI: 10.1016/j.isci.2020.101918] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 10/25/2020] [Accepted: 12/07/2020] [Indexed: 11/28/2022] Open
Abstract
We examined the development of new sensing abilities in adults by training participants to perceive remote objects through their fingers. Using an Active-Sensing based sensory Substitution device (ASenSub), participants quickly learned to perceive rapidly via the new modality and preserved their high performance for more than 20 months. Both sighted and blind participants exhibited almost complete transfer of performance from 2D images to novel 3D physical objects. Perceptual accuracy and speed using the ASenSub were, on average, 300% and 600% better than previous reports for 2D images and 3D objects. This improvement is attributed to the ability of the participants to employ their own motor-sensory strategies. Sighted participants' dominant strategy was based on motor-sensory convergence on the most informative regions of objects, similar to fixation patterns in vision. Congenitally blind participants did not show such a tendency, and many of their exploratory procedures resembled those observed with natural touch.
Collapse
Affiliation(s)
- Yael Zilbershtain-Kra
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Shmuel Graffi
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Ehud Ahissar
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Amos Arieli
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| |
Collapse
|
30
|
Abstract
Making sense of the world requires perceptual constancy—the stable perception of an object across changes in one’s sensation of it. To investigate whether constancy is intrinsic to perception, we tested whether humans can learn a form of constancy that is unique to a novel sensory skill (here, the perception of objects through click-based echolocation). Participants judged whether two echoes were different either because: (a) the clicks were different, or (b) the objects were different. For differences carried through spectral changes (but not level changes), blind expert echolocators spontaneously showed a high constancy ability (mean d′ = 1.91) compared to sighted and blind people new to echolocation (mean d′ = 0.69). Crucially, sighted controls improved rapidly in this ability through training, suggesting that constancy emerges in a domain with which the perceiver has no prior experience. This provides strong evidence that constancy is intrinsic to human perception. This study shows that people who learn a new skill to sense their environment - here: listening to sound echoes - can correctly represent the physical properties of objects. This result has implications for effectively rehabilitating people with sensory loss.
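The sensitivity values reported above (mean d′ = 1.91 vs. 0.69) are standard signal detection theory measures. As a quick illustration of how such a value is computed from hit and false-alarm rates (a generic sketch, not the paper's analysis code):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    i.e., the separation between signal and noise distributions in
    standard-deviation units."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Example: 84% hits against 16% false alarms gives d' close to 2.0,
# comparable to the expert echolocators' mean of 1.91 reported above.
sensitivity = d_prime(0.84, 0.16)
```

A d′ near zero (as for the untrained groups early on) means hits barely exceed false alarms, i.e., the two echo classes are not yet discriminable to the listener.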
Collapse
|
31
|
Zai AT, Cavé-Lopez S, Rolland M, Giret N, Hahnloser RHR. Sensory substitution reveals a manipulation bias. Nat Commun 2020; 11:5940. [PMID: 33230182 PMCID: PMC7684286 DOI: 10.1038/s41467-020-19686-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 10/14/2020] [Indexed: 01/01/2023] Open
Abstract
Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. However, many substitution systems are not well accepted by subjects. To explore the effect of sensory substitution on voluntary action repertoires and their associated affective valence, we study deaf songbirds to which we provide visual feedback as a substitute of auditory feedback. Surprisingly, deaf birds respond appetitively to song-contingent binary visual stimuli. They skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to vocal repertoires. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a preference for actions that elicit sensory feedback over actions that do not, suggesting that substitution systems should be designed to exploit the drive to manipulate.
Collapse
Affiliation(s)
- Anja T Zai
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Sophie Cavé-Lopez
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057, Zurich, Switzerland
| | - Manon Rolland
- Institut des Neurosciences Paris Saclay, CNRS, Université Paris Saclay, Orsay, France
| | - Nicolas Giret
- Institut des Neurosciences Paris Saclay, CNRS, Université Paris Saclay, Orsay, France
| | - Richard H R Hahnloser
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057, Zurich, Switzerland.
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland.
| |
Collapse
|
32
|
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020; 116:494-507. [DOI: 10.1016/j.neubiorev.2020.06.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 06/07/2020] [Accepted: 06/25/2020] [Indexed: 02/06/2023]
|
33
|
Neugebauer A, Rifai K, Getzlaff M, Wahl S. Navigation aid for blind persons by visual-to-auditory sensory substitution: A pilot study. PLoS One 2020; 15:e0237344. [PMID: 32818953 PMCID: PMC7446825 DOI: 10.1371/journal.pone.0237344] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/23/2020] [Indexed: 11/19/2022] Open
Abstract
PURPOSE In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device to improve the performance of blind persons in navigation and recognition tasks. METHODS A sensory substitution algorithm that translates 3D visual information into audio feedback was designed. This algorithm was integrated into an augmented reality based mobile phone application. Using the mobile device as a sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, virtual 3D objects and structures had to be identified by the participants using the same sensory substitution device. RESULTS The mobile application enabled participants to complete the navigation and object recognition tasks in an experimental environment within the very first trials, without previous training. This demonstrates the general feasibility and low entry barrier of the designed sensory substitution algorithm. In direct comparison to the white cane, within the study duration of ten hours the sensory substitution device did not offer a statistically significant improvement in navigation.
Collapse
Affiliation(s)
- Alexander Neugebauer
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
| | - Katharina Rifai
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
| | - Mathias Getzlaff
- Institute for Applied Physics, Heinrich-Heine University Duesseldorf, Duesseldorf, Germany
| | - Siegfried Wahl
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
| |
Collapse
|
34
|
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. [PMID: 32848575 PMCID: PMC7406645 DOI: 10.3389/fnins.2020.00815] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 07/10/2020] [Indexed: 12/22/2022] Open
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. However, this spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performances between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their accessibility to both real and virtual environments.
Collapse
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
| | - Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
| | - Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| |
Collapse
|
35
|
Kvansakul J, Hamilton L, Ayton LN, McCarthy C, Petoe MA. Sensory augmentation to aid training with retinal prostheses. J Neural Eng 2020; 17:045001. [PMID: 32554868 DOI: 10.1088/1741-2552/ab9e1d] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Retinal prosthesis recipients require rehabilitative training to learn the non-intuitive nature of prosthetic 'phosphene vision'. This study investigated whether the addition of auditory cues, using The vOICe sensory substitution device (SSD), could improve functional performance with simulated phosphene vision. APPROACH Forty normally sighted subjects completed two visual tasks under three conditions. The phosphene condition converted the image to simulated phosphenes displayed on a virtual reality headset. The SSD condition provided auditory information via stereo headphones, translating the image into sound. Horizontal information was encoded as stereo timing differences between ears, vertical information as pitch, and pixel intensity as audio intensity. The third condition combined phosphenes and SSD. Tasks comprised light localisation from the Basic Assessment of Light and Motion (BaLM) and the Tumbling-E from the Freiburg Acuity and Contrast Test (FrACT). To examine learning effects, twenty of the forty subjects received SSD training prior to assessment. MAIN RESULTS Combining phosphenes with auditory SSD provided better light localisation accuracy than either phosphenes or SSD alone, suggesting a compound benefit of integrating modalities. Although response times for SSD-only were significantly longer than all other conditions, combined condition response times were as fast as phosphene-only, highlighting that audio-visual integration provided both response time and accuracy benefits. Prior SSD training provided a benefit to localisation accuracy and speed in SSD-only (as expected) and Combined conditions compared to untrained SSD-only. Integration of the two modalities did not improve spatial resolution task performance, with resolution limited to that of the higher resolution modality (SSD). 
SIGNIFICANCE Combining phosphene (visual) and SSD (auditory) modalities was effective even without SSD training and led to an improvement in light localisation accuracy and response times. Spatial resolution performance was dominated by auditory SSD. The results suggest there may be a benefit to including auditory cues when training vision prosthesis recipients.
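The abstract above specifies the SSD's encoding precisely: horizontal position as stereo timing differences, vertical position as pitch, and pixel intensity as audio intensity. A minimal sketch of such a pixel-to-audio mapping follows; the specific numeric ranges (ITD limit, frequency band, grid size) are illustrative assumptions, not The vOICe's actual parameters.

```python
def pixel_to_audio(col: int, row: int, intensity: int,
                   n_cols: int = 64, n_rows: int = 64):
    """Map one pixel (col, row, brightness 0-255) to audio parameters
    following the three-way encoding described in the abstract."""
    # Horizontal position -> interaural time difference (ITD):
    # negative means the left ear leads (pixel on the left side).
    max_itd_ms = 0.6  # roughly the largest natural ITD for human listeners
    itd_ms = max_itd_ms * (2 * col / (n_cols - 1) - 1)
    # Vertical position -> pitch, on a log scale so equal rows are
    # equal musical intervals; row 0 (top) maps to the highest pitch.
    f_low, f_high = 200.0, 4000.0
    freq_hz = f_low * (f_high / f_low) ** (1 - row / (n_rows - 1))
    # Pixel brightness -> amplitude in [0, 1].
    amplitude = intensity / 255.0
    return itd_ms, freq_hz, amplitude
```

Summing the resulting sinusoids per image column, with the stated per-ear delays, yields a soundscape in which a bright spot's position is recoverable from timing, pitch, and loudness, which is the integration the study pairs with simulated phosphene vision.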
Collapse
Affiliation(s)
- Jessica Kvansakul
- Bionics Institute, East Melbourne, VIC, Australia. Department of Medical Bionics, University of Melbourne, Parkville, VIC, Australia
Collapse
|
36
|
Kirsch LP, Job X, Auvray M. Mixing up the Senses: Sensory Substitution Is Not a Form of Artificially Induced Synaesthesia. Multisens Res 2020; 34:297-322. [PMID: 33706280 DOI: 10.1163/22134808-bja10010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 05/26/2020] [Indexed: 11/19/2022]
Abstract
Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of 'artificial synaesthesia'. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the 'artificial synaesthesia' view of sensory substitution should be rejected.
Collapse
Affiliation(s)
- Louise P Kirsch
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| | - Xavier Job
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| | - Malika Auvray
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| |
Collapse
|
37
|
VES: A Mixed-Reality System to Assist Multisensory Spatial Perception and Cognition for Blind and Visually Impaired People. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10020523] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this paper, the Virtually Enhanced Senses (VES) System is described. It is an ARCore-based, mixed-reality system meant to assist blind and visually impaired people's navigation. VES operates in indoor and outdoor environments without any previous in-situ installation. It provides users with specific, runtime-configurable stimuli according to their pose, i.e., position and orientation, and the information about the environment recorded in a virtual replica. It implements three output data modalities: wall-tracking assistance, an acoustic compass, and a novel sensory substitution algorithm, Geometry-based Virtual Acoustic Space (GbVAS). The multimodal output of this algorithm takes advantage of natural human perceptual encoding of spatial data. Preliminary experiments of GbVAS have been conducted with sixteen subjects in three different scenarios, demonstrating basic orientation and mobility skills after six minutes of training.
Collapse
|
38
|
Krzhizhanovskaya VV, Závodszky G, Lees MH, Dongarra JJ, Sloot PMA, Brissos S, Teixeira J. Interactive Travel Aid for the Visually Impaired: from Depth Maps to Sonic Patterns and Verbal Messages. LECTURE NOTES IN COMPUTER SCIENCE 2020. [PMCID: PMC7304791 DOI: 10.1007/978-3-030-50436-6_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
This paper presents user trials of a prototype micro-navigation aid for the visually impaired. The main advantage of the system is its small form factor. The device consists of a Structure Sensor depth camera, a smartphone, a remote controller and a pair of headphones. An original feature of the system is its interactivity: the user can activate different space scanning modes and different sound presentation schemes for 3D scenes on demand. The results of the trials are documented by timeline logs recording the activation of the different interactive modes. The aim of the first trial was to test the system's capability for aiding the visually impaired to avoid obstacles. The second tested the system's efficiency at detecting open spaces. The two visually impaired testers performed the trials successfully, although the times required to complete the tasks seem rather long. Nevertheless, the trials show the potential usefulness of the system as a navigational aid and have enabled us to introduce numerous improvements to the tested prototype.
Collapse
|
39
|
Chen L. Education and visual neuroscience: A mini-review. Psych J 2019; 9:524-532. [PMID: 31884725 DOI: 10.1002/pchj.335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Revised: 10/04/2019] [Accepted: 11/26/2019] [Indexed: 11/06/2022]
Abstract
Neuroscience, especially visual neuroscience, is a burgeoning field that has greatly shaped the format and efficacy of education. Moreover, findings from visual neuroscience are an ongoing source of great progress in pedagogy. In this mini-review, I review existing evidence and areas of active research to describe the fundamental questions and general applications for visual neuroscience as it applies to education. First, I categorize the research questions and future directions for the role of visual neuroscience in education. Second, I juxtapose opposing views on the roles of neuroscience in education and reveal the "neuromyths" propagated under the guise of educational neuroscience. Third, I summarize the policies and practices applied in different countries and for different age ranges. Fourth, I address and discuss the merits of visual neuroscience in art education and of visual perception theories (e.g., those concerned with perceptual organization with respect to space and time) in reading education. I consider how vision-deprived students could benefit from current knowledge of brain plasticity and visual rehabilitation methods involving compensation from other sensory systems. I also consider the potential educational value of instructional methods based on statistical learning in the visual domain. Finally, I outline the accepted translational framework for applying findings from educational neuroscience to pedagogical theory.
Collapse
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| |
Collapse
|
40
|
Brown FE, Sutton J, Yuen HM, Green D, Van Dorn S, Braun T, Cree AJ, Russell SR, Lotery AJ. A novel, wearable, electronic visual aid to assist those with reduced peripheral vision. PLoS One 2019; 14:e0223755. [PMID: 31613911 PMCID: PMC6793879 DOI: 10.1371/journal.pone.0223755] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Accepted: 09/29/2019] [Indexed: 12/05/2022] Open
Abstract
Purpose To determine whether visual-tactile sensory substitution utilizing the Low-vision Enhancement Optoelectronic (LEO) Belt prototype is suitable as a new visual aid for those with reduced peripheral vision by assessing mobility performance and user opinions. Methods Sighted subjects (n = 20) and subjects with retinitis pigmentosa (RP) (n = 6) were recruited. The LEO Belt was evaluated on two cohorts: normally sighted subjects wearing goggles to artificially reduce peripheral vision to simulate stages of RP progression, and subjects with advanced visual field limitation from RP. Mobility speed and accuracy were assessed using simple mazes, with and without the LEO Belt, to determine its usefulness across disease severities and lighting conditions. Results Sighted subjects wearing the most narrowed field goggles, simulating the most advanced RP, had increased mobility accuracy (44% mean reduction in errors, p = 0.014) and self-reported confidence (77% mean increase, p = 0.004) when using the LEO Belt. Additionally, use of the LEO doubled mobility accuracy for RP subjects with remaining visual fields between 10° and 20°. Further, in dim lighting, confidence scores for this group also doubled. By patient-reported outcomes, subjects largely deemed the device comfortable (100%), easy to use (92.3%) and thought it had potential future benefit as a visual aid (96.2%). However, regardless of severity of vision loss or simulated vision loss, all subjects were slower to complete the mazes using the device. Conclusions The LEO Belt improves mobility accuracy, and therefore confidence, in those with severely restricted peripheral vision. The LEO Belt's positive user feedback suggests it has potential to become the next generation of visual aid for visually impaired individuals. Given the novelty of this approach, we expect navigation speeds may improve with experience.
Affiliation(s)
- Ffion E. Brown
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Janice Sutton
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Ho M. Yuen
- Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Dylan Green
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
- Spencer Van Dorn
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
- Terry Braun
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
- Angela J. Cree
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Stephen R. Russell
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
- Andrew J. Lotery
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Southampton Eye Unit, University Hospital Southampton NHS Foundation Trust, University Hospital Southampton, Southampton, England, United Kingdom
41
Norman LJ, Thaler L. Retinotopic-like maps of spatial sound in primary 'visual' cortex of blind human echolocators. Proc Biol Sci 2019; 286:20191910. [PMID: 31575359] [PMCID: PMC6790759] [DOI: 10.1098/rspb.2019.1910]
Abstract
The functional specializations of cortical sensory areas were traditionally viewed as being tied to specific modalities. A radically different emerging view is that the brain is organized by task rather than sensory modality, but it has not yet been shown that this applies to primary sensory cortices. Here, we report such evidence by showing that primary 'visual' cortex can be adapted to map spatial locations of sound in blind humans who regularly perceive space through sound echoes. Specifically, we objectively quantify the similarity between measured stimulus maps for sound eccentricity and predicted stimulus maps for visual eccentricity in primary 'visual' cortex (using a probabilistic atlas based on cortical anatomy) and find that stimulus maps for sound in expert echolocators are directly comparable to those for vision in sighted people. Furthermore, the degree of this similarity is positively related to echolocation ability. We also rule out explanations based on top-down modulation of brain activity (e.g., through imagery). This result is clear evidence that task-specific organization can extend even to primary sensory cortices, and in this way is pivotal in our reinterpretation of the functional organization of the human brain.
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Durham DH1 3LE, UK
42
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: Click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019; 46:21-35. [PMID: 31556685] [PMCID: PMC6936248] [DOI: 10.1037/xhp0000697]
Abstract
People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping, or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted echolocation beginners, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head, but not ground, level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system’s ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensorimotor coordination for walking in blind people. Vision loss has negative consequences for people’s mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking. Importantly, the report also highlights that echolocation and the long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
Affiliation(s)
- Xinyu Zhang
- School of Information and Electronics, Beijing Institute of Technology
- Michail Antoniou
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, University of Birmingham
43
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. [PMID: 31547778] [DOI: 10.1177/0301006619873194]
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8-depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
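The transformation these devices perform, from a coarse depth map to spatially distributed stimulation, can be illustrated with a minimal sketch. The specific mapping below (column to stereo pan, row to pitch, distance to loudness) is an assumed convention common in visual-to-auditory SSDs, not the encoding used by the devices in this study:

```python
def depth_map_to_audio_params(depth_map, f_low=200.0, f_high=1600.0, max_depth=3.0):
    """Map a (rows x cols) depth map to per-cell audio parameters.

    Hypothetical mapping, for illustration only:
    - column   -> stereo pan in [-1, 1] (left to right)
    - row      -> pitch (higher rows = higher frequency)
    - distance -> amplitude in [0, 1] (nearer = louder)
    """
    rows, cols = len(depth_map), len(depth_map[0])
    params = []
    for r in range(rows):
        # Log-spaced pitch so each row step spans the same musical interval.
        pitch = f_low * (f_high / f_low) ** (1 - r / max(rows - 1, 1))
        for c in range(cols):
            pan = 2 * c / max(cols - 1, 1) - 1            # -1 = left, +1 = right
            amp = max(0.0, 1 - depth_map[r][c] / max_depth)  # nearer = louder
            params.append((pitch, pan, amp))
    return params

# An 8 x 16 grid (matching the study's 16 x 8 resolution): everything at the
# maximum distance except one near object on the upper left.
demo = [[3.0] * 16 for _ in range(8)]
demo[2][4] = 1.2
params = depth_map_to_audio_params(demo)
```

Sweeping the grid column by column and synthesizing one tone per active cell from these parameters would yield a basic auditory rendering of the scene; cells at the maximum distance fall silent.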
Affiliation(s)
- Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
- James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
44
Navigation Systems for the Blind and Visually Impaired: Past Work, Challenges, and Open Problems. Sensors 2019; 19:s19153404. [PMID: 31382536] [PMCID: PMC6696419] [DOI: 10.3390/s19153404]
Abstract
Over the last decades, the development of navigation devices capable of guiding the blind through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s objective is to provide an updated, holistic view of this research, in order to enable developers to exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will be briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids” and early research on sensory substitution or indoor/outdoor positioning, to recent systems based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the main points of criticism of previous approaches. Finally, several technological achievements are highlighted as they could underpin future feasible designs. In line with this, smartphones and wearables with built-in cameras will then be indicated as potentially feasible options with which to support state-of-art computer vision solutions, thus allowing for both the positioning and monitoring of the user’s surrounding area. These functionalities could then be further boosted by means of remote resources, leading to cloud computing schemas or even remote sensing via urban infrastructure.
45
Caraiman S, Zvoristeanu O, Burlacu A, Herghelegiu P. Stereo Vision Based Sensory Substitution for the Visually Impaired. Sensors 2019; 19:s19122771. [PMID: 31226796] [PMCID: PMC6630569] [DOI: 10.3390/s19122771]
Abstract
The development of computer-vision-based systems that help visually impaired people perceive the environment, orient themselves, and navigate has been a major research subject in recent years. A significant ensemble of resources has been employed to support the development of sensory substitution devices (SSDs) and electronic travel aids for the rehabilitation of the visually impaired. The Sound of Vision (SoV) project took a comprehensive approach to developing such an SSD, tackling all the challenging aspects that have so far restrained large-scale adoption of such systems by the intended audience: wearability, real-time operation, pervasiveness, usability, and cost. This article presents the artificial-vision-based component of the SoV SSD that performs scene reconstruction and segmentation in outdoor environments. In contrast with the indoor use case, where the system acquires depth input from a structured-light camera, outdoors SoV relies on stereo vision to detect the elements of interest and provide an audio and/or haptic representation of the environment to the user. Our stereo-based method is designed to work with wearable acquisition devices and still provide a real-time, reliable description of the scene despite unreliable depth input from the stereo correspondence and the complex 6-DOF motion of the head-worn camera. We quantitatively evaluate our approach on a custom benchmarking dataset acquired with SoV cameras and provide highlights of the usability evaluation with visually impaired users.
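The depth estimation underlying such stereo pipelines rests on triangulation: once the left-right disparity of a point is known, metric depth follows from the camera geometry. A minimal sketch (the numbers are illustrative, not the SoV calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    between the two cameras in metres, and d the disparity in pixels.
    Larger disparity means a closer object.
    """
    if disparity_px <= 0:
        # No stereo match, or a point at infinity: depth is undefined.
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 42 px disparity.
z = depth_from_disparity(42.0, 700.0, 0.12)   # 2.0 m
```

The formula also shows why stereo depth degrades with distance: depth error grows roughly with the square of Z for a fixed disparity uncertainty, which is one reason outdoor stereo input is less reliable than indoor structured light.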
Affiliation(s)
- Simona Caraiman
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania
- Otilia Zvoristeanu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania
- Adrian Burlacu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania
- Paul Herghelegiu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania
46
Matusz PJ, Turoman N, Tivadar RI, Retsa C, Murray MM. Brain and Cognitive Mechanisms of Top–Down Attentional Control in a Multisensory World: Benefits of Electrical Neuroimaging. J Cogn Neurosci 2019; 31:412-430. [DOI: 10.1162/jocn_a_01360]
Abstract
In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top–down audiovisual object representations (“attentional templates”) while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via the “N2pc component”: spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150–300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and so cognitive, mechanisms underlying top–down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pcs. In the N2pc time window (170–270 msec), color cues elicited brain responses differing in both strength and topography. This latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top–down object representations, and not only by separate sensory-specific top–down feature templates (as suggested by traditional N2pc analyses). We discuss how the electrical neuroimaging approach can aid research on top–down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
Affiliation(s)
- Pawel J. Matusz
- University of Applied Sciences Western Switzerland (HES-SO Valais)
- University Hospital Centre and University of Lausanne
- Vanderbilt University, Nashville, TN
- Nora Turoman
- University Hospital Centre and University of Lausanne
- Ruxandra I. Tivadar
- University Hospital Centre and University of Lausanne
- University of Lausanne and Fondation Asile des Aveugles
- Chrysa Retsa
- University Hospital Centre and University of Lausanne
- Micah M. Murray
- University Hospital Centre and University of Lausanne
- Vanderbilt University, Nashville, TN
- University of Lausanne and Fondation Asile des Aveugles
47
Cappagli G, Finocchietti S, Cocchi E, Giammari G, Zumiani R, Cuppone AV, Baud-Bovy G, Gori M. Audio motor training improves mobility and spatial cognition in visually impaired children. Sci Rep 2019; 9:3303. [PMID: 30824830] [PMCID: PMC6397231] [DOI: 10.1038/s41598-019-39981-x]
Abstract
Since it has been demonstrated that spatial cognition can be affected in visually impaired children, training strategies that exploit the plasticity of the human brain should be adopted early. Here we developed and tested a new training protocol based on the reinforcement of audio-motor associations, thus supporting spatial development in visually impaired children. The study involved forty-four visually impaired children aged 6–17 years old, assigned to either an experimental (ABBI training) or a control (classical training) rehabilitation condition. The experimental training group followed an intensive but entertaining rehabilitation for twelve weeks, during which they performed ad hoc developed audio-spatial exercises with the Audio Bracelet for Blind Interaction (ABBI). A battery of spatial tests administered before and after the training indicated that children significantly improved in almost all the spatial aspects considered, while the control group did not show any improvement. These results confirm that perceptual development in the case of blindness can be enhanced with auditory feedback naturally associated with body movements. Therefore, the early introduction of a tailored audio-motor training could potentially prevent spatial developmental delays in visually impaired children.
Affiliation(s)
- Giulia Cappagli
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Sara Finocchietti
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Elena Cocchi
- Istituto David Chiossone per Ciechi ed ipovedenti ONLUS, Genova, Italy
- Giuseppina Giammari
- Centro regionale per l'ipovisione in età evolutiva, IRCCS Scientific Institute "E. Medea", Bosisio Parini, Lecco, Italy
- Anna Vera Cuppone
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Gabriel Baud-Bovy
- RBCS Robotics, Brain and Cognitive Science Department, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy; Vita-Salute San Raffaele University & Unit of Experimental Psychology, Division of Neuroscience, San Raffaele Scientific Institute, Milan, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
48
Buchs G, Heimler B, Amedi A. The Effect of Irrelevant Environmental Noise on the Performance of Visual-to-Auditory Sensory Substitution Devices Used by Blind Adults. Multisens Res 2019; 32:87-109. [DOI: 10.1163/22134808-20181327]
Abstract
Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via SSDs (i.e., shape, color, and conjunction of the two features). Their performance was compared in two separate conditions, silent baseline and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the noisy and silent conditions for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend for a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory-deprived individuals.
Affiliation(s)
- Galit Buchs
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Benedetta Heimler
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Amir Amedi
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Sorbonne Universités, UPMC Univ Paris 06, Institut de la Vision, Paris, France
49
Liu Y, Stiles NRB, Meister M. Augmented reality powers a cognitive assistant for the blind. eLife 2018; 7:e37841. [PMID: 30479270] [PMCID: PMC6257813] [DOI: 10.7554/eLife.37841]
Abstract
To restore vision for the blind, several prosthetic approaches have been explored that convey raw images to the brain. So far, these schemes all suffer from a lack of bandwidth. An alternate approach would restore vision at the cognitive level, bypassing the need to convey sensory data. A wearable computer captures video and other data, extracts important scene knowledge, and conveys that to the user in compact form. Here, we implement an intuitive user interface for such a device using augmented reality: each object in the environment has a voice and communicates with the user on command. With minimal training, this system supports many aspects of visual cognition: obstacle avoidance, scene understanding, formation and recall of spatial memories, and navigation. Blind subjects can traverse an unfamiliar multi-story building on their first attempt. To spur further development in this domain, we developed an open-source environment for standardized benchmarking of visual assistive devices.
Affiliation(s)
- Yang Liu
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
- Computation and Neural Systems Program, California Institute of Technology, Pasadena, United States
- Noelle RB Stiles
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
- Institute for Biomedical Therapeutics, Keck School of Medicine, University of Southern California, Los Angeles, United States
- Markus Meister
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
50
Negen J, Wen L, Thaler L, Nardini M. Bayes-Like Integration of a New Sensory Skill with Vision. Sci Rep 2018; 8:16880. [PMID: 30442895] [PMCID: PMC6237778] [DOI: 10.1038/s41598-018-35046-7]
Abstract
Humans are effective at dealing with noisy, probabilistic information in familiar settings. One hallmark of this is Bayesian Cue Combination: combining multiple noisy estimates to increase precision beyond the best single estimate, taking into account their reliabilities. Here we show that adults also combine a novel audio cue to distance, akin to human echolocation, with a visual cue. Following two hours of training, subjects were more precise given both cues together versus the best single cue. This persisted when we changed the novel cue's auditory frequency. Reliability changes also led to a re-weighting of cues without feedback, showing that they learned something more flexible than a rote decision rule for specific stimuli. The main findings replicated with a vibrotactile cue. These results show that the mature sensory apparatus can learn to flexibly integrate new sensory skills. The findings are unexpected considering previous empirical results and current models of multisensory learning.
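The Bayesian cue combination benchmark this study tests against can be sketched with the standard inverse-variance weighting formula (a textbook formulation, not the authors' analysis code):

```python
def combine_cues(estimates, sigmas):
    """Reliability-weighted (Bayes-optimal for independent Gaussian noise) fusion.

    Each cue i gives an estimate x_i with noise s.d. sigma_i. The optimal
    combined estimate weights each cue by its inverse variance, and the
    fused variance is smaller than that of the best single cue.
    """
    weights = [1.0 / s ** 2 for s in sigmas]          # inverse variances
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_sigma = (1.0 / total) ** 0.5                # always < min(sigmas)
    return fused, fused_sigma

# Illustrative values: vision says 2.0 m (sigma 0.1 m); the novel audio
# cue says 2.3 m (sigma 0.2 m). The fused estimate sits nearer the more
# reliable visual cue.
fused, sigma = combine_cues([2.0, 2.3], [0.1, 0.2])
```

The fused standard deviation is always below that of the most reliable single cue, which is exactly the precision benefit the study measured, and the re-weighting after reliability changes corresponds to updating the `sigmas` argument.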
Affiliation(s)
- James Negen
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Lisa Wen
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Lore Thaler
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham DH1 3LE, UK