1
Witter M, de Rooij A, van Dartel M, Krahmer E. Bridging a sensory gap between deaf and hearing people: a plea for a situated design approach to sensory augmentation. Frontiers in Computer Science 2022. DOI: 10.3389/fcomp.2022.991180.
Abstract
Deaf and hearing people can encounter challenges when communicating with one another in everyday situations. Although problems in verbal communication are often seen as the main cause, such challenges may also result from sensory differences between deaf and hearing people and the impact of those differences on individual understandings of the world; that is, challenges arising from a sensory gap. Proposals for innovative communication technologies to address this have been met with criticism by the deaf community: they are mostly designed to enhance deaf people's understanding of the verbal cues that hearing people rely on, but omit many critical sensory signals that deaf people rely on to understand (others in) their environment and to which hearing people are not attuned. In this perspective paper, sensory augmentation, i.e., technologically extending people's sensory capabilities, is put forward as a way to bridge this sensory gap: (1) by tuning in to the signals that deaf people rely on more strongly but that hearing people commonly miss, and vice versa, and (2) by sensory augmentations that enable deaf and hearing people to sense signals that neither is normally able to sense. Usability and user-acceptance challenges, however, stand in the way of realizing the alleged potential of sensory augmentation for bridging the sensory gap between deaf and hearing people. Addressing these challenges requires a novel approach to how such technologies are designed; we contend that this must be a situated design approach.
2
Pamir Z, Jung JH, Peli E. Preparing participants for the use of the tongue visual sensory substitution device. Disabil Rehabil Assist Technol 2022; 17:888-896. PMID: 32997554; PMCID: PMC8007668; DOI: 10.1080/17483107.2020.1821102.
Abstract
PURPOSE: Visual sensory substitution devices (SSDs) convey visual information to a blind person through another sensory modality. Using a visual SSD in various daily activities requires training before the device can be used independently. Yet there is limited literature on the procedures and outcomes of training conducted to prepare users for practical use of SSDs in daily activities.
METHODS: We trained 29 blind adults (9 with congenital and 20 with acquired blindness) in the use of a commercially available electro-tactile SSD, the BrainPort. We describe a structured training protocol adapted from previous studies and the responses of participants, and we present retrospective qualitative data on participants' progress during training.
RESULTS: The length of training was not a critical factor in reaching an advanced stage. Although performance in the first two sessions seems to be a good indicator of participants' ability to progress in the training protocol, there are large individual differences in how far and how fast each participant can progress. There are also differences between congenitally blind users and those blinded later in life.
CONCLUSIONS: Information on training progression should be of interest to researchers preparing studies and to eye care professionals who may advise patients to use SSDs.
IMPLICATIONS FOR REHABILITATION: There are large individual differences in how far and how fast each participant can learn to use a visual-to-tactile sensory substitution device for a variety of tasks. Recognition is mainly achieved through top-down processing with prior knowledge about the possible responses; the generalizability of training is therefore still questionable. Users develop different strategies in order to succeed in training tasks.
Affiliation(s)
- Zahide Pamir: Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Jae-Hyun Jung: Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Eli Peli: Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
3
Pesnot Lerousseau J, Arnold G, Auvray M. Training-induced plasticity enables visualizing sounds with a visual-to-auditory conversion device. Sci Rep 2021; 11:14762. PMID: 34285265; PMCID: PMC8292401; DOI: 10.1038/s41598-021-94133-4.
Abstract
Sensory substitution devices aim to restore visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
Affiliation(s)
- Malika Auvray: Sorbonne Université, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005 Paris, France
4
Zai AT, Cavé-Lopez S, Rolland M, Giret N, Hahnloser RHR. Sensory substitution reveals a manipulation bias. Nat Commun 2020; 11:5940. PMID: 33230182; PMCID: PMC7684286; DOI: 10.1038/s41467-020-19686-w.
Abstract
Sensory substitution is a promising therapeutic approach for replacing a missing or diseased sensory organ by translating inaccessible information into another sensory modality. However, many substitution systems are not well accepted by subjects. To explore the effect of sensory substitution on voluntary action repertoires and their associated affective valence, we study deaf songbirds to which we provide visual feedback as a substitute of auditory feedback. Surprisingly, deaf birds respond appetitively to song-contingent binary visual stimuli. They skillfully adapt their songs to increase the rate of visual stimuli, showing that auditory feedback is not required for making targeted changes to vocal repertoires. We find that visually instructed song learning is basal-ganglia dependent. Because hearing birds respond aversively to the same visual stimuli, sensory substitution reveals a preference for actions that elicit sensory feedback over actions that do not, suggesting that substitution systems should be designed to exploit the drive to manipulate.
Affiliation(s)
- Anja T Zai: Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Sophie Cavé-Lopez: Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland
- Manon Rolland: Institut des Neurosciences Paris Saclay, CNRS, Université Paris Saclay, Orsay, France
- Nicolas Giret: Institut des Neurosciences Paris Saclay, CNRS, Université Paris Saclay, Orsay, France
- Richard H R Hahnloser: Institute of Neuroinformatics, University of Zurich and ETH Zurich, 8057 Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
5
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cognitive Research: Principles and Implications 2020; 5:37. PMID: 32770416; PMCID: PMC7415050; DOI: 10.1186/s41235-020-00240-7.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill: Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
6
Proulx MJ, Brown DJ, Lloyd-Esenkaya T, Leveson JB, Todorov OS, Watson SH, de Sousa AA. Visual-to-auditory sensory substitution alters language asymmetry in both sighted novices and experienced visually impaired users. Applied Ergonomics 2020; 85:103072. PMID: 32174360; DOI: 10.1016/j.apergo.2020.103072.
Abstract
Visual-to-auditory sensory substitution devices (SSDs) provide improved access to the visual environment for the visually impaired by converting images into auditory information. Research is lacking on the mechanisms involved in processing data that is perceived through one sensory modality, but directly associated with a source in a different sensory modality. This is important because SSDs that use auditory displays could involve binaural presentation requiring both ear canals, or monaural presentation requiring only one - but which ear would be ideal? SSDs may be similar to reading, as an image (printed word) is converted into sound (when read aloud). Reading, and language more generally, are typically lateralised to the left cerebral hemisphere. Yet, unlike symbolic written language, SSDs convert images to sound based on visuospatial properties, with the right cerebral hemisphere potentially having a role in processing such visuospatial data. Here we investigated whether there is a hemispheric bias in the processing of visual-to-auditory sensory substitution information and whether that varies as a function of experience and visual ability. We assessed the lateralization of auditory processing with two tests: a standard dichotic listening test and a novel dichotic listening test created using the auditory information produced by an SSD, The vOICe. Participants were tested either in the lab or online with the same stimuli. We did not find a hemispheric bias in the processing of visual-to-auditory information in visually impaired, experienced vOICe users. Further, we did not find any difference between visually impaired, experienced vOICe users and sighted novices in the hemispheric lateralization of visual-to-auditory information processing. Although standard dichotic listening is lateralised to the left hemisphere, the auditory processing of images in SSDs is bilateral, possibly due to the increased influence of right hemisphere processing. Auditory SSDs might therefore be equally effective with presentation to either ear if a monaural, rather than binaural, presentation were necessary.
Affiliation(s)
- Michael J Proulx: Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- David J Brown: Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
- Tayfun Lloyd-Esenkaya: Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- Jack Barnett Leveson: Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- Orlin S Todorov: School of Biological Sciences, The University of Queensland, St. Lucia, QLD, 4072, Australia
- Samuel H Watson: Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
- Alexandra A de Sousa: Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
7
Niketeghad S, Pouratian N. Brain Machine Interfaces for Vision Restoration: The Current State of Cortical Visual Prosthetics. Neurotherapeutics 2019; 16:134-143. PMID: 30194614; PMCID: PMC6361050; DOI: 10.1007/s13311-018-0660-1.
Abstract
Loss of vision alters the day-to-day life of blind individuals and may impose a significant burden on their families and the economy. Cortical visual prosthetics have been shown to have the potential of restoring a useful degree of vision via stimulation of the primary visual cortex. Owing to recent advances in electrode design and wireless power and data transmission, development of these prosthetics has gained momentum in the past few years, and multiple sites around the world are currently developing and testing their designs. In this review, we briefly outline the visual prosthetic approaches and describe the history of cortical visual prosthetics. Next, we focus on the state of the art of cortical visual prostheses by briefly explaining the design of current devices that are either under development or in the clinical testing phase. Lastly, we shed light on the challenges of each design and provide some potential solutions.
Affiliation(s)
- Soroush Niketeghad: Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Nader Pouratian: Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA; Department of Neurosurgery, University of California Los Angeles, Los Angeles, CA, USA
8
Tactile recognition of visual stimuli: Specificity versus generalization of perceptual learning. Vision Res 2018; 152:40-50. DOI: 10.1016/j.visres.2017.11.007.
9
Sorgini F, Massari L, D'Abbraccio J, Palermo E, Menciassi A, Petrovic PB, Mazzoni A, Carrozza MC, Newell FN, Oddo CM. Neuromorphic Vibrotactile Stimulation of Fingertips for Encoding Object Stiffness in Telepresence Sensory Substitution and Augmentation Applications. Sensors (Basel) 2018; 18:E261. PMID: 29342076; PMCID: PMC5795525; DOI: 10.3390/s18010261.
Abstract
We present a tactile telepresence system for real-time transmission of information about object stiffness to the human fingertips. Experimental tests were performed across two laboratories (Italy and Ireland). In the Italian laboratory, a mechatronic sensing platform indented different rubber samples. Information about rubber stiffness was converted into on-off events using a neuronal spiking model and sent to a vibrotactile glove in the Irish laboratory. Participants discriminated variations in the stiffness of stimuli according to a two-alternative forced choice protocol. Stiffness discrimination was based on the variation of the temporal pattern of spikes generated during the indentation of the rubber samples. The results suggest that vibrotactile stimulation can effectively simulate surface stiffness when neuronal spiking models are used to trigger vibrations in the haptic interface. Specifically, fractional variations of stiffness down to 0.67 were significantly discriminated with the developed neuromorphic haptic interface. This performance is comparable to, though slightly worse than, the threshold obtained in a benchmark experiment in which the same set of stimuli was evaluated naturally with the participant's own hand. Our paper presents a bioinspired method for delivering sensory feedback about object properties to human skin based on contingency-mimetic neuronal models, and can be useful for the design of high-performance haptic devices.
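To make the encoding scheme concrete, the sketch below converts a sampled indentation-force trace into on-off spike events with a generic leaky integrate-and-fire neuron. It is only an illustration of the neuromorphic principle, not the authors' model; all function names and parameter values are assumptions.

# Minimal sketch: force trace (N, sampled every dt seconds) -> spike times.
# A generic leaky integrate-and-fire neuron stands in for the paper's model.
def encode_spikes(force_trace, dt=0.001, tau=0.02, gain=50.0, threshold=1.0):
    """Return spike times (s); each spike would trigger one vibration burst."""
    v = 0.0
    spikes = []
    for i, f in enumerate(force_trace):
        v += dt * (-v / tau + gain * f)   # leaky integration of the scaled force
        if v >= threshold:
            spikes.append(i * dt)         # emit a spike
            v = 0.0                       # reset the membrane state
    return spikes

# A stiffer sample yields a steeper force ramp and hence a denser spike train.
soft_ramp = [2.0 * i / 500 for i in range(500)]   # reaches 2 N in 0.5 s
stiff_ramp = [6.0 * i / 500 for i in range(500)]  # reaches 6 N in 0.5 s
print(len(encode_spikes(soft_ramp)), len(encode_spikes(stiff_ramp)))

The vibrotactile glove would then play a short burst at each spike time, so the temporal spike pattern, rather than a continuous amplitude, carries the stiffness information.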
Affiliation(s)
- Francesca Sorgini: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
- Luca Massari: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
- Jessica D'Abbraccio: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
- Eduardo Palermo: Department of Mechanical and Aerospace Engineering, "Sapienza" University of Rome, 00185 Roma, Italy
- Arianna Menciassi: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
- Petar B Petrovic: Production Engineering Department, Faculty of Mechanical Engineering, University of Belgrade, 11120 Belgrade, Serbia; Academy of Engineering Sciences of Serbia (AISS), 11120 Belgrade, Serbia
- Alberto Mazzoni: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
- Fiona N Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Calogero M Oddo: Sant'Anna School of Advanced Studies, The BioRobotics Institute, 56025 Pisa, Italy
10
Abstract
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Affiliation(s)
- Bence Nanay: University of Antwerp, Belgium; Peterhouse, University of Cambridge, UK
11
Schumann F, O'Regan JK. Sensory augmentation: integration of an auditory compass signal into human perception of space. Sci Rep 2017; 7:42197. PMID: 28195187; PMCID: PMC5307328; DOI: 10.1038/srep42197.
Abstract
Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences.
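As a concrete illustration of the coding idea, the sketch below re-renders the direction of magnetic North, relative to the current head orientation, as a spatialized sound cue. A crude constant-power stereo pan stands in for the head-related transfer functions used in the study, and all names and the rotation-gain parameter are illustrative assumptions.

import math

# Minimal sketch of the augmentation loop: head yaw -> azimuth of North ->
# left/right gains for a "distal" compass sound. A real system would convolve
# the sound with HRTFs; the pan below is only a stand-in for that step.
def north_azimuth(head_yaw_deg, gain=1.0):
    """Angle of magnetic North relative to the head, in degrees (-180..180).
    gain != 1 mimics the amplified/reduced rotation gain used in training."""
    az = (-head_yaw_deg * gain) % 360.0
    return az - 360.0 if az > 180.0 else az

def stereo_gains(azimuth_deg):
    """Constant-power pan from azimuth; front/back stay ambiguous without HRTFs."""
    pan = math.sin(math.radians(azimuth_deg))        # -1 (left) .. +1 (right)
    left = math.cos(math.pi / 4 * (1 + pan))
    right = math.sin(math.pi / 4 * (1 + pan))
    return left, right

# As the head turns, the cue moves contingently, like a distal sound source.
for yaw in (0, 45, 90, 180):
    print(yaw, [round(g, 2) for g in stereo_gains(north_azimuth(yaw))])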
Affiliation(s)
- Frank Schumann: Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France
- J Kevin O'Regan: Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France
12
König SU, Schumann F, Keyser J, Goeke C, Krause C, Wache S, Lytochkin A, Ebert M, Brunsch V, Wahn B, Kaspar K, Nagel SK, Meilinger T, Bülthoff H, Wolbers T, Büchel C, König P. Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception. PLoS One 2016; 11:e0166647. PMID: 27959914; PMCID: PMC5154504; DOI: 10.1371/journal.pone.0166647.
Abstract
Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
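A minimal sketch of the belt's core mapping, as described above: given the wearer's compass heading, activate the waist-mounted vibromotor that currently points toward magnetic North. The motor count, indexing convention, and function names are assumptions for illustration, not the actual feelSpace hardware interface.

N_MOTORS = 16  # motors assumed evenly spaced around the waist, index 0 at the navel

def motor_toward_north(heading_deg):
    """Index of the motor pointing North for a given body heading (0 = facing North)."""
    bearing_of_north = (-heading_deg) % 360.0          # North relative to the body
    sector = 360.0 / N_MOTORS
    return int((bearing_of_north + sector / 2) // sector) % N_MOTORS

# Facing North vibrates the front motor; turning right moves the vibration
# around the waist to the left, keeping the cue anchored to North.
for heading in (0, 90, 180, 270):
    print(heading, motor_toward_north(heading))

Updating this mapping continuously as the wearer moves is what provides the lawful relation between self-motion and the tactile signal that the study treats as a new sensorimotor contingency.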
Affiliation(s)
- Sabine U. König: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Frank Schumann: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Johannes Keyser: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Caspar Goeke: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Carina Krause: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Susan Wache: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Aleksey Lytochkin: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Manuel Ebert: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Vincent Brunsch: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Basil Wahn: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Kai Kaspar: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Department of Psychology, University of Cologne, Cologne, Germany
- Saskia K. Nagel: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Tobias Meilinger: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Thomas Wolbers: Aging & Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Christian Büchel: NeuroImage Nord, Department of Systems Neuroscience, Hamburg University Hospital Eppendorf, Hamburg, Germany
- Peter König: Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
13
Shokur S, Gallo S, Moioli RC, Donati ARC, Morya E, Bleuler H, Nicolelis MAL. Assimilation of virtual legs and perception of floor texture by complete paraplegic patients receiving artificial tactile feedback. Sci Rep 2016; 6:32293. PMID: 27640345; PMCID: PMC5027552; DOI: 10.1038/srep32293.
Abstract
Spinal cord injuries disrupt bidirectional communication between the patient's brain and body. Here, we demonstrate a new approach for reproducing lower limb somatosensory feedback in paraplegics by remapping missing leg/foot tactile sensations onto the skin of patients' forearms. A portable haptic display was tested in eight patients in a setup where the lower limbs were simulated using immersive virtual reality (VR). For six out of eight patients, the haptic display induced the realistic illusion of walking on three different types of floor surfaces: beach sand, a paved street, or grass. Additionally, patients experienced the movements of the virtual legs during the swing phase or the sensation of the foot rolling on the floor while walking. Relying solely on this tactile feedback, patients reported the position of the avatar leg during virtual walking. Crossmodal interference between vision of the virtual legs and tactile feedback revealed that patients assimilated the virtual lower limbs as if they were their own legs. We propose that the addition of tactile feedback to neuroprosthetic devices is essential to restore a full lower limb perceptual experience in spinal cord injury (SCI) patients and will ultimately lead to a higher rate of prosthetic acceptance/use and a better level of motor proficiency.
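The sketch below illustrates one possible form of the remapping described above: tactile events under the virtual foot are redirected to a row of forearm vibrators, so that heel-to-toe contact during stance becomes a sweep along the arm, with intensity varied by floor type. The actuator layout, gait-phase convention, and intensity values are hypothetical, not the study's hardware or parameters.

# Minimal sketch: virtual foot contact -> forearm vibrator commands.
# Everything here (four actuators, phase split, texture table) is assumed.
TEXTURE_INTENSITY = {"sand": 0.4, "pavement": 1.0, "grass": 0.7}  # assumed values

def forearm_command(gait_phase, surface, n_actuators=4):
    """gait_phase in [0, 1): 0-0.6 stance (foot rolling heel to toe), 0.6-1.0 swing."""
    if gait_phase >= 0.6:                       # swing phase: no ground contact
        return [0.0] * n_actuators
    contact = gait_phase / 0.6                  # heel (0) -> toe (1)
    idx = min(int(contact * n_actuators), n_actuators - 1)
    cmd = [0.0] * n_actuators
    cmd[idx] = TEXTURE_INTENSITY[surface]       # stronger/weaker buzz per floor type
    return cmd

for phase in (0.0, 0.3, 0.55, 0.8):
    print(phase, forearm_command(phase, "grass"))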
Affiliation(s)
- Solaiman Shokur: Neurorehabilitation Laboratory, Associação Alberto Santos Dumont para Apoio à Pesquisa (AASDAP), São Paulo, Brazil
- Simone Gallo: STI IMT, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Renan C Moioli: Edmond and Lily Safra International Institute of Neuroscience, Santos Dumont Institute, Macaiba, Brazil; Alberto Santos Dumont Education and Research Institute, São Paulo, Brazil
- Ana Rita C Donati: Neurorehabilitation Laboratory, Associação Alberto Santos Dumont para Apoio à Pesquisa (AASDAP), São Paulo, Brazil; Associação de Assistência à Criança Deficiente (AACD), São Paulo, Brazil
- Edgard Morya: Edmond and Lily Safra International Institute of Neuroscience, Santos Dumont Institute, Macaiba, Brazil; Alberto Santos Dumont Education and Research Institute, São Paulo, Brazil
- Hannes Bleuler: STI IMT, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Miguel A L Nicolelis: Neurorehabilitation Laboratory, Associação Alberto Santos Dumont para Apoio à Pesquisa (AASDAP), São Paulo, Brazil; Edmond and Lily Safra International Institute of Neuroscience, Santos Dumont Institute, Macaiba, Brazil; Alberto Santos Dumont Education and Research Institute, São Paulo, Brazil; Department of Neurobiology, Duke University, Durham, NC, USA; Department of Biomedical Engineering, Duke University, Durham, NC, USA; Department of Psychology and Neuroscience, Duke University, Durham, NC, USA; Center for Neuroengineering, Duke University, Durham, NC, USA
14
Maidenbaum S, Buchs G, Abboud S, Lavi-Rotbain O, Amedi A. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution. PLoS One 2016; 11:e0147501. PMID: 26882473; PMCID: PMC4755598; DOI: 10.1371/journal.pone.0147501.
Abstract
Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they offer increased accessibility without the use of expensive dedicated peripherals such as electrode/vibrator arrays. Using SSDs in virtual environments draws on the same skills as using them in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized, and autonomous SSD training, as well as new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and they suggested many future environments they wished to experience.
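To illustrate the generic sonification principle such SSDs rely on, the sketch below sweeps an image column by column from left to right, mapping pixel row to pitch and brightness to loudness. This is a simplified scheme in the spirit of the vOICe/EyeMusic family, not the EyeMusic's actual musical-note encoding; all names and parameters are assumptions.

import math

# Minimal sketch of visual-to-auditory conversion: left-to-right column sweep,
# higher rows -> higher pitch, brighter pixels -> louder partials.
def sonify(image, sample_rate=16000, col_dur=0.06, f_lo=200.0, f_hi=4000.0):
    """image: rows x cols of brightness in [0, 1]; returns mono samples in [-1, 1]."""
    rows, cols = len(image), len(image[0])
    samples = []
    n = int(col_dur * sample_rate)
    for c in range(cols):                       # sweep columns left to right
        for i in range(n):
            t = i / sample_rate
            s = 0.0
            for r in range(rows):               # one oscillator per row
                # top row -> highest frequency, log-spaced down the image
                f = f_hi * (f_lo / f_hi) ** (r / max(rows - 1, 1))
                s += image[r][c] * math.sin(2 * math.pi * f * t)
            samples.append(s / rows)
        samples.append(0.0)                     # brief gap between columns
    return samples

# A bright main diagonal produces a tone that falls in pitch across the sweep.
img = [[1.0 if r == c else 0.0 for c in range(8)] for r in range(8)]
print(len(sonify(img)))

Because the mapping is applied to whatever is on screen, the same scheme sonifies a rendered virtual street just as it would a camera image, which is what makes virtual training transfer plausible.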
Affiliation(s)
- Shachar Maidenbaum: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Galit Buchs: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Sami Abboud: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Ori Lavi-Rotbain: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
15
Buchs G, Maidenbaum S, Levy-Tzedek S, Amedi A. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach. Restor Neurol Neurosci 2016; 34:97-105. PMID: 26518671; PMCID: PMC4927841; DOI: 10.3233/rnn-150592.
Abstract
PURPOSE: To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (e.g., bionic eyes) and non-invasive (e.g., sensory substitution devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience?
METHODS: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism.
RESULTS: After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79 ± 18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04).
CONCLUSIONS: These findings show that even users who lack any previous visual experience whatsoever can integrate such visual information at increased resolution. This has potentially important practical implications for visual rehabilitation with both invasive and non-invasive methods.
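The sketch below illustrates the zooming-in idea in its simplest form: because the SSD always conveys a fixed low-resolution grid, cropping a sub-window around a point of interest and re-sampling it to that grid devotes the full output resolution to a smaller part of the scene. Function and parameter names are illustrative and are not the EyeMusic's API.

# Minimal sketch: crop a zoomed window and resample it to the SSD's fixed grid.
def zoom_crop(image, center_row, center_col, zoom, out_size=32):
    """Nearest-neighbour resample of a zoomed window to an out_size x out_size grid."""
    rows, cols = len(image), len(image[0])
    win_r, win_c = max(1, int(rows / zoom)), max(1, int(cols / zoom))
    top = min(max(center_row - win_r // 2, 0), rows - win_r)
    left = min(max(center_col - win_c // 2, 0), cols - win_c)
    out = []
    for i in range(out_size):
        r = top + (i * win_r) // out_size
        out.append([image[r][left + (j * win_c) // out_size] for j in range(out_size)])
    return out  # this grid would then be sonified as usual

# Zoom factor 1 conveys the whole scene; factor 4 devotes all 32x32 'pixels'
# to a quarter-width window around the point of interest.
scene = [[(r * c) % 255 for c in range(128)] for r in range(128)]
detail = zoom_crop(scene, center_row=40, center_col=90, zoom=4)
print(len(detail), len(detail[0]))

The user then has to bind the successively zoomed-in parts into one percept, which is exactly the integration ability the study tests.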
Affiliation(s)
- Galit Buchs: Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shachar Maidenbaum: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shelly Levy-Tzedek: Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Amir Amedi: Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
16
Reich L, Amedi A. 'Visual' parsing can be taught quickly without visual experience during critical periods. Sci Rep 2015; 5:15359. PMID: 26482105; PMCID: PMC4611203; DOI: 10.1038/srep15359.
Abstract
Cases of invasive sight restoration in congenitally blind adults have demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning visually unique concepts (e.g., size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept, visual parsing, which is highly impaired in sight-restored patients, can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-'vision' training. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that, while some aspects of parsing with an SSD are intuitive, the blind participants' success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD users' abilities to those reported for sight-restored patients who performed similar tasks visually and who had months of eyesight. Intriguingly, the SSD users outperformed the patients on most criteria tested. These findings suggest that, with adequate training and technologies, key high-order visual features can be quickly acquired in adulthood, and that a lack of visual experience during critical periods can be partly compensated for. Practically, these results highlight the potential of SSDs as standalone aids or in combination with invasive restoration approaches.
Affiliation(s)
- Lior Reich: Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi: Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91220, Israel
17
Fulkerson M. Rethinking the senses and their interactions: the case for sensory pluralism. Front Psychol 2014; 5:1426. PMID: 25540630; PMCID: PMC4261717; DOI: 10.3389/fpsyg.2014.01426.
Abstract
I argue for sensory pluralism. This is the view that there are many forms of sensory interaction and unity, and no single category that classifies them all. In other words, sensory interactions do not form a single natural kind. This view suggests that how we classify sensory systems (and the experiences they generate) partly depends on our explanatory purposes. I begin with a detailed discussion of the issue as it arises for our understanding of thermal perception, followed by a general account and defense of sensory pluralism.
Affiliation(s)
- Matthew Fulkerson: Department of Philosophy, University of California San Diego, La Jolla, CA, USA
18
Lewis PM, Ackland HM, Lowery AJ, Rosenfeld JV. Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain Res 2014; 1595:51-73. PMID: 25446438; DOI: 10.1016/j.brainres.2014.11.020.
Abstract
The field of neurobionics offers hope to patients with sensory and motor impairment. Blindness is a common cause of major sensory loss, with an estimated 39 million people worldwide suffering from total blindness in 2010. Potential treatment options include bionic devices employing electrical stimulation of the visual pathways. Retinal stimulation can restore limited visual perception to patients with retinitis pigmentosa; however, loss of retinal ganglion cells precludes this approach. The optic nerve, lateral geniculate nucleus and visual cortex provide alternative stimulation targets, with several research groups actively pursuing a cortically-based device capable of driving several hundred stimulating electrodes. While great progress has been made since the earliest works of Brindley and Dobelle in the 1960s and 1970s, significant clinical, surgical, psychophysical, neurophysiological, and engineering challenges remain to be overcome before a commercially available cortical implant is realized. Selection of candidate implant recipients will require assessment of their general, psychological and mental health, and likely responses to visual cortex stimulation. Implant functionality, longevity and safety may be enhanced by careful electrode insertion, optimization of electrical stimulation parameters and modification of immune responses to minimize or prevent the host response to the implanted electrodes. Psychophysical assessment will include mapping the positions of potentially several hundred phosphenes, which may require repetition if electrode performance deteriorates over time. Therefore, techniques for rapid psychophysical assessment are required, as are methods for objectively assessing the quality of life improvements obtained from the implant. These measures must take into account individual differences in image processing, phosphene distribution and the rehabilitation programs that may be required to optimize implant functionality. In this review, we detail these and other challenges facing developers of cortical visual prostheses, in addition to briefly outlining the epidemiology of blindness and the history of cortical electrical stimulation in the context of visual prosthetics.
Affiliation(s)
- Philip M Lewis: Department of Neurosurgery, Alfred Hospital, Melbourne, Australia; Department of Surgery, Monash University, Central Clinical School, Melbourne, Australia; Monash Vision Group, Faculty of Engineering, Monash University, Melbourne, Australia; Monash Institute of Medical Engineering, Monash University, Melbourne, Australia
- Helen M Ackland: Department of Neurosurgery, Alfred Hospital, Melbourne, Australia; Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Australia
- Arthur J Lowery: Monash Vision Group, Faculty of Engineering, Monash University, Melbourne, Australia; Monash Institute of Medical Engineering, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Faculty of Engineering, Monash University, Melbourne, Australia
- Jeffrey V Rosenfeld: Department of Neurosurgery, Alfred Hospital, Melbourne, Australia; Department of Surgery, Monash University, Central Clinical School, Melbourne, Australia; Monash Vision Group, Faculty of Engineering, Monash University, Melbourne, Australia; Monash Institute of Medical Engineering, Monash University, Melbourne, Australia; F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, USA
19
Affiliation(s)
- Malika Auvray: Institut Jean Nicod, CNRS UMR 8129, Département d'Etudes Cognitives, Ecole Normale Supérieure, 29 rue d'Ulm, Paris, France; LIMSI-CNRS, UPR 3251, rue John von Neumann, 91400 Orsay, France
- Laurence R. Harris: Centre for Vision Research, York University, 4700 Keele St, Toronto, Ontario, M3J 1P3, Canada