1. Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024;27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.
Affiliation(s)
- Adi Snir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023;8:61. PMID: 38102127; PMCID: PMC10724186; DOI: 10.1038/s41539-023-00208-4.
Abstract
Learning spatial layouts and navigating through them rely not on sight alone but on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and to navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment from digital haptics alone and then learned one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means both to translate learned 2D images into 3D reconstructions of layouts and to translate them into navigation actions within real spaces. Digital haptics based on ultrasounds therefore offer an alternative tool for learning complex scenes and for navigating previously unfamiliar layouts, and can likely be applied in the rehabilitation of spatial functions and the mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Benedetta Franceschiello: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland; Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
3. Kristjánsson Á, Sigurdardottir HM. The Role of Visual Factors in Dyslexia. J Cogn 2023;6:31. PMID: 37397349; PMCID: PMC10312247; DOI: 10.5334/joc.287.
Abstract
What are the causes of dyslexia? Decades of research reflect a determined search for a single cause where a common assumption is that dyslexia is a consequence of problems with converting phonological information into lexical codes. But reading is a highly complex activity requiring many well-functioning mechanisms, and several different visual problems have been documented in dyslexic readers. We critically review evidence from various sources for the role of visual factors in dyslexia, from magnocellular dysfunction through accounts based on abnormal eye movements and attentional processing, to recent proposals that problems with high-level vision contribute to dyslexia. We believe that the role of visual problems in dyslexia has been underestimated in the literature, to the detriment of the understanding and treatment of the disorder. We propose that rather than focusing on a single core cause, the role of visual factors in dyslexia fits well with risk and resilience models that assume that several variables interact throughout prenatal and postnatal development to either promote or hinder efficient reading.
Affiliation(s)
- Árni Kristjánsson: Icelandic Vision Lab, Department of Psychology, University of Iceland, Iceland
4. Berger CC, Coppi S, Ehrsson HH. Synchronous motor imagery and visual feedback of finger movement elicit the moving rubber hand illusion, at least in illusion-susceptible individuals. Exp Brain Res 2023;241:1021-1039. PMID: 36928694; PMCID: PMC10081980; DOI: 10.1007/s00221-023-06586-w.
Abstract
Recent evidence suggests that imagined auditory and visual sensory stimuli can be integrated with real sensory information from a different sensory modality to change the perception of external events via cross-modal multisensory integration mechanisms. Here, we explored whether imagined voluntary movements can integrate visual and proprioceptive cues to change how we perceive our own limbs in space. Participants viewed a robotic hand wearing a glove repetitively moving its right index finger up and down at a frequency of 1 Hz, while they imagined executing the corresponding movements synchronously or asynchronously (kinesthetic-motor imagery); electromyography (EMG) from the participants' right index flexor muscle confirmed that the participants kept their hand relaxed while imagining the movements. The questionnaire results revealed that the synchronously imagined movements elicited illusory ownership and a sense of agency over the moving robotic hand (the moving rubber hand illusion) compared with asynchronously imagined movements; individuals who affirmed experiencing the illusion with real synchronous movement also did so with synchronous imagined movements. The results from a proprioceptive drift task further demonstrated a shift in the perceived location of the participants' real hand toward the robotic hand in the synchronous versus the asynchronous motor imagery condition. These results suggest that kinesthetic motor imagery can be used to replace veridical congruent somatosensory feedback from a moving finger in the moving rubber hand illusion to trigger illusory body ownership and agency, but only if the temporal congruence rule of the illusion is obeyed. This observation extends previous studies on the integration of mental imagery and sensory perception to the case of multisensory bodily awareness, which has potentially important implications for research into embodiment of brain-computer interface controlled robotic prostheses and computer-generated limbs in virtual reality.
Affiliation(s)
- Christopher C Berger: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Division of Biology and Biological Engineering/Computation and Neural Systems, California Institute of Technology, Pasadena, CA, USA
- Sara Coppi: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- H Henrik Ehrsson: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
5. Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023;16:1058093. PMID: 36776219; PMCID: PMC9909096; DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. Spatial information customarily acquired through vision is conveyed through the auditory channel as a combination of sensory (auditory) features and symbolic language (naming/speech) features. The Topo-Speech algorithm sweeps the visual scene or image and represents each object's identity by naming it with a spoken word, while simultaneously conveying the object's location: the x-axis of the scene is mapped to the time at which the word is announced, and the y-axis is mapped to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an average accuracy of 74.45%, while the visually impaired showed an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the scenario that the blind are capable of some aspects of spatial representation, as depicted by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
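The x-to-time and y-to-pitch mapping described in this abstract can be illustrated with a short sketch. This is not the authors' implementation: the function name, coordinate convention, sweep duration, and pitch range are all assumptions chosen for illustration.

```python
# Illustrative sketch of a Topo-Speech-style mapping (assumptions, not the
# published code): each detected object is announced as a spoken word whose
# onset time encodes its horizontal position and whose voice pitch encodes
# its vertical position.

def topo_speech_schedule(objects, image_width, image_height,
                         sweep_duration=2.0, pitch_lo=120.0, pitch_hi=300.0):
    """Map (name, x, y) objects to (name, onset_seconds, pitch_hz) cues.

    The left-to-right sweep lasts `sweep_duration` seconds; higher objects
    get a higher pitch. All numeric ranges are illustrative.
    """
    cues = []
    for name, x, y in objects:
        onset = (x / image_width) * sweep_duration           # x-axis -> time
        # y = 0 is the top of the image, so invert for "higher = higher pitch"
        height_frac = 1.0 - (y / image_height)
        pitch = pitch_lo + height_frac * (pitch_hi - pitch_lo)  # y-axis -> pitch
        cues.append((name, round(onset, 3), round(pitch, 1)))
    return sorted(cues, key=lambda cue: cue[1])              # leftmost first

# A 640 x 480 scene: a cup at the top-center, a door at the bottom-left.
cues = topo_speech_schedule([("cup", 320, 0), ("door", 64, 480)], 640, 480)
```

Sorting by onset reproduces the left-to-right sweep: objects further left are announced earlier, and objects higher in the scene are spoken at a higher pitch.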
Affiliation(s)
- Amber Maimon: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer: Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler: Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
6. Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Cross-modal correspondence enhances elevation localization in visual-to-auditory sensory substitution. Front Psychol 2023;14:1079998. PMID: 36777233; PMCID: PMC9909421; DOI: 10.3389/fpsyg.2023.1079998.
Abstract
Introduction: Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a virtual acoustic space (VAS) built with head-related transfer functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, elevation perception is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method for conveying elevation information is based on the audiovisual cross-modal correspondence between pitch and visual elevation. A drawback of this second method is that the narrow spectral band of the resulting sounds further limits the ability to perceive elevation through HRTFs. Method: In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device in which elevation is conveyed either by spatialization alone (noise encoding) or by pitch-based methods with different spectral complexities (monotonic and harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings. Results: Participants localized elevation more accurately with the pitch-based encodings than with the spatialization-only method. Only slight differences in azimuth localization performance were found between the encodings. Discussion: This study suggests that a pitch-based encoding is intuitive, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
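A compact way to see the pitch-based alternative: map the target's elevation onto a frequency axis, so the cross-modal "higher is higher" correspondence does the perceptual work that generic HRTFs cannot. The sketch below is an illustrative assumption, not the encoding used in the study; the elevation range, frequency range, and log-frequency mapping are hypothetical choices.

```python
# Illustrative pitch-based elevation encoding (assumed parameters, not the
# study's): elevation is mapped onto a log-frequency axis so that equal
# elevation steps produce equal musical (pitch) intervals.

def pitch_for_elevation(elev_deg, elev_min=-40.0, elev_max=40.0,
                        f_lo=200.0, f_hi=800.0):
    """Return the cue frequency (Hz) for a target elevation in degrees.

    A higher target yields a higher pitch, exploiting the audiovisual
    cross-modal correspondence between pitch and visual elevation.
    """
    frac = (elev_deg - elev_min) / (elev_max - elev_min)  # 0 at bottom, 1 at top
    return f_lo * (f_hi / f_lo) ** frac                   # geometric interpolation
```

An HRTF-based (noise) encoding would instead filter broadband noise with listener-specific HRTFs; with generic filters, the elevation-dependent spectral notches do not match the listener's own ears, which is the inaccuracy a pitch mapping sidesteps.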
Affiliation(s)
- Camille Bordeau: LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
- Julien Dubois: ImViA EA 7535, Université de Bourgogne, Dijon, France
- Maxime Ambard: LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
7. Witter M, de Rooij A, van Dartel M, Krahmer E. Bridging a sensory gap between deaf and hearing people: a plea for a situated design approach to sensory augmentation. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.991180.
Abstract
Deaf and hearing people can encounter challenges when communicating with one another in everyday situations. Although problems in verbal communication are often seen as the main cause, such challenges may also result from sensory differences between deaf and hearing people and their impact on individual understandings of the world; that is, challenges arising from a sensory gap. Proposals for innovative communication technologies to address this have been met with criticism by the deaf community. They are mostly designed to enhance deaf people's understanding of the verbal cues that hearing people rely on, but they omit many critical sensory signals that deaf people rely on to understand (others in) their environment and to which hearing people are not attuned. In this perspective paper, sensory augmentation, i.e., technologically extending people's sensory capabilities, is put forward as a way to bridge this sensory gap: (1) by tuning in to the signals that deaf people rely on more strongly but hearing people commonly miss, and vice versa, and (2) by sensory augmentations that enable deaf and hearing people to sense signals that neither can normally sense. Usability and user-acceptance challenges, however, stand in the way of realizing the potential of sensory augmentation for bridging the sensory gap between deaf and hearing people. Addressing these requires a novel approach to how such technologies are designed. We contend this requires a situated design approach.
8. Malešević J, Kostić M, Jure FA, Spaich EG, Došen S, Ilić V, Bijelić G, Štrbac M. Electrotactile communication via matrix electrode placed on the torso using fast calibration, and static vs. dynamic encoding. Sensors (Basel) 2022;22:7658. PMID: 36236758; PMCID: PMC9572222; DOI: 10.3390/s22197658.
Abstract
Electrotactile stimulation is a technology that reproducibly elicits tactile sensations and can be used as an alternative channel to communicate information to the user. The presented work is a part of an effort to develop this technology into an unobtrusive communication tool for first responders. In this study, the aim was to compare the success rate (SR) between discriminating stimulation at six spatial locations (static encoding) and recognizing six spatio-temporal patterns where pads are activated sequentially in a predetermined order (dynamic encoding). Additionally, a procedure for a fast amplitude calibration, that includes a semi-automated initialization and an optional manual adjustment, was employed and evaluated. Twenty subjects, including twelve first responders, participated in the study. The electrode comprising the 3 × 2 matrix of pads was placed on the lateral torso. The results showed that high SRs could be achieved for both types of message encoding after a short learning phase; however, the dynamic approach led to a statistically significant improvement in messages recognition (SR of 93.3%), compared to static stimulation (SR of 83.3%). The proposed calibration procedure was also effective since in 83.8% of the cases the subjects did not need to adjust the stimulation amplitude manually.
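The two message encodings compared in this study can be sketched abstractly for a 3 × 2 pad matrix. The pad coordinates and the particular sequential patterns below are illustrative assumptions; the abstract specifies only that six messages are encoded either as single pad locations (static) or as pads activated sequentially in a predetermined order (dynamic).

```python
# Illustrative sketch (not the authors' implementation) of static vs. dynamic
# message encoding on a 3 x 2 matrix of electrotactile pads.
PADS = [(row, col) for row in range(3) for col in range(2)]  # six pad sites

def static_encoding(message_id):
    """Static encoding: each of the six messages activates one fixed pad."""
    return [PADS[message_id]]            # a single stimulation site

def dynamic_encoding(message_id):
    """Dynamic encoding: each message is a sequence of pads activated in a
    predetermined order. Here each message rotates through all six pads from
    a different starting pad; this pattern is an assumption, as the study's
    actual sequences are not given in the abstract."""
    n = len(PADS)
    return [PADS[(message_id + i) % n] for i in range(n)]
```

The study's finding that the dynamic sequences were recognized more reliably (93.3% vs. 83.3% success rate) suggests that spatio-temporal patterns are more discriminable than single-site stimulation at this pad density.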
Affiliation(s)
- Fabricio A. Jure: Neurorehabilitation Systems, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, 9220 Aalborg, Denmark
- Erika G. Spaich: Neurorehabilitation Systems, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, 9220 Aalborg, Denmark
- Strahinja Došen: Neurorehabilitation Systems, Department of Health Science and Technology, Faculty of Medicine, Aalborg University, 9220 Aalborg, Denmark
- Vojin Ilić: Department of Computing and Control Engineering, Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
- Goran Bijelić: Tecnalia, Basque Research and Technology Alliance (BRTA), 20009 Donostia-San Sebastian, Spain
9. Gonzalez M, Bismuth A, Lee C, Chestek CA, Gates DH. Artificial referred sensation in upper and lower limb prosthesis users: a systematic review. J Neural Eng 2022;19. PMID: 36001115; PMCID: PMC9514130; DOI: 10.1088/1741-2552/ac8c38.
Abstract
Objective: Electrical stimulation can induce sensation in the phantom limb of individuals with amputation. It is difficult to generalize existing findings, as there are many approaches to delivering stimulation and to assessing the characteristics and benefits of sensation. Therefore, the goal of this systematic review was to explore the stimulation parameters that effectively elicited referred sensation, the qualities of elicited sensation, and how the utility of referred sensation was assessed. Approach: We searched PubMed, Web of Science, and Engineering Village through January of 2022 to identify relevant papers. We included papers which electrically induced referred sensation in individuals with limb loss and excluded papers that did not contain stimulation parameters or outcome measures pertaining to stimulation. We extracted information on participant demographics, stimulation approaches, and participant outcomes. Main results: After applying exclusion criteria, 49 papers were included covering nine stimulation methods. Amplitude was the most commonly adjusted parameter (n = 25), followed by frequency (n = 22) and pulse width (n = 15). Of the 63 reports of sensation quality, most reported feelings of pressure (n = 52), paresthesia (n = 48), or vibration (n = 40), while less than half (n = 29) reported a sense of position or movement. Most papers evaluated the functional benefits of sensation (n = 33) using force matching or object identification tasks, while fewer papers quantified subjective measures (n = 16) such as pain or embodiment. Only 15 studies (36%) observed percept intensity, quality, or location over multiple sessions. Significance: Most studies that measured functional performance demonstrated some benefit of providing participants with sensory feedback. However, few studies could experimentally manipulate sensation location or quality. Direct comparisons between studies were limited by variability in methodologies and outcome measures. As such, we offer recommendations to aid more standardized reporting in future research.
Affiliation(s)
- Michael Gonzalez: Department of Robotics, University of Michigan, Ann Arbor, MI, United States of America
- Alex Bismuth: School of Kinesiology, University of Michigan, Ann Arbor, MI, United States of America
- Christina Lee: Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States of America
- Cynthia A Chestek: Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States of America
- Deanna H Gates: School of Kinesiology, University of Michigan, Ann Arbor, MI, United States of America
10. Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes: Design, Implementation, and Usability Audit. Sensors (Basel) 2021;21:7351. PMID: 34770658; PMCID: PMC8587929; DOI: 10.3390/s21217351.
Abstract
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiary of such a system would be people that cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
11. Jouybari AF, Franza M, Kannape OA, Hara M, Blanke O. Tactile spatial discrimination on the torso using vibrotactile and force stimulation. Exp Brain Res 2021;239:3175-3188. PMID: 34424361; PMCID: PMC8541989; DOI: 10.1007/s00221-021-06181-x.
Abstract
There is a steadily growing number of mobile communication systems that provide spatially encoded tactile information to the human torso. However, the increased use of such hands-off displays is not yet matched or supported by a systematic perceptual characterization of tactile spatial discrimination on the torso. Furthermore, there are currently no data testing spatial discrimination for dynamic force stimuli applied to the torso. In the present study, we measured tactile point localization (LOC) and tactile direction discrimination (DIR) on the thoracic spine using two unisex torso-worn tactile vests realized with arrays of 3 × 3 vibrotactile or force feedback actuators. We aimed, first, to evaluate and compare the spatial discrimination of vibrotactile and force stimulation on the thoracic spine and, second, to investigate the relationship between the LOC and DIR results across stimulation types. Thirty-four healthy participants performed both tasks with both vests. Tactile accuracies for vibrotactile and force stimulation were 60.7% and 54.6% for the LOC task, and 71.0% and 67.7% for the DIR task, respectively. Performance with the two stimulation types was positively correlated, although accuracies were higher for vibrotactile than for force stimulation across tasks, arguably due to specific properties of vibrotactile stimulation. We observed comparable directional anisotropies in the LOC results for both stimulation types; however, anisotropies in the DIR task were observed only with vibrotactile stimulation. We discuss our findings with respect to tactile perception research as well as their implications for the design of high-resolution torso-mounted tactile displays for spatial cueing.
Affiliation(s)
- Atena Fadaei Jouybari: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Matteo Franza: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Oliver Alan Kannape: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Masayuki Hara: Graduate School of Science and Engineering, Saitama University, Saitama, Japan
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Bertarelli Chair in Cognitive Neuroprosthetics, Center for Neuroprosthetics and Brain Mind Institute, School of Life Sciences, Campus Biotech, Swiss Federal Institute of Technology (EPFL), 1012 Geneva, Switzerland
12. Parry R, Sarlegna FR, Jarrassé N, Roby-Brami A. Anticipation and compensation for somatosensory deficits in object handling: evidence from a patient with large fiber sensory neuropathy. J Neurophysiol 2021;126:575-590. PMID: 34232757; DOI: 10.1152/jn.00517.2020.
Abstract
The purpose of this study was to determine the contributions of feedforward and feedback processes on grip force regulation and object orientation during functional manipulation tasks. One patient with massive somatosensory loss resulting from large fiber sensory neuropathy and 10 control participants were recruited. Three experiments were conducted: 1) perturbation to static holding; 2) discrete vertical movement; and 3) functional grasp and place. The availability of visual feedback was also manipulated to assess the nature of compensatory mechanisms. Results from experiment 1 indicated that both the deafferented patient and controls used anticipatory grip force adjustments before self-induced perturbation to static holding. The patient exhibited increased grip response time, but the magnitude of grip force adjustments remained correlated with perturbation forces in the self-induced and external perturbation conditions. In experiment 2, the patient applied peak grip force substantially in advance of maximum load force. Unlike controls, the patient's ability to regulate object orientation was impaired without visual feedback. In experiment 3, the duration of unloading, transport, and release phases were longer for the patient, with increased deviation of object orientation at phase transitions. These findings show that the deafferented patient uses distinct modes of anticipatory control according to task constraints and that responses to perturbations are mediated by alternative afferent information. 
The loss of somatosensory feedback thus appears to impair control of object orientation, whereas variation in the temporal organization of functional tasks may reflect strategies to mitigate object instability associated with changes in movement dynamics. NEW & NOTEWORTHY This study evaluates the effects of sensory neuropathy on the scaling and timing of grip force adjustments across different object handling tasks (i.e., holding, vertical movement, grasping, and placement). In particular, these results illustrate how novel anticipatory and online control processes emerge to compensate for the loss of somatosensory feedback. In addition, we provide new evidence on the role of somatosensory feedback in regulating object orientation during functional prehensile movement.
Collapse
Affiliation(s)
- Ross Parry
- LINP2 - Laboratoire Interdisciplinaire en Neurosciences, Physiologie et Psychologie: Activité Physique, Santé et Apprentissages, UPL, Université Paris Nanterre, Nanterre, France.,ISIR (Institute of Intelligent systems and robotics), Sorbonne Université UMR CNRS 7222, AGATHE team INSERM U 1150, Paris, France
| | | | - Nathanaël Jarrassé
- ISIR (Institute of Intelligent systems and robotics), Sorbonne Université UMR CNRS 7222, AGATHE team INSERM U 1150, Paris, France
| | - Agnès Roby-Brami
- ISIR (Institute of Intelligent systems and robotics), Sorbonne Université UMR CNRS 7222, AGATHE team INSERM U 1150, Paris, France
| |
Collapse
|
13
|
Singh HP, Kumar P. Developments in the human machine interface technologies and their applications: a review. J Med Eng Technol 2021; 45:552-573. [PMID: 34184601 DOI: 10.1080/03091902.2021.1936237] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Human-machine interface (HMI) techniques use bioelectrical signals to achieve real-time, synchronised communication between the human body and machine functioning. HMI technology not only provides real-time control access but also the ability to control multiple functions at a single instance of time, with modest human input and increased efficiency. HMI technologies yield advanced control in numerous applications such as health monitoring, medical diagnostics, development of prosthetic and assistive devices, the automotive and aerospace industries, robotic control, and many other fields. In this paper, various physiological signals, their acquisition and processing techniques, and their respective applications in different HMI technologies are discussed.
Collapse
Affiliation(s)
- Harpreet Pal Singh
- Department of Mechanical Engineering, Punjabi University, Patiala, India
| | - Parlad Kumar
- Department of Mechanical Engineering, Punjabi University, Patiala, India
| |
Collapse
|
14
|
Abstract
The perceived distance between two touches is anisotropic on many parts of the body. Generally, tactile distances oriented across body width are perceived as larger than distances oriented along body length, though the magnitude of such biases differs substantially across the body. In this study, we investigated tactile distance perception on the back. Participants made verbal estimates of the perceived distance between pairs of touches oriented either across body width or along body length on (a) the left hand, (b) the left upper back, and (c) the left lower back. There were clear tactile distance anisotropies on the hand and upper back, with distances oriented across body width overestimated relative to those along body length/height, consistent with previous results. On the lower back, however, an anisotropy in exactly the opposite direction was found. These results provide further evidence that tactile distance anisotropies vary systematically across the body and suggest that the spatial representation of touch on the lower back may differ qualitatively from that on other regions of the body.
Collapse
|
15
|
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18126216. [PMID: 34201269 PMCID: PMC8228544 DOI: 10.3390/ijerph18126216] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/03/2021] [Accepted: 06/03/2021] [Indexed: 11/20/2022]
Abstract
Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind groups to recognize objects and to perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained laboratory-scale research or pilot demonstrations. This high latency for data conversion leads to challenges in perceiving fast-moving objects or rapid environmental changes. To reduce this latency, prior analysis of auditory sensitivity is necessary. However, existing auditory sensitivity analyses are subjective because they were conducted using human behavioral analysis. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity that reduces transmission latency in visual-auditory sensory substitution, which is related to the perception of visual information. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in behavioral experiments. We conducted experiments with three participant groups: sighted users (SU), congenitally blind (CB), and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal for sensory substitution could be reduced by 50%. This result indicates the possibility of improving the performance of the conventional vOICe method by up to two times. We confirmed through behavioral experiments that our results are consistent with human assessment. Analyzing auditory sensitivity with deep learning models has the potential to improve the efficiency of sensory substitution.
Collapse
|
16
|
Abstract
Vibrotactile displays worn on the back can be used as sensory substitution devices. Vibrotactile stimulation is often chosen because vibration motors are easy to incorporate and relatively cheap. When designing such displays, knowledge about vibrotactile perception on the back is crucial. In the current study we investigated distance perception. Biases in distance perception can explain spatial distortions that occur when, for instance, tracing a shape using vibration. We investigated the effect of orientation (horizontal vs. vertical), the effect of positioning with respect to the spine, and the effect of switching vibration motors on sequentially versus simultaneously. Our study included four conditions. The condition with a horizontal orientation and both vibration motors switching on sequentially on the same side of the spine was chosen as the baseline condition. The other three conditions were compared to this baseline. We found that distances felt longer in the vertical direction than in the horizontal direction. Furthermore, distances were perceived to be longer when vibration motors were distributed on both sides of the spine than when they were on the same side. Finally, distances felt shorter when vibration motors were switched on simultaneously rather than sequentially. In the simultaneous case, a distance of 4 cm was not clearly perceived as different from a distance of 12 cm. When designing vibrotactile displays, these anisotropies in perceived distance need to be taken into account; otherwise the intended shape will not match the perceived shape. Also, dynamically presented distances are perceived more clearly than static distances. This finding supports recommendations made in previous studies that dynamic patterns are easier to perceive than static patterns.
Collapse
|
17
|
Scurry AN, Chifamba K, Jiang F. Electrophysiological Dynamics of Visual-Tactile Temporal Order Perception in Early Deaf Adults. Front Neurosci 2020; 14:544472. [PMID: 33071731 PMCID: PMC7539666 DOI: 10.3389/fnins.2020.544472] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 08/19/2020] [Indexed: 11/17/2022] Open
Abstract
Studies of compensatory plasticity in early deaf (ED) individuals have mainly focused on unisensory processing, and on spatial rather than temporal coding. However, precise discrimination of the temporal relationship between stimuli is imperative for successful perception of and interaction with the complex, multimodal environment. Although the properties of cross-modal temporal processing have been extensively studied in neurotypical populations, remarkably little is known about how the loss of one sense impacts the integrity of temporal interactions among the remaining senses. To understand how auditory deprivation affects multisensory temporal interactions, ED and age-matched normal hearing (NH) controls performed a visual-tactile temporal order judgment task in which visual and tactile stimuli were separated by varying stimulus onset asynchronies (SOAs) and subjects had to discern the leading stimulus. Participants performed the task while EEG data were recorded. Group-averaged event-related potential waveforms were compared between groups in occipital and fronto-central electrodes. Despite similar temporal order sensitivities and performance accuracy, ED adults had larger visual P100 amplitudes for all SOA levels and larger tactile N140 amplitudes for the shortest asynchronous (± 30 ms) and synchronous SOA levels. The enhanced signal strength reflected in these components in ED adults is discussed in terms of compensatory recruitment of cortical areas for visual-tactile processing. In addition, ED adults had similar tactile P200 amplitudes as NH but longer P200 latencies, suggesting reduced efficiency in later processing of tactile information. Overall, these results suggest that greater responses by ED for early processing of visual and tactile signals are likely critical for maintained performance in visual-tactile temporal order discrimination.
Collapse
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Kudzai Chifamba
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| |
Collapse
|
18
|
Cognitive and Affective Assessment of Navigation and Mobility Tasks for the Visually Impaired via Electroencephalography and Behavioral Signals. SENSORS 2020; 20:s20205821. [PMID: 33076251 PMCID: PMC7602506 DOI: 10.3390/s20205821] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/12/2020] [Accepted: 10/13/2020] [Indexed: 11/25/2022]
Abstract
This paper presents the assessment of cognitive load (as an effective real-time index of task difficulty) and the level of brain activation during an experiment in which eight visually impaired subjects performed two types of tasks while using the white cane and the Sound of Vision assistive device with three types of sensory input: audio, haptic, and multimodal (audio and haptic simultaneously). The first task was to identify object properties and the second to navigate and avoid obstacles in both virtual environment and real-world settings. The results showed that the haptic stimuli were less intuitive than the audio ones and that navigation with the Sound of Vision device increased cognitive load and working memory load. Visual cortex asymmetry was lower in the case of multimodal stimulation than in the case of separate stimulation (audio or haptic). There was no correlation between visual cortical activity and the number of collisions during navigation, regardless of the type of navigation or sensory input. The visual cortex was activated when using the device, but only for the late-blind users. For all the subjects, navigation with the Sound of Vision device induced a low negative valence, in contrast with white cane navigation.
Collapse
|
19
|
Neugebauer A, Rifai K, Getzlaff M, Wahl S. Navigation aid for blind persons by visual-to-auditory sensory substitution: A pilot study. PLoS One 2020; 15:e0237344. [PMID: 32818953 PMCID: PMC7446825 DOI: 10.1371/journal.pone.0237344] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/23/2020] [Indexed: 11/19/2022] Open
Abstract
PURPOSE In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device to improve the performance of blind persons in navigation and recognition tasks. METHODS A sensory substitution algorithm that translates 3D visual information into audio feedback was designed. This algorithm was integrated into an augmented reality based mobile phone application. Using the mobile device as a sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, virtual 3D objects and structures had to be identified by the participants using the same sensory substitution device. RESULTS The mobile application enabled participants to complete the navigation and object recognition tasks in an experimental environment within the first trials, without previous training. This demonstrates the general feasibility and low entry barrier of the designed sensory substitution algorithm. In direct comparison to the white cane, the sensory substitution device did not offer a statistically significant improvement in navigation within the ten-hour study duration.
Collapse
Affiliation(s)
- Alexander Neugebauer
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- * E-mail:
| | - Katharina Rifai
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
| | - Mathias Getzlaff
- Institute for Applied Physics, Heinrich-Heine University Duesseldorf, Duesseldorf, Germany
| | - Siegfried Wahl
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
| |
Collapse
|
20
|
Allison TS, Moritz J, Turk P, Stone-Roy LM. Lingual electrotactile discrimination ability is associated with the presence of specific connective tissue structures (papillae) on the tongue surface. PLoS One 2020; 15:e0237142. [PMID: 32764778 PMCID: PMC7413419 DOI: 10.1371/journal.pone.0237142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2019] [Accepted: 07/21/2020] [Indexed: 11/19/2022] Open
Abstract
Electrical stimulation of nerve endings in the tongue can be used to communicate information to users and has been shown to be highly effective in sensory substitution applications. The anterior tip of the tongue has very small somatosensory receptive fields, comparable to those of the finger tips, allowing for precise two-point discrimination and high tactile sensitivity. However, perception of electrotactile stimuli varies significantly between users, and across the tongue surface. Despite this, previous studies all used uniform electrode grids to stimulate a region of the dorsal-medial tongue surface. In an effort to customize electrode layouts for individual users, and thus improve efficacy for sensory substitution applications, we investigated whether specific neuroanatomical and physiological features of the tongue are associated with enhanced ability to perceive active electrodes. Specifically, the study described here was designed to test whether fungiform papillae density and/or propylthiouracil sensitivity are positively or negatively associated with perceived intensity and/or discrimination ability for lingual electrotactile stimuli. Fungiform papillae number and distribution were determined for 15 participants and they were exposed to patterns of electrotactile stimulation (ETS) and asked to report perceived intensity and perceived number of stimuli. Fungiform papillae number and distribution were then compared to ETS characteristics using comprehensive and rigorous statistical analyses. Our results indicate that fungiform papillae density is correlated with enhanced discrimination ability for electrical stimuli. In contrast, papillae density, on average, is not correlated with perceived intensity of active electrodes. However, results for at least one participant suggest that further research is warranted. Our data indicate that propylthiouracil taster status is not related to ETS perceived intensity or discrimination ability. 
These data indicate that individuals with higher fungiform papillae number and density in the anterior medial tongue region may be better able to use lingual ETS for sensory substitution.
Collapse
Affiliation(s)
- Tyler S. Allison
- Department of Biomedical Sciences, Colorado State University, Fort Collins, Colorado, United States of America
| | - Joel Moritz
- Department of Mechanical Engineering, Colorado State University, Fort Collins, Colorado, United States of America
- Sapien LLC, Fort Collins, Colorado, United States of America
| | - Philip Turk
- Department of Statistics, Colorado State University, Fort Collins, Colorado, United States of America
| | - Leslie M. Stone-Roy
- Department of Biomedical Sciences, Colorado State University, Fort Collins, Colorado, United States of America
- * E-mail:
| |
Collapse
|
21
|
Isaksson J, Jansson T, Nilsson J. Audomni: Super-Scale Sensory Supplementation to Increase the Mobility of Blind and Low-Vision Individuals-A Pilot Study. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1187-1197. [PMID: 32286992 DOI: 10.1109/tnsre.2020.2985626] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Blindness and low vision have severe effects on individuals' quality of life and socioeconomic cost; a main contributor is a prevalent and acutely decreased mobility level. To alleviate this, numerous technological solutions have been proposed in the last 70 years; however, none has become widespread. METHOD In this paper, we introduce the vision-to-audio, super-scale sensory substitution/supplementation device Audomni; we address the field-encompassing issues of ill-motivated and overabundant test methodologies and metrics; and we utilize our proposed Desire of Use model to evaluate the pilot user tests, their results, and Audomni itself. RESULTS Audomni has a spatial resolution of 80 x 60 pixels at ~1.2° angular resolution and close to real-time temporal resolution, outdoor-viable technology, and several novel differentiation methods. The tests indicated that Audomni has a low learning curve, and several key mobility subtasks were accomplished; however, the tests would benefit from higher real-life motivation and data collection affordability. CONCLUSION Audomni shows promise as a viable mobility device, with some addressable issues. Employing Desire of Use to design future tests should lend them both high real-life motivation and relevance. SIGNIFICANCE As far as we know, Audomni features the greatest information conveyance rate in the field, yet seems to offer comprehensible and fairly intuitive sonification; this work is also the first to utilize Desire of Use as a tool to evaluate user tests and a device, and to lay out an overarching project aim.
Collapse
|
22
|
Isaksson J, Jansson T, Nilsson J. Desire of Use: A Hierarchical Decomposition of Activities and its Application on Mobility of Blind and Low-Vision Individuals. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1146-1156. [PMID: 32286991 DOI: 10.1109/tnsre.2020.2985616] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Blind and low-vision individuals often have severely reduced mobility, affecting their quality of life and associated socioeconomic cost. Despite numerous efforts and great technological progress, the only primary mobility aids in common use are still white canes and seeing-eye dogs. Furthermore, there is a permeating tendency in the field to ignore knowledge of both mobility and the target group, as well as to constantly design new metrics and tests, which makes comparisons between solutions markedly more difficult. METHOD The Desire of Use model is introduced in an effort to promote a more holistic approach; it should be generalizable to any activity by any user, but is here applied to the mobility of blind and low-vision individuals through a proposal and integration of parameters. RESULTS An embodiment of the model is presented, and with it we show why today's popular mobility metrics are insufficient to guide design; which tasks and metrics should provide better understanding; and which fundamental properties determine them and are critical to discuss. CONCLUSION Desire of Use has been introduced as a tool and a theoretical framework, and a realization has been proposed. SIGNIFICANCE Desire of Use offers both a structured perspective on the pertinent design challenges facing a given solution and a platform from which to compare test results and properties of existing solutions; in the field of electronic travel aids, for example, it should prove valuable for designing and evaluating new tests and devices.
Collapse
|
23
|
Brown FE, Sutton J, Yuen HM, Green D, Van Dorn S, Braun T, Cree AJ, Russell SR, Lotery AJ. A novel, wearable, electronic visual aid to assist those with reduced peripheral vision. PLoS One 2019; 14:e0223755. [PMID: 31613911 PMCID: PMC6793879 DOI: 10.1371/journal.pone.0223755] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Accepted: 09/29/2019] [Indexed: 12/05/2022] Open
Abstract
Purpose To determine whether visual-tactile sensory substitution utilizing the Low-vision Enhancement Optoelectronic (LEO) Belt prototype is suitable as a new visual aid for those with reduced peripheral vision by assessing mobility performance and user opinions. Methods Sighted subjects (n = 20) and subjects with retinitis pigmentosa (RP) (n = 6) were recruited. The LEO Belt was evaluated in two cohorts: normally sighted subjects wearing goggles that artificially reduced peripheral vision to simulate stages of RP progression, and subjects with advanced visual field limitation from RP. Mobility speed and accuracy were assessed using simple mazes, with and without the LEO Belt, to determine its usefulness across disease severities and lighting conditions. Results Sighted subjects wearing the most narrowed-field goggles, simulating the most advanced RP, had increased mobility accuracy (44% mean reduction in errors, p = 0.014) and self-reported confidence (77% mean increase, p = 0.004) when using the LEO Belt. Additionally, use of the LEO Belt doubled mobility accuracy for RP subjects with remaining visual fields between 10° and 20°. Further, in dim lighting, confidence scores for this group also doubled. By patient-reported outcomes, subjects largely deemed the device comfortable (100%) and easy to use (92.3%), and thought it had potential future benefit as a visual aid (96.2%). However, regardless of severity of vision loss or simulated vision loss, all subjects were slower to complete the mazes using the device. Conclusions The LEO Belt improves mobility accuracy, and therefore confidence, in those with severely restricted peripheral vision. The LEO Belt’s positive user feedback suggests it has potential to become the next generation of visual aid for visually impaired individuals. Given the novelty of this approach, we expect navigation speeds may improve with experience.
Collapse
Affiliation(s)
- Ffion E. Brown
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
| | - Janice Sutton
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
| | - Ho M. Yuen
- Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
| | - Dylan Green
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
| | - Spencer Van Dorn
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
| | - Terry Braun
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
| | - Angela J. Cree
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
| | - Stephen R. Russell
- Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, United States of America
- * E-mail: (AL); (SR)
| | - Andrew J. Lotery
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, University Hospital Southampton, Tremona Road, Southampton, England, United Kingdom
- Southampton Eye Unit, University Hospital Southampton NHS Foundation Trust, University Hospital Southampton, Southampton, England, United Kingdom
- * E-mail: (AL); (SR)
| |
Collapse
|
24
|
Evaluation of an Audio-haptic Sensory Substitution Device for Enhancing Spatial Awareness for the Visually Impaired. Optom Vis Sci 2019; 95:757-765. [PMID: 30153241 PMCID: PMC6133230 DOI: 10.1097/opx.0000000000001284] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023] Open
Abstract
Supplemental digital content is available in the text. SIGNIFICANCE Visually impaired participants were surprisingly fast in learning a new sensory substitution device, which allows them to detect obstacles within a 3.5-m radius and to find the optimal path in between. Within a few hours of training, participants performed complex navigation as successfully as with the white cane. PURPOSE Globally, millions of people live with vision impairment, yet effective assistive devices to increase their independence remain scarce. A promising method is the use of sensory substitution devices, which are human-machine interfaces transforming visual into auditory or tactile information. The Sound of Vision (SoV) system continuously encodes visual elements of the environment into audio-haptic signals. Here, we evaluated the SoV system in complex navigation tasks to compare performance with the SoV system against the white cane, quantify training effects, and collect user feedback. METHODS Six visually impaired participants received eight hours of training with the SoV system, completed a usability questionnaire, and repeatedly performed assessments in which they navigated through standardized scenes. In each assessment, participants had to avoid collisions with obstacles using the SoV system, the white cane, or both assistive devices. RESULTS The results show rapid and substantial learning with the SoV system, with fewer collisions and higher obstacle awareness. After four hours of training, visually impaired people were able to avoid collisions in a difficult navigation task as successfully as when using the cane, although they still needed more time. Overall, participants rated the SoV system's usability favorably. CONCLUSIONS Unlike the cane, the SoV system enables users to detect the best free space between objects within a 3.5-m (up to 10-m) radius and, importantly, elevated and dynamic obstacles. 
All in all, we consider that visually impaired people can learn to adapt to the haptic-auditory representation and achieve expertise in usage through well-defined training within acceptable time.
Collapse
|
25
|
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. [PMID: 31547778 DOI: 10.1177/0301006619873194] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
Collapse
Affiliation(s)
| | - Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
| | - James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
| | - Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
| | - Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
| | - Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
| |
Collapse
|
26
|
Hoffmann R, Brinkhuis MAB, Unnthorsson R, Kristjánsson Á. The intensity order illusion: temporal order of different vibrotactile intensity causes systematic localization errors. J Neurophysiol 2019; 122:1810-1820. [PMID: 31433718 DOI: 10.1152/jn.00125.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Haptic illusions serve as important tools for studying neurocognitive processing of touch and can be utilized in practical contexts. We report a new spatiotemporal haptic illusion that involves mislocalization when the order of vibrotactile intensity is manipulated. We tested two types of motors mounted in a 4 × 4 array in the lower thoracic region. We created apparent movement with two successive vibrotactile stimulations of varying distance (40, 20, or 0 mm) and direction (up, down, or same) while changing the temporal order of stimulation intensity (strong-weak vs. weak-strong). Participants judged the perceived direction of movement in a 2-alternative forced-choice task. The results suggest that varying the temporal order of vibrotactile stimuli with different intensity leads to systematic localization errors: when a strong-intensity stimulus was followed by a weak-intensity stimulus, the probability that participants perceived a downward movement increased, and vice versa. The illusion is so strong that the order of stimulation intensity determined perception even when the actual presented movement was the opposite. We then verified this "intensity order illusion" using an open response format where observers judged the orientation of an imaginary line drawn between two sequential tactor activations. The intensity order illusion reveals a strong bias in vibrotactile perception that has strong implications for the design of haptic information systems.
NEW & NOTEWORTHY We report a new illusion involving mislocalization of stimulation when the order of vibrotactile intensity is manipulated. When a strong-intensity stimulus follows a weak-intensity stimulus, the probability that participants perceive an upward movement increases, and vice versa. The illusion is so strong that the order of stimulation intensity determined perception even when the actual presented movement was the opposite. This illusion is important for the design of vibrotactile stimulation displays.
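The factorial design described above (distance × direction × intensity order) can be enumerated directly; a minimal sketch, where the constraint that a 0 mm step must be the "same" location is my reading of the abstract rather than an explicit statement in it:

```python
from itertools import product

# Stimulus factors as described in the abstract:
distances_mm = [40, 20, 0]
directions = ["up", "down", "same"]
intensity_orders = [("strong", "weak"), ("weak", "strong")]

conditions = [
    {"distance_mm": d, "direction": dir_, "order": order}
    for d, dir_, order in product(distances_mm, directions, intensity_orders)
    # assumption: a 0 mm step can only be "same"; nonzero steps are up/down
    if (d == 0) == (dir_ == "same")
]
n_cells = len(conditions)  # 2 zero-distance + 8 moving = 10 condition cells
```

Under this reading, the critical comparison is between cells where intensity order and actual direction agree and cells where they conflict.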
Collapse
Affiliation(s)
- Rebekka Hoffmann
- Faculty of Psychology, University of Iceland, Reykjavik, Iceland; Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
| | | | - Runar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
| | - Árni Kristjánsson
- Faculty of Psychology, University of Iceland, Reykjavik, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
| |
Collapse
|
27
|
Caraiman S, Zvoristeanu O, Burlacu A, Herghelegiu P. Stereo Vision Based Sensory Substitution for the Visually Impaired. SENSORS 2019; 19:s19122771. [PMID: 31226796 PMCID: PMC6630569 DOI: 10.3390/s19122771] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 06/11/2019] [Accepted: 06/17/2019] [Indexed: 11/25/2022]
Abstract
The development of computer-vision-based systems that help visually impaired people perceive the environment, orient themselves, and navigate has been the subject of much research in recent years. Significant resources have been devoted to developing sensory substitution devices (SSDs) and electronic travel aids for the rehabilitation of the visually impaired. The Sound of Vision (SoV) project took a comprehensive approach to developing such an SSD, tackling all the challenging aspects that have so far restrained large-scale adoption of such systems by the intended audience: wearability, real-time operation, pervasiveness, usability, and cost. This article presents the artificial-vision-based component of the SoV SSD that performs scene reconstruction and segmentation in outdoor environments. In contrast with the indoor use case, where the system acquires depth input from a structured light camera, outdoors SoV relies on stereo vision to detect the elements of interest and provide an audio and/or haptic representation of the environment to the user. Our stereo-based method is designed to work with wearable acquisition devices and still provide a real-time, reliable description of the scene despite unreliable depth input from the stereo correspondence and the complex 6-DOF motion of the head-worn camera. We quantitatively evaluate our approach on a custom benchmarking dataset acquired with SoV cameras and summarize the highlights of the usability evaluation with visually impaired users.
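The depth that a stereo pipeline like SoV's recovers from correspondence follows the standard pinhole-stereo relation Z = f·B/d. A minimal sketch of that relation; the focal length and baseline values below are illustrative, not SoV's actual calibration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in metres from the pinhole stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline.
# Disparity shrinks with range, so depth from stereo correspondence
# becomes increasingly unreliable for distant scene elements:
near = depth_from_disparity(700, 0.12, 42.0)  # a large disparity -> close object
far = depth_from_disparity(700, 0.12, 8.4)    # a small disparity -> distant object
```

The inverse dependence on disparity is one reason the abstract flags "unreliable depth input from the stereo correspondence" as a core challenge outdoors.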
Collapse
Affiliation(s)
- Simona Caraiman
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania.
| | - Otilia Zvoristeanu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania.
| | - Adrian Burlacu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania.
| | - Paul Herghelegiu
- Faculty of Automatic Control and Computer Engineering, "Gheorghe Asachi" Technical University of Iasi, D. Mangeron 27, 700050 Iasi, Romania.
| |
Collapse
|
28
|
Measuring relative vibrotactile spatial acuity: effects of tactor type, anchor points and tactile anisotropy. Exp Brain Res 2018; 236:3405-3416. [PMID: 30293171 PMCID: PMC6267683 DOI: 10.1007/s00221-018-5387-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2018] [Accepted: 09/27/2018] [Indexed: 12/26/2022]
Abstract
Vibrotactile displays can compensate for the loss of sensory function of people with permanent or temporary deficiencies in vision, hearing, or balance, and can augment the immersive experience in virtual environments for entertainment, or professional training. This wide range of potential applications highlights the need for research on the basic psychophysics of mechanisms underlying human vibrotactile perception. One key consideration when designing tactile displays is determining the minimal possible spacing between tactile motors (tactors), by empirically assessing the maximal throughput of the skin, or, in other words, vibrotactile spatial acuity. Notably, such estimates may vary by tactor type. We assessed vibrotactile spatial acuity in the lower thoracic region for three different tactor types, each mounted in a 4 × 4 array with center-to-center inter-tactor distances of 25 mm, 20 mm, and 10 mm. Seventeen participants performed a relative three-alternative forced-choice point localization task with successive tactor activation for both vertical and horizontal stimulus presentation. The results demonstrate that specific tactor characteristics (frequency, acceleration, contact area) significantly affect spatial acuity measurements, highlighting that the results of spatial acuity measurements may only apply to the specific tactors tested. Furthermore, our results reveal an anisotropy in vibrotactile perception, with higher spatial acuity for horizontal than for vertical stimulus presentation. The findings allow better understanding of vibrotactile spatial acuity and can be used for formulating guidelines for the design of tactile displays, such as regarding inter-tactor spacing, choice of tactor type, and direction of stimulus presentation.
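In a three-alternative forced-choice localization task like the one above, chance performance is 1/3, and whether observed accuracy exceeds chance can be checked with an exact one-sided binomial test. A minimal sketch; the trial counts below are illustrative, not the study's data:

```python
from math import comb

def p_above_chance(correct: int, trials: int, chance: float = 1 / 3) -> float:
    """One-sided exact binomial p-value: probability of at least `correct`
    hits in `trials` attempts if the observer were guessing at `chance`."""
    return sum(comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))

# Illustrative: 50 correct of 90 trials in a 3AFC task is far above
# the 30 correct expected under guessing
p = p_above_chance(50, 90)
```

The same test applies per inter-tactor distance, which is how "above chance at the smallest spacing" claims (as in the following entry) can be made precise.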
Collapse
|
29
|
Jóhannesson ÓI, Hoffmann R, Valgeirsdóttir VV, Unnþórsson R, Moldoveanu A, Kristjánsson Á. Relative vibrotactile spatial acuity of the torso. Exp Brain Res 2017; 235:3505-3515. [PMID: 28856387 PMCID: PMC5649388 DOI: 10.1007/s00221-017-5073-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2017] [Accepted: 08/22/2017] [Indexed: 11/25/2022]
Abstract
While tactile acuity for pressure has been extensively investigated, far less is known about acuity for vibrotactile stimulation. Vibrotactile acuity is important, however, as such stimulation is used in many applications, including sensory substitution devices. We tested discrimination of vibrotactile stimulation from eccentric rotating mass motors with in-plane vibration. In three experiments, we tested gradually decreasing center-to-center (c/c) distances, from 30 mm (experiment 1) to 13 mm (experiment 3). Observers judged whether a second vibrating stimulator (‘tactor’) was to the left or right of, or in the same place as, a first one that came on 250 ms before the onset of the second (with a 50-ms inter-stimulus interval). The results show that while accuracy tends to decrease the closer together the tactors are, discrimination accuracy is still well above chance for the smallest distance, placing the threshold for vibrotactile stimulation well below 13 mm, lower than recent estimates. The results cast new light on vibrotactile sensitivity and can furthermore be of use in the design of devices that convey information through vibrotactile stimulation.
Collapse
Affiliation(s)
- Ómar I Jóhannesson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
| | - Rebekka Hoffmann
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland.
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland.
| | | | - Rúnar Unnþórsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
| | - Alin Moldoveanu
- Faculty of Automatic Control and Computers, Polytechnic University of Bucharest, Bucharest, Romania
| | - Árni Kristjánsson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
| |
Collapse
|