1
Shayman CS, McCracken MK, Finney HC, Katsanevas AM, Fino PC, Stefanucci JK, Creem-Regehr SH. Effects of older age on visual and self-motion sensory cue integration in navigation. Exp Brain Res 2024; 242:1277-1289. PMID: 38548892; PMCID: PMC11111325; DOI: 10.1007/s00221-024-06818-7.
Abstract
Older adults demonstrate impairments in navigation that cannot be explained by general cognitive and motor declines. Previous work has shown that older adults may combine sensory cues during navigation differently than younger adults, though this work has largely been done in dark environments where sensory integration may differ from full-cue environments. Here, we test whether aging adults optimally combine cues from two sensory systems critical for navigation: vision (landmarks) and body-based self-motion cues. Participants completed a homing (triangle completion) task using immersive virtual reality, which afforded navigation in a well-lit environment with a visible ground plane. An optimal model, based on principles of maximum-likelihood estimation, predicts that precision in homing should increase with multisensory information in a manner consistent with each individual sensory cue's perceived reliability (measured by variability). We found that well-aging adults (with normal or corrected-to-normal sensory acuity and active lifestyles) were more variable and less accurate than younger adults during navigation. Both older and younger adults relied more on their visual systems than a maximum-likelihood estimation model would suggest. Overall, younger adults' visual weighting matched the model's predictions, whereas older adults showed sub-optimal sensory weighting. In addition, high inter-individual differences were seen in both younger and older adults. These results suggest that older adults do not optimally weight each sensory system when cues are combined during navigation, and that older adults may benefit from interventions that help them recalibrate the combination of visual and self-motion cues for navigation.
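The optimal model referenced in this abstract, maximum-likelihood estimation for two cues, has a standard closed form. A minimal sketch of its predictions (the function and variable names are ours, for illustration only, not the authors' code):

```python
import numpy as np

def mle_combination(sigma_visual, sigma_self_motion):
    """Maximum-likelihood predictions for two-cue integration.

    Reliability is the inverse of a cue's variance. Returns the optimal
    weight on the visual cue and the predicted variability (SD) of the
    combined estimate, which is never worse than the best single cue.
    """
    r_v = 1.0 / sigma_visual**2        # reliability of the visual (landmark) cue
    r_s = 1.0 / sigma_self_motion**2   # reliability of the self-motion cue
    w_visual = r_v / (r_v + r_s)       # optimal weight on vision
    sigma_combined = np.sqrt(1.0 / (r_v + r_s))  # predicted multisensory SD
    return w_visual, sigma_combined

# Equally reliable cues: weight 0.5, combined SD reduced by a factor of sqrt(2)
w, s = mle_combination(2.0, 2.0)
```

Over-weighting vision relative to `w_visual`, as both age groups did here, costs precision relative to this prediction.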
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA.
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, USA.
- Maggie K McCracken
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Hunter C Finney
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Andoni M Katsanevas
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, USA
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, 380 S. 1500 E. Room 502, Salt Lake City, UT, 84112, USA
2
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024; 56:2842-2858. PMID: 37730934; PMCID: PMC11133123; DOI: 10.3758/s13428-023-02227-w.
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains - the classic hallmark of cue combination - is important for characterising perceptual systems, their development and functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast, or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate for a careful comparator selection and task design to ensure that cue combination is tested with maximum power, while reducing the inflation of false positives.
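The comparator problem the authors describe can be illustrated with a small simulation (a sketch under our own assumptions about noise levels, not the authors' code): when observers differ in which single cue is their most precise, comparing combined-cue performance against a single cue chosen at the group level produces apparent precision gains even when no combination occurs at all.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_observers=10000):
    # Two single-cue noise SDs per simulated observer (arbitrary spread)
    sds = rng.uniform(0.5, 2.0, size=(n_observers, 2))
    # No combination at all: each observer's two-cue performance simply
    # equals their best (lowest-SD) single cue.
    combined = sds.min(axis=1)
    # Wrong comparator: one cue chosen at the group level, e.g. the cue
    # with the better group-mean precision.
    group_best_cue = np.argmin(sds.mean(axis=0))
    wrong = sds[:, group_best_cue]
    # Correct comparator: each observer's own best single cue.
    right = sds.min(axis=1)
    # Fraction of observers showing an apparent precision gain (lower SD)
    return np.mean(combined < wrong), np.mean(combined < right)

false_gain, true_gain = simulate()
# Against the group-level cue, many observers look like they "gained"
# precision; against each observer's own best cue, none do.
```

The per-observer comparator removes the artifact entirely, which is exactly the analysis the paper advocates.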
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
3
Hamilton-Fletcher G, Liu M, Sheng D, Feng C, Hudson TE, Rizzo JR, Chan KC. Accuracy and Usability of Smartphone-Based Distance Estimation Approaches for Visual Assistive Technology Development. IEEE Open J Eng Med Biol 2024; 5:54-58. PMID: 38487094; PMCID: PMC10939328; DOI: 10.1109/ojemb.2024.3358562.
Abstract
Goal: Distance information is highly requested in assistive smartphone Apps by people who are blind or low vision (PBLV). However, current techniques have not been evaluated systematically for accuracy and usability. Methods: We tested five smartphone-based distance-estimation approaches in the image center and periphery at 1-3 meters, including machine learning (CoreML), infrared grid distortion (IR_self), light detection and ranging (LiDAR_back), and augmented reality room-tracking on the front (ARKit_self) and back-facing cameras (ARKit_back). Results: For accuracy in the image center, all approaches had <±2.5 cm average error, except CoreML which had ±5.2-6.2 cm average error at 2-3 meters. In the periphery, all approaches were more inaccurate, with CoreML and IR_self having the highest average errors at ±41 cm and ±32 cm respectively. For usability, CoreML fared favorably with the lowest central processing unit usage, second lowest battery usage, highest field-of-view, and no specialized sensor requirements. Conclusions: We provide key information that helps design reliable smartphone-based visual assistive technologies to enhance the functionality of PBLV.
Affiliation(s)
- Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Mingxin Liu
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Diwei Sheng
- Department of Civil and Urban Engineering & Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Chen Feng
- Department of Civil and Urban Engineering & Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Todd E. Hudson
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- John-Ross Rizzo
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY 11201, USA
- Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY 11201, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
4
de Paz C, Travieso D. A direct comparison of sound and vibration as sources of stimulation for a sensory substitution glove. Cogn Res Princ Implic 2023; 8:41. PMID: 37402032; DOI: 10.1186/s41235-023-00495-w.
Abstract
Sensory substitution devices (SSDs) facilitate the detection of environmental information through enhancement of touch and/or hearing capabilities. Research has demonstrated that several tasks can be successfully completed using acoustic, vibrotactile, and multimodal devices. The suitability of a substituting modality is also mediated by the type of information required to perform the specific task. The present study tested the adequacy of touch and hearing in a grasping task by utilizing a sensory substitution glove. The substituting modalities inform, through increases in stimulation intensity, about the distance between the fingers and the objects. A psychophysical experiment of magnitude estimation was conducted. Forty blindfolded sighted participants discriminated the intensity of vibrotactile and acoustic stimulation equivalently, although they experienced some difficulty with the more intense stimuli. Additionally, a grasping task involving cylindrical objects of varying diameters, distances, and orientations was performed. Thirty blindfolded sighted participants were divided into vibration, sound, or multimodal groups. High performance was achieved (84% correct grasps), with equivalent success rates between groups. Movement variables showed more precision and confidence in the multimodal condition. Through a questionnaire, the multimodal group indicated their preference for using a multimodal SSD in daily life and identified vibration as their primary source of stimulation. These results demonstrate that performance improves with specific-purpose SSDs when the information necessary for a task is identified and coupled with the delivered stimulation. Furthermore, the results suggest that it is possible to achieve functional equivalence between substituting modalities when these previous steps are met.
Affiliation(s)
- Carlos de Paz
- Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain
- David Travieso
- Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain
5
Negen J, Slater H, Nardini M. Sensory augmentation for a rapid motor task in a multisensory environment. Restor Neurol Neurosci 2023:RNN221279. PMID: 37302045; DOI: 10.3233/rnn-221279.
Abstract
BACKGROUND: Sensory substitution and augmentation systems (SSASy) seek to either replace or enhance existing sensory skills by providing a new route to access information about the world. Tests of such systems have largely been limited to untimed, unisensory tasks. OBJECTIVE: To test the use of a SSASy for rapid, ballistic motor actions in a multisensory environment. METHODS: Participants played a stripped-down version of air hockey in virtual reality with motion controls (Oculus Touch). They were trained to use a simple SSASy (a novel audio cue) for the puck's location. They were tested on their ability to strike an oncoming puck with the SSASy, degraded vision, or both. RESULTS: Participants coordinated vision and the SSASy to strike the target with their hand more consistently than with the best single cue alone, t(13) = 9.16, p < .001, Cohen's d = 2.448. CONCLUSIONS: People can adapt flexibly to using a SSASy in tasks that require tightly timed, precise, and rapid body movements. SSASys can augment and coordinate with existing sensorimotor skills rather than being limited to replacement use cases; in particular, there is potential scope for treating moderate vision loss. These findings point to the potential for augmenting human abilities, not only for static perceptual judgments, but also in rapid and demanding perceptual-motor tasks.
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University
6
Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Cross-modal correspondence enhances elevation localization in visual-to-auditory sensory substitution. Front Psychol 2023; 14:1079998. PMID: 36777233; PMCID: PMC9909421; DOI: 10.3389/fpsyg.2023.1079998.
Abstract
Introduction: Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a Virtual Acoustic Space (VAS), applying Head-Related Transfer Functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, elevation perception is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method used to convey elevation information is based on the audiovisual cross-modal correspondence between pitch and visual elevation. The main drawback of this second method is the limited ability to perceive elevation through HRTFs when the sounds have a narrow spectral band. Method: In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device in which elevation is conveyed either by a spatialization-only method (Noise encoding) or by pitch-based methods with different spectral complexities (Monotonic and Harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings. Results: Participants were more accurate at localizing elevation with the pitch-based encodings than with the spatialization-only method. Only slight differences in azimuth localization performance were found between the encodings. Discussion: This study suggests the intuitiveness of a pitch-based encoding, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
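A pitch-based elevation encoding of the general kind described here can be sketched as a monotonic mapping from elevation angle to tone frequency. The frequency range, elevation range, and log-scale interpolation below are illustrative assumptions on our part, not the encoding used in the study:

```python
import numpy as np

def elevation_to_pitch(elevation_deg, f_min=200.0, f_max=4000.0,
                       elev_range=(-45.0, 45.0)):
    """Map a visual elevation angle to a pure-tone frequency.

    Higher positions map to higher pitch, interpolated on a
    log-frequency scale (pitch is roughly logarithmic in frequency).
    """
    lo, hi = elev_range
    t = (np.clip(elevation_deg, lo, hi) - lo) / (hi - lo)  # normalize to 0..1
    return f_min * (f_max / f_min) ** t

# The bottom of the visual field maps to f_min, the top to f_max
f_low = elevation_to_pitch(-45.0)   # 200 Hz
f_high = elevation_to_pitch(45.0)   # 4000 Hz
```

Because the mapping exploits a cross-modal correspondence rather than individual-specific spectral notches, it does not require individualized HRTFs.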
Affiliation(s)
- Camille Bordeau
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
- Julien Dubois
- ImViA EA 7535, Université de Bourgogne, Dijon, France
- Maxime Ambard
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
7
Zou X, Zhou Y. Spatial Cognition of the Visually Impaired: A Case Study in a Familiar Environment. Int J Environ Res Public Health 2023; 20:1753. PMID: 36767116; PMCID: PMC9914542; DOI: 10.3390/ijerph20031753.
Abstract
OBJECTIVES: This paper aims to explore the factors influencing the spatial cognition of the visually impaired in familiar environments. BACKGROUND: Massage hospitals are among the few places that provide work for the visually impaired in China. Studying the spatial cognition of the visually impaired in a massage hospital could inform the design of working environments for the visually impaired and other workplaces in the future. METHODS: First, the subjective spatial cognition of the visually impaired was evaluated through object layout tasks describing the spatial relationships among object parts. Second, physiological monitoring data, including electrodermal activity, heart rate variability, and electroencephalography, were collected while the visually impaired doctors walked along prescribed routes based on a feature analysis of the physical environment in the hospital; their physiological data for each route were then compared. The visual factors, physical environmental factors, and human-environment interactive factors that significantly impact the spatial cognition of visually impaired people are discussed. CONCLUSIONS: (1) visual acuity affects the spatial cognition of the visually impaired in familiar environments; (2) the spatial cognition of the visually impaired can be promoted by a longer staying time and a more regular sequence of the physical environment; (3) the spatial comfort of the visually impaired can be improved by increasing the amount of greenery; and (4) the visual comfort of the visually impaired can be reduced by rich interior colors and contrasting lattice floor tiles.
8
Nanegrungsunk O, Au A, Sarraf D, Sadda SR. New frontiers of retinal therapeutic intervention: a critical analysis of novel approaches. Ann Med 2022; 54:1067-1080. PMID: 35467460; PMCID: PMC9045775; DOI: 10.1080/07853890.2022.2066169.
Abstract
A recent wave of pharmacologic and technologic innovations has revolutionized our management of retinal diseases. Many of these advancements have demonstrated efficacy and can increase quality of life while potentially reducing complications and decreasing the burden of care for patients. Some advances, such as longer-acting anti-vascular endothelial growth factor agents, port delivery systems, gene therapy, and retinal prosthetics, have been approved by the US Food and Drug Administration and are available for clinical use. Countless other therapeutics are in various stages of development, promising a bright future for further improvements in the management of retinal disease. Herein, we have highlighted several important novel therapies and therapeutic approaches and examine the opportunities and limitations offered by these innovations at the new frontier. KEY MESSAGES: Numerous pharmacologic and technologic advancements have been emerging, providing higher treatment efficacy while decreasing the burden and associated side effects. Anti-vascular endothelial growth factor (anti-VEGF) agents, including longer-acting formulations, have dramatically improved visual outcomes and have become a mainstay treatment in various retinal diseases. Gene therapy and retinal prosthesis implantation in the treatment of congenital retinal dystrophy can accomplish partial restoration of vision and improved daily function in patients with blindness, an unprecedented success in the field of retina.
Affiliation(s)
- Onnisa Nanegrungsunk
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Retina Division, Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Adrian Au
- Stein Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- David Sarraf
- Stein Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Srinivas R Sadda
- Doheny Eye Institute, Pasadena, CA, USA
- Department of Ophthalmology, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
9
Liang I, Spencer B, Scheller M, Proulx MJ, Petrini K. Assessing people with visual impairments’ access to information, awareness and satisfaction with high-tech assistive technology. Br J Vis Impair 2022. DOI: 10.1177/02646196221131746.
Abstract
Assistive technology (AT) devices are designed to help people with visual impairments (PVIs) perform activities that would otherwise be difficult or impossible. Devices specifically designed to assist PVIs by attempting to restore sight or substitute it for another sense have a very low uptake rate. This study, conducted in England, aimed to investigate why this is the case by assessing accessibility to knowledge, awareness, and satisfaction with AT in general and with sensory restoration and substitution devices in particular. From a sample of 25 PVIs, ranging from 21 to 68 years old, results showed that participants knew where to find AT information; however, health care providers were not the main source of this information. Participants reported good awareness of different ATs, and of technologies they would not use, but reported poor awareness of specific sensory substitution and restoration devices. Only three participants reported using AT, each with different devices and varying levels of satisfaction. The results from this study suggest a possible breakdown in communication between health care providers and PVIs, and dissociation between reported AT awareness and reported access to AT information. Moreover, awareness of sensory restoration and substitution devices is poor, which may explain the limited use of such technology.
10
Setti W, Cuturi LF, Cocchi E, Gori M. Spatial Memory and Blindness: The Role of Visual Loss on the Exploration and Memorization of Spatialized Sounds. Front Psychol 2022; 13:784188. PMID: 35686077; PMCID: PMC9171105; DOI: 10.3389/fpsyg.2022.784188.
Abstract
Spatial memory relies on encoding, storing, and retrieval of knowledge about objects’ positions in their surrounding environment. Blind people have to rely on sensory modalities other than vision to memorize items that are spatially displaced; however, to date, very little is known about the influence of early visual deprivation on a person’s ability to remember and process sound locations. To fill this gap, we tested sighted and congenitally blind adults and adolescents in an audio-spatial memory task inspired by the classical card game “Memory.” In this research, subjects (blind, n = 12; sighted, n = 12) had to find pairs among sounds (i.e., animal calls) displaced on an audio-tactile device composed of loudspeakers covered by tactile sensors. To accomplish this task, participants had to remember the spatialized sounds’ positions and develop a proper mental spatial representation of their locations. The test was divided into two experimental conditions of increasing difficulty depending on the number of sounds to be remembered (8 vs. 24). Results showed that sighted participants outperformed blind participants in both conditions. Findings were discussed considering the crucial role of visual experience in properly manipulating auditory spatial representations, particularly in relation to the ability to explore complex acoustic configurations.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Luigi F. Cuturi
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
11
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. PMID: 35217676; PMCID: PMC8881456; DOI: 10.1038/s41598-022-06855-8.
Abstract
Understanding speech in background noise is challenging. Wearing face masks, as imposed during the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we showed a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when the participants were asked to repeat sentences from hearing alone and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70-80%) showed better performance (by a mean of 4-6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable more proper use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic situations.
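The vibrations in this setup correspond to low frequencies extracted from the speech input. One simple way to sketch such an extraction (the 300 Hz cutoff and the FFT-based brick-wall filter are our illustrative assumptions, not the authors' signal chain, which a real-time device would implement with a causal filter):

```python
import numpy as np

def speech_to_vibration(signal, fs, cutoff_hz=300.0):
    """Keep only the low-frequency component of a speech waveform,
    suitable for vibrotactile delivery (fingertips are most sensitive
    to frequencies up to a few hundred Hz)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0            # zero everything above cutoff
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 100 Hz component passes, a 2 kHz component is removed
fs = 16000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 100 * t)
high = np.sin(2 * np.pi * 2000 * t)
out = speech_to_vibration(low + high, fs)
```

The retained low-frequency band carries the speech envelope and voicing cues that the matching vibrations convey to the fingertips.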
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
12
Real S, Araujo A. VES: A Mixed-Reality Development Platform of Navigation Systems for Blind and Visually Impaired. Sensors 2021; 21:6275. PMID: 34577482; PMCID: PMC8469526; DOI: 10.3390/s21186275.
Abstract
Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted for blind and visually impaired people. It allows its users to be immersed in “walkable”, purely virtual or mixed environments with simulated sensors, and to validate navigation system designs prior to prototype development. The haptic, acoustic, and proprioceptive feedback supports state-of-the-art sensory substitution devices (SSDs). In this regard, three SSDs were integrated into VES as examples, including the well-known “The vOICe”. Additionally, the data throughput, latency, and packet loss of the wireless communication can be controlled to observe their impact on the provided spatial knowledge and the resulting mobility and orientation performance. Finally, the system was validated by testing a combination of two previous visual-acoustic and visual-haptic sensory substitution schemas with 23 normal-sighted subjects. The recorded data include the output of a “gaze-tracking” utility adapted for SSDs.
13
Ahmad H, Tonelli A, Campus C, Capris E, Facchini V, Sandini G, Gori M. An audio-visual motor training improves audio spatial localization skills in individuals with scotomas due to retinal degenerative diseases. Acta Psychol (Amst) 2021; 219:103384. PMID: 34365274; DOI: 10.1016/j.actpsy.2021.103384.
Abstract
Several studies have shown that impairments in one sensory modality can induce perceptual deficits in tasks involving the remaining senses. For example, people with retinal degenerative diseases such as Macular Degeneration (MD) and a central scotoma show auditory localization abilities biased towards the scotoma area of the visual field. This result indicates a spatial reorganization of cross-modal auditory processing in people with scotoma when visual information is impaired. Recent work has shown that multisensory training can improve spatial perception. In line with this idea, here we hypothesize that audio-visual and motor training could improve the spatial skills of people with retinal degenerative diseases. In the present study, we tested this hypothesis in two groups of scotoma patients, who performed an auditory and a visual localization task before and after either training or rest. The training group was tested before and after multisensory training, while the control group performed the two tasks twice, separated by a 10-minute break. The training was done with a portable device positioned on the finger, providing spatially and temporally congruent audio and visual feedback during arm movement. Our findings show improved audio and visual localization for the training group but not for the control group. These results suggest that integrating multiple spatial sensory cues can improve the spatial perception of scotoma patients. This finding motivates further research and applications for people with central scotoma, for whom rehabilitation has classically focused on training the visual modality only.
Affiliation(s)
- Hafsah Ahmad: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy; Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy; University of Genova, Genova, Italy; Sino-Pakistan Centre for Artificial Intelligence (SPCAI), Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur, Pakistan
- Alessia Tonelli: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Giulio Sandini: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy
- Monica Gori: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
14
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. Int J Environ Res Public Health 2021; 18:6216. PMID: 34201269; PMCID: PMC8228544; DOI: 10.3390/ijerph18126216.
Abstract
Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind people recognize objects and perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained at the stage of laboratory-scale research or pilot demonstrations. This high data-conversion latency makes it difficult to perceive fast-moving objects or rapid environmental changes. Reducing this latency requires a prior analysis of auditory sensitivity; however, existing auditory sensitivity analyses are subjective because they were conducted through human behavioral analysis. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity, related to the perception of visual information, that reduces transmission latency in visual-auditory sensory substitution. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in behavioral experiments, with three participant groups: sighted users (SU), congenitally blind (CB), and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal used for sensory substitution could be reduced by 50%, indicating that the performance of the conventional vOICe method could be improved by up to a factor of two. We confirmed through the behavioral experiments that our model-based results are consistent with human assessment. Analyzing auditory sensitivity with deep learning models thus has the potential to improve the efficiency of sensory substitution.
15
Kritly L, Sluyts Y, Pelegrín-García D, Glorieux C, Rychtáriková M. Discrimination of 2D wall textures by passive echolocation for different reflected-to-direct level difference configurations. PLoS One 2021; 16:e0251397. PMID: 34043655; PMCID: PMC8158938; DOI: 10.1371/journal.pone.0251397.
Abstract
In this work, we study people's ability to discriminate between different 2D wall textures by passively listening to a pre-recorded tongue click in an auralized echolocation scenario. In addition, we investigated the impact of artificially enhancing the early reflection magnitude by 6 dB and of removing the direct component while equalizing loudness. Listening test results for different textures, ranging from a flat wall to a staircase, were assessed using a two-alternative forced-choice (2AFC) method, in which 14 sighted, untrained participants indicated which 2 of 3 presented stimuli they perceived as identical. Participants' average ability to discriminate between textures was significantly higher for walls at a 5 m distance, where the reflected and direct sound do not overlap, than for the same walls at a 0.8 m distance. Enhancing the reflections as well as removing the direct sound were both found to be beneficial for differentiating textures. This finding highlights the role of forward masking in the discrimination process. Overall texture discriminability was larger for walls reflecting with a higher spectral coloration.
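Performance in such a three-stimulus forced-choice task is typically compared against a 1/3 guessing rate. As a rough illustration (not the authors' actual analysis), an exact one-sided binomial test against chance can be written with the standard library alone; the trial counts below are hypothetical:

```python
from math import comb

def binom_p_at_least(k, n, p):
    """Exact one-sided binomial probability of observing >= k successes
    in n independent trials with per-trial success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical data: a listener picks the matching pair correctly in
# 20 of 36 triads; chance for a three-stimulus match task is 1/3.
p_value = binom_p_at_least(20, 36, 1 / 3)
above_chance = p_value < 0.05  # discrimination significantly above chance
```

The same helper can be reused per condition (5 m vs. 0.8 m, enhanced vs. plain reflections) to check which configurations support above-chance texture discrimination.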
Affiliation(s)
- Léopold Kritly: Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium; EPF–Graduate School of Engineering, Sceaux, France
- Yannick Sluyts: Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- David Pelegrín-García: ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Christ Glorieux: ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Monika Rychtáriková: Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium; Faculty of Civil Engineering, STU Bratislava, Bratislava, Slovakia
16
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. PMID: 33905446; PMCID: PMC8078811; DOI: 10.1371/journal.pone.0250281.
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups differing in the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous, audio-visual perceptual interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD-encoded objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend towards an advantage of multisensory over unisensory feedback strategies, while no such trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which is easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
Affiliation(s)
- Galit Buchs: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Haimler: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
17
Paré S, Bleau M, Djerourou I, Malotaux V, Kupers R, Ptito M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS One 2021; 16:e0247448. PMID: 33635892; PMCID: PMC7909643; DOI: 10.1371/journal.pone.0247448.
Abstract
Blind individuals often report difficulties navigating and detecting objects placed outside their peri-personal space. Although classical sensory substitution devices could be helpful in this respect, they often produce a complex signal that requires intensive training to interpret. New devices providing a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors either to detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance modes, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind, and 24 blindfolded sighted participants on their ability to detect obstacles and to navigate an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early and late blind participants navigated the obstacle course faster than their sighted counterparts. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.
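Horizontally spatialized sound of the kind described above is commonly approximated with constant-power stereo panning, mapping an obstacle's azimuth to left/right channel gains. The sketch below is a generic illustration of that principle, not the device's actual audio pipeline:

```python
import math

def pan_gains(azimuth_deg, max_azimuth=90.0):
    """Constant-power panning: map an azimuth in [-90, 90] degrees
    (negative = left of the listener) to (left_gain, right_gain)
    such that left_gain**2 + right_gain**2 == 1."""
    azimuth_deg = max(-max_azimuth, min(max_azimuth, azimuth_deg))
    # Map azimuth linearly onto a pan angle in [0, pi/2]
    theta = (azimuth_deg + max_azimuth) / (2 * max_azimuth) * (math.pi / 2)
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0.0)  # obstacle straight ahead: equal gains
```

Constant-power (rather than linear) panning keeps perceived loudness stable as a virtual sound source sweeps across the horizontal plane, which is why it is a standard choice for auditory guidance cues.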
Affiliation(s)
- Samuel Paré: École d’Optométrie, Université de Montréal, Québec, Canada
- Maxime Bleau: École d’Optométrie, Université de Montréal, Québec, Canada
- Vincent Malotaux: Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Ron Kupers: École d’Optométrie, Université de Montréal, Québec, Canada; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium; Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito: École d’Optométrie, Université de Montréal, Québec, Canada; Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
18
Ptito M, Bleau M, Djerourou I, Paré S, Schneider FC, Chebat DR. Brain-Machine Interfaces to Assist the Blind. Front Hum Neurosci 2021; 15:638887. PMID: 33633557; PMCID: PMC7901898; DOI: 10.3389/fnhum.2021.638887.
Abstract
The loss or absence of vision is probably one of the most incapacitating events that can befall a human being. The importance of vision for humans is also reflected in brain anatomy, as approximately one third of the human brain is devoted to vision. It is therefore unsurprising that, throughout history, many attempts have been made to develop devices aimed at substituting for a missing visual capacity. In this review, we present two concepts that have been prevalent over the last two decades. The first is sensory substitution, the use of another sensory modality to perform a task that is normally subserved primarily by the lost sense. The second is cross-modal plasticity, which occurs when the loss of input in one sensory modality leads to a reorganization of the brain's representation of other sensory modalities. Both phenomena are training-dependent. We also briefly describe the history of blindness from ancient times to modernity, and then address the means that have been used to help blind individuals, with an emphasis on modern technologies, both invasive (various types of surgical implants) and non-invasive devices. With the advent of brain imaging, it has become possible to peer into the neural substrates of sensory substitution and to highlight the magnitude of the plastic processes that lead to a rewired brain. Finally, we address the important question of the value and practicality of the available technologies, and future directions.
Affiliation(s)
- Maurice Ptito: École d’Optométrie, Université de Montréal, Montréal, QC, Canada; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maxime Bleau: École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Ismaël Djerourou: École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Samuel Paré: École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Fabien C. Schneider: TAPE EA7423, University of Lyon-Saint Etienne, Saint Etienne, France; Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Daniel-Robert Chebat: Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israël; Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israël
19
Cognitive and Affective Assessment of Navigation and Mobility Tasks for the Visually Impaired via Electroencephalography and Behavioral Signals. Sensors 2020; 20:5821. PMID: 33076251; PMCID: PMC7602506; DOI: 10.3390/s20205821.
Abstract
This paper presents an assessment of cognitive load (as an effective real-time index of task difficulty) and of the level of brain activation during an experiment in which eight visually impaired subjects performed two types of tasks while using the white cane and the Sound of Vision assistive device with three types of sensory input: audio, haptic, and multimodal (audio and haptic simultaneously). The first task was to identify object properties and the second to navigate and avoid obstacles, in both virtual-environment and real-world settings. The results showed that the haptic stimuli were less intuitive than the audio ones and that navigation with the Sound of Vision device increased cognitive load and working memory usage. Visual cortex asymmetry was lower under multimodal stimulation than under separate (audio or haptic) stimulation. There was no correlation between visual cortical activity and the number of collisions during navigation, regardless of the type of navigation or sensory input. The visual cortex was activated when using the device, but only in the late-blind users. For all subjects, navigation with the Sound of Vision device induced a low negative valence, in contrast with white cane navigation.
20
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. PMID: 32770416; PMCID: PMC7415050; DOI: 10.1186/s41235-020-00240-7.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques typically focus on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Yet despite their evident success in scientific research and in furthering theory development in cognition, sensory substitution techniques have not gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that can be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill: Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK