1. Shafique S, Setti W, Campus C, Zanchi S, Del Bue A, Gori M. How path integration abilities of blind people change in different exploration conditions. Front Neurosci 2024; 18:1375225. [PMID: 38826777; PMCID: PMC11140012; DOI: 10.3389/fnins.2024.1375225]
Abstract
For animals to locate resources and stay safe, navigation is an essential cognitive skill. Blind people use different navigational strategies to encode the environment. Path integration, the ongoing updating of position and orientation during self-motion, significantly influences spatial navigation. This study examines two questions: (i) how guided and non-guided strategies affect the way blind individuals encode and mentally represent a trajectory, and (ii) the sensory preferences for potential navigational aids, assessed through a questionnaire. The study first highlights the significant role that the absence of vision plays in the understanding of body-centered and proprioceptive cues. It also underscores the urgent need to develop navigation-assistive technologies customized to the specific needs of users.
Affiliation(s)
- Shehzaib Shafique: Unit of Visually Impaired People (U-VIP), Italian Institute of Technology, Genova, Italy
- Walter Setti: Unit of Visually Impaired People (U-VIP), Italian Institute of Technology, Genova, Italy
- Claudio Campus: Unit of Visually Impaired People (U-VIP), Italian Institute of Technology, Genova, Italy
- Silvia Zanchi: Unit of Visually Impaired People (U-VIP), Italian Institute of Technology, Genova, Italy
- Alessio Del Bue: Pattern Analysis and Computer Vision (PAVIS), Italian Institute of Technology, Genova, Italy
- Monica Gori: Unit of Visually Impaired People (U-VIP), Italian Institute of Technology, Genova, Italy

2. Hao Y, Yang F, Huang H, Yuan S, Rangan S, Rizzo JR, Wang Y, Fang Y. A Multi-Modal Foundation Model to Assist People with Blindness and Low Vision in Environmental Interaction. J Imaging 2024; 10:103. [PMID: 38786557; PMCID: PMC11122237; DOI: 10.3390/jimaging10050103]
Abstract
People with blindness and low vision (pBLV) encounter substantial challenges with comprehensive scene recognition and precise object identification in unfamiliar environments. Additionally, due to their vision loss, pBLV have difficulty independently accessing and identifying potential tripping hazards. Previous assistive technologies for the visually impaired often struggle in real-world scenarios due to the need for constant training and a lack of robustness, which limits their effectiveness, especially in dynamic and unfamiliar environments where accurate and efficient perception is crucial. We therefore frame our research question as: how can we assist pBLV in recognizing scenes, identifying objects, and detecting potential tripping hazards in unfamiliar environments, where existing assistive technologies often falter due to their lack of robustness? We hypothesize that by leveraging large pretrained foundation models and prompt engineering, we can create a system that effectively addresses these challenges. Motivated by the prevalence of large pretrained foundation models, particularly in assistive robotics, owing to the accurate perception and robust contextual understanding in real-world scenarios that extensive pretraining confers, we present an approach that leverages foundation models to enhance visual perception for pBLV, offering detailed and comprehensive descriptions of the surrounding environment and providing warnings about potential risks. Specifically, our method begins by using a large image-tagging model, the Recognize Anything Model (RAM), to identify all common objects present in the captured images. The recognition results and the user query are then integrated into a prompt tailored specifically for pBLV using prompt engineering. By combining the prompt and the input image, a vision-language foundation model, InstructBLIP, generates detailed and comprehensive descriptions of the environment and identifies potential risks by analyzing environmental objects and scenic landmarks relevant to the prompt. We evaluate our approach through experiments conducted on both indoor and outdoor datasets. Our results demonstrate that our method can recognize objects accurately and provide insightful descriptions and analysis of the environment for pBLV.
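
The method is a two-stage pipeline: an image tagger produces object labels, the labels and user query are folded into a pBLV-tailored prompt, and a vision-language model generates the description. Below is a minimal sketch of that flow, assuming the publicly available InstructBLIP checkpoint on Hugging Face; the prompt wording, the build_pblv_prompt helper, and the stand-in tag list are illustrative assumptions, not the authors' code.

```python
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

def build_pblv_prompt(tags, user_query):
    """Fold recognized object tags and the user's query into one instruction (illustrative wording)."""
    return (
        "You assist a blind or low-vision pedestrian. "
        f"Objects detected in the image: {', '.join(tags)}. {user_query} "
        "Describe the scene and warn about any tripping hazards."
    )

# Stage 2 of the pipeline; stage 1 (RAM tagging) is stubbed with a fixed tag list here.
processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")

image = Image.open("street.jpg")                      # captured scene
tags = ["curb", "bicycle", "trash can"]               # stand-in for RAM output
inputs = processor(images=image,
                   text=build_pblv_prompt(tags, "What is in front of me?"),
                   return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=120)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```
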
Affiliation(s)
- Yu Hao: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Fan Yang: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Hao Huang: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Shuaihang Yuan: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Sundeep Rangan: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- John-Ross Rizzo: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA; NYU Langone Health, New York University, New York, NY 10016, USA
- Yao Wang: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
- Yi Fang: Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA; Electrical Engineering and Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates

3. Steffens H, Schutte M, Ewert SD. Auditory orientation and distance estimation of sighted humans using virtual echolocation with artificial and self-generated sounds. JASA Express Lett 2022; 2:124403. [PMID: 36586958; DOI: 10.1121/10.0016403]
Abstract
Active echolocation by sighted humans was investigated using both predefined synthetic sounds and self-emitted sounds of the kind habitually used by blind individuals. Using virtual acoustics, distance estimation and directional localization of a wall were assessed in different rooms. A virtual sound source was attached to either the head or the hand, with realistic or artificially increased source directivity. A control condition was tested with a virtual sound source located at the wall. At the individual level, untrained echolocation performance comparable to performance in the control condition was achieved. On average, echolocation performance was considerably lower than in the control condition; however, it benefited from increased directivity.
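
The acoustic cue underlying the distance task is the delay between the emitted sound and its reflection from the wall. A minimal sketch of that relationship, assuming the recording is windowed so that only the reflection remains; the function and parameters are illustrative, not the study's virtual-acoustics pipeline:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def wall_distance_from_echo(emitted, recorded, fs):
    """Estimate wall distance from the emission-to-echo delay.
    Assumes `recorded` has been windowed to contain only the reflection."""
    corr = np.correlate(recorded, emitted, mode="full")
    lag = int(corr.argmax()) - (len(emitted) - 1)   # delay in samples
    return SPEED_OF_SOUND * (lag / fs) / 2.0        # halve: sound travels out and back

# Synthetic check: a click echoed by a wall 1.7 m away (round trip ~9.9 ms at 48 kHz).
fs = 48000
click = np.hanning(64)
echo = np.zeros(4800)
delay = int(round(2 * 1.7 / SPEED_OF_SOUND * fs))
echo[delay:delay + 64] = 0.3 * click
print(wall_distance_from_echo(click, echo, fs))     # ~1.7
```
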
Affiliation(s)
- Henning Steffens: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Michael Schutte: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Stephan D Ewert: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany

4. Steffens H, Schutte M, Ewert SD. Acoustically driven orientation and navigation in enclosed spaces. J Acoust Soc Am 2022; 152:1767. [PMID: 36182293; DOI: 10.1121/10.0013702]
Abstract
Awareness of space, and subsequent orientation and navigation in rooms, is dominated by the visual system. However, humans are able to extract auditory information about their surroundings from early reflections and reverberation in enclosed spaces. To better understand orientation and navigation based on acoustic cues only, three virtual corridor layouts (I-, U-, and Z-shaped) were presented using real-time virtual acoustics in a three-dimensional 86-channel loudspeaker array. Participants were seated on a rotating chair in the center of the loudspeaker array and navigated using real rotation and virtual locomotion by "teleporting" in steps on a grid in the invisible environment. A head-mounted display showed control elements and the environment in a visual reference condition. Acoustical information about the environment originated from a virtual sound source at the collision point of a virtual ray with the boundaries. In different control modes, the ray was cast either in the view or hand direction, or in a rotating, "radar"-like fashion in 90° steps to all sides. Completion time, the number of collisions, and movement patterns were evaluated. Navigation and orientation were possible based on the direct sound, with little effect of room acoustics and control mode. The underlying acoustic cues were analyzed using an auditory model.
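
The core mechanism, casting a ray from the listener and placing a virtual sound source at its collision point with the room boundary, reduces to a 2D ray-segment intersection. A minimal sketch under assumed geometry; the corridor layout and helper below are illustrative, not the study's implementation:

```python
import math

# Illustrative 2 m x 10 m corridor, walls as ((x1, y1), (x2, y2)) segments.
WALLS = [((0, 0), (0, 10)), ((2, 0), (2, 10)), ((0, 0), (2, 0)), ((0, 10), (2, 10))]

def cast_ray(px, py, angle):
    """Return (hit point, distance) of the nearest wall along a ray from the
    listener's position; the virtual sound source is then placed at the hit point."""
    dx, dy = math.cos(angle), math.sin(angle)
    hit, hit_t = None, math.inf
    for (x1, y1), (x2, y2) in WALLS:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:                             # ray parallel to this wall
            continue
        t = ((x1 - px) * ey - (y1 - py) * ex) / denom      # distance along the ray
        u = ((x1 - px) * dy - (y1 - py) * dx) / denom      # fraction along the wall
        if t > 1e-9 and 0.0 <= u <= 1.0 and t < hit_t:
            hit, hit_t = (px + t * dx, py + t * dy), t
    return hit, hit_t

print(cast_ray(1.0, 5.0, 0.0))   # listener mid-corridor, facing +x: hits the x = 2 wall
```
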
Affiliation(s)
- Henning Steffens: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Michael Schutte: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
- Stephan D Ewert: Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany

5. Virtual reality for the observation of oncology models (VROOM): immersive analytics for oncology patient cohorts. Sci Rep 2022; 12:11337. [PMID: 35790803; PMCID: PMC9256599; DOI: 10.1038/s41598-022-15548-1]
Abstract
The significant advancement of inexpensive and portable virtual reality (VR) and augmented reality devices has re-energised research in the immersive analytics field. An immersive environment differs from a traditional 2D display used to analyse 3D data in that it provides a unified environment supporting immersion in a 3D scene, gestural interaction, haptic feedback and spatial audio. Genomic data analysis has been used in oncology to better understand the relationship between genetic profile, cancer type, and treatment option. This paper proposes VROOM (virtual reality for the observation of oncology models), a novel immersive analytics tool for cancer patient cohorts in a virtual reality environment. We utilise immersive technologies to analyse the gene expression and clinical data of a cohort of cancer patients. Various machine learning algorithms and visualisation methods have also been deployed in VR to enhance the data interrogation process. This is supported by established 2D visual analytics and graphical methods from bioinformatics, such as scatter plots, descriptive statistics, linear regression, box plots and heatmaps. Our approach allows clinicians to interrogate information that is familiar and meaningful to them while providing immersive analytics capabilities for making new discoveries toward personalised medicine.

6. Cooper T, Lai H, Gorlewicz J. Do You Hear What I Hear: The Balancing Act of Designing an Electronic Hockey Puck for Playing Hockey Non-Visually. ACM Trans Access Comput 2022. [DOI: 10.1145/3507660]
Abstract
Blind hockey is a sport that is gaining popularity in the United States after having an international presence for years. In blind hockey, a modified puck is used that emits sound via ball bearings that rattle inside it when it is moving. The modified puck's lifetime is short due to its lack of durability, and it provides no feedback once it stops moving. This article presents an evaluation of multiple prototypes investigating appropriate acoustic profiles for an electronic puck able to overcome some of these challenges. Our approach leverages alternative 3D-printable materials and four distinct sound profiles: the league-standard puck (LSP) used in blind hockey, a 3.5 kHz piezo buzzer, an 800 Hz sine tone, and simulated white noise. We present the design and prototyping of the pucks, along with benchtop and user validation tests comparing them to the LSP with a focus on acoustic performance. Participants rated the white noise sound profile highest in pleasantness and loudness and the LSP highest in localization. The white noise profile was associated with lower angle and distance errors. Of the prototypes produced, the white noise puck appeared most promising for playing hockey non-visually. We close with recommendations for future electronic hockey puck designs to support blind hockey moving forward.
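
Three of the four sound profiles can be reproduced for quick benchtop listening. A minimal sketch, assuming a 44.1 kHz sample rate and approximating the piezo buzzer as a plain 3.5 kHz tone; durations and amplitudes are illustrative:

```python
import numpy as np

FS = 44100  # sample rate in Hz

def sine_tone(freq_hz, seconds):
    t = np.arange(int(FS * seconds)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def white_noise(seconds, seed=0):
    return np.random.default_rng(seed).uniform(-1.0, 1.0, int(FS * seconds))

buzzer_like = sine_tone(3500.0, 0.5)   # approximates the 3.5 kHz piezo buzzer
tone_800 = sine_tone(800.0, 0.5)       # the 800 Hz sine profile
noise = white_noise(0.5)               # the simulated white-noise profile
```
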

7. Kilian J, Neugebauer A, Scherffig L, Wahl S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors 2022; 22:1859. [PMID: 35271009; PMCID: PMC8914703; DOI: 10.3390/s22051859]
Abstract
This paper documents the design, implementation and evaluation of the Unfolding Space Glove, an open source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand, and thus enables blind people to haptically explore the depth of their surrounding space, assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback, all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane, allowing performance comparisons to be drawn between them. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate that spatial information can, in general, be processed through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
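
The glove's central mapping, from a depth image to vibration intensities on the back of the hand, amounts to coarse spatial downsampling with nearer obstacles vibrating harder. A minimal sketch, assuming a 3x3 motor grid and a 4 m range (both illustrative assumptions, not the prototype's actual firmware):

```python
import numpy as np

def depth_to_motor_intensities(depth_m, grid=(3, 3), max_range_m=4.0):
    """Downsample a depth image (metres; NaN = no reading) to a coarse grid of
    vibration intensities in [0, 1]; nearer obstacles vibrate harder."""
    h, w = depth_m.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = depth_m[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
            nearest = np.nanmin(cell) if not np.isnan(cell).all() else np.inf
            if nearest < max_range_m:
                out[i, j] = 1.0 - nearest / max_range_m
    return out

# e.g. a wall 1 m away in the upper-left of a toy 6x6 depth frame:
frame = np.full((6, 6), 5.0)
frame[:2, :2] = 1.0
print(depth_to_motor_intensities(frame))   # strong vibration only at cell (0, 0)
```
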
Affiliation(s)
- Jakob Kilian: Köln International School of Design, TH Köln, 50678 Köln, Germany; ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Alexander Neugebauer: ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Lasse Scherffig: Köln International School of Design, TH Köln, 50678 Köln, Germany
- Siegfried Wahl: ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany (corresponding author; Tel.: +49-7071-29-84512)

8. Lee SH, Kim M, Kim H, Park CY. Visual fatigue induced by watching virtual reality device and the effect of anisometropia. Ergonomics 2021; 64:1522-1531. [PMID: 34270388; DOI: 10.1080/00140139.2021.1957158]
Abstract
The effect of small anisometropia on visual fatigue when using virtual reality (VR) devices was investigated. Participants (n = 34) made three visits. At the first visit, VR exposure (10 min) was conducted with full correction of the refractive error of both eyes. Experimental anisometropia was induced by adding a +1.0 dioptre spherical lens on the dominant eye at the second visit or on the non-dominant eye at the third visit. At each visit, participants played a predetermined video game on a head-mounted display for 10 min. Visual fatigue was assessed before and after play using the Virtual Reality Symptom Questionnaire (VRSQ) and the high-frequency component of accommodative microfluctuation. Results showed that watching VR induced a significant increase in VRSQ score, a significant decrease in maximum accommodation power, and an objective increase in visual fatigue. Inducing experimental anisometropia on either the dominant or the non-dominant eye did not aggravate visual fatigue. Practitioner summary: Mild differences in refractive error (up to 1.0 dioptre) between the eyes do not significantly increase ocular fatigue from viewing a virtual reality device for 10 min. The impact of small anisometropia may be limited when developing a virtual reality device. Abbreviations: VR: virtual reality; VRSQ: virtual reality symptom questionnaire; HMD: head-mounted display; HFC: high-frequency component.
Affiliation(s)
- Sang Hyeok Lee: Department of Ophthalmology, Dongguk University Ilsan Hospital, Goyang, South Korea
- Martha Kim: Department of Ophthalmology, Dongguk University Ilsan Hospital, Goyang, South Korea
- Hyosun Kim: Samsung Display, Display R&D Center, Suwon, South Korea
- Choul Yong Park: Department of Ophthalmology, Dongguk University Ilsan Hospital, Goyang, South Korea

9. Real S, Araujo A. VES: A Mixed-Reality Development Platform of Navigation Systems for Blind and Visually Impaired. Sensors 2021; 21:6275. [PMID: 34577482; PMCID: PMC8469526; DOI: 10.3390/s21186275]
Abstract
Herein, we describe the Virtually Enhanced Senses (VES) system, a novel and highly configurable wireless sensor-actuator network conceived as a development and test-bench platform for navigation systems adapted to blind and visually impaired people. It immerses its users in "walkable", purely virtual or mixed environments with simulated sensors, allowing navigation system designs to be validated prior to prototype development. The haptic, acoustic, and proprioceptive feedback supports state-of-the-art sensory substitution devices (SSDs). Three SSDs were integrated in VES as examples, including the well-known "The vOICe". Additionally, the data throughput, latency and packet loss of the wireless communication can be controlled to observe their impact on the spatial knowledge provided and the resulting mobility and orientation performance. Finally, the system was validated by testing a combination of two previous visual-acoustic and visual-haptic sensory substitution schemes with 23 normally sighted subjects. The recorded data include the output of a "gaze-tracking" utility adapted for SSDs.

10. Sounds That People with Visual Impairment Want to Experience. Int J Environ Res Public Health 2021; 18:2630. [PMID: 33807924; PMCID: PMC7967530; DOI: 10.3390/ijerph18052630]
Abstract
This article presents the expectations of visually impaired people regarding the content of a planned set of sound exercises, intended mainly to familiarise them with the sounds associated with specific life situations. Consultations were carried out with 20 people with visual impairment, which allowed their needs regarding the sounds they wish to become acquainted with to be identified. The 35 initially proposed sounds were assessed on a five-point scale. These included sounds that would be heard in a number of situations in which a person with a visual impairment could potentially find themselves, both at home and, for example, in the street or at an office. During the consultations, the participants usually rated the proposed sounds as highly relevant or relevant. In most cases, the assessment was similar regardless of whether the person had had a visual impairment since birth or had developed it relatively recently. More than 100 additional sounds were also proposed for inclusion in the set. The results of the consultation demonstrate how important the information contained in sound is for people with visual impairment.

11. Kvansakul J, Hamilton L, Ayton LN, McCarthy C, Petoe MA. Sensory augmentation to aid training with retinal prostheses. J Neural Eng 2020; 17:045001. [PMID: 32554868; DOI: 10.1088/1741-2552/ab9e1d]
Abstract
Objective: Retinal prosthesis recipients require rehabilitative training to learn the non-intuitive nature of prosthetic 'phosphene vision'. This study investigated whether the addition of auditory cues, using The vOICe sensory substitution device (SSD), could improve functional performance with simulated phosphene vision.
Approach: Forty normally sighted subjects completed two visual tasks under three conditions. The phosphene condition converted the image to simulated phosphenes displayed on a virtual reality headset. The SSD condition provided auditory information via stereo headphones, translating the image into sound: horizontal information was encoded as stereo timing differences between the ears, vertical information as pitch, and pixel intensity as audio intensity. The third condition combined phosphenes and SSD. Tasks comprised light localisation from the Basic Assessment of Light and Motion (BaLM) and the Tumbling-E from the Freiburg Acuity and Contrast Test (FrACT). To examine learning effects, twenty of the forty subjects received SSD training prior to assessment.
Main results: Combining phosphenes with auditory SSD provided better light localisation accuracy than either phosphenes or SSD alone, suggesting a compound benefit of integrating modalities. Although response times for SSD-only were significantly longer than for all other conditions, response times in the combined condition were as fast as phosphene-only, highlighting that audio-visual integration provided both response time and accuracy benefits. Prior SSD training benefited localisation accuracy and speed in the SSD-only (as expected) and combined conditions compared to untrained SSD-only. Integration of the two modalities did not improve spatial resolution task performance, with resolution limited to that of the higher-resolution modality (SSD).
Significance: Combining the phosphene (visual) and SSD (auditory) modalities was effective even without SSD training and led to improvements in light localisation accuracy and response times. Spatial resolution performance was dominated by the auditory SSD. The results suggest there may be a benefit to including auditory cues when training vision prosthesis recipients.
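
The SSD mapping described above (horizontal position as timing within a left-to-right sweep, vertical position as pitch, pixel intensity as loudness) follows The vOICe's general scheme. A simplified sketch of such a column-sweep encoder; the sweep time and frequency range are illustrative choices, not the device's actual settings:

```python
import numpy as np

FS = 44100  # sample rate in Hz

def sweep_encode(img, sweep_s=1.0, f_lo=500.0, f_hi=5000.0):
    """Encode a grayscale image (rows x cols, values in [0, 1]) as a
    left-to-right audio sweep: image column -> time, row -> pitch
    (top = high), brightness -> loudness."""
    rows, cols = img.shape
    col_len = int(FS * sweep_s / cols)          # samples per image column
    t = np.arange(col_len) / FS
    freqs = np.geomspace(f_hi, f_lo, rows)      # log-spaced pitches, descending
    audio = np.concatenate([
        (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(cols)
    ])
    return audio / max(np.abs(audio).max(), 1e-9)   # normalize to [-1, 1]

# e.g. a single bright pixel in the top-left corner yields one high-pitched
# blip at the start of the sweep:
img = np.zeros((16, 16)); img[0, 0] = 1.0
print(sweep_encode(img).shape)   # (44096,) = 16 columns x 2756 samples each
```
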
Affiliation(s)
- Jessica Kvansakul: Bionics Institute, East Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Parkville, VIC, Australia

12. Research on Optimization Method of VR Task Scenario Resources Driven by User Cognitive Needs. Information 2020. [DOI: 10.3390/info11020064]
Abstract
This research aimed to improve the efficiency of users' access to information and the interactive experience of task selection in a virtual reality (VR) system, reduce users' cognitive load, and improve the efficiency with which designers build VR systems. On the basis of a mapping between user behavioural cognition and system resources, a task scenario resource optimization method for VR systems based on quality function deployment and a convolutional neural network (QFD-CNN) is proposed. First, guided by user behavioural cognition, the characteristics of multi-channel information resources in a VR system were analysed, and a correlation matrix of VR system scenario resource characteristics was constructed based on design criteria for human-computer interaction, cognition, and low cognitive load. Second, the analytic hierarchy process (AHP) and QFD, combined with an evaluation matrix, were used to produce a priority ranking of VR system resource characteristics. A cognitive-load experiment on VR task scenarios was then carried out with users, and CNN input and output data were collected from the experiment in order to build a CNN that predicts user cognitive load and satisfaction during human-computer interaction in the VR system. Finally, the resource feature optimization method was applied to the task information interface of a smart-city VR system. The results show that the consistency ratio (CR) of the cognitive-load-based AHP-QFD model is less than 0.1 and the MSE of the CNN prediction model is 0.004247, demonstrating the model's effectiveness. For the same design task, a scheme produced by the traditional design process was compared with one optimized by the proposed method; users experienced lower cognitive load and a better task experience with the latter, so the optimization method studied here can serve as a reference for building virtual reality systems.
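
The reported check that the AHP model's CR stays below 0.1 is Saaty's standard consistency test: the principal eigenvalue of the pairwise judgement matrix yields a consistency index CI = (lambda_max - n) / (n - 1), which is divided by a tabulated random index RI. A minimal sketch (the example judgement matrix is illustrative):

```python
import numpy as np

# Saaty's random index for matrix sizes 1..9
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_consistency_ratio(pairwise):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); CR < 0.1 means the
    pairwise judgements are acceptably consistent."""
    n = pairwise.shape[0]
    lam_max = np.linalg.eigvals(pairwise).real.max()   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# Illustrative 3x3 judgement matrix over three VR resource features:
m = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]], dtype=float)
print(ahp_consistency_ratio(m))   # ~0.003, well below the 0.1 threshold
```
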

13. Navigation and perception of spatial layout in virtual echo-acoustic space. Cognition 2020; 197:104185. [PMID: 31951856; PMCID: PMC7033557; DOI: 10.1016/j.cognition.2020.104185]
Abstract
Successful navigation involves finding the way, planning routes, and avoiding collisions. Whilst previous research has shown that people can navigate using non-visual cues, it is not clear to what degree learned non-visual navigational abilities generalise to 'new' environments. Furthermore, the ability to successfully avoid collisions has not been investigated separately from the ability to perceive spatial layout or to orient oneself in space. Here, we address these important questions using a virtual echolocation paradigm in sighted people. Fourteen sighted blindfolded participants completed 20 virtual navigation training sessions over the course of 10 weeks. In separate sessions, before and after training, we also tested their ability to perceive the spatial layout of virtual echo-acoustic space. Furthermore, three blind echolocation experts completed the tasks without training, thus validating our virtual echo-acoustic paradigm. We found that over the course of 10 weeks sighted people became better at navigating, i.e. they reduced collisions and time needed to complete the route, and increased success rates. This also generalised to 'new' (i.e. untrained) virtual spaces. In addition, after training, their ability to judge spatial layout was better than before training. The data suggest that participants acquired a 'true' sensory driven navigational ability using echo-acoustics. In addition, we show that people not only developed navigational skills related to avoidance of collisions and finding safe passage, but also processes related to spatial perception and orienting. In sum, our results provide strong support for the idea that navigation is a skill which people can achieve via various modalities, here: echolocation.

14. VES: A Mixed-Reality System to Assist Multisensory Spatial Perception and Cognition for Blind and Visually Impaired People. Appl Sci 2020. [DOI: 10.3390/app10020523]
Abstract
In this paper, the Virtually Enhanced Senses (VES) system is described. It is an ARCore-based mixed-reality system meant to assist the navigation of blind and visually impaired people. VES operates in indoor and outdoor environments without any prior in-situ installation. It provides users with specific, runtime-configurable stimuli according to their pose, i.e., position and orientation, and the information about the environment recorded in a virtual replica. It implements three output data modalities: wall-tracking assistance, an acoustic compass, and a novel sensory substitution algorithm, Geometry-based Virtual Acoustic Space (GbVAS). The multimodal output of this algorithm takes advantage of natural human perceptual encoding of spatial data. Preliminary experiments with GbVAS were conducted with sixteen subjects in three different scenarios, demonstrating basic orientation and mobility skills after six minutes of training.

15. Coco-Martin MB, Piñero DP, Leal-Vega L, Hernández-Rodríguez CJ, Adiego J, Molina-Martín A, de Fez D, Arenillas JF. The Potential of Virtual Reality for Inducing Neuroplasticity in Children with Amblyopia. J Ophthalmol 2020; 2020:7067846. [PMID: 32676202; PMCID: PMC7341422; DOI: 10.1155/2020/7067846]
Abstract
In recent years, virtual reality (VR) has emerged as a new, safe and effective tool for the neurorehabilitation of different childhood and adulthood conditions. VR-based therapies can induce cortical reorganization and promote the activation of different neuronal connections over a wide range of ages, leading to demonstrated improvements in motor and functional skills. The use of VR for visual rehabilitation in amblyopia has been investigated in recent years, with the potential of using serious games that combine perceptual learning and dichoptic stimulation. This combination of technologies allows the clinician to measure, treat, and control changes in interocular suppression, which is one of the factors leading to cortical alterations in amblyopia. Several clinical studies on this issue have been conducted, showing the potential to improve visual acuity, contrast sensitivity, and stereopsis. Indeed, several systems have been evaluated for amblyopia treatment, including different commercially available types of head-mounted displays (HMDs). These HMDs are mostly well tolerated by patients during short exposures and do not cause significant long-term side effects, although their use has occasionally been associated with visual discomfort and other complications in certain types of subjects. More studies are needed to confirm these promising therapies in controlled randomized clinical trials, with special emphasis on defining the most appropriate treatment planning for an effective recovery of visual and binocular function.
Affiliation(s)
- María B. Coco-Martin: Group of Applied Clinical Neurosciences and Advanced Data Analysis, Neurology Department, Faculty of Medicine, University of Valladolid, Valladolid, Spain
- David P. Piñero: Department of Optics, Pharmacology and Anatomy, University of Alicante, Alicante, Spain; Department of Ophthalmology, Vithas Medimar International Hospital, Alicante, Spain
- Luis Leal-Vega: Group of Applied Clinical Neurosciences and Advanced Data Analysis, Neurology Department, Faculty of Medicine, University of Valladolid, Valladolid, Spain
- Carlos J. Hernández-Rodríguez: Department of Optics, Pharmacology and Anatomy, University of Alicante, Alicante, Spain; Department of Ophthalmology, Vithas Medimar International Hospital, Alicante, Spain
- Joaquin Adiego: Group of Applied Clinical Neurosciences and Advanced Data Analysis, Computer Science Department, School of Computing, University of Valladolid, Valladolid, Spain
- Ainhoa Molina-Martín: Department of Optics, Pharmacology and Anatomy, University of Alicante, Alicante, Spain
- Dolores de Fez: Department of Optics, Pharmacology and Anatomy, University of Alicante, Alicante, Spain
- Juan F. Arenillas: Group of Applied Clinical Neurosciences and Advanced Data Analysis, Neurology Department, Faculty of Medicine, University of Valladolid, Valladolid, Spain; Department of Neurology, Stroke Unit and Stroke Program, University Hospital, University of Valladolid, Valladolid, Spain

16. Audio Guide for Visually Impaired People Based on Combination of Stereo Vision and Musical Tones. Sensors 2019; 20:151. [PMID: 31881738; PMCID: PMC6982926; DOI: 10.3390/s20010151]
Abstract
Indoor navigation systems offer many possible applications for people who need information about the scenery and the fixed and mobile obstacles placed along their paths. In these systems, the main factors considered in construction and evaluation are the level of accuracy and the delivery time of the information. However, obstacles above the user's waistline must also be detected to avoid accidents and collisions. In this paper, different methodologies are combined to define a hybrid navigation model called iterative pedestrian dead reckoning (i-PDR). i-PDR combines the PDR algorithm with a linear Kalman filter that corrects the location, reducing the system's margin of error iteratively. Obstacle perception was addressed through stereo vision combined with a scheme of musical tones and spoken instructions covering an angle of 120 degrees in front of the user. The margin of error and maximum processing time obtained were 0.70 m and 0.09 s, respectively, with ground-level and suspended obstacles detected with an accuracy of 90%.
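
The i-PDR loop described above alternates a dead-reckoning step with a linear Kalman correction that pulls the estimate toward a position fix, shrinking the error iteratively. A minimal per-axis sketch; the step length, variances, and fixes below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def pdr_step(pos, heading_rad, step_len_m=0.7):
    """Dead-reckoned position update for one detected step (fixed step length assumed)."""
    return pos + step_len_m * np.array([np.cos(heading_rad), np.sin(heading_rad)])

def kalman_correct(pred, pred_var, meas, meas_var):
    """Linear Kalman update (applied per axis) blending the PDR prediction with a position fix."""
    k = pred_var / (pred_var + meas_var)       # Kalman gain
    return pred + k * (meas - pred), (1.0 - k) * pred_var

pos, var = np.zeros(2), 0.2
fixes = [np.array([0.72, 0.03]), np.array([1.38, -0.02])]  # illustrative position fixes
for fix in fixes:
    pos = pdr_step(pos, heading_rad=0.0)   # walk along +x
    var += 0.05                            # dead-reckoning drift grows each step
    pos, var = kalman_correct(pos, var, fix, meas_var=0.1)
print(pos, var)                            # uncertainty shrinks with each correction
```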