1. Memeo M, Sandini G, Cocchi E, Brayda L. Blind people can actively manipulate virtual objects with a novel tactile device. Sci Rep 2023; 13:22845. PMID: 38129483; PMCID: PMC10739710; DOI: 10.1038/s41598-023-49507-1.
Abstract
Frequently in rehabilitation, visually impaired persons are passive agents of exercises with fixed environmental constraints. In fact, a printed tactile map, i.e. a particular picture with a specific spatial arrangement, usually cannot be edited. Interaction with map content, instead, facilitates the learning of spatial skills because it exploits mental imagery, manipulation and strategic planning simultaneously. However, it has rarely been applied to maps, mainly because of technological limitations. This study aims to understand whether visually impaired people can autonomously build objects that are completely virtual. Specifically, we investigated whether a group of twelve blind persons, with a wide age range, could exploit mental imagery to interact with virtual content and actively manipulate it by means of a haptic device. The device is mouse-shaped and designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. Spatial information can be mentally constructed by integrating local tactile cues, given by the device, with global proprioceptive cues, given by hand and arm motion. The experiment consisted of a bi-manual task, in which one hand explored some basic virtual objects and the other hand acted on a keyboard to change the position of one object in real time. The goal was to merge basic objects into more complex objects, like a puzzle. The experiment spanned different resolutions of the tactile information. We measured task accuracy, efficiency, usability and execution time. The average accuracy in solving the puzzle was 90.5%. Importantly, accuracy was linearly predicted by efficiency, measured as the number of moves needed to solve the task. Subjective parameters linked to usability and spatial resolution did not predict accuracy; gender modulated the execution time, with men being faster than women.
Overall, we show that building purely virtual tactile objects is possible in the absence of vision and that the process is measurable and achievable in partial autonomy. Introducing virtual tactile graphics in rehabilitation protocols could facilitate the stimulation of mental imagery, a basic element of the ability to orient in space. The behavioural variable introduced in the current study can be calculated after each trial and could therefore be used to automatically measure and tailor protocols to specific user needs. In perspective, our experimental setup can inspire remote rehabilitation scenarios for visually impaired people.
Affiliation(s)
- Mariacarla Memeo
- Robotics, Brain and Cognitive Sciences Department (now with Center for Human Technologies), Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Elena Cocchi
- Istituto David Chiossone per Ciechi e Ipovedenti Onlus, Genoa, Italy
- Luca Brayda
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, Genoa, Italy
- Acoesis srl, Via Enrico Melen 83, Genoa, Italy
- Nextage srl, Piazza della Vittoria 12, Genoa, Italy
2. Tsay JS, Tan S, Chu MA, Ivry RB, Cooper EA. Low Vision Impairs Implicit Sensorimotor Adaptation in Response to Small Errors, But Not Large Errors. J Cogn Neurosci 2023; 35:736-748. PMID: 36724396; PMCID: PMC10512469; DOI: 10.1162/jocn_a_01969.
Abstract
Successful goal-directed actions require constant fine-tuning of the motor system. This fine-tuning is thought to rely on an implicit adaptation process that is driven by sensory prediction errors (e.g., where you see your hand after reaching vs. where you expected it to be). Individuals with low vision experience challenges with visuomotor control, but whether low vision disrupts motor adaptation is unknown. To explore this question, we assessed individuals with low vision and matched controls with normal vision on a visuomotor task designed to isolate implicit adaptation. We found that low vision was associated with attenuated implicit adaptation only for small visual errors, but not for large visual errors. This result highlights important constraints underlying how low-fidelity visual information is processed by the sensorimotor system to enable successful implicit adaptation.
3. Islam MS, Lim S. Vibrotactile feedback in virtual motor learning: A systematic review. Appl Ergon 2022; 101:103694. PMID: 35086007; DOI: 10.1016/j.apergo.2022.103694.
Abstract
Vibrotactile feedback can be effectively applied to motor (physical) learning in virtual environments, as it can provide task-intrinsic and augmented feedback to users, assisting them in enhancing their motor performance. This review investigates current uses of vibrotactile feedback systems in motor learning applications built upon virtual environments by systematically synthesizing 24 peer-reviewed studies. We aim to understand: (1) the current state of the science of using real-time vibrotactile feedback in virtual environments for aiding the acquisition (or improvement) of motor skills, (2) the effectiveness of using vibrotactile feedback in such applications, and (3) research gaps and opportunities in current technology. We used the Sensing-Analysis-Assessment-Intervention framework to assess the scientific literature in our review. The review identifies several research gaps in current studies, as well as potential design considerations that can improve vibrotactile feedback systems in virtual motor learning applications, including the selection and placement of feedback devices and feedback designs.
Affiliation(s)
- Md Shafiqul Islam
- Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, 24061, USA
- Sol Lim
- Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, 24061, USA
4. Goulart AA, Lucatelli A, Silveira PSP, Siqueira JDO, Pereira VFA, Carmona MJC, Valentin LSS, Vieira JE. Comparison of digital games as a cognitive function assessment tool for current standardized neuropsychological tests. Braz J Anesthesiol 2021; 72:13-20. PMID: 34411626; PMCID: PMC9373409; DOI: 10.1016/j.bjane.2021.06.027.
Abstract
Objective: Cognitive dysfunction may occur postoperatively. Fast and efficient assessment of Postoperative Cognitive Dysfunction (POCD) can minimize loss of quality of life; a study comparing a digital game with standard neuropsychological tests for assessing executive, mnemonic, and attention functions to evaluate POCD therefore seems relevant both for research and for clinical practice.
Methods: A battery of standardized tests and a digital game (MentalPlus®) were administered to 60 patients at the Central Institute of Hospital das Clínicas in São Paulo (36 women and 24 men), aged between 29 and 82 years, pre- and post-surgery performed under anesthesia. Correlation and a linear regression model were used to compare the scores obtained from the standardized tests with the scores of the six executive and cognitive functions evaluated by the game (short- and long-term memory, selective and alternating attention, inhibitory control, and visual perception).
Results: After correlation analysis, statistically significant results were found mainly for the correlation between the scores from the phase of the digital game assessing the visuoperception function and the scores from the A and B cards of the Stroop Test (p < 0.001, r = 0.99 and r = 0.64, respectively), and the scores from the TMTA (p = 0.0046, r = 0.51). We also found a moderate correlation between the phase of the game assessing short-term memory function and the VVLT (p < 0.001, r = 0.41). No statistically significant correlations were found for the other functions assessed.
Conclusion: The digital game provided scores in agreement with standardized tests for evaluating visual perception and possibly short-term memory. Further studies are necessary to verify the correlation of other phases of the digital game with standardized tests assessing cognitive functions.
Affiliation(s)
- Ananaira Alves Goulart
- Universidade de São Paulo, Faculdade de Medicina, Programa de Pós-Graduação em Anestesiologia, Ciências Cirúrgicas e Medicina Perioperatória, São Paulo, SP, Brazil
- André Lucatelli
- Universidade de São Paulo, Faculdade de Medicina, Programa de Pós-Graduação em Anestesiologia, Ciências Cirúrgicas e Medicina Perioperatória, São Paulo, SP, Brazil
- Paulo Sergio Panse Silveira
- Universidade de São Paulo, Faculdade de Medicina, Departamento de Patologia, São Paulo, SP, Brazil; Universidade de São Paulo, Faculdade de Medicina, Departamento de Medicina Legal, Ética Médica e Medicina Social e do Trabalho, São Paulo, SP, Brazil
- José de Oliveira Siqueira
- Universidade de São Paulo, Faculdade de Medicina, Departamento de Medicina Legal, Ética Médica e Medicina Social e do Trabalho, São Paulo, SP, Brazil
- Valéria Fontanelle Angelim Pereira
- Universidade de São Paulo, Faculdade de Medicina, Programa de Pós-Graduação em Anestesiologia, Ciências Cirúrgicas e Medicina Perioperatória, São Paulo, SP, Brazil; Associação MentalPlus, Barueri, SP, Brazil; Instituto do Coração (InCor), São Paulo, SP, Brazil
- Maria José Carvalho Carmona
- Universidade de São Paulo, Faculdade de Medicina, Departamento de Cirurgia, Disciplina de Anestesiologia, São Paulo, SP, Brazil
- Joaquim Edson Vieira
- Universidade de São Paulo, Faculdade de Medicina, Departamento de Cirurgia, Disciplina de Anestesiologia, São Paulo, SP, Brazil
5. Assessment of a digital game as a neuropsychological test for postoperative cognitive dysfunction. Braz J Anesthesiol 2021; 72:7-12. PMID: 34332955; PMCID: PMC9373221; DOI: 10.1016/j.bjane.2021.06.025.
Abstract
Objective: Postoperative cognitive dysfunction may result from worsening of a condition of previous impairment. It causes greater difficulty in recovery, longer hospital stays, and consequent delay in returning to work activities. Digital games have a potential neuromodulatory and rehabilitation effect. In this study, a digital game was used as a neuropsychological test to assess postoperative cognitive dysfunction, with preoperative patient performance as control.
Methods: This was a non-controlled study, with patients selected among candidates for elective non-cardiac surgery and evaluated in the pre- and postoperative periods. The digital game has six phases developed to evaluate selective attention, alternating attention, visuoperception, inhibitory control, short-term memory, and long-term memory, and takes about 25 minutes to complete. Scores are the sum of correct answers in each cognitive domain. Statistical analysis compared these cognitive functions pre- and post-surgery using a generalized linear mixed model (ANCOVA).
Results: Sixty patients were evaluated, 40% male and 60% female, with a mean age of 52.7 ± 13.5 years. Except for visuoperception, a reduction in post-surgery scores was found in all phases of the digital game.
Conclusion: The digital game was able to detect decline in several cognitive functions postoperatively. As it can be completed faster than conventional paper-based tests, this digital game may be a recommendable tool for assessing patients, especially the elderly and in the early postoperative period.
6. Tseng RMWW, Tham YC, Rim TH, Cheng CY. Emergence of non-artificial intelligence digital health innovations in ophthalmology: A systematic review. Clin Exp Ophthalmol 2021; 49:741-756. PMID: 34235833; DOI: 10.1111/ceo.13971.
Abstract
The prominent rise of digital health in ophthalmology is evident in the current age of Industry 4.0. Despite the many facets of digital health, interest and focus have recently slanted toward artificial intelligence. Other major elements of digital health, such as wearables, could also substantially impact patient-focused outcomes but have been relatively less explored and discussed. In this review, we comprehensively evaluate the use of non-artificial intelligence digital health tools in ophthalmology. Fifty-three papers were included in this systematic review: 25 discuss virtual or augmented reality, 14 discuss mobile applications and 14 discuss wearables. Most papers focused on the use of technologies to detect or rehabilitate visual impairment, glaucoma and age-related macular degeneration. Overall, the findings on patient-focused outcomes with the adoption of these technologies are encouraging. Further validation, large-scale studies and earlier consideration of real-world barriers are warranted to enable better real-world implementation.
Affiliation(s)
- Yih-Chung Tham
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, Singapore
- Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore
7. Chang KJ, Dillon LL, Deverell L, Boon MY, Keay L. Orientation and mobility outcome measures. Clin Exp Optom 2021; 103:434-448. DOI: 10.1111/cxo.13004.
Affiliation(s)
- Kuo‐yi Jade Chang
- School of Public Health, The University of Sydney, Sydney, Australia
- Injury Division, The George Institute for Global Health, Sydney, Australia
- Lisa Lorraine Dillon
- Injury Division, The George Institute for Global Health, Sydney, Australia
- Faculty of Medicine, The University of New South Wales, Sydney, Australia
- Lil Deverell
- School of Health Sciences, Swinburne University of Technology, Melbourne, Australia
- Mei Ying Boon
- School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Lisa Keay
- Injury Division, The George Institute for Global Health, Sydney, Australia
- Faculty of Medicine, The University of New South Wales, Sydney, Australia
8. Wrzesińska MA, Tabała K, Stecz P. Gaming Behaviors among Polish Students with Visual Impairment. Int J Environ Res Public Health 2021; 18:1545. PMID: 33561942; PMCID: PMC7914894; DOI: 10.3390/ijerph18041545.
Abstract
The access of people with disabilities to digital solutions promotes their inclusion and participation in many aspects of life. Computer games based on hearing or haptic devices have been gaining popularity among persons with visual impairment (VI), and players tend to display improved spatial and abstract reasoning skills, as well as better social interaction and self-confidence, after playing these games. However, a recent survey suggested that excessive gaming could represent a public health concern, as a harmful form of behavior in young people associated with risk factors for negative psychosomatic and physical complaints. Young persons with VI are regular users of various technologies, but little is known about their media patterns. This study aimed to determine the characteristics of the variables associated with gaming for adolescents with VI. The participants were 490 students, aged 13-24 years, from special schools for students with VI. Data were collected using a self-administered questionnaire. The survey indicated a tendency towards excessive gaming in a significant proportion of young persons with VI. Sociodemographic variables are important in predicting gaming prevalence or screen time, but further research focused on establishing possible mediators (such as parental attitudes towards media) is necessary for identifying problematic gaming behaviors among students with VI.
Affiliation(s)
- Klaudia Tabała
- Department of Psychosocial Rehabilitation, Medical University of Lodz, 90-419 Lodz, Poland
- Patryk Stecz
- Department of Clinical Psychology and Psychopathology, Faculty of Educational Sciences, Institute of Psychology, University of Lodz, 91-433 Lodz, Poland
9. Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy. Multimodal Technol Interact 2020. DOI: 10.3390/mti4040079.
Abstract
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people, such as blind or visually impaired people. Virtual objects and environments that can be spatially explored offer a particular benefit, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy that clusters the work done up to now from the perspectives of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback ('small scale') in particular have been strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically walkable ('medium scale') or avatar-walkable ('large scale') egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.
10. Griffin E, Picinali L, Scase M. The effectiveness of an interactive audio-tactile map for the process of cognitive mapping and recall among people with visual impairments. Brain Behav 2020; 10:e01650. PMID: 32445295; PMCID: PMC7375097; DOI: 10.1002/brb3.1650.
Abstract
Background: People with visual impairments can experience numerous challenges navigating unfamiliar environments. Systems that operate as prenavigation tools can assist such individuals. This mixed-methods study examined the effectiveness of an interactive audio-tactile map tool on the process of cognitive mapping and recall among people who were blind or had visual impairments. The tool was developed with the involvement of visually impaired individuals, who additionally provided further feedback throughout this research.
Methods: A mixed-methods experimental design was employed. Fourteen participants were allocated either to an experimental group exposed to an audio-tactile map or to a control group exposed to a verbally annotated tactile map. After five minutes' exposure, multiple-choice questions examined participants' recall of the spatial and navigational content. Subsequent semi-structured interviews were conducted to examine their views on the study and the product.
Results: The experimental group had significantly better overall recall than the control group and higher average scores in all four areas examined by the questions. The interviews suggested that the interactive component offered individuals the freedom to learn the map in several ways and did not restrict them to a sequential and linear approach to learning.
Conclusion: Assistive technology can reduce challenges faced by people with visual impairments, and the flexible learning approach offered by the audio-tactile map may be of particular value. Future researchers and assistive technology developers may wish to explore this further.
Affiliation(s)
- Edward Griffin
- School of Nursing and Midwifery, De Montfort University, Leicester, UK
- Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, London, UK
- Mark Scase
- Division of Psychology, De Montfort University, Leicester, UK
11. May KR, Tomlinson BJ, Ma X, Roberts P, Walker BN. Spotlights and Soundscapes. ACM Trans Access Comput 2020. DOI: 10.1145/3378576.
Abstract
For persons with visual impairment, forming cognitive maps of unfamiliar interior spaces can be challenging. Various technical developments have converged to make it feasible, without specialized equipment, to represent a variety of useful landmark objects via spatial audio, rather than solely dispensing route information. Although such systems could be key to facilitating cognitive map formation, high-density auditory environments must be crafted carefully to avoid overloading the listener. This article recounts a set of research exercises with potential users, in which the optimization of such systems was explored. In Experiment 1, a virtual reality environment was used to rapidly prototype and adjust the auditory environment in response to participant comments. In Experiment 2, three variants of the system were evaluated in terms of their effectiveness in a real-world building. This methodology revealed a variety of optimization approaches and recommendations for designing dense mixed-reality auditory environments aimed at supporting cognitive map formation by visually impaired persons.
Affiliation(s)
- Xiaomeng Ma
- Georgia Institute of Technology, Atlanta, Georgia
12. Law SK. Virtual Reality Simulation to Identify Vision-Associated Disability in Patients With Glaucoma. JAMA Ophthalmol 2020; 138:499-500. PMID: 32191272; DOI: 10.1001/jamaophthalmol.2020.0391.
Affiliation(s)
- Simon K Law
- Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles
13.
Abstract
My goal in searching for the big pictures is to discover novel ways of organizing information in psychology that will have both theoretical and practical significance. The first section lists my reasons for writing each of five articles. The second section discusses an additional five articles that integrate advancements in artificial intelligence and cognitive psychology. The following two sections elaborate on my collaboration with ontologists to use formal ontologies to organize psychological knowledge, including the National Institute of Mental Health Research Domain Criteria, for formulating a biological basis for mental illness. I next discuss strategies for writing integrative articles. The following section describes the helpfulness of the integrations for making psychology relevant to a general audience. I conclude with recommendations for creating breadth in doctoral training.
Affiliation(s)
- Stephen K Reed
- Department of Psychology, San Diego State University; Center for Research in Mathematics and Science Education, San Diego State University; and Department of Psychology, University of California, San Diego
14. Navigation and perception of spatial layout in virtual echo-acoustic space. Cognition 2020; 197:104185. PMID: 31951856; PMCID: PMC7033557; DOI: 10.1016/j.cognition.2020.104185.
Abstract
Successful navigation involves finding the way, planning routes, and avoiding collisions. Whilst previous research has shown that people can navigate using non-visual cues, it is not clear to what degree learned non-visual navigational abilities generalise to 'new' environments. Furthermore, the ability to successfully avoid collisions has not been investigated separately from the ability to perceive spatial layout or to orient oneself in space. Here, we address these important questions using a virtual echolocation paradigm in sighted people. Fourteen sighted blindfolded participants completed 20 virtual navigation training sessions over the course of 10 weeks. In separate sessions, before and after training, we also tested their ability to perceive the spatial layout of virtual echo-acoustic space. Furthermore, three blind echolocation experts completed the tasks without training, thus validating our virtual echo-acoustic paradigm. We found that over the course of 10 weeks sighted people became better at navigating, i.e. they reduced collisions and time needed to complete the route, and increased success rates. This also generalised to 'new' (i.e. untrained) virtual spaces. In addition, after training, their ability to judge spatial layout was better than before training. The data suggest that participants acquired a 'true' sensory driven navigational ability using echo-acoustics. In addition, we show that people not only developed navigational skills related to avoidance of collisions and finding safe passage, but also processes related to spatial perception and orienting. In sum, our results provide strong support for the idea that navigation is a skill which people can achieve via various modalities, here: echolocation.
15. Hungry Cat—A Serious Game for Conveying Spatial Information to the Visually Impaired. Multimodal Technol Interact 2019. DOI: 10.3390/mti3010012.
Abstract
Navigation involves obtaining spatial information from the environment and forming a spatial map of it. The visually impaired rely mainly on orientation and mobility training by a certified specialist to acquire spatial navigation skills; however, such training is manpower-intensive and costly. This research designed and developed a serious game, Hungry Cat, which can convey the spatial information of virtual rooms to children with visual impairment through game playing. An evaluation with 30 visually impaired participants was conducted by allowing them to explore each virtual room in Hungry Cat. After exploration, the food-finding test, a game mode available in Hungry Cat, was conducted, followed by a physical wire-net test to evaluate their ability to form spatial mental maps of the virtual rooms. The positive results of the evaluation demonstrate the ability of Hungry Cat to convey spatial information about virtual rooms and to aid the development of spatial mental maps of these rooms through game playing.
16.
17. Differences between blind people's cognitive maps after proximity and distant exploration of virtual environments. Comput Human Behav 2017. DOI: 10.1016/j.chb.2017.09.007.
18. Kristjánsson Á, Moldoveanu A, Jóhannesson ÓI, Balan O, Spagnol S, Valgeirsdóttir VV, Unnthorsson R. Designing sensory-substitution devices: Principles, pitfalls and potential. Restor Neurol Neurosci 2016; 34:769-87. PMID: 27567755; PMCID: PMC5044782; DOI: 10.3233/rnn-160647.
Abstract
An exciting possibility for compensating for loss of sensory function is to augment deficient senses by conveying missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training for the use of such devices, while highlighting potential pitfalls in their design. One example of a pitfall is how conveying extra information about the environment risks sensory overload. Related to this, the limits of attentional capacity make it important to focus on key information and avoid redundancies. Also, differences in processing characteristics and bandwidth between sensory systems severely constrain the information that can be conveyed. Furthermore, perception is a continuous process and does not involve a snapshot of the environment. Design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment and the most effective ways of conveying information should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability, and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, where proximal sensory stimulation is attributed to a distinct exterior object.
Affiliation(s)
- Árni Kristjánsson: Laboratory of Visual Perception and Visuomotor Control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Alin Moldoveanu: University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Ómar I. Jóhannesson: Laboratory of Visual Perception and Visuomotor Control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Oana Balan: University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Simone Spagnol: Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
- Vigdís Vala Valgeirsdóttir: Laboratory of Visual Perception and Visuomotor Control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Rúnar Unnthorsson: Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
19
Horton EL, Renganathan R, Toth BN, Cohen AJ, Bajcsy AV, Bateman A, Jennings MC, Khattar A, Kuo RS, Lee FA, Lim MK, Migasiuk LW, Zhang A, Zhao OK, Oliveira MA. A review of principles in design and usability testing of tactile technology for individuals with visual impairments. Assist Technol 2016; 29:28-36. [PMID: 27187665 DOI: 10.1080/10400435.2016.1176083] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022] Open
Abstract
To lay the groundwork for devising, improving, and implementing new technologies to meet the needs of individuals with visual impairments, a systematic literature review was conducted to: a) describe hardware platforms used in assistive devices, b) identify their various applications, and c) summarize practices in user testing conducted with these devices. A search in relevant EBSCO databases for articles published between 1980 and 2014 with terminology related to visual impairment, technology, and tactile sensory adaptation yielded 62 articles that met the inclusion criteria for final review. It was found that while earlier hardware development focused on pin matrices, the emphasis then shifted toward force feedback haptics and accessible touch screens. The inclusion of interactive and multimodal features has become increasingly prevalent. The quantity and consistency of research on navigation, education, and computer accessibility suggest that these are pertinent areas of need for the visually impaired community. Methodologies for usability testing ranged from case studies to larger cross-sectional studies. Many studies used blindfolded sighted users to draw conclusions about design principles and usability. Altogether, the findings presented in this review provide insight on effective design strategies and user testing methodologies for future research on assistive technology for individuals with visual impairments.
Affiliation(s)
- Emily L Horton, Ramkesh Renganathan, Bryan N Toth, Alexa J Cohen, Andrea V Bajcsy, Amelia Bateman, Mathew C Jennings, Anish Khattar, Ryan S Kuo, Felix A Lee, Meilin K Lim, Laura W Migasiuk, Amy Zhang, Oliver K Zhao, Marcio A Oliveira: Division of Information Technology, University of Maryland, College Park, Maryland, USA
20
Levy-Tzedek S, Maidenbaum S, Amedi A, Lackner J. Aging and Sensory Substitution in a Virtual Navigation Task. PLoS One 2016; 11:e0151593. [PMID: 27007812 PMCID: PMC4805187 DOI: 10.1371/journal.pone.0151593] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2015] [Accepted: 03/01/2016] [Indexed: 11/21/2022] Open
Abstract
Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that, with auditory cues, participants took longer to complete the mazes, traversed longer paths through the maze, paused more often, and had more collisions with the walls than with visual cues. The older group likewise took longer to complete the mazes, paused more, and had more collisions with the walls than the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality, and room rotation. We conclude that there is a decline in performance with age and that, while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
Affiliation(s)
- S. Levy-Tzedek: Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- S. Maidenbaum: Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- A. Amedi: Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
- J. Lackner: Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts, United States of America
21
Maidenbaum S, Buchs G, Abboud S, Lavi-Rotbain O, Amedi A. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution. PLoS One 2016; 11:e0147501. [PMID: 26882473 PMCID: PMC4755598 DOI: 10.1371/journal.pone.0147501] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2015] [Accepted: 01/05/2016] [Indexed: 12/20/2022] Open
Abstract
Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio sensory substitution devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and they offer increased accessibility without the use of expensive dedicated peripherals like electrode/vibrator arrays. Using SSDs in virtual environments draws on the same skills as using them in the real world, enabling both training on the device and virtual training on environments before real-world visits. This could enable more complex, standardized, and autonomous SSD training, as well as new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information for successful perception and interaction within them is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, task 1) and surroundings (Experiment 1, task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a more complex virtual environment than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to crosswalks, etc. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like, noted their potential for complex training, and suggested many future environments they wished to experience.
Affiliation(s)
- Shachar Maidenbaum: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Galit Buchs: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Sami Abboud: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Ori Lavi-Rotbain: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi: The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
22
Schinazi VR, Thrash T, Chebat DR. Spatial navigation by congenitally blind individuals. Wiley Interdiscip Rev Cogn Sci 2015; 7:37-58. [PMID: 26683114 PMCID: PMC4737291 DOI: 10.1002/wcs.1375] [Citation(s) in RCA: 71] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2015] [Revised: 10/16/2015] [Accepted: 11/17/2015] [Indexed: 11/08/2022]
Abstract
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
Affiliation(s)
- Victor R Schinazi: Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
- Tyler Thrash: Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
23
Chebat DR, Maidenbaum S, Amedi A. Navigation using sensory substitution in real and virtual mazes. PLoS One 2015; 10:e0126307. [PMID: 26039580 PMCID: PMC4454637 DOI: 10.1371/journal.pone.0126307] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2014] [Accepted: 03/31/2015] [Indexed: 01/27/2023] Open
Abstract
Under certain specific conditions, people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information, or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory substitution devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesize that the use of our SSD (the EyeCane: a device that translates distance information into sounds and vibrations) can enable blind people to attain a performance level similar to that of the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real, life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a virtual-EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted, and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time on the last session compared to the first. By the third session, participants improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time, and collisions.
Affiliation(s)
- Daniel-Robert Chebat: The Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Behavioral Sciences, Ariel University, Ariel, Israel
- Shachar Maidenbaum: The Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Amir Amedi: The Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
24
Maidenbaum S, Levy-Tzedek S, Chebat DR, Namer-Furstenberg R, Amedi A. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation. Multisens Res 2015; 27:379-97. [PMID: 25693302 DOI: 10.1163/22134808-00002463] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Mobility training programs that help the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device: virtual-EyeCane users completed more levels successfully, took shorter paths, and had fewer collisions than these groups, and their navigation patterns were relatively similar to visual navigation. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
25
Elli GV, Benetti S, Collignon O. Is there a future for sensory substitution outside academic laboratories? Multisens Res 2015; 27:271-91. [PMID: 25693297 DOI: 10.1163/22134808-00002460] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Sensory substitution devices (SSDs) have been developed with the ultimate purpose of supporting sensory deprived individuals in their daily activities. However, more than forty years after their first appearance in the scientific literature, SSDs still remain more common in research laboratories than in the daily life of people with sensory deprivation. Here, we seek to identify the reasons behind the limited diffusion of SSDs among the blind community by discussing the ergonomic, neurocognitive and psychosocial issues potentially associated with the use of these systems. We stress that these issues should be considered together when developing future devices or improving existing ones. We provide some examples of how to achieve this by adopting a multidisciplinary and participatory approach. These efforts would contribute not only to addressing fundamental theoretical research questions, but also to better understanding the everyday needs of blind people, and would eventually promote the use of SSDs outside laboratories.
26
Vergnieux V, Macé MJM, Jouffrais C. Wayfinding with simulated prosthetic vision: performance comparison with regular and structure-enhanced renderings. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:2585-8. [PMID: 25570519 DOI: 10.1109/embc.2014.6944151] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this study, we used a simulation of upcoming low-resolution visual neuroprostheses to evaluate the benefit of embedded computer vision techniques in a wayfinding task. We showed that augmenting the classical phosphene rendering with the basic structure of the environment (displaying the ground plane with a different level of brightness) increased both wayfinding performance and cognitive mapping. In spite of the low resolution of current and upcoming visual implants, the improvement of these cognitive functions may already be possible with embedded artificial vision algorithms.
27
Sánchez J, de Borba Campos M, Espinoza M, Merabet LB. Audio Haptic Videogaming for Developing Wayfinding Skills in Learners Who are Blind. IUI: International Conference on Intelligent User Interfaces 2014; 2014:199-208. [PMID: 25485312 DOI: 10.1145/2557500.2557519] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio and haptic based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed.
Affiliation(s)
- Jaime Sánchez: Department of Computer Science and Center for Advanced Research in Education (CARE), University of Chile, Santiago, Chile
- Marcia de Borba Campos: Faculty of Informatics, Pontifical Catholic University of Rio Grande do Sul, Rio Grande do Sul, Brazil
- Matías Espinoza: Department of Computer Science and Center for Advanced Research in Education (CARE), University of Chile, Santiago, Chile
- Lotfi B Merabet: Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
28
Connors EC, Chrastil ER, Sánchez J, Merabet LB. Virtual environments for the transfer of navigation skills in the blind: a comparison of directed instruction vs. video game based learning approaches. Front Hum Neurosci 2014; 8:223. [PMID: 24822044 PMCID: PMC4013463 DOI: 10.3389/fnhum.2014.00223] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2013] [Accepted: 03/30/2014] [Indexed: 11/13/2022] Open
Abstract
For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based virtual environment called the Audio-based Environment Simulator (AbES), which can be explored for the purpose of learning the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout following a standard route and instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the explored environment did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks was carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and shortcuts within the target building, suggesting that a ludic-based training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall. We conclude that the highly interactive and immersive exploration of the virtual environment greatly engages a blind user to develop skills akin to positive near transfer of learning. Learning through a game-play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
Affiliation(s)
- Erin C Connors: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Elizabeth R Chrastil: Department of Psychology, Center for Memory and Brain, Boston University, Boston, MA, USA
- Jaime Sánchez: Department of Computer Science, Center for Advanced Research in Education, University of Chile, Santiago, Chile
- Lotfi B Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
29
Connors EC, Chrastil ER, Sánchez J, Merabet LB. Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Front Hum Neurosci 2014; 8:133. [PMID: 24653690 PMCID: PMC3949101 DOI: 10.3389/fnhum.2014.00133] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2013] [Accepted: 02/21/2014] [Indexed: 11/16/2022] Open
Abstract
For individuals who are blind, navigating independently in an unfamiliar environment represents a considerable challenge. Inspired by the rising popularity of video games, we have developed a novel approach to train navigation and spatial cognition skills in adolescents who are blind. Audio-based Environment Simulator (AbES) is a software application that allows for the virtual exploration of an existing building set in an action video game metaphor. Using this ludic-based approach to learning, we investigated the ability of adolescents with early-onset blindness to acquire spatial information from the exploration of a target virtual indoor environment, and the efficacy of this approach. Following game play, participants were assessed on their ability to transfer and mentally manipulate the acquired spatial information in a set of navigation tasks carried out in the real environment. Success in the transfer of navigation skill performance was markedly high, suggesting that interacting with AbES leads to the generation of an accurate spatial mental representation. Furthermore, there was a positive correlation between success in game play and navigation task performance. The role of virtual environments and gaming in the development of mental spatial representations is also discussed. We conclude that this game-based learning approach can facilitate the transfer of spatial knowledge and, further, can be used by individuals who are blind for the purposes of navigation in real-world environments.
Affiliation(s)
- Erin C Connors: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Elizabeth R Chrastil: Department of Psychological and Brain Sciences, Center for Memory and Brain, Boston University, Boston, MA, USA
- Jaime Sánchez: Department of Computer Science, Center for Advanced Research in Education, University of Chile, Santiago, Chile
- Lotfi B Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
30
Maidenbaum S, Abboud S, Amedi A. Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev 2013; 41:3-15. [PMID: 24275274 DOI: 10.1016/j.neubiorev.2013.11.007] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2013] [Revised: 10/06/2013] [Accepted: 11/08/2013] [Indexed: 11/25/2022]
Abstract
Sensory substitution devices (SSDs) have come a long way since they were first developed for visual rehabilitation. They have produced exciting experimental results and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation and are currently considered reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is more than a pure sensory machine; rather, it is a highly flexible task machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of more promising behavioral achievements using SSDs and new promising technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction, such as training protocols and self-training tools.
Collapse
Affiliation(s)
- Shachar Maidenbaum
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Sami Abboud
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91220, Israel.
31
Indoor navigation by people with visual impairment using a digital sign system. PLoS One 2013;8:e76783. PMID: 24116156. PMCID: PMC3792873. DOI: 10.1371/journal.pone.0076783.
Abstract
There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.
32
Halko MA, Connors EC, Sánchez J, Merabet LB. Real world navigation independence in the early blind correlates with differential brain activity associated with virtual navigation. Hum Brain Mapp 2013;35:2768-78. PMID: 24027192. DOI: 10.1002/hbm.22365.
Abstract
Navigating is a complex cognitive task that places high demands on spatial abilities, particularly in the absence of sight. Significant advances have been made in identifying the neural correlates associated with various aspects of this skill; however, how the brain is able to navigate in the absence of visual experience remains poorly understood. Furthermore, how neural network activity relates to the wide variability in navigational independence and skill in the blind population is also unknown. Using functional magnetic resonance imaging, we investigated the neural correlates of audio-based navigation within a large-scale indoor virtual environment in early profoundly blind participants with differing levels of spatial navigation independence (assessed by the Santa Barbara Sense of Direction scale). Performing path integration tasks in the virtual environment was associated with activation within areas of a core network implicated in navigation. Furthermore, we found a positive relationship between Santa Barbara Sense of Direction scores and activation within the right temporoparietal junction during the planning and execution phases of the task. These findings suggest that differential navigational ability in the blind may be related to the utilization of different brain network structures. Further characterization of the factors that influence network activity may have important implications regarding how this skill is taught in the blind community.
Affiliation(s)
- Mark A Halko
- Berenson-Allen Center for Noninvasive Brain Stimulation, Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
33
Tan SS, Maul THB, Mennie NR. Measuring the performance of visual to auditory information conversion. PLoS One 2013;8:e63042. PMID: 23696791. PMCID: PMC3656041. DOI: 10.1371/journal.pone.0063042.
Abstract
Background
Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated by image sonification systems are still easier to learn and adapt to than other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure their performance. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank them accordingly.
Methodology
Performance is measured by both the interpretability and the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) with inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both the visual and the corresponding auditory signals. These measurements provide a basis and some insight into how the systems work.
Conclusions
With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to lead secure and productive lives.
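The two measures described in this abstract can be sketched briefly. The following is a minimal NumPy illustration, not the authors' implementation: it assumes images and sounds are represented as fixed-length feature vectors, uses Euclidean distance for both IID and ISD, and estimates entropy from a simple amplitude histogram; the paper's actual feature extraction and entropy estimation may differ.

```python
import numpy as np

def pairwise_distances(vectors):
    """Flattened upper triangle of the Euclidean distance matrix."""
    v = np.asarray(vectors, dtype=float)
    diffs = v[:, None, :] - v[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(v), k=1)
    return dist[iu]

def interpretability(image_feats, sound_feats):
    """Pearson correlation of inter-image distances (IID) with
    inter-sound distances (ISD): values near 1 mean the sonification
    preserves the relative geometry of the image set."""
    iid = pairwise_distances(image_feats)
    isd = pairwise_distances(sound_feats)
    return float(np.corrcoef(iid, isd)[0, 1])

def shannon_entropy(signal, bins=16):
    """Shannon entropy (bits) of a signal's amplitude histogram,
    a simple stand-in for the information content of a signal."""
    hist, _ = np.histogram(np.asarray(signal, dtype=float), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Under this sketch, a mapping that merely rescales distances (for example, sounds whose features are the image features times a constant) scores an interpretability of 1, while a mapping that scrambles the geometry scores near 0.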
Affiliation(s)
- Shern Shiou Tan
- School of Computer Science, The University of Nottingham Malaysia Campus, Semenyih, Selangor, Malaysia.