1
Ahmed I, Farrok O. SwingBoard: introducing swipe based virtual keyboard for visually impaired and blind users. Disabil Rehabil Assist Technol 2024; 19:1482-1493. [PMID: 37098085] [DOI: 10.1080/17483107.2023.2199793]
Abstract
PURPOSE Typing is one of the most important aspects of accessibility, and one that visually impaired and blind users struggle with the most, as existing virtual keyboards are complex and slow. MATERIALS AND METHODS This paper proposes a new text entry method named SwingBoard for visually impaired and blind smartphone users. It supports a-z, 0-9, 7 punctuation marks, 12 symbols, and eight keyboard functions, arranged across eight zones (specific ranges of angles), four segments, two modes, and a set of gestures. The keyboard is suitable for either single-handed or two-handed operation: it tracks the angle and length of each swipe to trigger one of 66 key events, so keys are triggered solely by swiping the finger at different angles and for different lengths. Typing speed is increased by several features, including quick switching between alphabet and number modes, haptic feedback, talkback on swipe for learning the layout quickly, and customizable swipe lengths. RESULTS Over 150 one-minute tests, seven blind participants reached an average of 19.89 words per minute (WPM) with an 88% accuracy rate, one of the fastest average typing speeds ever recorded for blind users. CONCLUSION Almost all users found SwingBoard effective and easy to learn, and wanted to keep using it. SwingBoard is a handy virtual keyboard for visually impaired people, offering excellent typing speed and accuracy.
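The angle-plus-length decoding described in this abstract can be sketched as follows. This is an illustrative reconstruction only: the 45-degree zone boundaries, the pixel length thresholds, and the function name are assumptions, not values or code from the paper.

```python
import math

# Minimal sketch of SwingBoard-style swipe decoding. The zone boundaries
# and length thresholds below are illustrative assumptions, not values
# reported in the paper.
NUM_ZONES = 8                     # eight angular zones
SEGMENT_EDGES = [60, 120, 180]    # assumed swipe-length cut-offs (pixels)

def decode_swipe(dx: float, dy: float) -> tuple:
    """Map a swipe vector (dx, dy) to a (zone, segment) pair."""
    angle = math.degrees(math.atan2(dy, dx)) % 360
    zone = int(angle // (360 / NUM_ZONES))                   # zone 0..7
    length = math.hypot(dx, dy)
    segment = sum(length >= edge for edge in SEGMENT_EDGES)  # segment 0..3
    return zone, segment

# A 70 px rightward swipe falls in zone 0, second length band.
print(decode_swipe(70, 0))   # (0, 1)
```

Eight zones times four segments gives 32 combinations per mode, or 64 across the two modes; the paper's remaining key events would come from its additional gestures.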
Affiliation(s)
- Iftekhar Ahmed
- Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh
- Omar Farrok
- Department of Electrical and Electronic Engineering, Ahsanullah University of Science and Technology, Tejgaon, Bangladesh
2
Gabdreshov G, Magzymov D, Yensebayev N. Preliminary investigation of SEZUAL device for basic material identification and simple spatial navigation for blind and visually impaired people. Disabil Rehabil Assist Technol 2024; 19:1343-1350. [PMID: 36756982] [DOI: 10.1080/17483107.2023.2176555]
Abstract
PURPOSE We present a preliminary set of experimental studies demonstrating device-aided echolocation in blind and visually impaired individuals. The proposed device emits a click-like sound into the surrounding space; the returning sound is perceived by participants and used to infer the surrounding environment. MATERIALS AND METHODS Two sets of experiments were conducted to evaluate the echolocation abilities of nine blind participants. The first was designed to identify four material types (glass, metal, wood, and ceramics) based on their sound reflection properties. The second was navigation through a basic maze using the device. RESULTS The experimental data demonstrate that the proposed device enables active echolocation in blind participants, particularly for material identification and spatial mobility. CONCLUSION The proposed device can potentially be used in the rehabilitation of blind and visually impaired individuals, improving spatial mobility and orientation.
3
Liu BM, Beheshti M, Naeimi T, Zhu Z, Vedanthan R, Seiple W, Rizzo JR. The BLV App Arcade: a new curated repository and evaluation rubric for mobile applications supporting blindness and low vision. Disabil Rehabil Assist Technol 2024; 19:1405-1414. [PMID: 36927193] [DOI: 10.1080/17483107.2023.2187094]
Abstract
PURPOSE Visual impairment-related disabilities have become increasingly pervasive. Current reports estimate a total of 36 million persons with blindness and 217 million persons with moderate to severe visual impairment worldwide. Assistive technologies (AT), including text-to-speech software, navigational/spatial guides, and object recognition tools have the capacity to improve the lives of people with blindness and low vision. However, access to such AT is constrained by high costs and implementation barriers. More recently, expansive growth in mobile computing has enabled many technologies to be translated into mobile applications. As a result, a marketplace of accessibility apps has become available, yet no framework exists to facilitate navigation of this voluminous space. MATERIALS AND METHODS We developed the BLV (Blind and Low Vision) App Arcade: a fun, engaging, and searchable curated repository of app AT broken down into 11 categories spanning a wide variety of themes from entertainment to navigation. Additionally, a standardized evaluation metric was formalized to assess each app in five key dimensions: reputability, privacy, data sharing, effectiveness, and ease of use/accessibility. In this paper, we describe the methodological approaches, considerations, and metrics used to find, store and score mobile applications. CONCLUSION The development of a comprehensive and standardized database of apps with a scoring rubric has the potential to increase access to reputable tools for the visually impaired community, especially for those in low- and middle-income demographics, who may have access to mobile devices but otherwise have limited access to more expensive technologies or services.
Affiliation(s)
- Bennett M Liu
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY, USA
- Stanford University, Stanford, CA, USA
- Mahya Beheshti
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY, USA
- Department of Mechanical & Aerospace Engineering, NYU Tandon School of Engineering, New York, NY, USA
- Tahareh Naeimi
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY, USA
- Zhigang Zhu
- Department of Computer Science, The CUNY City College, New York, NY, USA
- Department of Computer Science, The CUNY Graduate Center, New York, NY, USA
- Rajesh Vedanthan
- Department of Population Health, NYU Langone Health, New York, NY, USA
- Department of Medicine, NYU Langone Health, New York, NY, USA
- William Seiple
- Lighthouse Guild, New York, NY, USA
- Department of Ophthalmology, NYU Langone Health, New York, NY, USA
- John-Ross Rizzo
- Department of Rehabilitation Medicine, NYU Langone Health, New York, NY, USA
- Department of Computer Science, The CUNY City College, New York, NY, USA
- Department of Neurology, NYU Langone Health, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, NY, USA
4
Lian Y, Liu DE, Ji WZ. Survey and analysis of the current status of research in the field of outdoor navigation for the blind. Disabil Rehabil Assist Technol 2024; 19:1657-1675. [PMID: 37402242] [DOI: 10.1080/17483107.2023.2227224]
Abstract
PURPOSE In this article, we comprehensively review the current state of research on technology for outdoor travel by blind and visually impaired people (BVIP), given the diverse types and incomplete functionality of existing navigation aids for the blind. The review aims to provide a reference for related research on outdoor travel for BVIP and blind navigation. MATERIALS AND METHODS We compiled articles related to blind navigation, of which 227 met the search criteria. From this initial set, 179 articles were selected, from a technical point of view, to elaborate on five aspects of blind navigation: system equipment, data sources, guidance algorithms, optimization of related methods, and navigation maps. RESULTS Wearable assistive devices for the blind are the most studied form, followed by handheld aids. RGB data from vision sensors are the most common source of information about the navigation environment. Object detection based on image data is also particularly well represented among navigation algorithms and associated methods, indicating that computer vision has become an important topic in blind navigation research. Research on navigation maps, by contrast, is relatively scarce. CONCLUSIONS The study and development of assistive equipment for BVIP will increasingly prioritize attributes such as lightness, portability, and efficiency. In light of the upcoming driverless era, research will focus on visual sensors and computer vision technologies that can aid navigation for the blind.
Affiliation(s)
- Yue Lian
- School of Civil Engineering and Mapping and Engineering, Jiangxi University of Technology, Ganzhou, Jiangxi, China
- De-Er Liu
- School of Civil Engineering and Mapping and Engineering, Jiangxi University of Technology, Ganzhou, Jiangxi, China
- Wei-Zhen Ji
- State Key Laboratory of Remote Sensing Science, Beijing Normal University, Beijing, China
5
Abstract
BACKGROUND Low vision affects over 300 million people worldwide and can compromise both activities of daily living and quality of life. Rehabilitative training and vision assistive equipment (VAE) may help, but some visually impaired people have limited resources to attend in-person visits to rehabilitation clinics to be trained to use VAE. These people may be able to overcome barriers to care through access to remote, internet-based consultation (telerehabilitation). OBJECTIVES To compare the effects of telerehabilitation with face-to-face (e.g. in-office or inpatient) vision rehabilitation services for improving vision-related quality of life and near reading ability in people with visual function loss due to any ocular condition. Secondary objectives were to evaluate compliance with scheduled rehabilitation sessions, abandonment rates for VAE devices, and patient satisfaction ratings. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL, which contains the Cochrane Eyes and Vision Trials Register; 2021, Issue 9); Ovid MEDLINE; Embase.com; PubMed; ClinicalTrials.gov; and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP). We did not use any language restriction or study design filter in the electronic searches; however, we restricted the searches from 1980 onwards because the internet was not introduced to the public until 1982. We last searched CENTRAL, MEDLINE Ovid, Embase, and PubMed on 14 September 2021, and the trial registries on 16 March 2022. SELECTION CRITERIA We included randomized controlled trials (RCTs) or controlled clinical trials (CCTs) in which participants diagnosed with low vision had received vision rehabilitation services remotely from a human provider using internet, web-based technology, compared with an approach involving in-person consultations.
DATA COLLECTION AND ANALYSIS Two review authors independently screened titles and abstracts retrieved by the searches of the electronic databases and then full-text articles for eligible studies. Two review authors independently abstracted data from the included studies. Any discrepancies were resolved by discussion. MAIN RESULTS We identified one RCT/CCT that indirectly met our inclusion criteria, and two ongoing trials that met our inclusion criteria. The included trial had an overall high risk of bias. We did not conduct a quantitative analysis since multiple controlled trials were not identified. The single included trial of 57 participants utilized a parallel-group design. It compared 30 hours of personalized low vision training delivered through telerehabilitation with a low vision therapist (the experimental group) against the self-training standard provided by eSight using the eSkills User Guide, self-administered by participants at home for one hour per day for 30 days (the comparison group). The trial investigators found a similar direction of effects for both groups for vision-related quality of life and satisfaction at two weeks, three months, and six months. A greater proportion of participants in the comparison group had abandoned or discontinued use of the eSight Eyewear at two weeks than those in the telerehabilitation group, but discontinuance rates were similar between groups at one month and three months. We rated the certainty of the evidence for all outcomes as very low due to high risk of bias in the randomization process, missing outcome data, and imprecision. AUTHORS' CONCLUSIONS The included trial found similar efficacy between telerehabilitation with a therapist and an active control intervention of self-guided training in mostly younger to middle-aged adults with low vision who received a new wearable electronic aid.
Given the disease burden and the growing interest in telemedicine, the two ongoing studies, when completed, may provide further evidence of the potential for telerehabilitation as a platform for providing services to people with low vision.
Affiliation(s)
- Ava K Bittner
- Ophthalmology, UCLA Stein Eye Institute, Los Angeles, California, USA
- Patrick D Yoshinaga
- Southern California College of Optometry, Marshall B Ketchum University, Fullerton, California, USA
- Thanitsara Rittiphairoj
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Ophthalmology, University of Colorado Denver Anschutz Medical Campus, Aurora, CO, USA
- Tianjing Li
- Department of Ophthalmology, University of Colorado Denver Anschutz Medical Campus, Aurora, CO, USA
6
Zhang C, Xiao R, Wang A, Zhao Z. Silicone oil-filled foldable capsular vitreous body versus silicone oil endotamponade for treatment of no light perception after severe ocular trauma. Retina 2022; 42:553-560. [PMID: 35188493] [PMCID: PMC9561226] [DOI: 10.1097/iae.0000000000003336]
Abstract
PURPOSE To compare the anatomical and functional outcomes of silicone oil (SO)-filled foldable capsular vitreous body (FCVB) and SO endotamponade in vitrectomy for patients with no light perception after ocular trauma. METHODS A total of 64 patients (64 eyes) with no light perception caused by severe ocular trauma were divided into FCVB and SO groups based on the surgical treatment. The main outcome measurements were retinal reattachment rate, intraocular pressure, best-corrected visual acuity, and number of operations. RESULTS Both the FCVB group (29 eyes) and the SO group (35 eyes) showed significant improvement in postoperative best-corrected visual acuity and intraocular pressure. The two groups showed no significant differences in final intraocular pressure and the retinal reattachment rate. The postoperative vision (≥LP) in the FCVB group was significantly worse than in the SO group (FCVB [4/29] vs. SO [18/35], P = 0.003). However, the number of surgeries in the FCVB group was significantly lower than in the SO group (FCVB [1.10] vs. SO [2.23], P < 0.001). CONCLUSION Vitrectomy combined with SO endotamponade shows better short-term improvement in the treatment of no light perception caused by severe ocular trauma. However, SO-filled FCVB can effectively prevent many complications caused by direct SO endotamponade, such as secondary surgeries or SO dependence.
Affiliation(s)
- Chun Zhang
- Department of Ophthalmology, Jiangxi Clinical Research Center for Ophthalmic Disease, Affiliated Eye Hospital of Nanchang University, Nanchang, China
- Ruihan Xiao
- Department of Ophthalmology, Jiangxi Clinical Research Center for Ophthalmic Disease, Affiliated Eye Hospital of Nanchang University, Nanchang, China
- Anan Wang
- Department of Ophthalmology, Jiangxi Clinical Research Center for Ophthalmic Disease, Affiliated Eye Hospital of Nanchang University, Nanchang, China
- Zhenquan Zhao
- Department of Ophthalmology, Eye Hospital of School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Department of Ophthalmology, National Clinical Research Center for Ocular Diseases, Wenzhou, Zhejiang, China
7
Dunne S, Close H, Richards N, Ellison A, Lane AR. Maximizing Telerehabilitation for Patients With Visual Loss After Stroke: Interview and Focus Group Study With Stroke Survivors, Carers, and Occupational Therapists. J Med Internet Res 2020; 22:e19604. [PMID: 33095179] [PMCID: PMC7647809] [DOI: 10.2196/19604]
Abstract
BACKGROUND Visual field defects are a common consequence of stroke, and compensatory eye movement strategies have been identified as the most promising rehabilitation option. There has been a move toward compensatory telerehabilitation options, such as the Durham Reading and Exploration (DREX) training app, which significantly improves visual exploration, reading, and self-reported quality of life. OBJECTIVE This study details an iterative process of liaising with stroke survivors, carers, and health care professionals to identify barriers and facilitators to using rehabilitation tools, as well as elements of good practice in telerehabilitation, with a focus on how the DREX package can be maximized. METHODS Survey data from 75 stroke survivors informed 12 semistructured engagement activities (7 focus groups and 5 interviews) with 32 stroke survivors, 10 carers, and 24 occupational therapists. RESULTS Thematic analysis identified key themes within the data. Themes identified problems associated with poststroke health care from both patients' and occupational therapists' perspectives that need to be addressed to improve uptake of this rehabilitation tool and telerehabilitation options generally. This included identifying additional materials or assistance that were required to boost the impact of training packages. The acute rehabilitation setting was an identified barrier, and perceptions of technology were considered a barrier by some but a facilitator by others. In addition, 4 key features of telerehabilitation were identified: additional materials, the importance of goal setting, repetition, and feedback. CONCLUSIONS The data were used to try to overcome some barriers to the DREX training and are further discussed as considerations for telerehabilitation in general moving forward.
Affiliation(s)
- Stephen Dunne
- School of Psychology, University of Sunderland, Sunderland, United Kingdom
- Helen Close
- Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Nicola Richards
- Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Amanda Ellison
- Department of Psychology, Durham University, Durham, United Kingdom
- Alison R Lane
- Department of Psychology, Durham University, Durham, United Kingdom
8
Martinez M, Yang K, Constantinescu A, Stiefelhagen R. Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video. Sensors (Basel) 2020; 20:5202. [PMID: 32932585] [PMCID: PMC7571123] [DOI: 10.3390/s20185202]
Abstract
The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures implemented to slow the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance from other persons using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB stream to detect where persons are and use the depth camera to measure the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only when persons are nearby and does not react to non-person objects such as walls, trees or doors; it is therefore not intrusive, and it can be used in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
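The warning rule described in this abstract (alert only when a segmented person pixel is nearer than 1.5 m) can be sketched as below. This is a minimal illustration under stated assumptions: the function name, the list-of-lists mask/depth representation, and the use of 0.0 for invalid depth readings are ours, not the authors'.

```python
SOCIAL_DISTANCE_M = 1.5  # warning threshold reported in the paper

def should_warn(person_mask, depth_m):
    """Return True if any pixel labelled 'person' is nearer than 1.5 m.

    person_mask -- H x W booleans from the RGB semantic segmentation (assumed shape)
    depth_m     -- H x W depth readings in metres (0.0 = invalid pixel, assumed)
    """
    person_distances = [
        d
        for mask_row, depth_row in zip(person_mask, depth_m)
        for is_person, d in zip(mask_row, depth_row)
        if is_person and d > 0          # skip pixels with no valid depth
    ]
    # Walls, trees and doors never enter person_mask, so they never trigger.
    return bool(person_distances) and min(person_distances) < SOCIAL_DISTANCE_M

mask = [[False, True], [False, False]]               # one 'person' pixel
print(should_warn(mask, [[0.0, 3.0], [2.0, 2.0]]))   # person at 3 m -> False
print(should_warn(mask, [[0.0, 1.2], [2.0, 2.0]]))   # person at 1.2 m -> True
```

Gating the alert on the person mask rather than on raw depth is what makes the system non-intrusive: nearby static obstacles produce no feedback at all.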
Affiliation(s)
- Manuel Martinez
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
- Kailun Yang
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
- Correspondence: Tel.: +49-(0)721-608-41954
- Angela Constantinescu
- Study Centre for the Visually Impaired, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
- Rainer Stiefelhagen
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
- Study Centre for the Visually Impaired, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
9
Neugebauer A, Rifai K, Getzlaff M, Wahl S. Navigation aid for blind persons by visual-to-auditory sensory substitution: A pilot study. PLoS One 2020; 15:e0237344. [PMID: 32818953] [PMCID: PMC7446825] [DOI: 10.1371/journal.pone.0237344]
Abstract
PURPOSE In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device that improves the performance of blind persons in navigation and recognition tasks. METHODS A sensory substitution algorithm that translates 3D visual information into audio feedback was designed and integrated into an augmented reality based mobile phone application. Using the mobile device as the sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, participants had to identify virtual 3D objects and structures using the same device. RESULTS The mobile application enabled participants to complete the navigation and object recognition tasks in an experimental environment within the first trials, without previous training. This demonstrates the general feasibility and low entry barrier of the designed sensory substitution algorithm. In a direct comparison with the white cane over the ten-hour study duration, however, the sensory substitution device did not offer a statistically significant improvement in navigation.
Affiliation(s)
- Alexander Neugebauer
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Katharina Rifai
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Mathias Getzlaff
- Institute for Applied Physics, Heinrich-Heine University Duesseldorf, Duesseldorf, Germany
- Siegfried Wahl
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
10
Allison TS, Moritz J, Turk P, Stone-Roy LM. Lingual electrotactile discrimination ability is associated with the presence of specific connective tissue structures (papillae) on the tongue surface. PLoS One 2020; 15:e0237142. [PMID: 32764778] [PMCID: PMC7413419] [DOI: 10.1371/journal.pone.0237142]
Abstract
Electrical stimulation of nerve endings in the tongue can be used to communicate information to users and has been shown to be highly effective in sensory substitution applications. The anterior tip of the tongue has very small somatosensory receptive fields, comparable to those of the finger tips, allowing for precise two-point discrimination and high tactile sensitivity. However, perception of electrotactile stimuli varies significantly between users, and across the tongue surface. Despite this, previous studies all used uniform electrode grids to stimulate a region of the dorsal-medial tongue surface. In an effort to customize electrode layouts for individual users, and thus improve efficacy for sensory substitution applications, we investigated whether specific neuroanatomical and physiological features of the tongue are associated with enhanced ability to perceive active electrodes. Specifically, the study described here was designed to test whether fungiform papillae density and/or propylthiouracil sensitivity are positively or negatively associated with perceived intensity and/or discrimination ability for lingual electrotactile stimuli. Fungiform papillae number and distribution were determined for 15 participants and they were exposed to patterns of electrotactile stimulation (ETS) and asked to report perceived intensity and perceived number of stimuli. Fungiform papillae number and distribution were then compared to ETS characteristics using comprehensive and rigorous statistical analyses. Our results indicate that fungiform papillae density is correlated with enhanced discrimination ability for electrical stimuli. In contrast, papillae density, on average, is not correlated with perceived intensity of active electrodes. However, results for at least one participant suggest that further research is warranted. Our data indicate that propylthiouracil taster status is not related to ETS perceived intensity or discrimination ability. 
These data indicate that individuals with higher fungiform papillae number and density in the anterior medial tongue region may be better able to use lingual ETS for sensory substitution.
Affiliation(s)
- Tyler S. Allison
- Department of Biomedical Sciences, Colorado State University, Fort Collins, Colorado, United States of America
- Joel Moritz
- Department of Mechanical Engineering, Colorado State University, Fort Collins, Colorado, United States of America
- Sapien LLC, Fort Collins, Colorado, United States of America
- Philip Turk
- Department of Statistics, Colorado State University, Fort Collins, Colorado, United States of America
- Leslie M. Stone-Roy
- Department of Biomedical Sciences, Colorado State University, Fort Collins, Colorado, United States of America
11
Urqueta Alfaro A, Guthrie DM, McGraw C, Wittich W. Older adults with dual sensory loss in rehabilitation show high functioning and may fare better than those with single sensory loss. PLoS One 2020; 15:e0237152. [PMID: 32745118] [PMCID: PMC7398548] [DOI: 10.1371/journal.pone.0237152]
Abstract
The population of older adults that have Dual Sensory Loss (DSL) is increasing, yet most research to date has focused on single sensory impairment and is inconclusive as to whether DSL is associated with worse impact on health and well-being over single sensory loss. The primary aim of this study was to characterize the health and functioning of community-dwelling older adults with DSL who were receiving sensory rehabilitation, using an understudied assessment: the interRAI Community Health Assessment (CHA). The secondary aim was to investigate whether older adults with DSL had worse health-related outcomes than their peers with only vision loss (VL) or only hearing loss (HL). We report and compare the interRAI CHA results in a sample of 200 older adults (61+ years of age) who had DSL, VL or HL. Overall, all sensory impairment groups showed high functioning in the areas of cognition, communication, activities of daily living, depression, and psycho-social well-being. DSL was not always associated with worse outcomes compared to a single sensory loss. Rather, the results varied depending on the tasks assessed, as well as which groups were compared. Our findings highlight that despite the negative impact of sensory losses, community-dwelling older adults receiving sensory rehabilitation services tend to have overall good health and a high level of independence. These results also show that DSL is not always associated with worse outcomes compared to a single sensory loss. Further research is needed to better characterize older adults with DSL who have more severe sensory and cognitive difficulties than those in our sample, and among those who are not receiving rehabilitation services.
Affiliation(s)
- Andrea Urqueta Alfaro
- School of Optometry, University of Montréal, Montréal, Quebec, Canada
- Centre de Recherche Interdisciplinaire en Réadaptation du Montréal Métropolitain, Montreal, Canada
- Dawn M. Guthrie
- Department of Kinesiology & Physical Education, Wilfrid Laurier University, Waterloo, Ontario, Canada
- Department of Health Sciences, Wilfrid Laurier University, Waterloo, Ontario, Canada
- Cathy McGraw
- CRIR/Lethbridge-Layton-Mackay Rehabilitation Centre of West-Central Montreal, Montréal, Quebec, Canada
- Walter Wittich
- School of Optometry, University of Montréal, Montréal, Quebec, Canada
- CRIR/Lethbridge-Layton-Mackay Rehabilitation Centre of West-Central Montreal, Montréal, Quebec, Canada
- CRIR/Institut Nazareth et Louis-Braille du CISSS de la Montérégie-Centre, Montréal, Quebec, Canada
12
Satpute SA, Canady JR, Klatzky RL, Stetten GD. FingerSight: A Vibrotactile Wearable Ring for Assistance With Locating and Reaching Objects in Peripersonal Space. IEEE Trans Haptics 2020; 13:325-333. [PMID: 31603801] [DOI: 10.1109/toh.2019.2945561]
Abstract
This paper describes a prototype guidance system, "FingerSight," to help people without vision locate and reach objects in peripersonal space. It consists of four evenly spaced tactors embedded into a ring worn on the index finger, with a small camera mounted on top. Computer-vision analysis of the camera image controls vibrotactile feedback, guiding the user's hand toward nearby targets. Two experiments tested the functionality of the prototype system. The first found that participants could discriminate between five different vibrotactile sites (four individual tactors and all simultaneously) with a mean accuracy of 88.8% after initial training. In the second experiment, participants were blindfolded and instructed to move the hand wearing the device to one of four locations within arm's reach, while hand trajectories were tracked. The tactors were controlled using two different strategies: (1) repeatedly signal the axis with the largest error, and (2) signal both axes in alternation. Participants demonstrated essentially straight-line trajectories toward the target under both strategies, but the temporal parameters (rate of approach, duration) showed an advantage for signalling both axes in alternation.
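The two tactor-control strategies described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the tactor names and the image-coordinate convention are assumptions:

```python
def largest_error_tactor(dx, dy):
    """Strategy 1: vibrate only the tactor for the axis with the largest
    current error. dx, dy are signed offsets (target minus hand position)
    in the camera image; positive dx means the target is to the right,
    positive dy means it is above."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"


def alternating_tactor(dx, dy, step):
    """Strategy 2: signal the horizontal and vertical errors in
    alternation, one axis per feedback step."""
    if step % 2 == 0:
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"
```

Under strategy 1, a target mostly above the hand drives only the "up" tactor until the vertical error shrinks below the horizontal one; strategy 2 interleaves both axes regardless of their relative size.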
13
Zhang X, Zhang H, Zhang L, Zhu Y, Hu F. Double-Diamond Model-Based Orientation Guidance in Wearable Human-Machine Navigation Systems for Blind and Visually Impaired People. Sensors (Basel) 2019; 19:4670. [PMID: 31661798] [PMCID: PMC6864851] [DOI: 10.3390/s19214670]
Abstract
This paper presents the analysis and design of a new wearable orientation guidance device for modern travel aid systems for blind and visually impaired people. The four-stage double-diamond design model was applied in the design process to achieve human-centric innovation and to ensure technical feasibility and economic viability. Consequently, a sliding tactile feedback wristband was designed and prototyped. Furthermore, a Bezier curve-based adaptive path planner is proposed to guarantee collision-free planned motion. Proof-of-concept experiments were conducted in both virtual and real-world scenarios. The evaluation results confirm the efficiency and feasibility of the design and suggest its considerable potential in spatial perception rehabilitation.
Affiliation(s)
- Xiaochen Zhang
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China.
- Hui Zhang
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China.
- Linyue Zhang
- School of Communication and Design, Sun Yat-Sen University, Guangzhou 510275, China.
- Yi Zhu
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China.
- School of Industrial Design, Georgia Institute of Technology, Atlanta, GA 30332, USA.
- Fei Hu
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China.
14
Earley B, Fashner J. Eye Conditions in Infants and Children: Accommodations for Children With Vision Impairment. FP Essent 2019; 484:28-32. [PMID: 31454215]
Abstract
The effects of vision impairment and blindness on children can last a lifetime. Most children with vision impairments need a multidisciplinary team of teachers, child development specialists, and social workers. Blindness often is associated with other risk factors, disease processes, and/or disabilities. In the United States, the Social Security Administration defines children as legally blind when best corrected visual acuity is less than 20/200. The US law concerning accommodations for children with impairments is part of the Americans with Disabilities Act of 1990 (ADA), and specifically the Individuals with Disabilities Education Act (IDEA), which covers preschool-age and school-age children. Accommodations for children with vision impairment include low vision aids allowing them to stay in mainstream classes and schools.
Affiliation(s)
- Brian Earley
- Ocala Health Family Medicine Residency, 1431 SW 1st Ave Bitzer Bldg Suite 7, Ocala, FL 34471
- Julia Fashner
- Ocala Health Family Medicine Residency, 1431 SW 1st Ave Bitzer Bldg Suite 7, Ocala, FL 34471
15
Leo F, Ferrari E, Baccelliere C, Zarate J, Shea H, Cocchi E, Waszkielewicz A, Brayda L. Enhancing general spatial skills of young visually impaired people with a programmable distance discrimination training: a case control study. J Neuroeng Rehabil 2019; 16:108. [PMID: 31462262] [PMCID: PMC6714081] [DOI: 10.1186/s12984-019-0580-2]
Abstract
BACKGROUND The estimation of relative distance is a perceptual task used extensively in everyday life. This important skill suffers from biases that may be more pronounced when estimation is based on haptics. This is especially true for blind and visually impaired people, for whom haptic estimation of distances is paramount but not systematically trained. We investigated whether a programmable tactile display, used autonomously, can improve distance discrimination ability in blind and severely visually impaired youngsters between 7 and 22 years old. METHODS Training consisted of four weekly sessions in which participants were asked to haptically find, on the programmable tactile display, the pairs of squares separated by the shortest and the longest distances in tactile images containing multiple squares. A battery of haptic tests with raised-line drawings was administered before and after training, and scores were compared to those of a control group that completed only the haptic battery, without the distance discrimination training on the tactile display. RESULTS Both blind and severely impaired youngsters became more accurate and faster at the task during training. In the haptic battery, blind and severely impaired youngsters who used the programmable display improved in three and two tests, respectively. In contrast, the blind control group improved in only one test, and the severely visually impaired control group in none. CONCLUSIONS Distance discrimination skills can be trained equally well in both blind and severely impaired participants. More importantly, autonomous training with the programmable tactile display had generalized effects beyond the trained task: participants improved not only in the size discrimination test but also in memory span tests. Our study shows that tactile stimulation training requiring minimal human assistance can effectively improve generic spatial skills.
Affiliation(s)
- Fabrizio Leo
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Elisabetta Ferrari
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Caterina Baccelliere
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Juan Zarate
- LMTS, Ecole Polytechnique Fédérale de Lausanne (EPFL), Neuchâtel, Switzerland
- Herbert Shea
- LMTS, Ecole Polytechnique Fédérale de Lausanne (EPFL), Neuchâtel, Switzerland
- Luca Brayda
- Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
16
McGrath CE, Corrado AM. Adaptations to support occupational engagement with age-related vision loss: A metasynthesis study. Can J Occup Ther 2019; 86:377-387. [PMID: 31060363] [DOI: 10.1177/0008417419834422]
Abstract
BACKGROUND. Age-related vision loss (ARVL) is a progressive process that adversely affects older adults' occupational engagement. As such, older adults often employ a variety of psychological adaptation strategies. PURPOSE. The purpose of this study was to identify the psychological adaptation strategies employed by older adults aging with ARVL. METHOD. This metasynthesis identified 21 qualitative articles that described a link between psychological adaptation strategies and occupational engagement. FINDINGS. The psychological strategies identified were categorized into five themes. The strategies of persisting with hope, positivity, and acceptance and of portraying a self-image consistent with independence, competence, and self-reliance were well established in the literature, while other themes, such as using humour, relying on religious/spiritual beliefs, and comparing the self to others, were more emergent. IMPLICATIONS. By understanding the psychological adaptation strategies employed by older adults with ARVL, occupational therapists will be better positioned to guide their clients toward positive adaptive patterns.
17
Abstract
It is well known that people who read print or braille sometimes make eye or finger movements against the reading direction. How these regressions are elicited has been studied in detail by manipulating linguistic aspects of the reading material. It has also been shown that reducing the physical intensity or clarity of the visual input signal can lead to increased regressions during reading. We asked whether the same might be true in the haptic realm while reading braille. We set the height of braille dots at three different levels (high, medium, and low) and asked adult blind, practiced braille readers to read standardized texts without any repetition of content. The results show that setting the braille dot height near the tactile threshold significantly increased the frequency of regressive finger movements. Additionally, at the lowest braille dot height, braille reading speed significantly diminished. These effects did not occur at braille dot heights closer to that of standard braille (medium and high). We tentatively conclude that this effect may be due to a heightened sense of uncertainty elicited by perception near threshold, which seems to be common to the reading process independent of the sensory input modality. Furthermore, the described effect may be a feature of a brain area that contributes to the reading process mediated by vision as well as touch.
Affiliation(s)
- Daisy Lei
- The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States of America
- Natalie N. Stepien-Bernabe
- The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States of America
- Vision Science Program, School of Optometry, University of California, Berkeley, California, United States of America
- Valerie S. Morash
- The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States of America
- Manfred MacKeben
- The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States of America
18
Katzschmann RK, Araki B, Rus D. Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device. IEEE Trans Neural Syst Rehabil Eng 2019. [PMID: 29522402] [DOI: 10.1109/tnsre.2018.2800665]
Abstract
This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device presented is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of a user's waist, and the pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases.
19
Ton C, Omar A, Szedenko V, Tran VH, Aftab A, Perla F, Bernstein MJ, Yang Y. LIDAR Assist Spatial Sensing for the Visually Impaired and Performance Analysis. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1727-1734. [PMID: 30047892] [DOI: 10.1109/tnsre.2018.2859800]
Abstract
Echolocation enables people with impaired or no vision to comprehend surrounding spatial information through reflected sound. However, this technique often requires substantial training, and the accuracy of echolocation is subject to various conditions. Furthermore, individuals who practice this sensing method must simultaneously generate the sound and process the received audio information. This paper proposes and evaluates a proof-of-concept light detection and ranging (LIDAR) assist spatial sensing (LASS) system, which intends to overcome these restrictions by obtaining spatial information about the user's surroundings through a LIDAR sensor and translating it into stereo sound of varying pitch. The stereo position and the relative pitch of the sound represent an object's angular orientation and horizontal distance, respectively, granting visually impaired users an enhanced spatial perception of their surroundings and potential obstacles. The work comprised two phases: Phase I engineered the hardware and software of the LASS system, and Phase II focused on the system efficacy study. The study, approved by the Penn State Institutional Review Board, included 18 student volunteers recruited through the Penn State Department of Psychology Subject Pool. This paper demonstrates that blindfolded individuals equipped with the LASS system are able to quantitatively identify surrounding obstacles, differentiate their relative distance, and distinguish the angular location of multiple objects with minimal training.
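As a rough illustration of the spatial-to-auditory mapping this abstract describes, each LIDAR return could be rendered as a stereo tone whose balance encodes bearing and whose pitch encodes distance. This is a sketch under assumed parameters, not the published LASS implementation; the range limit and frequency endpoints are arbitrary choices:

```python
def lass_tone(angle_deg, distance_m, max_range_m=5.0,
              f_near=880.0, f_far=220.0):
    """Map one LIDAR return to (left_gain, right_gain, frequency_hz).

    angle_deg: bearing of the object, -90 (far left) to +90 (far right).
    distance_m: horizontal distance to the object.
    Closer objects get a higher pitch; the bearing sets the stereo
    balance. All constants here are illustrative assumptions.
    """
    pan = (angle_deg + 90.0) / 180.0          # 0 = hard left, 1 = hard right
    left_gain, right_gain = 1.0 - pan, pan
    nearness = 1.0 - min(distance_m, max_range_m) / max_range_m
    frequency = f_far + (f_near - f_far) * nearness
    return left_gain, right_gain, frequency
```

With these assumed constants, an object dead ahead at maximum range yields an equal-balance low tone, while one at the far left and very close yields a left-only high tone.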
20
Tan CW. Braille and the Need to Innovate for the Blind. Ann Acad Med Singap 2018; 47:1-2. [PMID: 29493705]
Affiliation(s)
- Clement Wt Tan
- Department of Ophthalmology, National University Hospital, Singapore
21
Deverell L, Meyer D, Lau BT, Al Mahmud A, Sukunesan S, Bhowmik J, Chai A, McCarthy C, Zheng P, Pipingas A, Islam FMA. Optimising technology to measure functional vision, mobility and service outcomes for people with low vision or blindness: protocol for a prospective cohort study in Australia and Malaysia. BMJ Open 2017; 7:e018140. [PMID: 29273657] [PMCID: PMC5770903] [DOI: 10.1136/bmjopen-2017-018140]
Abstract
INTRODUCTION Orientation and mobility (O&M) specialists assess the functional vision and O&M skills of people with mobility problems, usually relating to low vision or blindness. There are numerous O&M assessment checklists but no measures that reduce qualitative assessment data to a single comparable score suitable for assessing any O&M client, of any age or ability, in any location. Functional measures are needed internationally to align O&M assessment practices, guide referrals, profile O&M clients, plan appropriate services and evaluate outcomes from O&M programmes (eg, long cane training), assistive technology (eg, hazard sensors) and medical interventions (eg, retinal implants). This study aims to validate two new measures of functional performance: vision-related outcomes in orientation and mobility (VROOM) and orientation and mobility outcomes (OMO). Validation will occur in the context of ordinary O&M assessments in Australia, with cultural comparisons in Malaysia, while also developing phone apps and online training to streamline professional assessment practices. METHODS AND ANALYSIS This multiphase observational study will employ embedded mixed methods with a qualitative/quantitative priority: co-rating functional vision and O&M during social inquiry. Australian O&M agencies (n=15) provide the sampling frame. O&M specialists will use quota sampling to generate cross-sectional assessment data (n=400) before investigating selected cohorts in outcome studies. The cultural relevance of the VROOM and OMO tools will be investigated in Malaysia, where the tools will inform the design of assistive devices and evaluate prototypes. Exploratory and confirmatory factor analysis, Rasch modelling, cluster analysis and analysis of variance will be undertaken, along with descriptive analysis of measurement data. Qualitative findings will be used to interpret VROOM and OMO scores, filter statistically significant results, warrant their generalisability and identify additional relevant constructs that could also be measured. ETHICS AND DISSEMINATION Ethical approval has been granted by the Human Research Ethics Committee at Swinburne University (SHR Project 2016/316). Dissemination of results will be via agency reports, journal articles and conference presentations.
Affiliation(s)
- Lil Deverell
- Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Client Services, Guide Dogs Victoria, Kew, Australia
- Denny Meyer
- Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Bee Theng Lau
- Department of Computing, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Abdullah Al Mahmud
- Centre for Design Innovation, School of Design, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Suku Sukunesan
- Faculty of Business and Law, Swinburne Business School, Swinburne University of Technology, Hawthorn, Australia
- Jahar Bhowmik
- Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Almon Chai
- Robotics and Mechatronics Engineering, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Chris McCarthy
- School of Software and Electrical Engineering, Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, Australia
- Pan Zheng
- Department of Computing, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Andrew Pipingas
- Department of Psychological Sciences, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Fakir M Amirul Islam
- Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
22
Abstract
Electrical stimulation of the cerebral cortex is a powerful tool for exploring cortical function. Stimulation of early visual cortical areas is easily detected by subjects and produces simple visual percepts known as phosphenes. A device implanted in visual cortex that generates patterns of phosphenes could be used as a substitute for natural vision in blind patients. We review the possibilities and limitations of such a device, termed a visual cortical prosthetic. Currently, we can predict the location and size of phosphenes produced by stimulation of single electrodes. A functional prosthetic, however, must produce spatiotemporal patterns of activity that will result in the perception of complex visual objects. Although stimulation of later visual cortical areas alone usually does not lead to a visual percept, it can alter visual perception and the performance of visual behaviors, and it may be possible to train subjects to use signals injected into these areas.
Affiliation(s)
- William H Bosking
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030
- Michael S Beauchamp
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030
23
Rachitskaya AV, Yuan A, Marino MJ, Reese J, Ehlers JP. Intraoperative OCT Imaging of the Argus II Retinal Prosthesis System. Ophthalmic Surg Lasers Imaging Retina 2017; 47:999-1003. [PMID: 27842194] [DOI: 10.3928/23258160-20161031-03]
Abstract
BACKGROUND AND OBJECTIVE Optimal placement of the Argus II Retinal Prosthesis System (Second Sight Medical Products, Sylmar, CA) is critical. Intraoperative optical coherence tomography (OCT) allows for intrasurgical visualization and confirmation of array placement. In this study, two different OCT systems were evaluated to assess the feasibility and utility of this technology during Argus II surgery. PATIENTS AND METHODS Intraoperative OCT was performed on five patients undergoing Argus II implantation at Cole Eye Institute from June 2015 to July 2016. The EnVisu portable OCT (Bioptigen, Morrisville, NC) and the microscope-integrated RESCAN 700 (Zeiss, Oberkochen, Germany) intraoperative OCT systems were utilized: the EnVisu in three patients and the RESCAN 700 in three of the five patients. Following array tacking, intraoperative OCT was performed over the entire array, including the edges and tack. RESULTS Intraoperative OCT allowed for visualization of the array/retina interface. Microscope integration of the OCT system facilitated ease of focusing, real-time feedback, surgeon-directed OCT scanning of the areas of interest, and enhanced image quality at points of interest. CONCLUSIONS Intraoperative imaging of the Argus II electrode array is feasible and provides information about the electrode array-retina interface and distance to help guide the surgeon. Microscope integration of OCT appears to provide an optimal and efficient approach to intraoperative OCT during Argus II array placement.
24
Dagnelie G, Christopher P, Arditi A, da Cruz L, Duncan JL, Ho AC, Olmos de Koo LC, Sahel J, Stanga PE, Thumann G, Wang Y, Arsiero M, Dorn JD, Greenberg RJ. Performance of real-world functional vision tasks by blind subjects improves after implantation with the Argus® II retinal prosthesis system. Clin Exp Ophthalmol 2017; 45:152-159. [PMID: 27495262] [PMCID: PMC5293683] [DOI: 10.1111/ceo.12812]
Abstract
BACKGROUND The main objective of this study was to test Argus II subjects on three real-world functional vision tasks. DESIGN The study was designed to be randomized and prospective. Testing was conducted in a hospital/research laboratory setting at the various participating centres. PARTICIPANTS Twenty-eight Argus II subjects, all profoundly blind, participated in this study. METHODS Subjects were tested on three real-world functional vision tasks: Sock Sorting, Sidewalk Tracking and Walking Direction Discrimination. MAIN OUTCOME MEASURES For the Sock Sorting task, percentage correct was computed based on how accurately subjects sorted the piles on a cloth-covered table and on a bare table. In the Sidewalk Tracking task, an 'out of bounds' count was recorded, signifying how often the subject veered away from the test course. During the Walking Direction Discrimination task, subjects were tested on the number of times they correctly identified the direction of testers walking across their field of view. RESULTS The mean percentage correct with the system OFF versus ON was significantly different for both testing conditions of the Sock Sorting task (t-test, P < 0.01). On the Sidewalk Tracking task, subjects performed significantly better with the system ON than with the system OFF (t-test, P < 0.05). Eighteen of 27 subjects (67%) performed above chance with the system ON, and 6 (22%) did so with the system OFF, on the Walking Direction Discrimination task. CONCLUSIONS Argus II subjects performed better on all three tasks with their systems ON than with their systems OFF.
Affiliation(s)
- Gislin Dagnelie
- Lions Vision Research and Rehab Center, Johns Hopkins University, Baltimore, Maryland, USA
- Allen C Ho
- Wills Eye Hospital, Philadelphia, Pennsylvania, USA
- Lisa C Olmos de Koo
- Department of Ophthalmology, University of Southern California, Los Angeles, California, USA
- Yizhong Wang
- Retina Foundation of the Southwest, Dallas, Texas, USA
- Maura Arsiero
- Second Sight Medical Products Inc, Sylmar, California, USA
- Jessy D Dorn
- Second Sight Medical Products Inc, Sylmar, California, USA
25
Kessler R, Bach M, Heinrich SP. Two-Tactor Vibrotactile Navigation Information for the Blind: Directional Resolution and Intuitive Interpretation. IEEE Trans Neural Syst Rehabil Eng 2017; 25:279-286. [PMID: 28113435] [DOI: 10.1109/tnsre.2016.2569258]
Abstract
Lacking vision, blind people have to exploit other senses for navigation. Using the tactile rather than the auditory sense avoids masking important environmental information. Directional information is particularly important and is traditionally conveyed through an array of tactors, each coding one direction. Here, we present a different approach that represents arbitrary directions with only two tactors. We tested the intuitiveness, plasticity, and variability of direction perception in a behavioral experiment with 33 sighted participants.
26
Abstract
Most travellers who are blind rely on a long cane to detect drop-offs on their walking paths. We examined how different cane shaft materials affect drop-off detection performance by providing different vibrotactile and proprioceptive feedback to the cane user. Results of the study showed a significant interaction between cane shaft weight and how the cane is used. A heavier cane was advantageous for detecting drop-offs when the individual used the 'constant contact technique' - the cane tip stays in contact with the walking surface at all times - but not with the 'two-point touch technique' - the cane tip is rhythmically tapped on the surface. In addition, a more flexible cane was advantageous for detecting drop-offs when the two-point touch technique was used but not when the constant contact technique was used. It is recommended that, when blind individuals select a cane shaft material, they consider which long cane technique they use more often. Practitioner Summary: Long cane shaft material affects how well a blind individual can detect drop-offs. A heavier shaft was advantageous when using the constant contact technique (cane tip stays in continuous contact with the surface), while a more flexible shaft was better when using the two-point touch technique (cane tip rhythmically taps the surface).
Affiliation(s)
- Dae Shik Kim
- Department of Blindness and Low Vision Studies, Western Michigan University, Kalamazoo, MI, USA
- Robert Wall Emerson
- Department of Blindness and Low Vision Studies, Western Michigan University, Kalamazoo, MI, USA
- Koorosh Naghshineh
- Department of Mechanical and Aerospace Engineering, Western Michigan University, Kalamazoo, MI, USA
- Alexander Auer
- Global Headquarters, Whirlpool Corporation, Benton Harbor, MI, USA
27
Secchi S, Lauria A, Cellai G. Acoustic wayfinding: A method to measure the acoustic contrast of different paving materials for blind people. Appl Ergon 2017; 58:435-445. [PMID: 27633240] [DOI: 10.1016/j.apergo.2016.08.004]
Abstract
Acoustic wayfinding involves using a variety of auditory cues to create a mental map of the surrounding environment. For blind people, these auditory cues become the primary substitute for visual information in understanding the features of the spatial context and orienting themselves. This can include creating sound waves, such as by tapping a cane. This paper reports the results of a study of the "acoustic contrast" parameter between paving materials functioning as a cue and the surrounding or adjacent surface functioning as a background. A number of different materials were selected to create a test path, and a procedure was defined to verify the ability of blind people to distinguish different acoustic contrasts. A method is proposed for measuring the acoustic contrast generated by the impact of a cane tip on the ground, to provide blind people with environmental information for spatial orientation and wayfinding in urban places.
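The abstract does not reproduce the paper's measurement protocol; as a rough illustration, an acoustic contrast between a cue paving and its background can be expressed as the level difference, in dB, between cane-tap recordings made on each surface. The `rms`, `level_db`, and `acoustic_contrast_db` helpers and the synthetic recordings below are hypothetical, not the authors' method:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a recorded tap sound."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_db(samples, ref=1.0):
    """Sound level in dB relative to a reference amplitude."""
    return 20 * math.log10(rms(samples) / ref)

def acoustic_contrast_db(cue_samples, background_samples):
    """Level difference between a tap on the cue paving and on the background paving."""
    return level_db(cue_samples) - level_db(background_samples)

# Synthetic tap recordings: the cue material rings 10x louder than the background.
cue = [0.5 * math.sin(0.1 * n) for n in range(1000)]
background = [0.05 * math.sin(0.1 * n) for n in range(1000)]
print(round(acoustic_contrast_db(cue, background), 1))  # prints 20.0 (a 10x amplitude ratio)
```

A larger positive contrast would make the cue paving easier to pick out by ear against its surroundings.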
Affiliation(s)
- Simone Secchi
- Department of Industrial Engineering, University of Florence, Via Santa Marta 2, Florence, Italy.
- Antonio Lauria
- Department of Architecture, University of Florence, Via San Niccolò 93, Florence, Italy
- Gianfranco Cellai
- Department of Industrial Engineering, University of Florence, Via Santa Marta 2, Florence, Italy
28
Abstract
This article describes the development of a database for the cost of inpatient rehabilitation, mental health, and long-term care stays in the Department of Veterans Affairs from fiscal year 1998 forward. Using “bedsection,” which is analogous to a hospital ward, the authors categorize inpatient services into nine categories: rehabilitation, blind rehabilitation, spinal cord injury, psychiatry, substance abuse, intermediate medicine, domiciliary, psychosocial residential rehabilitation, and nursing home. For each of the nine categories, they estimated a national and a local (i.e., medical center) average per diem cost. The nursing home average per diem costs were adjusted for case mix using patient assessment information. Encounter-level costs were then calculated by multiplying the average per diem cost by the number of days of stay in the fiscal year. The national cost estimates are more reliable than the local cost estimates.
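The per diem arithmetic described above can be sketched in a few lines. The category names follow the abstract, but the dollar figures and the `encounter_cost` helper are invented for illustration; the real averages come from the database itself:

```python
# Hypothetical national average per diem costs (USD), for illustration only.
per_diem = {
    "rehabilitation": 650.0,
    "blind rehabilitation": 700.0,
    "nursing home": 280.0,
}

def encounter_cost(category, days_of_stay):
    """Encounter-level cost = average per diem cost x days of stay in the fiscal year."""
    return per_diem[category] * days_of_stay

print(encounter_cost("blind rehabilitation", 30))  # prints 21000.0
```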
Affiliation(s)
- Wei Yu
- VA HSR&D Health Economics Resource Center, Stanford University, USA
29
Leo F, Cocchi E, Brayda L. The Effect of Programmable Tactile Displays on Spatial Learning Skills in Children and Adolescents of Different Visual Disability. IEEE Trans Neural Syst Rehabil Eng 2016; 25:861-872. PMID: 27775905. DOI: 10.1109/tnsre.2016.2619742.
Abstract
Vision loss has severe impacts on physical, social and emotional well-being. The education of blind children poses challenges, as many school subjects (e.g., geometry, mathematics) are normally taught by relying heavily on vision. Touch-based assistive technologies are potential tools for providing graphical content to blind users, improving learning possibilities and social inclusion. Raised-line drawings are still the gold standard, but their stimuli cannot be reconfigured or adapted, and the blind person constantly requires assistance. Although much research concerns technological development, little work has assessed programmable tactile graphics in educational and rehabilitative contexts. Here we designed, on programmable tactile displays, tests aimed at assessing spatial memory skills and shape recognition abilities. The tests involved a group of blind and a group of low-vision children and adolescents in a four-week longitudinal schedule. After establishing subject-specific difficulty levels, we observed a significant enhancement of performance across sessions for both groups. Learning effects were comparable to raised-paper control tests; however, our setup required minimal external assistance. Overall, our results demonstrate that programmable maps are an effective way to display graphical content in educational and rehabilitative contexts. They can be at least as effective as traditional paper tests while providing superior flexibility and versatility.
30
Abstract
Two blind women affected by severe mental retardation were exposed to two previously developed orientation systems. One of the systems was based on acoustic cues, the other on vibratory feedback. The aim was to assess the relative effectiveness of the two systems. Data indicated that the acoustic system ensured a higher frequency of correct moves for one of the subjects and a more rapid performance of the moves for both subjects. The findings are reviewed in relation to the characteristics and applicability of the systems.
Affiliation(s)
- G E Lancioni
- Department of Psychology, University of Leiden, The Netherlands
31
Affiliation(s)
- Susan Okie
- Dr. Okie is a medical journalist and a clinical assistant professor of family medicine at Georgetown University School of Medicine, Washington, DC
32
Fröhlich S. [Will the retinal chip restore sight?]. MMW Fortschr Med 2016; 158:8. PMID: 27323972. DOI: 10.1007/s15006-016-8404-9.
33
Egan JA. Improving Healthcare Experiences for Visually Impaired Service Members and Veterans: A Multidisciplinary Joint Agency Collaboration. Insight 2016; 41:24-26. PMID: 27209688.
34
Buchs G, Maidenbaum S, Levy-Tzedek S, Amedi A. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach. Restor Neurol Neurosci 2016; 34:97-105. PMID: 26518671. PMCID: PMC4927841. DOI: 10.3233/rnn-150592.
Abstract
PURPOSE To visually perceive our surroundings, we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (like bionic eyes) and non-invasive (like Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? METHODS We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism. RESULTS After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79 ± 18% of the trials, well above chance (chance level 10%; rank-sum P < 1.55E-04). CONCLUSIONS These findings show that even users who lacked any previous visual experience can indeed integrate this visual information at increased resolution. This has potentially important practical implications for both invasive and non-invasive visual rehabilitation methods.
Affiliation(s)
- Galit Buchs
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem Hadassah Ein-Kerem, Jerusalem, Israel
- Shachar Maidenbaum
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Amir Amedi
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision Paris, France
35
Rao GN. Progress in the past century and future of eye care. Br J Ophthalmol 2015; 100:2. PMID: 26692272. DOI: 10.1136/bjophthalmol-2015-308162.
36
Abstract
Echolocation can be used by blind and sighted humans to navigate their environment. The current study investigated the neural activity underlying the processing of path direction during walking. Brain activity was measured with fMRI in three blind echolocation experts, and three blind and three sighted novices. During scanning, participants listened to binaural recordings that had been made prior to scanning while echolocation experts echolocated during walking along a corridor which could continue to the left, right, or straight ahead. Participants also listened to control sounds that contained ambient sounds and clicks, but no echoes. The task was to decide if the corridor in the recording continued to the left, right, or straight ahead, or if they were listening to a control sound. All participants successfully dissociated echo from no-echo sounds; however, echolocation experts were superior at direction detection. We found brain activations associated with processing of path direction (contrast: echo vs. no echo) in the superior parietal lobule (SPL) and inferior frontal cortex in each group. In sighted novices, additional activation occurred in the inferior parietal lobule (IPL) and middle and superior frontal areas. Within the framework of the dorso-dorsal and ventro-dorsal pathways proposed by Rizzolatti and Matelli (2003), our results suggest that blind participants may automatically assign directional meaning to the echoes, while sighted participants may apply more conscious, high-level spatial processes. The high similarity of SPL and IFC activations across all three groups, in combination with previous research, also suggests that all participants recruited a multimodal spatial processing system for action (here: locomotion).
37
Heed T, Möller J, Röder B. Movement Induces the Use of External Spatial Coordinates for Tactile Localization in Congenitally Blind Humans. Multisens Res 2015; 28:173-94. PMID: 26152057. DOI: 10.1163/22134808-00002485.
Abstract
To localize touch, the brain integrates spatial information coded in anatomically based and external spatial reference frames. Sighted humans, by default, use both reference frames in tactile localization. In contrast, congenitally blind individuals have been reported to rely exclusively on anatomical coordinates, suggesting a crucial role of the visual system for tactile spatial processing. We tested whether the use of external spatial information in touch can, alternatively, be induced by a movement context. Sighted and congenitally blind humans performed a tactile temporal order judgment task that indexes the use of external coordinates for tactile localization, while they executed bimanual arm movements with uncrossed and crossed start and end postures. In the sighted, the start posture and planned end posture of the arm movement modulated tactile localization for stimuli presented before and during movement, indicating automatic, external recoding of touch. Contrary to previous findings, tactile localization of congenitally blind participants, too, was affected by external coordinates, though only for stimuli presented before movement start. Furthermore, only the movement's start posture, but not the planned end posture, affected blind individuals' tactile performance. Thus, integration of external coordinates in touch is established without vision, though more selectively than when vision has developed normally, and possibly restricted to movement contexts. The lack of modulation by the planned posture in congenitally blind participants suggests that external coordinates in this group are not mediated by motor efference copy. Instead, the frequent task-related posture changes, that is, movement consequences rather than planning, appear to have induced their use of external coordinates.
38
Riehle TH, Anderson SM, Lichter PA, Whalen WE, Giudice NA. Indoor inertial waypoint navigation for the blind. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2013:5187-90. PMID: 24110904. DOI: 10.1109/embc.2013.6610717.
Abstract
Indoor navigation technology is needed to support seamless mobility for the visually impaired. This paper describes the construction and evaluation of an inertial dead reckoning navigation system that provides real-time auditory guidance along mapped routes. Inertial dead reckoning is a navigation technique that couples step counting with heading estimation to compute the change in position at each step. The research described here outlines the development and evaluation of a novel navigation system that utilizes information from the mapped route to limit the problematic error accumulation inherent in traditional dead reckoning approaches. The prototype system consists of a wireless inertial sensor unit, worn at the user's hip, which streams readings to a smartphone running the navigation algorithm. Pilot human trials assessed system efficacy by studying route-following performance of blind and sighted subjects using the navigation system with real-time guidance versus offline verbal directions.
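The core dead-reckoning update the abstract describes (step counting plus heading estimation) can be sketched as follows. The constant `step_length` and the `dead_reckon` helper are illustrative assumptions; the paper's actual contribution, using the mapped route to bound error accumulation, is not shown here:

```python
import math

def dead_reckon(start_xy, step_headings, step_length=0.7):
    """Update a 2D position from per-step heading estimates (inertial dead reckoning).

    step_headings: one heading in radians (0 = +x axis) per detected step.
    step_length: assumed constant stride length in metres (a simplification).
    """
    x, y = start_xy
    for heading in step_headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return x, y

# Ten steps along +x, then five steps along +y.
pos = dead_reckon((0.0, 0.0), [0.0] * 10 + [math.pi / 2] * 5)
print(pos)  # approximately (7.0, 3.5)
```

Because each step's error is added to the next, position drift grows with distance walked, which is exactly why the system constrains the estimate with the known route map.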
39
Pawluk DTV, Adams RJ, Kitada R. Designing Haptic Assistive Technology for Individuals Who Are Blind or Visually Impaired. IEEE Trans Haptics 2015; 8:258-278. PMID: 26336151. DOI: 10.1109/toh.2015.2471300.
Abstract
This paper considers issues relevant to the design and use of haptic technology in assistive devices for individuals who are blind or visually impaired, in some of the major areas of importance: Braille reading, tactile graphics, and orientation and mobility. We show that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been directly applied to design with positive results. Differences in brain organization and performance capabilities between individuals who are "early blind" and "late blind" when using the same tactile/haptic accommodations, such as Braille, suggest the importance of training and assessing these groups individually. Practical restrictions on device design, such as performance limitations of the technology and cost, raise questions as to which aspects of these restrictions are truly important to overcome to achieve high performance. In general, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence.
40
Pawluk D, Bourbakis N, Giudice N, Hayward V, Heller M. Guest Editorial: Haptic Assistive Technology for Individuals who are Visually Impaired. IEEE Trans Haptics 2015; 8:245-247. PMID: 26649374. DOI: 10.1109/toh.2015.2476735.
41
Flores G, Kurniawan S, Manduchi R, Martinson E, Morales LM, Sisbot EA. Vibrotactile Guidance for Wayfinding of Blind Walkers. IEEE Trans Haptics 2015; 8:306-317. PMID: 25781953. DOI: 10.1109/toh.2015.2409980.
Abstract
We propose a vibrotactile interface, in the form of a belt, for guiding blind walkers. This interface enables blind walkers to receive haptic directional instructions along complex paths without the negative impact on their ability to listen to and perceive the environment that some auditory directional instructions have. The belt interface was evaluated in a controlled study with 10 blind individuals and compared to audio guidance. The experiments were videotaped, and the participants' behaviors and comments were content analyzed. Completion times and deviations from ideal paths were also collected and statistically analyzed. By triangulating the quantitative and qualitative data, we found that the belt resulted in closer path following at the expense of speed. In general, the participants were positive about the use of the vibrotactile belt to provide directional guidance.
42
Brayda L, Campus C, Memeo M, Lucagrossi L. The Importance of Visual Experience, Gender, and Emotion in the Assessment of an Assistive Tactile Mouse. IEEE Trans Haptics 2015; 8:279-286. PMID: 25935047. DOI: 10.1109/toh.2015.2426692.
Abstract
Tactile maps are efficient tools for improving the spatial understanding and mobility skills of visually impaired people. Their limited adaptability can be compensated for with haptic devices that display graphical information, but assessment of such devices is frequently limited to performance-based metrics, which can hide potential spatial abilities in orientation and mobility (O&M) protocols. We assess a low-tech tactile mouse able to deliver three-dimensional content, considering how performance, mental workload, behavior, and anxiety status vary with task difficulty and gender in congenitally blind, late blind, and sighted subjects. Results show that task difficulty coherently modulates the efficiency and difficulty of building mental maps, regardless of visual experience. Although exhibiting similar, gender-independent attitudes, the female participants had lower performance and higher cognitive load, especially when congenitally blind. All groups showed a significant decrease in anxiety after using the device. Tactile graphics delivered with our device therefore seem applicable across different visual experiences, with no negative emotional consequences of mentally demanding spatial tasks. Going beyond performance-based assessment, our methodology can help better target technological solutions in orientation and mobility protocols.
43
Abstract
The increased access to books afforded to blind people via e-publishing has given them long-sought independence for both recreational and educational reading. In most cases, blind readers access materials using speech output. For some content, such as highly technical texts, music, and graphics, speech is not an appropriate access modality, as it does not promote deep understanding. Blind braille readers therefore often prefer electronic braille displays, but these are prohibitively expensive. The search is on, therefore, for a low-cost refreshable display that would go beyond current technologies and deliver graphical content as well as text. Many solutions have been proposed, some of which reduce costs by restricting the number of characters that can be displayed, even down to a single braille cell. In this paper, we demonstrate that restricting tactile cues during braille reading leads to poorer performance in a letter recognition task. In particular, we show that lack of sliding contact between the fingertip and the braille reading surface results in more errors and that the number of errors increases as a function of presentation speed. These findings suggest that single-cell displays which do not incorporate sliding contact are likely to be less effective for braille reading.
44
O'Modhrain S, Giudice NA, Gardner JA, Legge GE. Designing Media for Visually-Impaired Users of Refreshable Touch Displays: Possibilities and Pitfalls. IEEE Trans Haptics 2015; 8:248-257. PMID: 26276998. DOI: 10.1109/toh.2015.2466231.
Abstract
This paper discusses issues of importance to designers of media for visually impaired users. The paper considers the influence of human factors on the effectiveness of presentation as well as the strengths and weaknesses of tactile, vibrotactile, haptic, and multimodal methods of rendering maps, graphs, and models. The authors, all of whom are visually impaired researchers in this domain, present findings from their own work and work of many others who have contributed to the current understanding of how to prepare and render images for both hard-copy and technology-mediated presentation of Braille and tangible graphics.
45
Jeter PE, Haaz Moonaz S, Bittner AK, Dagnelie G. Ashtanga-Based Yoga Therapy Increases the Sensory Contribution to Postural Stability in Visually-Impaired Persons at Risk for Falls as Measured by the Wii Balance Board: A Pilot Randomized Controlled Trial. PLoS One 2015; 10:e0129646. PMID: 26107256. PMCID: PMC4479589. DOI: 10.1371/journal.pone.0129646.
Abstract
Objective Persons with visual impairment (VI) are at greater risk for falls due to irreparable damage to the visual sensory input contributing to balance. Targeted training may significantly improve postural stability by strengthening the remaining sensory systems. Here, we evaluate the Ashtanga-based Yoga Therapy (AYT) program as a multi-sensory behavioral intervention to develop postural stability in VI. Design A randomized, waitlist-controlled, single-blind clinical trial. Methods The trial was conducted between October 2012 and December 2013. Twenty-one legally blind participants were randomized to an 8-week AYT program (n = 11, mean (SD) age = 55 (17)) or waitlist control (n = 10, mean (SD) age = 55 (10)). AYT subjects convened for one group session at a local yoga studio with an instructor and two individual home-based practice sessions per week for a total of 8 weeks. Subjects completed outcome measures at baseline and after 8 weeks of AYT. The primary outcome, absolute Center of Pressure (COP), was derived from the Wii Balance Board (WBB), a standalone posturography device, in 4 sensory conditions: firm surface, eyes open (EO); firm surface, eyes closed (EC); foam surface, EO; and foam surface, EC. Stabilization Indices (SI) were computed from the COP measures to determine the relative visual (SIfirm, SIfoam), somatosensory (SIEO, SIEC), and vestibular (SIV, i.e., FoamEC vs. FirmEO) contributions to balance. This study was not powered to detect between-group differences, so the significance of pre-post changes was assessed by paired-samples t-tests within each group. Results Groups were equivalent at baseline (all p > 0.05). In the AYT group, absolute COP significantly increased in the FoamEO (t(8) = -3.66, p = 0.01) and FoamEC (t(8) = -3.90, p = 0.01) conditions. Relative somatosensory SIEO (t(8) = -2.42, p = 0.04) and SIEC (t(8) = -3.96, p = 0.01), and vestibular SIV (t(8) = -2.47, p = 0.04) contributions to balance increased significantly. As expected, no significant changes from EO to EC conditions were found, indicating an absence of visual dependency in VI. No significant pre-post changes were observed in the control group (all p > 0.05). Conclusions These preliminary results establish the potential for AYT training to develop the remaining somatosensory and vestibular responses used to optimize postural stability in a VI population. Trial Registration www.ClinicalTrials.gov NCT01366677
Affiliation(s)
- Pamela E. Jeter
- Department of Ophthalmology, Lions Vision Research Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States of America
- Department of Integrative Health Sciences, Maryland University of Integrative Health, Laurel, Maryland, United States of America
- Steffany Haaz Moonaz
- Department of Integrative Health Sciences, Maryland University of Integrative Health, Laurel, Maryland, United States of America
- Ava K. Bittner
- College of Optometry, Nova Southeastern University, Ft. Lauderdale, Florida, United States of America
- Gislin Dagnelie
- Department of Ophthalmology, Lions Vision Research Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States of America
46
Affiliation(s)
- R T Ramsden
- Department of Otolaryngology, University of Manchester, UK
47
Hafed ZM, Stingl K, Bartz-Schmidt KU, Gekeler F, Zrenner E. Oculomotor behavior of blind patients seeing with a subretinal visual implant. Vision Res 2015; 118:119-31. PMID: 25906684. DOI: 10.1016/j.visres.2015.04.006.
Abstract
Electronic implants are able to restore some visual function in blind patients with hereditary retinal degenerations. Subretinal visual implants, such as the CE-approved Retina Implant Alpha IMS (Retina Implant AG, Reutlingen, Germany), sense light through the eye's optics and subsequently stimulate retinal bipolar cells via ∼1500 independent pixels to project visual signals to the brain. Because these devices are directly implanted beneath the fovea, they potentially harness the full benefit of eye movements to scan scenes and fixate objects. However, so far, the oculomotor behavior of patients using subretinal implants has not been characterized. Here, we tracked eye movements in two blind patients seeing with a subretinal implant, and we compared them to those of three healthy controls. We presented bright geometric shapes on a dark background, and we asked the patients to report seeing them or not. We found that once the patients visually localized the shapes, they fixated well and exhibited classic oculomotor fixational patterns, including the generation of microsaccades and ocular drifts. Further, we found that a reduced frequency of saccades and microsaccades was correlated with loss of visibility. Last, but not least, gaze location corresponded to the location of the stimulus, and shape and size aspects of the viewed stimulus were reflected by the direction and size of saccades. Our results pave the way for future use of eye tracking in subretinal implant patients, not only to understand their oculomotor behavior, but also to design oculomotor training strategies that can help improve their quality of life.
Affiliation(s)
- Ziad M Hafed
- Werner Reichardt Centre for Integrative Neuroscience, Otfried-Mueller Strasse 25, Tuebingen 72076, Germany.
- Katarina Stingl
- Center for Ophthalmology, Institute for Ophthalmic Research, University of Tuebingen, Schleichstrasse 12-16, Tuebingen 72076, Germany.
- Karl-Ulrich Bartz-Schmidt
- Center for Ophthalmology, Institute for Ophthalmic Research, University of Tuebingen, Schleichstrasse 12-16, Tuebingen 72076, Germany
- Florian Gekeler
- Augenklinik Katharinenhospital, Klinikum Stuttgart, Kriegsbergstrasse 60, Stuttgart 70174, Germany
- Eberhart Zrenner
- Werner Reichardt Centre for Integrative Neuroscience, Otfried-Mueller Strasse 25, Tuebingen 72076, Germany; Center for Ophthalmology, Institute for Ophthalmic Research, University of Tuebingen, Schleichstrasse 12-16, Tuebingen 72076, Germany
48
Abstract
The last 50 years or so have seen great optimism concerning the potential of sensory substitution and augmentation devices to enhance the lives of those with (or even those without) some form of sensory loss (in practice, this has typically meant those who are blind or have low vision). One commonly discussed solution for individuals who are blind has been to use one of a range of tactile-visual sensory substitution systems that represent objects captured by a camera as outline images on the skin surface in real time (what Loomis, Klatzky and Giudice, 2012, term general-purpose sensory substitution devices). However, despite the fact that touch, like vision, initially codes information spatiotopically, I would like to argue that a number of fundamental perceptual, attentional, and cognitive limitations constraining the processing of tactile information mean that the skin surface is unlikely ever to provide such general-purpose sensory substitution capabilities. At present, there is little evidence that the extensive cortical plasticity demonstrated in those who have lost (or never had) a sense can do much to overcome the limitations associated with trying to perceive high rates of spatiotemporally varying information presented via the skin surface (no matter whether that surface be the back, stomach, forehead, or tongue). Instead, the use of the skin will likely be restricted to various special-purpose devices that enable specific activities such as navigation, the control of locomotion, and pattern perception.
|
49
|
Klatzky RL, Giudice NA, Bennett CR, Loomis JM. Touch-screen technology for the dynamic display of 2D spatial information without vision: promise and progress. Multisens Res 2015; 27:359-78. [PMID: 25693301 DOI: 10.1163/22134808-00002447] [Indexed: 11/19/2022]
Abstract
Many developers wish to capitalize on touch-screen technology in developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which are otherwise featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.
|
50
|
Maidenbaum S, Levy-Tzedek S, Chebat DR, Namer-Furstenberg R, Amedi A. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation. Multisens Res 2015; 27:379-97. [PMID: 25693302 DOI: 10.1163/22134808-00002463] [Indexed: 11/19/2022]
Abstract
Mobility training programs that help the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, which offer more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use and brings them closer to visual navigation.
|