1. Kilian J, Neugebauer A, Scherffig L, Wahl S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors 2022;22:1859. doi: 10.3390/s22051859. PMID: 35271009; PMCID: PMC8914703.
Abstract
This paper documents the design, implementation and evaluation of the Unfolding Space Glove, an open source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand, enabling blind people to haptically explore the depth of their surrounding space and assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback, all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane, allowing performance comparisons between the two aids. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate that spatial information can, in general, be processed through sensory substitution using haptic, vibrotactile interfaces. Further research is required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
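The mapping the abstract describes (relative position and distance of nearby obstacles rendered as vibration on the back of the hand) can be sketched compactly. The grid size, 4 m working range and linear intensity law below are illustrative assumptions, not the paper's published specification:

```python
import numpy as np

def depth_to_vibration(depth_m, grid=(3, 3), max_range_m=4.0):
    """Map a depth image (meters) to per-motor vibration intensities.

    Each motor covers one cell of the downsampled image; the nearest
    valid obstacle in a cell drives that motor (closer -> stronger).
    Grid size, range and the linear law are illustrative assumptions.
    """
    h, w = depth_m.shape
    gh, gw = grid
    intensity = np.zeros(grid)
    for r in range(gh):
        for c in range(gw):
            cell = depth_m[r * h // gh:(r + 1) * h // gh,
                           c * w // gw:(c + 1) * w // gw]
            valid = cell[(cell > 0) & (cell <= max_range_m)]
            if valid.size:
                intensity[r, c] = 1.0 - valid.min() / max_range_m
    return intensity  # e.g. scaled to PWM duty cycles, one motor per cell
```

A real-time loop would stream camera frames through such a function and write the resulting intensities to the motor drivers.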
Affiliation(s)
- Jakob Kilian: Köln International School of Design, TH Köln, 50678 Köln, Germany; ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Alexander Neugebauer: ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Lasse Scherffig: Köln International School of Design, TH Köln, 50678 Köln, Germany
- Siegfried Wahl: ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Correspondence: Tel.: +49-7071-29-84512
2. Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021;16:e0250281. doi: 10.1371/journal.pone.0250281. PMID: 33905446; PMCID: PMC8078811.
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the everyday adoption of SSDs by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program, developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for blind users, we compared multisensory versus unisensory as well as perceptual versus descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group that received no training. At baseline, before any EyeMusic training, participants' identification of SSD-encoded objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend toward an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual versus descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, which is easily implemented for blind and visually impaired individuals, may suffice. Together, these findings may help boost the use of SSDs for rehabilitation.
Affiliation(s)
- Galit Buchs: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Haimler: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi: The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
3. Kumaran N, Ali RR, Tyler NA, Bainbridge JWB, Michaelides M, Rubin GS. Validation of a Vision-Guided Mobility Assessment for RPE65-Associated Retinal Dystrophy. Transl Vis Sci Technol 2020;9(10):5. doi: 10.1167/tvst.9.10.5. PMID: 32953245; PMCID: PMC7476654.
Abstract
Purpose: To validate a vision-guided mobility assessment for individuals affected by RPE65-associated retinal dystrophy (RPE65-RD). Methods: In this comparative cross-sectional study, 29 subjects, comprising 19 with RPE65-RD and 10 with normal sight, undertook three assessments of mobility: following a straight line, navigating a simple maze, and stepping over a sidewalk "kerb". Performance on each assessment was quantified as the time taken to complete it, the number of errors made, walking speed, and percent preferred walking speed. Subjects also undertook assessments of visual acuity, contrast sensitivity, full-field static perimetry, and age-appropriate quality of life questionnaires. To identify the most relevant metric for quantifying vision-guided mobility, we investigated repeatability as well as convergent, discriminant, and criterion validity. We also measured the effect of illumination on mobility. Results: Walking speed through the maze assessment best discriminated between RPE65-RD and normally sighted subjects, with both convergent and discriminant validity, and approached statistical significance when assessed for criterion validity (P = 0.052). Subjects with RPE65-RD had quantifiably poorer mobility at lower illumination levels. A relatively small mean difference (-0.09 m/s) was identified in comparison to a relatively large repeatability coefficient (1.10 m/s). Conclusions: We describe a novel, quantifiable, repeatable, and valid assessment of mobility designed specifically for subjects with RPE65-RD. The assessment is sensitive to the visual impairment of individuals with RPE65-RD in low illumination, captures the known phenotypic heterogeneity, and provides an important outcome measure for RPE65-RD. Translational Relevance: This assessment of vision-guided mobility, validated in a dedicated cohort of subjects with RPE65-RD, is a relevant and quantifiable outcome measure for RPE65-RD.
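Percent preferred walking speed (PPWS), one of the metrics above, expresses course walking speed as a percentage of the speed the same subject chooses on an unobstructed walk. A minimal sketch with made-up distances and times:

```python
def walking_speed(distance_m, time_s):
    """Average walking speed in m/s."""
    return distance_m / time_s

def ppws(course_speed, preferred_speed):
    """Percent preferred walking speed: course speed relative to the
    subject's own unobstructed (preferred) walking speed."""
    return 100.0 * course_speed / preferred_speed

# Hypothetical example: an 8 m maze completed in 16 s, against a
# 4 m unobstructed walk completed in 3.2 s.
course = walking_speed(8.0, 16.0)       # 0.50 m/s
preferred = walking_speed(4.0, 3.2)     # 1.25 m/s
print(f"PPWS = {ppws(course, preferred):.0f}%")  # -> PPWS = 40%
```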
Affiliation(s)
- Neruban Kumaran: UCL Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, London, UK; Guy's and St. Thomas' NHS Foundation Trust, London, UK
- Robin R Ali: UCL Institute of Ophthalmology, University College London, London, UK; NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Nick A Tyler: Department of Civil, Environmental and Geomatic Engineering, University College London, London, UK
- James W B Bainbridge: UCL Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, London, UK; NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Michel Michaelides: UCL Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, London, UK; NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Gary S Rubin: UCL Institute of Ophthalmology, University College London, London, UK; Moorfields Eye Hospital, London, UK; NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
4. Ayton LN, Rizzo JF, Bailey IL, Colenbrander A, Dagnelie G, Geruschat DR, Hessburg PC, McCarthy CD, Petoe MA, Rubin GS, Troyk PR. Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials: Recommendations from the International HOVER Taskforce. Transl Vis Sci Technol 2020;9(8):25. doi: 10.1167/tvst.9.8.25. PMID: 32864194; PMCID: PMC7426586.
Abstract
Translational research in vision prosthetics, gene therapy, optogenetics, stem cell and other forms of transplantation, and sensory substitution is creating new therapeutic options for patients with neural forms of blindness. The technical challenges faced by each of these disciplines differ considerably, but all face the same challenge of how to assess vision in patients with ultra-low vision (ULV), who will be the earliest subjects to receive new therapies. Historically, there were few tests to assess vision in ULV patients. In the 1990s, the field of visual prosthetics expanded rapidly, and this activity led to a heightened need for better tests to quantify end points for clinical studies. Each group tended to develop novel tests, which made it difficult to compare outcomes across groups. The common lack of validation of the tests and the variable use of controls added to the challenge of interpreting the outcomes of these clinical studies. In 2014, at the biennial International "Eye and the Chip" meeting of experts in the field of visual prosthetics, a group of interested leaders agreed to work cooperatively to develop the International Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials (HOVER) Taskforce. Under this banner, more than 80 specialists across seven topic areas joined an effort to formulate guidelines for performing and reporting psychophysical tests in humans who participate in clinical trials for visual restoration. This document provides the complete version of the consensus opinions of the HOVER Taskforce, which, together with its rules of governance, will be posted on the website of the Henry Ford Department of Ophthalmology (www.artificialvision.org). Research groups or companies that choose to follow these guidelines are encouraged to include a specific statement to that effect in their communications to the public. The Executive Committee of the HOVER Taskforce will maintain a list of all human psychophysical research in the relevant fields on the same website, to provide an overview of the methods and outcomes of all clinical work being performed in the attempt to restore vision to the blind. The website will also specify which scientific publications contain the statement of certification. It will be updated every 2 years and will continue to exist as a living document of worldwide efforts to restore vision to the blind. The HOVER consensus document has been written by over 80 of the world's experts in vision restoration and low vision and provides recommendations on the measurement and reporting of patient outcomes in vision restoration trials.
Affiliation(s)
- Lauren N. Ayton: Department of Optometry and Vision Sciences and Department of Surgery (Ophthalmology), The University of Melbourne, Parkville, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Joseph F. Rizzo: Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Ian L. Bailey: School of Optometry, University of California-Berkeley, Berkeley, CA, USA
- August Colenbrander: Smith-Kettlewell Eye Research Institute and California Pacific Medical Center, San Francisco, CA, USA
- Gislin Dagnelie: Lions Vision Research and Rehabilitation Center, Johns Hopkins Wilmer Eye Institute, Baltimore, MD, USA
- Duane R. Geruschat: Lions Vision Research and Rehabilitation Center, Johns Hopkins Wilmer Eye Institute, Baltimore, MD, USA
- Philip C. Hessburg: Detroit Institute of Ophthalmology, Henry Ford Health System, Grosse Pointe Park, MI, USA
- Chris D. McCarthy: Department of Computer Science & Software Engineering, Swinburne University of Technology, Melbourne, Australia
- Gary S. Rubin: University College London Institute of Ophthalmology, London, UK
- Philip R. Troyk: Armour College of Engineering, Illinois Institute of Technology, Chicago, IL, USA
5. Isaksson J, Jansson T, Nilsson J. Audomni: Super-Scale Sensory Supplementation to Increase the Mobility of Blind and Low-Vision Individuals-A Pilot Study. IEEE Trans Neural Syst Rehabil Eng 2020;28:1187-1197. doi: 10.1109/tnsre.2020.2985626. PMID: 32286992.
Abstract
OBJECTIVE: Blindness and low vision have severe effects on individuals' quality of life and on socioeconomic cost, a main contributor being a pervasive and acute reduction in mobility. To alleviate this, numerous technological solutions have been proposed over the last 70 years; however, none has become widespread. METHOD: In this paper, we introduce Audomni, a vision-to-audio, super-scale sensory substitution/supplementation device; we address the field-wide problems of poorly motivated and overabundant test methodologies and metrics; and we use our proposed Desire of Use model to evaluate the pilot user tests, their results, and Audomni itself. RESULTS: Audomni holds a spatial resolution of 80 x 60 pixels at ~1.2° angular resolution and close to real-time temporal resolution, outdoor-viable technology, and several novel differentiation methods. The tests indicated that Audomni has a low learning curve, and several key mobility subtasks were accomplished; however, the tests would benefit from higher real-life motivation and data-collection affordability. CONCLUSION: Audomni shows promise as a viable mobility device, with some addressable issues. Employing Desire of Use to design future tests should lend them both high real-life motivation and relevance. SIGNIFICANCE: As far as we know, Audomni features the greatest information conveyance rate in the field, yet seems to offer comprehensible and fairly intuitive sonification; this work is also the first to use Desire of Use as a tool to evaluate user tests and a device, and to lay out an overarching project aim.
6. Petsiuk AL, Pearce JM. Low-Cost Open Source Ultrasound-Sensing Based Navigational Support for the Visually Impaired. Sensors 2019;19:3783. doi: 10.3390/s19173783. PMID: 31480451; PMCID: PMC6749373.
Abstract
Nineteen million Americans have significant vision loss. Over 70% of them are not employed full-time, and more than a quarter live below the poverty line. Globally, there are 36 million blind people, but less than half use white canes or the more costly commercial sensory substitution devices. The quality of life of visually impaired people is hampered by the resulting lack of independence. To help alleviate these challenges, this study reports on the development of a low-cost, open-source, ultrasound-based navigational support system in the form of a wearable bracelet that allows people with lost vision to navigate, orient themselves in their surroundings, and avoid obstacles when moving. The system can be largely made with digitally distributed manufacturing using low-cost 3-D printing/milling. It conveys point-distance information by exploiting the natural active sensing approach, modulating measurements into haptic feedback with various vibration patterns within a four-meter range. It requires no complex calibration or training, consists of a small number of readily available, inexpensive components, and can be used as an independent addition to traditional tools. Sighted blindfolded participants successfully demonstrated the device on nine primary everyday navigation and guidance tasks, including indoor and outdoor navigation and avoiding collisions with other pedestrians.
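The abstract states that point-distance measurements are modulated into vibration patterns within a four-meter range. One plausible scheme (an assumption here, not necessarily the authors' encoding) shortens the pause between haptic pulses as an obstacle approaches:

```python
def pulse_period_s(distance_m, max_range_m=4.0,
                   min_period_s=0.1, max_period_s=1.0):
    """Map a measured distance to the period between haptic pulses.

    Obstacles at the edge of the range give slow pulses; near obstacles
    give rapid pulses. The linear law and period bounds are assumptions.
    """
    d = min(max(distance_m, 0.0), max_range_m)
    return min_period_s + (max_period_s - min_period_s) * d / max_range_m

# A microcontroller loop would vibrate briefly, then wait
# pulse_period_s(read_ultrasound_m()) before the next pulse, where
# read_ultrasound_m() is a hypothetical sensor-read helper.
```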
Affiliation(s)
- Aliaksei L Petsiuk: Department of Electrical & Computer Engineering, Michigan Technological University, Houghton, MI 49931, USA
- Joshua M Pearce: Department of Electrical & Computer Engineering, Michigan Technological University, Houghton, MI 49931, USA; Department of Material Science & Engineering, Michigan Technological University, Houghton, MI 49931, USA; Department of Electronics and Nanoengineering, School of Electrical Engineering, Aalto University, Espoo, FI-00076, Finland
7. Enhanced Depth Navigation Through Augmented Reality Depth Mapping in Patients with Low Vision. Sci Rep 2019;9:11230. doi: 10.1038/s41598-019-47397-w. PMID: 31375713; PMCID: PMC6677879.
Abstract
Patients diagnosed with Retinitis Pigmentosa (RP) show, in the advanced stage of the disease, severely restricted peripheral vision, causing poor mobility and a decline in quality of life. This vision loss makes it difficult to identify obstacles and their relative distances, so RP patients use mobility aids such as canes to navigate, especially in dark environments. A number of high-tech visual aids using virtual reality (VR) and sensory substitution have been developed to support or supplant traditional visual aids, but these have not achieved widespread use because they are difficult to use or block off residual vision. This paper presents a unique depth-to-high-contrast pseudocolor mapping overlay, developed and tested on a Microsoft HoloLens 1 as a low vision aid for RP patients. A single-masked, randomized trial of the AR pseudocolor low vision aid, evaluating real-world mobility and near-obstacle avoidance, was conducted with 10 RP subjects, using an FDA-validated functional obstacle course and a custom-made grasping setup. Use of the AR visual aid reduced collisions by 50% in mobility testing (p = 0.02) and by 70% in grasp testing (p = 0.03). This paper introduces a new technique, the pseudocolor wireframe, and reports the first statistically significant improvements in mobility and grasp for the population of RP patients.
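The core of such an aid is the depth-to-pseudocolor quantization itself. A minimal sketch, in which the band edges and the high-contrast palette are illustrative assumptions rather than the paper's calibrated values:

```python
import numpy as np

BAND_EDGES_M = [0.5, 1.0, 2.0, 3.0]        # depth band edges in meters
PALETTE = np.array([[255, 0, 0],           # < 0.5 m  -> red
                    [255, 255, 0],         # 0.5-1 m  -> yellow
                    [0, 255, 0],           # 1-2 m    -> green
                    [0, 0, 255],           # 2-3 m    -> blue
                    [0, 0, 0]],            # > 3 m    -> black (renders as
                   dtype=np.uint8)         # transparent on an additive display)

def pseudocolor(depth_m):
    """Quantize a depth map (H x W, meters) into a high-contrast
    RGB overlay (H x W x 3)."""
    band = np.digitize(depth_m, BAND_EDGES_M)  # band index per pixel
    return PALETTE[band]
```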
8. Han S, Qiu C, Lee KR, Jung JH, Peli E. Word recognition: re-thinking prosthetic vision evaluation. J Neural Eng 2018;15:055003. doi: 10.1088/1741-2552/aac663. PMID: 29781807.
Abstract
OBJECTIVE: Evaluations of vision prostheses and sensory substitution devices have frequently relied on repeated training and then testing with the same small set of items. These multiple-forced-choice tasks produce above-chance performance in blind users, but it is unclear whether the observed performance represents restored vision that transfers to novel, untrained items. APPROACH: Here, we tested the generalizability of the forced-choice paradigm on discrimination of low-resolution word images. Extensive visual training was conducted with the same 10 words used in previous BrainPort tongue stimulation studies. Performance on these 10 words and an additional 50 words was measured before and after the training sessions. MAIN RESULTS: The results revealed minimal performance improvement with the untrained words, demonstrating pattern discrimination limited mostly to the trained words. SIGNIFICANCE: These findings highlight the need to reconsider current evaluation practices, in particular the use of forced-choice paradigms with a few highly trained items. While appropriate for measuring performance thresholds in acuity or contrast sensitivity of a functioning visual system, performance on such tasks cannot be taken to indicate restored spatial pattern vision.
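"Above chance" in a k-alternative forced-choice task means exceeding an accuracy of 1/k; a one-sided binomial test makes the check explicit (a generic illustration with made-up counts, not the authors' analysis):

```python
from scipy.stats import binomtest

# 10-word forced choice: chance accuracy is 1/10. Suppose a participant
# identifies 23 of 60 trials correctly.
result = binomtest(k=23, n=60, p=0.1, alternative="greater")
print(f"p = {result.pvalue:.4g}")  # small p -> above-chance discrimination
```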
Affiliation(s)
- Shui'Er Han: Department of Ophthalmology, The Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, 20 Staniford Street, Boston, MA 02114-2500, USA; School of Psychology, University of Sydney, Sydney, Australia
9. Deverell L, Meyer D, Lau BT, Al Mahmud A, Sukunesan S, Bhowmik J, Chai A, McCarthy C, Zheng P, Pipingas A, Islam FMA. Optimising technology to measure functional vision, mobility and service outcomes for people with low vision or blindness: protocol for a prospective cohort study in Australia and Malaysia. BMJ Open 2017;7:e018140. doi: 10.1136/bmjopen-2017-018140. PMID: 29273657; PMCID: PMC5770903.
Abstract
INTRODUCTION: Orientation and mobility (O&M) specialists assess the functional vision and O&M skills of people with mobility problems, usually relating to low vision or blindness. There are numerous O&M assessment checklists but no measures that reduce qualitative assessment data to a single comparable score suitable for assessing any O&M client, of any age or ability, in any location. Functional measures are needed internationally to align O&M assessment practices, guide referrals, profile O&M clients, plan appropriate services, and evaluate outcomes from O&M programmes (eg, long cane training), assistive technology (eg, hazard sensors) and medical interventions (eg, retinal implants). This study aims to validate two new measures of functional performance: vision-related outcomes in orientation and mobility (VROOM) and orientation and mobility outcomes (OMO). They will be validated in the context of ordinary O&M assessments in Australia, with cultural comparisons in Malaysia, while phone apps and online training are developed to streamline professional assessment practices. METHODS AND ANALYSIS: This multiphase observational study will employ embedded mixed methods with a qualitative/quantitative priority: co-rating functional vision and O&M during social inquiry. Australian O&M agencies (n=15) provide the sampling frame. O&M specialists will use quota sampling to generate cross-sectional assessment data (n=400) before investigating selected cohorts in outcome studies. The cultural relevance of the VROOM and OMO tools will be investigated in Malaysia, where the tools will inform the design of assistive devices and evaluate prototypes. Exploratory and confirmatory factor analysis, Rasch modelling, cluster analysis and analysis of variance will be undertaken, along with descriptive analysis of the measurement data. Qualitative findings will be used to interpret VROOM and OMO scores, filter statistically significant results, warrant their generalisability and identify additional relevant constructs that could also be measured. ETHICS AND DISSEMINATION: Ethical approval has been granted by the Human Research Ethics Committee at Swinburne University (SHR Project 2016/316). Results will be disseminated via agency reports, journal articles and conference presentations.
Affiliation(s)
- Lil Deverell: Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia; Client Services, Guide Dogs Victoria, Kew, Australia
- Denny Meyer: Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Bee Theng Lau: Department of Computing, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Abdullah Al Mahmud: Centre for Design Innovation, School of Design, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Suku Sukunesan: Faculty of Business and Law, Swinburne Business School, Swinburne University of Technology, Hawthorn, Australia
- Jahar Bhowmik: Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Almon Chai: Robotics and Mechatronics Engineering, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Chris McCarthy: School of Software and Electrical Engineering, Faculty of Science, Engineering and Technology, Swinburne University of Technology, Hawthorn, Australia
- Pan Zheng: Department of Computing, Faculty of Engineering, Computing and Science, Swinburne University of Technology, Kuching, Malaysia
- Andrew Pipingas: Department of Psychological Sciences, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
- Fakir M Amirul Islam: Department of Statistics, Data Science and Epidemiology, Faculty of Health, Arts and Design, Swinburne University of Technology, Hawthorn, Australia
10. Zapf MPH, Boon MY, Lovell NH, Suaning GJ. Assistive peripheral phosphene arrays deliver advantages in obstacle avoidance in simulated end-stage retinitis pigmentosa: a virtual-reality study. J Neural Eng 2016;13:026022. doi: 10.1088/1741-2560/13/2/026022. PMID: 26902525.
Abstract
OBJECTIVE: The prospective efficacy of peripheral retinal prostheses for guiding orientation and mobility in the absence of residual vision, as compared to an implant for the central visual field (VF), was evaluated using simulated prosthetic vision (SPV). APPROACH: Sighted volunteers wearing a head-mounted display performed an obstacle circumvention task under SPV. Mobility and orientation performance were compared across three layouts of prosthetic vision: peripheral prosthetic vision of higher visual acuity (VA) but limited VF, peripheral vision of wider VF but limited VA, and centrally restricted prosthetic vision. Learning curves for these layouts were compared by fitting an exponential model to the mobility and orientation measures. MAIN RESULTS: With the peripheral layouts, performance was superior to the central layout. Walking speed with both the higher-acuity and wider-angle layouts was 5.6% higher, and mobility errors were reduced by 46.4% and 48.6%, respectively, compared to the central layout. The wider-angle layout yielded the fewest collisions, 63% fewer than the higher-acuity and 73% fewer than the central layout. With the peripheral layouts, the number of visual-scanning-related head movements was 54.3% (higher-acuity) and 60.7% (wider-angle) lower than with the central layout, and the ratio of time standing to time walking was 51.9% and 61.5% lower, respectively. Learning curves did not differ between layouts, except for time standing versus time walking, where both peripheral layouts reached significantly lower asymptotic values than the central layout. SIGNIFICANCE: Beyond complementing residual vision for improved performance, peripheral prosthetic vision can effectively guide mobility in the later stages of retinitis pigmentosa (RP) without residual vision. Further, the temporal dynamics of learning peripheral and central prosthetic vision are similar. Therefore, development of a peripheral retinal prosthesis, and early implantation to alleviate VF constriction in RP, should be considered to extend the target group and the time of benefit for potential retinal prosthesis implantees.
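The exponential learning-curve comparison above can be reproduced generically. The three-parameter form and the per-trial data below are assumptions for illustration; the paper's exact parameterization is not given in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, asymptote, gain, rate):
    """Exponential learning model: performance approaches an asymptote
    as trial number t grows."""
    return asymptote + gain * np.exp(-rate * t)

# Hypothetical mobility errors per trial for one layout
trials = np.arange(1, 11)
errors = np.array([9.1, 7.4, 6.0, 5.2, 4.4, 4.1, 3.7, 3.6, 3.4, 3.3])

params, _ = curve_fit(learning_curve, trials, errors, p0=(3.0, 7.0, 0.3))
print("asymptote=%.2f, gain=%.2f, rate=%.2f" % tuple(params))
```

Comparing layouts then amounts to comparing the fitted asymptotes and rates across conditions.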
Affiliation(s)
- Marc Patrick H Zapf: Graduate School of Biomedical Engineering, UNSW Australia, Sydney 2052, Australia
11. Murphy MC, Nau AC, Fisher C, Kim SG, Schuman JS, Chan KC. Top-down influence on the visual cortex of the blind during sensory substitution. Neuroimage 2015;125:932-940. doi: 10.1016/j.neuroimage.2015.11.021. PMID: 26584776.
Abstract
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening to image-encoded soundscapes before sensory substitution training, and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in blind than in sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than subjects with acquired blindness. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity in the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support a model of the brain, including the visual system, as a highly flexible, task-based rather than sensory-based machine.
Affiliation(s)
- Matthew C Murphy: NeuroImaging Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; Sensory Substitution Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; UPMC Eye Center, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; Louis J. Fox Center for Vision Restoration, University of Pittsburgh and UPMC, Pittsburgh, PA, USA
- Amy C Nau: Sensory Substitution Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; UPMC Eye Center, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; McGowan Institute for Regenerative Medicine, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Louis J. Fox Center for Vision Restoration, University of Pittsburgh and UPMC, Pittsburgh, PA, USA
- Christopher Fisher: Sensory Substitution Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; UPMC Eye Center, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Seong-Gi Kim: NeuroImaging Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA; McGowan Institute for Regenerative Medicine, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Biological Sciences, Sungkyunkwan University, Suwon, Republic of Korea
- Joel S Schuman: Sensory Substitution Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; UPMC Eye Center, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA; McGowan Institute for Regenerative Medicine, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Louis J. Fox Center for Vision Restoration, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
- Kevin C Chan: NeuroImaging Laboratory, University of Pittsburgh, Pittsburgh, PA, USA; UPMC Eye Center, Ophthalmology and Visual Science Research Center, Department of Ophthalmology, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA; McGowan Institute for Regenerative Medicine, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Louis J. Fox Center for Vision Restoration, University of Pittsburgh and UPMC, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
12. Navigating from a Depth Image Converted into Sound. Appl Bionics Biomech 2015;2015:543492. doi: 10.1155/2015/543492. PMID: 27019586; PMCID: PMC4745448.
Abstract
Background. Commonly manufactured depth sensors generate the kind of depth information that humans normally obtain through their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, laterality into stereo balance, and verticality into pitch. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt how to use the system both on new paths and on those they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is ineffective at very short range.
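The abstract specifies the mapping exactly: distance drives loudness, horizontal position drives stereo balance, and vertical position drives pitch. A minimal sketch of one depth-image column rendered as a stereo tone mixture; the frequency range, sample rate, frame duration and 4 m cap are assumptions, not MeloSee's published parameters:

```python
import numpy as np

def sonify_column(depth_col_m, pan, f_low=200.0, f_high=2000.0,
                  sr=22050, dur_s=0.05, max_range_m=4.0):
    """Render one depth-image column as a short stereo tone mixture.

    Row index -> pitch (top row highest), depth -> loudness (near = loud),
    pan in [-1, 1] -> stereo position of the column.
    """
    t = np.arange(int(sr * dur_s)) / sr
    freqs = np.geomspace(f_high, f_low, len(depth_col_m))  # log-spaced pitches
    amps = np.clip(1.0 - depth_col_m / max_range_m, 0.0, 1.0)
    mono = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
    left, right = (1 - pan) / 2 * mono, (1 + pan) / 2 * mono
    return np.stack([left, right], axis=1)  # (samples, 2) stereo buffer

# Sweeping pan from -1 to 1 across the image columns and concatenating
# the buffers yields one "melody" frame for the whole depth image.
```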
13. Nau AC, Pintar C, Arnoldussen A, Fisher C. Acquisition of Visual Perception in Blind Adults Using the BrainPort Artificial Vision Device. Am J Occup Ther 2015;69:6901290010p1-8. doi: 10.5014/ajot.2015.011809. PMID: 25553750; PMCID: PMC4281706.
Abstract
OBJECTIVE: We sought to determine whether intensive low vision rehabilitation would confer functional improvement in a sample of blind adults using the BrainPort artificial vision device. METHOD: Eighteen adults ages 28-69 yr (10 men, 8 women) with bilateral light perception or worse vision spent up to 6 hr per day for 1 wk undergoing structured rehabilitation interventions. The functional outcomes of object identification and word recognition were tested at baseline and after rehabilitation training. RESULTS: At baseline, participants were unable to complete the two functional assessments. After the 1-wk training protocol, participants were able to use the BrainPort device to complete both tasks with moderate success. CONCLUSION: Without training, participants could not perform above chance level using the BrainPort device. As artificial vision technologies become available, occupational therapy practitioners can play a key role in clients' success or failure in using these devices.
Affiliation(s)
- Amy C. Nau, OD: Assistant Professor, University of Pittsburgh Medical Center Eye Center; McGowan Institute for Regenerative Medicine; Fox Center for Vision Restoration; Korb & Associates, Boston, MA
- Christine Pintar, MS: Clinical Research Coordinator, Fox Center for Vision Restoration, Pittsburgh, PA
- Aimee Arnoldussen, PhD: Technology Assessment Program Manager, University of Wisconsin, Madison
- Christopher Fisher: Research Assistant, Fox Center for Vision Restoration, Sensory Substitution Laboratory, Pittsburgh, PA