1. White J, Ruiz-Serra J, Petrie S, Kameneva T, McCarthy C. Self-Attention Based Vision Processing for Prosthetic Vision. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083046] [DOI: 10.1109/embc40787.2023.10341053]
Abstract
We investigate Self-Attention (SA) networks for directly learning visual representations for prosthetic vision. Specifically, we explore how the SA mechanism can be leveraged to produce task-specific scene representations for prosthetic vision, overcoming the need for explicit hand-selection of learnt features and post-processing. Further, we demonstrate how the mapping of importance to image regions can serve as an explainability tool for analysing the learnt vision processing behaviour, providing greater validation and interpretation capability than current learning-based methods for prosthetic vision. We investigate our approach in the context of an orientation and mobility (OM) task, and demonstrate its feasibility for learning vision processing pipelines for prosthetic vision.
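The paper itself includes no code, but the core mechanism is compact enough to illustrate. Below is a minimal sketch (our own, not the authors' network) of single-head scaled dot-product self-attention over image patches, where the mean attention each patch receives acts as a coarse per-region importance map of the kind the abstract describes; all shapes, weights, and names are illustrative assumptions.

```python
# Minimal self-attention sketch over image patches (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, d_k=32, seed=0):
    """patches: (N, D) array of flattened image patches."""
    rng = np.random.default_rng(seed)
    N, D = patches.shape
    Wq, Wk, Wv = (rng.standard_normal((D, d_k)) / np.sqrt(D) for _ in range(3))
    Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_k))   # (N, N) attention weights
    out = A @ V                           # attended patch features
    importance = A.mean(axis=0)           # how much each patch is attended to
    return out, importance

# 8x8 grid of 16x16 grayscale patches -> 64 tokens of dimension 256.
img = np.random.rand(128, 128)
patches = img.reshape(8, 16, 8, 16).transpose(0, 2, 1, 3).reshape(64, 256)
_, importance = self_attention(patches)
print(importance.reshape(8, 8).round(3))  # coarse per-region importance map
```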
2. Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023; 23:5. [PMID: 37140911] [PMCID: PMC10166121] [DOI: 10.1167/jov.23.5.5]
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski: Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson: Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood: Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju: Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler: Department of Psychological & Brain Sciences and Department of Computer Science, University of California, Santa Barbara, CA, USA
3. Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. [PMID: 36634357] [DOI: 10.1088/1741-2552/acb295]
Abstract
Objective. Retinal prostheses are promising devices for restoring vision to patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in a retinal prosthesis plays an important role in the quality of the restored vision, and its performance depends on our understanding of the retina's working mechanisms and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, combining new discoveries about the retina's working principles with state-of-the-art computer vision models. Approach. We surveyed the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina and describe the vision rehabilitation mechanisms of three representative retinal prostheses. We then summarize the computational frameworks abstracted from the normal retina, as well as the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and offer prospects for future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggested future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou: Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
4. Beyeler M, Sanchez-Garcia M. Towards a Smart Bionic Eye: AI-powered artificial vision for the treatment of incurable blindness. J Neural Eng 2022; 19. [PMID: 36541463] [PMCID: PMC10507809] [DOI: 10.1088/1741-2552/aca69d]
Abstract
Objective. How can we return a functional form of sight to people who are living with incurable blindness? Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across device technologies. Approach. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations by means of artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind, such as face recognition, outdoor navigation, and self-care. Main results. Complementary to existing research aiming to restore natural vision, we propose a patient-centered approach that incorporates deep learning-based visual augmentations into the next generation of devices. Significance. The ability of a visual prosthesis to support everyday tasks might make the difference between an abandoned technology and a widely adopted next-generation neuroprosthetic device.
Affiliation(s)
- Michael Beyeler: Department of Computer Science and Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, United States of America
- Melani Sanchez-Garcia: Department of Computer Science, University of California, Santa Barbara, CA, United States of America
5. de Ruyter van Steveninck J, van Gestel T, Koenders P, van der Ham G, Vereecken F, Güçlü U, van Gerven M, Güçlütürk Y, van Wezel R. Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions. J Vis 2022; 22:1. [PMID: 35103758] [PMCID: PMC8819280] [DOI: 10.1167/jov.22.2.1]
Abstract
Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm, with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment, by controlling the environmental complexity, and the improvement practically achieved with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 × 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
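For readers unfamiliar with the pipeline being compared, the sketch below shows the classical baseline in minimal form: Sobel edge detection followed by reduction to a 26 × 26 phosphene grid, the resolution the study found sufficient for a simple environment. This is our own illustration, not the study's code; the threshold value is an assumption, and the deep learning-based surface boundary detector is not reproduced here.

```python
# Classical contour-based scene simplification sketch (illustrative only).
import numpy as np

def sobel_edges(img):
    """Gradient magnitude of a grayscale image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx, gy = np.zeros_like(img), np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def to_phosphenes(edge_map, grid=26, threshold=0.5):
    """Average-pool the edge map onto a grid x grid phosphene array."""
    h, w = edge_map.shape
    ph = edge_map[: h - h % grid, : w - w % grid]
    ph = ph.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    ph /= ph.max() + 1e-8
    return (ph > threshold).astype(float)  # binary phosphene activation

img = np.random.rand(260, 260)
print(to_phosphenes(sobel_edges(img)).sum(), "active phosphenes")
```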
Affiliation(s)
- Jaap de Ruyter van Steveninck: Department of Artificial Intelligence and Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tom van Gestel: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Paula Koenders: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Guus van der Ham: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Floris Vereecken: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Umut Güçlü: Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Marcel van Gerven: Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Yagmur Güçlütürk: Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Richard van Wezel: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands
6. Petoe MA, Titchener SA, Kolic M, Kentler WG, Abbott CJ, Nayagam DAX, Baglin EK, Kvansakul J, Barnes N, Walker JG, Epp SB, Young KA, Ayton LN, Luu CD, Allen PJ. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Interim Clinical Trial Results. Transl Vis Sci Technol 2021; 10:12. [PMID: 34581770] [PMCID: PMC8479573] [DOI: 10.1167/tvst.10.10.12]
Abstract
Purpose To report the initial safety and efficacy results of a second-generation (44-channel) suprachoroidal retinal prosthesis at 56 weeks after device activation. Methods Four subjects, with advanced retinitis pigmentosa and bare-light perception only, enrolled in a phase II trial (NCT03406416). A 44-channel electrode array was implanted in a suprachoroidal pocket. Device stability, efficacy, and adverse events were investigated at 12-week intervals. Results All four subjects were implanted successfully and there were no device-related serious adverse events. Color fundus photography indicated a mild postoperative subretinal hemorrhage in two recipients, which cleared spontaneously within 2 weeks. Optical coherence tomography confirmed device stability and position under the macula. Screen-based localization accuracy was significantly better for all subjects with device on versus device off. Two subjects were significantly better with the device on in a motion discrimination task at 7, 15, and 30°/s and in a spatial discrimination task at 0.033 cycles per degree. All subjects were more accurate with the device on than device off at walking toward a target on a modified door task, localizing and touching tabletop objects, and detecting obstacles in an obstacle avoidance task. A positive effect of the implant on subjects' daily lives was confirmed by an orientation and mobility assessor and subject self-report. Conclusions These interim study data demonstrate that the suprachoroidal prosthesis is safe and provides significant improvements in functional vision, activities of daily living, and observer-rated quality of life. Translational Relevance A suprachoroidal prosthesis can provide clinically useful artificial vision while maintaining a safe surgical profile.
Affiliation(s)
- Matthew A Petoe: Bionics Institute, East Melbourne, Victoria, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Samuel A Titchener: Bionics Institute, East Melbourne, Victoria, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Maria Kolic: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- William G Kentler: Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
- Carla J Abbott: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- David A X Nayagam: Bionics Institute, East Melbourne, Victoria, Australia; Department of Pathology, University of Melbourne, St. Vincent's Hospital, Victoria, Australia
- Elizabeth K Baglin: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Jessica Kvansakul: Bionics Institute, East Melbourne, Victoria, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Nick Barnes: Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia
- Janine G Walker: Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia; Health & Biosecurity, CSIRO, Canberra, Australian Capital Territory, Australia
- Kiera A Young: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Lauren N Ayton: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Department of Optometry and Vision Sciences, University of Melbourne, Australia
- Chi D Luu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Penelope J Allen: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
7. Ayton LN, Rizzo JF, Bailey IL, Colenbrander A, Dagnelie G, Geruschat DR, Hessburg PC, McCarthy CD, Petoe MA, Rubin GS, Troyk PR. Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials: Recommendations from the International HOVER Taskforce. Transl Vis Sci Technol 2020; 9:25. [PMID: 32864194] [PMCID: PMC7426586] [DOI: 10.1167/tvst.9.8.25]
Abstract
Translational research in vision prosthetics, gene therapy, optogenetics, stem cell and other forms of transplantation, and sensory substitution is creating new therapeutic options for patients with neural forms of blindness. The technical challenges faced by each of these disciplines differ considerably, but they all face the same challenge of how to assess vision in patients with ultra-low vision (ULV), who will be the earliest subjects to receive new therapies. Historically, there were few tests to assess vision in ULV patients. In the 1990s, the field of visual prosthetics expanded rapidly, and this activity led to a heightened need to develop better tests to quantify end points for clinical studies. Each group tended to develop novel tests, which made it difficult to compare outcomes across groups. The common lack of validation of the tests and the variable use of controls added to the challenge of interpreting the outcomes of these clinical studies. In 2014, at the biennial International "Eye and the Chip" meeting of experts in the field of visual prosthetics, a group of interested leaders agreed to work cooperatively to develop the International Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials (HOVER) Taskforce. Under this banner, more than 80 specialists across seven topic areas joined an effort to formulate guidelines for performing and reporting psychophysical tests in humans who participate in clinical trials for visual restoration. This document provides the complete version of the consensus opinions of the HOVER Taskforce, which, together with its rules of governance, will be posted on the website of the Henry Ford Department of Ophthalmology (www.artificialvision.org). Research groups or companies that choose to follow these guidelines are encouraged to include a specific statement to that effect in their communications to the public. The Executive Committee of the HOVER Taskforce will maintain a list of all human psychophysical research in the relevant fields on the same website, to provide an overview of the methods and outcomes of all clinical work being performed in an attempt to restore vision to the blind. The website will also specify which scientific publications contain the statement of certification, and will be updated every 2 years, continuing as a living document of worldwide efforts to restore vision to the blind. The HOVER consensus document has been written by over 80 of the world's experts in vision restoration and low vision and provides recommendations on the measurement and reporting of patient outcomes in vision restoration trials.
Affiliation(s)
- Lauren N. Ayton: Department of Optometry and Vision Sciences and Department of Surgery (Ophthalmology), The University of Melbourne, Parkville, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Joseph F. Rizzo: Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Ian L. Bailey: School of Optometry, University of California-Berkeley, Berkeley, CA, USA
- August Colenbrander: Smith-Kettlewell Eye Research Institute and California Pacific Medical Center, San Francisco, CA, USA
- Gislin Dagnelie: Lions Vision Research and Rehabilitation Center, Johns Hopkins Wilmer Eye Institute, Baltimore, MD, USA
- Duane R. Geruschat: Lions Vision Research and Rehabilitation Center, Johns Hopkins Wilmer Eye Institute, Baltimore, MD, USA
- Philip C. Hessburg: Detroit Institute of Ophthalmology, Henry Ford Health System, Grosse Pointe Park, MI, USA
- Chris D. McCarthy: Department of Computer Science & Software Engineering, Swinburne University of Technology, Melbourne, Australia
- Gary S. Rubin: University College London Institute of Ophthalmology, London, UK
- Philip R. Troyk: Armour College of Engineering, Illinois Institute of Technology, Chicago, IL, USA
8. An update on retinal prostheses. Clin Neurophysiol 2019; 131:1383-1398. [PMID: 31866339] [DOI: 10.1016/j.clinph.2019.11.029]
Abstract
Retinal prostheses are designed to restore a basic sense of sight to people with profound vision loss. They require a relatively intact posterior visual pathway (optic nerve, lateral geniculate nucleus and visual cortex). Retinal implants are options for people with severe stages of retinal degenerative disease such as retinitis pigmentosa and age-related macular degeneration. There have now been three regulatory-approved retinal prostheses. Over five hundred patients have been implanted globally over the past 15 years. Devices generally provide an improved ability to localize high-contrast objects, navigate, and perform basic orientation tasks. Adverse events have included conjunctival erosion, retinal detachment, loss of light perception, and the need for revision surgery, but are rare. There are also specific device risks, including overstimulation (which could cause damage to the retina) or delamination of implanted components, but these are very unlikely. Current challenges include how to improve visual acuity, enlarge the field-of-view, and reduce a complex visual scene to its most salient components through image processing. This review encompasses the work of over 40 individual research groups who have built devices, developed stimulation strategies, or investigated the basic physiology underpinning retinal prostheses. Current technologies are summarized, along with future challenges that face the field.
9. McKone E, Robbins RA, He X, Barnes N. Caricaturing faces to improve identity recognition in low vision simulations: How effective is current-generation automatic assignment of landmark points? PLoS One 2018; 13:e0204361. [PMID: 30286112] [PMCID: PMC6171855] [DOI: 10.1371/journal.pone.0204361]
Abstract
PURPOSE Previous behavioural studies demonstrate that face caricaturing can provide an effective image enhancement method for improving poor face identity perception in low vision simulations (e.g., age-related macular degeneration, bionic eye). To translate caricaturing usefully to patients, assignment of the many face landmark points needed to produce the caricatures must be fully automated. Recent developments in computer science allow automatic detection of 68 face landmark points in real time and across multiple viewpoints. However, previous demonstrations of the behavioural effectiveness of caricaturing used higher-precision caricatures with 147 landmark points per face, assigned by hand. Here, we test the effectiveness of the auto-assigned 68-point caricatures and compare them to the hand-assigned 147-point caricatures. METHOD We assessed human perception of how different in identity pairs of faces appear when veridical (uncaricatured), caricatured with 68 points, and caricatured with 147 points. Across two experiments, we tested two types of low-vision images: a simulation of the blur experienced in macular degeneration (at two blur levels), and a simulation of the phosphenised images seen in prosthetic vision (at three resolutions). RESULTS The 68-point caricatures produced significant improvements in identity discrimination relative to veridical images, and were approximately 50% as effective as the 147-point caricatures. CONCLUSION Realistic translation to patients (e.g., via real-time caricaturing with the enhanced signal sent to smart glasses or a visual prosthetic) is approaching feasibility. For maximum effectiveness, software needs to assign landmark points tracing out all details of feature and face shape, to produce high-precision caricatures.
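The core caricaturing operation is compact enough to sketch. The snippet below (an illustration, not the authors' software) exaggerates a face's landmark points away from the corresponding points of an average face; a full pipeline would then warp the image to the new landmarks. The 60% exaggeration level and the random landmark data are assumptions.

```python
# Landmark-based caricaturing sketch (illustrative only).
import numpy as np

def caricature_landmarks(face_pts, avg_pts, strength=0.6):
    """face_pts, avg_pts: (N, 2) arrays of (x, y) landmark coordinates.
    Returns landmarks exaggerated away from the average face."""
    return avg_pts + (1.0 + strength) * (face_pts - avg_pts)

# Illustrative 68-point layout: random offsets from a shared average shape.
rng = np.random.default_rng(1)
avg = rng.uniform(0, 256, size=(68, 2))
face = avg + rng.normal(0, 3.0, size=(68, 2))  # this identity's deviations
cari = caricature_landmarks(face, avg)

# Each deviation from the mean face grows by the same factor:
print(np.allclose(cari - avg, 1.6 * (face - avg)))  # True
```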
Affiliation(s)
- Elinor McKone: Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, Australian Capital Territory, Australia
- Rachel A. Robbins: Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Xuming He: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Nick Barnes: Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia; Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australian Capital Territory, Australia; Bionic Vision Australia, Carlton, Victoria, Australia
10. Titchener SA, Shivdasani MN, Fallon JB, Petoe MA. Gaze Compensation as a Technique for Improving Hand-Eye Coordination in Prosthetic Vision. Transl Vis Sci Technol 2018; 7:2. [PMID: 29321945] [PMCID: PMC5759363] [DOI: 10.1167/tvst.7.1.2]
Abstract
Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts ("gaze compensation") may improve hand-eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The pointing error was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand-eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity of suppressing eye movement and support the use of gaze compensation to improve hand-eye coordination and localization performance in prosthetic vision.
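A minimal sketch of the gaze-compensation idea follows: rather than always sampling a head-fixed central region of the camera frame, the region of interest is shifted by the measured gaze angle. This is our own illustration, not the study's implementation; the degrees-to-pixels scale, frame size, and ROI size are assumptions.

```python
# Gaze-compensated region-of-interest cropping sketch (illustrative only).
import numpy as np

def crop_roi(frame, gaze_deg, roi=96, px_per_deg=8.0):
    """frame: (H, W) camera image; gaze_deg: (azimuth, elevation) in degrees."""
    h, w = frame.shape
    cx = w // 2 + int(round(gaze_deg[0] * px_per_deg))  # shift right for +azimuth
    cy = h // 2 - int(round(gaze_deg[1] * px_per_deg))  # shift up for +elevation
    cx = np.clip(cx, roi // 2, w - roi // 2)            # keep ROI inside frame
    cy = np.clip(cy, roi // 2, h - roi // 2)
    return frame[cy - roi // 2: cy + roi // 2, cx - roi // 2: cx + roi // 2]

frame = np.random.rand(480, 640)
centered = crop_roi(frame, (0.0, 0.0))       # "center-fixed" condition
compensated = crop_roi(frame, (10.0, -5.0))  # gaze-compensated condition
print(centered.shape, compensated.shape)     # (96, 96) (96, 96)
```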
Affiliation(s)
- Samuel A Titchener: The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Mohit N Shivdasani: The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- James B Fallon: The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Matthew A Petoe: The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
11. Li H, Su X, Wang J, Kan H, Han T, Zeng Y, Chai X. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision. Artif Intell Med 2018; 84:64-78. [PMID: 29129481] [DOI: 10.1016/j.artmed.2017.11.001]
Affiliation(s)
- Heng Li: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiaofan Su: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Jing Wang: College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
- Han Kan: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Tingting Han: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yajie Zeng: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xinyu Chai: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
12. Li H, Han T, Wang J, Lu Z, Cao X, Chen Y, Li L, Zhou C, Chai X. A real-time image optimization strategy based on global saliency detection for artificial retinal prostheses. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.06.014]
13. Li H, Zeng Y, Lu Z, Cao X, Su X, Sui X, Wang J, Chai X. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients. J Neural Eng 2017; 15:026025. [PMID: 29076459] [DOI: 10.1088/1741-2552/aa966d]
Abstract
OBJECTIVE Retinal prosthesis devices have shown great value in restoring some sight to individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients' visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field of patients potentially implanted with a high-density retinal prosthesis, while maintaining visual acuity as much as possible. APPROACH We propose an optimized content-aware image retargeting method that introduces salient object detection based on color and intensity-difference contrast, aiming to remap the important information of a scene into a small visual field while preserving its original scale as much as possible. This may improve prosthetic recipients' perceived visual field and aid in performing visual tasks such as object detection and object recognition. To verify our method, psychophysical experiments (detecting the number of objects and recognizing objects) were conducted under simulated prosthetic vision. As controls, we used three other image retargeting techniques: cropping, scaling, and seam-assisted shrinkability. MAIN RESULTS Results show that our method preserves more key features and achieves significantly higher recognition accuracy than the three other image retargeting methods under conditions of a small visual field and low resolution. SIGNIFICANCE The proposed method can expand the perceived visual field of prosthesis recipients and improve their object detection and recognition performance, suggesting that it may provide an effective option for the image processing module in future high-density retinal implants.
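The sketch below illustrates the general idea of saliency-driven retargeting in a deliberately crude form: saliency is approximated by center-surround intensity contrast, and retargeting simply drops the lowest-saliency pixel columns. The authors' method also uses color contrast and removes connected seams rather than whole columns; everything here is an illustrative simplification.

```python
# Saliency-driven width retargeting sketch (illustrative only).
import numpy as np

def intensity_saliency(img, k=15):
    """Center-surround contrast: |pixel - local mean| via an integral image."""
    pad = np.pad(img, k // 2, mode="edge")
    csum = np.cumsum(np.cumsum(pad, 0), 1)
    csum = np.pad(csum, ((1, 0), (1, 0)))
    h, w = img.shape
    local = (csum[k:k + h, k:k + w] - csum[:h, k:k + w]
             - csum[k:k + h, :w] + csum[:h, :w]) / (k * k)
    return np.abs(img - local)

def retarget_width(img, new_w):
    """Keep the new_w most salient columns, preserving their order."""
    col_sal = intensity_saliency(img).sum(axis=0)
    keep = np.sort(np.argsort(col_sal)[-new_w:])
    return img[:, keep]

img = np.random.rand(120, 160)
small = retarget_width(img, 100)  # squeeze the scene into a narrower field
print(small.shape)                # (120, 100)
```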
14. Irons JL, Gradden T, Zhang A, He X, Barnes N, Scott AF, McKone E. Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing. Vision Res 2017; 137:61-79. [PMID: 28688907] [DOI: 10.1016/j.visres.2017.06.002]
Abstract
The visual prosthesis (or "bionic eye") has become a reality but provides a low-resolution view of the world. Simulating prosthetic vision in normal-vision observers, previous studies report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40×40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods for improving face identity recognition. Inspired by "bionic ear" improvements achieved by altering the signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show that caricaturing improved identity recognition in memory and/or perception (the degree to which two faces look dissimilar) down to a resolution of 32×32 with 30% phosphene dropout. These findings imply that caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants.
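The phosphene-array simulation used in studies of this kind can be sketched compactly: average-pool the image onto an n × n array and disable a random fraction of elements to model dropout. The snippet below is an illustration under those assumptions (Gaussian phosphene rendering and grayscale quantization are omitted), not the authors' exact stimulus generation.

```python
# Phosphene-array simulation with dropout sketch (illustrative only).
import numpy as np

def simulate_phosphenes(img, n=32, dropout=0.3, seed=0):
    """Average-pool a grayscale image onto an n x n phosphene array,
    then zero out a random fraction of elements to model dropout."""
    h, w = img.shape
    img = img[: h - h % n, : w - w % n]
    ph = img.reshape(n, img.shape[0] // n, n, img.shape[1] // n).mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    alive = rng.random((n, n)) >= dropout  # ~30% of elements non-functional
    return ph * alive

face = np.random.rand(256, 256)
percept = simulate_phosphenes(face, n=32, dropout=0.3)
print(f"{(percept > 0).mean():.0%} of phosphenes carry signal")
```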
Affiliation(s)
- Jessica L Irons: Research School of Psychology, Australian National University, Australia; ARC Centre for Cognition and Its Disorders, Australian National University, Australia
- Tamara Gradden: Research School of Psychology, Australian National University, Australia
- Angel Zhang: Research School of Psychology, Australian National University, Australia
- Xuming He: National Information and Communication Technology Australia (NICTA), Australia; College of Engineering and Computer Science, Australian National University, Australia; Data61, CSIRO, Australia
- Nick Barnes: National Information and Communication Technology Australia (NICTA), Australia; College of Engineering and Computer Science, Australian National University, Australia; Bionic Vision Australia, Australia; Data61, CSIRO, Australia
- Adele F Scott: National Information and Communication Technology Australia (NICTA), Australia; Bionic Vision Australia, Australia; Data61, CSIRO, Australia
- Elinor McKone: Research School of Psychology, Australian National University, Australia; ARC Centre for Cognition and Its Disorders, Australian National University, Australia
15. Bermudez-Cameo J, Badias-Herbera A, Guerrero-Viu M, Lopez-Nicolas G, Guerrero JJ. RGB-D Computer Vision Techniques for Simulated Prosthetic Vision. Pattern Recognition and Image Analysis 2017. [DOI: 10.1007/978-3-319-58838-4_47]
16. Stronks HC, Mitchell EB, Nau AC, Barnes N. Visual task performance in the blind with the BrainPort V100 Vision Aid. Expert Rev Med Devices 2016; 13:919-931. [DOI: 10.1080/17434440.2016.1237287]
Affiliation(s)
- H. Christiaan Stronks: Department of Otorhinolaryngology, Leiden University Medical Centre, Leiden, The Netherlands; Smart Vision Systems Research Group, Data61, CSIRO, Canberra, Australia; Department of Neuroscience, The John Curtin School of Medical Research, Australian National University, Canberra, Australia
- Ellen B. Mitchell: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; Children's Hospital of Pittsburgh of UPMC, Pittsburgh, PA, USA
- Nick Barnes: Smart Vision Systems Research Group, Data61, CSIRO, Canberra, Australia; Research School of Engineering, College of Engineering and Computer Science, Australian National University, Canberra, Australia
17. Barnes N, Scott AF, Lieby P, Petoe MA, McCarthy C, Stacey A, Ayton LN, Sinclair NC, Shivdasani MN, Lovell NH, McDermott HJ, Walker JG. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering. J Neural Eng 2016; 13:036013. [DOI: 10.1088/1741-2560/13/3/036013]
18. Horne L, Alvarez JM, McCarthy C, Barnes N. Semantic labelling to aid navigation in prosthetic vision. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2015:3379-82. [PMID: 26737017] [DOI: 10.1109/embc.2015.7319117]
Abstract
Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from the limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. Using semantic labelling techniques, we demonstrate that a computer system can aid in obstacle avoidance and in localizing distant objects. Our system automatically classifies each pixel in a natural image into a semantic class, then produces an image from the induced visual percepts that highlights certain classes. This technique allows the user to clearly perceive the location of different types of objects in their field of view, and can be adapted for a range of navigation tasks.
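The highlighting step can be sketched independently of any particular segmentation model. In the snippet below, a per-pixel label map (here a random stand-in for real segmentation output) is used to boost task-relevant classes and dim everything else before phosphene rendering; the class IDs and gain values are illustrative assumptions.

```python
# Semantic-class highlighting sketch (illustrative only).
import numpy as np

def highlight_classes(intensity, labels, boost, gain=1.0, dim=0.2):
    """intensity: (H, W) image; labels: (H, W) int class map;
    boost: set of class IDs to emphasize (e.g., obstacles, doorways)."""
    mask = np.isin(labels, list(boost))
    return np.clip(intensity * np.where(mask, gain, dim), 0.0, 1.0)

H, W = 120, 160
intensity = np.random.rand(H, W)
labels = np.random.randint(0, 5, size=(H, W))  # stand-in segmentation output
out = highlight_classes(intensity, labels, boost={2, 4})
print(out.mean() < intensity.mean())           # non-highlighted classes dimmed
```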
19. Zapf MPH, Boon MY, Matteucci PB, Lovell NH, Suaning GJ. Towards an assistive peripheral visual prosthesis for long-term treatment of retinitis pigmentosa: evaluating mobility performance in immersive simulations. J Neural Eng 2015; 12:036001. [DOI: 10.1088/1741-2560/12/3/036001]