1. Guo K, Lu J, Wu Y, Hu X, Yang H. The Latest Research Progress on Bionic Artificial Hands: A Systematic Review. Micromachines 2024;15:891. PMID: 39064402; PMCID: PMC11278702; DOI: 10.3390/mi15070891.
Abstract
Bionic prosthetic hands hold the potential to replicate the functionality of human hands. The use of bionic limbs can assist amputees in performing everyday activities. This article systematically reviews the research progress on bionic prostheses, with a focus on control mechanisms, sensory feedback integration, and mechanical design innovations. It emphasizes the use of bioelectrical signals, such as electromyography (EMG), for prosthetic control and discusses the application of machine learning algorithms to enhance the accuracy of gesture recognition. Additionally, the paper explores advancements in sensory feedback technologies, including tactile, visual, and auditory modalities, which enhance user interaction by providing essential environmental feedback. The mechanical design of prosthetic hands is also examined, with particular attention to achieving a balance between dexterity, weight, and durability. Our contribution consists of compiling current research trends and identifying key areas for future development, including the enhancement of control system integration and improving the aesthetic and functional resemblance of prostheses to natural limbs. This work aims to inform and inspire ongoing research that seeks to refine the utility and accessibility of prosthetic hands for amputees, emphasizing user-centric innovations.
Affiliation(s)
- Kai Guo
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Jingxin Lu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Yuwen Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xuhui Hu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Hongbo Yang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
2. de Ruyter van Steveninck J, Nipshagen M, van Gerven M, Güçlü U, Güçlütürk Y, van Wezel R. Gaze-contingent processing improves mobility, scene recognition and visual search in simulated head-steered prosthetic vision. J Neural Eng 2024;21:026037. PMID: 38502957; DOI: 10.1088/1741-2552/ad357d.
Abstract
Objective: The enabling technology of visual prosthetics for the blind is making rapid progress. However, there are still uncertainties regarding the functional outcomes, which can depend on many design choices in the development. In visual prostheses with a head-mounted camera, a particularly challenging question is how to deal with the gaze-locked visual percept associated with spatial updating conflicts in the brain. The current study investigates a recently proposed compensation strategy based on gaze-contingent image processing with eye tracking. Gaze-contingent processing is expected to reinforce natural-like visual scanning and to reestablish spatial updating based on eye movements. The beneficial effects remain to be investigated for daily-life activities in complex visual environments. Approach: The current study evaluates the benefits of gaze-contingent processing versus gaze-locked and gaze-ignored simulations in the context of mobility, scene recognition and visual search, using a virtual reality simulated prosthetic vision paradigm with sighted subjects. Main results: Compared to gaze-locked vision, gaze-contingent processing was consistently found to improve speed in all experimental tasks, as well as the subjective quality of vision. Similar or further improvements were found in a control condition that ignores gaze-dependent effects, a simulation that is unattainable in clinical reality. Significance: Our results suggest that gaze-locked vision and spatial updating conflicts can be debilitating for complex visually guided activities of daily living such as mobility and orientation. Therefore, for prospective users of head-steered prostheses with an unimpaired oculomotor system, the inclusion of a compensatory eye-tracking system is strongly endorsed.
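As a concrete illustration of the gaze-locked versus gaze-contingent distinction described in this abstract, the following minimal Python sketch re-centers the image window that feeds the simulated percept on the tracked gaze point rather than on the head-centered image center. It is not the study's implementation; the frame, window size, and gaze coordinates are toy values chosen for demonstration.

```python
# Illustrative sketch of gaze-contingent sampling (assumed toy values, not the
# study's code): the patch driving the simulated phosphenes follows the gaze
# point instead of staying locked to the head-centered image center.
import numpy as np

def sample_window(frame, center_xy, size=64):
    """Crop a size x size patch around center_xy, clamped to the frame."""
    h, w = frame.shape[:2]
    cx = int(np.clip(center_xy[0], size // 2, w - size // 2))
    cy = int(np.clip(center_xy[1], size // 2, h - size // 2))
    return frame[cy - size // 2:cy + size // 2, cx - size // 2:cx + size // 2]

frame = np.random.rand(480, 640)   # stand-in for a head-mounted camera image
head_center = (320, 240)           # gaze-locked: always the image center
gaze_point = (455, 130)            # gaze-contingent: from the eye tracker

gaze_locked_input = sample_window(frame, head_center)
gaze_contingent_input = sample_window(frame, gaze_point)
# Each patch would then be converted to a phosphene pattern; in the
# gaze-contingent condition it is rendered at the current gaze position.
```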
Affiliation(s)
- Mo Nipshagen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marcel van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Umut Güçlü
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Richard van Wezel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, The Netherlands
3. van der Grinten M, de Ruyter van Steveninck J, Lozano A, Pijnacker L, Rueckauer B, Roelfsema P, van Gerven M, van Wezel R, Güçlü U, Güçlütürk Y. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife 2024;13:e85812. PMID: 38386406; PMCID: PMC10883675; DOI: 10.7554/elife.85812.
Abstract
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance enhanced perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that can run in real time and uses differentiable operations to allow for gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator's suitability for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, as well as for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
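To illustrate the differentiable-rendering idea described in this abstract, below is a minimal sketch of a Gaussian-blob phosphene renderer in PyTorch. It is not the authors' simulator; the phosphene layout, sizes, and image resolution are arbitrary values chosen for demonstration, and in a retinotopically faithful model the centers and spreads would be derived from cortical magnification.

```python
# Minimal sketch of a differentiable phosphene renderer (illustrative only):
# each electrode is drawn as a Gaussian blob whose brightness is a
# differentiable function of its stimulation amplitude, so gradients can flow
# back into an encoder network. Positions and sigmas here are made-up values.
import torch

def render_phosphenes(amplitudes, centers, sigmas, img_size=256):
    """amplitudes: (N,) stimulation strengths; centers: (N, 2) pixel
    coordinates; sigmas: (N,) phosphene spreads in pixels."""
    ys = torch.arange(img_size, dtype=torch.float32)
    xs = torch.arange(img_size, dtype=torch.float32)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")            # (H, W) grids
    dy = yy[None] - centers[:, 1, None, None]                 # (N, H, W)
    dx = xx[None] - centers[:, 0, None, None]
    blobs = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigmas[:, None, None] ** 2))
    frame = (amplitudes[:, None, None] * blobs).sum(dim=0)    # superimpose
    return frame.clamp(0, 1)

# Toy usage: three phosphenes; gradients reach the stimulation amplitudes.
amps = torch.tensor([0.8, 0.5, 1.0], requires_grad=True)
centers = torch.tensor([[64.0, 64.0], [128.0, 100.0], [200.0, 180.0]])
sigmas = torch.tensor([4.0, 6.0, 9.0])
image = render_phosphenes(amps, centers, sigmas)
loss = (image ** 2).mean()   # placeholder loss against a blank target
loss.backward()              # amps.grad now holds gradients for optimization
```

Because every operation is differentiable, the same pattern extends to training an encoder network end to end, which is the optimization setting such a simulator targets.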
Affiliation(s)
- Antonio Lozano
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Laura Pijnacker
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bodo Rueckauer
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Pieter Roelfsema
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Marcel van Gerven
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Richard van Wezel
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, Netherlands
- Umut Güçlü
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
4. Wang HZ, Wong YT. A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision. J Neural Eng 2023;20:046027. PMID: 37531948; PMCID: PMC10594539; DOI: 10.1088/1741-2552/aceca2.
Abstract
Objective: We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether we can improve visual performance using a novel clustering algorithm. Approach: Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture the phosphene map distributions generated in implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on the cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning and used a density-based clustering algorithm to relocate the centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and the object recognition task. Main results: Performance on control maps was significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed potential to improve visual ability across multiple sessions in the object recognition task. Significance: The current paradigm is the first to simulate the experience of cortical prosthetic vision based on brain scans and implant placement, which captures the spatial distribution of phosphenes more realistically. Utilisation of evenly distributed maps may overestimate the performance that visual prosthetics can restore. This simulation paradigm could be used in clinical practice when planning where best to implant cortical visual prostheses.
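As a rough illustration of how a density-based clustering step could relocate stimulus centroids onto regions where phosphenes are concentrated, here is a short sketch using scikit-learn's DBSCAN. The phosphene coordinates and clustering parameters are fabricated for the example and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): cluster a simulated phosphene
# map with DBSCAN and take each dense cluster's centroid as a candidate
# location for a relocated visual stimulus. All values below are made up.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Fake "MRI-derived" phosphene map: two dense patches plus scattered points.
phosphenes = np.vstack([
    rng.normal([-5.0, 2.0], 0.8, size=(40, 2)),
    rng.normal([4.0, -3.0], 1.0, size=(30, 2)),
    rng.uniform(-10, 10, size=(15, 2)),
])

labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(phosphenes)

# Centroid of each dense cluster (label -1 marks isolated/noise phosphenes).
centroids = np.array([
    phosphenes[labels == k].mean(axis=0)
    for k in sorted(set(labels)) if k != -1
])
print(centroids)  # candidate locations for relocated stimulus centroids
```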
Affiliation(s)
- Haozhe Zac Wang
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Yan Tat Wong
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Department of Physiology, Monash University, Melbourne, Australia
5. Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023;23:5. PMID: 37140911; PMCID: PMC10166121; DOI: 10.1167/jov.23.5.5.
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
6. Titchener SA, Goossens J, Kvansakul J, Nayagam DAX, Kolic M, Baglin EK, Ayton LN, Abbott CJ, Luu CD, Barnes N, Kentler WG, Shivdasani MN, Allen PJ, Petoe MA. Estimating Phosphene Locations Using Eye Movements of Suprachoroidal Retinal Prosthesis Users. Transl Vis Sci Technol 2023;12:20. PMID: 36943168; PMCID: PMC10043502; DOI: 10.1167/tvst.12.3.20.
Abstract
Purpose: Accurate mapping of phosphene locations from visual prostheses is vital to encode spatial information. This process may involve the subject pointing to evoked phosphene locations with their finger. Here, we demonstrate phosphene mapping for a retinal implant using eye movements and compare it with retinotopic electrode positions and previous results using conventional finger-based mapping. Methods: Three suprachoroidal retinal implant recipients (NCT03406416) indicated the spatial position of phosphenes. Electrodes were stimulated individually, and the subjects moved their finger (finger based) or their eyes (gaze based) to the perceived phosphene location. The distortion of the measured phosphene locations from the expected locations (retinotopic electrode locations) was characterized with Procrustes analysis. Results: The finger-based phosphene locations were compressed spatially relative to the expected locations in all three subjects, but preserved the general retinotopic arrangement (scale factors ranged from 0.37 to 0.83). In two subjects, the gaze-based phosphene locations were similar to the expected locations (scale factors of 0.72 and 0.99). For the third subject, there was no apparent relationship between gaze-based phosphene locations and electrode locations (scale factor of 0.07). Conclusions: Gaze-based phosphene mapping was achievable in two of three tested retinal prosthesis subjects, and their derived phosphene maps correlated well with the retinotopic electrode layout. A third subject could not produce a coherent gaze-based phosphene map, but this may have revealed that their phosphenes were spatially indistinct. Translational Relevance: Gaze-based phosphene mapping is a viable alternative to conventional finger-based mapping, but may not be suitable for all subjects.
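The scale factors quoted above come from a Procrustes-style fit between reported and expected phosphene locations. The following sketch shows one way such a scale factor can be computed with SciPy's orthogonal Procrustes routine; the coordinates are invented for illustration, and this is not the study's analysis code.

```python
# Illustrative sketch of estimating a spatial compression ("scale") factor
# between expected electrode locations and reported phosphene locations via
# an orthogonal Procrustes fit. All coordinates below are made up.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_scale(expected, reported):
    """Scale factor of the similarity transform mapping centered expected
    locations onto centered reported locations (1.0 = no compression)."""
    A = expected - expected.mean(axis=0)
    B = reported - reported.mean(axis=0)
    R, s = orthogonal_procrustes(A, B)   # rotation R, raw scale s
    return s / (A ** 2).sum()            # least-squares scale factor

expected = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
reported = 0.4 * expected + np.array([2.0, -1.0])   # compressed and shifted
print(procrustes_scale(expected, reported))          # prints ~0.4
```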
Affiliation(s)
- Samuel A Titchener
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- Jeroen Goossens
- Donders Institute for Brain Cognition and Behaviour, Radboudumc, the Netherlands
- Jessica Kvansakul
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
- David A X Nayagam
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Pathology, University of Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Maria Kolic
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Elizabeth K Baglin
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Lauren N Ayton
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
- Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, VIC, Australia
- Carla J Abbott
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
- Chi D Luu
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
- Nick Barnes
- Data61, CSIRO, Canberra, ACT, Australia
- Research School of Engineering, Australian National University, ACT, Australia
- William G Kentler
- Department of Biomedical Engineering, University of Melbourne, Melbourne, VIC, Australia
- Mohit N Shivdasani
- Graduate School of Biomedical Engineering, University of New South Wales, Kensington, NSW, Australia
- Penelope J Allen
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, Australia
- Matthew A Petoe
- Bionics Institute, East Melbourne, VIC, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, VIC, Australia
7. Kasowski J, Beyeler M. Immersive Virtual Reality Simulations of Bionic Vision. Augmented Humans 2022:82-93. PMID: 35856703; PMCID: PMC9289996; DOI: 10.1145/3519391.3522752.
Abstract
Bionic vision uses neuroprostheses to restore useful vision to people living with incurable blindness. However, a major outstanding challenge is predicting what people "see" when they use their devices. The limited field of view of current devices necessitates head movements to scan the scene, which is difficult to simulate on a computer screen. In addition, many computational models of bionic vision lack biological realism. To address these challenges, we present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to "see through the eyes" of a bionic eye user. To demonstrate its utility, we systematically evaluated how clinically reported visual distortions affect performance in a letter recognition and an immersive obstacle avoidance task. Our results highlight the importance of using an appropriate phosphene model when predicting visual outcomes for bionic vision.
8. Petoe MA, Titchener SA, Kolic M, Kentler WG, Abbott CJ, Nayagam DAX, Baglin EK, Kvansakul J, Barnes N, Walker JG, Epp SB, Young KA, Ayton LN, Luu CD, Allen PJ. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Interim Clinical Trial Results. Transl Vis Sci Technol 2021;10:12. PMID: 34581770; PMCID: PMC8479573; DOI: 10.1167/tvst.10.10.12.
Abstract
Purpose: To report the initial safety and efficacy results of a second-generation (44-channel) suprachoroidal retinal prosthesis at 56 weeks after device activation. Methods: Four subjects, with advanced retinitis pigmentosa and bare-light perception only, enrolled in a phase II trial (NCT03406416). A 44-channel electrode array was implanted in a suprachoroidal pocket. Device stability, efficacy, and adverse events were investigated at 12-week intervals. Results: All four subjects were implanted successfully and there were no device-related serious adverse events. Color fundus photography indicated a mild postoperative subretinal hemorrhage in two recipients, which cleared spontaneously within 2 weeks. Optical coherence tomography confirmed device stability and position under the macula. Screen-based localization accuracy was significantly better for all subjects with the device on versus device off. Two subjects were significantly better with the device on in a motion discrimination task at 7, 15, and 30°/s and in a spatial discrimination task at 0.033 cycles per degree. All subjects were more accurate with the device on than off at walking toward a target on a modified door task, localizing and touching tabletop objects, and detecting obstacles in an obstacle avoidance task. A positive effect of the implant on subjects' daily lives was confirmed by an orientation and mobility assessor and by subject self-report. Conclusions: These interim study data demonstrate that the suprachoroidal prosthesis is safe and provides significant improvements in functional vision, activities of daily living, and observer-rated quality of life. Translational Relevance: A suprachoroidal prosthesis can provide clinically useful artificial vision while maintaining a safe surgical profile.
Affiliation(s)
- Matthew A Petoe
- Bionics Institute, East Melbourne, Victoria, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Samuel A Titchener
- Bionics Institute, East Melbourne, Victoria, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Maria Kolic
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- William G Kentler
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
- Carla J Abbott
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- David A X Nayagam
- Bionics Institute, East Melbourne, Victoria, Australia
- Department of Pathology, University of Melbourne, St. Vincent's Hospital, Victoria, Australia
- Elizabeth K Baglin
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Jessica Kvansakul
- Bionics Institute, East Melbourne, Victoria, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Victoria, Australia
- Nick Barnes
- Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia
- Janine G Walker
- Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia
- Health & Biosecurity, CSIRO, Canberra, Australian Capital Territory, Australia
- Kiera A Young
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Lauren N Ayton
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Department of Optometry and Vision Sciences, University of Melbourne, Australia
- Chi D Luu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Penelope J Allen
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
9. Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision. Sci Rep 2021;11:11121. PMID: 34045485; PMCID: PMC8160142; DOI: 10.1038/s41598-021-86996-4.
Abstract
The visual pathway is retinotopically organized and sensitive to gaze position, leading us to hypothesize that subjects using visual prostheses incorporating eye position would perform better on perceptual tasks than with devices that are merely head-steered. We had sighted subjects read sentences from the MNREAD corpus through a simulation of artificial vision under conditions of full gaze compensation and of head-steered viewing. With 2000 simulated phosphenes, subjects (n = 23) were immediately able to read under full gaze compensation and were assessed at an equivalent visual acuity of 1.0 logMAR, but were nearly unable to perform the task under head-steered viewing. At the largest font size tested, 1.4 logMAR, subjects read at 59 WPM (50% of normal speed) with 100% accuracy under the full-gaze condition, but at 0.7 WPM (under 1% of normal) with below 15% accuracy under head steering. We conclude that gaze-compensated prostheses are likely to produce considerably better patient outcomes than those not incorporating eye movements.
10. Abbasi B, Rizzo JF. Advances in Neuroscience, Not Devices, Will Determine the Effectiveness of Visual Prostheses. Semin Ophthalmol 2021;36:168-175. PMID: 33734937; DOI: 10.1080/08820538.2021.1887902.
Abstract
Background: Innovations in engineering and neuroscience have enabled the development of sophisticated visual prosthetic devices. In clinical trials, these devices have provided visual acuities as high as 20/460, enabled coarse navigation, and even allowed for reading of short words. However, long-term commercial viability arguably rests on attaining even better vision and more definitive improvements in tasks of daily living and quality of life. Purpose: Here we review technological and biological obstacles in the implementation of visual prosthetics. Conclusions: Research in the visual prosthetic field has tackled significant technical challenges, including biocompatibility, signal spread through neural tissue, and inadvertent activation of passing axons; however, significant gaps in knowledge remain in the realm of neuroscience, including the neural code of vision and visual plasticity. We assert that further optimization of prosthetic devices alone will not provide markedly improved visual outcomes without significant advances in our understanding of neuroscience.
Affiliation(s)
- Bardia Abbasi
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
- Joseph F Rizzo
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
11. Titchener SA, Kvansakul J, Shivdasani MN, Fallon JB, Nayagam DAX, Epp SB, Williams CE, Barnes N, Kentler WG, Kolic M, Baglin EK, Ayton LN, Abbott CJ, Luu CD, Allen PJ, Petoe MA. Oculomotor Responses to Dynamic Stimuli in a 44-Channel Suprachoroidal Retinal Prosthesis. Transl Vis Sci Technol 2020;9:31. PMID: 33384885; PMCID: PMC7757638; DOI: 10.1167/tvst.9.13.31.
Abstract
Purpose: To investigate oculomotor behavior in response to dynamic stimuli in retinal implant recipients. Methods: Three suprachoroidal retinal implant recipients performed a four-alternative forced-choice motion discrimination task over six sessions longitudinally. Stimuli were a single white bar ("moving bar") or a series of white bars ("moving grating") sweeping left, right, up, or down across a 42″ monitor. Performance was compared with normal video processing and scrambled video processing (randomized image-to-electrode mapping to disrupt spatiotemporal structure). Eye and head movement was monitored throughout the task. Results: Two subjects had diminished performance with scrambling, suggesting that they used retinotopic discrimination in the normal condition; these subjects made smooth pursuit eye movements congruent with the direction of the moving bar stimulus. The same two subjects also made stimulus-related eye movements resembling the optokinetic reflex (OKR) for moving grating stimuli, but the movement was incongruent with the stimulus direction. The third subject was less adept at the task, appeared primarily reliant on head position cues (head movements were congruent with the stimulus direction), and did not exhibit retinotopic discrimination or the associated eye movements. Conclusions: Our observation of smooth pursuit indicates residual functionality of cortical direction-selective circuits and implies a more naturalistic perception of motion than expected. A distorted OKR implies improper functionality of retinal direction-selective circuits, possibly due to retinal remodeling or the non-selective nature of the electrical stimulation. Translational Relevance: Retinal implant users can make naturalistic eye movements in response to moving stimuli, highlighting the potential for eye tracker feedback to improve perceptual localization and image stabilization in camera-based visual prostheses.
Affiliation(s)
- Samuel A Titchener
- Bionics Institute, East Melbourne, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Australia
- Jessica Kvansakul
- Bionics Institute, East Melbourne, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Australia
- Mohit N Shivdasani
- Graduate School of Biomedical Engineering, University of New South Wales, Kensington, Australia
- Bionics Institute, East Melbourne, Australia
- James B Fallon
- Bionics Institute, East Melbourne, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Australia
- D A X Nayagam
- Bionics Institute, East Melbourne, Australia
- Department of Pathology, University of Melbourne, St. Vincent's Hospital, Melbourne, Australia
- Chris E Williams
- Bionics Institute, East Melbourne, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Australia
- Nick Barnes
- Data61, CSIRO, Canberra, Australia
- Research School of Engineering, Australian National University, Canberra, Australia
- William G Kentler
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Maria Kolic
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Elizabeth K Baglin
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Lauren N Ayton
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Carla J Abbott
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Chi D Luu
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Penelope J Allen
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Matthew A Petoe
- Bionics Institute, East Melbourne, Australia
- Medical Bionics Department, University of Melbourne, Melbourne, Australia
12. Nowik K, Langwińska-Wośko E, Skopiński P, Nowik KE, Szaflik JP. Bionic eye review – An update. J Clin Neurosci 2020;78:8-19. DOI: 10.1016/j.jocn.2020.05.041.
13. An update on retinal prostheses. Clin Neurophysiol 2019;131:1383-1398. PMID: 31866339; DOI: 10.1016/j.clinph.2019.11.029.
Abstract
Retinal prostheses are designed to restore a basic sense of sight to people with profound vision loss. They require a relatively intact posterior visual pathway (optic nerve, lateral geniculate nucleus and visual cortex). Retinal implants are options for people with severe stages of retinal degenerative disease such as retinitis pigmentosa and age-related macular degeneration. There have now been three regulatory-approved retinal prostheses. Over five hundred patients have been implanted globally over the past 15 years. Devices generally provide an improved ability to localize high-contrast objects, navigate, and perform basic orientation tasks. Adverse events have included conjunctival erosion, retinal detachment, loss of light perception, and the need for revision surgery, but are rare. There are also specific device risks, including overstimulation (which could cause damage to the retina) or delamination of implanted components, but these are very unlikely. Current challenges include how to improve visual acuity, enlarge the field-of-view, and reduce a complex visual scene to its most salient components through image processing. This review encompasses the work of over 40 individual research groups who have built devices, developed stimulation strategies, or investigated the basic physiology underpinning retinal prostheses. Current technologies are summarized, along with future challenges that face the field.
14. Titchener SA, Ayton LN, Abbott CJ, Fallon JB, Shivdasani MN, Caruso E, Sivarajah P, Petoe MA. Head and Gaze Behavior in Retinitis Pigmentosa. Invest Ophthalmol Vis Sci 2019;60:2263-2273. PMID: 31112611; DOI: 10.1167/iovs.18-26121.
Abstract
Purpose: Peripheral visual field loss (PVFL) due to retinitis pigmentosa (RP) decreases saccades to areas of visual defect, leading to a habitually confined range of eye movement. We investigated the relative contributions of head and eye movement in RP patients and normal-sighted controls to determine whether this reduced eye movement is offset by increased head movement. Methods: Eye-head coordination was examined in 18 early-moderate RP patients, 4 late-stage RP patients, and 19 normal-sighted controls. Three metrics were extracted: the extent of eye, head, and total gaze (eye+head) movement while viewing a naturalistic scene; head gain, the ratio of head movement to total gaze movement during smooth pursuit; and the customary oculomotor range (COMR), the orbital range within which the eye is preferentially maintained during a pro-saccade task. Results: The late-stage RP group had minimal gaze movement and could not discern the naturalistic scene. Variance in head position in early-moderate RP was significantly greater than in controls, whereas variance in total gaze was similar. Head gain was greater in early-moderate RP than in controls, whereas COMR was smaller. Across groups, visual field extent was negatively correlated with head gain and positively correlated with COMR. Accounting for age effects, these results demonstrate increased head movement at the expense of eye movement in participants with PVFL. Conclusions: RP is associated with an increased propensity for head movement during gaze shifts, and the magnitude of this effect is dependent on the severity of visual field loss.
Affiliation(s)
- Samuel A Titchener
- The Bionics Institute of Australia, East Melbourne, Victoria, Australia
- Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
- Lauren N Ayton
- Centre for Eye Research Australia, East Melbourne, Victoria, Australia
- Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
- Carla J Abbott
- Centre for Eye Research Australia, East Melbourne, Victoria, Australia
- Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
- James B Fallon
- The Bionics Institute of Australia, East Melbourne, Victoria, Australia
- Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
- Mohit N Shivdasani
- The Bionics Institute of Australia, East Melbourne, Victoria, Australia
- Graduate School of Biomedical Engineering, The University of New South Wales, Kensington, New South Wales, Australia
- Emily Caruso
- Centre for Eye Research Australia, East Melbourne, Victoria, Australia
- Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
- Pyrawy Sivarajah
- Centre for Eye Research Australia, East Melbourne, Victoria, Australia
- Matthew A Petoe
- The Bionics Institute of Australia, East Melbourne, Victoria, Australia
- Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
15. Endo T, Hozumi K, Hirota M, Kanda H, Morimoto T, Nishida K, Fujikado T. The influence of visual field position induced by a retinal prosthesis simulator on mobility. Graefes Arch Clin Exp Ophthalmol 2019;257:1765-1770. PMID: 31147839; DOI: 10.1007/s00417-019-04375-2.
Abstract
PURPOSE: Our aim is to develop a new generation of suprachoroidal-transretinal stimulation (STS) retinal prosthesis using a dual-stimulating electrode array to enlarge the visual field. In the present study, we aimed to examine how the position and size of the visual field created by a retinal prosthesis simulator influenced mobility. METHODS: Twelve healthy subjects wore retinal prosthesis simulators. Images captured by a web camera attached to a head-mounted display (HMD) were processed by a computer and displayed on the HMD. Three types of artificial visual fields, designed to imitate the phosphenes obtained by a single (5 × 5 electrodes; visual angle, 15°) or dual (5 × 5 electrodes × 2; visual angle, 30°) electrode array, were created. Visual field (VF)1 is an inferior visual field, corresponding to a dual-electrode array implanted in the superior hemisphere. VF2 is a superior visual field, corresponding to a single-electrode array implanted in the inferior hemisphere. VF3 is a superior visual field, corresponding to a dual-electrode array implanted in the inferior hemisphere. In each type of artificial visual field, a natural circular visual field (visual angle, 5°), imitating the vision of patients with advanced retinitis pigmentosa, was present at the center. Subjects were instructed to walk along a black carpet (6 m long × 2.2 m wide) without stepping on attached white circular obstacles. Each obstacle was 20 cm in diameter, and obstacles were installed at 40-cm intervals. We measured the number of footsteps on the obstacles, the time taken to complete the obstacle course, and the extent of head movement used to scan the area (head scanning). We then compared the results recorded for these three types of artificial visual field. RESULTS: The number of footsteps on obstacles was lowest in VF3 (one-way ANOVA, P = 0.028; Fisher's LSD, VF1 versus VF3 P = 0.039, VF2 versus VF3 P = 0.012). No significant difference was observed in the time to complete the obstacle course or the extent of head movement between the three visual fields. CONCLUSION: The superior and wide visual field (VF3) obtained with the retinal prosthesis simulator resulted in better mobility performance than the other visual fields.
Affiliation(s)
- Takao Endo
- Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Kenta Hozumi
- Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Masakazu Hirota
- Department of Applied Visual Science, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Hiroyuki Kanda
- Department of Applied Visual Science, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Takeshi Morimoto
- Department of Applied Visual Science, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Kohji Nishida
- Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Takashi Fujikado
- Department of Applied Visual Science, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, 565-0871, Japan.