1
Nadolskis LG, Turkstra LM, Larnyo E, Beyeler M. Great expectations: Aligning visual prosthetic development with implantee needs. medRxiv 2024:2024.03.12.24304186. PMID: 38559196; PMCID: PMC10980134; DOI: 10.1101/2024.03.12.24304186.
Abstract
Purpose. Visual prosthetics have emerged as a promising assistive technology for individuals with vision loss, yet research often overlooks the human aspects of this technology. While previous studies have concentrated on the perceptual experiences of implant recipients (implantees) or the attitudes of potential implantees towards near-future implants, a systematic account of how current implants are being used in everyday life is still lacking. Methods. We interviewed six recipients of the most widely used visual implants (Argus II and Orion) and six leading researchers in the field. Through thematic and statistical analyses, we explored the daily usage of these implants by implantees and compared their responses to the expectations of researchers. We also sought implantees' input on desired features for future versions, aiming to inform the development of the next generation of implants. Results. Although implants are designed to facilitate various daily activities, we found that implantees use them less frequently than researchers expected. This discrepancy primarily stems from issues with usability and reliability, with implantees finding alternative methods to accomplish tasks, reducing the need to rely on the implant. For future implants, implantees emphasized the desire for improved vision, smart integration, and increased independence. Conclusions. Our study reveals a significant gap between researcher expectations and implantee experiences with visual prostheses, underscoring the importance of focusing future research on usability and real-world application. Translational relevance. This work advocates for a better alignment between technology development and implantee needs to enhance the clinical relevance and practical utility of visual prosthetics.
Affiliation(s)
- Lucas Gil Nadolskis
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara
- Lily Marie Turkstra
- Department of Psychological & Brain Sciences, University of California, Santa Barbara
- Ebenezer Larnyo
- Center for Black Studies Research, University of California, Santa Barbara
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara
- Department of Computer Science, University of California, Santa Barbara
2
van der Grinten M, de Ruyter van Steveninck J, Lozano A, Pijnacker L, Rueckauer B, Roelfsema P, van Gerven M, van Wezel R, Güçlü U, Güçlütürk Y. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife 2024;13:e85812. PMID: 38386406; PMCID: PMC10883675; DOI: 10.7554/elife.85812.
Abstract
Blindness affects millions of people around the world. Cortical visual prostheses are a promising solution for restoring a form of vision to some individuals: they bypass part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that can run in real time and uses differentiable operations to allow for gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator's suitability both for computational applications such as end-to-end deep learning-based prosthetic vision optimization and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
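The core rendering step of such a simulator can be pictured with a minimal pure-Python sketch (not the authors' PyTorch implementation; the `phosphene_frame` function, grid parameters, and the eccentricity-to-size scaling constants are all illustrative assumptions): each electrode is mapped to a visual-field location and rendered as a Gaussian blob whose size grows with eccentricity, a crude stand-in for cortical magnification.

```python
import math

def phosphene_frame(electrodes, size=32, fov=10.0):
    """Render Gaussian phosphenes on a size x size grid.

    electrodes: list of (x_deg, y_deg, amplitude) in visual-field degrees.
    Phosphene radius grows with eccentricity (hypothetical constants).
    """
    frame = [[0.0] * size for _ in range(size)]
    for x_deg, y_deg, amp in electrodes:
        ecc = math.hypot(x_deg, y_deg)
        sigma = 0.3 + 0.1 * ecc            # degrees; assumed scaling
        for i in range(size):
            for j in range(size):
                # pixel centre in degrees, origin at fixation
                px = (j + 0.5) / size * fov - fov / 2
                py = (i + 0.5) / size * fov - fov / 2
                d2 = (px - x_deg) ** 2 + (py - y_deg) ** 2
                frame[i][j] += amp * math.exp(-d2 / (2 * sigma ** 2))
    return frame
```

In the paper's actual simulator each of these operations is a differentiable tensor operation, so gradients can flow from a loss on the rendered frame back to the stimulation parameters.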
Affiliation(s)
- Antonio Lozano
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Laura Pijnacker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bodo Rueckauer
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Pieter Roelfsema
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Marcel van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Richard van Wezel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, Netherlands
- Umut Güçlü
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
3
Leong F, Rahmani B, Psaltis D, Moser C, Ghezzi D. An actor-model framework for visual sensory encoding. Nat Commun 2024;15:808. PMID: 38280912; PMCID: PMC10821921; DOI: 10.1038/s41467-024-45105-5.
Abstract
A fundamental challenge in neuroengineering is determining a proper artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it holds a crucial role in prosthetic devices restoring sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is to downsample images captured by a camera to a size matching the number of inputs and resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance compared to learning-free downsampling methods. We have validated a learning-based approach (actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images eliciting neuronal responses in-silico and ex-vivo with higher neuronal reliability than those produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel's weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses. Ultimately, this framework could be applied to encoding strategies in other sensory prostheses, such as cochlear implants or limb prostheses.
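The learning-free baseline that this kind of study compares against is typically plain block averaging (mean pooling); a minimal sketch, with function name and interface as assumptions rather than the paper's code:

```python
def block_average(img, factor):
    """Downsample a grayscale image (list of rows) by averaging
    non-overlapping factor x factor blocks; trailing remainder is dropped."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [img[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

The actor-model framework instead learns the downsampling kernel so that the downsampled image drives retinal ganglion cell responses more reliably than this fixed averaging does.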
Affiliation(s)
- Franklin Leong
- Medtronic Chair in Neuroengineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Babak Rahmani
- Laboratory of Applied Photonics Devices, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Microsoft Research, Cambridge, UK
- Demetri Psaltis
- Optics Laboratory, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Christophe Moser
- Laboratory of Applied Photonics Devices, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Diego Ghezzi
- Medtronic Chair in Neuroengineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Ophthalmic and Neural Technologies Laboratory, Department of Ophthalmology, University of Lausanne, Hôpital ophtalmique Jules-Gonin, Fondation Asile des Aveugles, Lausanne, Switzerland
4
Wang HZ, Wong YT. A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision. J Neural Eng 2023;20:046027. PMID: 37531948; PMCID: PMC10594539; DOI: 10.1088/1741-2552/aceca2.
Abstract
Objective. We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether we can improve visual performance using a novel clustering algorithm. Approach. Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture the phosphene map distributions generated in implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on the cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning, and a density-based clustering algorithm was then used to relocate the centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and an object recognition task. Main results. Performance on control maps was significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed the potential to improve visual ability across multiple sessions in the object recognition task. Significance. The current paradigm is the first to simulate the experience of cortical prosthetic vision based on brain scans and implant placement, capturing the spatial distribution of phosphenes more realistically. Evenly distributed maps may overestimate the performance that visual prosthetics can restore. This simulation paradigm could be used in clinical practice when planning where best to implant cortical visual prostheses.
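The density-based step can be pictured with a bare-bones DBSCAN-style clustering over phosphene coordinates (a sketch under assumed parameters; the paper's actual algorithm and settings may differ). Cluster centroids, i.e. the per-cluster mean positions, would then serve as the relocated stimulus centres.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n  # None = unvisited

    def neighbors(i):
        return [j for j in range(n)
                if math.hypot(points[i][0] - points[j][0],
                              points[i][1] - points[j][1]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # core point: expand the cluster
                seeds.extend(k for k in jn
                             if labels[k] is None or labels[k] == -1)
    return labels
```
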
Affiliation(s)
- Haozhe Zac Wang
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Yan Tat Wong
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Department of Physiology, Monash University, Melbourne, Australia
5
White J, Ruiz-Serra J, Petrie S, Kameneva T, McCarthy C. Self-Attention Based Vision Processing for Prosthetic Vision. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-4. PMID: 38083046; DOI: 10.1109/embc40787.2023.10341053.
Abstract
We investigate Self-Attention (SA) networks for directly learning visual representations for prosthetic vision. Specifically, we explore how the SA mechanism can be leveraged to produce task-specific scene representations for prosthetic vision, overcoming the need for explicit hand-selection of learnt features and post-processing. Further, we demonstrate how the mapping of importance to image regions can serve as an explainability tool to analyse the learnt vision processing behaviour, providing greater validation and interpretation capability than current learning-based methods for prosthetic vision. We investigate our approach in the context of an orientation and mobility (OM) task, and demonstrate its feasibility for learning vision processing pipelines for prosthetic vision.
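The core self-attention operation underlying such networks can be written out directly; this is a generic scaled dot-product attention sketch over lists of vectors, not the paper's network:

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Q, K, V are lists of equal-length vectors (lists of floats).
    The softmax weights express how much each value contributes,
    which is what makes the importance map inspectable.
    """
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)                       # numerically stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Reading out `weights` per query is the attention-map "explainability tool" the abstract refers to: it shows which image regions the representation draws on.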
6
Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023;20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices to restore vision for patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in retinal prostheses plays an important role in the restoration effect, and its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, combining new discoveries about the retina's working principles with state-of-the-art computer vision models. Approach. We investigated the related research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then demonstrate the vision rehabilitation mechanism of three representative retinal prostheses. We then summarize the computational frameworks abstracted from the normal retina, as well as the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and propose future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggestions for future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
7
Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation. Biomed Eng Online 2022;21:91. PMID: 36566183; DOI: 10.1186/s12938-022-01059-7.
Abstract
Blindness severely affects a person's daily life activities. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of restoring confidence and independence. In this article, we propose an approach that combines four image enhancement techniques to facilitate object recognition and localization for visual prosthesis users: clip-art representation of objects, edge sharpening, corner enhancement, and electrode dropout handling. The proposed techniques are tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prosthesis users. Twelve experiments were conducted to measure the performance of the participants in object recognition and localization, involving single objects, multiple objects, and navigation. To evaluate performance in object recognition, we measured recognition time, recognition accuracy, and confidence level. For object localization, two metrics were used: grasping attempt time and grasping accuracy. The results demonstrate that using all enhancement techniques simultaneously gives higher accuracy, higher confidence levels, and shorter times for recognizing and grasping objects compared with not applying the enhancement techniques or applying pair-wise combinations of them. Visual prostheses could benefit from the proposed approach to provide users with an enhanced perception.
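Edge sharpening of the kind described is commonly implemented with a 3x3 Laplacian-based kernel; a sketch on a grayscale grid (the kernel choice, replicate-border handling, and 0-255 clamping are generic assumptions, not necessarily the study's exact filter):

```python
def sharpen(img):
    """Sharpen a grayscale image (list of rows of 0-255 values) with a
    Laplacian-of-identity kernel; borders are replicated, output clamped."""
    k = [[0, -1, 0],
         [-1, 5, -1],
         [0, -1, 0]]        # sums to 1, so flat regions are unchanged
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)  # replicate border
                    jj = min(max(j + dj, 0), w - 1)
                    s += k[di + 1][dj + 1] * img[ii][jj]
            out[i][j] = max(0, min(255, s))
    return out
```

Because the kernel weights sum to one, uniform regions pass through unchanged while intensity steps, the cue that matters under low-resolution phosphene vision, are exaggerated.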
Affiliation(s)
- Reham H Elnabawy
- Digital Media Engineering and Technology Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
- Slim Abdennadher
- Computer Science and Engineering Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
- Computer Science Department, Faculty of Informatics and Computer Science, German International University, New Administrative Capital, Egypt
- Olaf Hellwich
- Chair of Computer Vision and Remote Sensing, Technische Universität Berlin, Berlin, Germany
- Seif Eldawlatly
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt
- Computer Science and Engineering Department, The American University in Cairo, Cairo, Egypt
8
Beyeler M, Sanchez-Garcia M. Towards a Smart Bionic Eye: AI-powered artificial vision for the treatment of incurable blindness. J Neural Eng 2022;19. PMID: 36541463; PMCID: PMC10507809; DOI: 10.1088/1741-2552/aca69d.
Abstract
Objective. How can we return a functional form of sight to people who are living with incurable blindness? Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across different device technologies. Approach. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations through the means of artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind, such as face recognition, outdoor navigation, and self-care. Main results. Complementary to existing research aiming to restore natural vision, we propose a patient-centered approach to incorporate deep learning-based visual augmentations into the next generation of devices. Significance. The ability of a visual prosthesis to support everyday tasks might make the difference between abandoned technology and a widely adopted next-generation neuroprosthetic device.
Affiliation(s)
- Michael Beyeler
- Department of Computer Science, University of California, Santa Barbara, CA, United States of America
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, United States of America
- Melani Sanchez-Garcia
- Department of Computer Science, University of California, Santa Barbara, CA, United States of America
10
Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors (Basel) 2022;22:6544. PMID: 36081002; PMCID: PMC9460383; DOI: 10.3390/s22176544.
Abstract
Visual prostheses, used to help restore functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns that are delivered by implanted microelectrodes to induce phosphenes and, eventually, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Along with the development of prosthetic device design and stimulus encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is used to optimize artificial visual information and improve the restoration of various important visual functions in implant recipients, allowing them to better meet their daily demands. This paper first reviews recent clinical implantations of different types of visual prostheses and summarizes the artificial visual perception of implant recipients, focusing especially on its irregularities, such as dropout and distorted phosphenes. Then, the important aspects of computer vision in the optimization of visual information processing are reviewed, and the possibilities and shortcomings of these solutions are discussed. Finally, the development directions and key issues for improving the performance of visual prosthesis devices are summarized.
Affiliation(s)
- Jing Wang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Peitong Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Zhiqiang Fang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Qianqian Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yanling Han
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Ruyan Zhou
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yun Zhang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
11
Avraham D, Yitzhaky Y. Simulating the perceptual effects of electrode-retina distance in prosthetic vision. J Neural Eng 2022;19. PMID: 35561665; DOI: 10.1088/1741-2552/ac6f82.
Abstract
Objective. Retinal prostheses aim to restore some vision to patients blinded by retinitis pigmentosa or age-related macular degeneration. Many spatial and temporal aspects have been found to affect prosthetic vision. Our objective is to study the impact of the space-variant distance between the stimulating electrodes and the surface of the retina on prosthetic vision, and how to mitigate this impact. Approach. A prosthetic vision simulation was built to demonstrate the perceptual effects of the electrode-retina distance (ERD) under different random spatial variations, such as size, brightness, shape, dropout, and spatial shifts. Three approaches for reducing the ERD effects are demonstrated: electrode grouping (quads), ERD-based input-image enhancement, and object scanning with and without phosphene persistence. A quantitative assessment of the first two approaches was done based on experiments with 20 subjects and three vision-based computational image similarity metrics. Main results. The effects of various ERDs on phosphene size, brightness, and shape were simulated. Quads, chosen according to the ERDs, effectively elicit phosphenes without exceeding the safe charge density limit, whereas single electrodes with large ERD cannot do so. Input-image enhancement reduced the ERD effects effectively. These two approaches significantly improved ERD-affected prosthetic vision according to the experiment and the image similarity metrics. A further reduction of the ERD effects was achieved by scanning an object while moving the head. Significance. ERD has multiple effects on perception with retinal prostheses. One of them is vision loss caused by the inability of electrodes with large ERD to evoke phosphenes. The three approaches presented in this study can be used separately or together to mitigate the impact of ERD, which may help improve perception with current prosthetic technology and influence the design of future prostheses.
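The qualitative ERD dependence can be caricatured with a toy current-spread model (function name and all constants are hypothetical, chosen only to reproduce the trends, not the paper's quantitative model): current density at the retinal surface falls with distance, so a distant electrode yields a dimmer, broader phosphene and may fail to reach threshold at all.

```python
def phosphene_appearance(amp_ua, erd_um, threshold_ua=20.0):
    """Toy ERD model: returns (brightness, radius_um) or None if no
    phosphene is evoked. amp_ua in microamps, erd_um in micrometres."""
    spread = 1.0 + erd_um / 100.0        # current spreads with distance
    effective = amp_ua / spread ** 2     # density drops roughly quadratically
    if effective < threshold_ua:
        return None                      # subthreshold: electrode dropout
    brightness = effective - threshold_ua  # suprathreshold drive
    radius = 50.0 * spread                 # broader activation on the retina
    return brightness, radius
```

In this caricature, grouping several large-ERD electrodes into a "quad" effectively raises `amp_ua` without raising per-electrode charge density, which is the rationale behind the quads approach.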
Affiliation(s)
- David Avraham
- Department of Electro-Optical Engineering, Ben-Gurion University of the Negev, 1 Ben-Gurion Blvd., Beer-Sheva 84105, Israel
- Yitzhak Yitzhaky
- Electro-Optical Engineering, School of Engineering, Ben-Gurion University of the Negev, 1 Ben-Gurion Blvd., Beer-Sheva 84105, Israel
12
de Ruyter van Steveninck J, Güçlü U, van Wezel R, van Gerven M. End-to-end optimization of prosthetic vision. J Vis 2022;22:20. PMID: 35703408; PMCID: PMC8899855; DOI: 10.1167/jov.22.2.20.
Abstract
Neural prosthetics may provide a promising solution to restore visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared to normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces a difficult challenge in finding a general and automated preprocessing strategy that can be tailored to specific tasks or user requirements. In this paper, we present a novel deep learning approach that explicitly addresses this issue by optimizing the entire process of phosphene generation in an end-to-end fashion. The proposed model is based on a deep auto-encoder architecture and includes a highly adjustable simulation module of prosthetic vision. In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol. The results of these proof-of-principle experiments illustrate the potential of end-to-end optimization for prosthetic vision. The presented approach is highly modular and could be extended to automated dynamic optimization of prosthetic vision for everyday tasks, given any specific constraints, accommodating the individual requirements of the end-user.
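The end-to-end idea, back-propagating a reconstruction loss through a differentiable simulator to the stimulation parameters, can be reduced to a toy linear case (the paper uses a deep auto-encoder; here the "simulator" is just a fixed matrix `A` and the gradient is written out by hand):

```python
def optimize_stimulation(A, target, steps=200, lr=0.1):
    """Gradient descent on stimulation s to minimize ||A s - target||^2,
    where A plays the role of a differentiable phosphene simulator."""
    m, n = len(A), len(A[0])
    stim = [0.0] * n
    for _ in range(steps):
        # forward pass: simulate the percept from the stimulation
        percept = [sum(A[i][j] * stim[j] for j in range(n)) for i in range(m)]
        err = [p - t for p, t in zip(percept, target)]
        # backward pass: grad of the squared error is 2 * A^T err
        grad = [2 * sum(A[i][j] * err[i] for i in range(m)) for j in range(n)]
        stim = [s - lr * g for s, g in zip(stim, grad)]
    return stim
```

The same loop structure applies when the forward map is a learned encoder plus phosphene simulator and the gradient comes from automatic differentiation instead of the hand-derived expression above.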
Affiliation(s)
- Jaap de Ruyter van Steveninck
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Umut Güçlü
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Richard van Wezel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, The Netherlands
- Marcel van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
13
de Ruyter van Steveninck J, van Gestel T, Koenders P, van der Ham G, Vereecken F, Güçlü U, van Gerven M, Güçlütürk Y, van Wezel R. Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions. J Vis 2022;22:1. PMID: 35103758; PMCID: PMC8819280; DOI: 10.1167/jov.22.2.1.
Abstract
Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve the mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment by controlling the environmental complexity, as well as the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 × 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
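Traditional edge detection of the kind this study uses as a baseline can be sketched with Sobel gradients (a generic implementation with replicate borders, not the study's exact pipeline); thresholding the resulting magnitude map yields the contour image that would drive the phosphene pattern.

```python
import math

def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = gy = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    # replicate border pixels
                    v = img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    gx += gx_k[di + 1][dj + 1] * v
                    gy += gy_k[di + 1][dj + 1] * v
            out[i][j] = math.hypot(gx, gy)
    return out
```

Because Sobel responds to any intensity gradient, it also fires on background textures and within-surface shading, which is exactly the over-cluttering that motivates the deep learning-based surface boundary detection compared in the study.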
Affiliation(s)
- Jaap de Ruyter van Steveninck
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tom van Gestel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Paula Koenders
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Guus van der Ham
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Floris Vereecken
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Umut Güçlü
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Marcel van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Yagmur Güçlütürk
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Richard van Wezel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands
14
Kravchenko SV, Sakhnov SN, Myasnikova VV. Modern concepts of bionic vision. Vestn Oftalmol 2022; 138:95-101. [PMID: 35801887 DOI: 10.17116/oftalma202213803195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/15/2023]
Abstract
Loss of vision is a pressing medical and social problem leading to profound disability, loss of the ability to work, serious alterations in psycho-emotional state, and a decline in quality of life. When conservative or surgical treatment cannot restore vision, a visual prosthesis (bionic eye) can be an effective solution. This review covers the main modern approaches to the development of visual prosthetic systems. Analysis of publications revealed several main approaches to visual prosthetics, differing primarily in the anatomical structure targeted for stimulation in order to evoke visual sensations. The most significant among them are retinal prostheses, optic nerve stimulation, and cortical visual prostheses. Currently, retinal prostheses such as Argus II demonstrate the most successful results, since stimulation of the surviving neural structures of the retina is a comparatively easy task, but their field of application is limited to diseases associated with pathological changes in photoreceptors. The development of cortical visual prostheses is more difficult, but in the future they may allow more stimulation channels and thus a more detailed visual perception. In addition, cortical visual prostheses are universal, as they do not require preservation of any structures of the eye, only of the primary visual cortex.
Affiliation(s)
- S V Kravchenko
- Krasnodar branch of S.N. Fedorov National Medical Research Center «MNTK «Eye Microsurgery», Krasnodar, Russia
- S N Sakhnov
- Krasnodar branch of S.N. Fedorov National Medical Research Center «MNTK «Eye Microsurgery», Krasnodar, Russia
- Kuban State Medical University, Krasnodar, Russia
- V V Myasnikova
- Krasnodar branch of S.N. Fedorov National Medical Research Center «MNTK «Eye Microsurgery», Krasnodar, Russia
- Kuban State Medical University, Krasnodar, Russia
15
Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. Electrode Dropout Compensation in Visual Prostheses: An Optimal Object Placement Approach. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6515-6518. [PMID: 34892602 DOI: 10.1109/embc46164.2021.9630991] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 06/14/2023]
Abstract
Visual prostheses offer a promising solution for the blind through partial restoration of vision via electrical stimulation of the visual system. However, several challenges hinder the ability of subjects implanted with visual prostheses to correctly identify an object. One of these challenges is electrode dropout: the malfunction of some electrodes, resulting in consistently dark phosphenes. In this paper, we propose a dropout handling algorithm for better and faster identification of objects. In this algorithm, the phosphenes representing the object are translated to another location within the same image that has the minimum number of dropouts. Using simulated prosthetic vision, experiments were conducted to test the efficacy of the proposed algorithm at electrode dropout rates of 10%, 20%, and 30%. Our results demonstrate a significant increase in object recognition accuracy, a reduction in recognition time, and an increase in recognition confidence using the proposed approach compared with presenting the images without dropout handling. Clinical relevance: These results demonstrate the utility of dropout handling in enhancing the perception of images in prosthetic vision.
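A minimal sketch of the placement idea, assuming a boolean electrode grid and object footprint; the 10 × 10 grid, 20% dropout rate, and exhaustive search below are illustrative choices, not the authors' code:

```python
import numpy as np

def best_placement(obj, dead):
    # Exhaustively slide the object's phosphene footprint over the grid and
    # return the top-left offset overlapping the fewest dead electrodes.
    gh, gw = dead.shape
    oh, ow = obj.shape
    best, best_cost = (0, 0), None
    for r in range(gh - oh + 1):
        for c in range(gw - ow + 1):
            cost = int(np.sum(obj & dead[r:r+oh, c:c+ow]))
            if best_cost is None or cost < best_cost:
                best, best_cost = (r, c), cost
    return best, best_cost

rng = np.random.default_rng(0)
dead = rng.random((10, 10)) < 0.2      # ~20% of electrodes have dropped out
obj = np.ones((4, 4), dtype=bool)      # footprint of the object to render
offset, cost = best_placement(obj, dead)
```

Rendering the object at `offset` rather than at its original image location is what reduces the number of phosphenes lost to dropout.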
16
Hou Y, Zhang W, Liu Q, Ge H, Meng J, Zhang Q, Wei X. Adaptive kernel selection network with attention constraint for surgical instrument classification. Neural Comput Appl 2021; 34:1577-1591. [PMID: 34539089 PMCID: PMC8435567 DOI: 10.1007/s00521-021-06368-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/20/2021] [Accepted: 07/26/2021] [Indexed: 11/15/2022]
Abstract
Computer vision (CV) technologies are assisting the health care industry in many respects, e.g., disease diagnosis. However, the inventory of surgical instruments, a pivotal procedure before and after surgery, has not yet been researched with CV-powered technologies. To reduce the risk and hazard of losing surgical tools, we propose a study of systematic surgical instrument classification and introduce a novel attention-based deep neural network called SKA-ResNet, which is mainly composed of: (a) a feature extractor with a selective kernel attention module that automatically adjusts the receptive fields of neurons and enhances the learned representations, and (b) a multi-scale regularizer with KL-divergence as the constraint to exploit the relationships between feature maps. Our method is easily trained end-to-end in a single stage with little additional computational burden. Moreover, to facilitate our study, we create a new surgical instrument dataset called SID19 (19 kinds of surgical tools in 3800 images) for the first time. Experimental results show the superiority of SKA-ResNet for the classification of surgical tools on SID19 compared with state-of-the-art models. The classification accuracy of our method reaches 97.703%, which supports the inventory and recognition of surgical tools. Our method also achieves state-of-the-art performance on four challenging fine-grained visual classification datasets.
Affiliation(s)
- Yaqing Hou
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Wenkai Zhang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Qian Liu
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Hongwei Ge
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Jun Meng
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Qiang Zhang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Xiaopeng Wei
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
17
Abbasi B, Rizzo JF. Advances in Neuroscience, Not Devices, Will Determine the Effectiveness of Visual Prostheses. Semin Ophthalmol 2021; 36:168-175. [PMID: 33734937 DOI: 10.1080/08820538.2021.1887902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/21/2022]
Abstract
Background: Innovations in engineering and neuroscience have enabled the development of sophisticated visual prosthetic devices. In clinical trials, these devices have provided visual acuities as high as 20/460, enabled coarse navigation, and even allowed for reading of short words. However, long-term commercial viability arguably rests on attaining even better vision and more definitive improvements in tasks of daily living and quality of life. Purpose: Here we review technological and biological obstacles in the implementation of visual prosthetics. Conclusions: Research in the visual prosthetic field has tackled significant technical challenges, including biocompatibility, signal spread through neural tissue, and inadvertent activation of passing axons; however, significant gaps in knowledge remain in the realm of neuroscience, including the neural code of vision and visual plasticity. We assert that further optimization of prosthetic devices alone will not provide markedly improved visual outcomes without significant advances in our understanding of neuroscience.
Affiliation(s)
- Bardia Abbasi
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
- Joseph F Rizzo
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
18
Sanchez-Garcia M, Martinez-Cantin R, Bermudez-Cameo J, Guerrero JJ. Influence of field of view in visual prostheses design: Analysis with a VR system. J Neural Eng 2020; 17:056002. [PMID: 32947270 DOI: 10.1088/1741-2552/abb9be] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Indexed: 11/12/2022]
Abstract
OBJECTIVE: Visual prostheses are designed to restore partial functional vision in patients with total vision loss. Retinal visual prostheses provide limited capabilities as a result of low resolution, a limited field of view, and poor dynamic range. Understanding the influence of these parameters on perception can guide prosthesis research and design. APPROACH: In this work, we evaluate the influence of field of view with respect to spatial resolution in visual prostheses, measuring accuracy and response time in a search-and-recognition task. Twenty-four normally sighted participants were asked to find and recognize common objects, such as furniture and home appliances, in indoor room scenes. For the experiment, we use a new simulated prosthetic vision system that allows simple and effective experimentation. Our system uses a virtual-reality environment based on panoramic scenes. The simulator employs a head-mounted display, which allows users to feel immersed by perceiving the entire scene all around them. Our experiments use public image datasets and a commercial head-mounted display, and we have released the virtual-reality software for replicating and extending the experimentation. MAIN RESULTS: Results show that accuracy and response time decrease when the field of view is increased. Furthermore, performance appears to be correlated with angular resolution, showing diminishing returns even at resolutions below 2.3 phosphenes per degree. SIGNIFICANCE: Our results seem to indicate that, for the design of retinal prostheses, it is better to concentrate the phosphenes in a small area to maximize angular resolution, even if that implies sacrificing field of view.
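The resolution-versus-field-of-view tradeoff is easy to reproduce numerically; the 1024-phosphene count and the two fields of view below are hypothetical values chosen to bracket the ~2.3 phosphenes-per-degree figure, not parameters from the study:

```python
def angular_resolution(n_phosphenes, fov_deg):
    # Phosphenes per degree along one axis, for a square grid spread
    # uniformly over a square field of view (back-of-envelope model).
    return (n_phosphenes ** 0.5) / fov_deg

narrow = angular_resolution(1024, 14)   # concentrated: ~2.29 phosphenes/deg
wide = angular_resolution(1024, 60)     # spread out:   ~0.53 phosphenes/deg
```

Under this model, concentrating a fixed number of phosphenes into a smaller field of view is the only way to raise angular resolution, which is the design direction the study's results point to.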
Affiliation(s)
- Melani Sanchez-Garcia
- Instituto de Investigación en Ingeniería de Aragón, (I3A). Universidad de Zaragoza, Spain
19
Fernández E, Alfaro A, González-López P. Toward Long-Term Communication With the Brain in the Blind by Intracortical Stimulation: Challenges and Future Prospects. Front Neurosci 2020; 14:681. [PMID: 32848535 PMCID: PMC7431631 DOI: 10.3389/fnins.2020.00681] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 12/24/2019] [Accepted: 06/03/2020] [Indexed: 11/15/2022]
Abstract
The restoration of a useful visual sense in a profoundly blind person by direct electrical stimulation of the visual cortex has been a subject of study for many years. However, the field of cortically based sight restoration has made few advances in the last few decades, and many problems remain. In this context, the scientific and technological problems associated with safe and effective communication with the brain are very complex, and there are still many unresolved issues delaying its development. In this work, we review some of the biological and technical issues that still remain to be solved, including long-term biotolerability, the number of electrodes required to provide useful vision, and the delivery of information to the implants. Furthermore, we emphasize the possible role of the neuroplastic changes that follow vision loss in the success of this approach. We propose that increased collaborations among clinicians, basic researchers, and neural engineers will enhance our ability to send meaningful information to the brain and restore a limited but useful sense of vision to many blind individuals.
Affiliation(s)
- Eduardo Fernández
- Institute of Bioengineering, Universidad Miguel Hernández, Elche, Spain
- Center for Biomedical Research in the Network in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
- John A. Moran Eye Center, University of Utah, Salt Lake City, UT, United States
- Arantxa Alfaro
- Center for Biomedical Research in the Network in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
- Hospital Vega Baja, Orihuela, Spain
- Pablo González-López
- Institute of Bioengineering, Universidad Miguel Hernández, Elche, Spain
- Hospital General Universitario de Alicante, Alicante, Spain
20
Lozano A, Suárez JS, Soto-Sánchez C, Garrigós J, Martínez-Alvarez JJ, Ferrández JM, Fernández E. Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses. Int J Neural Syst 2020; 30:2050045. [DOI: 10.1142/s0129065720500458] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Indexed: 11/18/2022]
Abstract
Visual neuroprostheses, which provide electrical stimulation at several sites along the human visual system, constitute a potential tool for vision restoration in the blind. Scientific and technological progress in the fields of neural engineering and artificial vision comes with new theories and tools that, along with the dawn of modern artificial intelligence, constitute a promising framework for the further development of neurotechnology. Within the framework of the development of a Cortical Visual Neuroprosthesis for the blind (CORTIVIS), we now face the challenge of developing computationally powerful tools and flexible approaches that will allow us to provide some degree of functional vision to individuals who are profoundly blind. In this work, we propose a general neuroprosthesis framework composed of several task-oriented and visual encoding modules. We address the development and implementation of computational models of the firing rates of retinal ganglion cells and design a tool, Neurolight, that allows these models to be interfaced with intracortical microelectrodes in order to create electrical stimulation patterns that can evoke useful percepts. In addition, the developed framework allows the deployment of a diverse array of state-of-the-art deep learning techniques for task-oriented and general image pre-processing, such as semantic segmentation and object detection, in our system's pipeline. To the best of our knowledge, this constitutes the first deep-learning-based system designed to directly interface with the visual brain through an intracortical microelectrode array. We implement the complete pipeline, from obtaining a video stream to developing and deploying task-oriented deep learning models and predictive models of retinal ganglion cell encoding of visual inputs, under the control of a neurostimulation device able to send electrical pulse trains to a microelectrode array implanted in the visual cortex.
Affiliation(s)
- Antonio Lozano
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Juan Sebastián Suárez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Cristina Soto-Sánchez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Javier Garrigós
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Javier Martínez-Alvarez
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Manuel Ferrández
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Eduardo Fernández
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain