1
van der Grinten M, de Ruyter van Steveninck J, Lozano A, Pijnacker L, Rueckauer B, Roelfsema P, van Gerven M, van Wezel R, Güçlü U, Güçlütürk Y. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife 2024; 13:e85812. PMID: 38386406; PMCID: PMC10883675; DOI: 10.7554/elife.85812.
Abstract
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input into electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that runs in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence from humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex, and it incorporates the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics. Our results demonstrate the simulator's suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
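The abstract's core rendering idea, mapping each phosphene into the visual field with a retinotopic model and drawing it as a localized Gaussian light blob, can be sketched in a few lines. The sketch below uses NumPy rather than the simulator's PyTorch implementation, and the mapping constants (`k`, `a`), field of view, and all function names are illustrative assumptions, not the published API.

```python
import numpy as np

def polar_to_cortex(ecc, ang, k=17.3, a=0.75):
    """Monopole log-polar mapping between visual field and cortex.
    Constants k and a are illustrative, not fitted values."""
    z = ecc * np.exp(1j * ang)          # visual-field position as a complex number
    w = k * np.log(z + a)               # cortical magnification: log compression
    return w.real, w.imag

def render_phosphenes(xs, ys, brightness, size=0.5, res=64, fov=8.0):
    """Render each phosphene as a 2-D Gaussian blob on a shared canvas
    spanning `fov` degrees of visual angle."""
    lin = np.linspace(-fov / 2, fov / 2, res)
    gx, gy = np.meshgrid(lin, lin)
    img = np.zeros((res, res))
    for x, y, b in zip(xs, ys, brightness):
        img += b * np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * size ** 2))
    return np.clip(img, 0.0, 1.0)

img = render_phosphenes([0.0], [0.0], [1.0])
```

Because every operation here is smooth, rewriting it with PyTorch tensors would make the renderer differentiable end to end, which is the property the paper exploits for gradient-based optimization of the encoder.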
Affiliation(s)
- Antonio Lozano
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Laura Pijnacker
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bodo Rueckauer
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Pieter Roelfsema
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Marcel van Gerven
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Richard van Wezel
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, Netherlands
- Umut Güçlü
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
2
Wang C, Fang C, Zou Y, Yang J, Sawan M. SpikeSEE: An energy-efficient dynamic scenes processing framework for retinal prostheses. Neural Netw 2023; 164:357-368. PMID: 37167749; DOI: 10.1016/j.neunet.2023.05.002.
Abstract
Intelligent, low-power retinal prostheses are in high demand in an era in which wearable and implantable devices serve numerous healthcare applications. In this paper, we propose an energy-efficient dynamic-scene processing framework (SpikeSEE) that combines a spike-representation encoding technique with a bio-inspired spiking recurrent neural network (SRNN) model to achieve intelligent processing and extremely low-power computation for retinal prostheses. The spike-representation encoding technique interprets dynamic scenes with sparse spike trains, decreasing the data volume. The SRNN model, inspired by the human retina's specialized structure and spike-processing method, is adopted to predict the responses of ganglion cells to dynamic scenes. Experimental results show that the Pearson correlation coefficient of the proposed SRNN model reaches 0.93, outperforming the state-of-the-art processing framework for retinal prostheses. Thanks to the spike representation and SRNN processing, the model can extract visual features in a multiplication-free fashion. The framework achieves an eight-fold power reduction compared with a convolutional recurrent neural network (CRNN)-based framework. The proposed SpikeSEE predicts ganglion-cell responses more accurately and with lower energy consumption, which alleviates the precision and power issues of retinal prostheses and offers a potential solution for wearable or implantable prostheses.
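The spike-representation idea, converting a dense input stream into a sparse binary spike train via integrate-and-fire dynamics, can be sketched with a generic leaky integrate-and-fire encoder. This is not the paper's actual encoding scheme, and all constants are illustrative; it only shows why such a representation is sparse and multiplication-friendly.

```python
def lif_encode(signal, threshold=1.0, leak=0.9, gain=0.5):
    """Leaky integrate-and-fire encoding: accumulate a leaky membrane
    potential and emit a spike (1) when it crosses threshold, then reset.
    Dense analog input becomes a sparse 0/1 spike train."""
    v, spikes = 0.0, []
    for x in signal:
        v = leak * v + gain * x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

spikes = lif_encode([1.0] * 10)
```

Downstream layers that consume 0/1 spikes can replace multiplications with conditional additions, which is the source of the energy savings the abstract describes.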
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, 100850, China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
3
Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices for restoring vision to patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in a retinal prosthesis plays an important role in the quality of the restored vision, and its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, combining new discoveries about the retina's working principles with state-of-the-art computer vision models. Approach. We surveyed the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then describe the vision-rehabilitation mechanisms of three representative retinal prostheses. We summarize the computational frameworks abstracted from the normal retina, as well as the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and offer our prospects for future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggested future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
4
Papadopoulos N, Melanitis N, Lozano A, Soto-Sanchez C, Fernandez E, Nikita KS. Machine Learning Method for Functional Assessment of Retinal Models. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4293-4296. PMID: 34892171; DOI: 10.1109/embc46164.2021.9629599.
Abstract
Challenges in the field of retinal prostheses motivate the development of retinal models that accurately simulate Retinal Ganglion Cell (RGC) responses. The goal of retinal prostheses is to enable blind individuals to solve complex, real-life visual tasks. In this paper, we introduce the functional assessment (FA) of retinal models: evaluating the performance of retinal models on visual understanding tasks. We present a machine learning method for FA in which traditional machine learning classifiers are fed RGC responses generated by retinal models and used to solve object and digit recognition tasks (CIFAR-10, MNIST, Fashion MNIST, Imagenette). We examined critical aspects of FA, including how its performance depends on the task, how RGC responses are best fed to the classifiers, and how the number of output neurons correlates with the model's accuracy. To increase the number of output neurons, we manipulated the input images by splitting them before feeding them to the retinal model, and found that image splitting does not significantly improve accuracy. We also show that differences in the structure of the datasets result in largely divergent performance of the retinal model (MNIST and Fashion MNIST exceeded 80% accuracy, while CIFAR-10 and Imagenette reached roughly 40%). Furthermore, retinal models that perform better in standard evaluation, i.e. that predict RGC responses more accurately, also perform better in FA. Unlike standard evaluation, however, FA results can be straightforwardly interpreted in terms of the quality of the resulting visual perception.
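The FA recipe, generating RGC responses with a retinal model and then scoring those responses with a conventional classifier, can be illustrated on synthetic data. Everything below is a toy stand-in: the "retinal model" is a fixed linear filter bank with rectification, the stimuli are random 16-pixel vectors from two classes, and the classifier is a nearest-centroid rule rather than the classifiers used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rgc_responses(images, weights):
    """Stand-in retinal model: linear receptive fields + rectification."""
    return np.maximum(images @ weights, 0.0)

def nearest_centroid_accuracy(train_x, train_y, test_x, test_y):
    """Tiny classifier used only to score the responses (the FA step)."""
    centroids = np.stack([train_x[train_y == c].mean(axis=0)
                          for c in np.unique(train_y)])
    dists = ((test_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return float((dists.argmin(axis=1) == test_y).mean())

# Two synthetic stimulus classes (16-pixel "images"; class 1 is brighter).
x0 = rng.normal(0.0, 1.0, size=(100, 16))
x1 = rng.normal(2.0, 1.0, size=(100, 16))
images = np.vstack([x0, x1])
labels = np.r_[np.zeros(100, dtype=int), np.ones(100, dtype=int)]
weights = np.eye(16)[:, :8]      # 8 hypothetical RGCs with illustrative filters

responses = simulate_rgc_responses(images, weights)
acc = nearest_centroid_accuracy(responses[::2], labels[::2],
                                responses[1::2], labels[1::2])
```

The interesting quantity in FA is `acc`: a retinal model whose responses preserve more task-relevant information yields a higher downstream classification accuracy, independently of how well it predicts individual spike trains.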
5
Melanitis N, Nakopoulos G, Lozano A, Soto-Sanchez C, Fernandez E, Nikita KS. Using Biologically-inspired Image Features to Model Retinal Response: Evidence from Biological Datasets. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3378-3381. PMID: 34891964; DOI: 10.1109/embc46164.2021.9629869.
Abstract
Retinal models are needed to simulate the translation of visual percepts into the Retinal Ganglion Cell (RGC) spike trains through which visual information is transmitted to the brain. Restoring vision through neural prostheses motivates the development of accurate retinal models. We integrate biologically inspired image features into RGC models, training Linear-Nonlinear models on response data from biological retinae. We show that augmenting the raw image input with retina-inspired image features improves performance: in a smaller dataset (30 s of retinal recordings), integrating the features improved approximately two-thirds of the modeled RGCs; in a larger dataset (4 min of recordings), using Spike-Triggered Average analysis to localize RGCs in the input images and extract features in a cell-based manner improved all but two of the modeled RGCs.
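A minimal version of the two building blocks named above, the Linear-Nonlinear (LN) model and Spike-Triggered Average (STA) analysis, can be demonstrated on synthetic white-noise data. The receptive field, stimulus ensemble, and nonlinearity here are illustrative choices, not the authors' fitted models; the point is only that the STA recovers the location of the cell's receptive field.

```python
import numpy as np

rng = np.random.default_rng(1)

def ln_response(stimuli, rf, nonlinearity=lambda u: np.maximum(u, 0.0)):
    """Linear-Nonlinear model: project each stimulus onto the receptive
    field, then pass through a static nonlinearity to get a firing rate."""
    return nonlinearity(stimuli @ rf)

def spike_triggered_average(stimuli, spike_counts):
    """STA: spike-weighted mean stimulus, the classic RF estimate."""
    return (spike_counts @ stimuli) / spike_counts.sum()

# Ground truth: a receptive field concentrated at the centre pixel of a
# 5x5 (flattened) image patch.
true_rf = np.zeros(25)
true_rf[12] = 1.0
stimuli = rng.normal(size=(5000, 25))        # white-noise stimulus frames
rates = ln_response(stimuli, true_rf)
spikes = rng.poisson(rates)                  # Poisson spiking from the rates
sta = spike_triggered_average(stimuli, spikes)
```

Localizing each RGC with the STA, as in the larger-dataset experiment, then lets features be extracted around the estimated receptive-field centre rather than from the whole image.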
6
Fernández E, Alfaro A, Soto-Sánchez C, González-López P, Lozano Ortega AM, Peña S, Grima MD, Rodil A, Gómez B, Chen X, Roelfsema PR, Rolston JD, Davis TS, Normann RA. Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J Clin Invest 2021; 131:151331. PMID: 34665780; DOI: 10.1172/jci151331.
Abstract
BACKGROUND A long-held dream of scientists is to transfer information directly to the visual cortex of blind individuals, thereby restoring a rudimentary form of sight. However, no clinically available cortical visual prosthesis yet exists. METHODS We implanted an intracortical microelectrode array consisting of 96 electrodes in the visual cortex of a 57-year-old person with complete blindness for a six-month period. We measured the thresholds and characteristics of the visual percepts elicited by intracortical microstimulation. RESULTS Implantation and subsequent explantation of the intracortical microelectrodes were carried out without complications. The mean stimulation threshold for single electrodes was 66.8 ± 36.5 μA. We consistently obtained high-quality recordings from visually deprived neurons, and the stimulation parameters remained stable over time. Simultaneous stimulation via multiple electrodes was associated with a significant reduction in thresholds (p < 0.001, ANOVA) and evoked discriminable phosphene percepts, allowing the blind participant to identify some letters and recognize object boundaries. Furthermore, we observed a learning process that helped the subject recognize complex patterns over time. CONCLUSIONS Our results demonstrate the safety and efficacy of chronic intracortical microstimulation via a large number of electrodes in the human visual cortex, showing its high potential for restoring functional vision in the blind. TRIAL REGISTRATION ClinicalTrials.gov identifier NCT02983370. FUNDING Grant RTI2018-098969-B-100 from the Spanish Ministerio de Ciencia, Innovación y Universidades; grant PROMETEO/2019/119 from the Generalitat Valenciana (Spain); the Bidons Egara Research Chair of the University Miguel Hernández (Spain); and the John Moran Eye Center of the University of Utah (US).
Affiliation(s)
- Arantxa Alfaro
- Servicio de Neurología, Hospital Vega Baja, Elche, Spain
- Pablo González-López
- Servicio de Neurología, Hospital General Universitario de Alicante, Alicante, Spain
- Sebastian Peña
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Alfonso Rodil
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Bernardeta Gómez
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Xing Chen
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Pieter R Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- John D Rolston
- Department of Neurosurgery and Biomedical Engineering, University of Utah, Salt Lake City, United States of America
- Tyler S Davis
- Department of Neurosurgery and Biomedical Engineering, University of Utah, Salt Lake City, United States of America
- Richard A Normann
- John Moran Eye Center and Biomedical Engineering, University of Utah, Salt Lake City, United States of America
7
Lozano A, Suárez JS, Soto-Sánchez C, Garrigós J, Martínez-Alvarez JJ, Ferrández JM, Fernández E. Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses. Int J Neural Syst 2020; 30:2050045. DOI: 10.1142/s0129065720500458.
Abstract
Visual neuroprostheses, which provide electrical stimulation at several sites along the human visual system, constitute a potential tool for vision restoration in the blind. Scientific and technological progress in the fields of neural engineering and artificial vision, along with the dawn of modern artificial intelligence, constitutes a promising framework for the further development of neurotechnology. Within the development of a Cortical Visual Neuroprosthesis for the blind (CORTIVIS), we face the challenge of developing computationally powerful tools and flexible approaches that will allow us to provide some degree of functional vision to individuals who are profoundly blind. In this work, we propose a general neuroprosthesis framework composed of several task-oriented and visual encoding modules. We address the development and implementation of computational models of retinal ganglion cell firing rates and design a tool, Neurolight, that interfaces these models with intracortical microelectrodes in order to create electrical stimulation patterns that can evoke useful perceptions. In addition, the framework allows the deployment of a diverse array of state-of-the-art deep learning techniques for task-oriented and general image pre-processing, such as semantic segmentation and object detection, in our system's pipeline. To the best of our knowledge, this constitutes the first deep-learning-based system designed to directly interface with the visual brain through an intracortical microelectrode array. We implement the complete pipeline, from obtaining a video stream to developing and deploying task-oriented deep learning models and predictive models of retinal ganglion cells' encoding of visual inputs, under the control of a neurostimulation device able to send electrical pulse trains to a microelectrode array implanted in the visual cortex.
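The end-to-end pipeline this abstract describes (camera frame, image pre-processing, an encoding model, an electrode stimulation pattern) can be caricatured in a few functions. Every stage below is a placeholder chosen for illustration: gradient-magnitude edge extraction instead of a deep network, a rectified-intensity stand-in for the retinal ganglion cell model, and a thresholded 10x10 grid standing in for the intracortical microelectrode array.

```python
import numpy as np

def preprocess(frame):
    """Task-oriented pre-processing stand-in: gradient-magnitude edges."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def rgc_firing_rates(frame, gain=1.0):
    """Placeholder encoding model: rectified, peak-normalised intensity."""
    rates = np.maximum(frame * gain, 0.0)
    peak = rates.max()
    return rates / peak if peak > 0 else rates

def stimulation_pattern(rates, n_electrodes=100, threshold=0.2):
    """Map the rate map onto a square electrode grid: average-pool each
    block and stimulate where the pooled rate exceeds threshold."""
    side = int(np.sqrt(n_electrodes))
    h, w = rates.shape
    pooled = rates[:h - h % side, :w - w % side] \
        .reshape(side, h // side, side, w // side).mean(axis=(1, 3))
    return pooled > threshold

frame = np.zeros((40, 40))
frame[10:30, 10:30] = 1.0        # a bright square in the camera image
pattern = stimulation_pattern(rgc_firing_rates(preprocess(frame)))
```

With edge extraction as the pre-processing stage, only electrodes covering the square's outline end up active, which mirrors the abstract's idea of sending task-relevant structure, rather than raw pixels, to the stimulator.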
Affiliation(s)
- Antonio Lozano
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Juan Sebastián Suárez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Cristina Soto-Sánchez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Javier Garrigós
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Javier Martínez-Alvarez
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Manuel Ferrández
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Eduardo Fernández
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain