1. Kish KE, Yuan A, Weiland JD. Patient-specific computational models of retinal prostheses. Sci Rep 2023; 13:22271. PMID: 38097732; PMCID: PMC10721907; DOI: 10.1038/s41598-023-49580-6.
Abstract
Retinal prostheses stimulate inner retinal neurons to create visual perception for blind patients. Implanted arrays have many small electrodes. Not all electrodes induce perception at the same stimulus amplitude, requiring clinicians to manually establish a visual perception threshold for each one. Phosphenes created by single-electrode stimuli can also vary in shape, size, and brightness. Computational models provide a tool to predict inter-electrode variability and automate device programming. In this study, we created statistical and patient-specific field-cable models to investigate inter-electrode variability across seven epiretinal prosthesis users. Our statistical analysis revealed that retinal thickness beneath the electrode correlated with perceptual threshold, with a significant fixed effect across participants. Electrode-retina distance and electrode impedance also correlated with perceptual threshold for some participants, but these effects varied by individual. We developed a novel method to construct patient-specific field-cable models from optical coherence tomography images. Predictions with these models significantly correlated with perceptual threshold for 80% of participants. Additionally, we demonstrated that patient-specific field-cable models could predict retinal activity and phosphene size. These computational models could be beneficial for determining optimal stimulation settings in silico, circumventing the trial-and-error testing of a large parameter space in the clinic.
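As a hedged illustration of the per-electrode statistical analysis described above, the sketch below correlates synthetic retinal-thickness values with synthetic perceptual thresholds. The data, the linear thickness-threshold relationship, and all parameter values are invented for illustration and do not come from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Synthetic per-electrode data for one hypothetical participant:
rng = np.random.default_rng(0)
thickness_um = rng.uniform(100.0, 300.0, 60)   # retinal thickness beneath each electrode
# Assume (for illustration) thicker retina -> higher threshold, plus measurement noise:
threshold_ua = 0.5 * thickness_um + rng.normal(0.0, 20.0, 60)

r = pearson_r(thickness_um, threshold_ua)      # strongly positive by construction
```

A full analysis across participants would use a mixed-effects model (participant as a random effect) rather than a single correlation, matching the "fixed effect across participants" language of the abstract.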
Affiliation(s)
- Kathleen E Kish
- Biomedical Engineering, University of Michigan, Ann Arbor, 48105, USA
- BioInterfaces Institute, University of Michigan, Ann Arbor, 48105, USA
- Alex Yuan
- Ophthalmology and Ophthalmic Research, Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, 44195, USA
- James D Weiland
- Biomedical Engineering, University of Michigan, Ann Arbor, 48105, USA
- BioInterfaces Institute, University of Michigan, Ann Arbor, 48105, USA
- Ophthalmology and Visual Science, University of Michigan, Ann Arbor, 48105, USA
2. Wood EH, Kreymerman A, Kowal T, Buickians D, Sun Y, Muscat S, Mercola M, Moshfeghi DM, Goldberg JL. Cellular and subcellular optogenetic approaches towards neuroprotection and vision restoration. Prog Retin Eye Res 2023; 96:101153. PMID: 36503723; PMCID: PMC10247900; DOI: 10.1016/j.preteyeres.2022.101153.
Abstract
Optogenetics is defined as the combination of genetic and optical methods to induce or inhibit well-defined events in isolated cells, tissues, or animals. While optogenetics within ophthalmology has been primarily applied towards treating inherited retinal disease, there are a myriad of other applications that hold great promise for a variety of eye diseases including cellular regeneration, modulation of mitochondria and metabolism, regulation of intraocular pressure, and pain control. Supported by primary data from the authors' work with in vitro and in vivo applications, we introduce a novel approach to metabolic regulation, Opsins to Restore Cellular ATP (ORCA). We review the fundamental constructs for ophthalmic optogenetics, present current therapeutic approaches and clinical trials, and discuss the future of subcellular and signaling pathway applications for neuroprotection and vision restoration.
Affiliation(s)
- Edward H Wood
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA; Stanford Cardiovascular Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Alexander Kreymerman
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA; Stanford Cardiovascular Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Tia Kowal
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- David Buickians
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Yang Sun
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Stephanie Muscat
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Mark Mercola
- Stanford Cardiovascular Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Darius M Moshfeghi
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
- Jeffrey L Goldberg
- Spencer Center for Vision Research, Byers Eye Institute, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, CA, USA
3. Kish KE, Yuan A, Weiland JD. Patient-specific computational models of retinal prostheses. Research Square 2023:rs.3.rs-3168193 (preprint). PMID: 37577674; PMCID: PMC10418526; DOI: 10.21203/rs.3.rs-3168193/v1.
Abstract
Retinal prostheses stimulate inner retinal neurons to create visual perception for blind patients. Implanted arrays have many small electrodes, which act as pixels. Not all electrodes induce perception at the same stimulus amplitude, requiring clinicians to manually establish a visual perception threshold for each one. Phosphenes created by single-electrode stimuli can also vary in shape, size, and brightness. Computational models provide a tool to predict inter-electrode variability and automate device programming. In this study, we created statistical and patient-specific field-cable models to investigate inter-electrode variability across seven epiretinal prosthesis users. Our statistical analysis revealed that retinal thickness beneath the electrode correlated with perceptual threshold, with a significant fixed effect across participants. Electrode-retina distance and electrode impedance also correlated with perceptual threshold for some participants, but these effects varied by individual. We developed a novel method to construct patient-specific field-cable models from optical coherence tomography images. Predictions with these models significantly correlated with perceptual threshold for 80% of participants. Additionally, we demonstrated that patient-specific field-cable models could predict retinal activity and phosphene size. These computational models could be beneficial for determining optimal stimulation settings in silico, circumventing the trial-and-error testing of a large parameter space in the clinic.
Affiliation(s)
- Alex Yuan
- Cole Eye Institute, Cleveland Clinic Foundation
4. Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices to restore vision for patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in a retinal prosthesis plays an important role in the restoration effect. Its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, combining new discoveries about the retina's working principles with state-of-the-art computer vision models. Approach. We investigated the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies can be attributed to three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina, then demonstrate the vision rehabilitation mechanisms of three representative retinal prostheses. We summarize the computational frameworks abstracted from the normal retina, along with the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and offer our prospects for future directions to improve the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli. Moreover, the suggestions for future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
5. Fauvel T, Chalk M. Human-in-the-loop optimization of visual prosthetic stimulation. J Neural Eng 2022; 19. PMID: 35667363; DOI: 10.1088/1741-2552/ac7615.
Abstract
Objective. Retinal prostheses are a promising strategy to restore sight to patients with retinal degenerative diseases. These devices compensate for the loss of photoreceptors by electrically stimulating neurons in the retina. Currently, the visual function that can be recovered with such devices is very limited. This is due, in part, to current spread, unintended axonal activation, and the limited resolution of existing devices. Here we show, using a recent model of prosthetic vision, that optimizing how visual stimuli are encoded by the device can help overcome some of these limitations, leading to dramatic improvements in visual perception. Approach. We propose a strategy to do this in practice, using patients' feedback in a visual task. The main challenge of our approach comes from the fact that, typically, one only has access to a limited number of noisy responses from patients. We propose two ways to deal with this: first, we use a model of prosthetic vision to constrain and simplify the optimization. We show that, if one knew the parameters of this model for a given patient, it would be possible to greatly improve their perceptual performance. Second, we propose a preferential Bayesian optimization procedure to efficiently learn these model parameters for each patient, using a minimal number of trials. Main results. To test our approach, we presented healthy subjects with visual stimuli generated by a recent model of prosthetic vision, replicating the perceptual experience of patients fitted with an implant. Our optimization procedure led to significant and robust improvements in perceived image quality that transferred to increased performance in other tasks. Significance. Importantly, our strategy is agnostic to the type of prosthesis and thus could readily be implemented in existing implants.
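The core human-in-the-loop idea (choosing device encoding settings from a patient's noisy preference judgments) can be sketched as follows. This is a toy stand-in, not the authors' preferential Bayesian optimization: the logistic choice model, the parameter grid, and the simple win-counting rule are all assumptions made for illustration:

```python
import numpy as np

def preference_prob(q_a, q_b, noise=1.0):
    """Logistic model of the chance that the subject prefers stimulus A over B."""
    return 1.0 / (1.0 + np.exp(-(q_a - q_b) / noise))

def estimate_best(params, true_quality, trials=200, rng=None):
    """Pick the encoder setting that wins the most noisy pairwise comparisons."""
    rng = rng or np.random.default_rng(0)
    wins = np.zeros(len(params))
    for _ in range(trials):
        a, b = rng.choice(len(params), size=2, replace=False)
        if rng.random() < preference_prob(true_quality[a], true_quality[b]):
            wins[a] += 1
        else:
            wins[b] += 1
    return int(np.argmax(wins))

params = [0.0, 0.5, 1.0, 2.0]        # hypothetical encoder settings
true_quality = [0.0, 3.0, 1.0, 0.5]  # hidden perceptual quality of each setting (invented)
best = estimate_best(params, true_quality)
```

In the actual procedure, a Gaussian-process preference model would typically replace the win counts, so that each new comparison trial can be chosen to be maximally informative rather than at random.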
Affiliation(s)
- Tristan Fauvel
- Institut de la Vision, INSERM, 17 Rue Moreau, Paris, 75014, France
- Matthew Chalk
- Institut de la Vision, INSERM, 17 Rue Moreau, Paris, 75014, France
6. Ahn J, Cha S, Choi KE, Kim SW, Yoo Y, Goo YS. Correlated Activity in the Degenerate Retina Inhibits Focal Response to Electrical Stimulation. Front Cell Neurosci 2022; 16:889663. PMID: 35602554; PMCID: PMC9114441; DOI: 10.3389/fncel.2022.889663.
Abstract
Retinal prostheses have shown some clinical success in patients with retinitis pigmentosa and age-related macular degeneration. However, even after the implantation of a retinal prosthesis, the patient's visual acuity is at best less than 20/420. Reduced visual acuity may be explained by a decrease in the signal-to-noise ratio due to the spontaneous hyperactivity of retinal ganglion cells (RGCs) found in degenerate retinas. Unfortunately, abnormal retinal rewiring, commonly observed in degenerate retinas, has rarely been considered in the development of retinal prostheses. The purpose of this study was to investigate the aberrant retinal network response to electrical stimulation in terms of the spatial distribution of the electrically evoked RGC population. An 8 × 8 multielectrode array was used to measure the spiking activity of the RGC population. RGC spikes were recorded in wild-type [C57BL/6J; P56 (postnatal day 56)], rd1 (P56), and rd10 (P14 and P56) mouse retinas, and in macaque [wild-type and drug-induced retinal degeneration (RD) model] retinas. First, we performed a spike correlation analysis between RGCs to determine RGC connectivity. No correlation was observed between RGCs in the control group, comprising wild-type mouse, rd10 P14 mouse, and wild-type macaque retinas. In contrast, in the RD group, comprising rd1, rd10 P56, and RD macaque retinas, RGCs up to approximately 400–600 μm apart were significantly correlated. Moreover, to investigate the RGC population response to electrical stimulation, the number of electrically evoked RGC spikes was measured as a function of the distance between the stimulation and recording electrodes. With an increase in the inter-electrode distance, the number of electrically evoked RGC spikes decreased exponentially in the control group. In contrast, electrically evoked RGC spikes were observed throughout the retina in the RD group, regardless of the inter-electrode distance. Taken together, in the degenerate retina, a more strongly coupled retinal network resulted in the widespread distribution of electrically evoked RGC spikes. This finding could explain the low-resolution vision in prosthesis-implanted patients.
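The pairwise spike-correlation analysis described above can be sketched by binning two spike trains and computing a Pearson correlation on the binned counts. This is a simplified illustration; the bin width, spike times, and the exact correlation measure used in the study are assumptions:

```python
import numpy as np

def binned_correlation(spikes_a, spikes_b, t_max, bin_s=0.01):
    """Pearson correlation of two spike trains after binning spike times (s)."""
    n_bins = int(round(t_max / bin_s))
    bins = np.linspace(0.0, t_max, n_bins + 1)
    ca, _ = np.histogram(spikes_a, bins)
    cb, _ = np.histogram(spikes_b, bins)
    ca = ca - ca.mean()
    cb = cb - cb.mean()
    denom = np.sqrt((ca @ ca) * (cb @ cb))
    return float((ca @ cb) / denom) if denom > 0 else 0.0
```

Perfectly synchronous trains give a correlation of 1.0, trains that fire in complementary bins approach -1.0, and independent trains hover near 0; in the study, such pairwise values were examined as a function of the distance between the two recorded RGCs.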
Affiliation(s)
- Jungryul Ahn
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju, South Korea
- Seongkwang Cha
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju, South Korea
- Kwang-Eon Choi
- Department of Ophthalmology, Korea University College of Medicine, Seoul, South Korea
- Seong-Woo Kim
- Department of Ophthalmology, Korea University College of Medicine, Seoul, South Korea
- Yongseok Yoo
- Department of Electronics Engineering, Incheon National University, Incheon, South Korea
- Yong Sook Goo
- Department of Physiology, Chungbuk National University School of Medicine, Cheongju, South Korea
7. Abbasi B, Rizzo JF. Advances in Neuroscience, Not Devices, Will Determine the Effectiveness of Visual Prostheses. Semin Ophthalmol 2021; 36:168-175. PMID: 33734937; DOI: 10.1080/08820538.2021.1887902.
Abstract
Background: Innovations in engineering and neuroscience have enabled the development of sophisticated visual prosthetic devices. In clinical trials, these devices have provided visual acuities as high as 20/460, enabled coarse navigation, and even allowed for reading of short words. However, long-term commercial viability arguably rests on attaining even better vision and more definitive improvements in tasks of daily living and quality of life. Purpose: Here we review technological and biological obstacles in the implementation of visual prosthetics. Conclusions: Research in the visual prosthetic field has tackled significant technical challenges, including biocompatibility, signal spread through neural tissue, and inadvertent activation of passing axons; however, significant gaps in knowledge remain in the realm of neuroscience, including the neural code of vision and visual plasticity. We assert that further optimization of prosthetic devices alone will not provide markedly improved visual outcomes without significant advances in our understanding of neuroscience.
Affiliation(s)
- Bardia Abbasi
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
- Joseph F Rizzo
- Neuro-Ophthalmology Service, Department of Ophthalmology, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA, USA
8. What do blind people "see" with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS One 2021; 16:e0229189. PMID: 33566851; PMCID: PMC7875418; DOI: 10.1371/journal.pone.0229189.
Abstract
INTRODUCTION Retinal implants have now been approved and commercially available for certain clinical populations for over 5 years, with hundreds of individuals implanted, scores of them closely followed in research trials. Despite these numbers, however, few data are available that would help us answer basic questions regarding the nature and outcomes of artificial vision: what do recipients see when the device is turned on for the first time, and how does that change over time? METHODS Semi-structured interviews and observations were undertaken at two sites in France and the UK with 16 recipients who had received either the Argus II or IRIS II devices. Data were collected at various time points in the process that implant recipients went through in receiving and learning to use the device, including initial evaluation, implantation, initial activation and systems fitting, re-education and finally post-education. These data were supplemented with data from interviews conducted with vision rehabilitation specialists at the clinical sites and clinical researchers at the device manufacturers (Second Sight and Pixium Vision). Observational and interview data were transcribed, coded and analyzed using an approach guided by Interpretative Phenomenological Analysis (IPA). RESULTS Implant recipients described the perceptual experience produced by their epiretinal implants as fundamentally, qualitatively different than natural vision. All used terms that invoked electrical stimuli to describe the appearance of their percepts, yet the characteristics used to describe the percepts varied significantly between recipients. Artificial vision for these recipients was a highly specific, learned skill-set that combined particular bodily techniques, associative learning and deductive reasoning in order to build a "lexicon of flashes": a distinct perceptual vocabulary that they then used to decompose, recompose and interpret their surroundings. The percept did not transform over time; rather, the recipient became better at interpreting the signals they received, using cognitive techniques. The process of using the device never ceased to be cognitively fatiguing, and did not come without risk or cost to the recipient. In exchange, recipients received hope and purpose through participation, as well as a new kind of sensory signal that may not have afforded practical or functional use in daily life but, for some, provided a kind of "contemplative perception" that recipients tailored to individualized activities. CONCLUSION Attending to the qualitative reports of implant recipients regarding the experience of artificial vision provides valuable information not captured by extant clinical outcome measures.
9. Brackbill N, Rhoades C, Kling A, Shah NP, Sher A, Litke AM, Chichilnisky EJ. Reconstruction of natural images from responses of primate retinal ganglion cells. eLife 2020; 9:e58516. PMID: 33146609; PMCID: PMC7752138; DOI: 10.7554/eLife.58516.
Abstract
The visual message conveyed by a retinal ganglion cell (RGC) is often summarized by its spatial receptive field, but in principle also depends on the responses of other RGCs and natural image statistics. This possibility was explored by linear reconstruction of natural images from responses of the four numerically-dominant macaque RGC types. Reconstructions were highly consistent across retinas. The optimal reconstruction filter for each RGC - its visual message - reflected natural image statistics, and resembled the receptive field only when nearby, same-type cells were included. ON and OFF cells conveyed largely independent, complementary representations, and parasol and midget cells conveyed distinct features. Correlated activity and nonlinearities had statistically significant but minor effects on reconstruction. Simulated reconstructions, using linear-nonlinear cascade models of RGC light responses that incorporated measured spatial properties and nonlinearities, produced similar results. Spatiotemporal reconstructions exhibited similar spatial properties, suggesting that the results are relevant for natural vision.
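Linear reconstruction of images from population responses, as used above, reduces to a least-squares fit of reconstruction filters. The sketch below is fully synthetic: the random linear encoding, the dimensions, and the noise level are invented for illustration, whereas the study fit filters to measured macaque RGC responses:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_cells, n_images = 16, 32, 500

# Hypothetical linear encoding: each model "RGC" weights the image by a random filter.
encoding = rng.normal(size=(n_cells, n_pixels))
images = rng.normal(size=(n_images, n_pixels))
responses = images @ encoding.T + 0.1 * rng.normal(size=(n_images, n_cells))

# Optimal linear reconstruction filters: column j of each cell's row is that
# cell's contribution to pixel j, i.e. its "visual message".
filters, *_ = np.linalg.lstsq(responses, images, rcond=None)
reconstructed = responses @ filters

mse = float(((reconstructed - images) ** 2).mean())
```

Because the synthetic encoding is linear and only lightly noisy, reconstruction here is nearly exact; the interesting finding in the study is how the fitted filters change when correlated, same-type neighbors are included or excluded.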
Affiliation(s)
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, United States
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, United States
- Alexandra Kling
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, United States
- EJ Chichilnisky
- Department of Neurosurgery, Stanford School of Medicine, Stanford, United States
- Department of Ophthalmology, Stanford University, Stanford, United States
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, United States
10. Shah NP, Chichilnisky EJ. Computational challenges and opportunities for a bi-directional artificial retina. J Neural Eng 2020; 17:055002. PMID: 33089827; DOI: 10.1088/1741-2552/aba8b1.
Abstract
A future artificial retina that can restore high acuity vision in blind people will rely on the capability to both read (observe) and write (control) the spiking activity of neurons using an adaptive, bi-directional and high-resolution device. Although current research is focused on overcoming the technical challenges of building and implanting such a device, exploiting its capabilities to achieve more acute visual perception will also require substantial computational advances. Using high-density large-scale recording and stimulation in the primate retina with an ex vivo multi-electrode array lab prototype, we frame several of the major computational problems, and describe current progress and future opportunities in solving them. First, we identify cell types and locations from spontaneous activity in the blind retina, and then efficiently estimate their visual response properties by using a low-dimensional manifold of inter-retina variability learned from a large experimental dataset. Second, we estimate retinal responses to a large collection of relevant electrical stimuli by passing current patterns through an electrode array, spike sorting the resulting recordings and using the results to develop a model of evoked responses. Third, we reproduce the desired responses for a given visual target by temporally dithering a diverse collection of electrical stimuli within the integration time of the visual system. Together, these novel approaches may substantially enhance artificial vision in a next-generation device.
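The third step above (temporally dithering electrical stimuli to reproduce a desired response) can be illustrated with a greedy selection over a small dictionary of evoked-response vectors. The dictionary, target, and greedy squared-error rule here are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def dither_stimuli(target, dictionary, n_steps):
    """Greedily pick one stimulus per integration-time slot so that the
    summed evoked response approaches the target response."""
    target = np.asarray(target, float)
    achieved = np.zeros_like(target)
    chosen = []
    for _ in range(n_steps):
        # Squared error to the target if each candidate stimulus were delivered next:
        errors = [float(((achieved + np.asarray(d, float) - target) ** 2).sum())
                  for d in dictionary]
        best = int(np.argmin(errors))
        chosen.append(best)
        achieved = achieved + np.asarray(dictionary[best], float)
    return chosen, achieved
```

For a target response of [3, 1] over two cells and a dictionary [[1, 0], [0, 1], [1, 1]] with three slots, the greedy rule picks stimulus 2 and then stimulus 0 twice, matching the target exactly. The premise, per the abstract, is that responses summed within the visual system's integration time are perceptually equivalent to a single simultaneous pattern.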
Affiliation(s)
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, United States of America
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
11. Lozano A, Suárez JS, Soto-Sánchez C, Garrigós J, Martínez-Alvarez JJ, Ferrández JM, Fernández E. Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses. Int J Neural Syst 2020; 30:2050045. DOI: 10.1142/s0129065720500458.
Abstract
Visual neuroprostheses, which provide electrical stimulation at several sites along the human visual system, constitute a potential tool for vision restoration for the blind. Scientific and technological progress in the fields of neural engineering and artificial vision comes with new theories and tools that, along with the dawn of modern artificial intelligence, constitute a promising framework for the further development of neurotechnology. In the framework of the development of a Cortical Visual Neuroprosthesis for the blind (CORTIVIS), we now face the challenge of developing computationally powerful tools and flexible approaches that will allow us to provide some degree of functional vision to individuals who are profoundly blind. In this work, we propose a general neuroprosthesis framework composed of several task-oriented and visual encoding modules. We address the development and implementation of computational models of the firing rates of retinal ganglion cells and design a tool, Neurolight, that allows these models to be interfaced with intracortical microelectrodes in order to create electrical stimulation patterns that can evoke useful perceptions. In addition, the developed framework allows the deployment of a diverse array of state-of-the-art deep-learning techniques for task-oriented and general image pre-processing, such as semantic segmentation and object detection, in our system's pipeline. To the best of our knowledge, this constitutes the first deep-learning-based system designed to directly interface with the visual brain through an intracortical microelectrode array. We implement the complete pipeline, from obtaining a video stream to developing and deploying task-oriented deep-learning models and predictive models of retinal ganglion cells' encoding of visual inputs, under the control of a neurostimulation device able to send electrical pulse trains to a microelectrode array implanted in the visual cortex.
Affiliation(s)
- Antonio Lozano
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Juan Sebastián Suárez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Cristina Soto-Sánchez
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
- CIBER-BBN, 28029 Madrid, Spain
- Javier Garrigós
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Javier Martínez-Alvarez
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- J. Manuel Ferrández
- Departamento de Electrónica, Tecnología de Computadoras y Proyectos, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain
- Eduardo Fernández
- Instituto de Bioingeniería, Universidad Miguel Hernández, 03202 Alicante, Spain
12. Melanitis N, Nikita KS. Biologically-inspired image processing in computational retina models. Comput Biol Med 2019; 113:103399. PMID: 31472425; DOI: 10.1016/j.compbiomed.2019.103399.
Abstract
Retinal Prosthesis (RP) is an approach to restore vision using an implanted device to electrically stimulate the retina. A fundamental problem in RP is to translate the visual scene into retinal neural spike patterns, mimicking the computations normally done by retinal neural circuits. With a view towards improved RP interventions, we propose a Computer Vision (CV) image preprocessing method based on Retinal Ganglion Cell functions and then use the method to reproduce retinal output with a standard Generalized Integrate & Fire (GIF) neuron model. The "Virtual Retina" simulation software is used to provide the stimulus-retina response data to train and test our model. We use a sequence of natural images as model input and show that models using the proposed CV image preprocessing outperform models using raw image intensity (interspike-interval distance 0.17 vs 0.27). This result is aligned with our hypothesis that raw image intensity is an improper image representation for predicting Retinal Ganglion Cell responses.
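The spike-generation stage in such a pipeline can be sketched with a basic leaky integrate-and-fire neuron, a simplified stand-in for the Generalized Integrate & Fire model used in the paper; all parameter values here are invented:

```python
def lif_spikes(current, dt=1e-3, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire: integrate the input drive, emit a spike and
    reset whenever the membrane variable crosses threshold."""
    v, spike_times = 0.0, []
    for i, drive in enumerate(current):
        v += dt * (-v / tau + drive)   # Euler step of the leaky integrator
        if v >= v_th:
            spike_times.append(i * dt)
            v = 0.0
    return spike_times

# A constant suprathreshold drive produces regular spiking:
spikes = lif_spikes([100.0] * 1000)
```

In the paper's setting, the drive would come from the CV-preprocessed image sequence rather than a constant, and model quality is scored by an interspike-interval distance between predicted and "Virtual Retina" spike trains.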
Affiliation(s)
- Nikos Melanitis
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Konstantina S Nikita
- Biomedical Simulations and Imaging Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
13. Kupers ER, Carrasco M, Winawer J. Modeling visual performance differences 'around' the visual field: A computational observer approach. PLoS Comput Biol 2019; 15:e1007063. PMID: 31125331; PMCID: PMC6553792; DOI: 10.1371/journal.pcbi.1007063.
Abstract
Visual performance depends on polar angle, even when eccentricity is held constant; on many psychophysical tasks observers perform best when stimuli are presented on the horizontal meridian, worst on the upper vertical, and intermediate on the lower vertical meridian. This variation in performance 'around' the visual field can be as pronounced as that of doubling the stimulus eccentricity. The causes of these asymmetries in performance are largely unknown. Some factors in the eye, e.g. cone density, are positively correlated with the reported variations in visual performance with polar angle. However, the question remains whether these correlations can quantitatively explain the perceptual differences observed 'around' the visual field. To investigate the extent to which the earliest stages of vision (optical quality and cone density) contribute to performance differences with polar angle, we created a computational observer model. The model uses the open-source software package ISETBIO to simulate an orientation discrimination task for which visual performance differs with polar angle. The model starts from the photons emitted by a display, which pass through simulated human optics with fixational eye movements, followed by cone isomerizations in the retina. Finally, we classify stimulus orientation using a support vector machine to learn a linear classifier on the photon absorptions. To account for the 30% increase in contrast thresholds for upper vertical compared to horizontal meridian, as observed psychophysically on the same task, our computational observer model would require either an increase of ~7 diopters of defocus or a reduction of 500% in cone density. These values far exceed the actual variations as a function of polar angle observed in human eyes. Therefore, we conclude that these factors in the eye only account for a small fraction of differences in visual performance with polar angle. Substantial additional asymmetries must arise in later retinal and/or cortical processing.
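The read-out stage of such a computational observer can be illustrated with a linear classifier applied to noisy simulated responses. This sketch substitutes a nearest-centroid rule for the paper's linear support vector machine and uses invented stimulus templates and noise levels; it shows only the qualitative point that front-end noise degrades discrimination accuracy:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trials(template_a, template_b, noise_sd, n=400):
    """Simulated responses to two oriented stimuli with additive noise."""
    labels = rng.integers(0, 2, n)
    clean = np.where(labels[:, None] == 0, template_a, template_b)
    return clean + noise_sd * rng.normal(size=clean.shape), labels

def centroid_accuracy(x, y):
    """Nearest-centroid linear read-out, trained on the first half of the
    trials and tested on the second half."""
    half = len(y) // 2
    c0 = x[:half][y[:half] == 0].mean(axis=0)
    c1 = x[:half][y[:half] == 1].mean(axis=0)
    d0 = np.linalg.norm(x[half:] - c0, axis=1)
    d1 = np.linalg.norm(x[half:] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return float((pred == y[half:]).mean())

# Invented "absorption" templates for two orientations:
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
acc_low = centroid_accuracy(*make_trials(a, b, noise_sd=0.1))   # easy regime
acc_high = centroid_accuracy(*make_trials(a, b, noise_sd=3.0))  # noisy regime
```

In the paper the analogous knobs are defocus and cone density applied before the classifier, and the question is how large those changes must be to reproduce the psychophysically measured threshold differences.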
Affiliation(s)
- Eline R. Kupers
- Department of Psychology, New York University, New York, New York, United States of America
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
- Jonathan Winawer
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America