1. Wang J, Azimi H, Zhao Y, Kaeser M, Vaca Sánchez P, Vazquez-Guardado A, Rogers JA, Harvey M, Rainer G. Optogenetic activation of visual thalamus generates artificial visual percepts. eLife 2023;12:e90431. PMID: 37791662; PMCID: PMC10593406; DOI: 10.7554/elife.90431.
Abstract
The lateral geniculate nucleus (LGN), a retinotopic relay center where visual inputs from the retina are processed and relayed to the visual cortex, has been proposed as a potential target for artificial vision. At present, it is unknown whether optogenetic LGN stimulation is sufficient to elicit behaviorally relevant percepts, and the properties of LGN neural responses relevant for artificial vision have not been thoroughly characterized. Here, we demonstrate that tree shrews pretrained on a visual detection task can detect optogenetic LGN activation using an AAV2-CamKIIα-ChR2 construct and readily generalize from visual to optogenetic detection. Simultaneous recordings of LGN spiking activity and primary visual cortex (V1) local field potentials (LFPs) during optogenetic LGN stimulation show that LGN neurons reliably follow optogenetic stimulation at frequencies up to 60 Hz and uncover a striking phase locking between the V1 LFP and the evoked spiking activity in the LGN. These phase relationships were maintained over a broad range of LGN stimulation frequencies, up to 80 Hz, with spike-field coherence values favoring higher frequencies, indicating the ability to relay temporally precise information to V1 using light activation of the LGN. Finally, V1 LFP responses showed sensitivity values to LGN optogenetic activation that were similar to the animal's behavioral performance. Taken together, our findings confirm the LGN as a potential target for visual prosthetics in a highly visual mammal closely related to primates.
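The phase locking described in this abstract is typically quantified with spike-field coherence. The sketch below is a minimal toy illustration of that analysis, not the study's pipeline: it builds a synthetic V1-like LFP entrained at 60 Hz, a spike train phase-locked to the same drive, and recovers the coherence peak. The sampling rate, firing rate, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of data
rng = np.random.default_rng(0)

# Synthetic V1 LFP: a 60 Hz oscillation (mimicking entrainment to
# 60 Hz optogenetic LGN stimulation) plus white noise.
lfp = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic LGN spike train phase-locked to the same 60 Hz drive:
# spikes are more likely near the oscillation peaks (half-rectified rate).
rate = 50 * np.clip(np.sin(2 * np.pi * 60 * t), 0, None)   # spikes/s
spikes = (rng.random(t.size) < rate / fs).astype(float)

# Spike-field coherence: magnitude-squared coherence between the
# binarized spike train and the LFP, estimated with Welch's method.
f, Cxy = coherence(spikes, lfp, fs=fs, nperseg=1024)

peak_freq = f[np.argmax(Cxy)]
print(f"peak coherence {Cxy.max():.2f} at {peak_freq:.1f} Hz")
```

With strong locking, the coherence spectrum peaks at the stimulation frequency; in real data the same estimator is applied to recorded spike trains and LFPs, typically with multitaper rather than Welch averaging.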
Affiliation(s)
- Jing Wang: Department of Medicine, University of Fribourg, Fribourg, Switzerland; Department of Neurobiology, School of Basic Medical Sciences, Nanjing Medical University, Nanjing, China
- Hamid Azimi: Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Yilei Zhao: Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Melanie Kaeser: Department of Medicine, University of Fribourg, Fribourg, Switzerland
- John A Rogers: Querrey Simpson Institute for Bioelectronics, Northwestern University, Evanston, United States
- Michael Harvey: Department of Medicine, University of Fribourg, Fribourg, Switzerland
- Gregor Rainer: Department of Medicine, University of Fribourg, Fribourg, Switzerland
2. Caravaca-Rodriguez D, Gaytan SP, Suaning GJ, Barriga-Rivera A. Implications of Neural Plasticity in Retinal Prosthesis. Invest Ophthalmol Vis Sci 2022;63:11. PMID: 36251317; DOI: 10.1167/iovs.63.11.11.
Abstract
Retinal degenerative diseases such as retinitis pigmentosa cause a progressive loss of photoreceptors that eventually prevents the affected person from perceiving visual sensations. The absence of a visual input produces a neural rewiring cascade that propagates along the visual system. This remodeling occurs first within the retina. Then, subsequent neuroplastic changes take place at higher visual centers in the brain, produced either by the abnormal neural encoding of the visual inputs delivered by the diseased retina or as the result of an adaptation to visual deprivation. While retinal implants can activate the surviving retinal neurons by delivering electric current, the unselective activation patterns of the different neural populations that exist in the retinal layers differ substantially from those in physiologic vision. Therefore, artificially induced neural patterns are being delivered to a brain that has already undergone important neural reconnections. Whether the modulation of this neural rewiring can improve the performance of retinal prostheses remains a critical question whose answer may be the enabler of improved functional artificial vision and more personalized neurorehabilitation strategies.
Affiliation(s)
- Daniel Caravaca-Rodriguez: Department of Applied Physics III, Technical School of Engineering, Universidad de Sevilla, Sevilla, Spain
- Susana P Gaytan: Department of Physiology, Universidad de Sevilla, Sevilla, Spain
- Gregg J Suaning: School of Biomedical Engineering, University of Sydney, Sydney, Australia
- Alejandro Barriga-Rivera: Department of Applied Physics III, Technical School of Engineering, Universidad de Sevilla, Sevilla, Spain; School of Biomedical Engineering, University of Sydney, Sydney, Australia
3. Mounier E, Abdullah B, Mahdi H, Eldawlatly S. A deep convolutional visual encoding model of neuronal responses in the LGN. Brain Inform 2021;8:11. PMID: 34129111; PMCID: PMC8206408; DOI: 10.1186/s40708-021-00132-6.
Abstract
The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it is far less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus in addition to LGN neuronal firing history to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN of 12 anesthetized rats, yielding a total population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing rate windows. Overall mean correlation coefficients between the actual and the predicted firing rates of 0.57 and 0.7 were achieved for the 10 ms and the 50 ms firing rate windows, respectively. Results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models including the state-of-the-art Generalized Linear Model (GLM). The results indicate the potential of deep convolutional neural networks as viable models of LGN firing.
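An encoder of this kind combines a convolutional transform of the stimulus with a firing-history term. The sketch below is a minimal, untrained toy version of that idea, not the paper's architecture: a single spatiotemporal convolution with a ReLU, averaged over space, plus a linear history term and a softplus readout that keeps the predicted rate non-negative. All dimensions, the kernel count, and the softplus choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions (not from the paper): 8 stimulus frames of
# 16x16 pixels predict the firing rate in one output window.
T, H, W = 8, 16, 16
K, kh, kw = 4, 5, 5   # 4 spatiotemporal kernels with 5x5 spatial extent
HIST = 5              # number of past firing-rate bins used as history

def encode(stimulus, history, kernels, w_out, w_hist, b):
    """Toy convolutional encoder: spatiotemporal convolution -> ReLU ->
    spatial average -> linear readout combined with a firing-history term."""
    feats = []
    for k in range(K):
        acc = 0.0
        # valid convolution over space, full sum over the time axis
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                patch = stimulus[:, i:i + kh, j:j + kw]
                acc += max(0.0, float(np.sum(patch * kernels[k])))
        feats.append(acc / ((H - kh + 1) * (W - kw + 1)))
    drive = np.dot(w_out, feats) + np.dot(w_hist, history) + b
    return np.log1p(np.exp(drive))  # softplus keeps the rate non-negative

kernels = rng.standard_normal((K, T, kh, kw)) * 0.1
w_out = rng.standard_normal(K) * 0.1
w_hist = rng.standard_normal(HIST) * 0.1
stimulus = rng.standard_normal((T, H, W))   # random stand-in stimulus clip
history = rng.random(HIST)                  # recent firing-rate bins

rate = encode(stimulus, history, kernels, w_out, w_hist, 0.0)
print(f"predicted firing rate: {rate:.3f} spikes/bin")
```

A trained model of this family would stack several such layers and fit the kernels and readout weights to recorded firing rates, which is what distinguishes the CNN encoder from a single-filter GLM.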
Affiliation(s)
- Eslam Mounier: Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt
- Bassem Abdullah: Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt
- Hani Mahdi: Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt
- Seif Eldawlatly: Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt; Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
4. Zhang X, Ma Z, Zheng H, Li T, Chen K, Wang X, Liu C, Xu L, Wu X, Lin D, Lin H. The combination of brain-computer interfaces and artificial intelligence: applications and challenges. Ann Transl Med 2020;8:712. PMID: 32617332; PMCID: PMC7327323; DOI: 10.21037/atm.2019.11.109.
Abstract
Brain-computer interfaces (BCIs) have shown great promise as real-time bidirectional links between living brains and actuators. Artificial intelligence (AI), which can advance the analysis and decoding of neural activity, has turbocharged the field of BCIs. Over the past decade, a wide range of AI-assisted BCI applications has emerged. These "smart" BCIs, including motor and sensory BCIs, have achieved notable clinical success, improved the quality of life of paralyzed patients, extended the physical capabilities of able-bodied users, and accelerated the evolution of robots and neurophysiological discovery. However, despite technological improvements, challenges remain with regard to long training periods, real-time feedback, and the monitoring of BCIs. In this article, the authors review the current state of AI as applied to BCIs and describe advances in BCI applications, their challenges, and where they could be headed in the future.
Affiliation(s)
- Xiayin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ziyue Ma: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Huaijin Zheng: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Tongkeng Li: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Kexin Chen: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xun Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenting Liu: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Linxi Xu: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center of Precision Medicine, Sun Yat-sen University, Guangzhou, China