1
Yin X, Wu Z, Wang H. A novel DRL-guided sparse voxel decoding model for reconstructing perceived images from brain activity. J Neurosci Methods 2024; 412:110292. [PMID: 39299579] [DOI: 10.1016/j.jneumeth.2024.110292] [Received: 06/03/2024] [Revised: 08/31/2024] [Accepted: 09/15/2024]
Abstract
BACKGROUND Due to the sparse encoding characteristics of the human visual cortex and the scarcity of paired {image, fMRI} training samples, voxel selection is an effective means of reconstructing perceived images from fMRI. However, existing data-driven voxel selection methods have not achieved satisfactory results. NEW METHOD Here, a novel deep reinforcement learning-guided sparse voxel (DRL-SV) decoding model is proposed to reconstruct perceived images from fMRI. We innovatively cast voxel selection as a Markov decision process (MDP), training agents to select voxels that are highly involved in specific visual encoding. RESULTS Experimental results on two public datasets verify the effectiveness of the proposed DRL-SV, which accurately selects voxels highly involved in neural encoding, thereby improving the quality of visual image reconstruction. COMPARISON WITH EXISTING METHODS We qualitatively and quantitatively compared our results with state-of-the-art (SOTA) methods and obtained better reconstructions. Compared with traditional data-driven baselines, DRL-SV produced sparser voxel selections yet better reconstruction performance. CONCLUSIONS DRL-SV selects voxels involved in visual encoding more accurately than data-driven voxel selection methods in few-shot settings. The proposed decoding model provides a new avenue for improving the image reconstruction quality of the primary visual cortex.
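The abstract above casts voxel selection as a Markov decision process solved by reinforcement learning. A deliberately minimal sketch of that framing (not the authors' DRL-SV model: the reward below is a synthetic stand-in for the reconstruction-quality signal, and a tabular one-step learner replaces the deep policy):

```python
import random

def run_voxel_selection(informative, n_episodes=2000, eps=0.1, alpha=0.2):
    """Toy MDP: at step t the agent decides to select or skip voxel t.
    Reward is +1 for selecting an informative voxel or skipping a noise
    voxel, -1 otherwise (a stand-in for the reconstruction-quality
    reward a real decoder would provide)."""
    n = len(informative)
    q = [[0.0, 0.0] for _ in range(n)]  # Q[voxel][action]; action 1 = select
    random.seed(0)
    for _ in range(n_episodes):
        for t in range(n):
            # epsilon-greedy action choice
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[t][act])
            r = 1.0 if (a == 1) == informative[t] else -1.0
            q[t][a] += alpha * (r - q[t][a])  # one-step value update
    return [t for t in range(n) if q[t][1] > q[t][0]]

selected = run_voxel_selection([True, False, True, True, False, False])
```

In the paper's setting, the reward would instead be computed from how well the currently selected voxel subset supports the image-reconstruction decoder, which is what makes the few-shot regime feasible.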
Affiliation(s)
- Xu Yin
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
- Zhengping Wu
- School of Innovations, Sanjiang University, China; School of Electronic Science and Engineering, Nanjing University, China
- Haixian Wang
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, Jiangsu 210096, China
2
Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky EJ. Fixational eye movements enhance the precision of visual information transmitted by the primate retina. Nat Commun 2024; 15:7964. [PMID: 39261491] [PMCID: PMC11390888] [DOI: 10.1038/s41467-024-52304-7] [Received: 10/18/2023] [Accepted: 08/29/2024]
Abstract
Fixational eye movements alter the number and timing of spikes transmitted from the retina to the brain, but whether these changes enhance or degrade the retinal signal is unclear. To quantify this, we developed a Bayesian method for reconstructing natural images from the recorded spikes of hundreds of retinal ganglion cells (RGCs) in the macaque retina (male), combining a likelihood model for RGC light responses with the natural image prior implicitly embedded in an artificial neural network optimized for denoising. The method matched or surpassed the performance of previous reconstruction algorithms, and provides an interpretable framework for characterizing the retinal signal. Reconstructions were improved with artificial stimulus jitter that emulated fixational eye movements, even when the eye movement trajectory was assumed to be unknown and had to be inferred from retinal spikes. Reconstructions were degraded by small artificial perturbations of spike times, revealing more precise temporal encoding than suggested by previous studies. Finally, reconstructions were substantially degraded when derived from a model that ignored cell-to-cell interactions, indicating the importance of stimulus-evoked correlations. Thus, fixational eye movements enhance the precision of the retinal representation.
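The reconstruction method described here combines a response likelihood with an image prior supplied by a denoiser. A toy plug-and-play version of that scheme (a linear-Gaussian measurement model and a 3-tap smoothing "denoiser" stand in for the RGC spike likelihood and the neural-network prior used in the paper):

```python
def reconstruct(y, A, n_iter=200, step=0.1):
    """Plug-and-play sketch of MAP-style reconstruction: alternate a
    likelihood gradient step for y = A x + noise with a denoising step
    that enforces the prior (here simple local smoothing)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # gradient ascent on the Gaussian log-likelihood: A^T (y - A x)
        r = [yi - sum(row[j] * x[j] for j in range(n)) for yi, row in zip(y, A)]
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [xj + step * gj for xj, gj in zip(x, g)]
        # denoising step: a 3-tap moving average as a stand-in prior
        x = [(x[max(j - 1, 0)] + x[j] + x[min(j + 1, n - 1)]) / 3 for j in range(n)]
    return x

# tiny demo: identity measurements of a constant "image"
x_hat = reconstruct([1.0, 1.0, 1.0, 1.0],
                    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
```

The paper's actual likelihood is a model of RGC spiking, and the prior is implicit in a trained image denoiser; the alternating structure above is the common scaffold such methods share.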
Affiliation(s)
- Eric G Wu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Nora Brackbill
- Department of Physics, Stanford University, Stanford, CA, USA
- Colleen Rhoades
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Alex R Gogliettino
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
- Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Nishal P Shah
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eero P Simoncelli
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- E J Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, 452 Lomita Mall, Stanford, 94305, CA, USA
3
Chen Q, Ingram NT, Baudin J, Angueyra JM, Sinha R, Rieke F. Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses. bioRxiv 2024:2023.10.20.563304. [PMID: 37961603] [PMCID: PMC10634684] [DOI: 10.1101/2023.10.20.563304]
Abstract
Computation in neural circuits relies on the judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this limits our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents - including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
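The core idea, designing stimuli that compensate for known phototransduction nonlinearities by inverting a forward model, can be illustrated with a toy saturating nonlinearity (a stand-in for the full cascade model the paper uses; the half-saturation constant is arbitrary):

```python
def forward_model(stim, half_sat=50.0):
    """Toy Naka-Rushton-style saturation: normalized photocurrent
    r = s / (s + h), a stand-in for the rod/cone cascade model."""
    return stim / (stim + half_sat)

def design_stimulus(target_response, half_sat=50.0):
    """Invert the forward model to find the light intensity that
    produces a desired (normalized) response: s = h * r / (1 - r).
    Valid for 0 <= target_response < 1."""
    return half_sat * target_response / (1.0 - target_response)
```

The paper's tool does this with a dynamical cascade model rather than a static curve, but the logic is the same: if the phototransduction stage is well characterized and invertible, stimuli can be pre-distorted so its nonlinearities (including adaptation) are cancelled, isolating downstream circuit nonlinearities.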
Affiliation(s)
- Qiang Chen
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Norianne T. Ingram
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Jacob Baudin
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
4
Wu EG, Brackbill N, Rhoades C, Kling A, Gogliettino AR, Shah NP, Sher A, Litke AM, Simoncelli EP, Chichilnisky E. Fixational Eye Movements Enhance the Precision of Visual Information Transmitted by the Primate Retina. bioRxiv 2024:2023.08.12.552902. [PMID: 37645934] [PMCID: PMC10462030] [DOI: 10.1101/2023.08.12.552902]
5
Chen Y, Beech P, Yin Z, Jia S, Zhang J, Yu Z, Liu JK. Decoding dynamic visual scenes across the brain hierarchy. PLoS Comput Biol 2024; 20:e1012297. [PMID: 39093861] [PMCID: PMC11324145] [DOI: 10.1371/journal.pcbi.1012297] [Received: 12/12/2023] [Revised: 08/14/2024] [Accepted: 07/03/2024]
Abstract
Understanding the computational mechanisms that underlie the encoding and decoding of environmental stimuli is a central goal in neuroscience. Central to this pursuit is the exploration of how the brain represents visual information across its hierarchical architecture. A prominent challenge resides in discerning the neural underpinnings of the processing of dynamic natural visual scenes. Although considerable research efforts have been made to characterize individual components of the visual pathway, a systematic understanding of the distinctive neural coding associated with visual stimuli, as they traverse this hierarchical landscape, remains elusive. In this study, we leverage the comprehensive Allen Visual Coding-Neuropixels dataset and utilize the capabilities of deep learning neural network models to study neural coding in response to dynamic natural visual scenes across an expansive array of brain regions. Our decoding model adeptly deciphers visual scenes from neural spiking patterns exhibited within each distinct brain area. Comparing decoding performance across areas reveals notable encoding proficiency within the visual cortex and subcortical nuclei, in contrast to relatively reduced encoding activity within hippocampal neurons. Strikingly, our results unveil a robust correlation between our decoding metrics and well-established anatomical and functional hierarchy indexes. These findings corroborate existing knowledge in visual coding related to artificial visual stimuli and illuminate the functional role of these deeper brain regions using dynamic stimuli. Consequently, our results suggest a novel perspective on the utility of decoding neural network models as a metric for quantifying the encoding quality of dynamic natural visual scenes represented by neural responses, thereby advancing our comprehension of visual coding within the complex hierarchy of the brain.
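The headline result, a correlation between per-region decoding performance and hierarchy indexes, is a rank correlation across brain areas. A self-contained sketch of that comparison (the region names and every number below are invented for illustration, not values from the paper):

```python
def spearman(xs, ys):
    """Spearman rank correlation (no ties assumed): Pearson correlation
    of the rank-transformed values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical per-region decoding scores vs. a hierarchy index
decoding = {"V1": 0.82, "LM": 0.74, "AL": 0.69, "CA1": 0.41}
hierarchy = {"V1": 0.0, "LM": 0.3, "AL": 0.5, "CA1": 1.0}
regions = list(decoding)
rho = spearman([decoding[r] for r in regions], [hierarchy[r] for r in regions])
```

With these made-up numbers decoding quality falls monotonically up the hierarchy, giving a perfect negative rank correlation; the paper reports the analogous correlation computed over the Allen dataset's recorded areas.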
Affiliation(s)
- Ye Chen
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Peter Beech
- School of Computing, University of Leeds, Leeds, United Kingdom
- Ziwei Yin
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Shanshan Jia
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jiayi Zhang
- Institutes of Brain Science, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai, China
- Zhaofei Yu
- School of Computer Science, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- Jian K. Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
- School of Computer Science, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
6
Fine I, Boynton GM. A virtual patient simulation modeling the neural and perceptual effects of human visual cortical stimulation, from pulse trains to percepts. Sci Rep 2024; 14:17400. [PMID: 39075065] [PMCID: PMC11286872] [DOI: 10.1038/s41598-024-65337-1] [Received: 03/16/2023] [Accepted: 06/19/2024]
Abstract
The field of cortical sight restoration prostheses is making rapid progress, with three clinical trials of visual cortical prostheses underway. However, as yet, we have only limited insight into the perceptual experiences produced by these implants. Here we describe a computational model, or 'virtual patient', based on the neurophysiological architecture of V1, which successfully predicts the perceptual experience of participants across a wide range of previously published human cortical stimulation studies describing the location, size, brightness, and spatiotemporal shape of electrically induced percepts in humans. Our simulations suggest that, in the foreseeable future, the perceptual quality of cortical prosthetic devices is likely to be limited by the neurophysiological organization of visual cortex rather than by engineering constraints.
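A "virtual patient" of this kind rests on the retinotopic and magnification structure of V1. A toy fragment of such a simulation, using the commonly cited Horton-Hoyt estimate of V1 cortical magnification rather than the parameters fitted in the paper:

```python
import math

def phosphene_from_electrode(cortical_mm, k=17.3, a=0.75):
    """Map an electrode's distance from the foveal representation
    (mm along the V1 surface) to a predicted phosphene.
    Uses the Horton-Hoyt magnification M(e) = k / (e + a) mm/deg,
    whose integral gives cortical distance d = k * ln(1 + e/a);
    inverting yields eccentricity e = a * (exp(d/k) - 1).
    Phosphene size is taken to grow as 1/M (fixed current spread),
    a simplifying assumption, not the paper's full model."""
    ecc = a * (math.exp(cortical_mm / k) - 1.0)  # degrees of visual angle
    mag = k / (ecc + a)                          # mm of cortex per degree
    size = 1.0 / mag                             # degrees per mm of spread
    return ecc, size
```

This captures the qualitative prediction that electrodes farther from the foveal representation produce more eccentric and larger phosphenes; the paper additionally models pulse-train temporal dynamics and receptive-field structure.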
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, 98195, USA
- Faculty of Biological Sciences, University of Leeds, Leeds, UK
7
Gogliettino AR, Cooler S, Vilkhu RS, Brackbill NJ, Rhoades C, Wu EG, Kling A, Sher A, Litke AM, Chichilnisky EJ. Modeling responses of macaque and human retinal ganglion cells to natural images using a convolutional neural network. bioRxiv 2024:2024.03.22.586353. [PMID: 38585930] [PMCID: PMC10996505] [DOI: 10.1101/2024.03.22.586353]
Abstract
Linear-nonlinear (LN) cascade models provide a simple way to capture retinal ganglion cell (RGC) responses to artificial stimuli such as white noise, but their ability to model responses to natural images is limited. Recently, convolutional neural network (CNN) models have been shown to produce light response predictions substantially more accurate than those of an LN model. However, this modeling approach has not yet been applied to responses of macaque or human RGCs to natural images. Here, we train and test a CNN model on responses to natural images of the four numerically dominant RGC types in the macaque and human retina - ON parasol, OFF parasol, ON midget, and OFF midget cells. Compared with the LN model, the CNN model provided substantially more accurate response predictions. Linear reconstructions of the visual stimulus were more accurate for CNN-generated than for LN-model-generated responses, relative to reconstructions obtained from the recorded data. These findings demonstrate the effectiveness of a CNN model in capturing light responses of major RGC types in the macaque and human retinas under natural conditions.
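The LN baseline that the CNN is compared against is simple enough to state in a few lines: a linear filter applied to the stimulus followed by a pointwise output nonlinearity. The filter weights and the choice of softplus nonlinearity below are illustrative, not the fitted model:

```python
import math

def ln_response(stim, weights, bias=0.0):
    """Minimal linear-nonlinear (LN) cascade: the linear stage is a
    dot product of a receptive-field filter with the (flattened)
    stimulus; the nonlinear stage is a softplus that maps the drive
    to a non-negative firing rate."""
    drive = sum(w * s for w, s in zip(weights, stim)) + bias
    return math.log1p(math.exp(drive))  # softplus keeps rates positive
```

A CNN replaces the single linear filter with stacked convolutional layers and interleaved nonlinearities, which is what lets it capture the spatially nonlinear behavior that natural images evoke and a single LN stage cannot.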
8
Zaidi M, Aggarwal G, Shah NP, Karniol-Tambour O, Goetz G, Madugula SS, Gogliettino AR, Wu EG, Kling A, Brackbill N, Sher A, Litke AM, Chichilnisky EJ. Inferring light responses of primate retinal ganglion cells using intrinsic electrical signatures. J Neural Eng 2023; 20:10.1088/1741-2552/ace657. [PMID: 37433293] [PMCID: PMC11067857] [DOI: 10.1088/1741-2552/ace657] [Received: 07/15/2022] [Accepted: 07/11/2023]
Abstract
Objective. Retinal implants are designed to stimulate retinal ganglion cells (RGCs) in a way that restores sight to individuals blinded by photoreceptor degeneration. Reproducing high-acuity vision with these devices will likely require inferring the natural light responses of diverse RGC types in the implanted retina, without being able to measure them directly. Here we demonstrate an inference approach that exploits intrinsic electrophysiological features of primate RGCs. Approach. First, ON-parasol and OFF-parasol RGC types were identified using their intrinsic electrical features in large-scale multi-electrode recordings from macaque retina. Then, the electrically inferred somatic location, inferred cell type, and average linear-nonlinear-Poisson model parameters of each cell type were used to infer a light response model for each cell. The accuracy of the cell type classification and of reproducing measured light responses with the model were evaluated. Main results. A cell-type classifier trained on 246 large-scale multi-electrode recordings from 148 retinas achieved 95% mean accuracy on 29 test retinas. In five retinas tested, the inferred models achieved an average correlation with measured firing rates of 0.49 for white noise visual stimuli and 0.50 for natural scenes stimuli, compared to 0.65 and 0.58, respectively, for models fitted to recorded light responses (an upper bound). Linear decoding of natural images from predicted RGC activity in one retina showed a mean correlation of 0.55 between decoded and true images, compared to an upper bound of 0.81 using models fitted to light response data. Significance. These results suggest that inference of RGC light response properties from intrinsic features of their electrical activity may be a useful approach for high-fidelity sight restoration. The overall strategy of first inferring cell type from electrical features and then exploiting cell type to help infer natural cell function may also prove broadly useful to neural interfaces.
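The two-step strategy, classify cell type from intrinsic electrical features and then assign that type's average light-response parameters, can be sketched as follows (nearest-centroid classification stands in for the paper's trained classifier; every feature value and parameter below is invented for illustration):

```python
def infer_model(features, centroids, type_params):
    """Step 1: classify cell type by nearest centroid in the space of
    intrinsic electrical features (e.g. spike-waveform descriptors).
    Step 2: look up that type's average light-response model
    parameters, which then stand in for the cell's unmeasured model."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cell_type = min(centroids, key=lambda t: dist2(features, centroids[t]))
    return cell_type, type_params[cell_type]

# hypothetical centroids and type-average parameters
centroids = {"ON-parasol": [1.0, 0.2], "OFF-parasol": [-1.0, 0.3]}
type_params = {"ON-parasol": {"gain": 1.5}, "OFF-parasol": {"gain": -1.2}}
cell_type, params = infer_model([0.9, 0.25], centroids, type_params)
```

The paper additionally infers each cell's somatic location electrically, so the type-average receptive field can be centered correctly for that cell.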
Affiliation(s)
- Moosa Zaidi
- Stanford University School of Medicine, Stanford University, Stanford, CA, United States of America
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Gorish Aggarwal
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Nishal P Shah
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Orren Karniol-Tambour
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States of America
- Georges Goetz
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Sasidhar S Madugula
- Stanford University School of Medicine, Stanford University, Stanford, CA, United States of America
- Neurosciences, Stanford University, Stanford, CA, United States of America
- Alex R Gogliettino
- Neurosciences, Stanford University, Stanford, CA, United States of America
- Eric G Wu
- Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Alexandra Kling
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Nora Brackbill
- Physics, Stanford University, Stanford, CA, United States of America
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, United States of America
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, United States of America
- E J Chichilnisky
- Neurosurgery, Stanford University, Stanford, CA, United States of America
- Ophthalmology, Stanford University, Stanford, CA, United States of America
9
Madugula SS, Vilkhu R, Shah NP, Grosberg LE, Kling A, Gogliettino AR, Nguyen H, Hottowy P, Sher A, Litke AM, Chichilnisky EJ. Inference of Electrical Stimulation Sensitivity from Recorded Activity of Primate Retinal Ganglion Cells. J Neurosci 2023; 43:4808-4820. [PMID: 37268418] [PMCID: PMC10312054] [DOI: 10.1523/jneurosci.1023-22.2023] [Received: 05/27/2022] [Revised: 05/22/2023] [Accepted: 05/24/2023]
Abstract
High-fidelity electronic implants can in principle restore the function of neural circuits by precisely activating neurons via extracellular stimulation. However, direct characterization of the individual electrical sensitivity of a large population of target neurons, to precisely control their activity, can be difficult or impossible. A potential solution is to leverage biophysical principles to infer sensitivity to electrical stimulation from features of spontaneous electrical activity, which can be recorded relatively easily. Here, this approach is developed and its potential value for vision restoration is tested quantitatively using large-scale multielectrode stimulation and recording from retinal ganglion cells (RGCs) of male and female macaque monkeys ex vivo. Electrodes recording larger spikes from a given cell exhibited lower stimulation thresholds across cell types, retinas, and eccentricities, with systematic and distinct trends for somas and axons. Thresholds for somatic stimulation increased with distance from the axon initial segment. The dependence of spike probability on injected current was inversely related to threshold, and was substantially steeper for axonal than somatic compartments, which could be identified by their recorded electrical signatures. Dendritic stimulation was largely ineffective for eliciting spikes. These trends were quantitatively reproduced with biophysical simulations. Results from human RGCs were broadly similar. The inference of stimulation sensitivity from recorded electrical features was tested in a data-driven simulation of visual reconstruction, revealing that the approach could significantly improve the function of future high-fidelity retinal implants. SIGNIFICANCE STATEMENT This study demonstrates that individual in situ primate retinal ganglion cells of different types respond to artificially generated, external electrical fields in a systematic manner, in accordance with theoretical predictions, that allows for prediction of electrical stimulus sensitivity from recorded spontaneous activity. It also provides evidence that such an approach could be immensely helpful in the calibration of clinical retinal implants.
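Two of the abstract's quantitative claims, sigmoidal activation curves and the inverse relation between recorded spike amplitude and stimulation threshold, can be captured in a toy form (the proportionality constant and slope below are invented, not fitted values from the paper):

```python
import math

def spike_probability(current, threshold, slope=0.1):
    """Sigmoidal activation curve of the kind fitted to electrical
    stimulation data: probability of evoking a spike as a function of
    injected current, with p = 0.5 exactly at the threshold current."""
    return 1.0 / (1.0 + math.exp(-(current - threshold) / slope))

def predict_threshold(spike_amplitude_uv, k=50.0):
    """Toy stand-in for the paper's empirical trend: electrodes that
    record larger spontaneous spikes from a cell tend to have lower
    stimulation thresholds. An inverse-proportional form with an
    invented constant k illustrates the direction of the effect."""
    return k / spike_amplitude_uv
```

In the paper, the steepness of the sigmoid differed systematically between somatic and axonal compartments, which is what makes the recorded electrical signature informative about how a cell will respond to stimulation.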
Affiliation(s)
- Sasidhar S Madugula
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- School of Medicine, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Ramandeep Vilkhu
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Nishal P Shah
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Lauren E Grosberg
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Facebook Reality Labs, Facebook, Mountain View, California 94040
- Alexandra Kling
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Alex R Gogliettino
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Huy Nguyen
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Paweł Hottowy
- Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland 30-059
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California 95064
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California 95064
- E J Chichilnisky
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Ophthalmology, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
10
Gogliettino AR, Madugula SS, Grosberg LE, Vilkhu RS, Brown J, Nguyen H, Kling A, Hottowy P, Dąbrowski W, Sher A, Litke AM, Chichilnisky EJ. High-Fidelity Reproduction of Visual Signals by Electrical Stimulation in the Central Primate Retina. J Neurosci 2023; 43:4625-4641. [PMID: 37188516] [PMCID: PMC10286946] [DOI: 10.1523/jneurosci.1091-22.2023] [Received: 05/31/2022] [Revised: 05/08/2023] [Accepted: 05/10/2023]
Abstract
Electrical stimulation of retinal ganglion cells (RGCs) with electronic implants provides rudimentary artificial vision to people blinded by retinal degeneration. However, current devices stimulate indiscriminately and therefore cannot reproduce the intricate neural code of the retina. Recent work has demonstrated more precise activation of RGCs using focal electrical stimulation with multielectrode arrays in the peripheral macaque retina, but it is unclear how effective this can be in the central retina, which is required for high-resolution vision. This work probes the neural code and effectiveness of focal epiretinal stimulation in the central macaque retina, using large-scale electrical recording and stimulation ex vivo. The functional organization, light response properties, and electrical properties of the major RGC types in the central retina were mostly similar to the peripheral retina, with some notable differences in density, kinetics, linearity, spiking statistics, and correlations. The major RGC types could be distinguished by their intrinsic electrical properties. Electrical stimulation targeting parasol cells revealed similar activation thresholds and reduced axon bundle activation in the central retina, but lower stimulation selectivity. Quantitative evaluation of the potential for image reconstruction from electrically evoked parasol cell signals revealed higher overall expected image quality in the central retina. An exploration of inadvertent midget cell activation suggested that it could contribute high spatial frequency noise to the visual signal carried by parasol cells. These results support the possibility of reproducing high-acuity visual signals in the central retina with an epiretinal implant. SIGNIFICANCE STATEMENT Artificial restoration of vision with retinal implants is a major treatment for blindness.
However, present-day implants do not provide high-resolution visual perception, in part because they do not reproduce the natural neural code of the retina. Here, we demonstrate the level of visual signal reproduction that is possible with a future implant by examining how accurately responses to electrical stimulation of parasol retinal ganglion cells can convey visual signals. Although the precision of electrical stimulation in the central retina was diminished relative to the peripheral retina, the quality of expected visual signal reconstruction in parasol cells was greater. These findings suggest that visual signals could be restored with high fidelity in the central retina using a future retinal implant.
Affiliation(s)
- Alex R Gogliettino
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Sasidhar S Madugula
- Neurosciences PhD Program, Stanford University, Stanford, California 94305
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Stanford School of Medicine, Stanford University, Stanford, California 94305
- Lauren E Grosberg
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Ramandeep S Vilkhu
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Jeff Brown
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Huy Nguyen
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Alexandra Kling
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Paweł Hottowy
- Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059, Kraków, Poland
- Władysław Dąbrowski
- Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, 30-059, Kraków, Poland
- Alexander Sher
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- Alan M Litke
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, California 95064
- E J Chichilnisky
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, California 94305
- Department of Neurosurgery, Stanford University, Stanford, California 94305
- Department of Electrical Engineering, Stanford University, Stanford, California 94305
- Department of Ophthalmology, Stanford University, Stanford, California 94305
11
|
Grani F, Soto-Sánchez C, Fimia A, Fernández E. Toward a personalized closed-loop stimulation of the visual cortex: Advances and challenges. Front Cell Neurosci 2022; 16:1034270. [PMID: 36582211 PMCID: PMC9792612 DOI: 10.3389/fncel.2022.1034270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Accepted: 11/24/2022] [Indexed: 12/15/2022] Open
Abstract
Current cortical visual prosthesis approaches are primarily unidirectional and do not consider the feedback circuits that exist in just about every part of the nervous system. Herein, we provide a brief overview of some recent developments for better controlling brain stimulation and present preliminary human data indicating that closed-loop strategies could considerably enhance the effectiveness, safety, and long-term stability of visual cortex stimulation. We propose that the development of improved closed-loop strategies may help to enhance our capacity to communicate with the brain.
Collapse
Affiliation(s)
- Fabrizio Grani
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain
| | - Cristina Soto-Sánchez
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
| | - Antonio Fimia
- Departamento de Ciencia de Materiales, Óptica y Tecnología Electrónica, Universidad Miguel Hernández de Elche, Elche, Spain
| | - Eduardo Fernández
- Institute of Bioengineering, Universidad Miguel Hernández de Elche, Elche, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain; *Correspondence: Eduardo Fernández
| |
Collapse
|
12
|
In vivo chromatic and spatial tuning of foveolar retinal ganglion cells in Macaca fascicularis. PLoS One 2022; 17:e0278261. [PMID: 36445926 PMCID: PMC9707781 DOI: 10.1371/journal.pone.0278261] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Accepted: 11/13/2022] [Indexed: 11/30/2022] Open
Abstract
The primate fovea is specialized for high acuity chromatic vision, with the highest density of cone photoreceptors and a disproportionately large representation in visual cortex. The unique visual properties conferred by the fovea are conveyed to the brain by retinal ganglion cells, the somas of which lie at the margin of the foveal pit. Microelectrode recordings of these centermost retinal ganglion cells have been challenging due to the fragility of the fovea in the excised retina. Here we overcome this challenge by combining high resolution fluorescence adaptive optics ophthalmoscopy with calcium imaging to optically record functional responses of foveal retinal ganglion cells in the living eye. We use this approach to study the chromatic responses and spatial transfer functions of retinal ganglion cells using spatially uniform fields modulated in different directions in color space and monochromatic drifting gratings. We recorded from over 350 cells across three Macaca fascicularis primates over a time period of weeks to months. We find that the majority of the L vs. M cone opponent cells serving the most central foveolar cones have spatial transfer functions that peak at high spatial frequencies (20-40 c/deg), reflecting strong surround inhibition that sacrifices sensitivity at low spatial frequencies but preserves the transmission of fine detail in the retinal image. In addition, we fit to the drifting grating data a detailed model of how ganglion cell responses draw on the cone mosaic to derive receptive field properties of L vs. M cone opponent cells at the very center of the foveola. The fits are consistent with the hypothesis that foveal midget ganglion cells are specialized to preserve information at the resolution of the cone mosaic. Characterizing the functional properties of retinal ganglion cells in vivo through adaptive optics thus allows us to capture the response properties of these cells in situ.
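Spatial transfer functions with strong surround inhibition, as described above, are classically modelled as a difference of Gaussians: a narrow excitatory center minus a broader inhibitory surround yields a bandpass amplitude spectrum. The sketch below shows how such a model produces a peak in the 20-40 c/deg range; the parameter values are illustrative choices, not fits from this study.

```python
import numpy as np

def dog_transfer(freq, kc, rc, ks, rs):
    """Amplitude of a difference-of-Gaussians receptive field at spatial
    frequency `freq` (cycles/deg): Gaussian center minus Gaussian surround.

    kc, rc : center gain and radius (deg); ks, rs : surround gain and radius.
    Parameter values used below are illustrative, not fits from the paper.
    """
    center = kc * np.pi * rc**2 * np.exp(-(np.pi * rc * freq) ** 2)
    surround = ks * np.pi * rs**2 * np.exp(-(np.pi * rs * freq) ** 2)
    return np.abs(center - surround)

freqs = np.linspace(0.5, 60, 600)   # 0.5-60 cycles/deg
resp = dog_transfer(freqs, kc=100.0, rc=0.003, ks=1.0, rs=0.021)
peak = freqs[np.argmax(resp)]
print(f"peak spatial frequency: {peak:.1f} c/deg")
```

The broader the surround relative to the center, the faster the low-frequency attenuation, which pushes the response peak toward high spatial frequencies.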
Collapse
|
13
|
Zhang YJ, Yu ZF, Liu JK, Huang TJ. Neural Decoding of Visual Information Across Different Neural Recording Modalities and Approaches. Machine Intelligence Research 2022. [PMCID: PMC9283560 DOI: 10.1007/s11633-022-1335-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Vision plays a peculiar role in intelligence. Visual information, forming a large part of the sensory information, is fed into the human brain to formulate various types of cognition and behaviours that make humans intelligent agents. Recent advances have led to the development of brain-inspired algorithms and models for machine vision. One of the key components of these methods is the utilization of the computational principles underlying biological neurons. Additionally, advanced experimental neuroscience techniques have generated different types of neural signals that carry essential visual information. Thus, there is a high demand for mapping out functional models for reading out visual information from neural signals. Here, we briefly review recent progress on this issue with a focus on how machine learning techniques can help in the development of models for handling various types of neural signals, from fine-scale neural spikes and single-cell calcium imaging to coarse-scale electroencephalography (EEG) and functional magnetic resonance imaging recordings of brain signals.
Collapse
|
14
|
Abstract
An ultimate goal in retina science is to understand how the neural circuit of the retina processes natural visual scenes. Yet most studies in laboratories have long been performed with simple, artificial visual stimuli such as full-field illumination, spots of light, or gratings. The underlying assumption is that the features of the retina thus identified carry over to the more complex scenario of natural scenes. As the application of corresponding natural settings is becoming more commonplace in experimental investigations, this assumption is being put to the test and opportunities arise to discover processing features that are triggered by specific aspects of natural scenes. Here, we review how natural stimuli have been used to probe, refine, and complement knowledge accumulated under simplified stimuli, and we discuss challenges and opportunities along the way toward a comprehensive understanding of the encoding of natural scenes. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Collapse
Affiliation(s)
- Dimokratis Karamanlis
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; International Max Planck Research School for Neurosciences, Göttingen, Germany
| | - Helene Marianne Schreyer
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
| | - Tim Gollisch
- Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
| |
Collapse
|
15
|
Italiano ML, Guo T, Lovell NH, Tsai D. Improving the spatial resolution of artificial vision using midget retinal ganglion cell populations modelled at the human fovea. J Neural Eng 2022; 19. [PMID: 35609556 DOI: 10.1088/1741-2552/ac72c2] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 05/24/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Retinal prostheses seek to create artificial vision by stimulating surviving retinal neurons of patients with profound vision impairment. Notwithstanding tremendous research efforts, the performance of all implants tested to date has remained rudimentary, incapable of overcoming the threshold for legal blindness. To maximize the perceptual efficacy of retinal prostheses, a device must be capable of controlling retinal neurons with greater spatiotemporal precision. Most studies of retinal stimulation were derived from either non-primate species or the peripheral primate retina. We investigated if artificial stimulation could leverage the high spatial resolution afforded by the neural substrates at the primate fovea and surrounding regions to achieve improved percept qualities. APPROACH We began by developing a new computational model capable of generating anatomically accurate retinal ganglion cell (RGC) populations within the human central retina. Next, multiple RGC populations across the central retina were stimulated in silico to compare clinical and recently proposed neurostimulation configurations based on their ability to improve perceptual efficacy and reduce activation thresholds. MAIN RESULTS Our model uniquely upholds eccentricity-dependent characteristics such as RGC density and dendritic field diameter, whilst incorporating anatomically accurate features such as axon projection and three-dimensional RGC layering, features often forgone in favor of reduced computational complexity. Following epiretinal stimulation, the RGCs in our model produced response patterns in shapes akin to the complex percepts reported in clinical trials. Our results also demonstrated that even within the neuron-dense central retina, epiretinal stimulation using a multi-return hexapolar electrode arrangement could reliably achieve spatially focused RGC activation and could achieve single-cell excitation in 74% of all tested locations.
SIGNIFICANCE This study establishes an anatomically accurate three-dimensional model of the human central retina and demonstrates the potential for an epiretinal hexapolar configuration to achieve consistent, spatially confined retinal responses, even within the neuron-dense foveal region. Our results promote the prospect and optimization of higher spatial resolution in future epiretinal implants.
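Eccentricity-dependent RGC density and dendritic field diameter, which the model above preserves, are often summarized by a density that falls off with eccentricity while field size grows so that coverage stays roughly constant. The toy functions below illustrate that relationship only; the functional form and constants are hypothetical stand-ins, not the model's fitted values.

```python
import numpy as np

def rgc_density(ecc_deg, d0=33000.0, e2=1.0):
    """Hypothetical eccentricity-dependent RGC density (cells/deg^2),
    using a simple inverse-square falloff: d(e) = d0 / (1 + e/e2)^2.
    d0 and e2 are illustrative constants, not values from the paper."""
    return d0 / (1.0 + ecc_deg / e2) ** 2

def dendritic_field_diameter(ecc_deg):
    """Toy reciprocal relation: field diameter grows as density falls,
    so coverage (density x field area) stays roughly constant."""
    return 1.0 / np.sqrt(rgc_density(ecc_deg))

for ecc in [0.5, 2.0, 10.0]:
    d = rgc_density(ecc)
    f = dendritic_field_diameter(ecc)
    print(f"ecc {ecc:5.1f} deg: {d:8.0f} cells/deg^2, field {60 * f:.2f} arcmin")
```

A model that upholds this scaling predicts very different electrode-to-cell geometry in the fovea than in the periphery, which is why eccentricity matters for stimulation selectivity.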
Collapse
Affiliation(s)
- Michael Lewis Italiano
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, New South Wales 2052, Australia
| | - Tianruo Guo
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, New South Wales 2052, Australia
| | - Nigel H Lovell
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, New South Wales 2052, Australia
| | - David Tsai
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, New South Wales 2052, Australia
| |
Collapse
|
16
|
Zhang Y, Bu T, Zhang J, Tang S, Yu Z, Liu JK, Huang T. Decoding Pixel-Level Image Features from Two-Photon Calcium Signals of Macaque Visual Cortex. Neural Comput 2022; 34:1369-1397. [PMID: 35534008 DOI: 10.1162/neco_a_01498] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 12/20/2021] [Indexed: 11/04/2022]
Abstract
Images of visual scenes comprise essential features important for visual cognition of the brain. The complexity of visual features lies at different levels, from simple artificial patterns to natural images with different scenes. Much work has focused on using stimulus images to predict neural responses. However, it remains unclear how to extract features from neuronal responses. Here we address this question by leveraging two-photon calcium neural data recorded from the visual cortex of awake macaque monkeys. With stimuli including various categories of artificial patterns and diverse scenes of natural images, we employed a deep neural network decoder inspired by image segmentation techniques. Consistent with the notion of sparse coding for natural images, a few neurons with stronger responses dominated the decoding performance, whereas decoding of artificial patterns required a large number of neurons. When natural images were decoded using the model pretrained on artificial patterns, salient features of natural scenes could be extracted, as well as the conventional category information. Altogether, our results give a new perspective on studying neural encoding principles using reverse-engineering decoding strategies.
Collapse
Affiliation(s)
- Yijun Zhang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240; Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
| | - Tong Bu
- Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
| | - Jiyuan Zhang
- Department of Computer Science and Technology, Peking University, Beijing 100871, P.R.C.
| | - Shiming Tang
- School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, P.R.C.
| | - Zhaofei Yu
- Department of Computer Science and Technology and Institute for Artificial Intelligence, Peking University, Beijing 100871, P.R.C.
| | - Jian K Liu
- School of Computing, University of Leeds, Leeds LS2 9JT, U.K.
| | - Tiejun Huang
- Department of Computer Science and Technology and Institute for Artificial Intelligence, Peking University, Beijing 100871, P.R.C.; Beijing Academy of Artificial Intelligence, Beijing 100190, P.R.C.
| |
Collapse
|
17
|
Li W, Joseph Raj AN, Tjahjadi T, Zhuang Z. Fusion of ANNs as decoder of retinal spike trains for scene reconstruction. Appl Intell 2022. [DOI: 10.1007/s10489-022-03402-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
18
|
Abstract
Taste information is encoded in the gustatory nervous system much as in other sensory systems, with notable exceptions. The concept of adequate stimulus is common to all sensory modalities, from somatosensory to auditory, visual, and so forth. That is, sensory cells normally respond only to one particular form of stimulation, the adequate stimulus, such as photons (photoreceptors in the visual system), odors (olfactory sensory neurons in the olfactory system), noxious heat (nociceptors in the somatosensory system), etc. Peripheral sensory receptors transduce the stimulus into membrane potential changes transmitted to the brain in the form of trains of action potentials. How information concerning different aspects of the stimulus such as quality, intensity, and duration are encoded in the trains of action potentials is hotly debated in the field of taste. At one extreme is the notion of labeled line/spatial coding - information for each different taste quality (sweet, salty, sour, etc.) is transmitted along a parallel but separate series of neurons (a "line") that project to focal clusters ("spaces") of neurons in the gustatory cortex. These clusters are distinct for each taste quality. Opposing this are concepts of population/combinatorial coding and temporal coding, where taste information is encrypted by groups of neurons (circuits) and patterns of impulses within these neuronal circuits. Key to population/combinatorial and temporal coding is that impulse activity in an individual neuron does not provide unambiguous information about the taste stimulus. Only populations of neurons and their impulse firing pattern yield that information.
Collapse
Affiliation(s)
- Stephen D Roper
- Department of Physiology and Biophysics, Miller School of Medicine, University of Miami, Miami, FL, USA.
- Department of Otolaryngology, Miller School of Medicine, University of Miami, Miami, FL, USA.
| |
Collapse
|
19
|
Tandon P, Bhaskhar N, Shah N, Madugula S, Grosberg L, Fan VH, Hottowy P, Sher A, Litke AM, Chichilnisky EJ, Mitra S. Automatic Identification of Axon Bundle Activation for Epiretinal Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2496-2502. [PMID: 34784278 PMCID: PMC8860174 DOI: 10.1109/tnsre.2021.3128486] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Objective: Retinal prostheses must be able to activate cells in a selective way in order to restore high-fidelity vision. However, inadvertent activation of far-away retinal ganglion cells (RGCs) through electrical stimulation of axon bundles can produce irregular and poorly controlled percepts, limiting artificial vision. In this work, we aim to provide an algorithmic solution to the problem of detecting axon bundle activation with a bidirectional epiretinal prosthesis. Methods: The algorithm utilizes electrical recordings to determine the stimulation current amplitudes above which axon bundle activation occurs. Bundle activation is defined as the axonal stimulation of RGCs with unknown soma and receptive field locations, typically beyond the electrode array. The method exploits spatiotemporal characteristics of electrically-evoked spikes to overcome the challenge of detecting small axonal spikes. Results: The algorithm was validated using large-scale ex vivo single-electrode, short-pulse stimulation and recording experiments in macaque retina, by comparing algorithmically and manually identified bundle activation thresholds. For 88% of the electrodes analyzed, the threshold identified by the algorithm was within ±10% of the manually identified threshold, with a correlation coefficient of 0.95. Conclusion: This work presents a simple, accurate and efficient algorithm to detect axon bundle activation in epiretinal prostheses. Significance: The algorithm could be used in a closed-loop manner by a future epiretinal prosthesis to reduce poorly controlled visual percepts associated with bundle activation. Activation of distant cells via axonal stimulation will likely occur in other types of retinal implants and cortical implants, and the method may therefore be broadly applicable.
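The core of such a detector is a per-electrode threshold sweep: increase the stimulation amplitude and flag the lowest amplitude at which evoked signals appear on electrodes far from the stimulation site. The sketch below is a deliberately simplified, hypothetical version of that idea (a fixed noise criterion on distant-electrode signal magnitude); the paper's actual method uses richer spatiotemporal spike features.

```python
import numpy as np

def bundle_threshold(amplitudes, recordings, noise_sd, criterion=4.0):
    """Return the lowest stimulation amplitude at which a putative axon
    bundle signal appears on distant electrodes, or None if never.

    amplitudes : 1D array of tested current amplitudes (ascending).
    recordings : array (n_amplitudes, n_distant_electrodes) of peak evoked
                 signal magnitude on electrodes far from the stimulation site.
    noise_sd   : per-electrode recording noise standard deviation.

    Simplified criterion: bundle activation is flagged when any distant
    electrode's evoked signal exceeds `criterion` times its noise level.
    """
    activated = (recordings > criterion * noise_sd).any(axis=1)
    if not activated.any():
        return None
    return amplitudes[np.argmax(activated)]  # first amplitude flagged True

# Toy example: a clear axonal signal emerges at 2.5 uA.
amps = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
noise = np.full(3, 5.0)          # 5 uV noise on 3 distant electrodes
recs = np.array([[2, 1, 3],
                 [4, 2, 3],
                 [5, 6, 4],
                 [6, 5, 7],
                 [25, 30, 8],    # axonal signal appears here
                 [40, 45, 20]], dtype=float)
print(bundle_threshold(amps, recs, noise))
```

In a closed-loop device, amplitudes at or above this per-electrode threshold would simply be excluded from the stimulation dictionary.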
Collapse
|
20
|
Romeni S, Zoccolan D, Micera S. A machine learning framework to optimize optic nerve electrical stimulation for vision restoration. Patterns (N Y) 2021; 2:100286. [PMID: 34286301 PMCID: PMC8276026 DOI: 10.1016/j.patter.2021.100286] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 03/05/2021] [Accepted: 05/17/2021] [Indexed: 11/25/2022]
Abstract
Optic nerve electrical stimulation is a promising technique to restore vision in blind subjects. Machine learning methods can be used to select effective stimulation protocols, but they require a model of the stimulated system to generate enough training data. Here, we use a convolutional neural network (CNN) as a model of the ventral visual stream. A genetic algorithm drives the activation of the units in a layer of the CNN representing a cortical region toward a desired pattern, by refining the activation imposed at a layer representing the optic nerve. To simulate the pattern of activation elicited by the sites of an electrode array, a simple point-source model was introduced and its optimization process was investigated for static and dynamic scenes. Psychophysical data confirm that our stimulation evolution framework produces results compatible with natural vision. Machine learning approaches could become a very powerful tool to optimize and personalize neuroprosthetic systems.
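The optimization loop described above, a genetic algorithm refining an input pattern so that a downstream network layer matches a target activation, can be sketched compactly. Here a random linear map plus tanh stands in for "a CNN layer representing a cortical region"; the population size, mutation rate, and forward model are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: 16 stimulation sites -> 32 "cortical" units.
sites, units = 16, 32
forward = rng.normal(size=(sites, units))

def cortical_activation(pattern):
    return np.tanh(pattern @ forward)

# Desired cortical pattern (generated from a reachable input, so error 0 exists).
target = cortical_activation(rng.uniform(-1, 1, size=sites))

def fitness(pattern):
    return -np.mean((cortical_activation(pattern) - target) ** 2)

# Minimal genetic algorithm: selection, uniform crossover, Gaussian mutation.
pop = rng.uniform(-1, 1, size=(64, sites))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-16:]]              # keep best quarter
    parents = elite[rng.integers(0, 16, size=(64, 2))]
    mask = rng.random((64, sites)) < 0.5               # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    pop += 0.05 * rng.normal(size=pop.shape)           # mutation

best = max(pop, key=fitness)
print(f"final activation error: {-fitness(best):.4f}")
```

The same loop works for any differentiable or black-box forward model, which is what makes the evolutionary approach attractive when the implant-to-percept mapping is only available as measurements.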
Collapse
Affiliation(s)
- Simone Romeni
- Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Silvestro Micera
- Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- The Biorobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pontedera, Italy
| |
Collapse
|
21
|
Schottdorf M, Lee BB. A quantitative description of macaque ganglion cell responses to natural scenes: the interplay of time and space. J Physiol 2021; 599:3169-3193. [PMID: 33913164 DOI: 10.1113/jp281200] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 04/20/2021] [Indexed: 11/08/2022] Open
Abstract
KEY POINTS Responses to natural scenes are the business of the retina. We find primate ganglion cell responses to such scenes consistent with those to simpler stimuli. A biophysical model confirmed this and predicted ganglion cell responses with close to retinal reliability. Primate ganglion cell responses to natural scenes were driven by temporal variations in colour and luminance over the receptive field centre caused by eye movements, and little influenced by interaction of centre and surround with structure in the scene. We discuss implications in the context of efficient coding of the visual environment. Much information in a higher spatiotemporal frequency band is concentrated in the magnocellular pathway. ABSTRACT Responses of visual neurons to natural scenes provide a link between classical descriptions of receptive field structure and visual perception of the natural environment. A natural scene video with a movement pattern resembling that of primate eye movements was used to evoke responses from macaque ganglion cells. Cell responses were well described through known properties of cell receptive fields. Different analyses converge to show that responses primarily derive from the temporal pattern of stimulation derived from eye movements, rather than spatial receptive field structure beyond centre size and position. This was confirmed using a model that predicted ganglion cell responses close to retinal reliability, with only a small contribution of the surround relative to the centre. We also found that the spatiotemporal spectrum of the stimulus is modified in ganglion cell responses, and this can reduce redundancy in the retinal signal. This is more pronounced in the magnocellular pathway, which is much better suited to transmit the detailed structure of natural scenes than the parvocellular pathway. Whitening is less important for chromatic channels. 
Taken together, this shows how a complex interplay across space, time and spectral content sculpts ganglion cell responses.
Collapse
Affiliation(s)
- Manuel Schottdorf
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, D-37077, Germany; Max Planck Institute of Experimental Medicine, Göttingen, D-37075, Germany; Princeton Neuroscience Institute, Princeton, NJ, 08544, USA
| | - Barry B Lee
- Graduate Center for Vision Research, Department of Biological Sciences, SUNY College of Optometry, 33 West 42nd St., New York, NY, 10036, USA; Department of Neurobiology, Max Planck Institute for Biophysical Chemistry, Göttingen, D-37077, Germany
| |
Collapse
|