1. Larisch R, Hamker FH. A systematic analysis of the joint effects of ganglion cells, lagged LGN cells, and intercortical inhibition on spatiotemporal processing and direction selectivity. Neural Netw 2025; 186:107273. PMID: 40020308. DOI: 10.1016/j.neunet.2025.107273.
Abstract
Simple cells in the visual cortex process spatial as well as temporal information of the visual stream and enable the perception of motion. Previous work suggests different mechanisms for direction selectivity, such as a temporal offset in the thalamocortical input stream through lagged and non-lagged cells of the lateral geniculate nucleus (LGN), intercortical inhibition alone, or a baseline selectivity provided by the thalamocortical connection and tuned by intercortical inhibition. While a large corpus of models of spatiotemporal receptive fields exists, the majority of them build in the spatiotemporal dynamics by combining spatial and temporal functions and thus do not explain the emergence of spatiotemporal dynamics from network dynamics arising in the retina and the LGN. To better understand the emergence of spatiotemporal processing and direction selectivity, we used a spiking neural network to implement the visual pathway from the retina to the primary visual cortex. By varying different functional parts of our network, we demonstrate how the direction selectivity of simple cells emerges through the interplay between two components: tuned intercortical inhibition and a temporal offset in the feedforward path through lagged LGN cells. In contrast to previous findings, our model simulations suggest an alternative dynamic between these two mechanisms: while intercortical inhibition alone leads to bidirectional selectivity, a temporal shift in the thalamocortical pathway breaks this symmetry in favor of one direction, leading to unidirectional selectivity.
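The proposed interplay is easy to demonstrate in miniature. The rate-based toy below (an illustrative sketch, not the authors' spiking network; all timings and positions are assumed) shows how the intrinsic delay of a lagged LGN cell makes the two thalamocortical inputs coincide for one motion direction but not the other, the asymmetry that tuned inhibition then sharpens into unidirectional selectivity.

```python
# Rate-based toy of direction selectivity from lagged LGN input.
# Not the authors' spiking model; every parameter is illustrative.
import numpy as np

T = 200    # time steps
lag = 20   # intrinsic latency of the lagged LGN cell, in steps
dur = 30   # duration of each LGN response, in steps

def lgn_drive(direction):
    """Summed drive to a simple cell from a lagged LGN cell (sampling x=0)
    and a non-lagged cell (sampling x=1) for a bar moving in the given
    direction (+1: x=0 -> x=1, -1: x=1 -> x=0); the bar needs 20 steps
    to travel between the two sampled positions."""
    t_lagged, t_nonlagged = (50, 70) if direction == +1 else (70, 50)
    lg = np.zeros(T); lg[t_lagged + lag : t_lagged + lag + dur] = 1.0
    nl = np.zeros(T); nl[t_nonlagged : t_nonlagged + dur] = 1.0
    return lg + nl

for direction, name in [(+1, "preferred"), (-1, "null")]:
    drive = lgn_drive(direction)
    # Coincidence detection: only epochs where both inputs overlap cross
    # threshold; tuned inhibition would further suppress the null case.
    response = np.maximum(drive - 1.0, 0.0).sum()
    print(f"{name}: suprathreshold response = {response:.0f}")
```

In the preferred direction the bar reaches the lagged cell's position first, so its delayed response arrives together with the non-lagged response; in the null direction the delay drives the two inputs apart and the summed drive never crosses threshold.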
Affiliation(s)
- René Larisch: Chemnitz University of Technology, Str. der Nationen 62, 09111 Chemnitz, Germany.
- Fred H Hamker: Chemnitz University of Technology, Str. der Nationen 62, 09111 Chemnitz, Germany.
2. Yu Z, Bu T, Zhang Y, Jia S, Huang T, Liu JK. Robust decoding of rich dynamical visual scenes with retinal spikes. IEEE Trans Neural Netw Learn Syst 2025; 36:3396-3409. PMID: 38265909. DOI: 10.1109/TNNLS.2024.3351120.
Abstract
Sensory information transmitted to the brain activates neurons to create a series of coping behaviors. Understanding the mechanisms of neural computation and reverse engineering the brain to build intelligent machines requires establishing a robust relationship between stimuli and neural responses. Neural decoding aims to reconstruct the original stimuli that triggered the neural responses. With the recent upsurge of artificial intelligence, neural decoding provides an insightful perspective for designing novel brain-machine interface algorithms. For humans, vision is the dominant contributor to the interaction between the external environment and the brain. In this study, using retinal spike data collected over multiple trials with visual stimuli of two movies with different levels of scene complexity, we used a neural network decoder to quantify the decoded visual stimuli with six different image quality assessment metrics, establishing a comprehensive inspection of decoding. Through a detailed and systematic study of the effects of single versus multiple trials of data, different levels of noise in the spikes, and blurred images, our results provide an in-depth investigation of decoding dynamical visual scenes from retinal spikes. These results offer insights into the neural coding of visual scenes and serve as a guideline for designing next-generation decoding algorithms for neuroprostheses and other brain-machine interface devices.
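As a concrete illustration of the evaluation side, the sketch below implements two standard full-reference image quality metrics (MSE and PSNR) in plain NumPy; the study itself reports six metrics, and this minimal version only shows the pattern of scoring a decoded frame against the original stimulus frame.

```python
# Two standard full-reference quality metrics for decoded frames;
# a minimal sketch, not the study's full six-metric evaluation.
import numpy as np

def mse(reference, decoded):
    """Mean squared error between two images with values in [0, 1]."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference, decoded, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    err = mse(reference, decoded)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))   # stand-in for an original stimulus frame
decoded = np.clip(frame + 0.05 * rng.standard_normal(frame.shape), 0, 1)
print(f"MSE = {mse(frame, decoded):.4f}, PSNR = {psnr(frame, decoded):.2f} dB")
```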
3. Idrees S, Manookin MB, Rieke F, Field GD, Zylberberg J. Biophysical neural adaptation mechanisms enable artificial neural networks to capture dynamic retinal computation. Nat Commun 2024; 15:5957. PMID: 39009568. PMCID: PMC11251147. DOI: 10.1038/s41467-024-50114-5.
Abstract
Adaptation is a universal aspect of neural systems that changes circuit computations to match prevailing inputs. These changes facilitate efficient encoding of sensory inputs while avoiding saturation. Conventional artificial neural networks (ANNs) have limited adaptive capabilities, hindering their ability to reliably predict neural output under dynamic input conditions. Can embedding neural adaptive mechanisms in ANNs improve their performance? To answer this question, we develop a new deep learning model of the retina that incorporates the biophysics of photoreceptor adaptation at the front-end of conventional convolutional neural networks (CNNs). These conventional CNNs build on 'Deep Retina,' a previously developed model of retinal ganglion cell (RGC) activity. CNNs that include this new photoreceptor layer outperform conventional CNN models at predicting male and female primate and rat RGC responses to naturalistic stimuli that include dynamic local intensity changes and large changes in the ambient illumination. These improved predictions result directly from adaptation within the phototransduction cascade. This research underscores the potential of embedding models of neural adaptation in ANNs and using them to determine how neural circuits manage the complexities of encoding natural inputs that are dynamic and span a large range of light levels.
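A toy version of the idea is sketched below: a first-order divisive gain control whose state tracks recent mean intensity stands in for the paper's full biophysical phototransduction cascade, and shows why an adaptive front-end stabilizes the input range that a downstream CNN sees across large changes in ambient light. The time constant and gain are illustrative assumptions.

```python
# Minimal adaptive front-end: first-order divisive gain control.
# A sketch of the concept only, not the paper's phototransduction model.
import numpy as np

def adaptive_frontend(stimulus, tau=50.0, dt=1.0, k=1.0):
    """Apply divisive adaptation to a 1-D intensity time series.

    a' = (k * stimulus - a) / tau   (slow estimate of mean intensity)
    output = stimulus / (1 + a)     (gain falls as the estimate rises)
    """
    a = 0.0
    out = np.empty_like(stimulus, dtype=np.float64)
    for t, s in enumerate(stimulus):
        a += dt * (k * s - a) / tau
        out[t] = s / (1.0 + a)
    return out

# The same 10% flicker on a dim and then a 10x brighter background:
t = np.arange(1000)
background = np.where(t < 500, 1.0, 10.0)
stimulus = background * (1.0 + 0.1 * np.sin(2 * np.pi * t / 50))
response = adaptive_frontend(stimulus)
# After adaptation, the flicker amplitude is on a similar scale at both
# light levels, unlike the raw tenfold difference, so a downstream CNN
# sees a stable operating range.
print(f"dim: {np.ptp(response[400:450]):.3f}  "
      f"bright: {np.ptp(response[900:950]):.3f}")
```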
Affiliation(s)
- Saad Idrees: Department of Physics and Astronomy, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada.
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA.
- Greg D Field: Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA.
- Joel Zylberberg: Department of Physics and Astronomy, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, ON, Canada.
4. Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357. DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices for restoring vision to patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in a retinal prosthesis plays an important role in the restoration effect, and its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, where new discoveries about the retina's working principles are combined with state-of-the-art computer vision models. Approach. We surveyed the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first describe the structure and function of the normal and degenerated retina, then demonstrate the vision rehabilitation mechanisms of three representative retinal prostheses. We summarize the computational frameworks abstracted from the normal retina and review the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and propose future directions for improving the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli, and the suggested future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China.
- Chaoming Fang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China.
- Yong Zou: Beijing Institute of Radiation Medicine, Beijing, People's Republic of China.
- Jie Yang: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China.
- Mohamad Sawan: Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China.
5. Wang C, Yang J, Sawan M. NeuroSEE: a neuromorphic energy-efficient processing framework for visual prostheses. IEEE J Biomed Health Inform 2022; 26:4132-4141. PMID: 35503849. DOI: 10.1109/JBHI.2022.3172306.
Abstract
Visual prostheses with both comprehensive visual signal processing capability and energy efficiency are increasingly in demand in the age of intelligent personal healthcare, particularly with the rise of wearable and implantable devices. To address this trend, we propose NeuroSEE, a neuromorphic energy-efficient processing framework that combines a spike representation encoding technique with a bio-inspired processing method. The framework first uses sparse spike trains to represent visual information, and then a bio-inspired spiking neural network (SNN) processes the spike trains. The SNN model makes use of an integrate-and-fire (IF) neuron with multiple spike-firing rates to decrease energy consumption without compromising prediction performance. Experimental results indicate that, when predicting the response of the primary visual cortex, the framework achieves state-of-the-art Pearson correlation coefficient performance. Spike-based recording and processing simplify the storage and transmission of redundant scene information and complex calculation processes, and could reduce power consumption by 15 times compared with an existing convolutional neural network (CNN) processing framework. The proposed NeuroSEE framework predicts the response of the primary visual cortex in an energy-efficient manner, making it a powerful tool for visual prostheses.
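One plausible reading of the multi-rate IF mechanism is sketched below: the neuron emits a graded burst of spikes whenever its membrane potential overshoots the threshold, so a strong input is conveyed in a single step rather than stretched over many. The neuron model and all parameters here are assumptions for illustration, not the authors' exact formulation.

```python
# Integrate-and-fire neuron that can fire k spikes in one step;
# a hypothetical sketch of a "multiple spike-firing rate" IF unit.
import numpy as np

def multi_rate_if(inputs, threshold=1.0, max_spikes=4):
    """Run an IF neuron over an input current trace and return the
    integer spike count per step. Emitting k spikes at once conveys a
    stronger input in fewer steps, which is the energy-saving idea."""
    v = 0.0
    counts = np.zeros(len(inputs), dtype=int)
    for t, i_t in enumerate(inputs):
        v += i_t                                   # integrate input
        k = min(int(v // threshold), max_spikes)   # graded spike burst
        if k > 0:
            counts[t] = k
            v -= k * threshold                     # reset by subtraction
    return counts

rng = np.random.default_rng(1)
current = rng.uniform(0.0, 0.8, size=20)
print(multi_rate_if(current))
```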
6. Zheng Y, Jia S, Yu Z, Liu JK, Huang T. Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks. Patterns (N Y) 2021; 2:100350. PMID: 34693375. PMCID: PMC8515013. DOI: 10.1016/j.patter.2021.100350.
Abstract
Traditional models of retinal system identification analyze the neural response to artificial stimuli using models consisting of predefined components. Such model designs are limited by prior knowledge, and the artificial stimuli are too simple to be compared with the natural stimuli the retina actually processes. To fill this gap with an explainable model that reveals how a population of neurons works together to encode a larger field of natural scenes, we used a deep learning model to identify the computational elements of the retinal circuit that contribute to learning the dynamics of natural scenes. Experimental results verify that the recurrent connection plays a key role in encoding complex dynamic visual scenes while learning the biological computational underpinnings of the retinal circuit. In addition, the proposed models reveal both the shapes and the locations of the spatiotemporal receptive fields of ganglion cells.
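The modeling idea can be sketched minimally: a spatial convolution summarizes each frame, a recurrent state carries stimulus history forward in time, and a readout maps the state to a firing rate. The kernel size, scalar recurrence, and rectified readout below are illustrative assumptions, not the authors' trained architecture.

```python
# Minimal convolutional-recurrent sketch for rate prediction;
# an illustrative toy, not the paper's architecture or weights.
import numpy as np

rng = np.random.default_rng(2)
kernel = rng.standard_normal((5, 5)) * 0.1   # one spatial filter
w_rec, w_in, w_out = 0.8, 1.0, 1.0           # scalar recurrent unit

def conv2d_valid(frame, k):
    """Plain 'valid' 2-D correlation (loops for clarity, not speed)."""
    kh, kw = k.shape
    h, w = frame.shape[0] - kh + 1, frame.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * k)
    return out

def predict_rates(movie):
    """Predict one ganglion-cell rate per frame of a (T, H, W) movie."""
    state, rates = 0.0, []
    for frame in movie:
        feature = conv2d_valid(frame, kernel).mean()     # spatial summary
        state = np.tanh(w_rec * state + w_in * feature)  # recurrence = memory
        rates.append(np.maximum(w_out * state, 0.0))     # rectified rate
    return np.array(rates)

movie = rng.random((10, 16, 16))   # stand-in for a natural-scene clip
print(predict_rates(movie))
```

The recurrent term is the point of the exercise: without it, each frame would be predicted independently and the model could not capture the history dependence that the paper identifies as key to encoding dynamic scenes.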
Affiliation(s)
- Yajing Zheng: Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China.
- Shanshan Jia: Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China.
- Zhaofei Yu: Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China; Institute for Artificial Intelligence, Peking University, Beijing 100871, China.
- Jian K. Liu: School of Computing, University of Leeds, Leeds LS2 9JT, UK.
- Tiejun Huang: Department of Computer Science and Technology, National Engineering Laboratory for Video Technology, Peking University, Beijing 100871, China; Institute for Artificial Intelligence, Peking University, Beijing 100871, China.