1. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. PMID: 37695359. DOI: 10.1007/s00422-023-00975-8.
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity is central to understanding information processing in neurobiology and to the performance of neuromorphic hardware such as event-based cameras. Nonetheless, many artificial neural models disregard it. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or of event-based cameras. To evaluate the robustness of the model in detecting visual motion, we selectively prune weights and show that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly on neuromorphic chips.
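As a sketch of the abstract's central mechanism, the following minimal NumPy example gives a detector neuron one weight per (presynaptic neuron, delay) pair, so that spikes shifted by their delays coincide at the soma and the motif evidence at each time step is a time-invariant logistic regression over delayed spikes. The toy raster, labels, dimensions, and learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event stream: N presynaptic neurons over T time steps (1 = spike).
N, T, D = 8, 200, 15                      # neurons, time steps, max delay
raster = (rng.random((N, T)) < 0.05).astype(float)

# One detector neuron with a weight for every (presynaptic neuron, delay) pair.
W = rng.normal(0.0, 0.1, (N, D))
b = 0.0

def shifted(raster, d):
    """Raster as seen through a delay-d synapse (left-padded with zeros)."""
    return raster if d == 0 else np.pad(raster[:, :-d], ((0, 0), (d, 0)))

def motif_probability(raster, W, b):
    """P(motif at t) = sigmoid(b + sum_i sum_d W[i, d] * raster[i, t - d])."""
    drive = b + sum(W[:, d] @ shifted(raster, d) for d in range(W.shape[1]))
    return 1.0 / (1.0 + np.exp(-drive))

# One supervised gradient step on the logistic loss, given toy labels y[t]
# that mark when the motif occurred.
y = (rng.random(T) < 0.1).astype(float)
err = motif_probability(raster, W, b) - y
for d in range(W.shape[1]):
    W[:, d] -= 0.01 * (shifted(raster, d) * err).sum(axis=1) / T
b -= 0.01 * err.mean()
```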
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France
2. Pooling strategies in V1 can account for the functional and structural diversity across species. PLoS Comput Biol 2022; 18:e1010270. PMID: 35862423. PMCID: PMC9345491. DOI: 10.1371/journal.pcbi.1010270.
Abstract
Neurons in the primary visual cortex are selective to orientation, with varying degrees of selectivity to spatial phase, from high selectivity in simple cells to low selectivity in complex cells. Various computational models have suggested a possible link between the presence of phase-invariant cells and the existence of orientation maps in the V1 of higher mammals. These models, however, do not explain the emergence of complex cells in animals that do not show orientation maps. In this study, we build a theoretical model based on a convolutional network called Sparse Deep Predictive Coding (SDPC) and show that a single computational mechanism, pooling, allows the SDPC model to account for the emergence in V1 of complex cells, with or without orientation maps, as observed in distinct species of mammals. In particular, we observed that pooling in the feature space is directly related to the formation of orientation maps, while pooling in the retinotopic space is responsible for the emergence of a population of complex cells. Introducing different forms of pooling in a predictive model of early visual processing, as implemented in SDPC, can therefore be viewed as a theoretical framework that explains the diversity of structural and functional phenomena observed in V1.
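The two pooling operators contrasted here can be sketched in a few lines of NumPy: max-pooling over retinotopic neighborhoods confers tolerance to small shifts (complex-cell-like, phase-invariant responses), while max-pooling over groups of neighboring features ties nearby features together (favoring orientation-map-like structure). Tensor shapes, the group size k, and the random toy activations are illustrative assumptions, not the SDPC code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy V1 coefficient tensor: (features, height, width), e.g. sparse
# responses of oriented, phase-tuned filters from the SDPC first layer.
a = np.abs(rng.normal(0, 1, (16, 32, 32))) * (rng.random((16, 32, 32)) < 0.1)

def pool_retinotopic(a, k=2):
    """Max over k x k spatial neighborhoods: responses become tolerant to
    small shifts (phase), yielding complex-cell-like units."""
    F, H, W = a.shape
    a = a[:, :H - H % k, :W - W % k]
    return a.reshape(F, H // k, k, W // k, k).max(axis=(2, 4))

def pool_feature(a, k=2):
    """Max over groups of k neighboring feature channels: nearby features
    are driven to respond together, favoring smooth orientation maps."""
    F, H, W = a.shape
    return a[:F - F % k].reshape(F // k, k, H, W).max(axis=1)

complex_like = pool_retinotopic(a)   # shape (16, 16, 16)
map_like = pool_feature(a)           # shape (8, 32, 32)
```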
3. Fang MYS, Mudigonda M, Zarcone R, Khosrowshahi A, Olshausen BA. Learning and Inference in Sparse Coding Models With Langevin Dynamics. Neural Comput 2022; 34:1676-1700. PMID: 35798329. DOI: 10.1162/neco_a_01505.
Abstract
We describe a stochastic, dynamical system capable of inference and learning in a probabilistic latent variable model. The most challenging problem in such models, sampling the posterior distribution over latent variables, is proposed to be solved by harnessing natural sources of stochasticity inherent in electronic and neural systems. We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics. The model parameters are learned by simultaneously evolving according to another continuous-time equation, thus bypassing the need for digital accumulators or a global clock. Moreover, we show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the L0 sparse regime, where latent variables are encouraged to be set exactly to zero rather than merely to have a small L1 norm. This allows the model to properly incorporate the notion of sparsity rather than resorting to a relaxed version of sparsity to make optimization tractable. Simulations of the proposed dynamical system on both synthetic and natural image data sets demonstrate that the model is capable of probabilistically correct inference, enabling learning of the dictionary as well as of the parameters of the prior.
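A minimal sketch of Langevin inference for sparse coding: the latent vector follows the negative gradient of the energy plus injected noise, so its trajectory samples the posterior. For brevity this sketch uses an L1 (Laplace) prior; the paper's contribution is an L0 regime that drives latents exactly to zero, which this stand-in does not reproduce. The toy dictionary, step size, and sparsity penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse coding problem: x ~ Phi @ a with a sparse.
D_in, K = 16, 32                       # data dimension, dictionary size
Phi = rng.normal(0, 1 / np.sqrt(D_in), (D_in, K))
a_true = np.zeros(K)
a_true[rng.choice(K, 3, replace=False)] = rng.normal(0, 1, 3)
x = Phi @ a_true + 0.01 * rng.normal(size=D_in)

# Langevin dynamics: a follows -grad E(a) plus Gaussian noise, where
# E(a) = 0.5 * ||x - Phi a||^2 + lam * ||a||_1 (L1 stand-in for L0).
lam, eta, n_steps = 0.1, 1e-2, 2000
a = np.zeros(K)
samples = []
for t in range(n_steps):
    grad = -Phi.T @ (x - Phi @ a) + lam * np.sign(a)
    a = a - eta * grad + np.sqrt(2 * eta) * rng.normal(size=K)
    samples.append(a.copy())

# Discard burn-in; average the remaining samples as a posterior-mean estimate.
a_mean = np.mean(samples[n_steps // 2:], axis=0)
```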
Affiliation(s)
- Michael Y-S Fang
- Department of Physics, University of California, Berkeley, Berkeley, CA 94720, U.S.A.; Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
- Mayur Mudigonda
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
- Ryan Zarcone
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.; Biophysics Graduate Group, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
- Amir Khosrowshahi
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A.; Intel Corporation, Santa Clara, CA 95054, U.S.A.
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, U.S.A.; Helen Wills Neuroscience Institute and School of Optometry, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
4. Boutin V, Franciosini A, Chavane F, Ruffier F, Perrinet L. Sparse deep predictive coding captures contour integration capabilities of the early visual system. PLoS Comput Biol 2021; 17:e1008629. PMID: 33497381. PMCID: PMC7864399. DOI: 10.1371/journal.pcbi.1008629.
Abstract
Both neurophysiological and psychophysical experiments have pointed out the crucial role of recurrent and feedback connections in processing context-dependent information in the early visual cortex. While numerous models have accounted for feedback effects at either the neural or the representational level, none of them has been able to bridge these two levels of analysis. Is it possible to describe feedback effects at both levels using the same model? We answer this question by combining Predictive Coding (PC) and Sparse Coding (SC) into a hierarchical and convolutional framework applied to realistic problems. In the Sparse Deep Predictive Coding (SDPC) model, the SC component models the internal recurrent processing within each layer, and the PC component describes the interactions between layers using feedforward and feedback connections. Here, we train a two-layer SDPC on two different databases of images, and we interpret it as a model of the early visual system (V1 and V2). We first demonstrate that, once training has converged, the SDPC exhibits oriented and localized receptive fields in V1 and more complex features in V2. Second, we analyze the effects of feedback on the neural organization beyond the classical receptive field of V1 neurons using interaction maps. These maps are similar to association fields and reflect the Gestalt principle of good continuation. We demonstrate that feedback signals reorganize interaction maps and modulate neural activity to promote contour integration. Third, we demonstrate at the representational level that the SDPC feedback connections are able to overcome noise in input images. The SDPC therefore captures the association-field principle at the neural level, which results in better reconstruction of blurred images at the representational level.
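A toy, fully connected sketch of the SDPC update structure described above (the published model is convolutional and trained on image databases): each layer runs ISTA-like recurrent sparse coding on its prediction error, while layer 2's prediction of the layer-1 code is fed back to bias layer 1. The dimensions, step size, sparsity penalty, and feedback gain k_fb are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(u, lam):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy two-layer generative model: x ~ D1 @ a1 and a1 ~ D2 @ a2.
n_x, n1, n2 = 32, 64, 32
D1 = rng.normal(0, 1 / np.sqrt(n_x), (n_x, n1))
D2 = rng.normal(0, 1 / np.sqrt(n1), (n1, n2))
x = rng.normal(size=n_x)

a1, a2 = np.zeros(n1), np.zeros(n2)
eta, lam, k_fb = 0.05, 0.05, 0.5
for t in range(200):
    # Layer 1 (SC within layer): reconstruct x from the feedforward error,
    # pulled toward the layer-2 prediction D2 @ a2 (PC feedback).
    grad1 = -D1.T @ (x - D1 @ a1) + k_fb * (a1 - D2 @ a2)
    a1 = soft(a1 - eta * grad1, eta * lam)
    # Layer 2: explain the layer-1 code.
    grad2 = -D2.T @ (a1 - D2 @ a2)
    a2 = soft(a2 - eta * grad2, eta * lam)
```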
Affiliation(s)
- Victor Boutin
- Aix Marseille Univ, CNRS, INT, Inst Neurosci Timone, Marseille, France
- Aix Marseille Univ, CNRS, ISM, Marseille, France
- Frederic Chavane
- Aix Marseille Univ, CNRS, INT, Inst Neurosci Timone, Marseille, France
- Laurent Perrinet
- Aix Marseille Univ, CNRS, INT, Inst Neurosci Timone, Marseille, France