1. Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. [PMID: 39756119] [DOI: 10.1016/j.neunet.2024.107079] [Received: 07/06/2024] [Revised: 11/25/2024] [Accepted: 12/19/2024] [Indexed: 01/07/2025]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
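The reservoir-computing setup discussed in this review can be illustrated with a minimal sketch. All names, sizes, and constants below are illustrative assumptions, not taken from the paper: a random recurrent network is pushed toward the chaotic regime by scaling its spectral radius above 1, and only a linear readout is trained while the recurrent weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random reservoir of N rate neurons; the gain g sets the spectral radius
# of the recurrent weights. g > 1 drives the autonomous dynamics toward
# chaos -- the regime whose computational benefits the review surveys.
N, g = 200, 1.5
W = rng.normal(0.0, 1.0, size=(N, N)) / np.sqrt(N)
W *= g / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(steps):
    """Iterate the standard rate dynamics x_{t+1} = tanh(W @ x_t)."""
    x = rng.normal(size=N)
    states = np.empty((steps, N))
    for t in range(steps):
        x = np.tanh(W @ x)
        states[t] = x
    return states

# Reservoir computing: the chaotic recurrent weights are left untouched;
# only a linear readout is fit (here by ridge regression) to a toy target.
states = run_reservoir(500)
target = np.sin(0.1 * np.arange(500))
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
mse = np.mean((states @ w_out - target) ** 2)
```

Training only the readout is what sidesteps the difficulty, noted in the abstract, of applying learning algorithms inside a chaotic recurrent network; algorithms such as FORCE (covered by the review) go further and feed the readout back into the reservoir.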
Affiliation(s)
- Andrea Mattera: Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Valerio Alfieri: Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato: Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre: Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2. Li T, Lyu R, Xie Z. Pattern memory cannot be completely and truly realized in deep neural networks. Sci Rep 2024; 14:31649. [PMID: 39738102] [DOI: 10.1038/s41598-024-80647-0] [Received: 06/25/2024] [Accepted: 11/21/2024] [Indexed: 01/01/2025]
Abstract
The unknown boundary between the superior computational capability of deep neural networks (DNNs) and human cognitive ability has become a crucial, foundational theoretical problem in the evolution of AI. Undoubtedly, DNN-empowered AI is increasingly surpassing human intelligence in handling general intelligent tasks. However, DNNs' lack of interpretability and their recurrent erratic behavior remain incontrovertible facts. Inspired by the perceptual characteristics of human vision on optical illusions, we propose a novel working-capability analysis framework for DNNs based on their cognitive responses to visual illusion images, together with a finely adjustable strategy for constructing sample images. Our findings indicate that, although DNNs can infinitely approximate human-provided empirical standards in pattern classification, object detection, and semantic segmentation, they are still unable to truly realize independent pattern memorization. All super-cognitive abilities of DNNs come purely from their powerful classification performance on similar known scenes. This discovery establishes a new foundation for advancing artificial general intelligence.
Affiliation(s)
- Tingting Li: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122, China; Jiangsu Key University Laboratory of Software and Media Technology under Human-Computer Cooperation, Jiangnan University, Wuxi, 214122, China
- Ruimin Lyu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122, China; Jiangsu Key University Laboratory of Software and Media Technology under Human-Computer Cooperation, Jiangnan University, Wuxi, 214122, China
- Zhenping Xie: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122, China; Jiangsu Key University Laboratory of Software and Media Technology under Human-Computer Cooperation, Jiangnan University, Wuxi, 214122, China
3. Hiramoto M, Cline HT. Identification of movie encoding neurons enables movie recognition AI. Proc Natl Acad Sci U S A 2024; 121:e2412260121. [PMID: 39560649] [PMCID: PMC11621835] [DOI: 10.1073/pnas.2412260121] [Received: 06/19/2024] [Accepted: 09/12/2024] [Indexed: 11/20/2024]
Abstract
Natural visual scenes are dominated by spatiotemporal image dynamics, but how the visual system integrates "movie" information over time is unclear. We characterized optic tectal neuronal receptive fields using sparse noise stimuli and reverse correlation analysis. Neurons recognized movies of ~200-600 ms durations with defined start and stop stimuli. Movie durations from start to stop responses were tuned by sensory experience through a hierarchical algorithm. Neurons encoded families of image sequences following trigonometric functions. Spike sequence and information flow suggest that repetitive circuit motifs underlie movie detection. Principles of frog topographic retinotectal plasticity and cortical simple cells are employed in machine learning networks for static image recognition, suggesting that discoveries of principles of movie encoding in the brain, such as how image sequences and duration are encoded, may benefit movie recognition technology. We built and trained a machine learning network that mimicked neural principles of visual system movie encoders. The network, named MovieNet, outperformed current machine learning image recognition networks in classifying natural movie scenes, while reducing data size and the steps needed to complete the classification task. This study reveals how movie sequences and time are encoded in the brain and demonstrates that brain-based movie processing principles enable efficient machine learning.
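Reverse correlation with sparse noise, the receptive-field mapping method mentioned above, can be sketched generically as a spike-triggered average. The toy model cell and all parameter values below are illustrative assumptions, not the authors' tectal data or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse-noise stimulus: one bright pixel per frame at a random location.
n_frames, n_pix = 20000, 64
stimulus = np.zeros((n_frames, n_pix))
stimulus[np.arange(n_frames), rng.integers(0, n_pix, n_frames)] = 1.0

# Hypothetical model cell whose receptive field covers pixels 20-27.
true_rf = np.zeros(n_pix)
true_rf[20:28] = 1.0
p_spike = 0.05 + 0.5 * (stimulus @ true_rf)   # baseline + stimulus drive
spikes = rng.random(n_frames) < p_spike

# Reverse correlation: average the stimuli that evoked spikes and subtract
# the overall stimulus mean; peaks in the result estimate the receptive field.
sta = stimulus[spikes].mean(axis=0) - stimulus.mean(axis=0)
estimated_peak = int(np.argmax(sta))
```

With enough frames the spike-triggered average recovers the hotspot of the model cell's receptive field; temporal extensions of the same idea (averaging the preceding frame sequence rather than a single frame) are what make the method applicable to movie-like stimuli.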
Affiliation(s)
- Masaki Hiramoto: Department of Neuroscience, Dorris Neuroscience Center, Scripps Research Institute, La Jolla, CA 92037
- Hollis T. Cline: Department of Neuroscience, Dorris Neuroscience Center, Scripps Research Institute, La Jolla, CA 92037
4. Dura-Bernal S, Herrera B, Lupascu C, Marsh BM, Gandolfi D, Marasco A, Neymotin S, Romani A, Solinas S, Bazhenov M, Hay E, Migliore M, Reimann M, Arkhipov A. Large-Scale Mechanistic Models of Brain Circuits with Biophysically and Morphologically Detailed Neurons. J Neurosci 2024; 44:e1236242024. [PMID: 39358017] [PMCID: PMC11450527] [DOI: 10.1523/jneurosci.1236-24.2024] [Received: 06/28/2024] [Revised: 07/09/2024] [Accepted: 07/31/2024] [Indexed: 10/04/2024]
Abstract
Understanding the brain requires studying its multiscale interactions from molecules to networks. The increasing availability of large-scale datasets detailing brain circuit composition, connectivity, and activity is transforming neuroscience. However, integrating and interpreting this data remains challenging. Concurrently, advances in supercomputing and sophisticated modeling tools now enable the development of highly detailed, large-scale biophysical circuit models. These mechanistic multiscale models offer a method to systematically integrate experimental data, facilitating investigations into brain structure, function, and disease. This review, based on a Society for Neuroscience 2024 MiniSymposium, aims to disseminate recent advances in large-scale mechanistic modeling to the broader community. It highlights (1) examples of current models for various brain regions developed through experimental data integration; (2) their predictive capabilities regarding cellular and circuit mechanisms underlying experimental recordings (e.g., membrane voltage, spikes, local-field potential, electroencephalography/magnetoencephalography) and brain function; and (3) their use in simulating biomarkers for brain diseases like epilepsy, depression, schizophrenia, and Parkinson's, aiding in understanding their biophysical underpinnings and developing novel treatments. The review showcases state-of-the-art models covering hippocampus, somatosensory, visual, motor, auditory cortical, and thalamic circuits across species. These models predict neural activity at multiple scales and provide insights into the biophysical mechanisms underlying sensation, motor behavior, brain signals, neural coding, disease, pharmacological interventions, and neural stimulation. Collaboration with experimental neuroscientists and clinicians is essential for the development and validation of these models, particularly as datasets grow. Hence, this review aims to foster interest in detailed brain circuit models, leading to cross-disciplinary collaborations that accelerate brain research.
Affiliation(s)
- Salvador Dura-Bernal: State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, New York 11203; Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York 10962
- Carmen Lupascu: Institute of Biophysics, National Research Council/Human Brain Project, Palermo 90146, Italy
- Brianna M Marsh: University of California San Diego, La Jolla, California 92093
- Daniela Gandolfi: Department of Engineering "Enzo Ferrari", University of Modena and Reggio Emilia, Modena 41125, Italy
- Samuel Neymotin: Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York 10962; School of Medicine, New York University, New York 10012
- Armando Romani: Swiss Federal Institute of Technology Lausanne (EPFL)/Blue Brain Project, Lausanne 1015, Switzerland
- Maxim Bazhenov: University of California San Diego, La Jolla, California 92093
- Etay Hay: Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario M5T 1R8, Canada; University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Michele Migliore: Institute of Biophysics, National Research Council/Human Brain Project, Palermo 90146, Italy
- Michael Reimann: Swiss Federal Institute of Technology Lausanne (EPFL)/Blue Brain Project, Lausanne 1015, Switzerland
5. Urbizagastegui P, van Schaik A, Wang R. Memory-efficient neurons and synapses for spike-timing-dependent-plasticity in large-scale spiking networks. Front Neurosci 2024; 18:1450640. [PMID: 39308944] [PMCID: PMC11412959] [DOI: 10.3389/fnins.2024.1450640] [Received: 06/17/2024] [Accepted: 08/12/2024] [Indexed: 09/25/2024]
Abstract
This paper addresses the challenges posed by frequent memory access during simulations of large-scale spiking neural networks involving synaptic plasticity. We focus on the memory accesses performed during a common synaptic plasticity rule since this can be a significant factor limiting the efficiency of the simulations. We propose neuron models that are represented by only three state variables, which are engineered to enforce the appropriate neuronal dynamics. Additionally, memory retrieval is executed solely by fetching postsynaptic variables, promoting a contiguous memory storage and leveraging the capabilities of burst mode operations to reduce the overhead associated with each access. Different plasticity rules could be implemented despite the adopted simplifications, each leading to a distinct synaptic weight distribution (i.e., unimodal and bimodal). Moreover, our method requires fewer average memory accesses compared to a naive approach. We argue that the strategy described can speed up memory transactions and reduce latencies while maintaining a small memory footprint.
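The idea of a compact neuron state plus per-neuron plasticity traces can be sketched as follows. This is a hedged toy version, not the paper's actual scheme: the choice of three variables, the pair-based STDP rule, and all constants are illustrative assumptions.

```python
import numpy as np

# Each neuron keeps just three state variables: a membrane potential v and
# two exponentially decaying spike traces used by pair-based STDP. Storing
# traces per neuron instead of per synapse keeps the memory footprint small.
class Neuron:
    def __init__(self):
        self.v = 0.0          # membrane potential
        self.pre_trace = 0.0  # bumped when this neuron spikes as a presynaptic cell
        self.post_trace = 0.0 # bumped when this neuron spikes as a postsynaptic cell

def step(neuron, input_current, tau_v=20.0, tau_tr=20.0, threshold=1.0):
    """Leaky integrate-and-fire update; returns True if the neuron spiked."""
    neuron.v += (-neuron.v + input_current) / tau_v
    neuron.pre_trace *= np.exp(-1.0 / tau_tr)
    neuron.post_trace *= np.exp(-1.0 / tau_tr)
    if neuron.v >= threshold:
        neuron.v = 0.0
        neuron.pre_trace += 1.0
        neuron.post_trace += 1.0
        return True
    return False

def stdp(w, pre, post, pre_spiked, post_spiked, a_plus=0.01, a_minus=0.012):
    """Pair-based STDP that only reads the traces of the two neurons involved."""
    if pre_spiked:            # pre fires after post -> depression
        w -= a_minus * post.post_trace
    if post_spiked:           # post fires after pre -> potentiation
        w += a_plus * pre.pre_trace
    return float(np.clip(w, 0.0, 1.0))
```

Because the weight update reads only the two neurons' own traces, a simulator can fetch contiguous per-neuron records (amenable to the burst-mode accesses the abstract mentions) rather than scattered per-synapse state.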
Affiliation(s)
- Pablo Urbizagastegui: International Centre for Neuromorphic Systems, The MARCS Institute for Brain, Behavior, and Development, Western Sydney University, Kingswood, NSW, Australia
6. Maass W. How can neuromorphic hardware attain brain-like functional capabilities? Natl Sci Rev 2024; 11:nwad301. [PMID: 38577672] [PMCID: PMC10989294] [DOI: 10.1093/nsr/nwad301] [Received: 08/15/2023] [Revised: 10/22/2023] [Accepted: 11/30/2023] [Indexed: 04/06/2024]
Abstract
The author provides four design principles for translating cortical microcircuits into neuromorphic hardware, shedding light on the design of next-generation neuromorphic hardware.
Affiliation(s)
- Wolfgang Maass: Computer Science and Biomedical Engineering, Graz University of Technology, Austria
7. Ramaswamy S. Data-driven multiscale computational models of cortical and subcortical regions. Curr Opin Neurobiol 2024; 85:102842. [PMID: 38320453] [DOI: 10.1016/j.conb.2024.102842] [Received: 03/23/2023] [Revised: 01/04/2024] [Accepted: 01/05/2024] [Indexed: 02/08/2024]
Abstract
Data-driven computational models of neurons, synapses, microcircuits, and mesocircuits have become essential tools in modern brain research. The goal of these multiscale models is to integrate and synthesize information from different levels of brain organization, from cellular properties, dendritic excitability, and synaptic dynamics to microcircuits, mesocircuits, and ultimately behavior. This article surveys recent advances in the genesis of data-driven computational models of mammalian neural networks in cortical and subcortical areas. I discuss the challenges and opportunities in developing data-driven multiscale models, including the need for interdisciplinary collaborations, the importance of model validation and comparison, and the potential impact on basic and translational neuroscience research. Finally, I highlight future directions and emerging technologies that will enable more comprehensive and predictive data-driven models of brain function and dysfunction.
Affiliation(s)
- Srikanth Ramaswamy: Neural Circuits Laboratory, Biosciences Institute, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom
8. Liu M, Gao Y, Xin F, Hu Y, Wang T, Xie F, Shao C, Li T, Wang N, Yuan K. Parvalbumin and Somatostatin: Biomarkers for Two Parallel Tectothalamic Pathways in the Auditory Midbrain. J Neurosci 2024; 44:e1655232024. [PMID: 38326037] [PMCID: PMC10919325] [DOI: 10.1523/jneurosci.1655-23.2024] [Received: 09/03/2023] [Revised: 01/15/2024] [Accepted: 01/20/2024] [Indexed: 02/09/2024]
Abstract
The inferior colliculus (IC) represents a crucial relay station in the auditory pathway, located in the midbrain's tectum and primarily projecting to the thalamus. Despite the identification of distinct cell classes based on various biomarkers in the IC, their specific contributions to the organization of auditory tectothalamic pathways have remained poorly understood. In this study, we demonstrate that IC neurons expressing parvalbumin (ICPV+) or somatostatin (ICSOM+) represent two minimally overlapping cell classes throughout the three IC subdivisions in mice of both sexes. Strikingly, regardless of their location within the IC, these neurons predominantly project to the primary and secondary auditory thalamic nuclei, respectively. Cell class-specific input tracing suggested that ICPV+ neurons primarily receive auditory inputs, whereas ICSOM+ neurons receive significantly more inputs from the periaqueductal gray and the superior colliculus (SC), which are sensorimotor regions critically involved in innate behaviors. Furthermore, ICPV+ neurons exhibit significant heterogeneity in both intrinsic electrophysiological properties and presynaptic terminal size compared with ICSOM+ neurons. Notably, approximately one-quarter of ICPV+ neurons are inhibitory neurons, whereas all ICSOM+ neurons are excitatory neurons. Collectively, our findings suggest that parvalbumin and somatostatin expression in the IC can serve as biomarkers for two functionally distinct, parallel tectothalamic pathways. This discovery suggests an alternative way to define tectothalamic pathways and highlights the potential usefulness of Cre mice in understanding the multifaceted roles of the IC at the circuit level.
Affiliation(s)
- Mengting Liu: Department of Otorhinolaryngology Head and Neck Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001, China
- Yixiao Gao: Department of Basic Medical Sciences, School of Medicine, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Joint Center for Life Sciences, Tsinghua University, Beijing 100084, China
- Fengyuan Xin: School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Ying Hu: Zhili College, Tsinghua University, Beijing 100084, China
- Tao Wang: Tsinghua-Peking Joint Center for Life Sciences, Tsinghua University, Beijing 100084, China; School of Life Sciences, Tsinghua University, Beijing 100084, China
- Fenghua Xie: School of Biomedical Engineering, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Chengjun Shao: School of Life Sciences, Tsinghua University, Beijing 100084, China
- Tianyu Li: School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Ningyu Wang: Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Kexin Yuan: School of Biomedical Engineering, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; IDG/McGovern Institute for Brain Research at Tsinghua, Tsinghua University, Beijing 100084, China
9. Ohmae K, Ohmae S. Emergence of syntax and word prediction in an artificial neural circuit of the cerebellum. Nat Commun 2024; 15:927. [PMID: 38296954] [PMCID: PMC10831061] [DOI: 10.1038/s41467-024-44801-6] [Received: 09/07/2022] [Accepted: 01/03/2024] [Indexed: 02/02/2024]
Abstract
The cerebellum, interconnected with the cerebral neocortex, plays a vital role in human-characteristic cognition such as language processing; however, knowledge about the underlying circuit computation of the cerebellum remains very limited. To better understand the computation underlying cerebellar language processing, we developed a biologically constrained cerebellar artificial neural network (cANN) model, which implements the recently identified cerebello-cerebellar recurrent pathway. We found that while the cANN acquires prediction of future words, a second function, syntactic recognition, emerges in the middle layer of the prediction circuit. The recurrent pathway of the cANN was essential for both language functions, and cANN variants with further biological constraints preserved them. Considering the uniform structure of cerebellar circuitry across all functional domains, this single-circuit computation, the common basis of the two language functions, can be generalized to fundamental cerebellar functions of prediction and grammar-like rule extraction from sequences, which underpin a wide range of cerebellar motor and cognitive functions. This is a pioneering study of the circuit computation underlying human-characteristic cognition using biologically constrained ANNs.
Affiliation(s)
- Keiko Ohmae: Neuroscience Department, Baylor College of Medicine, Houston, TX, USA; Chinese Institute for Brain Research (CIBR), Beijing, China
- Shogo Ohmae: Neuroscience Department, Baylor College of Medicine, Houston, TX, USA; Chinese Institute for Brain Research (CIBR), Beijing, China
10. Suzuki M, Pennartz CMA, Aru J. How deep is the brain? The shallow brain hypothesis. Nat Rev Neurosci 2023; 24:778-791. [PMID: 37891398] [DOI: 10.1038/s41583-023-00756-z] [Accepted: 09/25/2023] [Indexed: 10/29/2023]
Abstract
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today's dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute. This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks. We argue that the shallow brain architecture provides several critical benefits over deep hierarchical structures and a more complete depiction of how mammalian brains achieve fast and flexible computational capabilities.
Affiliation(s)
- Mototaka Suzuki: Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Cyriel M A Pennartz: Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Jaan Aru: Institute of Computer Science, University of Tartu, Tartu, Estonia
11. Wu Z, Shen Y, Zhang J, Liang H, Zhao R, Li H, Xiong J, Zhang X, Chua Y. BIDL: a brain-inspired deep learning framework for spatiotemporal processing. Front Neurosci 2023; 17:1213720. [PMID: 37564366] [PMCID: PMC10410154] [DOI: 10.3389/fnins.2023.1213720] [Received: 04/28/2023] [Accepted: 06/22/2023] [Indexed: 08/12/2023]
Abstract
Brain-inspired deep spiking neural networks (DSNNs), which emulate the function of the biological brain, provide an effective approach to event-stream spatiotemporal processing (STP), especially for dynamic vision sensor (DVS) signals. However, generalized learning frameworks that can handle spatiotemporal modalities beyond event streams, such as video clips and 3D imaging data, are lacking. To provide a unified design flow for generalized STP and to investigate the capability of lightweight STP via brain-inspired neural dynamics, this study introduces a training platform called brain-inspired deep learning (BIDL). The framework constructs deep neural networks that leverage neural dynamics for processing temporal information and ensure high-accuracy spatial processing via artificial neural network layers. We conducted experiments on various types of data, including video, DVS signals, 3D medical imaging classification, and natural language processing; these experiments demonstrate the efficiency of the proposed method. Moreover, as a research framework for researchers in neuroscience and machine learning, BIDL facilitates the exploration of different neural models and enables global-local co-learning. To fit neuromorphic chips and GPUs easily, the framework incorporates several optimizations, including iteration representation, a state-aware computational graph, and built-in neural functions. This study presents a user-friendly and efficient DSNN builder for lightweight STP applications and has the potential to drive future advances in bio-inspired research.
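The hybrid design described here, neural dynamics for temporal information plus ANN layers for spatial readout, can be sketched in a few lines. The layer sizes, constants, and rate-decoding readout below are illustrative assumptions, not BIDL's actual API:

```python
import numpy as np

rng = np.random.default_rng(2)

def lif_layer(inputs, decay=0.9, threshold=1.0):
    """Leaky integrate-and-fire dynamics over a (T, n) input sequence.

    The membrane potential carries temporal context across timesteps;
    the layer returns a (T, n) array of binary spike trains.
    """
    T, n = inputs.shape
    v = np.zeros(n)
    spikes = np.zeros((T, n))
    for t in range(T):
        v = decay * v + inputs[t]
        fired = v >= threshold
        spikes[t] = fired
        v[fired] = 0.0           # reset spiking units
    return spikes

def readout(spikes, w):
    """Conventional ANN (dense) layer on time-averaged firing rates."""
    rates = spikes.mean(axis=0)
    return rates @ w

x = rng.random((50, 8))          # toy event stream: 50 timesteps, 8 channels
w = rng.normal(size=(8, 3))      # dense readout to 3 classes
logits = readout(lif_layer(x), w)
```

The spiking layer compresses the temporal structure of the stream into spike trains, and the dense layer does the spatial classification, mirroring the division of labor the abstract attributes to the framework.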
Affiliation(s)
- Zhenzhi Wu: Lynxi Technologies, Co. Ltd., Beijing, China
- Yangshu Shen: Lynxi Technologies, Co. Ltd., Beijing, China; Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Jing Zhang: Lynxi Technologies, Co. Ltd., Beijing, China
- Huaju Liang: Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
- Han Li: Lynxi Technologies, Co. Ltd., Beijing, China
- Jianping Xiong: Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Xiyu Zhang: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Yansong Chua: Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
12. Barak O, Tsodyks M. Mathematical models of learning and what can be learned from them. Curr Opin Neurobiol 2023; 80:102721. [PMID: 37043892] [DOI: 10.1016/j.conb.2023.102721] [Received: 12/25/2022] [Revised: 02/28/2023] [Accepted: 03/03/2023] [Indexed: 04/14/2023]
Abstract
Learning is a multifaceted phenomenon of critical importance and has therefore attracted a great deal of research, both experimental and theoretical. In this review, we consider some paradigmatic examples of learning and discuss common themes in theoretical learning research, such as levels of modeling, their relation to experimental observations, and mathematical ideas shared across different types of learning.
Affiliation(s)
- Omri Barak: Rappaport Faculty of Medicine and Network Biology Research Laboratories, Technion - Israel Institute of Technology, Haifa, Israel
- Misha Tsodyks: School of Natural Sciences, Institute for Advanced Study, Princeton, USA; Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel