1. Malkin J, O'Donnell C, Houghton CJ, Aitchison L. Signatures of Bayesian inference emerge from energy-efficient synapses. eLife 2024; 12:RP92595. PMID: 39106188. DOI: 10.7554/elife.92595.
Abstract
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have (1) higher input firing rates and (2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
Affiliation(s)
- James Malkin
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Cian O'Donnell
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Intelligent Systems Research Centre, School of Computing, Engineering, and Intelligent Systems, Ulster University, Derry/Londonderry, United Kingdom
- Conor J Houghton
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
2. Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. PMID: 38776950. DOI: 10.1162/neco_a_01671.
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
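The REINFORCE algorithm that the review analyses as its prototype can be sketched in a few lines: a stochastic policy parameter is nudged in proportion to reward times the gradient of the log-probability of the sampled action. The two-armed bandit task, learning rate, and seed below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

theta = np.zeros(2)                  # policy parameters, one per action
true_reward = np.array([0.2, 0.8])   # arm 1 pays off more often

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                    # sample an action
    r = float(rng.random() < true_reward[a])  # stochastic binary reward
    grad_logp = -p
    grad_logp[a] += 1.0                       # d/dtheta of log pi(a)
    theta += 0.1 * r * grad_logp              # REINFORCE update

print(softmax(theta))  # the learned policy should favour the richer arm
```

Because the update correlates reward with the stochastic choice, no explicit gradient of the reward function is needed, which is what makes REINFORCE a candidate normative model for biological plasticity.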
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Center for Data Science, New York University, New York, NY 10011, U.S.A.
3. Parnas M, Manoim JE, Lin AC. Sensory encoding and memory in the mushroom body: signals, noise, and variability. Learn Mem 2024; 31:a053825. PMID: 38862174. PMCID: PMC11199953. DOI: 10.1101/lm.053825.123.
Abstract
To survive in changing environments, animals need to learn to associate specific sensory stimuli with positive or negative valence. How do they form stimulus-specific memories to distinguish between positively/negatively associated stimuli and other irrelevant stimuli? Solving this task is one of the functions of the mushroom body, the associative memory center in insect brains. Here we summarize recent work on sensory encoding and memory in the Drosophila mushroom body, highlighting general principles such as pattern separation, sparse coding, noise and variability, coincidence detection, and spatially localized neuromodulation, and placing the mushroom body in comparative perspective with mammalian memory systems.
Affiliation(s)
- Moshe Parnas
- Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
- Julia E Manoim
- Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
- Andrew C Lin
- School of Biosciences, University of Sheffield, Sheffield S10 2TN, United Kingdom
- Neuroscience Institute, University of Sheffield, Sheffield S10 2TN, United Kingdom
4. Rajeswaran P, Payeur A, Lajoie G, Orsborn AL. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv [Preprint] 2024:2024.03.20.585972. PMID: 38562772. PMCID: PMC10983972. DOI: 10.1101/2024.03.20.585972.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
Affiliation(s)
- Alexandre Payeur
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Guillaume Lajoie
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Amy L. Orsborn
- University of Washington, Bioengineering, Seattle, 98115, USA
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
5. Rvachev MM. An operating principle of the cerebral cortex, and a cellular mechanism for attentional trial-and-error pattern learning and useful classification extraction. Front Neural Circuits 2024; 18:1280604. PMID: 38505865. PMCID: PMC10950307. DOI: 10.3389/fncir.2024.1280604.
Abstract
A feature of the brains of intelligent animals is the ability to learn to respond to an ensemble of active neuronal inputs with a behaviorally appropriate ensemble of active neuronal outputs. Previously, a hypothesis was proposed on how this mechanism is implemented at the cellular level within the neocortical pyramidal neuron: the apical tuft or perisomatic inputs initiate "guess" neuron firings, while the basal dendrites identify input patterns based on excited synaptic clusters, with the cluster excitation strength adjusted based on reward feedback. This simple mechanism allows neurons to learn to classify their inputs in a surprisingly intelligent manner. Here, we revise and extend this hypothesis. We modify synaptic plasticity rules to align with behavioral time scale synaptic plasticity (BTSP) observed in hippocampal area CA1, making the framework more biophysically and behaviorally plausible. The neurons for the guess firings are selected in a voluntary manner via feedback connections to apical tufts in the neocortical layer 1, leading to dendritic Ca2+ spikes with burst firing, which are postulated to be neural correlates of attentional, aware processing. Once learned, the neuronal input classification is executed without voluntary or conscious control, enabling hierarchical incremental learning of classifications that is effective in our inherently classifiable world. In addition to voluntary, we propose that pyramidal neuron burst firing can be involuntary, also initiated via apical tuft inputs, drawing attention toward important cues such as novelty and noxious stimuli. We classify the excitations of neocortical pyramidal neurons into four categories based on their excitation pathway: attentional versus automatic and voluntary/acquired versus involuntary. 
Additionally, we hypothesize that dendrites within pyramidal neuron minicolumn bundles are coupled via depolarization cross-induction, enabling minicolumn functions such as the creation of powerful hierarchical "hyperneurons" and the internal representation of the external world. We suggest building blocks to extend the microcircuit theory to network-level processing, which, interestingly, yields variants resembling the artificial neural networks currently in use. On a more speculative note, we conjecture that principles of intelligence in universes governed by certain types of physical laws might resemble ours.
6. Suzuki M, Pennartz CMA, Aru J. How deep is the brain? The shallow brain hypothesis. Nat Rev Neurosci 2023; 24:778-791. PMID: 37891398. DOI: 10.1038/s41583-023-00756-z.
Abstract
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today's dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute. This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks. We argue that the shallow brain architecture provides several critical benefits over deep hierarchical structures and a more complete depiction of how mammalian brains achieve fast and flexible computational capabilities.
Affiliation(s)
- Mototaka Suzuki
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Cyriel M A Pennartz
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia
7. Francioni V, Tang VD, Brown NJ, Toloza EH, Harnett M. Vectorized instructive signals in cortical dendrites during a brain-computer interface task. bioRxiv [Preprint] 2023:2023.11.03.565534. PMID: 37961227. PMCID: PMC10635122. DOI: 10.1101/2023.11.03.565534.
Abstract
Backpropagation of error is the most widely used learning algorithm in artificial neural networks, forming the backbone of modern machine learning and artificial intelligence [1,2]. Backpropagation provides a solution to the credit assignment problem by vectorizing an error signal tailored to individual neurons. Recent theoretical models have suggested that neural circuits could implement backpropagation-like learning by semi-independently processing feedforward and feedback information streams in separate dendritic compartments [3-7]. This presents a compelling, but untested, hypothesis for how cortical circuits could solve credit assignment in the brain. We designed a neurofeedback brain-computer interface (BCI) task with an experimenter-defined reward function to evaluate the key requirements for dendrites to implement backpropagation-like learning. We trained mice to modulate the activity of two spatially intermingled populations (4 or 5 neurons each) of layer 5 pyramidal neurons in the retrosplenial cortex to rotate a visual grating towards a target orientation while we recorded GCaMP activity from somas and corresponding distal apical dendrites. We observed that the relative magnitudes of somatic versus dendritic signals could be predicted using the activity of the surrounding network and contained information about task-related variables that could serve as instructive signals, including reward and error. The signs of these putative teaching signals both depended on the causal role of individual neurons in the task and predicted changes in overall activity over the course of learning. These results provide the first biological evidence of a backpropagation-like solution to the credit assignment problem in the brain.
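The "vectorized error signal tailored to individual neurons" that the abstract refers to is the per-neuron delta of backpropagation: the output error routed backward through the weights so that each hidden unit gets its own instructive signal. A minimal sketch; the toy regression task, network sizes, and learning rate are illustrative assumptions, not the study's model:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.5, (3, 2))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (1, 3))   # hidden -> output weights

x = np.array([0.5, -0.2])
target = np.array([0.3])

for _ in range(200):
    h = np.tanh(W1 @ x)                  # hidden activity
    y = W2 @ h                           # network output
    err = y - target                     # output error
    delta_h = (W2.T @ err) * (1 - h**2)  # per-neuron instructive signal
    W2 -= 0.1 * np.outer(err, h)         # gradient steps
    W1 -= 0.1 * np.outer(delta_h, x)

print(abs((W2 @ np.tanh(W1 @ x) - target).item()))  # error shrinks toward zero
```

The dendritic-compartment hypothesis is, roughly, that something like `delta_h` could be carried by apical feedback while `h` is computed from basal feedforward input.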
Affiliation(s)
- Valerio Francioni
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Vincent D Tang
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Norma J. Brown
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Enrique H.S. Toloza
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Mark Harnett
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
8. Muller SZ, Abbott LF, Sawtell NB. A mechanism for differential control of axonal and dendritic spiking underlying learning in a cerebellum-like circuit. Curr Biol 2023; 33:2657-2667.e4. PMID: 37311457. PMCID: PMC10524478. DOI: 10.1016/j.cub.2023.05.040.
Abstract
In addition to the action potentials used for axonal signaling, many neurons generate dendritic "spikes" associated with synaptic plasticity. However, to control both plasticity and signaling, synaptic inputs must be able to differentially modulate the firing of these two spike types. Here, we investigate this issue in the electrosensory lobe (ELL) of weakly electric mormyrid fish, where separate control over axonal and dendritic spikes is essential for the transmission of learned predictive signals from inhibitory interneurons to the output stage of the circuit. Through a combination of experimental and modeling studies, we uncover a novel mechanism by which sensory input selectively modulates the rate of dendritic spiking by adjusting the amplitude of backpropagating axonal action potentials. Interestingly, this mechanism does not require spatially segregated synaptic inputs or dendritic compartmentalization but relies instead on an electrotonically distant spike initiation site in the axon, a common biophysical feature of neurons.
Affiliation(s)
- Salomon Z Muller
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- L F Abbott
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Department of Physiology and Cellular Biophysics, Columbia University, New York, NY 10027, USA
- Nathaniel B Sawtell
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
9. Doerig A, Sommers RP, Seeliger K, Richards B, Ismael J, Lindsay GW, Kording KP, Konkle T, van Gerven MAJ, Kriegeskorte N, Kietzmann TC. The neuroconnectionist research programme. Nat Rev Neurosci 2023. PMID: 37253949. DOI: 10.1038/s41583-023-00705-w.
Abstract
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
Affiliation(s)
- Adrien Doerig
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Rowan P Sommers
- Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Katja Seeliger
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Blake Richards
- Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada
- School of Computer Science, McGill University, Montréal, QC, Canada
- Mila, Montréal, QC, Canada
- Montréal Neurological Institute, Montréal, QC, Canada
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Konrad P Kording
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Bioengineering, Neuroscience, University of Pennsylvania, Pennsylvania, PA, USA
- Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
10. Auksztulewicz R, Rajendran VG, Peng F, Schnupp JWH, Harper NS. Omission responses in local field potentials in rat auditory cortex. BMC Biol 2023; 21:130. PMID: 37254137. DOI: 10.1186/s12915-023-01592-4.
Abstract
BACKGROUND: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted.
RESULTS: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts.
CONCLUSIONS: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
Affiliation(s)
- Ryszard Auksztulewicz
- Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany
- Dept of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong S.A.R.
- Fei Peng
- Dept of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong S.A.R.
11. Malakasis N, Chavlis S, Poirazi P. Synaptic turnover promotes efficient learning in bio-realistic spiking neural networks. bioRxiv [Preprint] 2023:2023.05.22.541722. PMID: 37292929. PMCID: PMC10245885. DOI: 10.1101/2023.05.22.541722.
Abstract
While artificial machine learning systems achieve superhuman performance in specific tasks such as language processing and image and video recognition, they do so using extremely large datasets and huge amounts of power. On the other hand, the brain remains superior in several cognitively challenging tasks while operating with the energy of a small lightbulb. We use a biologically constrained spiking neural network model to explore how the neural tissue achieves such high efficiency and assess its learning capacity on discrimination tasks. We found that synaptic turnover, a form of structural plasticity, which is the ability of the brain to form and eliminate synapses continuously, increases both the speed and the performance of our network on all tasks tested. Moreover, it allows accurate learning using a smaller number of examples. Importantly, these improvements are most significant under conditions of resource scarcity, such as when the number of trainable parameters is halved and when the task difficulty is increased. Our findings provide new insights into the mechanisms that underlie efficient learning in the brain and can inspire the development of more efficient and flexible machine learning algorithms.
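A turnover rule of the kind described, continuously eliminating weak synapses and forming new ones elsewhere, might be sketched as follows. The pruning fraction, connectivity sizes, and weight statistics are illustrative assumptions in the spirit of sparse-evolutionary-training rules, not the authors' spiking model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, n_syn = 50, 20, 200

# sparse connectivity: boolean mask of existing synapses plus their weights
mask = np.zeros((n_pre, n_post), dtype=bool)
idx = rng.choice(n_pre * n_post, size=n_syn, replace=False)
mask.flat[idx] = True
w = np.where(mask, rng.normal(0, 1, mask.shape), 0.0)

def turnover(w, mask, frac=0.1, rng=rng):
    """Eliminate the weakest fraction of synapses; grow as many at empty sites."""
    k = int(frac * mask.sum())
    existing = np.flatnonzero(mask.ravel())
    weakest = existing[np.argsort(np.abs(w.ravel()[existing]))[:k]]
    mask.flat[weakest] = False          # prune
    w.flat[weakest] = 0.0
    empty = np.flatnonzero(~mask.ravel())
    new = rng.choice(empty, size=k, replace=False)
    mask.flat[new] = True               # regrow
    w.flat[new] = rng.normal(0, 0.1, k) # new synapses start weak
    return w, mask

w, mask = turnover(w, mask)
print(mask.sum())  # total synapse count is conserved: 200
```

Interleaving such a step with ordinary weight updates keeps the parameter budget fixed while letting connectivity itself be searched, which is one way to read the paper's resource-scarcity result.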
Affiliation(s)
- Nikos Malakasis
- School of Medicine, University of Crete, Heraklion 70013, Greece
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
12. McFarlan AR, Chou CYC, Watanabe A, Cherepacha N, Haddad M, Owens H, Sjöström PJ. The plasticitome of cortical interneurons. Nat Rev Neurosci 2023; 24:80-97. PMID: 36585520. DOI: 10.1038/s41583-022-00663-9.
Abstract
Hebb postulated that, to store information in the brain, assemblies of excitatory neurons coding for a percept are bound together via associative long-term synaptic plasticity. In this view, it is unclear what role, if any, is carried out by inhibitory interneurons. Indeed, some have argued that inhibitory interneurons are not plastic. Yet numerous recent studies have demonstrated that, similar to excitatory neurons, inhibitory interneurons also undergo long-term plasticity. Here, we discuss the many diverse forms of long-term plasticity that are found at inputs to and outputs from several types of cortical inhibitory interneuron, including their plasticity of intrinsic excitability and their homeostatic plasticity. We explain key plasticity terminology, highlight key interneuron plasticity mechanisms, extract overarching principles and point out implications for healthy brain functionality as well as for neuropathology. We introduce the concept of the plasticitome - the synaptic plasticity counterpart to the genome or the connectome - as well as nomenclature and definitions for dealing with this rich diversity of plasticity. We argue that the great diversity of interneuron plasticity rules is best understood at the circuit level, for example as a way of elucidating how the credit-assignment problem is solved in deep biological neural networks.
Affiliation(s)
- Amanda R McFarlan
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Christina Y C Chou
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Airi Watanabe
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Nicole Cherepacha
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Maria Haddad
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- Hannah Owens
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Québec, Canada
- P Jesper Sjöström
- Centre for Research in Neuroscience, Department of Medicine, The Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
13. Shao F, Shen Z. How can artificial neural networks approximate the brain? Front Psychol 2023; 13:970214. PMID: 36698593. PMCID: PMC9868316. DOI: 10.3389/fpsyg.2022.970214.
Abstract
The article reviews the historical development of artificial neural networks (ANNs), then compares ANNs and brain networks in terms of their constituent units, network architecture, and dynamic principles. The authors offer five suggestions for ANN development and ten questions to be investigated further in the interdisciplinary field of brain simulation. Even though the brain is a super-complex system with 10¹¹ neurons, its intelligence depends less on the number of neurons than on neuronal types and their energy supply mode. ANN development might follow a new direction: a combination of multiple modules with different architectural principles and computations, rather than very large-scale networks of more uniform units and hidden layers.
Affiliation(s)
- Feng Shao
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing, China
14. Mikulasch FA, Rudelt L, Wibral M, Priesemann V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci 2023; 46:45-59. PMID: 36577388. DOI: 10.1016/j.tins.2022.09.007.
Abstract
Top-down feedback in cortex is critical for guiding sensory processing, which has prominently been formalized in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive and it remains unclear how hPC can be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, where prediction errors are computed not in separate units, but locally in dendritic compartments. We then discuss the correspondence of this model to experimentally observed connectivity patterns, plasticity, and dynamics in cortex.
Affiliation(s)
- Fabian A Mikulasch
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.
- Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
- Michael Wibral
- Göttingen Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
- Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany; Department of Physics, Georg-August University, Göttingen, Germany
15
Hopkins M, Fil J, Jones EG, Furber S. BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures. Front Neuroinform 2023; 17:1125844. [PMID: 37025552 PMCID: PMC10071999 DOI: 10.3389/fninf.2023.1125844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/16/2022] [Accepted: 03/03/2023] [Indexed: 04/08/2023] Open
Abstract
We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving classification accuracy with single-pass learning approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning, following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
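The coincidence-counting scheme described above can be sketched in a few lines. This is a deliberately simplified reading (binary feature vectors, pairwise coincidences only); the actual BitBrain address/bit-array machinery and its SpiNNaker implementation are considerably richer.

```python
import numpy as np
from itertools import combinations

def coincidences(features):
    """All pairs of active feature indices in a binary feature vector."""
    active = np.flatnonzero(features)
    return set(combinations(active, 2))

def train(examples):
    """Single-pass learning: store each class's observed feature coincidences."""
    memory = {}
    for features, label in examples:
        memory.setdefault(label, set()).update(coincidences(features))
    return memory

def infer(memory, features):
    """Pick the class sharing the most feature coincidences with the input."""
    pairs = coincidences(features)
    return max(memory, key=lambda label: len(memory[label] & pairs))

examples = [
    (np.array([1, 1, 0, 0, 1]), "A"),
    (np.array([1, 1, 0, 1, 0]), "A"),
    (np.array([0, 0, 1, 1, 1]), "B"),
    (np.array([0, 1, 1, 1, 0]), "B"),
]
memory = train(examples)
print(infer(memory, np.array([1, 1, 0, 0, 0])))  # prints "A"
```

Training is a single pass of set insertions, which is what makes the single-pass, continuous-learning combination cheap: adding a new example never requires revisiting old ones.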
16
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780 PMCID: PMC9700774 DOI: 10.1038/s41386-022-01374-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 03/11/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/09/2022]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity-defined broadly as the adaptive control of plasticity-forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
Affiliation(s)
- Daniel N Scott
- Cognitive Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
- Michael J Frank
- Cognitive Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
17
Wang MB, Halassa MM. Thalamocortical contribution to flexible learning in neural systems. Netw Neurosci 2022; 6:980-997. [PMID: 36875011 PMCID: PMC9976647 DOI: 10.1162/netn_a_00235] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 09/26/2021] [Accepted: 01/19/2022] [Indexed: 11/04/2022] Open
Abstract
Animal brains evolved to optimize behavior in dynamic environments, flexibly selecting actions that maximize future rewards in different contexts. A large body of experimental work indicates that such optimization changes the wiring of neural circuits, appropriately mapping environmental input onto behavioral outputs. A major unsolved scientific question is how optimal wiring adjustments, which must target the connections responsible for rewards, can be accomplished when the relation between sensory inputs, action taken, and environmental context with rewards is ambiguous. The credit assignment problem can be categorized into context-independent structural credit assignment and context-dependent continual learning. In this perspective, we survey prior approaches to these two problems and advance the notion that the brain's specialized neural architectures provide efficient solutions. Within this framework, the thalamus with its cortical and basal ganglia interactions serves as a systems-level solution to credit assignment. Specifically, we propose that thalamocortical interaction is the locus of meta-learning where the thalamus provides cortical control functions that parametrize the cortical activity association space. By selecting among these control functions, the basal ganglia hierarchically guide thalamocortical plasticity across two timescales to enable meta-learning. The faster timescale establishes contextual associations to enable behavioral flexibility, while the slower one enables generalization to new contexts.
Affiliation(s)
- Mien Brabeeba Wang
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, USA
18
Mercier MS, Magloire V, Cornford JH, Kullmann DM. Long-term potentiation in neurogliaform interneurons modulates excitation-inhibition balance in the temporoammonic pathway. J Physiol 2022; 600:4001-4017. [PMID: 35876215 PMCID: PMC9540908 DOI: 10.1113/jp282753] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 12/16/2021] [Accepted: 07/19/2022] [Indexed: 11/08/2022] Open
Abstract
Apical dendrites of pyramidal neurons integrate information from higher-order cortex and thalamus, and gate signalling and plasticity at proximal synapses. In the hippocampus, neurogliaform cells and other interneurons located within stratum lacunosum-moleculare (SLM) mediate powerful inhibition of CA1 pyramidal neuron distal dendrites. Is the recruitment of such inhibition itself subject to use-dependent plasticity, and if so, what induction rules apply? Here we show that interneurons in mouse SLM exhibit Hebbian NMDA receptor-dependent long-term potentiation (LTP). Such plasticity can be induced by selective optogenetic stimulation of afferents in the temporoammonic pathway from the entorhinal cortex (EC), but not by equivalent stimulation of afferents from the thalamic nucleus reuniens. We further show that theta-burst patterns of afferent firing induce LTP in neurogliaform interneurons identified using neuron-derived neurotrophic factor (Ndnf)-Cre mice. Theta-burst activity of EC afferents led to an increase in disynaptic feed-forward inhibition, but not monosynaptic excitation, of CA1 pyramidal neurons. Activity-dependent synaptic plasticity in SLM interneurons thus alters the excitation-inhibition balance at EC inputs to the apical dendrites of pyramidal neurons, implying a dynamic role for these interneurons in gating CA1 dendritic computations. KEY POINTS: Electrogenic phenomena in distal dendrites of principal neurons in the hippocampus have a major role in gating synaptic plasticity at afferent synapses on proximal dendrites. Apical dendrites also receive powerful feed-forward inhibition, mediated in large part by neurogliaform neurons. Here we show that theta-burst activity in afferents from the entorhinal cortex (EC) induces 'Hebbian' long-term potentiation (LTP) at excitatory synapses recruiting these GABAergic cells.
LTP in interneurons innervating apical dendrites increases disynaptic inhibition of principal neurons, thus shifting the excitation-inhibition balance in the temporoammonic (TA) pathway in favour of inhibition, with implications for computations and learning rules in proximal dendrites.
Affiliation(s)
- Marion S. Mercier
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
- Vincent Magloire
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
- Jonathan H. Cornford
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
- Dimitri M. Kullmann
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
19
Shen G, Zhao D, Zeng Y. Backpropagation with biologically plausible spatiotemporal adjustment for training deep spiking neural networks. Patterns (N Y) 2022; 3:100522. [PMID: 35755868 PMCID: PMC9214320 DOI: 10.1016/j.patter.2022.100522] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 01/24/2022] [Revised: 03/29/2022] [Accepted: 05/06/2022] [Indexed: 11/21/2022]
Abstract
The spiking neural network (SNN) mimics the information-processing operation in the human brain. Directly applying backpropagation to the training of the SNN still has a performance gap compared with traditional deep neural networks. To address this, we first propose a biologically plausible spatial adjustment that rethinks the relationship between membrane potential and spikes and realizes a reasonable adjustment of gradients to different time steps, precisely controlling the backpropagation of the error along the spatial dimension. Second, we propose a biologically plausible temporal adjustment that makes the error propagate across spikes in the temporal dimension, overcoming the temporal-dependency limitation of traditional spiking neurons within a single spike period. We have verified our algorithm on several datasets, and the experimental results show that it greatly reduces network latency and energy consumption while also improving network performance.
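The paper's specific spatiotemporal adjustments are not reproduced here, but as background, any direct backpropagation through spiking neurons must deal with the spike's non-differentiable threshold. Below is a minimal sketch of the standard surrogate-gradient workaround such methods build on; the fast-sigmoid surrogate and all constants are our illustrative choices, not the authors' method.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: a hard threshold emits a binary spike."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: fast-sigmoid surrogate for d(spike)/dv, which is
    zero almost everywhere for the true Heaviside nonlinearity."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])   # membrane potentials at one time step
s = spike(v)                          # binary spikes: [0., 0., 1., 1.]
g = surrogate_grad(v)                 # gradient mass concentrates near threshold
```

On the forward pass neurons still emit binary spikes; only the backward pass substitutes the smooth surrogate, so error can flow through neurons whose potential is near (but not at) threshold.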
Affiliation(s)
- Guobin Shen
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100190, China
- Dongcheng Zhao
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
20
Hodassman S, Vardi R, Tugendhaft Y, Goldental A, Kanter I. Efficient dendritic learning as an alternative to synaptic plasticity hypothesis. Sci Rep 2022; 12:6571. [PMID: 35484180 PMCID: PMC9051213 DOI: 10.1038/s41598-022-10466-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 01/10/2022] [Accepted: 04/08/2022] [Indexed: 11/09/2022] Open
Abstract
Synaptic plasticity is a long-standing core hypothesis of brain learning that suggests local adaptation between two connecting neurons and forms the foundation of machine learning. The main complexity of synaptic plasticity is that synapses and dendrites connect neurons in series, and existing experiments cannot pinpoint where the significant adaptation is imprinted. We showed efficient backpropagation and Hebbian learning on dendritic trees, inspired by experimental evidence for sub-dendritic adaptation and its nonlinear amplification. This approach achieves success rates approaching unity for handwritten digit recognition, indicating that deep learning can be realized even by a single dendrite or neuron. Additionally, dendritic amplification generates a number of input crosses (higher-order interactions) that grows exponentially with the number of inputs, which enhances success rates. However, direct implementation of a large number of the cross weights and their exhaustive, independent manipulation is beyond existing and anticipated computational power. Hence, a new type of nonlinear adaptive dendritic hardware for imitating dendritic learning and estimating the computational capability of the brain must be built.
Affiliation(s)
- Shiri Hodassman
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Roni Vardi
- Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Yael Tugendhaft
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Amir Goldental
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Ido Kanter
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel; Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel
21
Kirchner JH, Gjorgjieva J. Emergence of synaptic organization and computation in dendrites. Neuroforum 2022; 28:21-30. [PMID: 35881644 PMCID: PMC8887907 DOI: 10.1515/nf-2021-0031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022]
Abstract
Single neurons in the brain exhibit astounding computational capabilities, which gradually emerge throughout development and enable them to become integrated into complex neural circuits. These capabilities derive in part from the precise arrangement of synaptic inputs on the neurons' dendrites. While the full computational benefits of this arrangement are still unknown, a picture emerges in which synapses organize according to their functional properties across multiple spatial scales. In particular, on the local scale (tens of microns), excitatory synaptic inputs tend to form clusters according to their functional similarity, whereas on the scale of individual dendrites or the entire tree, synaptic inputs exhibit dendritic maps where excitatory synapse function varies smoothly with location on the tree. The development of this organization is supported by inhibitory synapses, which are carefully interleaved with excitatory synapses and can flexibly modulate activity and plasticity of excitatory synapses. Here, we summarize recent experimental and theoretical research on the developmental emergence of this synaptic organization and its impact on neural computations.
Affiliation(s)
- Jan H. Kirchner
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Max-von-Laue-Str. 4, 60438 Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, 85354 Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Max-von-Laue-Str. 4, 60438 Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, 85354 Freising, Germany
22
Costa RM, Baxter DA, Byrne JH. Neuronal population activity dynamics reveal a low-dimensional signature of operant learning in Aplysia. Commun Biol 2022; 5:90. [PMID: 35075264 PMCID: PMC8786933 DOI: 10.1038/s42003-022-03044-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 09/28/2021] [Accepted: 01/07/2022] [Indexed: 11/24/2022] Open
Abstract
Learning engages a high-dimensional neuronal population space spanning multiple brain regions. However, it remains unknown whether it is possible to identify a low-dimensional signature associated with operant conditioning, a ubiquitous form of learning in which animals learn from the consequences of behavior. Using single-neuron resolution voltage imaging, here we identify two low-dimensional motor modules in the neuronal population underlying Aplysia feeding. Our findings point to a temporal shift in module recruitment as the primary signature of operant learning. Our findings can help guide characterization of learning signatures in systems in which only a smaller fraction of the relevant neuronal population can be monitored.
23
Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network. Proc Natl Acad Sci U S A 2021; 118:2111821118. [PMID: 34916291 PMCID: PMC8713766 DOI: 10.1073/pnas.2111821118] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Accepted: 10/28/2021] [Indexed: 12/27/2022] Open
Abstract
Synaptic connectivity provides the foundation for our present understanding of neuronal network function, but static connectivity cannot explain learning and memory. We propose a computational role for the diversity of cortical neuronal types and their associated cell-type–specific neuromodulators in improving the efficiency of synaptic weight adjustments for task learning in neuronal networks. Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type–specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type–specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.
24
Milstein AD, Li Y, Bittner KC, Grienberger C, Soltesz I, Magee JC, Romani S. Bidirectional synaptic plasticity rapidly modifies hippocampal representations. eLife 2021; 10:e73046. [PMID: 34882093 PMCID: PMC8776257 DOI: 10.7554/elife.73046] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Received: 08/13/2021] [Accepted: 12/08/2021] [Indexed: 11/13/2022] Open
Abstract
Learning requires neural adaptations thought to be mediated by activity-dependent synaptic plasticity. A relatively non-standard form of synaptic plasticity driven by dendritic calcium spikes, or plateau potentials, has been reported to underlie place field formation in rodent hippocampal CA1 neurons. Here, we found that this behavioral timescale synaptic plasticity (BTSP) can also reshape existing place fields via bidirectional synaptic weight changes that depend on the temporal proximity of plateau potentials to pre-existing place fields. When evoked near an existing place field, plateau potentials induced less synaptic potentiation and more depression, suggesting BTSP might depend inversely on postsynaptic activation. However, manipulations of place cell membrane potential and computational modeling indicated that this anti-correlation actually results from a dependence on current synaptic weight such that weak inputs potentiate and strong inputs depress. A network model implementing this bidirectional synaptic learning rule suggested that BTSP enables population activity, rather than pairwise neuronal correlations, to drive neural adaptations to experience.
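A hedged sketch of the kind of weight-dependent bidirectional rule the abstract describes: on a plateau potential, each synapse moves toward a target set by its temporal eligibility, so currently weak inputs potentiate while strong inputs depress. The Gaussian eligibility and all constants are our own illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def btsp_update(w, t_input, t_plateau, eta=0.5, w_max=1.5, tau=1.0):
    """One plateau-triggered update of a vector of synaptic weights."""
    # Eligibility decays with temporal distance from the plateau potential.
    eligibility = np.exp(-((t_input - t_plateau) ** 2) / (2 * tau ** 2))
    target = w_max * eligibility
    return w + eta * (target - w)        # below target -> potentiate, above -> depress

w = np.array([0.2, 1.8, 0.8])            # weak, strong, and temporally distant synapses
t_input = np.array([0.0, 0.0, 3.0])      # the third input fires far from the plateau
w_new = btsp_update(w, t_input, t_plateau=0.0)
# Near the plateau, the weak synapse potentiates and the strong one depresses;
# the temporally distant synapse depresses because its target is near zero.
```

This reproduces the qualitative signature in the abstract: the sign of the change is set not by pairwise spike correlations but by where the current weight sits relative to an eligibility-dependent target.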
Affiliation(s)
- Aaron D Milstein
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, United States
- Yiding Li
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
- Katie C Bittner
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
- Ivan Soltesz
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
- Jeffrey C Magee
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
- Sandro Romani
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
25
Thompson JAF. Forms of explanation and understanding for neuroscience and artificial intelligence. J Neurophysiol 2021; 126:1860-1874. [PMID: 34644128 DOI: 10.1152/jn.00195.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/22/2022] Open
Abstract
Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress toward what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitute valid explanations of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Toward this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.
Affiliation(s)
- Jessica A F Thompson
- Human Information Processing Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
26
Vercruysse F, Naud R, Sprekeler H. Self-organization of a doubly asynchronous irregular network state for spikes and bursts. PLoS Comput Biol 2021; 17:e1009478. [PMID: 34748532 PMCID: PMC8575278 DOI: 10.1371/journal.pcbi.1009478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/27/2021] [Accepted: 09/24/2021] [Indexed: 11/21/2022] Open
Abstract
Cortical pyramidal cells (PCs) have a specialized dendritic mechanism for the generation of bursts, suggesting that these events play a special role in cortical information processing. In vivo, bursts occur at a low but consistent rate. Theory suggests that this network state increases the amount of information bursts convey. However, because burst activity relies on a threshold mechanism, it is rather sensitive to dendritic input levels. In spiking network models, network states in which bursts occur rarely are therefore typically not robust, but require fine-tuning. Here, we show that this issue can be solved by a homeostatic inhibitory plasticity rule in dendrite-targeting interneurons that is consistent with experimental data. The suggested learning rule can be combined with other forms of inhibitory plasticity to self-organize a network state in which both spikes and bursts occur asynchronously and irregularly at low rate. Finally, we show that this network state creates the network conditions for a recently suggested multiplexed code and thereby indeed increases the amount of information encoded in bursts.
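A minimal sketch of such a homeostatic rule, under our own simplifying assumptions (Gaussian dendritic drive, threshold burst generation): inhibition onto the dendrite strengthens whenever a burst occurs and relaxes otherwise, steering the burst rate toward a low target without fine-tuning.

```python
import numpy as np

def simulate(steps=2000, eta=0.05, target_rate=0.05):
    """Self-organize a low burst rate via homeostatic inhibitory plasticity."""
    rng = np.random.default_rng(1)
    w_inh = 0.0                              # dendrite-targeting inhibitory weight
    bursts = []
    for _ in range(steps):
        drive = rng.normal(1.0, 0.5)         # fluctuating dendritic drive
        burst = float(drive - w_inh > 1.2)   # burst if net drive crosses threshold
        # Homeostasis: potentiate inhibition after a burst, relax it otherwise.
        w_inh = max(0.0, w_inh + eta * (burst - target_rate))
        bursts.append(burst)
    return w_inh, float(np.mean(bursts[-500:]))

w_inh, late_rate = simulate()
# After convergence, the late burst rate hovers near the low target
# even though no parameter was tuned to the drive statistics.
```

The fixed point is where the burst probability equals the target rate, so the same rule compensates automatically if the drive distribution shifts, which is the robustness property the abstract emphasizes.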
Affiliation(s)
- Filip Vercruysse
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Richard Naud
- Department of Physics, University of Ottawa, Ottawa, Canada
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
27
Sjöström PJ. Grand Challenge at the Frontiers of Synaptic Neuroscience. Front Synaptic Neurosci 2021; 13:748937. [PMID: 34759809 PMCID: PMC8575031 DOI: 10.3389/fnsyn.2021.748937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/28/2021] [Accepted: 09/22/2021] [Indexed: 11/24/2022] Open
Affiliation(s)
- P. Jesper Sjöström
- Department of Medicine, Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
28
Acharya J, Basu A, Legenstein R, Limbacher T, Poirazi P, Wu X. Dendritic Computing: Branching Deeper into Machine Learning. Neuroscience 2021; 489:275-289. [PMID: 34656706 DOI: 10.1016/j.neuroscience.2021.10.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 03/02/2021] [Revised: 09/07/2021] [Accepted: 10/03/2021] [Indexed: 12/31/2022]
Abstract
In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the types of dendritic nonlinearities, their respective plasticity rules and their effect on biological learning as assessed by computational models. Four major computational implications are identified: improved expressivity, more efficient use of resources, utilization of internal learning signals, and enabling continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems, with performance reported on well-known datasets used in machine learning. The works are categorized according to the three primary methods of plasticity used: structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.
Affiliation(s)
- Arindam Basu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
- Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Thomas Limbacher
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Greece
- Xundong Wu
- School of Computer Science, Hangzhou Dianzi University, China
29
Pampaloni NP, Plested AJR. Slow excitatory synaptic currents generated by AMPA receptors. J Physiol 2021; 600:217-232. [PMID: 34587649 DOI: 10.1113/jp280877] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/11/2021] [Accepted: 09/01/2021] [Indexed: 12/28/2022] Open
Abstract
Decades of literature indicate that the AMPA-type glutamate receptor is among the fastest acting of all neurotransmitter receptors. These receptors are located at excitatory synapses, and conventional wisdom says that they activate in hundreds of microseconds, deactivate in milliseconds due to their low affinity for glutamate and also desensitize profoundly. These properties circumscribe AMPA receptor activation in both space and time. However, accumulating evidence shows that AMPA receptors can also activate with slow, indefatigable responses. They do so through interactions with auxiliary subunits that are able to promote a switch to a high open probability, high-conductance 'superactive' mode. In this review, we show that any assumption that this phenomenon is limited to heterologous expression is false; rather, slow AMPA currents have been widely and repeatedly observed throughout the nervous system. Hallmarks of the superactive mode are a lack of desensitization, resistance to competitive antagonists and a current decay that outlives free glutamate by hundreds of milliseconds. Because the switch to the superactive mode is triggered by activation, AMPA receptors can generate accumulating 'pedestal' currents in response to repetitive stimulation, constituting a postsynaptic mechanism for short-term potentiation in the range 5-100 Hz. Further, slow AMPA currents span 'cognitive' time intervals in the 100 ms range (theta rhythms), of particular interest for hippocampal function, where slow AMPA currents are widely expressed in a synapse-specific manner. Here, we outline the implications that slow AMPA receptors have for excitatory synaptic transmission and computation in the nervous system.
Collapse
Affiliation(s)
- Niccolò P Pampaloni
- Institute of Biology, Cellular Biophysics, Humboldt Universität zu Berlin, Berlin, Germany.,Leibniz-Forschungsinstitut für Molekulare Pharmakologie, Berlin, Germany.,NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, and Berlin Institute of Health, Charitéplatz 1, Berlin, Germany
| | - Andrew J R Plested
- Institute of Biology, Cellular Biophysics, Humboldt Universität zu Berlin, Berlin, Germany.,Leibniz-Forschungsinstitut für Molekulare Pharmakologie, Berlin, Germany.,NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, and Berlin Institute of Health, Charitéplatz 1, Berlin, Germany
| |
Collapse
|
30
|
Abadía I, Naveros F, Ros E, Carrillo RR, Luque NR. A cerebellar-based solution to the nondeterministic time delay problem in robotic control. Sci Robot 2021; 6:eabf2756. [PMID: 34516748 DOI: 10.1126/scirobotics.abf2756] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
[Figure: see text].
Collapse
Affiliation(s)
- Ignacio Abadía
- Research Centre for Information and Communication Technologies (CITIC), Department of Computer Architecture and Technology, University of Granada, Granada, Spain
| | - Francisco Naveros
- Research Centre for Information and Communication Technologies (CITIC), Department of Computer Architecture and Technology, University of Granada, Granada, Spain.,Computer School, Department of Architecture and Technology of Informatics Systems, Polytechnic University of Madrid, Madrid, Spain
| | - Eduardo Ros
- Research Centre for Information and Communication Technologies (CITIC), Department of Computer Architecture and Technology, University of Granada, Granada, Spain
| | - Richard R Carrillo
- Research Centre for Information and Communication Technologies (CITIC), Department of Computer Architecture and Technology, University of Granada, Granada, Spain
| | - Niceto R Luque
- Research Centre for Information and Communication Technologies (CITIC), Department of Computer Architecture and Technology, University of Granada, Granada, Spain
| |
Collapse
|
31
|
Blazek PJ, Lin MM. Explainable neural networks that simulate reasoning. NATURE COMPUTATIONAL SCIENCE 2021; 1:607-618. [PMID: 38217134 DOI: 10.1038/s43588-021-00132-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 08/16/2021] [Indexed: 01/15/2024]
Abstract
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
Collapse
Affiliation(s)
- Paul J Blazek
- Green Center for Systems Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Biophysics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Milo M Lin
- Green Center for Systems Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA.
- Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
- Department of Biophysics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
- Center for Alzheimer's and Neurodegenerative Diseases, University of Texas Southwestern Medical Center, Dallas, TX, USA.
| |
Collapse
|
32
|
Kaiser J, Billaudelle S, Müller E, Tetzlaff C, Schemmel J, Schmitt S. Emulating Dendritic Computing Paradigms on Analog Neuromorphic Hardware. Neuroscience 2021; 489:290-300. [PMID: 34428499 DOI: 10.1016/j.neuroscience.2021.08.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 08/10/2021] [Accepted: 08/11/2021] [Indexed: 10/20/2022]
Abstract
BrainScaleS-2 is an accelerated and highly configurable neuromorphic system with physical models of neurons and synapses. Beyond networks of spiking point neurons, it allows for the implementation of user-defined neuron morphologies. Both passive propagation of electric signals between compartments as well as dendritic spikes and plateau potentials can be emulated. In this paper, three multi-compartment neuron morphologies are chosen to demonstrate passive propagation of postsynaptic potentials, spatio-temporal coincidence detection of synaptic inputs in a dendritic branch, and the replication of the BAC burst firing mechanism found in layer 5 pyramidal neurons of the neocortex.
Collapse
Affiliation(s)
- Jakob Kaiser
- Heidelberg University, Kirchhoff-Institute for Physics, Germany
| | | | - Eric Müller
- Heidelberg University, Kirchhoff-Institute for Physics, Germany
| | | | | | | |
Collapse
|
33
|
Lipshutz D, Bahroun Y, Golkar S, Sengupta AM, Chklovskii DB. A Biologically Plausible Neural Network for Multichannel Canonical Correlation Analysis. Neural Comput 2021; 33:2309-2352. [PMID: 34412114 DOI: 10.1162/neco_a_01414] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 03/23/2021] [Indexed: 11/04/2022]
Abstract
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose neural architecture and synaptic updates resemble neural circuitry and non-Hebbian plasticity observed in the cortex.
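As context for the abstract above, the core CCA computation can be sketched offline as an SVD of the whitened cross-covariance (this is textbook CCA on toy data, not the authors' online, biologically plausible network; the data, dimensions, and seed are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "input populations" driven by a shared latent signal (toy data).
n, d = 500, 4
z = rng.normal(size=(n, 1))                          # shared latent variable
X = z @ rng.normal(size=(1, d)) + 0.5 * rng.normal(size=(n, d))
Y = z @ rng.normal(size=(1, d)) + 0.5 * rng.normal(size=(n, d))

def cca(X, Y):
    # Center each view.
    X = X - X.mean(0)
    Y = Y - Y.mean(0)

    def isqrt(C):
        # Inverse matrix square root via eigendecomposition (whitening).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Cx = X.T @ X / len(X)
    Cy = Y.T @ Y / len(Y)
    Cxy = X.T @ Y / len(X)
    # Singular values of the whitened cross-covariance are the
    # canonical correlations; U, Vt give the projection directions.
    U, s, Vt = np.linalg.svd(isqrt(Cx) @ Cxy @ isqrt(Cy))
    return s, isqrt(Cx) @ U, isqrt(Cy) @ Vt.T

corrs, Wx, Wy = cca(X, Y)
print(corrs)  # top canonical correlation is large because of the shared latent
```

The paper's contribution is not this batch computation but an online network with multicompartmental neurons and local, non-Hebbian updates derived from a related CCA objective.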
Collapse
Affiliation(s)
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Yanis Bahroun
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 U.S.A.
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Neuroscience Institute, NYU Medical Center, New York, NY 10016, U.S.A.
| |
Collapse
|
34
|
Jones IS, Kording KP. Do Biological Constraints Impair Dendritic Computation? Neuroscience 2021; 489:262-274. [PMID: 34364955 PMCID: PMC8835230 DOI: 10.1016/j.neuroscience.2021.07.036] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 07/28/2021] [Accepted: 07/30/2021] [Indexed: 11/28/2022]
Abstract
Computations on the dendritic trees of neurons are subject to important constraints. Voltage-dependent conductances in dendrites are not equivalent to arbitrary direct-current generators: they are the basis for dendritic nonlinearities, and they cannot convert positive currents into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network, and it has been shown that such an architecture could be computationally strong, we do not know whether that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints and may even benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks.
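A toy illustration of computing under one such constraint: two sigmoidal dendritic "subunits" feed a somatic nonlinearity, and every synaptic weight is kept nonnegative, so positive currents are never converted into negative ones. The weights and thresholds below are hypothetical, chosen only to show that coincidence detection across branches survives the constraint:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each branch is a sigmoidal subunit over its own synaptic inputs.
W_branch = np.array([[4.0, 4.0, 0.0, 0.0],    # branch 1 sees inputs 0 and 1
                     [0.0, 0.0, 4.0, 4.0]])   # branch 2 sees inputs 2 and 3
b_branch = -6.0                               # subunit threshold
w_soma = np.array([8.0, 8.0])                 # nonnegative branch-to-soma weights
b_soma = -12.0                                # somatic threshold

def neuron(x):
    # All weights are nonnegative (excitatory); inhibition is not available,
    # so selectivity must come from thresholds and saturating nonlinearities.
    branches = sigmoid(W_branch @ x + b_branch)
    return sigmoid(w_soma @ branches + b_soma)

# The soma responds strongly only when BOTH branches receive coincident input:
print(neuron(np.array([1.0, 1.0, 1.0, 1.0])))  # both branches active -> high
print(neuron(np.array([1.0, 1.0, 0.0, 0.0])))  # one branch active -> low
```

This two-layer structure is the "dendritic tree as a multi-layer network" picture the abstract refers to, restricted to purely excitatory weights.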
Collapse
Affiliation(s)
| | - Konrad Paul Kording
- Department of Neuroscience, University of Pennsylvania, United States; Department of Bioengineering, University of Pennsylvania, United States
| |
Collapse
|
35
|
Chavlis S, Poirazi P. Drawing inspiration from biological dendrites to empower artificial neural networks. Curr Opin Neurobiol 2021; 70:1-10. [PMID: 34087540 DOI: 10.1016/j.conb.2021.04.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 04/21/2021] [Accepted: 04/28/2021] [Indexed: 12/24/2022]
Abstract
This article highlights specific features of biological neurons and their dendritic trees, whose adoption may help advance artificial neural networks used in various machine learning applications. Advancements could take the form of increased computational capabilities and/or reduced power consumption. Proposed features include dendritic anatomy, dendritic nonlinearities, and compartmentalized plasticity rules, all of which shape learning and information processing in biological networks. We discuss the computational benefits provided by these features in biological neurons and suggest ways to adopt them in artificial neurons in order to exploit the respective benefits in machine learning.
Collapse
Affiliation(s)
- Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, 70013, Greece
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, 70013, Greece.
| |
Collapse
|
36
|
Payeur A, Guerguiev J, Zenke F, Richards BA, Naud R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nat Neurosci 2021; 24:1010-1019. [PMID: 33986551 DOI: 10.1038/s41593-021-00857-x] [Citation(s) in RCA: 67] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Accepted: 04/15/2021] [Indexed: 01/25/2023]
Abstract
Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well established that it depends on pre- and postsynaptic activity. However, models that rely solely on pre- and postsynaptic activity for synaptic changes have, so far, not been able to account for learning complex tasks that demand credit assignment in hierarchical networks. Here we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then pyramidal neurons higher in a hierarchical circuit can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
Collapse
Affiliation(s)
- Alexandre Payeur
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada.,Ottawa Brain and Mind Institute, University of Ottawa, Ottawa, ON, Canada.,Centre for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada.,University of Montréal and Mila, Montréal, QC, Canada
| | - Jordan Guerguiev
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, ON, Canada.,Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada
| | - Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
| | - Blake A Richards
- Mila, Montréal, QC, Canada. .,Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada. .,School of Computer Science, McGill University, Montréal, QC, Canada. .,Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, ON, Canada.
| | - Richard Naud
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada. .,Ottawa Brain and Mind Institute, University of Ottawa, Ottawa, ON, Canada. .,Centre for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada. .,Department of Physics, University of Ottawa, Ottawa, ON, Canada.
| |
Collapse
|
37
|
Qin S, Mudur N, Pehlevan C. Contrastive Similarity Matching for Supervised Learning. Neural Comput 2021; 33:1300-1328. [PMID: 33617744 DOI: 10.1162/neco_a_01374] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Accepted: 11/23/2020] [Indexed: 11/04/2022]
Abstract
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
Collapse
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
| | - Nayantara Mudur
- Department of Physics, Harvard University, Cambridge, MA 02138, U.S.A.
| | - Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
| |
Collapse
|
38
|
Raman DV, O'Leary T. Frozen algorithms: how the brain's wiring facilitates learning. Curr Opin Neurobiol 2021; 67:207-214. [PMID: 33508698 PMCID: PMC8202511 DOI: 10.1016/j.conb.2020.12.017] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 12/21/2020] [Accepted: 12/30/2020] [Indexed: 12/03/2022]
Abstract
Synapses and neural connectivity are plastic and shaped by experience. But to what extent does connectivity itself influence the ability of a neural circuit to learn? Insights from optimization theory and AI shed light on how learning can be implemented in neural circuits. Though abstract in their nature, learning algorithms provide a principled set of hypotheses on the necessary ingredients for learning in neural circuits. These include the kinds of signals and circuit motifs that enable learning from experience, as well as an appreciation of the constraints that make learning challenging in a biological setting. Remarkably, some simple connectivity patterns can boost the efficiency of relatively crude learning rules, showing how the brain can use anatomy to compensate for the biological constraints of known synaptic plasticity mechanisms. Modern connectomics provides rich data for exploring this principle, and may reveal how brain connectivity is constrained by the requirement to learn efficiently.
Collapse
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom.
| |
Collapse
|
39
|
Rossbroich J, Trotter D, Beninger J, Tóth K, Naud R. Linear-nonlinear cascades capture synaptic dynamics. PLoS Comput Biol 2021; 17:e1008013. [PMID: 33720935 PMCID: PMC7993773 DOI: 10.1371/journal.pcbi.1008013] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 03/25/2021] [Accepted: 02/25/2021] [Indexed: 11/18/2022] Open
Abstract
Short-term synaptic dynamics differ markedly across connections and strongly regulate how action potentials communicate information. To model the range of synaptic dynamics observed in experiments, we have developed a flexible mathematical framework based on a linear-nonlinear operation. This model can capture various experimentally observed features of synaptic dynamics and different types of heteroskedasticity. Despite its conceptual simplicity, we show that it is more adaptable than previous models. Combined with a standard maximum likelihood approach, synaptic dynamics can be accurately and efficiently characterized using naturalistic stimulation patterns. These results make explicit that synaptic processing bears algorithmic similarities with information processing in convolutional neural networks.
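A rough sketch of the linear-nonlinear idea described above: an exponential "efficacy kernel" is convolved with the presynaptic spike train, and a sigmoidal readout maps the accumulated drive to a per-spike release efficacy. The kernel shape and every parameter here are made up for illustration, not fitted values from the paper:

```python
import numpy as np

def ln_synapse(spike_times, tau=100.0, amp=1.5, b=-1.0, dt=1.0, T=1000):
    """Per-spike efficacy from a linear-nonlinear cascade (illustrative)."""
    t = np.arange(0, T, dt)
    spikes = np.zeros_like(t)
    idx = (np.asarray(spike_times) / dt).astype(int)
    spikes[idx] = 1.0

    # Linear stage: convolve the spike train with an exponential kernel.
    kernel = amp * np.exp(-t / tau)
    drive = np.convolve(spikes, kernel)[: len(t)] * dt

    # Nonlinear stage: sigmoid readout keeps efficacy in (0, 1).
    efficacy = 1.0 / (1.0 + np.exp(-(b + drive)))

    # Return the efficacy at each spike time; with amp > 0, successive
    # spikes ride on accumulated drive, i.e. short-term facilitation.
    return efficacy[idx]

eff = ln_synapse([100, 150, 200, 250])
print(eff)  # monotonically increasing for this facilitating parameter choice
```

Flipping the sign of `amp` would yield depression instead, which is one way such a cascade can span the range of short-term dynamics the abstract mentions.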
Collapse
Affiliation(s)
- Julian Rossbroich
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
| | - Daniel Trotter
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
| | - John Beninger
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Katalin Tóth
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Richard Naud
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
40
|
Yang S, Gao T, Wang J, Deng B, Lansdell B, Linares-Barranco B. Efficient Spike-Driven Learning With Dendritic Event-Based Processing. Front Neurosci 2021; 15:601109. [PMID: 33679295 PMCID: PMC7933681 DOI: 10.3389/fnins.2021.601109] [Citation(s) in RCA: 60] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 01/21/2021] [Indexed: 11/22/2022] Open
Abstract
A critical challenge in neuromorphic computing is to devise computationally efficient learning algorithms. When implementing gradient-based learning, error information must be routed through the network so that each neuron knows its contribution to the output, and thus how to adjust its weights. This is known as the credit assignment problem. Exactly implementing a solution like backpropagation involves weight sharing, which requires additional bandwidth and computation in a neuromorphic system. Instead, models of learning from neuroscience can provide inspiration for how to communicate error information efficiently, without weight sharing. Here we present a novel dendritic event-based processing (DEP) algorithm, using a two-compartment leaky integrate-and-fire neuron with partially segregated dendrites, that effectively solves the credit assignment problem. To optimize the proposed algorithm, a dynamic fixed-point representation method and a piecewise linear approximation approach are presented, and the synaptic events are binarized during learning. These optimizations make the proposed DEP algorithm well suited for implementation in digital or mixed-signal neuromorphic hardware. The experimental results show that spiking representations can be learned rapidly and achieve high performance using the proposed DEP algorithm. We find that the learning capability is affected by the degree of dendritic segregation and by the form of the synaptic feedback connections. This study provides a bridge between biological and neuromorphic learning, and is relevant to real-time applications in the field of artificial intelligence.
Collapse
Affiliation(s)
- Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Tian Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Benjamin Lansdell
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
| | | |
Collapse
|
41
|
Young H, Belbut B, Baeta M, Petreanu L. Laminar-specific cortico-cortical loops in mouse visual cortex. eLife 2021; 10:e59551. [PMID: 33522479 PMCID: PMC7877907 DOI: 10.7554/elife.59551] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 01/29/2021] [Indexed: 11/13/2022] Open
Abstract
Many theories propose recurrent interactions across the cortical hierarchy, but it is unclear if cortical circuits are selectively wired to implement looped computations. Using subcellular channelrhodopsin-2-assisted circuit mapping in mouse visual cortex, we compared feedforward (FF) or feedback (FB) cortico-cortical (CC) synaptic input to cells projecting back to the input source (looped neurons) with cells projecting to a different cortical or subcortical area. FF and FB afferents showed similar cell-type selectivity, making stronger connections with looped neurons than with other projection types in layer (L)5 and L6, but not in L2/3, resulting in selective modulation of activity in looped neurons. In most cases, stronger connections in looped L5 neurons were located on their apical tufts, but not on their perisomatic dendrites. Our results reveal that CC connections are selectively wired to form monosynaptic excitatory loops and support a differential role of supragranular and infragranular neurons in hierarchical recurrent computations.
Collapse
Affiliation(s)
- Hedi Young
- Champalimaud Research, Champalimaud Center for the Unknown, Lisbon, Portugal
| | - Beatriz Belbut
- Champalimaud Research, Champalimaud Center for the Unknown, Lisbon, Portugal
| | - Margarida Baeta
- Champalimaud Research, Champalimaud Center for the Unknown, Lisbon, Portugal
| | - Leopoldo Petreanu
- Champalimaud Research, Champalimaud Center for the Unknown, Lisbon, Portugal
| |
Collapse
|
42
|
Kruijne W, Bohte SM, Roelfsema PR, Olivers CNL. Flexible Working Memory Through Selective Gating and Attentional Tagging. Neural Comput 2020; 33:1-40. [PMID: 33080159 DOI: 10.1162/neco_a_01339] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Working memory is essential: it serves to guide intelligent behavior of humans and nonhuman primates when task-relevant stimuli are no longer present to the senses. Moreover, complex tasks often require that multiple working memory representations can be flexibly and independently maintained, prioritized, and updated according to changing task demands. Thus far, neural network models of working memory have been unable to offer an integrative account of how such control mechanisms can be acquired in a biologically plausible manner. Here, we present WorkMATe, a neural network architecture that models cognitive control over working memory content and learns the appropriate control operations needed to solve complex working memory tasks. Key components of the model include a gated memory circuit that is controlled by internal actions, encoding sensory information through untrained connections, and a neural circuit that matches sensory inputs to memory content. The network is trained by means of a biologically plausible reinforcement learning rule that relies on attentional feedback and reward prediction errors to guide synaptic updates. We demonstrate that the model successfully acquires policies to solve classical working memory tasks, such as delayed recognition and delayed pro-saccade/anti-saccade tasks. In addition, the model solves much more complex tasks, including the hierarchical 12-AX task and the ABAB ordered recognition task, both of which require an agent to independently store and update multiple items separately in memory. Furthermore, the control strategies that the model acquires for these tasks subsequently generalize to new task contexts with novel stimuli, thus bringing symbolic production rule qualities to a neural network architecture. As such, WorkMATe provides a new solution for the neural implementation of flexible memory control.
Collapse
Affiliation(s)
- Wouter Kruijne
- Faculty of Behavior and Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT Amsterdam, Noord Holland, The Netherlands
| | - Sander M Bohte
- Machine Learning Group, Centrum voor Wiskunde & Informatica, 1098 XG Amsterdam, Noord Holland, The Netherlands; Swammerdam Institute of Life Sciences, University of Amsterdam, 1098 XH Amsterdam, Noord Holland, The Netherlands; and Department of Computer Science, Rijksuniversiteit Groningen, 9747 AG Groningen, The Netherlands
| | - Pieter R Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, 1105BA Amsterdam, Noord Holland, The Netherlands; Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, 1981 HV Amsterdam, Noord Holland, The Netherlands; and Department of Computer Science, Rijksuniversiteit Groningen, 9747 AG Groningen, The Netherlands
| | - Christian N L Olivers
- Faculty of Behavior and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, Noord Holland, The Netherlands, Department of Psychiatry, Academic Medical Center, Amsterdam, The Netherlands
| |
Collapse
|
43
|
Ebner C, Clopath C, Jedlicka P, Cuntz H. Unifying Long-Term Plasticity Rules for Excitatory Synapses by Modeling Dendrites of Cortical Pyramidal Neurons. Cell Rep 2020; 29:4295-4307.e6. [PMID: 31875541 PMCID: PMC6941234 DOI: 10.1016/j.celrep.2019.11.068] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Revised: 05/02/2019] [Accepted: 11/15/2019] [Indexed: 11/30/2022] Open
Abstract
A large number of experiments have indicated that precise spike times, firing rates, and synapse locations crucially determine the dynamics of long-term plasticity induction in excitatory synapses. However, it remains unknown how plasticity mechanisms of synapses distributed along dendritic trees cooperate to produce the wide spectrum of outcomes for various plasticity protocols. Here, we propose a four-pathway plasticity framework that is well grounded in experimental evidence and apply it to a biophysically realistic cortical pyramidal neuron model. We show in computer simulations that several seemingly contradictory experimental landmark studies are consistent with one unifying set of mechanisms when considering the effects of signal propagation in dendritic trees with respect to synapse location. Our model identifies specific spatiotemporal contributions of dendritic and axo-somatic spikes as well as of subthreshold activation of synaptic clusters, providing a unified parsimonious explanation not only for rate and timing dependence but also for location dependence of synaptic changes. Highlights: A phenomenological synaptic plasticity rule is applied to a pyramidal neuron model. The model reproduces rate-, timing-, and location-dependent plasticity results. Active dendrites allow plasticity via dendritic spikes and subthreshold events. Cooperative plasticity exists across the dendritic tree and within single branches.
Collapse
Affiliation(s)
- Christian Ebner
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; NeuroCure Cluster of Excellence, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany; Institute for Biology, Humboldt-Universität zu Berlin, 10117 Berlin, Germany.
| | - Claudia Clopath
- Computational Neuroscience Laboratory, Bioengineering Department, Imperial College London, London SW7 2AZ, UK
| | - Peter Jedlicka
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, 60528 Frankfurt am Main, Germany; ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus-Liebig-University, 35392 Giessen, Germany
| | - Hermann Cuntz
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
| |
Collapse
|
44
|
Hertäg L, Sprekeler H. Learning prediction error neurons in a canonical interneuron circuit. eLife 2020; 9:e57541. [PMID: 32820723 PMCID: PMC7442488 DOI: 10.7554/elife.57541] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Accepted: 07/28/2020] [Indexed: 11/13/2022] Open
Abstract
Sensory systems constantly compare external sensory information with internally generated predictions. While neural hallmarks of prediction errors have been found throughout the brain, the circuit-level mechanisms that underlie their computation are still largely unknown. Here, we show that a well-orchestrated interplay of three interneuron types shapes the development and refinement of negative prediction-error neurons in a computational model of mouse primary visual cortex. By balancing excitation and inhibition in multiple pathways, experience-dependent inhibitory plasticity can generate different variants of prediction-error circuits, which can be distinguished by simulated optogenetic experiments. The experience-dependence of the model circuit is consistent with that of negative prediction-error circuits in layer 2/3 of mouse primary visual cortex. Our model makes a range of testable predictions that may shed light on the circuitry underlying the neural computation of prediction errors.
Affiliation(s)
- Loreen Hertäg
- Modelling of Cognitive Processes, Institute of Software Engineering and Theoretical Computer Science, Berlin Institute of Technology, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Henning Sprekeler
- Modelling of Cognitive Processes, Institute of Software Engineering and Theoretical Computer Science, Berlin Institute of Technology, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
45
Costa RM, Baxter DA, Byrne JH. Computational model of the distributed representation of operant reward memory: combinatoric engagement of intrinsic and synaptic plasticity mechanisms. Learn Mem 2020; 27:236-249. PMID: 32414941; PMCID: PMC7233148; DOI: 10.1101/lm.051367.120.
Abstract
Operant reward learning of feeding behavior in Aplysia increases the frequency and regularity of biting, as well as biases buccal motor patterns (BMPs) toward ingestion-like BMPs (iBMPs). The engram underlying this memory comprises cells that are part of a central pattern generating (CPG) circuit and includes increases in the intrinsic excitability of identified cells B30, B51, B63, and B65, and increases in B63-B30 and B63-B65 electrical synaptic coupling. To examine the ways in which sites of plasticity (individually and in combination) contribute to memory expression, a model of the CPG was developed. The model included conductance-based descriptions of cells CBI-2, B4, B8, B20, B30, B31, B34, B40, B51, B52, B63, B64, and B65, and their synaptic connections. The model generated patterned activity that resembled physiological BMPs, and implementation of the engram reproduced increases in frequency, regularity, and bias. Combined enhancement of B30, B63, and B65 excitabilities increased BMP frequency and regularity, but not bias toward iBMPs. Individually, B30 increased regularity and bias, B51 increased bias, B63 increased frequency, and B65 decreased all three BMP features. Combined synaptic plasticity contributed primarily to regularity, but also to frequency and bias. B63-B30 coupling contributed to regularity and bias, and B63-B65 coupling contributed to all BMP features. Each site of plasticity altered multiple BMP features simultaneously. Moreover, plasticity loci exhibited mutual dependence and synergism. These results indicate that the memory for operant reward learning emerged from the combinatoric engagement of multiple sites of plasticity.
Affiliation(s)
- Renan M Costa
- Keck Center for the Neurobiology of Learning and Memory, Department of Neurobiology and Anatomy, McGovern Medical School at The University of Texas Health Science Center at Houston, Houston, Texas 77030, USA; MD Anderson UTHealth Graduate School of Biomedical Sciences, Houston, Texas 77030, USA
- Douglas A Baxter
- Keck Center for the Neurobiology of Learning and Memory, Department of Neurobiology and Anatomy, McGovern Medical School at The University of Texas Health Science Center at Houston, Houston, Texas 77030, USA; Engineering in Medicine (EnMed), Texas A&M Health Science Center-Houston, Houston, Texas 77030, USA
- John H Byrne
- Keck Center for the Neurobiology of Learning and Memory, Department of Neurobiology and Anatomy, McGovern Medical School at The University of Texas Health Science Center at Houston, Houston, Texas 77030, USA; MD Anderson UTHealth Graduate School of Biomedical Sciences, Houston, Texas 77030, USA
46

47
Hasselmo ME, Alexander AS, Hoyland A, Robinson JC, Bezaire MJ, Chapman GW, Saudargiene A, Carstensen LC, Dannenberg H. The Unexplored Territory of Neural Models: Potential Guides for Exploring the Function of Metabotropic Neuromodulation. Neuroscience 2020; 456:143-158. PMID: 32278058; DOI: 10.1016/j.neuroscience.2020.03.048.
Abstract
The space of possible neural models is enormous and under-explored. Single cell computational neuroscience models account for a range of dynamical properties of membrane potential, but typically do not address network function. In contrast, most models focused on network function address the dimensions of excitatory weight matrices and firing thresholds without addressing the complexities of metabotropic receptor effects on intrinsic properties. There are many under-explored dimensions of neural parameter space, and the field needs a framework for representing what has been explored and what has not. Possible frameworks include maps of parameter spaces, or efforts to categorize the fundamental elements and molecules of neural circuit function. Here we review dimensions that are under-explored in network models that include the metabotropic modulation of synaptic plasticity and presynaptic inhibition, spike frequency adaptation due to calcium-dependent potassium currents, and afterdepolarization due to calcium-sensitive non-specific cation currents and hyperpolarization activated cation currents. Neuroscience research should more effectively explore possible functional models incorporating under-explored dimensions of neural function.
Affiliation(s)
- Michael E Hasselmo
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Andrew S Alexander
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Alec Hoyland
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Jennifer C Robinson
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Marianne J Bezaire
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- G William Chapman
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Ausra Saudargiene
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Lucas C Carstensen
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
- Holger Dannenberg
- Center for Systems Neuroscience, Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave., Boston, MA 02215, United States
48
Somatodendritic consistency check for temporal feature segmentation. Nat Commun 2020; 11:1554. PMID: 32214100; PMCID: PMC7096495; DOI: 10.1038/s41467-020-15367-w.
Abstract
The brain identifies potentially salient features within continuous information streams to process hierarchical temporal events. This requires the compression of information streams, for which effective computational principles are yet to be explored. Backpropagating action potentials can induce synaptic plasticity in the dendrites of cortical pyramidal neurons. By analogy with this effect, we model a self-supervising process that increases the similarity between dendritic and somatic activities, where the somatic activity is normalized by a running average. We further show that a family of networks composed of the two-compartment neurons performs a surprisingly wide variety of complex unsupervised learning tasks, including chunking of temporal sequences and the source separation of mixed correlated signals. Common methods applicable to these temporal feature analyses were previously unknown. Our results suggest the powerful ability of neural networks with dendrites to analyze temporal features. This simple neuron model may also prove useful in neural engineering applications.

The authors propose a learning rule for a neuron model with a dendrite. In their model, somatodendritic interaction implements self-supervised learning applicable to a wide range of sequence-learning tasks, including spike-pattern detection, chunking of temporal input, and blind source separation.
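The core self-supervising step can be caricatured in a few lines. This is a minimal, hypothetical sketch of the idea only, not the paper's two-compartment spiking model: the sigmoid coupling between dendrite and soma and the constants `ETA` and `TAU` are our assumptions. A delta-rule update pulls the dendritic activity toward the somatic activity after the latter is normalized by its running average.

```python
import random
from math import exp

random.seed(0)

N, ETA, TAU = 20, 0.02, 0.9
w = [random.gauss(0.0, 0.1) for _ in range(N)]  # dendritic weights
running_avg = 0.5                               # running average of somatic activity

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

for step in range(2000):
    x = [random.random() for _ in range(N)]      # presynaptic input pattern
    dend = sum(wi * xi for wi, xi in zip(w, x))  # dendritic activity
    soma = sigmoid(dend)                         # somatic output
    running_avg = TAU * running_avg + (1 - TAU) * soma
    target = soma / running_avg                  # soma normalized by its running average
    # Self-supervising step: nudge the dendritic prediction toward the
    # normalized somatic activity (loosely analogous to plasticity driven
    # by backpropagating action potentials).
    w = [wi + ETA * (target - dend) * xi for wi, xi in zip(w, x)]
```

After training, the dendritic compartment tracks the (normalized) somatic output, which is the "consistency check" the title refers to; the paper's tasks (chunking, source separation) come from running such neurons in networks.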
49
Abstract
Synaptic plasticity, the activity-dependent change in neuronal connection strength, has long been considered an important component of learning and memory. Computational and engineering work corroborate the power of learning through the directed adjustment of connection weights. Here we review the fundamental elements of four broadly categorized forms of synaptic plasticity and discuss their functional capabilities and limitations. Although standard, correlation-based, Hebbian synaptic plasticity has been the primary focus of neuroscientists for decades, it is inherently limited. Three-factor plasticity rules supplement Hebbian forms with neuromodulation and eligibility traces, while true supervised types go even further by adding objectives and instructive signals. Finally, a recently discovered hippocampal form of synaptic plasticity combines the above elements, while leaving behind the primary Hebbian requirement. We suggest that the effort to determine the neural basis of adaptive behavior could benefit from renewed experimental and theoretical investigation of more powerful directed types of synaptic plasticity.
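The contrast between standard Hebbian and three-factor rules described above can be made concrete with a toy single-synapse sketch. This is our own illustration under simplified assumptions (the constants `ETA`, `TAU_E` and the multiplicative update forms are not taken from a specific model in the review): the Hebbian weight changes the moment pre and post coincide, while the three-factor weight stores the coincidence in a decaying eligibility trace and commits it only when a neuromodulatory signal arrives.

```python
# Toy comparison of two-factor (Hebbian) and three-factor plasticity on a
# single synapse. In the three-factor rule, a pre/post coincidence is not
# written into the weight directly; it is stored in a decaying eligibility
# trace and converted into a weight change only if a neuromodulatory
# signal (`mod`, e.g. reward) arrives while the trace is still active.

ETA, TAU_E = 0.1, 0.8       # learning rate, eligibility-trace decay
w_hebb = w_three = 0.5      # initial synaptic weights
trace = 0.0                 # eligibility trace

events = [                  # (pre, post, mod) at successive time steps
    (1, 1, 0),              # coincidence, no neuromodulator yet
    (0, 0, 0),              # silent step: the trace decays
    (0, 0, 1),              # delayed neuromodulator gates the update
]

for pre, post, mod in events:
    w_hebb += ETA * pre * post          # Hebbian: updates immediately
    trace = TAU_E * trace + pre * post  # eligibility trace of coincidences
    w_three += ETA * mod * trace        # three-factor: gated by the modulator

print(w_hebb)   # changed at the coincidence, blind to reward
print(w_three)  # changed only once the modulator arrived
```

The trace lets a reward delivered seconds after the causal activity still credit the right synapse; the supervised and behavioral-timescale forms reviewed above add instructive signals on top of this scaffold.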
Affiliation(s)
- Jeffrey C Magee
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA
- Christine Grienberger
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA
50
Tang J, Yuan F, Shen X, Wang Z, Rao M, He Y, Sun Y, Li X, Zhang W, Li Y, Gao B, Qian H, Bi G, Song S, Yang JJ, Wu H. Bridging Biological and Artificial Neural Networks with Emerging Neuromorphic Devices: Fundamentals, Progress, and Challenges. Adv Mater 2019; 31:e1902761. PMID: 31550405; DOI: 10.1002/adma.201902761.
Abstract
As the research on artificial intelligence booms, there is broad interest in brain-inspired computing using novel neuromorphic devices. The potential of various emerging materials and devices for neuromorphic computing has attracted extensive research efforts, leading to a large number of publications. Going forward, in order to better emulate the brain's functions, its relevant fundamentals, working mechanisms, and resultant behaviors need to be re-visited, better understood, and connected to electronics. A systematic overview of biological and artificial neural systems is given, along with their related critical mechanisms. Recent progress in neuromorphic devices is reviewed and, more importantly, the existing challenges are highlighted to hopefully shed light on future research directions.
Affiliation(s)
- Jianshi Tang
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China
- Fang Yuan
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Xinke Shen
- Tsinghua Laboratory of Brain and Intelligence and Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Zhongrui Wang
- Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, MA, 01003, USA
- Mingyi Rao
- Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, MA, 01003, USA
- Yuanyuan He
- Tsinghua Laboratory of Brain and Intelligence and Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Yuhao Sun
- Tsinghua Laboratory of Brain and Intelligence and Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Xinyi Li
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China
- Wenbin Zhang
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Yijun Li
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Bin Gao
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China
- He Qian
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China
- Guoqiang Bi
- School of Life Sciences, University of Science and Technology of China, Hefei, 230027, China
- Sen Song
- Tsinghua Laboratory of Brain and Intelligence and Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- J Joshua Yang
- Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, MA, 01003, USA
- Huaqiang Wu
- Institute of Microelectronics, Beijing Innovation Center for Future Chips (ICFC), Tsinghua University, Beijing, 100084, China
- Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, 100084, China