1
Maler L. Active sensing: Eavesdropping on your neighbor to locate prey. Curr Biol 2024; 34:R351-R353. PMID: 38714163. DOI: 10.1016/j.cub.2024.03.055.
Abstract
When animals that use active sensing (e.g., sonar or an electric organ discharge) cooperate while foraging, the emitted sound or electric field is available to neighboring conspecifics. Experimental and modelling studies have shown that an electric fish can use the discharges of its neighbors to extend its own electrosensory prey-detection range.
Affiliation(s)
- Leonard Maler
- Department of Cellular and Molecular Medicine, Center for Neural Dynamics, University of Ottawa, Ottawa, ON K1H 8M5, Canada.
2
Friedenberger Z, Harkin E, Tóth K, Naud R. Silences, spikes and bursts: Three-part knot of the neural code. J Physiol 2023; 601:5165-5193. PMID: 37889516. DOI: 10.1113/jp281510.
Abstract
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labelling action potentials emitted at a particularly high frequency with a metonym - bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. The burst coding hypothesis advances that the neural code has three syllables: silences, spikes and bursts. We review evidence supporting this ternary code in terms of devoted mechanisms for burst generation, synaptic transmission and synaptic plasticity. We also review the learning and attention theories for which such a triad is beneficial.
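The three "syllables" of this proposed code can be made concrete with a short sketch that segments a spike train by inter-spike interval: runs of spikes closer together than a burst threshold are labeled bursts, lone spikes stay spikes, and the gaps between events are the silences. The 10 ms threshold and the event representation are illustrative choices, not values taken from the paper.

```python
# Sketch: segmenting a spike train into the ternary code's events.
# The 10 ms burst threshold is an illustrative choice.

def classify_events(spike_times_ms, burst_isi_ms=10.0):
    """Group spikes into ('burst', n) events when consecutive ISIs fall
    at or below the threshold, and ('spike', 1) events otherwise."""
    events = []
    i = 0
    while i < len(spike_times_ms):
        j = i
        # extend the event while consecutive ISIs stay within the threshold
        while (j + 1 < len(spike_times_ms)
               and spike_times_ms[j + 1] - spike_times_ms[j] <= burst_isi_ms):
            j += 1
        n = j - i + 1
        events.append(("burst" if n >= 2 else "spike", n))
        i = j + 1
    return events

train = [5.0, 100.0, 104.0, 107.0, 300.0]  # spike times in ms
print(classify_events(train))              # one isolated spike, a 3-spike burst, a spike
```

With this segmentation, a downstream decoder can treat bursts and isolated spikes as distinct symbols rather than collapsing them into a single rate.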
Affiliation(s)
- Zachary Friedenberger
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Emerson Harkin
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Katalin Tóth
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Richard Naud
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
3
Xie M, Muscinelli SP, Decker Harris K, Litwin-Kumar A. Task-dependent optimal representations for cerebellar learning. eLife 2023; 12:e82914. PMID: 37671785. PMCID: PMC10541175. DOI: 10.7554/elife.82914.
Abstract
The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
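The quantity at stake here, the coding level of the granule layer (the fraction of cells active for a given input), can be sketched with a random expansion and a firing threshold: raising the threshold makes the representation sparser. The layer sizes, fan-in, and thresholds below are illustrative, not parameters from the paper.

```python
# Sketch: a threshold on granule-cell activations sets the coding level
# (fraction of active cells). Sizes and thresholds are illustrative.
import random
random.seed(0)

N_MOSSY, N_GRANULE, FAN_IN = 50, 1000, 4

# each granule cell sums a few randomly chosen mossy-fiber inputs
wiring = [random.sample(range(N_MOSSY), FAN_IN) for _ in range(N_GRANULE)]
mossy = [random.gauss(0.0, 1.0) for _ in range(N_MOSSY)]

def granule_activity(theta):
    """Rectified responses above threshold theta; higher theta -> sparser code."""
    return [max(0.0, sum(mossy[i] for i in idx) - theta) for idx in wiring]

def coding_level(theta):
    acts = granule_activity(theta)
    return sum(a > 0 for a in acts) / len(acts)

# raising the threshold lowers the coding level (sparser representation)
print(coding_level(0.0), coding_level(2.0))
```

The paper's question is then which coding level makes downstream learning easiest for a given task; this sketch only shows the knob, not the optimization.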
Affiliation(s)
- Marjorie Xie
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Samuel P Muscinelli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, United States
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
4
Fukutomi M, Carlson BA. Hormonal coordination of motor output and internal prediction of sensory consequences in an electric fish. Curr Biol 2023; 33:3350-3359.e4. PMID: 37490922. DOI: 10.1016/j.cub.2023.06.069.
Abstract
Steroid hormones remodel neural networks to induce seasonal or developmental changes in behavior. Hormonal changes in behavior likely require coordinated changes in sensorimotor integration. Here, we investigate hormonal effects on a predictive motor signal, termed corollary discharge, that modulates sensory processing in weakly electric mormyrid fish. In the electrosensory pathway mediating communication behavior, inhibition activated by a corollary discharge blocks sensory responses to self-generated electric pulses, allowing the downstream circuit to selectively analyze communication signals from nearby fish. These pulses are elongated by increasing testosterone levels in males during the breeding season. We induced electric-pulse elongation using testosterone treatment and found that the timing of electroreceptor responses to self-generated pulses was delayed as electric-pulse duration increased. Simultaneous recordings from an electrosensory nucleus and electromotor neurons revealed that the timing of corollary discharge inhibition was delayed and elongated by testosterone. Furthermore, this shift in the timing of corollary discharge inhibition was precisely matched to the shift in timing of receptor responses to self-generated pulses. We then asked whether the shift in inhibition timing was caused by direct action of testosterone on the corollary discharge circuit or by plasticity acting on the circuit in response to altered sensory feedback. We surgically silenced the electric organ of fish and found similar hormonal modulation of corollary discharge timing between intact and silent fish, suggesting that sensory feedback was not required for this shift. Our findings demonstrate that testosterone directly regulates motor output and internal prediction of the resulting sensory consequences in a coordinated manner.
Affiliation(s)
- Matasaburo Fukutomi
- Department of Biology, Washington University in St. Louis, St. Louis, MO 63130, USA
- Bruce A Carlson
- Department of Biology, Washington University in St. Louis, St. Louis, MO 63130, USA
5
Wallach A, Sawtell NB. An internal model for canceling self-generated sensory input in freely behaving electric fish. Neuron 2023; 111:2570-2582.e5. PMID: 37321221. PMCID: PMC10524831. DOI: 10.1016/j.neuron.2023.05.019.
Abstract
Internal models that predict the sensory consequences of motor actions are vital for sensory, motor, and cognitive functions. However, the relationship between motor action and sensory input is complex, often varying from one moment to another depending on the state of the animal and the environment. The neural mechanisms for generating predictions under such challenging, real-world conditions remain largely unknown. Using novel methods for underwater neural recording, a quantitative analysis of unconstrained behavior, and computational modeling, we provide evidence for an unexpectedly sophisticated internal model at the first stage of active electrosensory processing in mormyrid fish. Closed-loop manipulations reveal that electrosensory lobe neurons are capable of simultaneously learning and storing multiple predictions of the sensory consequences of motor commands specific to different sensory states. These results provide mechanistic insights into how internal motor signals and information about the sensory environment are combined within a cerebellum-like circuitry to predict the sensory consequences of natural behavior.
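The core idea, several predictions of the sensory consequence of a motor command stored simultaneously and selected by the current sensory state, can be sketched as a bank of state-indexed predictors learned online. The state labels, scalar consequences, and learning rate below are hypothetical illustrations, not quantities from the study.

```python
# Sketch: state-specific predictions of a motor command's sensory
# consequence, each learned online and kept simultaneously.
# States, values, and the learning rate are illustrative.

def make_predictor(eta=0.2):
    memory = {}  # sensory state -> learned predicted consequence
    def step(state, observed):
        pred = memory.get(state, 0.0)
        memory[state] = pred + eta * (observed - pred)  # update that state's model
        return observed - pred                          # residual after cancellation
    return step

step = make_predictor()
# the same motor command has different consequences in different states,
# e.g. near an object versus in open water (hypothetical labels)
for _ in range(50):
    step("near_object", 1.0)
    step("open_water", -0.5)

print("residuals:", step("near_object", 1.0), step("open_water", -0.5))
```

After training, the residual in each state is near zero: both predictions coexist, and switching states switches which one is subtracted.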
Affiliation(s)
- Avner Wallach
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Nathaniel B Sawtell
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
6
Muller SZ, Abbott LF, Sawtell NB. A mechanism for differential control of axonal and dendritic spiking underlying learning in a cerebellum-like circuit. Curr Biol 2023; 33:2657-2667.e4. PMID: 37311457. PMCID: PMC10524478. DOI: 10.1016/j.cub.2023.05.040.
Abstract
In addition to the action potentials used for axonal signaling, many neurons generate dendritic "spikes" associated with synaptic plasticity. However, in order to control both plasticity and signaling, synaptic inputs must be able to differentially modulate the firing of these two spike types. Here, we investigate this issue in the electrosensory lobe (ELL) of weakly electric mormyrid fish, where separate control over axonal and dendritic spikes is essential for the transmission of learned predictive signals from inhibitory interneurons to the output stage of the circuit. Through a combination of experimental and modeling studies, we uncover a novel mechanism by which sensory input selectively modulates the rate of dendritic spiking by adjusting the amplitude of backpropagating axonal action potentials. Interestingly, this mechanism does not require spatially segregated synaptic inputs or dendritic compartmentalization but relies instead on an electrotonically distant spike initiation site in the axon, a common biophysical feature of neurons.
Affiliation(s)
- Salomon Z Muller
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- L F Abbott
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY 10027, USA
- Nathaniel B Sawtell
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
7
Li C, Huang Z, Zou W, Huang H. Statistical mechanics of continual learning: Variational principle and mean-field potential. Phys Rev E 2023; 108:014309. PMID: 37583230. DOI: 10.1103/physreve.108.014309.
Abstract
An obstacle to artificial general intelligence is posed by the continual learning of multiple tasks of a different nature. Recently, various heuristic tricks, from both machine learning and neuroscience angles, were proposed, but they lack a unified theoretical foundation. Here, we focus on continual learning in single-layered and multilayered neural networks of binary weights. A variational Bayesian learning setting is thus proposed in which the neural networks are trained in a field-space, rather than a gradient-ill-defined discrete-weight space, and furthermore, weight uncertainty is naturally incorporated, and it modulates synaptic resources among tasks. From a physics perspective, we translate variational continual learning into a Franz-Parisi thermodynamic potential framework, where previous task knowledge serves as a prior probability and a reference as well. We thus interpret the continual learning of the binary perceptron in a teacher-student setting as a Franz-Parisi potential computation. The learning performance can then be analytically studied with mean-field order parameters, whose predictions coincide with numerical experiments using stochastic gradient descent methods. Based on the variational principle and Gaussian field approximation of internal preactivations in hidden layers, we also derive the learning algorithm considering weight uncertainty, which solves the continual learning with binary weights using multilayered neural networks, and performs better than the currently available metaplasticity algorithm in which binary synapses bear hidden continuous states and the synaptic plasticity is modulated by a heuristic regularization function. Our proposed principled frameworks also connect to elastic weight consolidation, weight-uncertainty modulated learning, and neuroscience-inspired metaplasticity, providing a theoretically grounded method for real-world multitask learning with deep networks.
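The prior-anchoring idea the abstract connects to (elastic weight consolidation: the solution to an earlier task acts as a prior that penalizes drift while a new task is learned) can be sketched with quadratic stand-ins for the task losses. The targets, penalty strength, and learning rate are illustrative; this is the EWC-style penalty, not the paper's Franz-Parisi computation.

```python
# Sketch: continual learning with a prior anchored at the old task's
# solution (elastic-weight-consolidation style). Quadratic losses stand
# in for real tasks; all values are illustrative.

def train(w, target, anchor=None, lam=0.0, eta=0.1, steps=200):
    """Gradient descent on (w - target)^2 plus, optionally, a penalty
    lam * (w - anchor)^2 pulling w toward the previous task's solution."""
    for _ in range(steps):
        grad = 2 * (w - target)              # task loss gradient
        if anchor is not None:
            grad += 2 * lam * (w - anchor)   # prior from the old task
        w -= eta * grad
    return w

w_a = train(0.0, target=1.0)                       # learn task A
w_plain = train(w_a, target=-1.0)                  # task B without a prior
w_anchored = train(w_a, target=-1.0, anchor=w_a, lam=1.0)

print(w_plain, w_anchored)  # unanchored weight forgets A; anchored one compromises
```

Without the anchor the weight moves all the way to task B's optimum (catastrophic forgetting); with it, the minimum of the combined objective sits between the two tasks.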
Affiliation(s)
- Chan Li
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Zhenye Huang
- CAS Key Laboratory for Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, People's Republic of China
- Wenxuan Zou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, China
8
Milstein AD, Li Y, Bittner KC, Grienberger C, Soltesz I, Magee JC, Romani S. Bidirectional synaptic plasticity rapidly modifies hippocampal representations. eLife 2021; 10:e73046. PMID: 34882093. PMCID: PMC8776257. DOI: 10.7554/elife.73046.
Abstract
Learning requires neural adaptations thought to be mediated by activity-dependent synaptic plasticity. A relatively non-standard form of synaptic plasticity driven by dendritic calcium spikes, or plateau potentials, has been reported to underlie place field formation in rodent hippocampal CA1 neurons. Here, we found that this behavioral timescale synaptic plasticity (BTSP) can also reshape existing place fields via bidirectional synaptic weight changes that depend on the temporal proximity of plateau potentials to pre-existing place fields. When evoked near an existing place field, plateau potentials induced less synaptic potentiation and more depression, suggesting BTSP might depend inversely on postsynaptic activation. However, manipulations of place cell membrane potential and computational modeling indicated that this anti-correlation actually results from a dependence on current synaptic weight such that weak inputs potentiate and strong inputs depress. A network model implementing this bidirectional synaptic learning rule suggested that BTSP enables population activity, rather than pairwise neuronal correlations, to drive neural adaptations to experience.
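The bidirectional rule inferred here, that a plateau potential changes each synapse in proportion to its distance from a common target so that weak inputs potentiate and strong inputs depress, can be sketched in a few lines. The target weight and rate are illustrative, not fitted values from the paper.

```python
# Sketch: weight-dependent bidirectional plasticity, as described in the
# abstract. A plateau drives every synapse toward a shared target, so
# the sign of the change depends on the current weight. Values are
# illustrative.

def btsp_update(w, w_target=1.0, eta=0.5):
    """One plateau-gated update: the change is proportional to the
    distance from the target weight (weak potentiates, strong depresses)."""
    return w + eta * (w_target - w)

weak, strong = 0.2, 1.8
print(btsp_update(weak), btsp_update(strong))  # both move toward the target
```

This weight dependence reproduces the observed anti-correlation with postsynaptic activation without any explicit dependence on activity: strong synapses are simply the ones with the least room to grow.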
Affiliation(s)
- Aaron D Milstein
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, United States
- Yiding Li
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
- Katie C Bittner
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
- Christine Grienberger
- Ivan Soltesz
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
- Jeffrey C Magee
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
- Sandro Romani
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
9
Zhang T, Cheng X, Jia S, Poo MM, Zeng Y, Xu B. Self-backpropagation of synaptic modifications elevates the efficiency of spiking and artificial neural networks. Sci Adv 2021; 7:eabh0146. PMID: 34669481. PMCID: PMC8528419. DOI: 10.1126/sciadv.abh0146.
Abstract
Many synaptic plasticity rules found in natural circuits have not been incorporated into artificial neural networks (ANNs). We showed that incorporating a nonlocal feature of synaptic plasticity found in natural neural networks, whereby synaptic modification at output synapses of a neuron backpropagates to its input synapses made by upstream neurons, markedly reduced the computational cost without affecting the accuracy of spiking neural networks (SNNs) and ANNs in supervised learning for three benchmark tasks. For SNNs, synaptic modification at output neurons generated by spike timing–dependent plasticity was allowed to self-propagate to limited upstream synapses. For ANNs, modified synaptic weights via conventional backpropagation algorithm at output neurons self-backpropagated to limited upstream synapses. Such self-propagating plasticity may produce coordinated synaptic modifications across neuronal layers that reduce computational cost.
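The self-propagation idea, a weight change computed at the output being copied, attenuated, to a limited number of upstream layers instead of computing their gradients explicitly, can be sketched as follows. The decay factor, depth limit, and uniform application of the delta are illustrative simplifications, not the paper's exact algorithm.

```python
# Sketch: a weight modification at the output self-backpropagates, in
# attenuated form, to a limited set of upstream layers. Decay factor and
# depth limit are illustrative.

def self_backpropagate(layer_weights, output_delta, decay=0.5, max_depth=1):
    """Apply output_delta at the last layer, then propagate a decayed
    copy to at most max_depth upstream layers; deeper layers are untouched."""
    updated = [list(w) for w in layer_weights]
    delta = output_delta
    for depth, layer in enumerate(reversed(updated)):
        if depth > max_depth:
            break
        for i in range(len(layer)):
            layer[i] += delta
        delta *= decay
    return updated

weights = [[0.1, 0.1], [0.2, 0.2], [0.3, 0.3]]  # three layers, input to output
print(self_backpropagate(weights, output_delta=0.4))
```

The computational saving is that upstream updates reuse the already-computed output modification rather than requiring a full backward pass.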
Affiliation(s)
- Tielin Zhang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiang Cheng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuncheng Jia
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Mu-ming Poo
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 201210, China
- Yi Zeng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Bo Xu
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Corresponding author.
10
Shadmehr R. Population coding in the cerebellum: a machine learning perspective. J Neurophysiol 2020; 124:2022-2051. PMID: 33112717. DOI: 10.1152/jn.00449.2020.
Abstract
The cerebellum resembles a feedforward, three-layer network of neurons in which the "hidden layer" consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus complex spikes can not only act as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output layer neuron (DCN) that was responsible for the error, as well as the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
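The "surrogate teacher" arrangement can be sketched as a small population of hidden units trained by one broadcast error signal whose summed, inhibitory output forms the DCN prediction. Population size, inputs, and the learning rate are illustrative, not quantities from the review.

```python
# Sketch: a P-cell population sharing one error preference. The same
# complex-spike error teaches every member, and the DCN prediction is
# the (inhibitory) sum of their outputs. Sizes and rates are illustrative.

N_PCELLS, eta = 5, 0.05
inputs = [0.5, -0.3, 0.8]                           # parallel-fiber activity
w = [[0.0] * len(inputs) for _ in range(N_PCELLS)]  # per-P-cell PF weights

def dcn_output(w):
    # DCN prediction = minus the summed P-cell simple-spike output (inhibition)
    return -sum(sum(wi * x for wi, x in zip(row, inputs)) for row in w)

target = 1.0
for _ in range(300):
    err = target - dcn_output(w)      # broadcast to the population as complex spikes
    for row in w:                     # every member receives the same teaching signal
        for k, x in enumerate(inputs):
            row[k] -= eta * err * x   # PF synapse change (sign flips through inhibition)

print(dcn_output(w))                  # prediction converges to the target
```

Because all members share the error, correcting any one P-cell also corrects the population's summed effect on the DCN neuron, which is the efficiency argument the abstract makes.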
Affiliation(s)
- Reza Shadmehr
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland
12
Neural Networks: How a Multi-Layer Network Learns to Disentangle Exogenous from Self-Generated Signals. Curr Biol 2020; 30:R224-R226. PMID: 32155426. DOI: 10.1016/j.cub.2020.01.030.
Abstract
Artificial multi-layer networks can learn difficult tasks, such as recognizing faces, but their architecture and learning rules appear to be very different from those of biological neural networks. Experimental and computational studies of a two-layered biological neural network have revealed how the learning rules used in artificial neural networks can be efficiently implemented by neurons with complex dynamics and precisely organized connectivity.
13
Perks KE, Krotinger A, Bodznick D. A cerebellum-like circuit in the lateral line system of fish cancels mechanosensory input associated with its own movements. J Exp Biol 2020; 223:jeb204438. PMID: 31953367. DOI: 10.1242/jeb.204438.
Abstract
An animal's own movement exerts a profound impact on sensory input to its nervous system. Peripheral sensory receptors do not distinguish externally generated stimuli from stimuli generated by an animal's own behavior (reafference) - although the animal often must. One way that nervous systems can solve this problem is to provide movement-related signals (copies of motor commands and sensory feedback) to sensory systems, which can then be used to generate predictions that oppose or cancel out sensory responses to reafference. Here, we studied the use of movement-related signals to generate sensory predictions in the lateral line medial octavolateralis nucleus (MON) of the little skate. In the MON, mechanoreceptive afferents synapse on output neurons that also receive movement-related signals from central sources, via a granule cell parallel fiber system. This parallel fiber system organization is characteristic of a set of so-called cerebellum-like structures. Cerebellum-like structures have been shown to support predictive cancellation of reafference in the electrosensory systems of fish and the auditory system of mice. Here, we provide evidence that the parallel fiber system in the MON can generate predictions that are negative images of (and therefore cancel) sensory input associated with respiratory and fin movements. The MON, found in most aquatic vertebrates, is probably one of the most primitive cerebellum-like structures and a starting point for cerebellar evolution. The results of this study contribute to a growing body of work that uses an evolutionary perspective on the vertebrate cerebellum to understand its functional diversity in animal behavior.
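Negative-image learning of the kind described here can be sketched with a few granule-cell channels active at different phases of a rhythmic movement: parallel-fiber weights are adjusted until the prediction they drive cancels the reafference at each phase. The one-hot basis, reafference values, and learning rate are illustrative, not measurements from the study.

```python
# Sketch: learning a negative image of movement-related reafference in a
# cerebellum-like circuit. Signal shapes and the rate are illustrative.

# corollary-discharge basis: one granule-cell channel per movement phase
cd = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # time step x granule cell
reafference = [0.8, -0.2, 0.5]           # self-generated input at each phase

w = [0.0, 0.0, 0.0]
eta = 0.2
for _ in range(100):
    for t, g in enumerate(cd):
        prediction = sum(wi * gi for wi, gi in zip(w, g))
        out = reafference[t] - prediction  # output-cell response after cancellation
        for k in range(3):
            w[k] += eta * out * g[k]       # drive the residual response to zero

residual = [reafference[t] - sum(wi * gi for wi, gi in zip(w, cd[t]))
            for t in range(3)]
print(residual)   # near zero: the stored prediction cancels the reafference
```

The weights come to store the reafference waveform, and subtracting the prediction at the output implements its negative image, leaving externally generated stimuli to pass through unchanged.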
Affiliation(s)
- Krista E Perks
- Neurosciences Department and Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Neuroscience & Behavior Program and Department of Biology, Wesleyan University, Middletown, CT 06459, USA
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Anna Krotinger
- Neuroscience & Behavior Program and Department of Biology, Wesleyan University, Middletown, CT 06459, USA
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
- David Bodznick
- Neuroscience & Behavior Program and Department of Biology, Wesleyan University, Middletown, CT 06459, USA
- Marine Biological Laboratory, Woods Hole, MA 02543, USA
14
Magee JC, Grienberger C. Synaptic Plasticity Forms and Functions. Annu Rev Neurosci 2020; 43:95-117.
Abstract
Synaptic plasticity, the activity-dependent change in neuronal connection strength, has long been considered an important component of learning and memory. Computational and engineering work corroborate the power of learning through the directed adjustment of connection weights. Here we review the fundamental elements of four broadly categorized forms of synaptic plasticity and discuss their functional capabilities and limitations. Although standard, correlation-based, Hebbian synaptic plasticity has been the primary focus of neuroscientists for decades, it is inherently limited. Three-factor plasticity rules supplement Hebbian forms with neuromodulation and eligibility traces, while true supervised types go even further by adding objectives and instructive signals. Finally, a recently discovered hippocampal form of synaptic plasticity combines the above elements, while leaving behind the primary Hebbian requirement. We suggest that the effort to determine the neural basis of adaptive behavior could benefit from renewed experimental and theoretical investigation of more powerful directed types of synaptic plasticity.
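The three-factor rules reviewed here supplement the Hebbian coincidence with an eligibility trace and a later neuromodulatory signal that converts the trace into an actual weight change. A minimal sketch, with an illustrative exponential trace and hypothetical time constants:

```python
# Sketch: a three-factor plasticity rule. Pre/post coincidences leave an
# eligibility trace; a delayed neuromodulator (third factor) reads out
# whatever trace remains. Time constants and rates are illustrative.
import math

TAU, ETA = 50.0, 0.1   # trace time constant (ms) and learning rate

def weight_change(coincidence_times_ms, modulator_time, modulator_amp=1.0):
    """Sum the eligibility left by each pre/post coincidence, decayed to
    the arrival time of the neuromodulator."""
    dw = 0.0
    for t_c in coincidence_times_ms:
        dt = modulator_time - t_c
        if dt >= 0:  # only coincidences before the modulator contribute
            dw += ETA * modulator_amp * math.exp(-dt / TAU)
    return dw

recent = weight_change([90.0], modulator_time=100.0)  # 10 ms old trace
old = weight_change([0.0], modulator_time=100.0)      # 100 ms old trace
print(recent, old)  # fresher coincidences earn the larger change
```

The trace is what lets a reward or error signal arriving hundreds of milliseconds late still credit the right synapses, which plain Hebbian rules cannot do.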
Affiliation(s)
- Jeffrey C Magee
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA
- Christine Grienberger
- Department of Neuroscience and Howard Hughes Medical Institute, Baylor College of Medicine, Houston, Texas 77030, USA