1. Kubo Y, Chalmers E, Luczak A. Biologically-inspired neuronal adaptation improves learning in neural networks. Commun Integr Biol 2023; 16:2163131. PMID: 36685291; PMCID: PMC9851208; DOI: 10.1080/19420889.2022.2163131.
Abstract
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian learning (CHL) and equilibrium propagation (EP) are biologically plausible algorithms that update weights using only local information (without explicitly calculating gradients) and still achieve performance comparable to conventional backpropagation. In this study, we augmented CHL and EP with Adjusted Adaptation, inspired by the adaptation effect observed in neurons, in which a neuron's response to a given stimulus is adjusted after a short time. We added this adaptation feature to multilayer perceptrons and convolutional neural networks trained on MNIST and CIFAR-10. Surprisingly, adaptation improved the performance of these networks. We discuss the biological inspiration for this idea and investigate why neuronal adaptation could be an important brain mechanism for improving the stability and accuracy of learning.
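The abstract leaves the exact Adjusted Adaptation rule to the paper, but the adaptation effect it describes is easy to illustrate: a unit that has just responded strongly to a stimulus responds more weakly when that stimulus repeats. Below is a minimal sketch of one such mechanism, an activity-dependent threshold with an exponential trace; the class layout, rate constants, and variable names are our assumptions, not the paper's rule.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveLayer:
    """A layer whose units adapt: recent activity raises an internal
    threshold, damping the response to a repeated stimulus."""

    def __init__(self, n_in, n_out, tau=0.9, strength=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.theta = np.zeros(n_out)   # adaptive threshold, one per unit
        self.tau = tau                 # decay rate of the activity trace
        self.strength = strength       # how strongly adaptation bites

    def forward(self, x):
        out = sigmoid(x @ self.W - self.strength * self.theta)
        # an exponential trace of recent activity drives the adaptation
        self.theta = self.tau * self.theta + (1.0 - self.tau) * out
        return out

layer = AdaptiveLayer(n_in=784, n_out=128)
x = np.random.default_rng(1).random(784)
first = layer.forward(x).mean()
for _ in range(20):                    # repeat the same stimulus
    repeated = layer.forward(x).mean()
print(first, repeated)                 # mean response drops on repetition
```

Repeating the same input drives the mean response down, mirroring the short-timescale adjustment the abstract describes; in the paper, adaptation of this general kind is combined with the CHL and EP weight updates rather than used alone.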
Affiliation(s)
- Yoshimasa Kubo
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Correspondence: Yoshimasa Kubo
- Eric Chalmers
- Department of Mathematics & Computing, Mount Royal University, Calgary, AB, Canada
- Artur Luczak
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
2. Kubo Y, Chalmers E, Luczak A. Combining backpropagation with Equilibrium Propagation to improve an Actor-Critic reinforcement learning framework. Front Comput Neurosci 2022; 16:980613. PMID: 36082305; PMCID: PMC9446087; DOI: 10.3389/fncom.2022.980613.
Abstract
Backpropagation (BP) has been used to train neural networks for many years, allowing them to solve a wide variety of tasks, such as image classification, speech recognition, and reinforcement learning. But the biological plausibility of BP as a mechanism of neural learning has been questioned. Equilibrium Propagation (EP) has been proposed as a more biologically plausible alternative and achieves comparable accuracy on the CIFAR-10 image classification task. This study proposes the first EP-based reinforcement learning architecture: an Actor-Critic architecture with the actor network trained by EP. We show that this model can solve the basic control tasks often used as benchmarks for BP-based models. Interestingly, our trained model demonstrates more consistent high-reward behavior than a comparable model trained exclusively by BP.
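The actor-critic specifics are in the paper, but the EP update it builds on (Scellier and Bengio, 2017) is standard: relax the network to a free fixed point, relax again with the output weakly nudged toward the target, and change each weight in proportion to the difference of the local pre/post activity products between the two phases. A minimal sketch of that update for one dense weight matrix follows; the function names and hard-sigmoid choice are our rendering, not code from the paper.

```python
import numpy as np

def rho(s):
    """Hard-sigmoid nonlinearity commonly used with EP."""
    return np.clip(s, 0.0, 1.0)

def ep_weight_update(W, pre_free, post_free, pre_nudged, post_nudged,
                     beta, lr=0.05):
    """Local Equilibrium Propagation update for one dense weight matrix.

    The pre/post vectors are layer activities at the fixed points of the
    free phase and of the phase nudged toward the target with strength
    beta. Contrasting the two local Hebbian products estimates the
    gradient without propagating an explicit error signal.
    """
    dW = (np.outer(rho(pre_nudged), rho(post_nudged))
          - np.outer(rho(pre_free), rho(post_free))) / beta
    return W + lr * dW

# Toy shapes only; real use supplies the fixed-point activities
# obtained from the two relaxation phases.
rng = np.random.default_rng(0)
W = ep_weight_update(rng.normal(0, 0.1, (4, 3)),
                     rng.random(4), rng.random(3),
                     rng.random(4), rng.random(3), beta=0.5)
```

Because the update uses only activities on either side of each synapse, it is local in the sense the abstract emphasizes; in the paper's framework it is the actor network that is trained this way, with backpropagation retained elsewhere (hence the "combining" in the title).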
Affiliation(s)
- Yoshimasa Kubo
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
- Eric Chalmers
- Department of Mathematics and Computing, Mount Royal University, Calgary, AB, Canada
- Artur Luczak
- Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB, Canada
3. Duman I, Ehmann IS, Gonsalves AR, Gültekin Z, Van den Berckt J, van Leeuwen C. The No-Report Paradigm: A Revolution in Consciousness Research? Front Hum Neurosci 2022; 16:861517. PMID: 35634201; PMCID: PMC9130851; DOI: 10.3389/fnhum.2022.861517.
Abstract
In the cognitive neuroscience of consciousness, participants have commonly been instructed to report their conscious content. This, it was claimed, risks confounding the neural correlates of consciousness (NCC) with their preconditions, i.e., allocation of attention, and consequences, i.e., metacognitive reflection. Recently, the field has therefore been shifting towards no-report paradigms. No-report paradigms draw their validity from a direct comparison with report conditions. We analyze several examples of such comparisons and identify alternative interpretations of their results and/or methodological issues in all cases. These go beyond the previous criticism that merely removing the report is insufficient, because it does not prevent metacognitive reflection. The conscious mind is fickle: without much to do, it will turn inward and switch, or timeshare, between the stimuli on display and daydreaming or mind-wandering. Thus, rather than the NCC, no-report paradigms might be addressing the neural correlates of conscious disengagement. This observation reaffirms the conclusion that no-report paradigms are no less problematic than report paradigms.
Affiliation(s)
- Irem Duman
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Isabell Sophia Ehmann
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Alicia Ronnie Gonsalves
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Zeynep Gültekin
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Jonathan Van den Berckt
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Cees van Leeuwen
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Cognitive and Developmental Psychology, Faculty of Social Sciences, TU Kaiserslautern, Kaiserslautern, Germany
- Correspondence: Cees van Leeuwen
4.
Abstract
Understanding how the brain learns may lead to machines with human-like intellectual capacities. It was previously proposed that the brain may operate on the principle of predictive coding. However, it is still not well understood how a predictive system could be implemented in the brain. Here we demonstrate that the ability of a single neuron to predict its future activity may provide an effective learning mechanism. Interestingly, this predictive learning rule can be derived from a metabolic principle, whereby neurons need to minimize their own synaptic activity (cost) while maximizing their impact on local blood supply by recruiting other neurons. We show how this mathematically derived learning rule can provide a theoretical connection between diverse types of brain-inspired algorithms, thus offering a step toward the development of a general theory of neuronal learning. We tested this predictive learning rule in neural network simulations and in data recorded from awake animals. Our results also suggest that spontaneous brain activity provides "training data" for neurons to learn to predict cortical dynamics. Thus, the ability of a single neuron to minimize surprise, i.e., the difference between actual and expected activity, could be an important missing element in understanding computation in the brain.
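The closing "minimize surprise" idea admits a compact delta-rule reading: a neuron forms a prediction of its own future activity from its current inputs, and each synapse changes in proportion to its input times the prediction error. The sketch below illustrates that reading only; the linear predictor, learning rate, and names are our assumptions, whereas the paper derives its actual rule from the metabolic principle described above.

```python
import numpy as np

def predictive_update(w, x_t, a_next, lr=0.05):
    """One surprise-minimizing step for a single model neuron.

    x_t    : presynaptic activity at time t
    a_next : the neuron's actual activity at time t + 1
    """
    prediction = w @ x_t            # expected future activity
    surprise = a_next - prediction  # actual minus expected activity
    w = w + lr * surprise * x_t     # local, gradient-like update
    return w, surprise

# Toy usage: the surprise shrinks as the weights learn to predict.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, 5)
w_true = rng.normal(0.0, 1.0, 5)
for _ in range(2000):
    x = rng.random(5)
    w, s = predictive_update(w, x, w_true @ x)
print(abs(s))  # small after training
```

The update is local (each weight sees only its own input and the neuron's prediction error), which is what lets the abstract cast surprise minimization as a single-neuron learning mechanism rather than a network-wide gradient computation.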