1
Fukunishi A, Kutsuzawa K, Owaki D, Hayashibe M. Synergy quality assessment of muscle modules for determining learning performance using a realistic musculoskeletal model. Front Comput Neurosci 2024; 18:1355855. PMID: 38873285; PMCID: PMC11171420; DOI: 10.3389/fncom.2024.1355855.
Abstract
How our central nervous system efficiently controls our complex musculoskeletal system is still debated. The muscle synergy hypothesis is proposed to simplify this complex system by assuming the existence of functional neural modules that coordinate several muscles. Modularity based on muscle synergies can facilitate motor learning without compromising task performance. However, the effectiveness of modularity in motor control remains debated. This ambiguity can, in part, stem from overlooking that the performance of modularity depends on the mechanical aspects of the modules of interest, such as the torque the modules exert. To address this issue, this study introduces two criteria to evaluate the quality of module sets based on commonly used performance metrics in motor learning studies: the accuracy of torque production and learning speed. One evaluates the regularity in the direction of the mechanical torque the modules exert, while the other evaluates the evenness of its magnitude. To verify our criteria, we simulated motor learning of torque production tasks in a realistic musculoskeletal system of the upper arm using feed-forward neural networks while changing the control conditions. We found that the proposed criteria successfully explain the tendency of learning performance across various control conditions. These results suggest that regularity in the direction, and evenness in the magnitude, of the mechanical torque of the utilized modules are significant factors in determining learning performance. Although the criteria were originally conceived for an error-based learning scheme, the approach of asking which set of modules is better for motor control can have significant implications for other studies of modularity in general.
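The abstract does not give the two criteria in closed form. A minimal sketch of how such criteria might be computed, assuming each module is summarized by the net joint torque vector it exerts (the resultant-length and entropy measures below are illustrative stand-ins, not the authors' definitions):

```python
import numpy as np

def direction_regularity(torques):
    """1 minus the resultant length of the modules' unit torque vectors:
    1.0 = directions evenly spread, 0.0 = all modules pull the same way.
    (Illustrative criterion, not the authors' definition.)"""
    units = torques / np.linalg.norm(torques, axis=1, keepdims=True)
    return 1.0 - np.linalg.norm(units.mean(axis=0))

def magnitude_evenness(torques):
    """Normalized entropy of torque magnitudes: 1.0 = perfectly even."""
    mags = np.linalg.norm(torques, axis=1)
    p = mags / mags.sum()
    return -(p * np.log(p)).sum() / np.log(len(p))

# Four hypothetical modules, each exerting a (shoulder, elbow) torque.
modules = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
regularity = direction_regularity(modules)   # evenly spread directions
evenness = magnitude_evenness(modules)       # equal magnitudes
```

Under these definitions the module set above scores maximally on both criteria; a set whose torques all point one way, or with one dominant module, scores lower.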
Affiliation(s)
- Akito Fukunishi
- Department of Robotics, Graduate School of Engineering, Tohoku University, Sendai, Japan
2
Bridges NR, Stickle M, Moxon KA. Transitioning from global to local computational strategies during brain-machine interface learning. Front Neurosci 2024; 18:1371107. PMID: 38707591; PMCID: PMC11066153; DOI: 10.3389/fnins.2024.1371107.
Abstract
When learning to use a brain-machine interface (BMI), the brain modulates neuronal activity patterns, exploring and exploiting the state space defined by their neural manifold. Neurons directly involved in BMI control (i.e., direct neurons) can display marked changes in their firing patterns during BMI learning. However, the extent of firing pattern changes in neurons not directly involved in BMI control (i.e., indirect neurons) remains unclear. To clarify this issue, we localized direct and indirect neurons to separate hemispheres in a task designed to bilaterally engage these hemispheres while animals learned to control the position of a platform with their neural signals. Animals that learned to control the platform and improve their performance in the task shifted from a global strategy, where both direct and indirect neurons modified their firing patterns, to a local strategy, where only direct neurons modified their firing rate, as animals became expert in the task. Animals that did not learn the BMI task did not shift from utilizing a global to a local strategy. These results provide important insights into what differentiates successful and unsuccessful BMI learning and the computational mechanisms adopted by the neurons.
Affiliation(s)
- Nathaniel R. Bridges
- Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
- Matthew Stickle
- Department of Biomedical Engineering, University of California, Davis, Davis, CA, United States
- Karen A. Moxon
- Department of Biomedical Engineering, University of California, Davis, Davis, CA, United States
3
Antonov DI, Sviatov KV, Sukhov S. Continuous learning of spiking networks trained with local rules. Neural Netw 2022; 155:512-522. PMID: 36166978; DOI: 10.1016/j.neunet.2022.09.003.
Abstract
Artificial neural networks (ANNs) experience catastrophic forgetting (CF) during sequential learning. In contrast, the brain can learn continuously without any signs of catastrophic forgetting. Spiking neural networks (SNNs) are the next generation of ANNs, with many features borrowed from biological neural networks. Thus, SNNs potentially promise better resilience to CF. In this paper, we study the susceptibility of SNNs to CF and test several biologically inspired methods for mitigating catastrophic forgetting. The SNNs are trained with biologically plausible local training rules based on spike-timing-dependent plasticity (STDP). Local training prohibits the direct use of CF prevention methods based on gradients of a global loss function. We developed and tested a method to determine the importance of synapses (weights) based on stochastic Langevin dynamics without the need for gradients. Several other methods of catastrophic forgetting prevention adapted from analog neural networks were tested as well. The experiments were performed on freely available datasets in the SpykeTorch environment.
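The exact SpykeTorch training rules are not reproduced here; as background, local pair-based STDP updates of the kind this work builds on can be sketched as follows (the amplitude and time constants are illustrative, not the paper's values):

```python
import math

# Illustrative pair-based STDP constants (not the paper's values).
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pairing: pre-before-post
    (dt > 0) potentiates, post-before-pre depresses, with an
    exponentially decaying window."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pairing: w increases
w += stdp_dw(t_pre=30.0, t_post=25.0)  # anti-causal pairing: w decreases
```

Because the update depends only on the timing of the two spikes at that synapse, no global loss gradient is available, which is what motivates the gradient-free importance estimate described in the abstract.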
Affiliation(s)
- D I Antonov
- Kotelnikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences (Ulyanovsk branch), 48/2 Goncharov Str., Ulyanovsk 432071, Russia.
- K V Sviatov
- Ulyanovsk State Technical University, 32 Severny Venets, Ulyanovsk 432027, Russia.
- S Sukhov
- Kotelnikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences (Ulyanovsk branch), 48/2 Goncharov Str., Ulyanovsk 432071, Russia.
4
Bianchi S, Muñoz-Martin I, Ielmini D. Bio-Inspired Techniques in a Fully Digital Approach for Lifelong Learning. Front Neurosci 2020; 14:379. PMID: 32425749; PMCID: PMC7203347; DOI: 10.3389/fnins.2020.00379.
Abstract
Lifelong learning has deeply underpinned the resilience of biological organisms with respect to a constantly changing environment. This flexibility has allowed the evolution of parallel, distributed systems able to merge past information with new stimuli for accurate and efficient brain computation. Nowadays, there is a strong effort to reproduce such intelligent systems in standard artificial neural networks (ANNs). However, despite some great results in specific tasks, ANNs still appear too rigid and static in real-life settings compared with biological systems. Thus, it is necessary to define a new neural paradigm capable of merging the lifelong resilience of biological organisms with the great accuracy of ANNs. Here, we present a digital implementation of a novel mixed supervised-unsupervised neural network capable of performing lifelong learning. The network uses a set of convolutional filters to extract features from the input images of the MNIST and Fashion-MNIST training datasets. This information defines an original combination of responses of both trained and non-trained classes by transfer learning. The responses are then used in subsequent unsupervised learning based on spike-timing-dependent plasticity (STDP). This procedure allows the clustering of non-trained information thanks to bio-inspired mechanisms such as neuronal redundancy and spike-frequency adaptation. We demonstrate the implementation of the neural network in a fully digital environment, namely the Xilinx Zynq-7000 System on Chip (SoC). We illustrate a user-friendly interface to test the network by choosing the number and type of non-trained classes, or by drawing a custom pattern on a tablet. Finally, we propose a comparison of this work with networks based on memristive synaptic devices capable of continual learning, highlighting the main differences and capabilities with respect to a fully digital approach.
Affiliation(s)
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
5
Raman DV, Rotondo AP, O'Leary T. Fundamental bounds on learning performance in neural circuits. Proc Natl Acad Sci U S A 2019; 116:10537-10546. PMID: 31061133; PMCID: PMC6535002; DOI: 10.1073/pnas.1813416116.
Abstract
How does the size of a neural circuit influence its learning performance? Larger brains tend to be found in species with higher cognitive function and learning ability. Intuitively, we expect the learning capacity of a neural circuit to grow with the number of neurons and synapses. We show how adding apparently redundant neurons and connections to a network can make a task more learnable. Consequently, large neural circuits can either devote connectivity to generating complex behaviors or exploit this connectivity to achieve faster and more precise learning of simpler behaviors. However, we show that in a biologically relevant setting where synapses introduce an unavoidable amount of noise, there is an optimal size of network for a given task. Above the optimal network size, the addition of neurons and synaptic connections starts to impede learning performance. This suggests that the size of brain circuits may be constrained by the need to learn efficiently with unreliable synapses and provides a hypothesis for why some neurological learning deficits are associated with hyperconnectivity. Our analysis is independent of specific learning rules and uncovers fundamental relationships between learning rate, task performance, network size, and intrinsic noise in neural circuits.
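The paper's derivation is not reproduced in the abstract; the qualitative tradeoff it describes (approximation improves with size, synaptic noise accumulates with size, so an interior optimum exists) can be illustrated with a toy error model whose functional forms are assumptions, not the paper's results:

```python
import numpy as np

def steady_state_error(n_synapses, noise_var=1e-3):
    """Toy error model: approximation error falls as 1/n while
    accumulated synaptic noise grows linearly with n.
    (Assumed functional forms, not the paper's derivation.)"""
    return 1.0 / n_synapses + noise_var * n_synapses

sizes = np.arange(1, 200)
errors = np.array([steady_state_error(n) for n in sizes])
best = sizes[errors.argmin()]   # interior optimum: neither extreme
```

With any decreasing approximation term and increasing noise term, the minimum falls at an intermediate network size, mirroring the claim that adding synapses eventually impedes learning once synaptic noise dominates.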
Affiliation(s)
- Dhruva Venkita Raman
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
- Adriana Perez Rotondo
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
6
Furuki D, Takiyama K. Decomposing motion that changes over time into task-relevant and task-irrelevant components in a data-driven manner: application to motor adaptation in whole-body movements. Sci Rep 2019; 9:7246. PMID: 31076575; PMCID: PMC6510796; DOI: 10.1038/s41598-019-43558-z.
Abstract
Motor variability is inevitable in human body movements and has been addressed from various perspectives in motor neuroscience and biomechanics: it may originate from variability in neural activities, or it may reflect a large number of degrees of freedom inherent in our body movements. How to evaluate motor variability is thus a fundamental question. Previous methods have quantified (at least) two striking features of motor variability: smaller variability in the task-relevant dimension than in the task-irrelevant dimension and a low-dimensional structure often referred to as synergy or principal components. However, the previous methods cannot be used to quantify these features simultaneously and are applicable only under certain limited conditions (e.g., one method does not consider how the motion changes over time, and another does not consider how each motion is relevant to performance). Here, we propose a flexible and straightforward machine learning technique for quantifying task-relevant variability, task-irrelevant variability, and the relevance of each principal component to task performance while considering how the motion changes over time and its relevance to task performance in a data-driven manner. Our method reveals the following novel property: in motor adaptation, the modulation of these different aspects of motor variability differs depending on the perturbation schedule.
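One hedged sketch of such a data-driven decomposition, assuming a linear motion-to-performance map fitted by least squares (the paper's method is more general; the variable names and data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials x 5 motion features, and a scalar
# performance value that depends on only two of the features.
motion = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.0])
performance = motion @ w_true + 0.1 * rng.standard_normal(200)

# Fit a linear motion-to-performance map by least squares, then split
# each trial's motion into a task-relevant component (projection onto
# the fitted direction) and a task-irrelevant remainder.
w, *_ = np.linalg.lstsq(motion, performance, rcond=None)
u = w / np.linalg.norm(w)
task_relevant = np.outer(motion @ u, u)
task_irrelevant = motion - task_relevant

# By construction the task-irrelevant part carries no fitted
# performance signal.
residual_effect = task_irrelevant @ w
```

Variability can then be quantified separately in the two components, e.g., by comparing their trial-to-trial variances.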
Affiliation(s)
- Daisuke Furuki
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Koganei-shi, Tokyo, 184-8588, Japan
- Ken Takiyama
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Koganei-shi, Tokyo, 184-8588, Japan.
7
Hagio S, Kouzaki M. Modularity speeds up motor learning by overcoming mechanical bias in musculoskeletal geometry. J R Soc Interface 2018; 15:rsif.2018.0249. PMID: 30305418; DOI: 10.1098/rsif.2018.0249.
Abstract
We can easily learn and perform a variety of movements that fundamentally require complex neuromuscular control. Many empirical findings have demonstrated that a wide range of complex muscle activation patterns can be well captured by the combination of a few functional modules, the so-called muscle synergies. Modularity represented by muscle synergies would simplify the control of a redundant neuromuscular system. However, how the reduction of neuromuscular redundancy through a modular controller contributes to sensorimotor learning remains unclear. To clarify these roles, we constructed a simple neural network model of the motor control system that included three intermediate layers representing neurons in the primary motor cortex, spinal interneurons organized into modules, and motoneurons controlling upper-arm muscles. After a model learning period to generate the desired shoulder and/or elbow joint torques, we compared the adaptation to a novel rotational perturbation between modular and non-modular models. A series of simulations demonstrated that the modules reduced the effect of the bias in the distribution of muscle pulling directions, as well as in the distribution of torques associated with individual cortical neurons, which led to more rapid adaptation to multi-directional force generation. These results suggest that modularity is crucial not only for reducing musculoskeletal redundancy but also for overcoming mechanical bias due to the musculoskeletal geometry, allowing for faster adaptation to certain external environments.
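Muscle synergies of this kind are commonly extracted from activation data with nonnegative matrix factorization; a self-contained sketch using the standard Lee-Seung multiplicative updates (the synthetic EMG data and synergy count are illustrative, not from this study):

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_synergies(M, n_synergies, n_iter=500):
    """Nonnegative matrix factorization M ~ W @ H via the standard
    Lee-Seung multiplicative updates. Rows of H are synergies
    (muscle weightings); columns of W are their activations."""
    W = rng.random((M.shape[0], n_synergies)) + 1e-6
    H = rng.random((n_synergies, M.shape[1])) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + 1e-12)
        W *= (M @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic "EMG": 100 samples of 6 muscles driven by 2 synergies.
H_true = np.array([[1.0, 0.8, 0.2, 0.0, 0.0, 0.1],
                   [0.0, 0.1, 0.7, 1.0, 0.9, 0.2]])
W_true = rng.random((100, 2))
emg = W_true @ H_true

W, H = extract_synergies(emg, n_synergies=2)
reconstruction_error = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
```

The multiplicative updates keep W and H nonnegative throughout, which is what makes the factors interpretable as additive muscle weightings and activations.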
Affiliation(s)
- Shota Hagio
- Graduate School of Education, The University of Tokyo, Tokyo, Japan; Research Fellow of the Japan Society for the Promotion of Science, Tokyo, Japan
- Motoki Kouzaki
- Laboratory of Neurophysiology, Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
8
Katlowitz KA, Picardo MA, Long MA. Stable Sequential Activity Underlying the Maintenance of a Precisely Executed Skilled Behavior. Neuron 2018; 98:1133-1140.e3. PMID: 29861283; DOI: 10.1016/j.neuron.2018.05.017.
Abstract
A vast array of motor skills can be maintained throughout life. Do these behaviors require stability of individual neuron tuning or can the output of a given circuit remain constant despite fluctuations in single cells? This question is difficult to address due to the variability inherent in most motor actions studied in the laboratory. A notable exception, however, is the courtship song of the adult zebra finch, which is a learned, highly precise motor act mediated by orderly dynamics within premotor neurons of the forebrain. By longitudinally tracking the activity of excitatory projection neurons during singing using two-photon calcium imaging, we find that both the number and the precise timing of song-related spiking events remain nearly identical over the span of several weeks to months. These findings demonstrate that learned, complex behaviors can be stabilized by maintaining precise and invariant tuning at the level of single neurons.
Affiliation(s)
- Kalman A Katlowitz
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Michel A Picardo
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA.
9
Conde-Ocazionez S, Altavini TS, Wunderle T, Schmidt KE. Motion contrast in primary visual cortex: a direct comparison of single neuron and population encoding. Eur J Neurosci 2017; 47:358-369. PMID: 29178660; DOI: 10.1111/ejn.13786.
Abstract
Features from outside the classical receptive field (CRF) can modulate the stimulus-driven activity of single cells in the primary visual cortex. This modulation, mediated by horizontal and feedback networks, has been extensively described as a variation in firing rate and is considered the basis of processing features such as motion contrast. However, surround influences have also been identified in pairwise spiking or local field coherence. Yet, evidence about the co-existence and integration of different neural signatures remains elusive. To compare multiple signatures, we recorded spiking and LFP activity evoked by stimuli exhibiting a motion contrast in the CRF surround in anesthetized cat primary visual cortex. We chose natural-like scenes over gratings to avoid the predominance of simple visual features, which could be easily represented by a rate code. We analyzed firing rates and phase-locking to the low-gamma frequency band in single cells and neuronal assemblies. Motion contrast was reflected in all measures, but in semi-independent populations. Whereas activation of assemblies accompanied single-neuron rates, their phase relations were modulated differently. Interestingly, only assembly phase relations mirrored the direction of movement of the surround and were selectively affected by thermal deactivation of visual interhemispheric connections. We argue that motion contrast can be reflected in complementary and superimposed neuronal signatures that can represent different surround features in independent neuronal populations.
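Phase-locking of spikes to an LFP band is typically quantified by the phase-locking value (PLV), the length of the mean unit phase vector at spike times; a minimal sketch (the spike phases below are made up for illustration):

```python
import numpy as np

def phase_locking_value(spike_phases):
    """PLV: length of the mean unit phase vector at spike times.
    1 = spikes always arrive at the same LFP phase, 0 = uniform."""
    return np.abs(np.exp(1j * np.asarray(spike_phases)).mean())

# Spikes tightly clustered near phase 0 vs. uniformly spread spikes.
locked = phase_locking_value([0.05, -0.1, 0.0, 0.1, -0.05])
unlocked = phase_locking_value(np.linspace(0.0, 2.0 * np.pi, 100,
                                           endpoint=False))
```

In practice the LFP phase at each spike time would come from the band-pass filtered signal (e.g., via a Hilbert transform), not from hand-picked values as here.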
Affiliation(s)
- Sergio Conde-Ocazionez
- Brain Institute, Federal University of Rio Grande do Norte (UFRN), Av. Nascimento de Castro 2155, 59056-450, Natal, RN, Brazil; Edson Queiroz Foundation, University of Fortaleza (UNIFOR), Fortaleza, Brazil
- Tiago S Altavini
- Brain Institute, Federal University of Rio Grande do Norte (UFRN), Av. Nascimento de Castro 2155, 59056-450, Natal, RN, Brazil; Laboratory of Neurobiology, The Rockefeller University, New York, NY, USA
- Thomas Wunderle
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Kerstin E Schmidt
- Brain Institute, Federal University of Rio Grande do Norte (UFRN), Av. Nascimento de Castro 2155, 59056-450, Natal, RN, Brazil
10
Takiyama K. Sensorimotor transformation via sparse coding. Sci Rep 2015; 5:9648. PMID: 25923980; PMCID: PMC4413851; DOI: 10.1038/srep09648.
Abstract
Sensorimotor transformation is indispensable to the accurate motion of the human body in daily life. For instance, when we grasp an object, the distance from our hands to the object needs to be calculated by integrating multisensory inputs, and our motor system needs to appropriately activate the arm and hand muscles to minimize the distance. The sensorimotor transformation is implemented in our neural systems, and recent advances in measurement techniques have revealed an important property of neural systems: a small percentage of neurons exhibits extensive activity while a large percentage shows little activity, i.e., sparse coding. However, we do not yet know the functional role of sparse coding in sensorimotor transformation. In this paper, I show that sparse coding enables complete and robust learning in sensorimotor transformation. In general, if a neural network is trained to maximize performance on training data, the network shows poor performance on test data. Nevertheless, sparse coding reconciles the network's performance on training and test data. Furthermore, sparse coding can reproduce reported neural activities. Thus, I conclude that sparse coding is a necessary and biologically plausible factor in sensorimotor transformation.
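As a generic illustration of sparse coding (not the network studied in the paper), a sparse coefficient vector for a signal over an overcomplete dictionary can be found with ISTA, i.e., gradient steps followed by soft thresholding; the dictionary and signal here are synthetic:

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """ISTA: gradient step on ||x - D a||^2 / 2 followed by soft
    thresholding, yielding a sparse coefficient vector a."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(n_iter):
        a = a - step * (D.T @ (D @ a - x))
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.5, -2.0]         # only two active coefficients
x = D @ a_true

a = sparse_code(D, x)
active = int(np.count_nonzero(np.abs(a) > 1e-3))
```

The recovered code is sparse (few active coefficients out of 50), which is the regime the abstract argues keeps training and test performance compatible.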
11
Takiyama K. Context-dependent memory decay is evidence of effort minimization in motor learning: a computational study. Front Comput Neurosci 2015; 9:4. PMID: 25698963; PMCID: PMC4316784; DOI: 10.3389/fncom.2015.00004.
Abstract
Recent theoretical models suggest that motor learning includes at least two processes: error minimization and memory decay. While learning a novel movement, a motor memory of the movement is gradually formed to minimize the movement error between the desired and actual movements in each training trial, but the memory is slightly forgotten in each trial. The learning effects of error minimization trained with a certain movement are partially available in other, non-trained movements, and this transfer of the learning effect can be reproduced by certain theoretical frameworks. Although most theoretical frameworks have assumed that a motor memory trained with a certain movement decays at the same speed whether the trained or a non-trained movement is performed, a recent study reported that the motor memory decays faster during the trained movement than during non-trained movements; i.e., the decay rate of motor memory is movement or context dependent. Although motor learning has been successfully modeled within an optimization framework, e.g., movement error minimization, the type of optimization that can lead to context-dependent memory decay is unclear. Thus, context-dependent memory decay raises the question of what is optimized in motor learning. To reproduce context-dependent memory decay, I extend a motor primitive framework. Specifically, I introduce motor effort optimization into the framework, because some previous studies have reported the existence of effort optimization in motor learning processes and no conventional motor primitive model has yet considered this optimization. Here, I analytically and numerically reveal that context-dependent decay is a result of motor effort optimization. My analyses suggest that context-dependent decay is not merely memory decay but is evidence of motor effort optimization in motor learning.
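A toy version of the argument, assuming Gaussian motor primitives and an effort-cost gradient gated by primitive activation in the current context (all constants are illustrative, not fitted): memory then decays faster when decay trials are performed at the trained direction than at a far-transfer direction.

```python
import numpy as np

# Gaussian motor primitives tuned to movement directions (degrees).
# All constants are illustrative, not fitted to data.
centers = np.arange(-180.0, 180.0, 10.0)
sigma, eta, lam = 20.0, 0.2, 0.05

def activation(theta):
    """Primitive activations for a movement in direction theta."""
    return np.exp(-0.5 * ((centers - theta) / sigma) ** 2)

def output(w, theta):
    """Motor output: activation-weighted sum of primitive weights."""
    return w @ activation(theta)

# Adaptation at the trained direction (0 deg): reduce error while
# paying an effort cost whose gradient is gated by activation.
w = np.zeros_like(centers)
for _ in range(200):
    g = activation(0.0)
    error = 1.0 - output(w, 0.0)
    w += eta * (error * g - lam * w * g)

def remaining_memory(w0, decay_theta, n_trials=100):
    """Memory at the trained direction after decay trials performed
    at decay_theta, driven by gated effort minimization alone."""
    w_d = w0.copy()
    for _ in range(n_trials):
        w_d -= eta * lam * w_d * activation(decay_theta)
    return output(w_d, 0.0)

trained_decay = remaining_memory(w, 0.0)     # decay trials in trained context
transfer_decay = remaining_memory(w, 120.0)  # decay trials far from it
```

Because the effort-gradient step only touches primitives active in the current movement, decay trials at 0 degrees erode the adapted weights while trials at 120 degrees leave them largely intact, reproducing the context dependence qualitatively.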
Affiliation(s)
- Ken Takiyama
- Brain Science Institute, Tamagawa University, Tokyo, Japan
12
Role of motor cortex NMDA receptors in learning-dependent synaptic plasticity of behaving mice. Nat Commun 2014; 4:2258. PMID: 23978820; PMCID: PMC3759079; DOI: 10.1038/ncomms3258.
Abstract
The primary motor cortex has an important role in the precise execution of learned motor responses. During motor learning, synaptic efficacy between sensory and primary motor cortical neurons is enhanced, possibly involving long-term potentiation and N-methyl-D-aspartate (NMDA)-specific glutamate receptor function. To investigate whether the NMDA receptor in the primary motor cortex can act as a coincidence detector for activity-dependent changes in synaptic strength and associative learning, here we generate mice with deletion of the Grin1 gene, encoding the essential NMDA receptor subunit 1 (GluN1), specifically in the primary motor cortex. The loss of NMDA receptor function impairs primary motor cortex long-term potentiation in vivo. Importantly, it impairs the synaptic efficacy between the primary somatosensory and primary motor cortices and significantly reduces classically conditioned eyeblink responses. Furthermore, compared with wild-type littermates, mice lacking primary motor cortex NMDA receptors show slower learning in Skinner-box tasks. Thus, primary motor cortex NMDA receptors are necessary for activity-dependent synaptic strengthening and associative learning. Motor cortex NMDA receptors have a key role in the acquisition of associative memories. Hasan et al. generate mice lacking NMDA receptor activity in the motor cortex and find that this impairs LTP, strengthening of synapses between somatosensory and motor cortices, and associative learning.