1
Gu S, Mattar MG, Tang H, Pan G. Emergence and reconfiguration of modular structure for artificial neural networks during continual familiarity detection. Sci Adv 2024; 10:eadm8430. [PMID: 39058783] [PMCID: PMC11277393] [DOI: 10.1126/sciadv.adm8430] [Received: 11/08/2023] [Accepted: 06/21/2024]
Abstract
Advances in artificial intelligence enable neural networks to learn a wide variety of tasks, yet our understanding of the learning dynamics of these networks remains limited. Here, we study the temporal dynamics during learning of Hebbian feedforward neural networks in tasks of continual familiarity detection. Drawing inspiration from network neuroscience, we examine the network's dynamic reconfiguration, focusing on how network modules evolve throughout learning. Through a comprehensive assessment involving metrics like network accuracy, modular flexibility, and distribution entropy across diverse learning modes, our approach reveals various previously unknown patterns of network reconfiguration. We find that the emergence of network modularity is a salient predictor of performance and that modularization strengthens with increasing flexibility throughout learning. These insights not only elucidate the nuanced interplay of network modularity, accuracy, and learning dynamics but also bridge our understanding of learning in artificial and biological agents.
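The Hebbian familiarity mechanism summarized above can be illustrated with a toy model. This is a minimal sketch, not the paper's architecture: patterns are stored by an outer-product Hebbian rule, and a simple energy-style readout separates previously seen patterns from novel ones. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # pattern dimension

W = np.zeros((n, n))  # Hebbian weight matrix, built up as patterns are seen

def hebbian_update(W, x, lr=1.0):
    # Outer-product Hebbian rule: strengthen weights between co-active units
    return W + lr * np.outer(x, x) / n

def familiarity(W, x):
    # Energy-style readout: stored patterns yield much larger scores
    return float(x @ W @ x)

seen = [rng.choice([-1, 1], size=n) for _ in range(5)]
for x in seen:
    W = hebbian_update(W, x)

novel = rng.choice([-1, 1], size=n)
print(familiarity(W, seen[0]), familiarity(W, novel))
```

With this readout, thresholding the score gives a binary familiar/novel decision.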
Affiliation(s)
- Shi Gu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, China
- Marcelo G. Mattar
- Department of Psychology, New York University, New York, NY 10003, USA
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
- Gang Pan
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
2
Kanemura I, Kitano K. Emergence of input selective recurrent dynamics via information transfer maximization. Sci Rep 2024; 14:13631. [PMID: 38871759] [DOI: 10.1038/s41598-024-64417-6] [Received: 03/11/2024] [Accepted: 06/09/2024]
Abstract
Network structures of the brain have wiring patterns specialized for specific functions. These patterns are partially determined genetically or evolutionarily based on the type of task or stimulus. These wiring patterns are important in information processing; however, their organizational principles are not fully understood. This study frames the maximization of information transmission alongside the reduction of maintenance costs as a multi-objective optimization challenge, utilizing information theory and evolutionary computing algorithms with an emphasis on the visual system. The goal is to understand the underlying principles of circuit formation by exploring the patterns of wiring and information processing. The study demonstrates that efficient information transmission necessitates sparse circuits with internal modular structures featuring distinct wiring patterns. Significant trade-offs underscore the necessity of balance in wiring pattern development. The dynamics of effective circuits exhibit moderate flexibility in response to stimuli, in line with observations from prior visual system studies. Maximizing information transfer may allow for the self-organization of information processing functions similar to actual biological circuits, without being limited by modality. This study offers insights into neuroscience and the potential to improve reservoir computing performance.
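The multi-objective framing above (maximize information transmission, minimize maintenance cost) can be sketched in a few lines. The two objectives below are crude stand-ins of my own choosing, not the paper's information-theoretic measures; the point is only how a Pareto front over candidate wirings is extracted.

```python
import numpy as np

rng = np.random.default_rng(1)

def objectives(adj):
    # Illustrative proxies for the two competing objectives:
    # "transmission" (2-step path count) to maximize, wiring cost to minimize
    transmission = int((adj @ adj).sum())
    cost = int(adj.sum())
    return transmission, cost

def pareto_front(points):
    # Indices of candidates not dominated by any other candidate
    front = []
    for i, (t_i, c_i) in enumerate(points):
        dominated = any(t_j >= t_i and c_j <= c_i and (t_j, c_j) != (t_i, c_i)
                        for j, (t_j, c_j) in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

population = [rng.integers(0, 2, size=(8, 8)) for _ in range(50)]
points = [objectives(adj) for adj in population]
print(len(pareto_front(points)))
```

An evolutionary algorithm such as the one used in the study would iterate selection and mutation over this front rather than score a single random population.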
Affiliation(s)
- Itsuki Kanemura
- Graduate School of Information Science and Engineering, Ritsumeikan University, 2-150, Iwakuracho, Ibaraki, Osaka, 5670871, Japan.
- Katsunori Kitano
- Department of Information Science and Engineering, Ritsumeikan University, 2-150, Iwakuracho, Ibaraki, Osaka, 5670871, Japan
3
Miras K. Exploring the costs of phenotypic plasticity for evolvable digital organisms. Sci Rep 2024; 14:108. [PMID: 38168919] [PMCID: PMC10761833] [DOI: 10.1038/s41598-023-50683-3] [Received: 05/18/2023] [Accepted: 12/22/2023]
Abstract
Phenotypic plasticity is usually defined as a property of individual genotypes to produce different phenotypes when exposed to different environmental conditions. While the benefits of plasticity for adaptation are well established, the costs associated with plasticity remain somewhat obscure. Understanding both why and how these costs arise could help us explain and predict the behavior of living creatures as well as allow the design of more adaptable robotic systems. One of the challenges of conducting such investigations concerns the difficulty of isolating the effects of different types of costs and the lack of control over environmental conditions. The present study addresses these challenges by using virtual worlds (software) to investigate the environmentally regulated phenotypic plasticity of digital organisms. The experimental setup guarantees that potential genetic costs of plasticity are isolated from other plasticity-related costs. Multiple populations of organisms endowed with and without phenotypic plasticity in either the body or the brain are evolved in simulation, and organisms must cope with different environmental conditions. The traits and fitness of the emergent organisms are compared, demonstrating cases in which plasticity is beneficial and cases in which it is neutral. The hypothesis put forward here is that the potential benefits of plasticity might be undermined by the genetic costs related to plasticity itself. The results suggest that this hypothesis is true, while further research is needed to guarantee that the observed effects unequivocally derive from genetic costs and not from some other (unforeseen) mechanism related to plasticity.
Affiliation(s)
- Karine Miras
- Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
4
Kozielska M, Weissing FJ. A neural network model for the evolution of learning in changing environments. PLoS Comput Biol 2024; 20:e1011840. [PMID: 38289971] [PMCID: PMC10857588] [DOI: 10.1371/journal.pcbi.1011840] [Received: 08/24/2023] [Revised: 02/09/2024] [Accepted: 01/18/2024]
Abstract
Learning from past experience is an important adaptation, and theoretical models may help us understand its evolution. Many existing models study simple phenotypes and do not consider the mechanisms underlying learning, while the more complex neural network models often make biologically unrealistic assumptions and rarely consider evolutionary questions. Here, we present a novel way of modelling learning using small neural networks and a simple, biology-inspired learning algorithm. Learning affects only part of the network, and it is governed by the difference between expectations and reality. We use this model to study the evolution of learning under various environmental conditions and different scenarios for the trade-off between exploration (learning) and exploitation (foraging). Efficient learning readily evolves in our individual-based simulations. However, in line with previous studies, the evolution of learning is less likely in relatively constant environments, where genetic adaptation alone can lead to efficient foraging, or in short-lived organisms that cannot afford to spend much of their lifetime on exploration. Once learning does evolve, the characteristics of the learning strategy (i.e. the duration of the learning period and the learning rate) and the average performance after learning are surprisingly little affected by the frequency and/or magnitude of environmental change. In contrast, an organism's lifespan and the distribution of resources in the environment have a clear effect on the evolved learning strategy: a shorter lifespan or a broader resource distribution leads to fewer learning episodes and larger learning rates. Interestingly, a longer learning period does not always lead to better performance, indicating that the evolved neural networks differ in the effectiveness of learning. Overall, however, we show that a biologically inspired, yet relatively simple, learning mechanism can evolve and lead to efficient adaptation in a changing environment.
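The learning rule summarized above, "governed by the difference between expectations and reality", is in the spirit of a delta rule. A minimal sketch, assuming a single scalar expectation of foraging reward and a hand-picked learning rate:

```python
def delta_rule(expectation, reality, learning_rate):
    # Move the expectation toward reality by a fraction of the prediction error
    return expectation + learning_rate * (reality - expectation)

expectation = 0.0  # naive starting expectation of foraging reward
for reality in [1.0, 1.0, 1.0, 0.0, 1.0]:  # observed outcomes over episodes
    expectation = delta_rule(expectation, reality, learning_rate=0.5)
print(expectation)
```

In the paper's model such an error-driven update would modify only the plastic part of the network; here it acts on one scalar for clarity.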
Affiliation(s)
- Magdalena Kozielska
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Groningen, The Netherlands
- Franz J. Weissing
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Groningen, The Netherlands
5
Liu Z, Gan E, Tegmark M. Seeing Is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability. Entropy (Basel) 2023; 26:41. [PMID: 38248167] [PMCID: PMC10814460] [DOI: 10.3390/e26010041] [Received: 11/02/2023] [Revised: 12/21/2023] [Accepted: 12/27/2023]
Abstract
We introduce Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. Inspired by brains, BIMT embeds neurons in a geometric space and augments the loss function with a cost proportional to the length of each neuron connection. This is inspired by the idea of minimum connection cost in evolutionary biology, but we are the first to combine this idea with gradient-descent training of neural networks for interpretability. We demonstrate that BIMT discovers useful modular neural networks for many simple tasks, revealing compositional structures in symbolic formulas, interpretable decision boundaries and features for classification, and mathematical structure in algorithmic datasets. Qualitatively, BIMT-trained networks have modules readily identifiable by the naked eye, whereas regularly trained networks appear much more complicated. Quantitatively, we use Newman's method to compute the modularity of network graphs; BIMT achieves the highest modularity for all our test problems. A promising and ambitious future direction is to apply the proposed method to understand large models for vision, language, and science.
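The core of BIMT as described above is a loss penalty proportional to connection length. A minimal sketch of that penalty, assuming a one-dimensional neuron embedding and an illustrative weight `lam`; the variable names are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Embed two layers' neurons on a line; connection length = coordinate gap
pre_pos = np.linspace(0.0, 1.0, 4)   # positions of 4 input neurons
post_pos = np.linspace(0.0, 1.0, 3)  # positions of 3 output neurons
dist = np.abs(post_pos[:, None] - pre_pos[None, :])  # (3, 4) wire lengths

W = rng.normal(size=(3, 4))  # layer weights

def wiring_cost(W, dist, lam=0.1):
    # Penalty proportional to |weight| times geometric connection length
    return lam * float(np.sum(np.abs(W) * dist))

def total_loss(task_loss, W, dist):
    # Task loss augmented with the wiring penalty, as in the BIMT idea
    return task_loss + wiring_cost(W, dist)

print(total_loss(0.5, W, dist))
```

Minimizing this total loss pushes long connections toward zero weight, so surviving structure tends to be spatially local, which is what makes the trained modules visible.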
Affiliation(s)
- Ziming Liu
- Institute for Artificial Intelligence and Fundamental Interactions, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; (E.G.); (M.T.)
6
Voina D, Shea-Brown E, Mihalas S. A biologically inspired architecture with switching units can learn to generalize across backgrounds. Neural Netw 2023; 168:615-630. [PMID: 37839332] [PMCID: PMC10843013] [DOI: 10.1016/j.neunet.2023.09.014] [Received: 07/06/2022] [Revised: 08/24/2023] [Accepted: 09/07/2023]
Abstract
Humans and other animals navigate different environments effortlessly, their brains rapidly and accurately generalizing across contexts. Despite recent progress in deep learning, this flexibility remains a challenge for many artificial systems. Here, we show how a bio-inspired network motif can explicitly address this issue. We do this using a dataset of MNIST digits of varying transparency, set on one of two backgrounds of different statistics that define two contexts: pixel-wise noise or a more naturalistic background from the CIFAR-10 dataset. After learning digit classification when both contexts are shown sequentially, we find that both shallow and deep networks have sharply decreased performance when returning to the first background, an instance of the catastrophic forgetting phenomenon known from continual learning. To overcome this, we propose the bottleneck-switching network, or switching network for short. This is a bio-inspired architecture analogous to a well-studied network motif in the visual cortex, with additional "switching" units that are activated in the presence of a new background, assuming a priori a contextual signal to turn these units on or off. Intriguingly, only a few of these switching units are sufficient to enable the network to learn the new context without catastrophic forgetting, through inhibition of redundant background features. Further, the bottleneck-switching network can generalize to novel contexts similar to contexts it has learned. Importantly, we find that, again as in the underlying biological network motif, recurrently connecting the switching units to network layers is advantageous for context generalization.
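The switching-unit idea above can be caricatured as context-dependent gating. This sketch assumes a binary context flag and a hand-chosen gate vector; it is not the bottleneck-switching architecture itself, only the inhibition-of-redundant-features principle:

```python
import numpy as np

def forward(features, context_is_new, W_out, switch_gate):
    # When the contextual signal flags a new background, switching units
    # inhibit (zero out) feature channels tied to the old background
    gate = switch_gate if context_is_new else np.ones_like(switch_gate)
    return W_out @ (features * gate)

rng = np.random.default_rng(0)
features = rng.normal(size=8)
W_out = rng.normal(size=(2, 8))
switch_gate = np.array([1, 1, 1, 1, 0, 0, 1, 1], dtype=float)  # suppress ch. 4-5

out_old = forward(features, False, W_out, switch_gate)  # old context: untouched
out_new = forward(features, True, W_out, switch_gate)   # new context: gated
print(out_old, out_new)
```

Because the old-context pathway is untouched when the gate is off, learning the gated pathway cannot overwrite it, which is the intuition behind avoiding catastrophic forgetting here.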
Affiliation(s)
- Doris Voina
- Department of Applied Mathematics, Computational Neuroscience Center, University of Washington, Seattle, WA 98195, USA.
- Eric Shea-Brown
- Department of Applied Mathematics, Computational Neuroscience Center, University of Washington, Seattle, WA 98195, USA; Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA 98109, USA
- Stefan Mihalas
- Department of Applied Mathematics, Computational Neuroscience Center, University of Washington, Seattle, WA 98195, USA; Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA 98109, USA
7
Jedlicka P, Tomko M, Robins A, Abraham WC. Contributions by metaplasticity to solving the Catastrophic Forgetting Problem: (Trends in Neurosciences, 45, 656-666, 2022). Trends Neurosci 2023; 46:893-894. [PMID: 37599184] [DOI: 10.1016/j.tins.2023.07.008]
8
Stricker JL, Corriveau-Lecavalier N, Wiepert DA, Botha H, Jones DT, Stricker NH. Neural network process simulations support a distributed memory system and aid design of a novel computer adaptive digital memory test for preclinical and prodromal Alzheimer's disease. Neuropsychology 2023; 37:698-715. [PMID: 36037486] [PMCID: PMC9971333] [DOI: 10.1037/neu0000847]
Abstract
Objective: Growing evidence supports the importance of learning as a central deficit in preclinical/prodromal Alzheimer's disease. The aims of this study were to conduct a series of neural network simulations to develop a functional understanding of a distributed, nonmodular memory system that can learn efficiently without interference. This understanding is applied to the development of a novel digital memory test.
Method: Simulations using traditional feedforward neural network architectures to learn simple logic problems are presented. The simulations demonstrate three limitations: (a) inefficiency, (b) an inability to learn problems consistently, and (c) catastrophic interference when given multiple problems. A new mirrored cascaded architecture is introduced to address these limitations, with support provided by a series of simulations.
Results: The mirrored cascaded architecture demonstrates efficient and consistent learning relative to feedforward networks but also suffers from catastrophic interference. Adding context values, which provide the capability of distinguishing features as part of learning, eliminates the problem of interference in the mirrored cascaded, but not the feedforward, architectures.
Conclusions: A mirrored cascaded architecture addresses the limitations of traditional feedforward neural networks, provides support for a distributed memory system, and emphasizes the importance of context to avoid interference. These process models contributed to the design of a digital computer-adaptive word list learning test that places maximum stress on the capability to distinguish specific episodes of learning. Process simulations provide a useful method of testing models of brain function and contribute to new approaches to neuropsychological assessment. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
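The role of context values in eliminating interference, as reported in the Results, can be illustrated with ordinary least squares rather than the mirrored cascaded architecture itself: two problems that assign conflicting targets to the same inputs cannot be fit by one map, but become exactly separable once a context feature tags each episode. All names below are illustrative.

```python
import numpy as np

# Two "problems" give conflicting targets for the same inputs
X = np.array([[0.0], [1.0]])
y1 = np.array([0.0, 1.0])  # problem 1: identity
y2 = np.array([1.0, 0.0])  # problem 2: negation

X_all = np.vstack([X, X])
y_all = np.concatenate([y1, y2])

def fit_sse(design, y):
    # Least-squares fit; return the sum of squared errors of the best map
    w, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.sum((design @ w - y) ** 2))

# Without a context feature, one map must serve both problems: interference
sse_plain = fit_sse(np.hstack([X_all, np.ones((4, 1))]), y_all)

# A context value tagging each episode makes both problems separable
ctx = np.array([[0.0], [0.0], [1.0], [1.0]])
design_ctx = np.hstack([X_all, ctx, X_all * ctx, np.ones((4, 1))])
sse_ctx = fit_sse(design_ctx, y_all)
print(sse_plain, sse_ctx)
```

The context-augmented design fits both problems with zero error, while the plain design cannot, which is the same qualitative effect the simulations report for the mirrored cascaded networks.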
Affiliation(s)
- John L. Stricker
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA
- Department of Information Technology, Mayo Clinic, Rochester, Minnesota, USA
- Hugo Botha
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
- David T. Jones
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Nikki H. Stricker
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA
9
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083] [PMCID: PMC10336230] [DOI: 10.3389/fncom.2023.1092185] [Received: 11/07/2022] [Accepted: 06/12/2023]
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map that helps navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
10
Emerging Modularity During the Evolution of Neural Networks. Journal of Artificial Intelligence and Soft Computing Research 2023. [DOI: 10.2478/jaiscr-2023-0010]
Abstract
Modularity is a feature of most small-, medium-, and large-scale living organisms and has emerged over the course of evolution. Many artificial systems are also modular; in their case, however, modularity is most frequently a consequence of a handmade design process. Modular systems that emerge automatically, as a result of a learning process, are very rare. What is more, we do not know the mechanisms that give rise to modularity. The main goal of the paper is to continue the work of other researchers on the origins of modularity, which is a form of optimal organization of matter, and on the mechanisms that led to the spontaneous formation of modular living forms in the process of evolution in response to limited resources and environmental variability. The paper focuses on artificial neural networks and proposes a number of mechanisms operating at the genetic level, both those borrowed from the natural world and those designed by hand, whose use may lead to network modularity and, hopefully, to an increase in network effectiveness. In addition, the influence of external factors on the shape of the networks, such as the variability of tasks and the conditions in which these tasks are performed, is also analyzed. The analysis is performed using the Hill Climb Assembler Encoding constructive neuro-evolutionary algorithm. The algorithm was extended with various module-oriented mechanisms and tested under various conditions. The aim of the tests was to investigate how individual mechanisms involved in the evolutionary process, and factors external to this process, affect the modularity and efficiency of neural networks.
11
Hintze A, Adami C. Detecting Information Relays in Deep Neural Networks. Entropy (Basel) 2023; 25:401. [PMID: 36981289] [PMCID: PMC10047156] [DOI: 10.3390/e25030401] [Received: 01/03/2023] [Revised: 02/20/2023] [Accepted: 02/20/2023]
Abstract
Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive science and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function could help us address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in these networks by making such black boxes more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information IR. The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to identify computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
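The relay information described above is defined in the paper in terms of information relayed from inputs to outputs; a simplified proxy is the mutual information between a candidate module's activity and the network output. A sketch with a plug-in estimator over binary toy units of my own construction, not the paper's IR computation:

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    # Plug-in estimate of I(X;Y) in bits from paired discrete samples
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * np.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, size=2000)
relay_unit = inputs.copy()                 # this unit carries the signal
bystander = rng.integers(0, 2, size=2000)  # this unit carries nothing
outputs = relay_unit                       # output driven by the relay only

mi_relay = mutual_information(relay_unit.tolist(), outputs.tolist())
mi_bystander = mutual_information(bystander.tolist(), outputs.tolist())
print(mi_relay, mi_bystander)
```

A greedy search like the paper's would score candidate neuron groups with such a measure and grow the set that maximizes it.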
Affiliation(s)
- Arend Hintze
- Department of MicroData Analytics, Dalarna University, 791 31 Falun, Sweden
- BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI 48824, USA
- Christoph Adami
- BEACON Center for the Study of Evolution in Action, Michigan State University, East Lansing, MI 48824, USA
- Department of Microbiology and Molecular Genetics, Michigan State University, East Lansing, MI 48824, USA
- Program in Evolution, Ecology, and Behavior, Michigan State University, East Lansing, MI 48824, USA
12
Mei J, Meshkinnejad R, Mohsenzadeh Y. Effects of neuromodulation-inspired mechanisms on the performance of deep neural networks in a spatial learning task. iScience 2023; 26:106026. [PMID: 36818295] [PMCID: PMC9929609] [DOI: 10.1016/j.isci.2023.106026] [Received: 09/09/2022] [Revised: 11/18/2022] [Accepted: 01/19/2023]
Abstract
In recent years, the biological underpinnings of adaptive learning have been modeled, leading to faster model convergence and various behavioral benefits in tasks including spatial navigation and cue-reward association. Furthermore, studies have investigated how the neuromodulatory system, a major driver of synaptic plasticity and of state-dependent changes in brain neuronal activity, plays a role in training deep neural networks (DNNs). In this study, we extended previous studies on neuromodulation-inspired DNNs and explored the effects of neuromodulatory components on learning and single-unit activities in a spatial learning task. Under the multiscale neuromodulatory framework, plastic components, dropout probability modulation, and learning rate decay were added at the single-unit, layer, and whole-network levels of the DNN models, respectively. We observed behavioral benefits including faster learning and smaller ambulation error. We conclude that neuromodulatory components can affect learning trajectories, outcomes, and single-unit activities in a component- and hyperparameter-dependent manner.
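Two of the neuromodulatory components mentioned above, dropout-probability modulation at the layer level and learning-rate decay at the network level, can be sketched as simple schedules. The functional forms and hyperparameters below are illustrative assumptions of mine, not the paper's settings:

```python
import math

def modulated_dropout_p(base_p, performance, gain=0.5):
    # Layer level: a "neuromodulatory" signal lowers dropout as task
    # performance improves (clamped to a valid probability)
    return max(0.0, min(1.0, base_p * (1.0 - gain * performance)))

def decayed_lr(lr0, step, decay=0.01):
    # Network level: exponential learning-rate decay over training steps
    return lr0 * math.exp(-decay * step)

print(modulated_dropout_p(0.5, performance=0.8), decayed_lr(0.1, step=100))
```

In a training loop these values would be recomputed each step and fed to the dropout layers and optimizer, which is how such components steer the learning trajectory.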
Affiliation(s)
- Jie Mei
- Western Institute for Neuroscience, University of Western Ontario, London, ON N6A 5B7, Canada
- Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada
- Rouzbeh Meshkinnejad
- Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON M5G 1M1, Canada
- Yalda Mohsenzadeh
- Western Institute for Neuroscience, University of Western Ontario, London, ON N6A 5B7, Canada
- Department of Computer Science, University of Western Ontario, London, ON N6A 5B7, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON M5G 1M1, Canada
13
Godin-Dubois K, Cussat-Blanc S, Duthen Y. Explaining the Neuroevolution of Fighting Creatures Through Virtual fMRI. Artif Life 2023; 29:66-93. [PMID: 36173656] [DOI: 10.1162/artl_a_00389]
Abstract
While interest in artificial neural networks (ANNs) has been renewed by the ubiquitous use of deep learning to solve high-dimensional problems, we are still far from general artificial intelligence. In this article, we address the problem of emergent cognitive capabilities and, more crucially, of their detection, by relying on co-evolving creatures with mutable morphology and neural structure. The former is implemented via both static and mobile structures whose shapes are controlled by cubic splines. The latter uses ESHyperNEAT to discover not only appropriate combinations of connections and weights but also to extrapolate hidden neuron distribution. The creatures integrate low-level perceptions (touch/pain proprioceptors, retina-based vision, frequency-based hearing) to inform their actions. By discovering a functional mapping between individual neurons and specific stimuli, we extract a high-level module-based abstraction of a creature's brain. This drastically simplifies the discovery of relationships between naturally occurring events and their neural implementation. Applying this methodology to creatures resulting from solitary and tag-team co-evolution showed remarkable dynamics such as range-finding and structured communication. Such discovery was made possible by the abstraction provided by the modular ANN which allowed groups of neurons to be viewed as functionally enclosed entities.
Affiliation(s)
| | - Sylvain Cussat-Blanc
- CNRS
- University of Toulouse, IRIT
- Artificial and Natural Intelligence Toulouse Institute
14
Damicelli F, Hilgetag CC, Goulas A. Brain connectivity meets reservoir computing. PLoS Comput Biol 2022; 18:e1010639. [PMID: 36383563] [PMCID: PMC9710781] [DOI: 10.1371/journal.pcbi.1010639] [Received: 11/23/2021] [Revised: 11/30/2022] [Accepted: 10/05/2022]
Abstract
The connectivity of Artificial Neural Networks (ANNs) differs from that observed in Biological Neural Networks (BNNs). Can the wiring of actual brains help improve ANN architectures? Can we learn from ANNs about which network features support computation in the brain when solving a task? At a meso/macro-scale level of connectivity, ANN architectures are carefully engineered, and such design decisions have been crucial to many recent performance improvements. On the other hand, BNNs exhibit complex emergent connectivity patterns at all scales. At the individual level, BNN connectivity results from brain development and plasticity processes, while at the species level, adaptive reconfigurations during evolution also play a major role in shaping connectivity. Ubiquitous features of brain connectivity have been identified in recent years, but their role in the brain's ability to perform concrete computations remains poorly understood. Computational neuroscience studies reveal the influence of specific brain connectivity features only on abstract dynamical properties, and the implications of real brain network topologies for machine learning or cognitive tasks have barely been explored. Here we present a cross-species study with a hybrid approach integrating real brain connectomes and Bio-Echo State Networks, which we use to solve concrete memory tasks, allowing us to probe the potential computational implications of real brain connectivity patterns on task solving. We find results consistent across species and tasks, showing that biologically inspired networks perform as well as classical echo state networks, provided a minimum level of randomness and diversity of connections is allowed. We also present a framework, bio2art, to map and scale up real connectomes so that they can be integrated into recurrent ANNs. This approach also allows us to show the crucial importance of the diversity of interareal connectivity patterns, stressing the importance of the stochastic processes that determine neural network connectivity in general.
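An echo state network of the kind used above keeps the recurrent reservoir fixed and trains only a linear readout. The sketch below uses a random reservoir in place of a real connectome (the scaling, task, and delay are my own choices) to show the kind of memory task the study solves:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Fixed random reservoir; in a Bio-Echo State Network this adjacency would
# instead be mapped from a real connectome, then scaled the same way
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
W_in = 0.5 * rng.normal(size=N)

def run_reservoir(u):
    x, states = np.zeros(N), []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)  # the reservoir itself is never trained
        states.append(x)
    return np.array(states)

# Memory task: reconstruct the input from 3 steps earlier via a ridge readout
u = rng.uniform(-1, 1, size=500)
S, target = run_reservoir(u)[3:], u[:-3]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ target)
score = np.corrcoef(S @ W_out, target)[0, 1]
print(score)
```

Swapping the random `W` for a scaled connectome adjacency is the essential change that turns this into the bio-instantiated variant compared in the study.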
Affiliation(s)
- Fabrizio Damicelli
- Institute of Computational Neuroscience, University Medical Center Hamburg Eppendorf, Hamburg University, Hamburg, Germany
- Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Hamburg Eppendorf, Hamburg University, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, Massachusetts, United States of America
- Alexandros Goulas
- Institute of Computational Neuroscience, University Medical Center Hamburg Eppendorf, Hamburg University, Hamburg, Germany
15
Jedlicka P, Tomko M, Robins A, Abraham WC. Contributions by metaplasticity to solving the Catastrophic Forgetting Problem. Trends Neurosci 2022; 45:656-666. [PMID: 35798611] [DOI: 10.1016/j.tins.2022.06.002] [Received: 03/31/2022] [Revised: 06/06/2022] [Accepted: 06/09/2022]
Abstract
Catastrophic forgetting (CF) refers to the sudden and severe loss of prior information in learning systems when acquiring new information. CF has been an Achilles' heel of standard artificial neural networks (ANNs) when learning multiple tasks sequentially. The brain, by contrast, has solved this problem during evolution. Modellers now use a variety of strategies to overcome CF, many of which have parallels to cellular and circuit functions in the brain. One common strategy, based on metaplasticity phenomena, controls the future rate of change at key connections to help retain previously learned information. However, the metaplasticity properties used so far are only a subset of those existing in neurobiology. We propose that as models become more sophisticated, there could be value in drawing on a richer set of metaplasticity rules, especially when promoting continual learning in agents moving about the environment.
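The strategy of controlling the future rate of change at key connections can be sketched with a toy consolidation-style rule (an illustrative assumption, not any specific model from the review): each weight accumulates an importance value, and its effective learning rate shrinks accordingly, so heavily used connections resist change when a new task arrives.

```python
import numpy as np

rng = np.random.default_rng(1)

w = np.zeros(5)
importance = np.zeros(5)   # metaplastic state: how "consolidated" each weight is
base_lr = 0.1

def train_task(X, y, w, importance, steps=200):
    """Plain gradient descent on a linear model, with per-weight metaplasticity."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        # Metaplasticity: effective learning rate drops as importance grows.
        lr = base_lr / (1.0 + importance)
        w = w - lr * grad
        importance = importance + 0.05 * np.abs(grad)   # accumulate usage
    return w, importance

# Two sequential tasks that rely on different weights.
X1 = rng.standard_normal((100, 5)); y1 = X1 @ np.array([1.0, 0, 0, 0, 0])
X2 = rng.standard_normal((100, 5)); y2 = X2 @ np.array([0.0, 1, 0, 0, 0])

w, importance = train_task(X1, y1, w, importance)
w, importance = train_task(X2, y2, w, importance)
# Metaplasticity slows the unlearning of w[0] relative to plain SGD.
print(w.round(2))
```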
Affiliation(s)
- Peter Jedlicka: ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, Frankfurt/Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt 60438, Germany
- Matus Tomko: ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University, Giessen, Germany; Institute of Molecular Physiology and Genetics, Centre of Biosciences, Slovak Academy of Sciences, Bratislava, Slovakia
- Anthony Robins: Department of Computer Science, University of Otago, Dunedin 9016, New Zealand
- Wickliffe C Abraham: Department of Psychology, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
|
16
|
Pigozzi F, Medvet E. Evolving Modularity in Soft Robots Through an Embodied and Self-Organizing Neural Controller. ARTIFICIAL LIFE 2022; 28:322-347. [PMID: 35834484 DOI: 10.1162/artl_a_00367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Modularity is a desirable property for embodied agents, as it could foster their suitability to different domains by disassembling them into transferable modules that can be reassembled differently. We focus on a class of embodied agents known as voxel-based soft robots (VSRs). They are aggregations of elastic blocks of soft material; as such, their morphologies are intrinsically modular. Nevertheless, controllers used until now for VSRs act as abstract, disembodied processing units: disassembling such controllers for the purpose of module transferability is a challenging problem. Thus, the full potential of modularity for VSRs remains untapped. In this work, we propose a novel self-organizing, embodied neural controller for VSRs. We optimize it for a given task and morphology by means of evolutionary computation: while evolving, the controller spreads across the VSR morphology in a way that permits the emergence of modularity. We experimentally investigate whether such a controller (i) is effective and (ii) allows tuning of its degree of modularity, and with what kind of impact. To this end, we consider the task of locomotion on rugged terrains and evolve controllers for two morphologies. Our experiments confirm that our self-organizing, embodied controller is indeed effective. Moreover, by mimicking the structural modularity observed in biological neural networks, different levels of modularity can be achieved. Our findings suggest that the self-organization of modularity could be the basis for an automatic pipeline for assembling, disassembling, and reassembling embodied agents.
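The idea of a controller spread across the morphology can be sketched as follows (a toy NumPy illustration with a hypothetical 2x3 voxel grid, a shared per-voxel controller exchanging a signal with its neighbors, and a crude (1+1) evolution strategy with a dummy fitness standing in for locomotion distance; none of this reproduces the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(8)

GRID = (2, 3)                      # hypothetical voxel grid
N_VOX = GRID[0] * GRID[1]
N_PARAMS = 4 * 2                   # per-voxel map: 4 inputs -> 2 outputs

def step(params, signals, sensors):
    """One control step: each voxel maps (sensor, mean neighbor signal, bias,
    own signal) to (actuation, new outgoing signal)."""
    W = params.reshape(4, 2)
    out = np.zeros((N_VOX, 2))
    for v in range(N_VOX):
        r, c = divmod(v, GRID[1])
        nbrs = [r2 * GRID[1] + c2
                for r2, c2 in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if 0 <= r2 < GRID[0] and 0 <= c2 < GRID[1]]
        x = np.array([sensors[v], signals[nbrs].mean(), 1.0, signals[v]])
        out[v] = np.tanh(x @ W)
    return out[:, 0], out[:, 1]    # actuations, new signals

def fitness(params):
    """Dummy objective: accumulated actuation under a periodic sensor drive."""
    signals = np.zeros(N_VOX)
    total = 0.0
    for t in range(20):
        sensors = np.sin(0.3 * t + np.arange(N_VOX))
        act, signals = step(params, signals, sensors)
        total += act.sum()
    return total

# Crude (1+1) evolution strategy on the shared controller parameters.
best = rng.standard_normal(N_PARAMS) * 0.1
for _ in range(50):
    cand = best + rng.standard_normal(N_PARAMS) * 0.1
    if fitness(cand) >= fitness(best):
        best = cand
```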
Affiliation(s)
- Federico Pigozzi: University of Trieste, Department of Engineering and Architecture
- Eric Medvet: University of Trieste, Department of Engineering and Architecture
|
17
|
Evolutionary neural networks for deep learning: a review. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01578-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
18
|
Kudithipudi D, Aguilar-Simon M, Babb J, Bazhenov M, Blackiston D, Bongard J, Brna AP, Chakravarthi Raja S, Cheney N, Clune J, Daram A, Fusi S, Helfer P, Kay L, Ketz N, Kira Z, Kolouri S, Krichmar JL, Kriegman S, Levin M, Madireddy S, Manicka S, Marjaninejad A, McNaughton B, Miikkulainen R, Navratilova Z, Pandit T, Parker A, Pilly PK, Risi S, Sejnowski TJ, Soltoggio A, Soures N, Tolias AS, Urbina-Meléndez D, Valero-Cuevas FJ, van de Ven GM, Vogelstein JT, Wang F, Weiss R, Yanguas-Gil A, Zou X, Siegelmann H. Biological underpinnings for lifelong learning machines. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00452-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
19
|
The overlapping modular organization of human brain functional networks across the adult lifespan. Neuroimage 2022; 253:119125. [PMID: 35331872 DOI: 10.1016/j.neuroimage.2022.119125] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 03/02/2022] [Accepted: 03/19/2022] [Indexed: 01/06/2023] Open
Abstract
Previous studies have demonstrated that the brain's functional modular organization, a fundamental feature of the human brain, changes across the adult lifespan. However, these studies assumed that each brain region belongs to a single functional module, despite convergent evidence supporting the existence of overlap among functional modules in the human brain. To reveal how age affects the overlapping functional modular organization, this study applied an overlapping module detection algorithm that requires no prior knowledge to the resting-state fMRI data of a healthy cohort (N = 570) aged 18 to 88 years. A series of measures were derived to delineate the characteristics of the overlapping modular structure and the set of overlapping nodes (brain regions participating in two or more modules) identified for each participant. Age-related regression analyses on these measures found linearly decreasing trends in overlapping modularity and modular similarity. The number of overlapping nodes was found to increase with age, but the increase was not uniform across the brain. In addition, across the adult lifespan and within each age group, the nodal overlapping probability consistently correlated positively with both functional gradient and flexibility. Further, correlation and mediation analyses showed that the influence of age on memory-related cognitive performance might be explained by changes in the overlapping functional modular organization. Together, our results reveal age-related decreases in segregation from the perspective of the brain's overlapping functional modular organization, which could provide new insight into adult lifespan changes in brain function and their influence on cognitive performance.
|
20
|
O’Reilly J, Pillay N. Supplementary-architecture weight-optimization neural networks. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07035-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
21
|
Mei J, Muller E, Ramaswamy S. Informing deep neural networks by multiscale principles of neuromodulatory systems. Trends Neurosci 2022; 45:237-250. [DOI: 10.1016/j.tins.2021.12.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 12/04/2021] [Accepted: 12/21/2021] [Indexed: 01/19/2023]
|
22
|
Nonlinear reconfiguration of network edges, topology and information content during an artificial learning task. Brain Inform 2021; 8:26. [PMID: 34859330 PMCID: PMC8639979 DOI: 10.1186/s40708-021-00147-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 11/18/2021] [Indexed: 11/10/2022] Open
Abstract
Here, we combine network neuroscience and machine learning to reveal connections between the brain's network structure and the emerging network structure of an artificial neural network. Specifically, we train a shallow, feedforward neural network to classify hand-written digits and then use a combination of systems neuroscience and information-theoretic tools to perform 'virtual brain analytics' on the resultant edge weights and activity patterns of each node. We identify three distinct phases of network reconfiguration across learning, each of which is characterized by unique topological and information-theoretic signatures. Each phase involves aligning the connections of the neural network with patterns of information contained in the input dataset or preceding layers (as relevant). We also observe a process of low-dimensional category separation in the network as a function of learning. Our results offer a systems-level perspective of how artificial neural networks function, in terms of the multi-stage reorganization of edge weights and activity patterns to effectively exploit the information content of input data during edge-weight training, while simultaneously enriching our understanding of the methods used by systems neuroscience.
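The 'virtual brain analytics' idea, tracking an information-theoretic signature of the edge weights during training, can be sketched as follows (a tiny NumPy classifier on synthetic two-class data standing in for the digit task; the weight-distribution entropy is one illustrative analytic, not the paper's full toolset):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the digit task: two Gaussian classes (assumption).
X = np.vstack([rng.normal(-1, 1, (200, 10)), rng.normal(1, 1, (200, 10))])
y = np.array([0] * 200 + [1] * 200)
onehot = np.eye(2)[y]

W1 = rng.standard_normal((10, 16)) * 0.1
W2 = rng.standard_normal((16, 2)) * 0.1

def weight_entropy(W, bins=20):
    """Shannon entropy of the edge-weight distribution (one 'virtual analytic')."""
    counts, _ = np.histogram(W, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

entropies = []
for epoch in range(30):
    h = np.maximum(0, X @ W1)                       # ReLU hidden layer
    logits = h @ W2
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Backprop through softmax cross-entropy.
    d_logits = (probs - onehot) / len(y)
    dW2 = h.T @ d_logits
    dh = d_logits @ W2.T * (h > 0)
    dW1 = X.T @ dh
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1
    entropies.append(weight_entropy(W1))            # analytic per epoch

print("entropy first vs last epoch:", entropies[0], entropies[-1])
```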
|
23
|
Silent Synapses in Cocaine-Associated Memory and Beyond. J Neurosci 2021; 41:9275-9285. [PMID: 34759051 DOI: 10.1523/jneurosci.1559-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 09/22/2021] [Accepted: 09/27/2021] [Indexed: 11/21/2022] Open
Abstract
Glutamatergic synapses are key cellular sites where cocaine experience creates memory traces that subsequently promote cocaine craving and seeking. In addition to making across-the-board synaptic adaptations, cocaine experience also generates a discrete population of new synapses that selectively encode cocaine memories. These new synapses are glutamatergic synapses that lack functionally stable AMPA receptors (AMPARs), often referred to as AMPAR-silent synapses or, simply, silent synapses. They are generated de novo in the nucleus accumbens (NAc) by cocaine experience. After drug withdrawal, some of these synapses mature by recruiting AMPARs, contributing to the consolidation of cocaine-associated memory. After cue-induced retrieval of cocaine memories, matured silent synapses alternate between two dynamic states (AMPAR-absent vs AMPAR-containing) that correspond with the behavioral manifestations of destabilization and reconsolidation of these memories. Here, we review the molecular mechanisms underlying silent synapse dynamics during behavior, discuss their contributions to circuit remodeling, and analyze their role in cocaine-memory-driven behaviors. We also propose several mechanisms through which silent synapses can form neuronal ensembles as well as cross-region circuit engrams for cocaine-specific behaviors. These perspectives lead to our hypothesis that cocaine-generated silent synapses stand as a distinct set of synaptic substrates encoding key aspects of cocaine memory that drive cocaine relapse.
|
24
|
Lui KFH, Lo JCM, Ho CSH, McBride C, Maurer U. Resting state EEG network modularity predicts literacy skills in L1 Chinese but not in L2 English. BRAIN AND LANGUAGE 2021; 220:104984. [PMID: 34175709 DOI: 10.1016/j.bandl.2021.104984] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Revised: 04/23/2021] [Accepted: 06/17/2021] [Indexed: 06/13/2023]
Abstract
EEG network modularity, as a proxy for cognitive plasticity, has been proposed to be a more reliable neural marker than power and coherence in predicting learning outcomes. The present study examined the associations between resting state EEG network modularity and both L1 Chinese and L2 English literacy skills among 90 Hong Kong first to fifth graders. The modularity indices of different frequency bands were highly correlated with one another. An exploratory factor analysis, performed to extract a general modularity index, explained 77.1% of the total variance. The modularity index was positively associated with Chinese word reading, Chinese phonological awareness, Chinese morphological awareness, and Chinese reading comprehension but was not significantly correlated with English word reading or English morphological awareness. Findings suggest that resting state EEG network modularity is likely to serve as a reasonable, reliable, and cost-effective neural marker of the development of first language but not second language literacy skills.
Affiliation(s)
- Catherine McBride: Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong
- Urs Maurer: Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong
|
25
|
Neuromodulated Dopamine Plastic Networks for Heterogeneous Transfer Learning with Hebbian Principle. Symmetry (Basel) 2021. [DOI: 10.3390/sym13081344] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
Plastic modifications in synaptic connectivity are primarily driven by changes triggered by neuromodulated dopamine signals. These activities are controlled by neuromodulation, which is itself under the control of the brain; the brain's self-modifying abilities play an essential role in learning and adaptation. Artificial neural networks with neuromodulated plasticity have been used to implement transfer learning in the image classification domain, with applications in image detection and image segmentation and significant results in transferring learned parameters. This paper proposes a novel approach to enhance transfer learning accuracy between a heterogeneous source and target, using neuromodulation of the Hebbian learning principle, called NDHTL (Neuromodulated Dopamine Hebbian Transfer Learning). Neuromodulation of plasticity offers a powerful new technique for training neural networks, implementing asymmetric backpropagation using Hebbian principles in transfer-learning-motivated CNNs (convolutional neural networks). In biologically motivated concomitant learning, where connected brain cells activate together, the synaptic connection strength between network neurons is enhanced. In the NDHTL algorithm, the degree of plasticity change between the neurons of a CNN layer is directly managed by the value of the dopamine signal. The discriminative nature of transfer learning fits well with this technique: the learned model's connection weights must adapt to unseen target datasets with the least cost and effort. Using distinctive learning principles such as dopamine Hebbian learning for asymmetric gradient weight updates in transfer learning is a novel approach, whereas standard transfer learning using gradient backpropagation is a symmetric framework. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed NDHTL algorithm can enhance transfer learning efficiency compared to existing methods.
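The core neuromodulated Hebbian mechanism can be sketched as follows (a minimal illustration of a dopamine-gated plastic trace on a single layer; the layer sizes, decay scheme, and constants are assumptions and do not reproduce the NDHTL CNN pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# One plastic layer: weights carry a fixed part plus a Hebbian trace.
n_in, n_out = 8, 4
W_fixed = rng.standard_normal((n_in, n_out)) * 0.1
alpha = 0.5    # how strongly the plastic trace contributes
eta = 0.1      # Hebbian learning rate

def forward(pre, hebb, dopamine):
    """One step of activity plus a dopamine-gated Hebbian update of the trace."""
    post = np.tanh(pre @ (W_fixed + alpha * hebb))
    # Dopamine scales the plasticity: no signal, no change in the trace.
    hebb = (1 - eta) * hebb + eta * dopamine * np.outer(pre, post)
    return post, hebb

hebb = np.zeros((n_in, n_out))
pre = rng.standard_normal(n_in)
post0, hebb = forward(pre, hebb, dopamine=0.0)   # trace stays untouched
post1, hebb = forward(pre, hebb, dopamine=1.0)   # trace moves toward co-activation
```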
|
26
|
McCormick EM, Peters S, Crone EA, Telzer EH. Longitudinal network re-organization across learning and development. Neuroimage 2021; 229:117784. [PMID: 33503482 PMCID: PMC7994295 DOI: 10.1016/j.neuroimage.2021.117784] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 01/07/2021] [Accepted: 01/11/2021] [Indexed: 12/15/2022] Open
Abstract
While it is well understood that the brain changes across short-term experience/learning and long-term development, it is unclear how these two mechanisms interact to produce developmental outcomes. Here we test an interactive model of learning and development, where certain learning-related changes are constrained by developmental changes in the brain, against an alternative development-as-practice model, where outcomes are determined primarily by the accumulation of experience regardless of age. Participants (8-29 years) took part in a three-wave, accelerated longitudinal study during which they completed a feedback learning task during an fMRI scan. Adopting a novel longitudinal modeling approach, we probed the unique and moderated effects of learning, experience, and development simultaneously on behavioral performance and network modularity during the task. We found nonlinear patterns of development for both behavior and brain, and that greater experience supported increased learning and network modularity relative to naïve subjects. We also found changing brain-behavior relationships across adolescent development, where heightened network modularity predicted improved learning, but only following the transition from adolescence to young adulthood. These results present compelling support for an interactive view of experience and development, where changes in the brain impact behavior in a context-specific fashion based on developmental goals.
Affiliation(s)
- Ethan M McCormick: Department of Psychology and Neuroscience, University of North Carolina, 235 E. Cameron Avenue, Chapel Hill, NC 27599, United States
- Sabine Peters: Department of Developmental and Educational Psychology, Leiden University, 2333AK Leiden, the Netherlands; Leiden Institute for Brain and Cognition, 2333ZA Leiden, the Netherlands
- Eveline A Crone: Department of Developmental and Educational Psychology, Leiden University, 2333AK Leiden, the Netherlands; Leiden Institute for Brain and Cognition, 2333ZA Leiden, the Netherlands; School of Social and Behavioural Sciences, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Eva H Telzer: Department of Psychology and Neuroscience, University of North Carolina, 235 E. Cameron Avenue, Chapel Hill, NC 27599, United States
|
27
|
Probing the structure-function relationship with neural networks constructed by solving a system of linear equations. Sci Rep 2021; 11:3808. [PMID: 33589672 PMCID: PMC7884791 DOI: 10.1038/s41598-021-82964-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 01/27/2021] [Indexed: 11/17/2022] Open
Abstract
Neural network models are an invaluable tool for understanding brain function, since they allow us to connect the cellular and circuit levels with behaviour. Neural networks usually comprise a huge number of parameters, which must be chosen carefully such that networks reproduce anatomical, behavioural, and neurophysiological data. These parameters are usually fitted with off-the-shelf optimization algorithms that iteratively change network parameters and simulate the network to evaluate its performance and improve the fit. Here we propose to invert the fitting process by proceeding from the network dynamics towards the network parameters. Firing state transitions are chosen according to the transition graph associated with the solution of a task. Then, a system of linear equations is constructed from the network firing states and membrane potentials, in a way that guarantees the consistency of the system. This allows us to uncouple the dynamical features of the model, such as its neurons' firing rates and correlations, from the structural features and the task-solving algorithm implemented by the network. We employed our method to probe the structure-function relationship in a sequence memory task. The networks obtained showed connectivity and firing statistics that recapitulated experimental observations. We argue that the proposed method is a complementary, and needed, alternative to the way neural networks are constructed to model brain function.
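The inverted fitting idea, proceeding from desired firing states to weights via a linear system, can be sketched as follows (a minimal least-squares illustration with threshold units and +1/-1 target membrane potentials; the target-potential choice and the random state sequence are assumptions, not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(4)

n_units, T = 6, 12
# Desired firing-state sequence (the "transition graph" of the task),
# chosen at random here purely for illustration.
S = (rng.random((n_units, T)) < 0.5).astype(float)

# Target membrane potentials consistent with each next state:
# +1 where the unit should fire, -1 where it should stay silent (assumption).
V = 2 * S[:, 1:] - 1

# Linear system W @ S[:, :-1] = V, solved by least squares.
W, *_ = np.linalg.lstsq(S[:, :-1].T, V.T, rcond=None)
W = W.T

# Check: does thresholding the recovered potentials reproduce the transitions?
S_pred = (W @ S[:, :-1] > 0).astype(float)
acc = (S_pred == S[:, 1:]).mean()
print("fraction of transitions reproduced:", acc)
```

With a consistent (non-conflicting) state sequence, as the paper guarantees by construction, the system can be solved exactly rather than approximately.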
|
28
|
Abstract
We investigate whether standard evolutionary robotics methods can be extended to support the evolution of multiple behaviors by forcing the retention of variations that are adaptive with respect to all required behaviors. This is realized by selecting the individuals located in the first Pareto fronts of the multidimensional fitness space in the case of standard evolutionary algorithms, and by computing and using multiple gradients of the expected fitness in the case of modern evolution strategies that move the population in the direction of the gradient of the fitness. The results collected on two extended versions of state-of-the-art benchmarking problems indicate that the latter method evolves robots capable of producing the required multiple behaviors in the majority of the replications, and produces significantly better results than all the other methods considered.
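The first selection scheme described above, keeping the non-dominated individuals of the multidimensional fitness space, can be sketched as (a minimal illustration with a random population scored on two hypothetical behaviors):

```python
import numpy as np

rng = np.random.default_rng(5)

def pareto_front(fitness):
    """Indices of non-dominated individuals (maximization on every behavior)."""
    n = len(fitness)
    front = []
    for i in range(n):
        dominated = any(
            np.all(fitness[j] >= fitness[i]) and np.any(fitness[j] > fitness[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Population of 20 genotypes scored on two required behaviors.
fitness = rng.random((20, 2))
survivors = pareto_front(fitness)
```

A full algorithm such as NSGA-II would additionally rank later fronts and preserve diversity along each front; this sketch keeps only the first-front test.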
|
29
|
Budaev S, Kristiansen TS, Giske J, Eliassen S. Computational animal welfare: towards cognitive architecture models of animal sentience, emotion and wellbeing. ROYAL SOCIETY OPEN SCIENCE 2020; 7:201886. [PMID: 33489298 PMCID: PMC7813262 DOI: 10.1098/rsos.201886] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Accepted: 12/04/2020] [Indexed: 05/08/2023]
Abstract
To understand animal wellbeing, we need to consider subjective phenomena and sentience. This is challenging, since these properties are private and cannot be observed directly. Certain motivations, emotions, and related internal states can be inferred in animals through experiments that involve choice, learning, generalization, and decision-making. Yet, even though there has been significant progress in elucidating the neurobiology of human consciousness, animal consciousness is still a mystery. We propose that computational animal welfare science emerges at the intersection of animal behaviour, welfare, and computational cognition. Using ideas from cognitive science, we develop a functional and generic definition of subjective phenomena as any process or state of the organism that exists from the first-person perspective and cannot be isolated from the animal subject. We then outline a general cognitive architecture to model simple forms of subjective processes and sentience. This architecture includes evolutionary adaptation and incorporates top-down attention modulation, predictive processing, and subjective simulation by re-entrant (recursive) computations. Thereafter, we show how this approach captures major characteristics of subjective experience: elementary self-awareness, global workspace, and qualia with unity and continuity. This provides a formal framework for process-based modelling of animal needs, subjective states, sentience, and wellbeing.
Affiliation(s)
- Sergey Budaev: Department of Biological Sciences, University of Bergen, PO Box 7803, 5020 Bergen, Norway
- Tore S. Kristiansen: Research Group Animal Welfare, Institute of Marine Research, PO Box 1870, 5817 Bergen, Norway
- Jarl Giske: Department of Biological Sciences, University of Bergen, PO Box 7803, 5020 Bergen, Norway
- Sigrunn Eliassen: Department of Biological Sciences, University of Bergen, PO Box 7803, 5020 Bergen, Norway
|
30
|
Wang L, Zheng J, Orchard J. Evolving Generalized Modulatory Learning: Unifying Neuromodulation and Synaptic Plasticity. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2960766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
31
|
Abstract
The prospect of improving or maintaining cognitive functioning has provoked a steadily increasing number of cognitive training interventions in recent years, especially for clinical and elderly populations. However, there are discrepancies between the findings of these studies. One reason behind these heterogeneous findings is that there are vast inter-individual differences in how people benefit from training and in the extent to which training-related gains transfer to other untrained tasks and domains. In this paper, we address the value of incorporating neural measures into cognitive training studies in order to fully understand the mechanisms leading to inter-individual differences in training gains and their generalizability to other tasks. Our perspective is that it is necessary to collect multimodal neural measures in the pre- and post-training phase, which can enable us to understand the factors contributing to successful training outcomes. More importantly, this understanding can enable us to predict who will benefit from different types of interventions, thereby allowing the development of individually tailored intervention programs.
|
32
|
Faradonbe SM, Safi-Esfahani F. A classifier task based on Neural Turing Machine and particle swarm algorithm. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.07.097] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
33
|
Li W, Li M, Qiao J, Guo X. A feature clustering-based adaptive modular neural network for nonlinear system modeling. ISA TRANSACTIONS 2020; 100:185-197. [PMID: 31767196 DOI: 10.1016/j.isatra.2019.11.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 08/27/2019] [Accepted: 11/08/2019] [Indexed: 06/10/2023]
Abstract
To improve the performance of nonlinear system modeling, this study proposes a feature clustering-based adaptive modular neural network (FC-AMNN) that simulates the information-processing mechanism of the human brain, in which different information is processed by different modules in parallel. Firstly, features are clustered using an adaptive feature clustering algorithm, and the number of modules in FC-AMNN is determined automatically by the number of feature clusters. The features in each cluster are then allocated to the corresponding module in FC-AMNN. A self-constructive RBF neural network based on the Error Correction algorithm is adopted as the subnetwork to learn the allocated features. All modules work in parallel and are finally integrated using a Bayesian method to obtain the output. To demonstrate the effectiveness of the proposed model, FC-AMNN is tested on several UCI benchmark problems as well as a practical problem in the wastewater treatment process. The experimental results show that FC-AMNN achieves better generalization performance and more accurate results for nonlinear system modeling compared with other modular neural networks.
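The modular pipeline can be sketched as follows (an illustrative NumPy version in which a plain k-means with fixed k stands in for the adaptive feature clustering, a linear least-squares model stands in for each self-constructive RBF subnetwork, and simple averaging stands in for the Bayesian integration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy regression data with 6 features.
X = rng.standard_normal((300, 6))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + X[:, 3:].sum(axis=1)

# Step 1: cluster the features by the similarity of their profiles.
k = 2
profiles = X.T                                   # one row per feature
centers = profiles[rng.choice(6, k, replace=False)]
for _ in range(20):
    dists = ((profiles[:, None] - centers[None]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    centers = np.array([profiles[labels == c].mean(axis=0)
                        if np.any(labels == c) else centers[c]
                        for c in range(k)])

# Step 2: one subnetwork per feature cluster, trained on its own features.
preds = []
for c in range(k):
    Xc = X[:, labels == c]
    w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    preds.append(Xc @ w)

# Step 3: integrate the parallel module outputs into a single prediction.
y_hat = np.mean(preds, axis=0)
```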
Affiliation(s)
- Wenjing Li: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Meng Li: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Junfei Qiao: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
- Xin Guo: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China
|
34
|
Lehman J, Clune J, Misevic D, Adami C, Altenberg L, Beaulieu J, Bentley PJ, Bernard S, Beslon G, Bryson DM, Cheney N, Chrabaszcz P, Cully A, Doncieux S, Dyer FC, Ellefsen KO, Feldt R, Fischer S, Forrest S, Fŕenoy A, Gagńe C, Le Goff L, Grabowski LM, Hodjat B, Hutter F, Keller L, Knibbe C, Krcah P, Lenski RE, Lipson H, MacCurdy R, Maestre C, Miikkulainen R, Mitri S, Moriarty DE, Mouret JB, Nguyen A, Ofria C, Parizeau M, Parsons D, Pennock RT, Punch WF, Ray TS, Schoenauer M, Schulte E, Sims K, Stanley KO, Taddei F, Tarapore D, Thibault S, Watson R, Weimer W, Yosinski J. The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. ARTIFICIAL LIFE 2020; 26:274-306. [PMID: 32271631 DOI: 10.1162/artl_a_00319] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpected adaptations, or engaged in behaviors and produced outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead, they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This article is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories.
In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.
Collapse
Affiliation(s)
- Dusan Misevic
- Université de Paris, INSERM U1284, Center for Research and Interdisciplinarity.
- Stephane Doncieux
- Sorbonne Universités, UPMC Univ Paris 06, CNRS, Institute of Intelligent Systems and Robotics (ISIR)
- Leni Le Goff
- Sorbonne Universités, UPMC Univ Paris 06, CNRS, Institute of Intelligent Systems and Robotics (ISIR)
- Laurent Keller
- Department of Fundamental Microbiology, University of Lausanne
- Carlos Maestre
- Sorbonne Universités, UPMC Univ Paris 06, CNRS, Institute of Intelligent Systems and Robotics (ISIR)
- Sara Mitri
- Department of Fundamental Microbiology, University of Lausanne
- François Taddei
- Center for Research and Interdisciplinarity, INSERM U1284, Université de Paris
Collapse
|
35
|
Damicelli F, Hilgetag CC, Hütt MT, Messé A. Topological reinforcement as a principle of modularity emergence in brain networks. Netw Neurosci 2019; 3:589-605. [PMID: 31157311 PMCID: PMC6542620 DOI: 10.1162/netn_a_00085] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2018] [Accepted: 03/21/2019] [Indexed: 12/02/2022] Open
Abstract
Modularity is a ubiquitous topological feature of structural brain networks at various scales. Although a variety of potential mechanisms have been proposed, the fundamental principles by which modularity emerges in neural networks remain elusive. We tackle this question with a plasticity model of neural networks derived from a purely topological perspective. Our topological reinforcement model acts by enhancing the topological overlap between nodes, that is, by iteratively adding connections between non-neighbor nodes with high neighborhood similarity. This rule reliably evolves synthetic random networks toward a modular architecture. The final modular structure reflects initial "proto-modules," thus allowing prediction of the modules of the evolved graph. Subsequently, we show that this topological selection principle might be biologically implemented as a Hebbian rule. Concretely, we explore a simple model of excitable dynamics, where the plasticity rule acts based on the functional connectivity (co-activations) between nodes. Results produced by the activity-based model are consistent with those from the purely topological rule in terms of the final network configuration and module composition. Our findings suggest that the selective reinforcement of topological overlap may be a fundamental mechanism contributing to modularity emergence in brain networks.
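The reinforcement rule summarized in this abstract can be sketched in a few lines. This is an illustrative reading of "reinforce the topological overlap of the most similar non-neighbor pair," not the authors' implementation; the function names and the overlap normalization (shared neighbors over the smaller neighborhood) are assumptions.

```python
# Minimal sketch of one topological-reinforcement step, assuming an
# undirected graph stored as a dict of neighbor sets. All names here
# are illustrative, not taken from the paper's code.

def topological_overlap(adj, u, v):
    """Neighborhood similarity: shared neighbors / smaller neighborhood."""
    shared = len(adj[u] & adj[v])
    denom = min(len(adj[u]), len(adj[v]))
    return shared / denom if denom else 0.0

def reinforce_once(adj):
    """Add one edge between the most-overlapping non-neighbor pair."""
    nodes = sorted(adj)
    best, best_ov = None, -1.0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # already connected; rule only links non-neighbors
            ov = topological_overlap(adj, u, v)
            if ov > best_ov:
                best, best_ov = (u, v), ov
    if best:
        u, v = best
        adj[u].add(v)
        adj[v].add(u)
    return best

# A small graph with two loose clusters: the rule closes the gap
# inside the denser neighborhood first.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3, 5}, 5: {4}}
print(reinforce_once(adj))  # (3, 5): they share neighbor 4
```

Iterating `reinforce_once` (with a matching pruning step, as the paper's rewiring implies) is what drives random networks toward the modular architecture described above.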
Collapse
Affiliation(s)
- Fabrizio Damicelli
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
- Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, Massachusetts, United States of America
- Marc-Thorsten Hütt
- Department of Life Sciences and Chemistry, Jacobs University, Bremen, Germany
- Arnaud Messé
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
Collapse
|
36
|
Gallen CL, D'Esposito M. Brain Modularity: A Biomarker of Intervention-related Plasticity. Trends Cogn Sci 2019; 23:293-304. [PMID: 30827796 PMCID: PMC6750199 DOI: 10.1016/j.tics.2019.01.014] [Citation(s) in RCA: 83] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Revised: 01/26/2019] [Accepted: 01/28/2019] [Indexed: 01/02/2023]
Abstract
Interventions using methods such as cognitive training and aerobic exercise have shown potential to enhance cognitive abilities. However, there is often pronounced individual variability in the magnitude of these gains. Here, we propose that brain network modularity, a measure of brain subnetwork segregation, is a unifying biomarker of intervention-related plasticity. We present work from multiple independent studies demonstrating that individual differences in baseline brain modularity predict gains in cognitive control functions across several populations and interventions, spanning healthy adults to patients with clinical deficits and cognitive training to aerobic exercise. We believe that this predictive framework provides a foundation for developing targeted, personalized interventions to improve cognition.
Collapse
Affiliation(s)
- Courtney L Gallen
- Department of Neurology, University of California San Francisco, San Francisco, CA, USA; Neuroscape, University of California San Francisco, San Francisco, CA, USA.
- Mark D'Esposito
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
Collapse
|
37
|
Rodriguez N, Izquierdo E, Ahn YY. Optimal modularity and memory capacity of neural reservoirs. Netw Neurosci 2019; 3:551-566. [PMID: 31089484 PMCID: PMC6497001 DOI: 10.1162/netn_a_00082] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Accepted: 02/25/2019] [Indexed: 11/04/2022] Open
Abstract
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network's architecture and function is still primitive. Here we reveal that a neural network's modular architecture plays a vital role in determining the neural dynamics and memory performance of networks of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, where a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks and may shed light on the brain's modular organization.
Collapse
Affiliation(s)
- Nathaniel Rodriguez
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Eduardo Izquierdo
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
- Yong-Yeol Ahn
- School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Indiana University Network Science Institute, Bloomington, IN, USA
Collapse
|
38
|
Ellefsen KO, Huizinga J, Torresen J. Guiding Neuroevolution with Structural Objectives. EVOLUTIONARY COMPUTATION 2019; 28:115-140. [PMID: 30767665 DOI: 10.1162/evco_a_00250] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The structure and performance of neural networks are intimately connected, and by use of evolutionary algorithms, neural network structures optimally adapted to a given task can be explored. Guiding such neuroevolution with additional objectives related to network structure has been shown to improve performance in some cases, especially when modular neural networks are beneficial. However, apart from objectives aiming to make networks more modular, such structural objectives have not been widely explored. We propose two new structural objectives and test their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks. The first structural objective guides evolution to align neural networks with a user-recommended decomposition pattern. Intuitively, this should be a powerful guiding target for problems where human users can easily identify a structure. The second structural objective guides evolution towards a population with a high diversity in decomposition patterns. This results in exploration of many different ways to decompose a problem, allowing evolution to find good decompositions faster. Tests on our target problems reveal that both methods perform well on a problem with a very clear and decomposable structure. However, on a problem where the optimal decomposition is less obvious, the structural diversity objective is found to outcompete other structural objectives; this technique can even increase performance on problems without any decomposable structure at all.
Collapse
Affiliation(s)
- Jim Torresen
- Department of Informatics and RITMO, University of Oslo, Norway
Collapse
|
39
|
|
40
|
Buch VP, Richardson AG, Brandon C, Stiso J, Khattak MN, Bassett DS, Lucas TH. Network Brain-Computer Interface (nBCI): An Alternative Approach for Cognitive Prosthetics. Front Neurosci 2018; 12:790. [PMID: 30443203 PMCID: PMC6221897 DOI: 10.3389/fnins.2018.00790] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 10/12/2018] [Indexed: 11/13/2022] Open
Abstract
Brain computer interfaces (BCIs) have been applied to sensorimotor systems for many years. However, BCI technology has broad potential beyond sensorimotor systems. The emerging field of cognitive prosthetics, for example, promises to improve learning and memory for patients with cognitive impairment. Unfortunately, our understanding of the neural mechanisms underlying these cognitive processes remains limited in part due to the extensive individual variability in neural coding and circuit function. As a consequence, the development of methods to ascertain optimal control signals for cognitive decoding and restoration remains an active area of inquiry. To advance the field, robust tools are required to quantify time-varying and task-dependent brain states predictive of cognitive performance. Here, we suggest that network science is a natural language in which to formulate and apply such tools. In support of our argument, we offer a simple demonstration of the feasibility of a network approach to BCI control signals, which we refer to as network BCI (nBCI). Finally, in a single subject example, we show that nBCI can reliably predict online cognitive performance and is superior to certain common spectral approaches currently used in BCIs. Our review of the literature and preliminary findings support the notion that nBCI could provide a powerful approach for future applications in cognitive prosthetics.
Collapse
Affiliation(s)
- Vivek P Buch
- Department of Neurosurgery, Hospital of the University of Pennsylvania, Philadelphia, PA, United States
- Andrew G Richardson
- Department of Neurosurgery, Hospital of the University of Pennsylvania, Philadelphia, PA, United States
- Cameron Brandon
- Department of Neurosurgery, Hospital of the University of Pennsylvania, Philadelphia, PA, United States
- Jennifer Stiso
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, United States
- Monica N Khattak
- Department of Neurosurgery, Hospital of the University of Pennsylvania, Philadelphia, PA, United States
- Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States; Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurology, University of Pennsylvania, Philadelphia, PA, United States; Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, United States
- Timothy H Lucas
- Department of Neurosurgery, Hospital of the University of Pennsylvania, Philadelphia, PA, United States; Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, United States
Collapse
|
41
|
Khambhati AN, Sizemore AE, Betzel RF, Bassett DS. Modeling and interpreting mesoscale network dynamics. Neuroimage 2018; 180:337-349. [PMID: 28645844 PMCID: PMC5738302 DOI: 10.1016/j.neuroimage.2017.06.029] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2017] [Revised: 06/12/2017] [Accepted: 06/14/2017] [Indexed: 11/28/2022] Open
Abstract
Recent advances in brain imaging techniques, measurement approaches, and storage capacities have provided an unprecedented supply of high temporal resolution neural data. These data present a remarkable opportunity to gain a mechanistic understanding not just of circuit structure, but also of circuit dynamics, and its role in cognition and disease. Such understanding necessitates a description of the raw observations, and a delineation of computational models and mathematical theories that accurately capture fundamental principles behind the observations. Here we review recent advances in a range of modeling approaches that embrace the temporally-evolving interconnected structure of the brain and summarize that structure in a dynamic graph. We describe recent efforts to model dynamic patterns of connectivity, dynamic patterns of activity, and patterns of activity atop connectivity. In the context of these models, we review important considerations in statistical testing, including parametric and non-parametric approaches. Finally, we offer thoughts on careful and accurate interpretation of dynamic graph architecture, and outline important future directions for method development.
Collapse
Affiliation(s)
- Ankit N Khambhati
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ann E Sizemore
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Richard F Betzel
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA.
Collapse
|
42
|
Chandra R, Ong YS, Goh CK. Co-evolutionary multi-task learning for dynamic time series prediction. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2018.05.041] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
43
|
Soltoggio A, Stanley KO, Risi S. Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks. Neural Netw 2018; 108:48-67. [PMID: 30142505 DOI: 10.1016/j.neunet.2018.07.013] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2017] [Revised: 07/24/2018] [Accepted: 07/24/2018] [Indexed: 02/07/2023]
Abstract
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifelong learning. The interplay of these elements leads to the emergence of biological intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) employ simulated evolution in silico to breed plastic neural networks with the aim of autonomously designing and creating learning systems. EPANN experiments evolve networks that include both innate properties and the ability to change and learn in response to experiences in different environments and problem domains. EPANNs' aims include autonomously creating learning systems, bootstrapping learning from scratch, recovering performance in unseen conditions, testing the computational advantages of particular neural components, and deriving hypotheses on the emergence of biological learning. Thus, EPANNs may include a large variety of different neuron types and dynamics, network architectures, plasticity rules, and other factors. While EPANNs have seen considerable progress over the last two decades, current scientific and technological advances in artificial neural networks are setting the conditions for radically new approaches and results. Exploiting the increased availability of computational resources and of simulation environments, the often challenging task of hand-designing learning neural networks could be replaced by more autonomous and creative processes. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and possible developments are presented.
Collapse
Affiliation(s)
- Andrea Soltoggio
- Department of Computer Science, Loughborough University, LE11 3TU, Loughborough, UK.
- Kenneth O Stanley
- Department of Computer Science, University of Central Florida, Orlando, FL, USA.
Collapse
|
44
|
Espinosa-Soto C. On the role of sparseness in the evolution of modularity in gene regulatory networks. PLoS Comput Biol 2018; 14:e1006172. [PMID: 29775459 PMCID: PMC5979046 DOI: 10.1371/journal.pcbi.1006172] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 05/31/2018] [Accepted: 05/01/2018] [Indexed: 12/13/2022] Open
Abstract
Modularity is a widespread property in biological systems. It implies that interactions occur mainly within groups of system elements. A modular arrangement facilitates adjustment of one module without perturbing the rest of the system. Therefore, modularity of developmental mechanisms is a major factor for evolvability, the potential to produce beneficial variation from random genetic change. Understanding how modularity evolves in gene regulatory networks, which create the distinct gene activity patterns that characterize different parts of an organism, is key to developmental and evolutionary biology. One hypothesis for the evolution of modules suggests that interactions between some sets of genes become maladaptive when selection favours additional gene activity patterns. The removal of such interactions by selection would result in the formation of modules. A second hypothesis suggests that modularity evolves in response to sparseness, the scarcity of interactions within a system. Here I simulate the evolution of gene regulatory networks and analyse diverse experimentally sustained networks to study the relationship between sparseness and modularity. My results suggest that sparseness alone is neither sufficient nor necessary to explain modularity in gene regulatory networks. However, sparseness amplifies the effects of forms of selection that, like selection for additional gene activity patterns, already produce an increase in modularity. That evolution of new gene activity patterns is frequent across evolution also supports that it is a major factor in the evolution of modularity. That sparseness is widespread across gene regulatory networks indicates that it may have facilitated the evolution of modules in a wide variety of cases. Modular systems have performance and design advantages over non-modular systems. Thus, modularity is very important for the development of a wide range of new technological or clinical applications.
Moreover, modularity is paramount to evolutionary biology since it allows adjusting one organismal function without disturbing other previously evolved functions. But how does modularity itself evolve? Here I analyse the structure of regulatory networks and follow simulations of network evolution to study two hypotheses for the origin of modules in gene regulatory networks. The first hypothesis considers that sparseness, a low number of interactions among the network genes, could be responsible for the evolution of modular networks. The second, that modules evolve when selection favours the production of additional gene activity patterns. I found that sparseness alone is neither sufficient nor necessary to explain modularity in gene regulatory networks. However, it enhances the effects of selection for multiple gene activity patterns. While selection for multiple patterns may be decisive in the evolution of modularity, that sparseness is widespread across gene regulatory networks suggests that its contributions should not be neglected.
Collapse
Affiliation(s)
- Carlos Espinosa-Soto
- Instituto de Física, Universidad Autónoma de San Luis Potosí, Manuel Nava 6, Zona Universitaria, San Luis Potosí, Mexico
Collapse
|
45
|
Mattar MG, Wymbs NF, Bock AS, Aguirre GK, Grafton ST, Bassett DS. Predicting future learning from baseline network architecture. Neuroimage 2018; 172:107-117. [PMID: 29366697 PMCID: PMC5910215 DOI: 10.1016/j.neuroimage.2018.01.037] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2017] [Revised: 01/09/2018] [Accepted: 01/15/2018] [Indexed: 12/24/2022] Open
Abstract
Human behavior and cognition result from a complex pattern of interactions between brain regions. The flexible reconfiguration of these patterns enables behavioral adaptation, such as the acquisition of a new motor skill. Yet, the degree to which these reconfigurations depend on the brain's baseline sensorimotor integration is far from understood. Here, we asked whether spontaneous fluctuations in sensorimotor networks at baseline were predictive of individual differences in future learning. We analyzed functional MRI data from 19 participants prior to six weeks of training on a new motor skill. We found that visual-motor connectivity was inversely related to learning rate: sensorimotor autonomy at baseline corresponded to faster learning in the future. Using three additional scans, we found that visual-motor connectivity at baseline is a relatively stable individual trait. These results suggest that individual differences in motor skill learning can be predicted from sensorimotor autonomy at baseline prior to task execution.
Collapse
Affiliation(s)
- Marcelo G Mattar
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Nicholas F Wymbs
- Human Brain Physiology and Stimulation Laboratory, Department of Physical Medicine and Rehabilitation, Johns Hopkins Medical Institution, Baltimore, MD, USA
- Andrew S Bock
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Geoffrey K Aguirre
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Scott T Grafton
- Department of Psychological and Brain Sciences and UCSB Brain Imaging Center, University of California, Santa Barbara, Santa Barbara, CA, USA
- Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, USA.
Collapse
|
46
|
Garcia JO, Ashourvan A, Muldoon SF, Vettel JM, Bassett DS. Applications of community detection techniques to brain graphs: Algorithmic considerations and implications for neural function. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2018; 106:846-867. [PMID: 30559531 PMCID: PMC6294140 DOI: 10.1109/jproc.2017.2786710] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
The human brain can be represented as a graph in which neural units such as cells or small volumes of tissue are heterogeneously connected to one another through structural or functional links. Brain graphs are parsimonious representations of neural systems that have begun to offer fundamental insights into healthy human cognition, as well as its alteration in disease. A critical open question in network neuroscience lies in how neural units cluster into densely interconnected groups that can provide the coordinated activity that is characteristic of perception, action, and adaptive behaviors. Tools that have proven particularly useful for addressing this question are community detection approaches, which can identify communities or modules: groups of neural units that are densely interconnected with other units in their own group but sparsely interconnected with units in other groups. In this paper, we describe a common community detection algorithm known as modularity maximization, and we detail its applications to brain graphs constructed from neuroimaging data. We pay particular attention to important algorithmic considerations, especially in recent extensions of these techniques to graphs that evolve in time. After recounting a few fundamental insights that these techniques have provided into brain function, we highlight potential avenues of methodological advancements for future studies seeking to better characterize the patterns of coordinated activity in the brain that accompany human behavior. This tutorial provides a naive reader with an introduction to theoretical considerations pertinent to the generation of brain graphs, an understanding of modularity maximization for community detection, a resource of statistical measures that can be used to characterize community structure, and an appreciation of the usefulness of these approaches in uncovering behaviorally-relevant network dynamics in neuroimaging data.
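As a toy illustration of the quantity that modularity maximization optimizes, the Newman modularity Q of a fixed partition can be computed directly from an edge list. This is a minimal sketch, not the authors' code; the function names and example graph are illustrative.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity of an undirected graph under a fixed partition:
    Q = sum over communities c of (L_c / m) - (d_c / 2m)**2,
    where L_c is the number of intra-community edges and d_c the total
    degree of community c."""
    m = len(edges)
    intra = defaultdict(int)    # L_c: edges with both ends in community c
    deg_sum = defaultdict(int)  # d_c: summed degree of community c
    for u, v in edges:
        deg_sum[community[u]] += 1
        deg_sum[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (deg_sum[c] / (2 * m)) ** 2 for c in deg_sum)

# Two triangles joined by one bridge edge: partitioning into the two
# triangles scores well above lumping everything into one community.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
two = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
one = {n: "a" for n in range(6)}
print(round(modularity(edges, two), 3))  # 0.357
print(round(modularity(edges, one), 3))  # 0.0
```

Modularity maximization, as reviewed above, searches over partitions for the one with the highest Q; extensions to time-evolving graphs add coupling terms between adjacent time windows.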
Collapse
Affiliation(s)
- Javier O Garcia
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005 USA
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104 USA
- Department of Mathematics and CDSE Program, University at Buffalo, Buffalo, NY 14260 USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, 93106 USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Arian Ashourvan
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005 USA
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104 USA
- Department of Mathematics and CDSE Program, University at Buffalo, Buffalo, NY 14260 USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, 93106 USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Sarah F Muldoon
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005 USA
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104 USA
- Department of Mathematics and CDSE Program, University at Buffalo, Buffalo, NY 14260 USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, 93106 USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Jean M Vettel
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005 USA
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104 USA
- Department of Mathematics and CDSE Program, University at Buffalo, Buffalo, NY 14260 USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, 93106 USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Danielle S Bassett
- U.S. Army Research Laboratory, Aberdeen Proving Ground, MD 21005 USA
- Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
- Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104 USA
- Department of Mathematics and CDSE Program, University at Buffalo, Buffalo, NY 14260 USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, 93106 USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104 USA
Collapse
|
47
|
Reddy PG, Mattar MG, Murphy AC, Wymbs NF, Grafton ST, Satterthwaite TD, Bassett DS. Brain state flexibility accompanies motor-skill acquisition. Neuroimage 2018; 171:135-147. [PMID: 29309897 PMCID: PMC5857429 DOI: 10.1016/j.neuroimage.2017.12.093] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 12/09/2017] [Accepted: 12/29/2017] [Indexed: 11/23/2022] Open
Abstract
Learning requires the traversal of inherently distinct cognitive states to produce behavioral adaptation. Yet, tools to explicitly measure these states with non-invasive imaging – and to assess their dynamics during learning – remain limited. Here, we describe an approach based on a distinct application of graph theory in which points in time are represented by network nodes, and similarities in brain states between two different time points are represented as network edges. We use a graph-based clustering technique to identify clusters of time points representing canonical brain states, and to assess the manner in which the brain moves from one state to another as learning progresses. We observe the presence of two primary states characterized by either high activation in sensorimotor cortex or high activation in a frontal-subcortical system. Flexible switching among these primary states and other less common states becomes more frequent as learning progresses, and is inversely correlated with individual differences in learning rate. These results are consistent with the notion that the development of automaticity is associated with a greater freedom to use cognitive resources for other processes. Taken together, our work offers new insights into the constrained, low-dimensional nature of brain dynamics characteristic of early learning, which gives way to less constrained, high-dimensional dynamics in later learning.
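The graph construction this abstract describes, with time points as nodes and between-state similarity as edge weights, can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline; Pearson correlation is assumed here as the similarity measure, and the names are hypothetical.

```python
# Build a time-by-time similarity graph from activity vectors:
# node = time point, edge weight = Pearson correlation between the
# regional activation vectors at two time points.
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def similarity_graph(activity):
    """activity[t] = regional activation vector at time t.
    Returns edge weights keyed by time-point pairs (s, t), s < t."""
    T = len(activity)
    return {(s, t): pearson(activity[s], activity[t])
            for s in range(T) for t in range(s + 1, T)}

# Three time points over three regions: t=1 repeats t=0's pattern
# (scaled), t=2 reverses it.
acts = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 2.0, 1.0]]
g = similarity_graph(acts)
# g[(0, 1)] == 1.0 (same pattern), g[(0, 2)] == -1.0 (reversed)
```

Clustering this graph's nodes (e.g., with the modularity-based community detection discussed elsewhere in this list) then groups time points into recurring brain states.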
Affiliation(s)
- Pranav G Reddy, Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcelo G Mattar, Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Andrew C Murphy, Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Nicholas F Wymbs, Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, MD 21218, USA
- Scott T Grafton, Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA
- Danielle S Bassett, Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
48
Abstract
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations, whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been combined before, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work, Markov Brains could only adapt from one generation to the next, so we introduce feedback gates, which augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal but instead generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines.
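The idea of a probabilistic logic gate that sharpens its own transition probabilities from internally generated feedback can be illustrated with a toy sketch. Everything here is an assumption for illustration: the class name, the single-table gate, and the fixed 0.1 update step are not the authors' Markov Brain implementation.

```python
import random

class FeedbackGate:
    """Toy probabilistic gate: P(output=1) per input pattern, nudged by feedback."""

    def __init__(self, n_inputs=2):
        # Start maximally uncertain for every input pattern.
        self.table = {i: 0.5 for i in range(2 ** n_inputs)}
        self.last = None  # (input pattern, output) awaiting feedback

    def fire(self, pattern):
        out = 1 if random.random() < self.table[pattern] else 0
        self.last = (pattern, out)
        return out

    def feedback(self, reward):
        # Internally generated feedback reinforces or suppresses the
        # transition just taken; no external objective signal is consulted.
        pattern, out = self.last
        delta = 0.1 if reward else -0.1
        p = self.table[pattern] + (delta if out == 1 else -delta)
        self.table[pattern] = min(0.99, max(0.01, p))

random.seed(1)
gate = FeedbackGate()
for _ in range(200):  # internally judge output 1 on input pattern 3 as "good"
    out = gate.fire(3)
    gate.feedback(reward=(out == 1))
```

After the loop, `gate.table[3]` has climbed from 0.5 toward its ceiling, i.e. the gate has learned the rewarded transition within its "lifetime" rather than across generations.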
49
Velez R, Clune J. Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks. PLoS One 2017; 12:e0187736. [PMID: 29145413 PMCID: PMC5690421 DOI: 10.1371/journal.pone.0187736] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2017] [Accepted: 10/25/2017] [Indexed: 01/30/2023] Open
Abstract
A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, in which learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e., up- or down-regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging but important problem of catastrophic forgetting.
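The core mechanism in this abstract, a diffusing chemical that gates plasticity by distance so learning stays confined to a spatial region, can be sketched in a few lines. The grid geometry, Gaussian diffusion profile, and constants below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Give each connection a 2D position on a grid (assumed geometry).
xs = np.linspace(0, 1, 10)
positions = np.array([(x, y) for x in xs for y in xs])  # 100 connections
weights = np.zeros(len(positions))

def modulated_update(release_point, grad, sigma=0.1, lr=1.0):
    """Scale each connection's update by the local neuromodulator level."""
    d2 = ((positions - release_point) ** 2).sum(axis=1)
    gate = np.exp(-d2 / (2 * sigma ** 2))  # diffusing chemical concentration
    return lr * gate * grad                # up-/down-regulated learning

# Task A releases modulator near (0.2, 0.2): only nearby weights change
# appreciably, carving out a task-specific functional module.
weights += modulated_update(np.array([0.2, 0.2]), grad=1.0)
near = ((positions - [0.2, 0.2]) ** 2).sum(axis=1) < 0.01
```

A second task releasing modulator elsewhere would update a disjoint region, which is how localized plasticity protects previously learned weights from being overwritten.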
Affiliation(s)
- Roby Velez, Computer Science Department, University of Wyoming, Laramie, Wyoming, United States of America
- Jeff Clune, Computer Science Department, University of Wyoming, Laramie, Wyoming, United States of America; Uber AI Labs, San Francisco, California, United States of America
50
Evolutionary Multi-task Learning for Modular Knowledge Representation in Neural Networks. Neural Process Lett 2017. [DOI: 10.1007/s11063-017-9718-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]