1. Palacios ER, Chadderton P, Friston K, Houghton C. Cerebellar state estimation enables resilient coupling across behavioural domains. Sci Rep 2024; 14:6641. PMID: 38503802; PMCID: PMC10951354; DOI: 10.1038/s41598-024-56811-x.
Abstract
Cerebellar computations are necessary for fine behavioural control and may rely on internal models for estimation of behaviourally relevant states. Here, we propose that the central cerebellar function is to estimate how states interact with each other, and to use these estimates to coordinate extra-cerebellar neuronal dynamics underpinning a range of interconnected behaviours. To support this claim, we describe a cerebellar model for state estimation that includes state interactions, and link this model with the neuronal architecture and dynamics observed empirically. This is formalised using the free energy principle, which provides a dual perspective on a system in terms of both the dynamics of its physical (in this case, neuronal) states and the inferential process they entail. As a demonstration of this proposal, we simulate cerebellar-dependent synchronisation of whisking and respiration, which are known to be tightly coupled in rodents, as well as limb and tail coordination during locomotion. In summary, we propose that the ubiquitous involvement of the cerebellum in behaviour arises from its central role in precisely coupling behavioural domains.
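The abstract's central idea, an internal model that exploits interactions between states to estimate them, can be illustrated with a minimal observer sketch. This is not the paper's model: the Luenberger-style observer, the linear coupled dynamics (loosely standing in for whisking and respiration), the gains, and the driving input are all invented for illustration. The point is that because the internal model includes the coupling term, the hidden second state can be tracked from observations of the first alone.

```python
import math

# Illustrative sketch (not the paper's model): a Luenberger-style observer
# tracks two coupled states x1, x2 from observations of x1 only. The
# internal model includes the interaction term c, which is what lets the
# unobserved state x2 be estimated. All constants are assumptions.

a, c = 1.0, 0.5          # leak and coupling strength (assumed)
l1, l2 = 5.0, 2.0        # observer gains (assumed)
dt, steps = 0.01, 2000

x1, x2 = 1.0, -0.5       # true states
e1, e2 = 0.0, 0.0        # estimated states (start wrong)

for k in range(steps):
    u = math.sin(0.5 * k * dt)                   # shared driving input
    # true coupled dynamics
    dx1 = -a * x1 + c * x2 + u
    dx2 = -a * x2 + c * x1
    # observer: same internal model + correction from the observed state
    y = x1                                       # only x1 is observed
    de1 = -a * e1 + c * e2 + u + l1 * (y - e1)
    de2 = -a * e2 + c * e1 + l2 * (y - e1)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    e1, e2 = e1 + dt * de1, e2 + dt * de2

print(abs(x2 - e2))  # estimation error of the *unobserved* state shrinks
```

Removing the coupling term from the observer's internal model (setting c to 0 in the `de2` line only) leaves the estimate of x2 uninformed by the observations, which is the sense in which the interaction structure carries the estimation.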
Affiliation(s)
- Ensor Rafael Palacios
- University of Bristol, School of Physiology Pharmacology and Neuroscience, Bristol, BS8 1TD, UK.
- Paul Chadderton
- University of Bristol, School of Physiology Pharmacology and Neuroscience, Bristol, BS8 1TD, UK
- Karl Friston
- UCL, Wellcome Centre for Human Neuroimaging, London, WC1N 3AR, UK
- Conor Houghton
- University of Bristol, Department of Computer Science, Bristol, BS8 1UB, UK
2. Hauke DJ, Charlton CE, Schmidt A, Griffiths JD, Woods SW, Ford JM, Srihari VH, Roth V, Diaconescu AO, Mathalon DH. Aberrant Hierarchical Prediction Errors Are Associated With Transition to Psychosis: A Computational Single-Trial Analysis of the Mismatch Negativity. Biol Psychiatry Cogn Neurosci Neuroimaging 2023; 8:1176-1185. PMID: 37536567; DOI: 10.1016/j.bpsc.2023.07.011.
Abstract
BACKGROUND Mismatch negativity reductions are among the most reliable biomarkers for schizophrenia and have been associated with increased risk for conversion to psychosis in individuals who are at clinical high risk for psychosis (CHR-P). Here, we adopted a computational approach to develop a mechanistic model of mismatch negativity reductions in CHR-P individuals and patients early in the course of schizophrenia. METHODS Electroencephalography was recorded in 38 CHR-P individuals (15 converters), 19 patients early in the course of schizophrenia (≤5 years), and 44 healthy control participants during three auditory oddball mismatch negativity paradigms containing 10% duration, frequency, or double deviants, respectively. We modeled sensory learning with the hierarchical Gaussian filter and extracted precision-weighted prediction error trajectories from the model to assess how the expression of hierarchical prediction errors modulated electroencephalography amplitudes over sensor space and time. RESULTS Both low-level sensory and high-level volatility precision-weighted prediction errors were altered in CHR-P individuals and patients early in the course of schizophrenia compared with healthy control participants. Moreover, low-level precision-weighted prediction errors were significantly different in CHR-P individuals who later converted to psychosis compared with nonconverters. CONCLUSIONS Our results implicate altered processing of hierarchical prediction errors as a computational mechanism in early psychosis, consistent with predictive coding accounts of psychosis. The model appears to capture pathophysiological mechanisms relevant to early psychosis and to the risk of future psychosis in CHR-P individuals, and the derived prediction-error measures may serve as predictive biomarkers and mechanistic targets for the development of novel treatments.
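The key quantity in this study, a precision-weighted prediction error, can be sketched in a few lines. This is a single-level Gaussian belief update, not the multi-level hierarchical Gaussian filter the authors used; only the 10% deviant rate is taken from the design described above, and the prior values and precisions are assumptions.

```python
import random

# Minimal sketch of precision-weighted prediction-error updating on a
# binary oddball stream (1 = deviant). Single-level Gaussian update,
# NOT the full hierarchical Gaussian filter; priors/precisions assumed.

random.seed(0)
trials = [1 if random.random() < 0.10 else 0 for _ in range(500)]

mu, pi_mu = 0.5, 1.0   # prior belief about P(deviant) and its precision
pi_u = 1.0             # sensory precision (assumed)
rates = []

for u in trials:
    delta = u - mu                 # prediction error
    lr = pi_u / (pi_mu + pi_u)     # precision ratio acts as learning rate
    mu += lr * delta               # precision-weighted update
    pi_mu += pi_u                  # belief precision grows with evidence
    rates.append(lr)

print(round(mu, 3))  # drifts toward the true deviant probability (~0.1)
```

The characteristic feature, visible in `rates`, is that the effective learning rate shrinks as the belief becomes more precise; the hierarchical model in the paper additionally lets a volatility level modulate these precisions from above.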
Affiliation(s)
- Daniel J Hauke
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, United Kingdom.
- Colleen E Charlton
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- André Schmidt
- Department of Psychiatry, University of Basel, Basel, Switzerland
- John D Griffiths
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Scott W Woods
- Department of Psychiatry, Yale University School of Medicine, New Haven, Connecticut
- Judith M Ford
- Mental Health Service, Veterans Affairs San Francisco Health Care System, San Francisco, California; Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California
- Vinod H Srihari
- Department of Psychiatry, Yale University School of Medicine, New Haven, Connecticut
- Volker Roth
- Department of Mathematics and Computer Science, University of Basel, Basel, Switzerland
- Andreea O Diaconescu
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Daniel H Mathalon
- Mental Health Service, Veterans Affairs San Francisco Health Care System, San Francisco, California; Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, California
3. Schröger E, Roeber U, Coy N. Markov chains as a proxy for the predictive memory representations underlying mismatch negativity. Front Hum Neurosci 2023; 17:1249413. PMID: 37771348; PMCID: PMC10525344; DOI: 10.3389/fnhum.2023.1249413.
Abstract
Events not conforming to a regularity inherent in a sequence of events elicit prediction error signals of the brain, such as the mismatch negativity (MMN), and impair behavioral task performance. Events conforming to a regularity lead to attenuation of brain activity, such as stimulus-specific adaptation (SSA), and to behavioral benefits. Such findings are usually explained by theories stating that the information processing system predicts the forthcoming event of the sequence via detected sequential regularities. A mathematical model widely used to describe, analyze, and generate event sequences is the Markov chain: it contains a set of possible events and a set of probabilities for transitions between these events (the transition matrix), which allow the next event to be predicted on the basis of the current event and the transition probabilities. The accuracy of such a prediction depends on the distribution of the transition probabilities. We argue that Markov chains also have useful applications when studying cognitive brain functions. The transition matrix can be regarded as a proxy for the generative memory representations that the brain uses to predict the next event. We assume that detected regularities in a sequence of events correspond to (a subset of) the entries in the transition matrix. We apply this idea to MMN research and examine three types of MMN paradigms: classical oddball paradigms emphasizing sound probabilities, between-sound regularity paradigms manipulating transition probabilities between adjacent sounds, and action-sound coupling paradigms in which sounds are associated with actions and their intended effects. We show that the Markovian view of MMN yields theoretically relevant insights into the brain processes underlying MMN and stimulates experimental designs for studying the brain's processing of event sequences.
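The transition-matrix-as-memory idea described above is easy to make concrete: estimate a transition matrix from an observed event sequence, then predict the next event from the current one. The toy "oddball" sequence below (frequent standard A, occasional deviant B) is invented for illustration.

```python
from collections import defaultdict

# Sketch of the Markovian view: fit a transition matrix to an event
# sequence and use it to predict the next event from the current one.

def fit_transitions(seq):
    """Count transitions and normalise rows into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    return {s: {t: n / sum(row.values()) for t, n in row.items()}
            for s, row in counts.items()}

def predict_next(trans, state):
    """Most probable successor of `state` under the fitted chain."""
    return max(trans[state], key=trans[state].get)

seq = "AAAABAAAABAAAAB"          # standards with occasional deviants
trans = fit_transitions(seq)

print(trans["A"])                # {'A': 0.75, 'B': 0.25}
print(predict_next(trans, "A"))  # 'A' -- the chain predicts the standard
print(predict_next(trans, "B"))  # 'A' -- a deviant is never repeated here
```

An event that violates this prediction (a B following an A, predicted to be A) is exactly the kind of low-transition-probability event that, on the account above, should elicit an MMN-like prediction error.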
Affiliation(s)
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Urte Roeber
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Nina Coy
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Max Planck School of Cognition, Leipzig, Germany
4. Xu S, Li W, Zhu Y, Xu A. A novel hybrid model for six main pollutant concentrations forecasting based on improved LSTM neural networks. Sci Rep 2022; 12:14434. PMID: 36002466; PMCID: PMC9402967; DOI: 10.1038/s41598-022-17754-3.
Abstract
In recent years, air pollution has become a factor that cannot be ignored, affecting human lives and health. The distribution of high-density populations and high-intensity development and construction have accentuated the problem of air pollution in China. To accelerate air pollution control and effectively improve environmental air quality, we targeted cities with serious air pollution problems and established a model for air pollution prediction, using daily air pollution monitoring data from January 2016 to December 2020 for the respective cities. We used a long short-term memory (LSTM) network to mitigate the vanishing and exploding gradient problems of recurrent neural networks, then used the particle swarm optimization algorithm to determine the parameters of the CNN-LSTM model, and finally introduced complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) to decompose the air pollution series and improve the accuracy of model prediction. The experimental results show that, compared with a single LSTM model, the CEEMDAN-CNN-LSTM model has higher accuracy and lower prediction errors. The CEEMDAN-CNN-LSTM model enables a more precise prediction of air pollution, and may thus be useful for sustainable management and control of air pollution.
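The full CEEMDAN-CNN-LSTM pipeline is far beyond a short sketch, but its core principle, decompose the series, forecast each component separately, then recombine, can be illustrated with deliberately simple stand-ins: a causal moving average in place of CEEMDAN, and naive trend/seasonal rules in place of the neural forecasters. The synthetic series and all window sizes below are invented.

```python
import math

# Decompose-forecast-recombine sketch (stand-ins only, NOT the paper's
# method): causal moving-average "trend" + oscillatory residual, with a
# linear-extrapolation forecast for the trend and a seasonal-naive
# forecast for the residual, versus a plain persistence baseline.

period, window = 12, 12
s = [0.1 * t + math.sin(2 * math.pi * t / period) for t in range(120)]

trend = [sum(s[t - window + 1:t + 1]) / window
         for t in range(window - 1, len(s))]
offset = window - 1                    # trend[t - offset] pairs with s[t]
resid = [s[t] - trend[t - offset] for t in range(offset, len(s))]

hybrid_err, naive_err = [], []
for t in range(2 * period + offset, len(s) - 1):
    f_trend = 2 * trend[t - offset] - trend[t - offset - 1]  # extrapolate
    f_resid = resid[t + 1 - period - offset]                 # seasonal naive
    hybrid_err.append(abs(s[t + 1] - (f_trend + f_resid)))
    naive_err.append(abs(s[t + 1] - s[t]))                   # persistence

print(sum(hybrid_err) / len(hybrid_err), sum(naive_err) / len(naive_err))
```

On this trend-plus-seasonal series the decomposed forecast is far better than persistence, which is the same structural argument the paper makes for decomposing pollution series before forecasting each component with a learned model.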
Affiliation(s)
- Shenyi Xu
- School of Statistics and Mathematics, Zhejiang Gongshang University, No.18 Xuezheng Street, Xiasha Higher Education Park, Hangzhou, Zhejiang, China
- Wei Li
- School of Statistics and Mathematics, Zhejiang Gongshang University, No. 18 Xuezheng Street, Xiasha Higher Education Park, Hangzhou, Zhejiang, China
- Yuhan Zhu
- School of Statistics and Mathematics, Zhejiang Gongshang University, No. 18 Xuezheng Street, Xiasha Higher Education Park, Hangzhou, Zhejiang, China
- Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou, China
- Aiting Xu
- School of Statistics and Mathematics, Zhejiang Gongshang University, No. 18 Xuezheng Street, Xiasha Higher Education Park, Hangzhou, Zhejiang, China
- Collaborative Innovation Center of Statistical Data Engineering, Technology & Application, Zhejiang Gongshang University, Hangzhou, China
5. Popp NJ, Hernandez-Castillo CR, Gribble PL, Diedrichsen J. The role of feedback in the production of skilled finger sequences. J Neurophysiol 2022; 127:829-839. PMID: 35235441; PMCID: PMC8957329; DOI: 10.1152/jn.00319.2021.
Abstract
Actions involving fine control of the hand, for example, grasping an object, rely heavily on sensory information from the fingertips. Although the integration of feedback during the execution of individual movements is well understood, less is known about the use of sensory feedback in the control of skilled movement sequences. To address this gap, we trained participants to produce sequences of finger movements on a keyboard-like device over a 4-day training period. Participants received haptic, visual, and auditory feedback indicating the occurrence of each finger press. We then either transiently delayed or advanced the feedback for a single press by a small amount of time (30 or 60 ms). We observed that participants rapidly adjusted their ongoing finger press by either accelerating or prolonging it, in accordance with the direction of the perturbation. Furthermore, we showed that this rapid behavioral modulation was driven by haptic feedback. Although these feedback-driven adjustments diminished with practice, they were still clearly present at the end of training. In contrast to the directionally specific effect we observed on the perturbed press, a feedback perturbation resulted in a delayed onset of the subsequent presses irrespective of perturbation direction or feedback modality. This observation is consistent with a hierarchical organization of even very skilled and fast movement sequences, with different levels reacting distinctly to sensory perturbations. NEW & NOTEWORTHY: Sensory feedback is important during the execution of a movement. However, little is known about how sensory feedback is used during the production of movement sequences. Here, we show two distinct feedback processes in the execution of fast finger movement sequences. By transiently delaying or advancing the feedback of a single press within a sequence, we observed a directionally specific effect on the perturbed press and a directionally nonspecific effect on the subsequent presses.
Affiliation(s)
- Nicola J Popp
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Paul L Gribble
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Department of Psychology, University of Western Ontario, London, Ontario, Canada
- Department of Physiology & Pharmacology, University of Western Ontario, London, Ontario, Canada
- Haskins Laboratories, New Haven, Connecticut
- Jörn Diedrichsen
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Department of Statistical and Actuarial Sciences, University of Western Ontario, London, Ontario, Canada
- Department of Computer Science, University of Western Ontario, London, Ontario, Canada
6. Kurikawa T, Kaneko K. Multiple-Timescale Neural Networks: Generation of History-Dependent Sequences and Inference Through Autonomous Bifurcations. Front Comput Neurosci 2021; 15:743537. PMID: 34955798; PMCID: PMC8702558; DOI: 10.3389/fncom.2021.743537.
Abstract
Sequential transitions between metastable states are ubiquitously observed in the neural system and underlie various cognitive functions, such as perception and decision making. Although a number of studies with asymmetric Hebbian connectivity have investigated how such sequences are generated, the sequences considered have been simple Markov ones. On the other hand, recurrent neural networks trained with supervised machine learning methods can generate complex non-Markov sequences, but these sequences are vulnerable to perturbations, and such learning methods are biologically implausible. How stable and complex sequences are generated in the neural system remains unclear. We have developed a neural network with fast and slow dynamics, inspired by the hierarchy of timescales of neural activity in the cortex. The slow dynamics store the history of inputs and outputs and affect the fast dynamics depending on the stored history. We show that a learning rule requiring only local information can form a network that generates complex and robust sequences in the fast dynamics. The slow dynamics work as bifurcation parameters for the fast dynamics: they stabilize the next pattern of the sequence before the current pattern is destabilized, depending on the previous patterns. This co-existence period leads to a stable transition between the current and the next pattern in the non-Markov sequence. We further find that the balance of timescales is critical to the co-existence period. Our study provides a novel mechanism for generating robust complex sequences with multiple timescales. Given that multiple timescales are widely observed, this mechanism advances our understanding of temporal processing in the neural system.
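The core idea, a slow variable that integrates input history and acts as a bifurcation parameter for fast dynamics, can be shown in a two-variable toy. This is only loosely analogous to the paper's network: the bistable `tanh` unit, the gains, and the timescales are invented. The fast variable ends up in the attractor selected by the input history, and stays there after the input (and even the slow trace) has decayed.

```python
import math

# Toy fast-slow system (not the paper's network): a slow leaky integrator
# s stores input history and biases a fast bistable variable x, so the
# attractor x finally occupies depends on past inputs. Constants assumed.

def run(drive):
    x, s = 0.0, 0.0                 # fast state, slow state
    dt, tau_slow = 0.1, 50.0
    for k in range(3000):
        inp = drive if k < 1000 else 0.0        # input only early on
        s += dt * (inp - s) / tau_slow          # slow: integrates history
        x += dt * (math.tanh(3.0 * x + s) - x)  # fast: bistable, biased by s
    return x

print(run(+0.5), run(-0.5))  # opposite attractors: history is remembered
```

The slow variable here plays the bifurcation-parameter role described above: while it is nonzero it tilts the fast energy landscape toward one attractor, and the bistability of the fast dynamics then preserves that choice after the tilt disappears.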
Affiliation(s)
- Tomoki Kurikawa
- Department of Physics, Kansai Medical University, Hirakata, Japan
- Kunihiko Kaneko
- Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan
- Center for Complex Systems Biology, Universal Biology Institute, University of Tokyo, Tokyo, Japan
7. Fukai T, Asabuki T, Haga T. Neural mechanisms for learning hierarchical structures of information. Curr Opin Neurobiol 2021; 70:145-153. PMID: 34808521; DOI: 10.1016/j.conb.2021.10.011.
Abstract
Spatial and temporal information from the environment is often hierarchically organized, and so is the knowledge we form about the environment. Identifying the meaningful segments embedded in hierarchically structured information is crucial for cognitive functions, including visual, auditory, motor, memory, and language processing. Segmentation makes it possible to grasp the links between otherwise isolated entities, offering a basis for reasoning and thinking. Importantly, the brain learns such segmentation without external instruction. Here, we review the underlying computational mechanisms implemented at the single-cell and network levels. The network-level mechanism bears an interesting similarity to machine learning methods for graph segmentation. The brain possibly implements methods for analyzing the hierarchical structure of the environment at multiple levels of its processing hierarchy.
Affiliation(s)
- Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan.
- Toshitake Asabuki
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
- Tatsuya Haga
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
8. Friston K, Heins C, Ueltzhöffer K, Da Costa L, Parr T. Stochastic Chaos and Markov Blankets. Entropy (Basel) 2021; 23:1220. PMID: 34573845; PMCID: PMC8465859; DOI: 10.3390/e23091220.
Abstract
In this treatment of random dynamical systems, we consider the existence, and identification, of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions, and the functional form of the underlying densities, have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition, and polynomial expansions, to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified, using the accompanying Hessian, to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology.
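The synchronisation-of-chaos phenomenon that the paper generalises has a classic deterministic demonstration on the same Lorenz system: a response subsystem that only receives the drive's x-coordinate converges onto the drive's (y, z) trajectory (Pecora-Carroll synchronisation). The sketch below uses the standard Lorenz parameters but omits the paper's stochastic terms and Helmholtz machinery; the integration step and initial conditions are arbitrary choices.

```python
# Pecora-Carroll synchronisation of chaos on the Lorenz system: a
# response subsystem driven only by the x-coordinate of the drive
# converges to the drive's (y, z). Deterministic sketch, not the
# paper's stochastic treatment.

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.002, 10000

x, y, z = 1.0, 1.0, 1.0        # drive system
yr, zr = -5.0, 30.0            # response, deliberately mismatched

for _ in range(steps):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # response sees only x; its own (yr, zr) replace (y, z)
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr

print(abs(yr - y), abs(zr - z))  # both differences collapse toward zero
```

The error dynamics here admit a Lyapunov function whose derivative is strictly negative, so the response locks onto the drive despite both being chaotic; this is the elementary instance of the generalised synchrony the abstract invokes for coupling across a Markov blanket.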
Affiliation(s)
- Karl Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Conor Heins
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, 78457 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, 78457 Konstanz, Germany
- Department of Biology, University of Konstanz, 78457 Konstanz, Germany
- Kai Ueltzhöffer
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Department of General Psychiatry, Centre of Psychosocial Medicine, Heidelberg University, Voßstraße 2, 69115 Heidelberg, Germany
- Lancelot Da Costa
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Department of Mathematics, Imperial College London, London SW7 2AZ, UK
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
9. Frölich S, Marković D, Kiebel SJ. Neuronal Sequence Models for Bayesian Online Inference. Front Artif Intell 2021; 4:530937. PMID: 34095815; PMCID: PMC8176225; DOI: 10.3389/frai.2021.530937.
Abstract
Various imaging and electrophysiological studies in a number of different species and brain regions have revealed that neuronal dynamics associated with diverse behavioral patterns and cognitive tasks take on a sequence-like structure, even when encoding stationary concepts. These neuronal sequences are characterized by robust and reproducible spatiotemporal activation patterns. This suggests that the role of neuronal sequences may be much more fundamental for brain function than is commonly believed. Furthermore, the idea that the brain is not simply a passive observer but an active predictor of its sensory input is supported by an enormous amount of evidence in fields as diverse as human ethology and physiology, in addition to neuroscience. Hence, a central aspect of this review is to illustrate how neuronal sequences can be understood as critical for probabilistic predictive information processing, and what dynamical principles can be used as generators of neuronal sequences. Moreover, since different lines of evidence from neuroscience and computational modeling suggest that the brain is organized in a functional hierarchy of time scales, we also review how models based on sequence-generating principles can be embedded in such a hierarchy, to form a generative model for recognition and prediction of sensory input. We briefly introduce the Bayesian brain hypothesis as a prominent mathematical description of how online (i.e., fast) recognition and prediction may be computed by the brain. Finally, we discuss some recent advances in machine learning, where spatiotemporally structured methods (akin to neuronal sequences) and hierarchical networks have independently been developed for a wide range of tasks. We conclude that the investigation of specific dynamical and structural principles of sequential brain activity not only helps us understand how the brain processes information and generates predictions, but also informs us about neuroscientific principles potentially useful for designing more efficient artificial neural networks for machine learning tasks.
Affiliation(s)
- Sascha Frölich
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
10. Çatal O, Wauthier S, De Boom C, Verbelen T, Dhoedt B. Learning Generative State Space Models for Active Inference. Front Comput Neurosci 2020; 14:574372. PMID: 33304260; PMCID: PMC7701292; DOI: 10.3389/fncom.2020.574372.
Abstract
In this paper we investigate the active inference framework as a means to enable autonomous behavior in artificial agents. Active inference is a theoretical framework underpinning the way organisms act and observe in the real world. In active inference, agents act in order to minimize their so-called free energy, or prediction error. Besides being biologically plausible, active inference has been shown to solve hard exploration problems in various simulated environments. However, these simulations typically require handcrafting a generative model for the agent. We therefore propose to use recent advances in deep artificial neural networks to learn generative state space models from scratch, using only observation-action sequences. This allows us to scale active inference to new and challenging problem domains, whilst still building on the theoretical backing of the free energy principle. We validate our approach on the mountain car problem to illustrate that our learnt models can indeed trade off instrumental value and ambiguity. Furthermore, we show that generative models can also be learnt using high-dimensional pixel observations, both in the OpenAI Gym car racing environment and in a real-world robotic navigation task. Finally, we show that active inference based policies are an order of magnitude more sample efficient than deep Q-networks on reinforcement learning tasks.
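The action-selection principle underlying this work can be sketched in a discrete toy: an agent scores each action by how far its predicted outcome distribution diverges from its preferred outcomes, and picks the minimiser. Unlike the paper, the generative model below is hand-crafted rather than learnt, ambiguity is zero by construction (so expected free energy reduces to this risk term), and the 3-state world, transitions, and preferences are all invented.

```python
import math

# Minimal discrete active-inference-style action selection (toy, with a
# hand-crafted generative model, NOT the paper's learnt deep model):
# choose the action whose predicted outcomes are closest, in KL
# divergence, to the preferred outcomes.

states = [0, 1, 2]                      # e.g. positions; goal is state 2
B = {                                   # deterministic transition model
    "left":  {0: 0, 1: 0, 2: 1},
    "right": {0: 1, 1: 2, 2: 2},
}
zs = sum(math.exp(s) for s in states)   # graded preferences over outcomes
prefer = [math.exp(s) / zs for s in states]

def risk(qs, action):
    """KL from predicted outcome distribution to preferred outcomes."""
    qo = [0.0] * len(states)
    for s, p in enumerate(qs):
        qo[B[action][s]] += p
    return sum(p * math.log(p / prefer[o]) for o, p in enumerate(qo) if p > 0)

qs = [1.0, 0.0, 0.0]                    # belief: currently in state 0
for step in range(2):
    act = min(B, key=lambda a: risk(qs, a))   # minimise expected risk
    new_qs = [0.0] * len(states)
    for s, p in enumerate(qs):
        new_qs[B[act][s]] += p
    qs = new_qs
    print(step, act, qs)                # "right" twice: belief reaches goal
```

The paper's contribution sits one level below this sketch: learning the transition model (here the hand-written `B`) and the outcome mapping from raw observation-action sequences with deep networks, so that the same selection principle scales to pixel inputs.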
Affiliation(s)
- Ozan Çatal
- IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium
11. Bennett M. An Attempt at a Unified Theory of the Neocortical Microcircuit in Sensory Cortex. Front Neural Circuits 2020; 14:40. PMID: 32848632; PMCID: PMC7416357; DOI: 10.3389/fncir.2020.00040.
Abstract
The neocortex performs a wide range of functions, including working memory, sensory perception, and motor planning. Despite this diversity in function, evidence suggests that the neocortex is made up of repeating subunits ("macrocolumns"), each of which is largely identical in circuitry. As such, the specific computations performed by these macrocolumns are of great interest to neuroscientists and AI researchers. Leading theories of this microcircuit include models of predictive coding, hierarchical temporal memory (HTM), and Adaptive Resonance Theory (ART). However, these models have not yet explained: (1) how microcircuits learn sequences input with delay (i.e., working memory); (2) how networks of columns coordinate processing on precise timescales; or (3) how top-down attention modulates sensory processing. I provide a theory of the neocortical microcircuit that extends prior models in all three ways. Additionally, this theory provides a novel working memory circuit that extends prior models to support simultaneous multi-item storage without disrupting ongoing sensory processing. I then use this theory to explain the functional origin of a diverse set of experimental findings, such as cortical oscillations.
Affiliation(s)
- Max Bennett
- Independent Researcher, New York, NY, United States
12. Butti N, Corti C, Finisguerra A, Bardoni A, Borgatti R, Poggi G, Urgesi C. Cerebellar Damage Affects Contextual Priors for Action Prediction in Patients with Childhood Brain Tumor. Cerebellum 2020; 19:799-811. DOI: 10.1007/s12311-020-01168-w.
13. Somatodendritic consistency check for temporal feature segmentation. Nat Commun 2020; 11:1554. PMID: 32214100; PMCID: PMC7096495; DOI: 10.1038/s41467-020-15367-w.
Abstract
The brain identifies potentially salient features within continuous information streams to process hierarchical temporal events. This requires the compression of information streams, for which effective computational principles are yet to be explored. Backpropagating action potentials can induce synaptic plasticity in the dendrites of cortical pyramidal neurons. By analogy with this effect, we model a self-supervising process that increases the similarity between dendritic and somatic activities, where the somatic activity is normalized by a running average. We further show that a family of networks composed of these two-compartment neurons performs a surprisingly wide variety of complex unsupervised learning tasks, including chunking of temporal sequences and the source separation of mixed correlated signals. Common methods applicable to these temporal feature analyses were previously unknown. Our results suggest the powerful ability of neural networks with dendrites to analyze temporal features. This simple neuron model may also prove useful in neural engineering applications. In brief, the authors propose a learning rule for a neuron model with a dendrite, in which somatodendritic interaction implements self-supervised learning applicable to a wide range of sequence learning tasks, including spike pattern detection, chunking of temporal input, and blind source separation.
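The self-supervising principle described above, dendritic weights trained to match the somatic activity after normalisation by a running average, using only local signals, can be caricatured in a linear toy. This is a loose sketch, not the paper's two-compartment model: the linear soma, the fixed "teacher" weights, and every constant are invented stand-ins.

```python
import random

# Loose sketch of dendrite-learns-normalised-soma (NOT the paper's
# model): a delta rule pushes the dendritic prediction w.x toward the
# somatic activity divided by its running average, using only signals
# local to the neuron.

random.seed(1)
w_true = [0.5, -0.3, 0.8]     # drives the soma (assumed "teacher")
w = [0.0, 0.0, 0.0]           # dendritic weights, learned
avg, gamma, lr = 1.0, 0.01, 0.02
errors = []

for _ in range(3000):
    x = [random.gauss(0, 1) for _ in w]
    soma = sum(wt * xi for wt, xi in zip(w_true, x))
    avg = (1 - gamma) * avg + gamma * abs(soma)   # running average
    target = soma / (avg + 1e-6)                  # normalised somatic signal
    dend = sum(wi * xi for wi, xi in zip(w, x))   # dendritic prediction
    err = target - dend
    for i, xi in enumerate(x):                    # local delta-rule update
        w[i] += lr * err * xi
    errors.append(abs(err))

early = sum(errors[:100]) / 100
late = sum(errors[-100:]) / 100
print(early, late)   # dendritic prediction error shrinks with learning
```

The running-average normalisation is the distinctive ingredient: it keeps the target signal at a stable scale regardless of how strongly the soma happens to be driven, which is what the consistency-check framing above relies on.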
14. Vasil J, Badcock PB, Constant A, Friston K, Ramstead MJD. A World Unto Itself: Human Communication as Active Inference. Front Psychol 2020; 11:417. PMID: 32269536; PMCID: PMC7109408; DOI: 10.3389/fpsyg.2020.00417.
Abstract
Recent theoretical work in developmental psychology suggests that humans are predisposed to align their mental states with those of other individuals. One way this manifests is in cooperative communication; that is, intentional communication aimed at aligning individuals' mental states with respect to events in their shared environment. This idea has received strong empirical support. The purpose of this paper is to extend this account by proposing an integrative model of the biobehavioral dynamics of cooperative communication. Our formulation is based on active inference. Active inference suggests that action-perception cycles operate to minimize uncertainty and optimize an individual's internal model of the world. We propose that humans are characterized by an evolved adaptive prior belief that their mental states are aligned with, or similar to, those of conspecifics (i.e., that 'we are the same sort of creature, inhabiting the same sort of niche'). The use of cooperative communication emerges as the principal means to gather evidence for this belief, allowing for the development of a shared narrative that is used to disambiguate interactants' (hidden and inferred) mental states. Thus, by using cooperative communication, individuals effectively attune to a hermeneutic niche composed, in part, of others' mental states; and, reciprocally, attune the niche to their own ends via epistemic niche construction. This means that niche construction enables features of the niche to encode precise, reliable cues about the deontic or shared value of certain action policies (e.g., the utility of using communicative constructions to disambiguate mental states, given expectations about shared prior beliefs). In turn, the alignment of mental states (prior beliefs) enables the emergence of a novel, contextualizing scale of cultural dynamics that encompasses the actions and mental states of the ensemble of interactants and their shared environment. 
The dynamics of this contextualizing layer of cultural organization feed back, across scales, to constrain the variability of the prior expectations of the individuals who constitute it. Our theory additionally builds upon the active inference literature by introducing a new set of neurobiologically plausible computational hypotheses for cooperative communication. We conclude with directions for future research.
Affiliation(s)
- Jared Vasil
- Department of Psychology and Neuroscience, Duke University, Durham, NC, United States
| | - Paul B. Badcock
- Centre for Youth Mental Health, The University of Melbourne, Melbourne, VIC, Australia
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Orygen, Melbourne, VIC, Australia
| | - Axel Constant
- Charles Perkins Centre, The University of Sydney, Camperdown, NSW, Australia
- Culture, Mind, and Brain Program, McGill University, Montreal, QC, Canada
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
| | - Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
| | - Maxwell J. D. Ramstead
- Culture, Mind, and Brain Program, McGill University, Montreal, QC, Canada
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
| |
|
15
|
Zarghami TS, Friston KJ. Dynamic effective connectivity. Neuroimage 2019; 207:116453. [PMID: 31821868 DOI: 10.1016/j.neuroimage.2019.116453] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 10/29/2019] [Accepted: 12/06/2019] [Indexed: 01/17/2023] Open
Abstract
Metastability is a key source of itinerant dynamics in the brain; namely, spontaneous spatiotemporal reorganization of neuronal activity. This itinerancy has been the focus of numerous dynamic functional connectivity (DFC) analyses, developed to characterize the formation and dissolution of distributed functional patterns over time, using resting state fMRI. However, aside from technical and practical controversies, these approaches cannot recover the neuronal mechanisms that underwrite itinerant (e.g., metastable) dynamics, owing to their descriptive, model-free nature. We argue that effective connectivity (EC) analyses are more apt for investigating the neuronal basis of metastability. To this end, we appeal to biologically-grounded models (i.e., dynamic causal modelling, DCM) and dynamical systems theory (i.e., heteroclinic sequential dynamics) to create a probabilistic, generative model of haemodynamic fluctuations. This model generates trajectories in the parametric space of EC modes (i.e., states of connectivity) that characterize functional brain architectures. In brief, it extends an established spectral DCM to generate functional connectivity data features that change over time. This foundational paper tries to establish the model's face validity by simulating non-stationary fMRI time series and recovering key model parameters (i.e., transition probabilities among connectivity states and the parametric nature of these states) using variational Bayes. These data are further characterized using Bayesian model comparison (within and between subjects). Finally, we consider practical issues that attend applications and extensions of this scheme. Importantly, the scheme operates within a generic Bayesian framework that can be adapted to study metastability and itinerant dynamics in any non-stationary time series.
Affiliation(s)
- Tahereh S Zarghami
- Bio-Electric Department, School of Electrical and Computer Engineering, University of Tehran, Amirabad, Tehran, Iran.
| | - Karl J Friston
- The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London, WC1N 3AR, UK.
| |
|
16
|
Badcock PB, Friston KJ, Ramstead MJD, Ploeger A, Hohwy J. The hierarchically mechanistic mind: an evolutionary systems theory of the human brain, cognition, and behavior. Cogn Affect Behav Neurosci 2019; 19:1319-1351. [PMID: 31115833 PMCID: PMC6861365 DOI: 10.3758/s13415-019-00721-3] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
The purpose of this review was to integrate leading paradigms in psychology and neuroscience with a theory of the embodied, situated human brain, called the Hierarchically Mechanistic Mind (HMM). The HMM describes the brain as a complex adaptive system that functions to minimize the entropy of our sensory and physical states via action-perception cycles generated by hierarchical neural dynamics. First, we review the extant literature on the hierarchical structure of the brain. Next, we derive the HMM from a broader evolutionary systems theory that explains neural structure and function in terms of dynamic interactions across four nested levels of biological causation (i.e., adaptation, phylogeny, ontogeny, and mechanism). We then describe how the HMM aligns with a global brain theory in neuroscience called the free-energy principle, leveraging this theory to mathematically formulate neural dynamics across hierarchical spatiotemporal scales. We conclude by exploring the implications of the HMM for psychological inquiry.
Affiliation(s)
- Paul B Badcock
- Centre for Youth Mental Health, The University of Melbourne, Melbourne, Australia.
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia.
- Orygen, The National Centre of Excellence in Youth Mental Health, Melbourne, Australia.
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, University College London, London, UK
| | - Maxwell J D Ramstead
- Wellcome Trust Centre for Neuroimaging, University College London, London, UK
- Department of Philosophy, McGill University, Montreal, QC, Canada
- Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada
| | - Annemie Ploeger
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
| | - Jakob Hohwy
- Cognition & Philosophy Lab, Monash University, Clayton, VIC, Australia
| |
|
17
|
Badcock PB, Friston KJ, Ramstead MJD. The hierarchically mechanistic mind: A free-energy formulation of the human psyche. Phys Life Rev 2019; 31:104-121. [PMID: 30704846 PMCID: PMC6941235 DOI: 10.1016/j.plrev.2018.10.002] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2018] [Revised: 09/04/2018] [Accepted: 10/10/2018] [Indexed: 11/29/2022]
Abstract
This article presents a unifying theory of the embodied, situated human brain called the Hierarchically Mechanistic Mind (HMM). The HMM describes the brain as a complex adaptive system that actively minimises the decay of our sensory and physical states by producing self-fulfilling action-perception cycles via dynamical interactions between hierarchically organised neurocognitive mechanisms. This theory synthesises the free-energy principle (FEP) in neuroscience with an evolutionary systems theory of psychology that explains our brains, minds, and behaviour by appealing to Tinbergen's four questions: adaptation, phylogeny, ontogeny, and mechanism. After leveraging the FEP to formally define the HMM across different spatiotemporal scales, we conclude by exploring its implications for theorising and research in the sciences of the mind and behaviour.
Affiliation(s)
- Paul B Badcock
- Centre for Youth Mental Health, The University of Melbourne, Melbourne, 3052, Australia; Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, 3010, Australia; Orygen, the National Centre of Excellence in Youth Mental Health, Melbourne, 3052, Australia.
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, University College London, London, WC1N3BG, UK
| | - Maxwell J D Ramstead
- Wellcome Trust Centre for Neuroimaging, University College London, London, WC1N3BG, UK; Department of Philosophy, McGill University, Montreal, Quebec, H3A 2T7, Canada; Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, Quebec, H3A 1A1, Canada
| |
|
18
|
Abstract
By studying different sources of temporal variability in central pattern generator (CPG) circuits, we unveil fundamental aspects of the instantaneous balance between flexibility and robustness in sequential dynamics, a property that characterizes many systems that display neural rhythms. Our analysis of the triphasic rhythm of the pyloric CPG (Carcinus maenas) shows strong robustness of transient dynamics in keeping not only the activation sequences but also specific cycle-by-cycle temporal relationships in the form of strong linear correlations between pivotal time intervals, i.e., dynamical invariants. The level of variability and coordination was characterized using intrinsic time references and intervals in long recordings of both regular and irregular rhythms. Out of the many possible combinations of time intervals studied, only two cycle-by-cycle dynamical invariants were identified, existing even outside steady states. While executing a neural sequence, dynamical invariants reflect constraints to optimize functionality by shaping the actual intervals in which activity emerges to build the sequence. Our results indicate that such boundaries on adaptability arise from the interaction between the rich dynamics of neurons and connections. We suggest that invariant temporal sequence relationships could be present in other networks, including those shaping sequences of functional brain rhythms, and underlie rhythm programming and functionality.
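The notion of a dynamical invariant as a cycle-by-cycle linear correlation between time intervals can be made concrete with synthetic data. A minimal sketch (ours, with made-up intervals rather than pyloric recordings): an interval that scales with the cycle period shows a near-perfect Pearson correlation, while an unrelated interval does not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rhythm: 500 cycles with a variable period (arbitrary units).
n_cycles = 500
period = 1.0 + 0.3 * rng.standard_normal(n_cycles)

# One candidate interval co-varies with the period (a dynamical invariant);
# the other is independent of it.
delay_locked = 0.4 * period + 0.01 * rng.standard_normal(n_cycles)
delay_free = 0.4 + 0.05 * rng.standard_normal(n_cycles)

r_locked = np.corrcoef(period, delay_locked)[0, 1]  # near 1: invariant
r_free = np.corrcoef(period, delay_free)[0, 1]      # near 0: no invariant
```

Screening every pair of intervals this way mirrors the paper's search, in which only two of the many possible interval combinations behaved as invariants.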
|
19
|
Yi HG, Leonard MK, Chang EF. The Encoding of Speech Sounds in the Superior Temporal Gyrus. Neuron 2019; 102:1096-1110. [PMID: 31220442 PMCID: PMC6602075 DOI: 10.1016/j.neuron.2019.04.023] [Citation(s) in RCA: 173] [Impact Index Per Article: 34.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 04/08/2019] [Accepted: 04/16/2019] [Indexed: 01/02/2023]
Abstract
The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.
Affiliation(s)
- Han Gyol Yi
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
| | - Matthew K Leonard
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
| | - Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| |
|
20
|
Carrillo-Medina JL, Latorre R. Detection of Activation Sequences in Spiking-Bursting Neurons by means of the Recognition of Intraburst Neural Signatures. Sci Rep 2018; 8:16726. [PMID: 30425274 PMCID: PMC6233224 DOI: 10.1038/s41598-018-34757-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2018] [Accepted: 10/24/2018] [Indexed: 11/18/2022] Open
Abstract
Bursting activity is present in many cells of different nervous systems playing important roles in neural information processing. Multiple assemblies of bursting neurons act cooperatively to produce coordinated spatio-temporal patterns of sequential activity. A major goal in neuroscience is unveiling the mechanisms underlying neural information processing based on this sequential dynamics. Experimental findings have revealed the presence of precise cell-type-specific intraburst firing patterns in the activity of some bursting neurons. This characteristic neural signature coexists with the information encoded in other aspects of the spiking-bursting signals, and its functional meaning is still unknown. We investigate the ability of a conductance-based neuron model to detect specific presynaptic activation sequences, taking advantage of intraburst fingerprints identifying the source of the signals building up a sequential pattern of activity. Our simulations point out that a reader neuron could use this information to contextualize incoming signals and accordingly compute a characteristic response by relying on precise phase relationships among the activity of different emitters. This would provide individual neurons with enhanced capabilities to control and negotiate sequential dynamics. In this regard, we discuss the possible implications of the proposed contextualization mechanism for neural information processing.
Affiliation(s)
- José Luis Carrillo-Medina
- Departamento de Eléctrica y Electrónica, Universidad de las Fuerzas Armadas - ESPE, Sangolquí, Ecuador
| | - Roberto Latorre
- Grupo de Neurocomputación Biológica, Dpto. Ingeniería Informática, Universidad Autónoma de Madrid, 28049, Madrid, Spain.
| |
|
21
|
Asabuki T, Hiratani N, Fukai T. Interactive reservoir computing for chunking information streams. PLoS Comput Biol 2018; 14:e1006400. [PMID: 30296262 PMCID: PMC6193738 DOI: 10.1371/journal.pcbi.1006400] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2018] [Revised: 10/18/2018] [Accepted: 07/25/2018] [Indexed: 01/21/2023] Open
Abstract
Chunking is the process by which frequently repeated segments of temporal inputs are concatenated into single units that are easy to process. Such a process is fundamental to time-series analysis in biological and artificial information processing systems. The brain efficiently acquires chunks from various information streams in an unsupervised manner; however, the underlying mechanisms of this process remain elusive. A widely-adopted statistical method for chunking consists of predicting frequently repeated contiguous elements in an input sequence based on unequal transition probabilities over sequence elements. However, recent experimental findings suggest that the brain is unlikely to adopt this method, as human subjects can chunk sequences with uniform transition probabilities. In this study, we propose a novel conceptual framework to overcome this limitation. In this process, neural networks learn to predict dynamical response patterns to sequence input rather than to directly learn transition patterns. Using a mutually supervising pair of reservoir computing modules, we demonstrate how this mechanism works in chunking sequences of letters or visual images with variable regularity and complexity. In addition, we demonstrate that background noise plays a crucial role in correctly learning chunks in this model. In particular, the model can successfully chunk sequences that conventional statistical approaches fail to chunk due to uniform transition probabilities. In addition, the neural responses of the model exhibit an interesting similarity to those of the basal ganglia observed after motor habit formation.
Affiliation(s)
- Toshitake Asabuki
- Department of Complexity Science and Engineering, Univ. of Tokyo, Kashiwa, Chiba, Japan
| | - Naoki Hiratani
- Department of Complexity Science and Engineering, Univ. of Tokyo, Kashiwa, Chiba, Japan
- Gatsby Computational Neuroscience Unit, Univ. College London, London, United Kingdom
| | - Tomoki Fukai
- Department of Complexity Science and Engineering, Univ. of Tokyo, Kashiwa, Chiba, Japan
- RIKEN Center for Brain Science, Wako, Saitama, Japan
| |
|
22
|
Rabinovich MI, Varona P. Discrete Sequential Information Coding: Heteroclinic Cognitive Dynamics. Front Comput Neurosci 2018; 12:73. [PMID: 30245621 PMCID: PMC6137616 DOI: 10.3389/fncom.2018.00073] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2018] [Accepted: 08/14/2018] [Indexed: 12/22/2022] Open
Abstract
Discrete sequential information coding is a key mechanism that transforms complex cognitive brain activity into a low-dimensional dynamical process based on sequential switching among a finite number of patterns. The storage size of the corresponding process is large because of the permutation capacity of these patterns as a function of control signals. Extracting low-dimensional functional dynamics from multiple large-scale neural populations is a central problem in both the neural and cognitive sciences. Experimental results in the last decade represent a solid base for the creation of low-dimensional models of different cognitive functions and allow moving toward a dynamical theory of consciousness. We discuss here a methodology to build simple kinetic equations that can be the mathematical skeleton of this theory. Models of the corresponding discrete information processing can be designed using the following dynamical principles: (i) clusterization of the neural activity in space and time and formation of information patterns; (ii) robustness of the sequential dynamics based on heteroclinic chains of metastable clusters; and (iii) sensitivity of such sequential dynamics to intrinsic and external informational signals. We analyze sequential discrete coding based on low-frequency winnerless-competition dynamics. Under such dynamics, entrainment and heteroclinic coordination lead to a large variety of coding regimes that are invariant in time.
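Winnerless competition of this kind is commonly modelled with generalized Lotka-Volterra equations. The following is a hedged sketch (our parameter choices, not taken from the paper): three units with asymmetric inhibition produce a heteroclinic cycle, i.e., robust sequential switching of the momentary "winner".

```python
import numpy as np

rng = np.random.default_rng(2)

# Generalized Lotka-Volterra winnerless competition among three units:
#   dx_i/dt = x_i * (sigma - sum_j rho_ij x_j)
# The asymmetric matrix rho makes each saddle repel toward the next unit,
# yielding the cyclic winner sequence 0 -> 1 -> 2 -> 0 -> ...
sigma = 1.0
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

x = np.array([1.0, 0.02, 0.01])
dt, steps = 0.01, 40000
winners = []
for _ in range(steps):
    # Euler step plus a tiny positive noise floor so the trajectory
    # does not stall exponentially long near each saddle.
    x += dt * x * (sigma - rho @ x) + 1e-4 * rng.random(3)
    w = int(np.argmax(x))
    if not winners or winners[-1] != w:
        winners.append(w)
```

Despite the noise, the order of switching is rigid: every transition advances the winner by one step around the cycle, which is the sense in which the coding regime is "invariant in time".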
Affiliation(s)
- Mikhail I Rabinovich
- BioCircuits Institute, University of California, San Diego, La Jolla, CA, United States
| | - Pablo Varona
- Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
| |
|
23
|
Abstract
Functional oscillator networks, such as neuronal networks in the brain, exhibit switching between metastable states involving many oscillators. We give exact results on how such global dynamics can arise in paradigmatic phase oscillator networks: higher-order network interactions give rise to metastable chimeras (localized frequency-synchrony patterns), which are joined by heteroclinic connections. Moreover, we illuminate the mechanisms that underlie the switching dynamics in these experimentally accessible networks.
Affiliation(s)
- Christian Bick
- Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, OX2 6GG, United Kingdom and Department of Mathematics and Centre for Systems Dynamics and Control, University of Exeter, EX4 4QF, United Kingdom
| |
|
24
|
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017; 8:781. [PMID: 28588524 PMCID: PMC5440584 DOI: 10.3389/fpsyg.2017.00781] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2016] [Accepted: 04/26/2017] [Indexed: 01/25/2023] Open
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not currently incorporated in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context.
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
|
25
|
Carrillo-Medina JL, Latorre R. Implementing Signature Neural Networks with Spiking Neurons. Front Comput Neurosci 2016; 10:132. [PMID: 28066221 PMCID: PMC5167754 DOI: 10.3389/fncom.2016.00132] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2016] [Accepted: 11/30/2016] [Indexed: 11/17/2022] Open
Abstract
Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces.
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions in a given moment are independent and they only depend on the parameters shaping the information processing in this dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks.
Affiliation(s)
- José Luis Carrillo-Medina
- Departamento de Eléctrica y Electrónica, Universidad de las Fuerzas Armadas - ESPE, Sangolquí, Ecuador
| | - Roberto Latorre
- Grupo de Neurocomputación Biológica, Dpto. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
| |
|
26
|
Shahin S, Vallini F, Monifi F, Rabinovich M, Fainman Y. Heteroclinic dynamics of coupled semiconductor lasers with optoelectronic feedback. Opt Lett 2016; 41:5238-5241. [PMID: 27842102 DOI: 10.1364/ol.41.005238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Generalized Lotka-Volterra (GLV) equations are used in various areas of science to describe competitive dynamics among a population of N interacting nodes in a network topology. In this Letter, we introduce a photonic network consisting of three optoelectronically cross-coupled semiconductor lasers to realize a GLV model. In such a network, the interaction of intensity and carrier inversion rates, as well as phases of laser oscillator nodes, results in various dynamics. We study the influence of asymmetric coupling strength and frequency detuning between semiconductor lasers and show that inhibitory asymmetric coupling is required to achieve consecutive amplitude oscillations of the laser nodes. These studies were motivated primarily by the dynamical models used to model brain cognitive activities and their correspondence with dynamics obtained among coupled laser oscillators.
|
27
|
Pio-Lopez L, Nizard A, Friston K, Pezzulo G. Active inference and robot control: a case study. J R Soc Interface 2016; 13:rsif.2016.0616. [PMID: 27683002 PMCID: PMC5046960 DOI: 10.1098/rsif.2016.0616] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Accepted: 09/01/2016] [Indexed: 11/12/2022] Open
Abstract
Active inference is a general framework for perception and action that is gaining prominence in computational and systems neuroscience but is less known outside these fields. Here, we discuss a proof-of-principle implementation of the active inference scheme for the control of the 7-DoF arm of a (simulated) PR2 robot. By manipulating visual and proprioceptive noise levels, we show under which conditions robot control under the active inference scheme is accurate. Besides accurate control, our analysis of the internal system dynamics (e.g. the dynamics of the hidden states that are inferred during the inference) sheds light on key aspects of the framework such as the quintessentially multimodal nature of control and the differential roles of proprioception and vision. In the discussion, we consider the potential importance of being able to implement active inference in robots. In particular, we briefly review the opportunities for modelling psychophysiological phenomena such as sensory attenuation and related failures of gain control, of the sort seen in Parkinson's disease. We also consider the fundamental difference between active inference and optimal control formulations, showing that in the former the heavy lifting shifts from solving a dynamical inverse problem to creating deep forward or generative models with dynamics, whose attracting sets prescribe desired behaviours.
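The control scheme can be caricatured in one dimension. This toy loop is our own illustration of the general idea, not the paper's 7-DoF PR2 implementation; the gains and precisions (`k`, `pi_sense`, `pi_prior`) are arbitrary assumptions:

```python
def active_inference_reach(goal=1.0, x0=0.0, steps=400, k=0.1,
                           pi_sense=1.0, pi_prior=0.5):
    """Toy single-joint active inference loop.

    Perception updates the belief mu by precision-weighted gradient descent
    on prediction error; action moves the joint so that sensations come to
    match the belief, which is itself drawn toward a goal prior.
    """
    x, mu = x0, x0                      # true joint angle, belief about it
    for _ in range(steps):
        y = x                           # noiseless proprioception for clarity
        # Perception: belief balances sensory evidence against the goal prior.
        mu += k * (pi_sense * (y - mu) + pi_prior * (goal - mu))
        # Action: move the joint to reduce the sensory prediction error.
        x += k * pi_sense * (mu - y)
    return x, mu

x_final, mu_final = active_inference_reach()
```

Because action fulfils the belief rather than solving an inverse problem, the joint settles at the goal without an explicit inverse model, echoing the paper's point that the heavy lifting shifts to the generative model.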
Affiliation(s)
- Léo Pio-Lopez
- Pascal Institute, Clermont University, Clermont-Ferrand, France
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| | - Ange Nizard
- Pascal Institute, Clermont University, Clermont-Ferrand, France
| | - Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
| | - Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| |
Collapse
28
Kwisthout J, Bekkering H, van Rooij I. To be precise, the details don't matter: On predictive processing, precision, and level of detail of predictions. Brain Cogn 2016; 112:84-91. [PMID: 27114040 DOI: 10.1016/j.bandc.2016.02.008]
Abstract
Many theoretical and empirical contributions to the Predictive Processing account emphasize the important role of precision modulation of prediction errors. Recently it has been proposed that the causal models used in human predictive processing are best formally modeled by categorical probability distributions. Crucially, such distributions assume a well-defined, discrete state space. In this paper we explore the consequences of this formalization. In particular we argue that the level of detail of generative models and predictions modulates prediction error. We show that both increasing the level of detail of the generative models and decreasing the level of detail of the predictions can be suitable mechanisms for lowering prediction errors. Both increase precision, yet come at the price of lowering the amount of information that can be gained by correct predictions. Our theoretical result establishes a key open empirical question to address: How does the brain optimize the trade-off between high precision and information gain when making its predictions?
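The precision/information trade-off described in this abstract can be sketched numerically. The following is an illustrative example with invented numbers, not the authors' model: coarsening a categorical state space raises the probability that the maximum-a-posteriori prediction is fulfilled, while reducing the information a correct prediction carries.

```python
import numpy as np

def prediction_stats(p):
    """Probability of a correct maximum-a-posteriori prediction and the
    information (in bits) gained when that prediction is confirmed."""
    p = np.asarray(p, dtype=float)
    p_correct = p.max()              # predict the most probable state
    info_bits = -np.log2(p_correct)  # surprisal removed by a correct prediction
    return p_correct, info_bits

# Fine-grained state space: four distinguishable outcomes.
fine = [0.4, 0.3, 0.2, 0.1]

# Coarse prediction: merge outcomes into two categories {0,1} and {2,3}.
coarse = [0.4 + 0.3, 0.2 + 0.1]

acc_fine, info_fine = prediction_stats(fine)
acc_coarse, info_coarse = prediction_stats(coarse)

print(acc_fine, info_fine)      # lower accuracy, more bits per confirmation
print(acc_coarse, info_coarse)  # higher accuracy, fewer bits per confirmation
```

This mirrors the paper's claim that lowering the level of detail of predictions lowers prediction error at the cost of information gain.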
Affiliation(s)
- Johan Kwisthout: Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Harold Bekkering: Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Iris van Rooij: Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
29
Giese MA, Rizzolatti G. Neural and Computational Mechanisms of Action Processing: Interaction between Visual and Motor Representations. Neuron 2016; 88:167-80. [PMID: 26447579 DOI: 10.1016/j.neuron.2015.09.040]
Abstract
Action recognition has received enormous interest in the field of neuroscience over the last two decades. In spite of this interest, the knowledge in terms of fundamental neural mechanisms that provide constraints for underlying computations remains rather limited. This fact stands in contrast with a wide variety of speculative theories about how action recognition might work. This review focuses on new fundamental electrophysiological results in monkeys, which provide constraints for the detailed underlying computations. In addition, we review models for action recognition and processing that have concrete mathematical implementations, as opposed to conceptual models. We think that only such implemented models can be meaningfully linked quantitatively to physiological data and have a potential to narrow down the many possible computational explanations for action recognition. In addition, only concrete implementations allow judging whether postulated computational concepts have a feasible implementation in terms of realistic neural circuits.
Affiliation(s)
- Martin A Giese: Section on Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Center for Integrative Neuroscience, University Clinic Tübingen, Otfried-Müller Str. 25, 72076 Tübingen, Germany
- Giacomo Rizzolatti: IIT Brain Center for Social and Motor Cognition, 43100 Parma, Italy; Dipartimento di Neuroscienze, Università di Parma, 43100 Parma, Italy
30
Fonollosa J, Neftci E, Rabinovich M. Learning of Chunking Sequences in Cognition and Behavior. PLoS Comput Biol 2015; 11:e1004592. [PMID: 26584306 PMCID: PMC4652905 DOI: 10.1371/journal.pcbi.1004592]
Abstract
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes to each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activities share several features observed in behavioral experiments, such as the pauses between boundaries of chunks, their size and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia. Because chunking is a hallmark of the brain's organization, efforts to understand its dynamics can provide valuable insights into the brain and its disorders. To identify the dynamical principles of chunking learning, we hypothesize that perceptual sequences can be learned and stored as a chain of metastable fixed points in a low-dimensional dynamical system, similar to the trajectory of a ball rolling down a pinball machine.
During a learning phase, the interactions in the network evolve such that the network learns a chunking representation of the sequence, as when memorizing a phone number in segments. In the example of the pinball machine, learning can be identified with the gradual placement of the pins. After learning, the pins are placed in a way that, at each run, the ball follows the same trajectory (recall of the same sequence) that encodes the perceptual sequence. Simulations show that the dynamics are endowed with the hallmarks of chunking observed in behavioral experiments, such as increased delays observed before loading new chunks.
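The winnerless-competition dynamics at the heart of this model can be sketched with a small generalized Lotka-Volterra system. The parameter values below are illustrative (chosen to satisfy the standard saddle conditions for a heteroclinic cycle), not taken from the paper: with asymmetric inhibition and weak noise, activity dwells near one mode at a time and switches in a fixed cyclic order.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = np.ones(3)                  # growth rates of the three modes
rho = np.array([[1.0, 2.0, 0.5],    # rho[j, i]: inhibition of mode j by mode i;
                [0.5, 1.0, 2.0],    # rho[i+1, i] < 1 lets the next mode grow,
                [2.0, 0.5, 1.0]])   # rho[i-1, i] > 1 keeps the previous one down

dt, steps = 0.05, 4000
x = np.array([1.0, 1e-3, 1e-3])     # start near the saddle of mode 0
dominant = []

for _ in range(steps):
    dx = x * (sigma - rho @ x)                               # Lotka-Volterra field
    x = np.maximum(x + dt * dx + dt * rng.normal(0.0, 1e-6, 3), 1e-9)
    dominant.append(int(np.argmax(x)))

# Collapse consecutive repeats: the itinerary of metastable states.
itinerary = [m for k, m in enumerate(dominant) if k == 0 or m != dominant[k - 1]]
print(itinerary[:6])   # visits the modes cyclically: 0 -> 1 -> 2 -> 0 -> ...
```

The weak additive noise plays the same role as in the paper's simulations: it sets the floor from which the next mode grows and hence the dwell time at each saddle.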
Affiliation(s)
- Jordi Fonollosa: Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America; Institute for Bioengineering of Catalonia, Barcelona, Spain
- Emre Neftci: Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America; Department of Cognitive Sciences, University of California, Irvine, Irvine, California, United States of America
- Mikhail Rabinovich: Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America
31
Abstract
This paper considers communication in terms of inference about the behaviour of others (and our own behaviour). It is based on the premise that our sensations are largely generated by other agents like ourselves. This means we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent - who is modelling you - can be finessed if you both possess the same model. In other words, the sensations caused by others and oneself are generated by the same process. This leads to a view of communication based upon a narrative that is shared by agents who are exchanging sensory signals. Crucially, this narrative transcends agency - and simply involves intermittently attending to and attenuating sensory input. Attending to sensations enables the shared narrative to predict the sensations generated by another (i.e. to listen), while attenuating sensory input enables one to articulate the narrative (i.e. to speak). This produces a reciprocal exchange of sensory signals that, formally, induces a generalised synchrony between internal (neuronal) brain states generating predictions in both agents. We develop the arguments behind this perspective, using an active (Bayesian) inference framework and offer some simulations (of birdsong) as proof of principle.
Affiliation(s)
- Karl Friston: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
- Christopher Frith: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
32
Cuevas Rivera D, Bitzer S, Kiebel SJ. Modelling Odor Decoding in the Antennal Lobe by Combining Sequential Firing Rate Models with Bayesian Inference. PLoS Comput Biol 2015; 11:e1004528. [PMID: 26451888 PMCID: PMC4599861 DOI: 10.1371/journal.pcbi.1004528]
Abstract
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. Odor recognition in the insect brain is amazingly fast but still not fully understood. It is known that recognition is performed in three stages. In the first stage, the sensors respond to an odor by displaying a reproducible neuronal pattern. This code is turned, in the second and third stages, into a sparse code, that is, only relatively few neurons activate over hundreds of milliseconds. It is generally assumed that the insect brain uses this temporal code to recognize an odor. 
We propose a new model of how this temporal code emerges using sequential activation of groups of neurons. We show that these sequential activations underlie a fast and accurate recognition which is highly robust against neuronal or sensory noise. This model replicates several key experimental findings and explains how the insect brain achieves both speed and robustness of odor recognition as observed in experiments.
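The Bayesian online inference step described here can be sketched in a few lines. This is a hedged illustration with invented templates and numbers, not the paper's model: a posterior over odor identities is updated recursively as each element of a noisy activity sequence arrives (in the paper, the sequences come from Lotka-Volterra firing-rate dynamics; here they are fixed templates so the sketch is self-contained).

```python
import numpy as np

templates = {                       # idealized activation sequences per odor
    "odor_A": np.array([0.0, 1.0, 0.0, 1.0, 1.0]),
    "odor_B": np.array([1.0, 0.0, 1.0, 0.0, 1.0]),
}
names = list(templates)
noise_sd = 0.3

# A fixed perturbation stands in for sensory noise, keeping the run reproducible.
observed = templates["odor_A"] + np.array([0.2, -0.1, 0.15, -0.2, 0.1])

log_post = np.log(np.full(len(names), 1.0 / len(names)))    # flat prior
for t, obs in enumerate(observed):
    for k, name in enumerate(names):
        mu = templates[name][t]
        log_post[k] += -0.5 * ((obs - mu) / noise_sd) ** 2  # Gaussian log-likelihood
    log_post -= np.logaddexp.reduce(log_post)               # renormalize online

posterior = np.exp(log_post)
print(names[int(np.argmax(posterior))])   # -> odor_A
```

As in the paper's results, the posterior concentrates on the correct odor well before the whole sequence has been observed, because each informative sequence element adds log-evidence.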
Affiliation(s)
- Dario Cuevas Rivera: Department of Psychology, Technische Universität Dresden, Dresden, Germany; Biomagnetic Centre, Department of Neurology, University Hospital Jena, Jena, Germany
- Sebastian Bitzer: Department of Psychology, Technische Universität Dresden, Dresden, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Stefan J. Kiebel: Department of Psychology, Technische Universität Dresden, Dresden, Germany; Biomagnetic Centre, Department of Neurology, University Hospital Jena, Jena, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
33
Dindo H, Donnarumma F, Chersi F, Pezzulo G. The intentional stance as structure learning: a computational perspective on mindreading. Biol Cybern 2015; 109:453-467. [PMID: 26168854 DOI: 10.1007/s00422-015-0654-6]
Abstract
Recent theories of mindreading explain the recognition of action, intention, and belief of other agents in terms of generative architectures that model the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals and beliefs). Two kinds of probabilistic generative schemes have been proposed in cognitive science and robotics that link to a "theory theory" and "simulation theory" of mindreading, respectively. The former compares perceived actions to optimal plans derived from rationality principles and conceptual theories of others' minds. The latter reuses one's own internal (inverse and forward) models for action execution to perform a look-ahead mental simulation of perceived actions. Both theories, however, leave one question unanswered: how are the generative models - including task structure and parameters - learned in the first place? We start from Dennett's "intentional stance" proposal and characterize it within generative theories of action and intention recognition. We propose that humans use an intentional stance as a learning bias that sidesteps the (hard) structure learning problem and bootstraps the acquisition of generative models for others' actions. The intentional stance corresponds to a candidate structure in the generative scheme, which encodes a simplified belief-desire folk psychology and a hierarchical intention-to-action organization of behavior. This simple structure can be used as a proxy for the "true" generative structure of others' actions and intentions and is continuously grown and refined - via state and parameter learning - during interactions. In turn - as our computational simulations show - this can help solve mindreading problems and bootstrap the acquisition of useful causal models of both one's own and others' goal-directed actions.
Affiliation(s)
- Haris Dindo: RoboticsLab, Polytechnic School (DICGIM), University of Palermo, Viale delle Scienze, Ed. 6, 90128 Palermo, Italy
- Francesco Donnarumma: Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Fabian Chersi: Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, UK
- Giovanni Pezzulo: Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
34
Tuennerhoff J, Noppeney U. When sentences live up to your expectations. Neuroimage 2015; 124:641-653. [PMID: 26363344 DOI: 10.1016/j.neuroimage.2015.09.004]
Abstract
Speech recognition is rapid, automatic and amazingly robust. How the brain is able to decode speech from noisy acoustic inputs is unknown. We show that the brain recognizes speech by integrating bottom-up acoustic signals with top-down predictions. Subjects listened to intelligible normal and unintelligible fine structure speech that lacked the predictability of the temporal envelope and did not enable access to higher linguistic representations. Their top-down predictions were manipulated using priming. Activation for unintelligible fine structure speech was confined to primary auditory cortices, but propagated into posterior middle temporal areas when fine structure speech was made intelligible by top-down predictions. By contrast, normal speech engaged posterior middle temporal areas irrespective of subjects' predictions. Critically, when speech violated subjects' expectations, activation increases in anterior temporal gyri/sulci signalled a prediction error and the need for new semantic integration. In line with predictive coding, our findings compellingly demonstrate that top-down predictions determine whether and how the brain translates bottom-up acoustic inputs into intelligible speech.
Affiliation(s)
- Uta Noppeney: Max-Planck-Institute for Biological Cybernetics, 72076 Tuebingen, Germany; Computational Neuroscience and Cognitive Robotics Centre, Department of Psychology, University of Birmingham, Birmingham B15 2TT, UK
35
Winkler I, Schröger E. Auditory perceptual objects as generative models: Setting the stage for communication by sound. Brain Lang 2015; 148:1-22. [PMID: 26184883 DOI: 10.1016/j.bandl.2015.05.003]
Abstract
Communication by sounds requires that the communication channels (i.e. speech/speakers and other sound sources) have been established. This allows listeners to separate concurrently active sound sources, to track their identity, to assess the type of message arriving from them, and to decide whether and when to react (e.g., reply to the message). We propose that these functions rely on a common generative model of the auditory environment. This model predicts upcoming sounds on the basis of representations describing temporal/sequential regularities. Predictions help to identify the continuation of previously discovered sound sources, to detect the emergence of new sources, and to register changes in the behavior of known ones. The model produces auditory event representations which provide a full sensory description of the sounds, including their relation to the auditory context and the current goals of the organism. Event representations can be consciously perceived and serve as objects in various cognitive operations.
Affiliation(s)
- István Winkler: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Hungary; Institute of Psychology, University of Szeged, Hungary
- Erich Schröger: Institute for Psychology, University of Leipzig, Germany
36
Chalasani R, Principe JC. Context Dependent Encoding Using Convolutional Dynamic Networks. IEEE Trans Neural Netw Learn Syst 2015; 26:1992-2004. [PMID: 25376046 DOI: 10.1109/tnnls.2014.2360060]
Abstract
Perception of sensory signals is strongly influenced by their context, both in space and time. In this paper, we propose a novel hierarchical model, called convolutional dynamic networks, that effectively utilizes this contextual information, while inferring the representations of the visual inputs. We build this model based on a predictive coding framework and use the idea of empirical priors to incorporate recurrent and top-down connections. These connections endow the model with contextual information coming from temporal as well as abstract knowledge from higher layers. To perform inference efficiently in this hierarchical model, we rely on a novel scheme based on a smoothing proximal gradient method. When trained on unlabeled video sequences, the model learns a hierarchy of stable attractors, representing low-level to high-level parts of the objects. We demonstrate that the model effectively utilizes contextual information to produce robust and stable representations for object recognition in video sequences, even in case of highly corrupted inputs.
37
Abstract
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
38
Schwappach C, Hutt A, Beim Graben P. Metastable dynamics in heterogeneous neural fields. Front Syst Neurosci 2015; 9:97. [PMID: 26175671 PMCID: PMC4485166 DOI: 10.3389/fnsys.2015.00097]
Abstract
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data.
Affiliation(s)
- Cordula Schwappach: Department of German Studies and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany; Department of Physics, Humboldt-Universität zu Berlin, Berlin, Germany
- Axel Hutt: Team Neurosys, Inria, Villers-les-Nancy, France; Team Neurosys, Centre National de la Recherche Scientifique, UMR no. 7503, Loria, Villers-les-Nancy, France; Team Neurosys, UMR no. 7503, Loria, Université de Lorraine, Villers-les-Nancy, France
- Peter Beim Graben: Department of German Studies and Linguistics, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
39
Friston KJ, Frith CD. Active inference, communication and hermeneutics. Cortex 2015; 68:129-43. [PMID: 25957007 PMCID: PMC4502445 DOI: 10.1016/j.cortex.2015.03.025]
Abstract
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.
Affiliation(s)
- Karl J Friston: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
- Christopher D Frith: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom
40
Riedel P, Ragert P, Schelinski S, Kiebel SJ, von Kriegstein K. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex 2015; 68:86-99. [DOI: 10.1016/j.cortex.2014.11.016]
41
Horchler AD, Daltorio KA, Chiel HJ, Quinn RD. Designing responsive pattern generators: stable heteroclinic channel cycles for modeling and control. Bioinspir Biomim 2015; 10:026001. [PMID: 25712192 DOI: 10.1088/1748-3190/10/2/026001]
Abstract
A striking feature of biological pattern generators is their ability to respond immediately to multisensory perturbations by modulating the dwell time at a particular phase of oscillation, which can vary force output, range of motion, or other characteristics of a physical system. Stable heteroclinic channels (SHCs) are a dynamical architecture that can provide such responsiveness to artificial devices such as robots. SHCs are composed of sequences of saddle equilibrium points, which yields exquisite sensitivity. The strength of the vector fields in the neighborhood of these equilibria determines the responsiveness to perturbations and how long trajectories dwell in the vicinity of a saddle. For SHC cycles, the addition of stochastic noise results in oscillation with a regular mean period. In this paper, we parameterize noise-driven Lotka-Volterra SHC cycles such that each saddle can be independently designed to have a desired mean sub-period. The first step in the design process is an analytic approximation, which results in mean sub-periods that are within 2% of the specified sub-period for a typical parameter set. Further, after measuring the resultant sub-periods over sufficient numbers of cycles, the magnitude of the noise can be adjusted to control the mean period with accuracy close to that of the integration step size. With these relationships, SHCs can be more easily employed in engineering and modeling applications. For applications that require smooth state transitions, this parameterization permits each state's distribution of periods to be independently specified. Moreover, for modeling context-dependent behaviors, continuously varying inputs in each state dimension can rapidly precipitate transitions to alter frequency and phase.
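The design relationship the abstract alludes to, that noise magnitude controls the mean period, can be sketched with the standard escape-time estimate for a saddle. This is my paraphrase under stated assumptions, not the paper's exact parameterization: near a saddle with unstable eigenvalue lam, a trajectory perturbed to distance ~eta (the noise magnitude) escapes after roughly T = ln(1/eta)/lam, so inverting gives the noise level for a desired mean sub-period.

```python
import math

def mean_subperiod(lam, eta):
    """Approximate mean dwell time at a saddle: T ~ ln(1/eta) / lam."""
    return math.log(1.0 / eta) / lam

def design_noise(lam, T_desired):
    """Noise magnitude giving a desired mean sub-period at this saddle."""
    return math.exp(-lam * T_desired)

lam = 0.5                      # unstable eigenvalue of one saddle (illustrative)
eta = design_noise(lam, 20.0)  # target a 20-unit mean dwell time
print(eta, mean_subperiod(lam, eta))
```

Because the relationship is logarithmic in eta, large changes in noise magnitude produce only modest changes in dwell time, which is consistent with the paper's strategy of fine-tuning the mean period via the noise after an analytic first pass.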
Affiliation(s)
- Andrew D Horchler: Department of Mechanical and Aerospace Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106-7222, USA
42
Abstract
Positive and negative emotional events are better remembered than neutral events. Studies in animals suggest that this phenomenon depends on the influence of the amygdala upon the hippocampus. In humans, however, it is largely unknown how these two brain structures functionally interact and whether these interactions are similar between positive and negative information. Using dynamic causal modeling of fMRI data in 586 healthy subjects, we show that the strength of the connection from the amygdala to the hippocampus was rapidly and robustly increased during the encoding of both positive and negative pictures in relation to neutral pictures. We also observed an increase in connection strength from the hippocampus to the amygdala, albeit at a smaller scale. These findings indicate that, during encoding, emotionally arousing information leads to a robust increase in effective connectivity from the amygdala to the hippocampus, regardless of its valence.
43
Baggio G, van Lambalgen M, Hagoort P. Logic as Marr's Computational Level: Four Case Studies. Top Cogn Sci 2014; 7:287-98. [PMID: 25417838 DOI: 10.1111/tops.12125]
Abstract
We sketch four applications of Marr's levels-of-analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
Affiliation(s)
- Giosuè Baggio
- Brain and Language Laboratory, Neuroscience Area, SISSA International School for Advanced Studies; Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology
|
44
|
Bruineberg J, Rietveld E. Self-organization, free energy minimization, and optimal grip on a field of affordances. Front Hum Neurosci 2014; 8:599. [PMID: 25161615 PMCID: PMC4130179 DOI: 10.3389/fnhum.2014.00599] [Citation(s) in RCA: 144] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Accepted: 07/17/2014] [Indexed: 11/13/2022] Open
Abstract
In this paper, we set out to develop a theoretical and conceptual framework for the new field of Radical Embodied Cognitive Neuroscience. This framework should be able to integrate insights from several relevant disciplines: theory on embodied cognition, ecological psychology, phenomenology, dynamical systems theory, and neurodynamics. We suggest that the main task of Radical Embodied Cognitive Neuroscience is to investigate the phenomenon of skilled intentionality from the perspective of the self-organization of the brain-body-environment system, while doing justice to the phenomenology of skilled action. In previous work, we have characterized skilled intentionality as the organism's tendency toward an optimal grip on multiple relevant affordances simultaneously. Affordances are possibilities for action provided by the environment. In the first part of this paper, we introduce the notion of skilled intentionality and the phenomenon of responsiveness to a field of relevant affordances. Second, we use Friston's work on neurodynamics, but embed a very minimal version of his Free Energy Principle in the ecological niche of the animal. Thus amended, this principle is helpful for understanding the embeddedness of neurodynamics within the dynamics of the system "brain-body-landscape of affordances." Next, we show how we can use this adjusted principle to understand the neurodynamics of selective openness to the environment: interacting action-readiness patterns at multiple timescales contribute to the organism's selective openness to relevant affordances. In the final part of the paper, we emphasize the important role of metastable dynamics in both the brain and the brain-body-environment system for adequate affordance-responsiveness. We exemplify our integrative approach by presenting research on the impact of Deep Brain Stimulation on affordance responsiveness of OCD patients.
Affiliation(s)
- Jelle Bruineberg
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands; Department of Philosophy, Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Erik Rietveld
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands; Department of Philosophy, Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands; Department of Psychiatry, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands
|
45
|
Rabinovich MI, Varona P, Tristan I, Afraimovich VS. Chunking dynamics: heteroclinics in mind. Front Comput Neurosci 2014; 8:22. [PMID: 24672469 PMCID: PMC3954027 DOI: 10.3389/fncom.2014.00022] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Accepted: 02/10/2014] [Indexed: 11/16/2022] Open
Abstract
Recent advances in imaging technologies and non-linear dynamics make it possible to relate the structure and dynamics of functional brain networks to different mental tasks and to build theoretical models for the description and prediction of cognitive activity. Such models are non-linear dynamical descriptions of the interaction of the core components (brain modes) participating in a specific mental function. The dynamical images of different mental processes depend on their temporal features. The dynamics of many cognitive functions are transient: they are often observed as a chain of sequentially changing metastable states. A stable heteroclinic channel (SHC), consisting of a chain of saddles (metastable states) connected by unstable separatrices, is a mathematical image for robust transients. In this paper we focus on hierarchical chunking dynamics that can represent several forms of transient cognitive activity. Chunking is a dynamical phenomenon that nature uses to process long sequences of information by dividing them into shorter items. Chunking, for example, makes the use of short-term memory more efficient by breaking up long strings of information (as in language, where a novel is divided into chapters, paragraphs, sentences, and finally words). Chunking is important in many processes of perception, learning, and cognition in humans and animals. Based on anatomical information about the hierarchical organization of functional brain networks, we propose a cognitive network architecture that hierarchically chunks and super-chunks switching sequences of metastable states produced by winnerless competitive heteroclinic dynamics.
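The saddle-chain dynamics this abstract describes are commonly modeled with generalized Lotka-Volterra equations for winnerless competition. The three-mode system below is an illustrative sketch (all parameter values are invented, and a small floor on the state stands in for the noise that keeps switching times finite), not a reconstruction of the paper's model:

```python
import numpy as np

# Winnerless competition: each "brain mode" a_i transiently dominates,
# then cedes to the next, tracing a chain of metastable saddle states
# (the skeleton of a stable heteroclinic channel).
sigma = np.ones(3)                 # growth rates
rho = np.array([[1.0, 2.0, 0.5],   # rho[i, j]: how strongly mode j
                [0.5, 1.0, 2.0],   # inhibits mode i; the asymmetry
                [2.0, 0.5, 1.0]])  # fixes the order 0 -> 1 -> 2 -> 0

a = np.array([0.99, 0.005, 0.005])  # start near the first saddle
winners = []
for _ in range(20000):
    # Euler step of da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j)
    a = a + 0.01 * a * (sigma - rho @ a)
    a = np.maximum(a, 1e-6)  # floor standing in for intrinsic noise
    winners.append(int(np.argmax(a)))

# Collapse the winner trace to its switching sequence.
order = [w for i, w in enumerate(winners) if i == 0 or w != winners[i - 1]]
print(order)  # cycles through the modes: 0, 1, 2, 0, 1, 2, ...
```

Each saddle is attracting in all directions except one, so the trajectory lingers (a metastable state) and then leaves along the single unstable separatrix toward the next saddle; chunking stacks such sequences hierarchically.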
Affiliation(s)
| | - Pablo Varona
- Grupo de Neurocomputación Biológica, Departamento de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain
| | - Irma Tristan
- BioCircuits Institute, University of California San Diego, La Jolla, CA, USA
| | - Valentin S Afraimovich
- Instituto de Investigación en Comunicación Óptica, Universidad Autónoma de San Luis Potosí, San Luis Potosí, México
|
46
|
Schröger E, Bendixen A, Denham SL, Mill RW, Bőhm TM, Winkler I. Predictive Regularity Representations in Violation Detection and Auditory Stream Segregation: From Conceptual to Computational Models. Brain Topogr 2013; 27:565-77. [DOI: 10.1007/s10548-013-0334-6] [Citation(s) in RCA: 64] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2013] [Accepted: 11/13/2013] [Indexed: 11/24/2022]
|
47
|
Tavano A, Widmann A, Bendixen A, Trujillo-Barreto N, Schröger E. Temporal regularity facilitates higher-order sensory predictions in fast auditory sequences. Eur J Neurosci 2013; 39:308-18. [DOI: 10.1111/ejn.12404] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2012] [Revised: 09/18/2013] [Accepted: 10/04/2013] [Indexed: 11/29/2022]
Affiliation(s)
- Alessandro Tavano
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
| | - Andreas Widmann
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
| | - Alexandra Bendixen
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
- Department of Psychology, Cluster of Excellence ‘Hearing4all’, European Medical School, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
| | | | - Erich Schröger
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
|
48
|
Yildiz IB, von Kriegstein K, Kiebel SJ. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems. PLoS Comput Biol 2013; 9:e1003219. [PMID: 24068902 PMCID: PMC3772045 DOI: 10.1371/journal.pcbi.1003219] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2013] [Accepted: 07/27/2013] [Indexed: 11/19/2022] Open
Abstract
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that experimental findings at a neuronal, microscopic level are scarce. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.
Neuroscience still lacks a concrete explanation of how humans recognize speech. Even though neuroimaging techniques are helpful in determining the brain areas involved in speech recognition, there are rarely mechanistic explanations at a neuronal level. Here, we assume that songbirds and humans solve a very similar task: extracting information from sound wave modulations produced by a singing bird or a speaking human. Given strong evidence that both humans and songbirds, although genetically very distant, converged on a similar solution, we combined the vast amount of neurobiological findings for songbirds with nonlinear dynamical systems theory to develop a hierarchical Bayesian model that explains fundamental functions in the recognition of sound sequences. We found that the resulting model is good at learning and recognizing human speech. We suggest that this translated model can be used to qualitatively explain or predict experimental data, and that the underlying mechanism can be used to construct improved automatic speech recognition algorithms.
Affiliation(s)
- Izzet B. Yildiz
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Group for Neural Theory, Institute of Cognitive Studies, École Normale Supérieure, Paris, France
| | - Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Humboldt University of Berlin, Department of Psychology, Berlin, Germany
| | - Stefan J. Kiebel
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Biomagnetic Center, Hans Berger Clinic for Neurology, University Hospital Jena, Jena, Germany
|
49
|
Daltorio KA, Boxerbaum AS, Horchler AD, Shaw KM, Chiel HJ, Quinn RD. Efficient worm-like locomotion: slip and control of soft-bodied peristaltic robots. BIOINSPIRATION & BIOMIMETICS 2013; 8:035003. [PMID: 23981561 DOI: 10.1088/1748-3182/8/3/035003] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In this work, we present a dynamic simulation of an earthworm-like robot moving in a pipe with radially symmetric Coulomb friction contact. Under these conditions, peristaltic locomotion is efficient if slip is minimized. We characterize ways to reduce slip-related losses in a constant-radius pipe. Using these principles, we can design controllers that navigate pipes even where the radius narrows. We propose a stable heteroclinic channel controller that takes advantage of contact force feedback on each segment. In an example narrowing pipe, this controller loses 40% less energy to slip than the best-fit sine wave controller. Peristaltic locomotion with feedback also achieves greater speed and more consistent forward progress.
Affiliation(s)
- Kathryn A Daltorio
- Department of Mechanical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106-7078, USA.
|
50
|
Wiestler T, Diedrichsen J. Skill learning strengthens cortical representations of motor sequences. eLife 2013; 2:e00801. [PMID: 23853714 PMCID: PMC3707182 DOI: 10.7554/elife.00801] [Citation(s) in RCA: 121] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2013] [Accepted: 06/05/2013] [Indexed: 12/04/2022] Open
Abstract
Motor-skill learning can be accompanied by both increases and decreases in brain activity. Increases may indicate neural recruitment, while decreases may imply that a region became unimportant or developed a more efficient representation of the skill. These overlapping mechanisms make learning-related changes of spatially averaged activity difficult to interpret. Here we show that motor-skill acquisition is associated with the emergence of highly distinguishable activity patterns for trained movement sequences, in the absence of average activity increases. During functional magnetic resonance imaging, participants produced either four trained or four untrained finger sequences. Using multivariate pattern analysis, both untrained and trained sequences could be discriminated in primary and secondary motor areas. However, trained sequences were classified more reliably, especially in the supplementary motor area. Our results indicate that skill learning leads to the development of specialized neuronal circuits, which allow the execution of fast and accurate sequential movements without average increases in brain activity. DOI: http://dx.doi.org/10.7554/eLife.00801.001
Functional magnetic resonance imaging (fMRI) is a widely used technique that makes it possible to observe changes in a person’s brain activity as they perform specific tasks while lying in a scanner. These could range from listening to music or looking at images, to recalling words or imagining a scene, and each will produce a distinct pattern of neural activity. However, fMRI data can be difficult to interpret. Say a particular area of the brain is very active when a subject is trying to perform a new task, but becomes less active as the subject becomes better at the task and performs it more easily. Does this mean that the brain region is used for learning the task, but not for performing it once it has been learned? Or, alternatively, does it show that the brain area is involved in carrying out the task, but becomes more efficient with practice, and so shows less activity in later scans? Now, Wiestler and Diedrichsen have obtained data that help to distinguish between these alternatives. Subjects were trained to carry out four specific sequences of finger movements and then asked either to reproduce these ‘trained’ sequences or to perform four ‘untrained’ sequences while in the fMRI scanner. All eight sequences produced high levels of activity in the areas of motor cortex that control finger movements. However, closer analysis showed marked differences between the patterns of activity produced during the ‘trained’ sequences and those seen during ‘untrained’ sequences that involved moving the same fingers. Wiestler and Diedrichsen proposed that training to perform specific movement sequences should lead to the development of neural circuits specialized to carry out those movements, and that detailed analysis of the fMRI data would allow them to identify patterns of activity corresponding to these circuits. Sure enough, when they analysed the fMRI scans, they found that the activation patterns associated with ‘trained’ movement sequences were more readily distinguishable from each other than those associated with ‘untrained’ movement sequences, even in areas where training led to an overall reduction in activity. As well as showing that movement sequences become associated with specific spatial patterns of activation as they are learned, this study provides a new way to study learning with fMRI that should be useful for many future studies. DOI: http://dx.doi.org/10.7554/eLife.00801.002
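The core of the multivariate pattern analysis logic can be illustrated with a toy simulation: two conditions whose patterns have identical overall strength but differ in how separable they are across voxels, decoded with a cross-validated classifier. All numbers below are synthetic, and the nearest-class-mean decoder is a stand-in for whatever classifier the study actually used:

```python
import numpy as np

# MVPA toy: "trained" sequences are modeled as having more separable
# voxel patterns, not stronger average activity, so they decode better.
rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 100

def make_data(separation):
    """Two sequence-specific patterns of equal strength per condition."""
    p1 = rng.standard_normal(n_voxels) * separation
    p2 = -p1  # same overall magnitude, maximally distinct pattern
    X = np.vstack([p1 + rng.standard_normal(n_voxels) for _ in range(n_trials)]
                  + [p2 + rng.standard_normal(n_voxels) for _ in range(n_trials)])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

def cv_accuracy(X, y):
    """Leave-one-trial-out nearest-class-mean classification."""
    hits = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        m0 = X[train & (y == 0)].mean(axis=0)
        m1 = X[train & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - m0) < np.linalg.norm(X[i] - m1) else 1
        hits += pred == y[i]
    return hits / len(y)

acc_untrained = cv_accuracy(*make_data(separation=0.1))
acc_trained = cv_accuracy(*make_data(separation=0.4))
print(acc_untrained, acc_trained)  # trained patterns decode more reliably
```

Because the two conditions differ only in pattern separability, classification accuracy, unlike spatially averaged activation, distinguishes them: this is why MVPA can reveal learning effects that average activity changes cannot.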
Affiliation(s)
- Tobias Wiestler
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
|