1. Ambrad Giovannetti E, Rancz E. Behind mouse eyes: The function and control of eye movements in mice. Neurosci Biobehav Rev 2024; 161:105671. PMID: 38604571. DOI: 10.1016/j.neubiorev.2024.105671.
Abstract
The mouse visual system has become the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements only started to be appreciated recently. Eye movements provide a basis for predictive sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of various eye movements in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
Affiliation(s)
- Ede Rancz: INMED, INSERM, Aix-Marseille University, Marseille, France.
2. Palacios ER, Chadderton P, Friston K, Houghton C. Cerebellar state estimation enables resilient coupling across behavioural domains. Sci Rep 2024; 14:6641. PMID: 38503802. PMCID: PMC10951354. DOI: 10.1038/s41598-024-56811-x.
Abstract
Cerebellar computations are necessary for fine behavioural control and may rely on internal models for estimation of behaviourally relevant states. Here, we propose that the central cerebellar function is to estimate how states interact with each other, and to use these estimates to coordinate extra-cerebellar neuronal dynamics underpinning a range of interconnected behaviours. To support this claim, we describe a cerebellar model for state estimation that includes state interactions, and link this model with the neuronal architecture and dynamics observed empirically. This is formalised using the free energy principle, which provides a dual perspective on a system in terms of both the dynamics of its physical (in this case neuronal) states and the inferential process they entail. As a demonstration of this proposal, we simulate cerebellar-dependent synchronisation of whisking and respiration, which are known to be tightly coupled in rodents, as well as limb and tail coordination during locomotion. In summary, we propose that the ubiquitous involvement of the cerebellum in behaviour arises from its central role in precisely coupling behavioural domains.
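The core idea — state estimation that exploits known interactions between behavioural states — can be sketched with a linear Kalman filter. This is an illustrative toy, not the authors' generative model: the two-state setup, coupling matrix and noise levels below are invented. A filter whose internal model includes the state interactions estimates the true states better than one that assumes the states are independent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: two behavioural states (say, whisking and respiration
# amplitudes) coupled through the off-diagonal terms of A. All numbers
# are illustrative.
A = np.array([[0.80, 0.15],
              [0.15, 0.80]])   # state interactions
Q = 0.01 * np.eye(2)           # process noise covariance
R = 0.25 * np.eye(2)           # observation noise covariance

def simulate(T=2000):
    xs, ys, x = [], [], np.zeros(2)
    for _ in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
        ys.append(x + rng.multivariate_normal(np.zeros(2), R))
        xs.append(x)
    return np.array(xs), np.array(ys)

def kalman_mse(A_model, xs, ys):
    """Mean squared estimation error of a Kalman filter whose internal
    model of the dynamics is A_model."""
    x_hat, P, err = np.zeros(2), np.eye(2), 0.0
    for x_true, y in zip(xs, ys):
        x_hat = A_model @ x_hat                      # predict
        P = A_model @ P @ A_model.T + Q
        K = P @ np.linalg.inv(P + R)                 # update
        x_hat = x_hat + K @ (y - x_hat)
        P = (np.eye(2) - K) @ P
        err += np.sum((x_hat - x_true) ** 2)
    return err / len(xs)

xs, ys = simulate()
mse_coupled = kalman_mse(A, xs, ys)                      # knows the interactions
mse_decoupled = kalman_mse(np.diag(np.diag(A)), xs, ys)  # ignores them
```

Knowing the interaction terms lets the filter borrow information across states, so `mse_coupled` comes out below `mse_decoupled`.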
Affiliation(s)
- Ensor Rafael Palacios: School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol BS8 1TD, UK.
- Paul Chadderton: School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol BS8 1TD, UK.
- Karl Friston: Wellcome Centre for Human Neuroimaging, UCL, London WC1N 3AR, UK.
- Conor Houghton: Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK.
3. Onagawa R, Kudo K, Watanabe K. Systematic bias in representation of reaction time distribution. Q J Exp Psychol (Hove) 2024:17470218241234650. PMID: 38336626. DOI: 10.1177/17470218241234650.
Abstract
A correct perception of one's own abilities is essential for making appropriate decisions. A well-known bias in probability perception is that rare events are overestimated. Here, we examined whether such a bias also exists for action outcomes using a simple reaction task. In Experiment 1, after completing a set of 30 trials of the simple reaction task, participants were required to judge the probability that they would be able to respond before a given reference time when performing the task next. We assessed the difference between the actual reaction times and the probability judgement and found that the represented probability distribution was more widely distributed than the actual one, suggesting that low-probability events were overestimated and high-probability events were underestimated. Experiment 2 confirmed the presence of such a bias in the representation of both one's own and another's reaction times. In addition, Experiment 3 showed the presence of such a bias regardless of the difference between the representation of another's reaction times and the mere numerical representation. Furthermore, Experiment 4 found the presence of such a bias even when the information regarding actual reaction times was visually shown before the representation. The present results reveal the existence of a highly robust bias in the representation of motor performance, which reflects the ubiquitous bias in probability perception and is difficult to eliminate.
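The reported widening of the represented distribution can be made concrete numerically: take a sample of reaction times and a hypothetical "represented" version with the same median but inflated spread, then compare the judged probabilities of beating a fast versus a slow deadline. The log-normal RTs and the widening factor of 2 below are invented for illustration, not fitted to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 hypothetical reaction times (seconds) from a simple reaction task.
rts = rng.lognormal(mean=-1.2, sigma=0.2, size=30)

def p_faster(sample, ref):
    """Probability of responding before the reference time."""
    return float(np.mean(sample < ref))

# A 'represented' distribution that is wider than the actual one, as the
# abstract describes: same median, spread doubled in log space.
log_med = np.log(np.median(rts))
represented = np.exp(log_med + 2.0 * (np.log(rts) - log_med))

fast_ref = np.quantile(rts, 0.1)  # rarely met deadline (low-probability event)
slow_ref = np.quantile(rts, 0.9)  # almost always met deadline

p_fast_true, p_fast_rep = p_faster(rts, fast_ref), p_faster(represented, fast_ref)
p_slow_true, p_slow_rep = p_faster(rts, slow_ref), p_faster(represented, slow_ref)
# The widened representation overestimates the rare success
# (p_fast_rep >= p_fast_true) and underestimates the near-certain one
# (p_slow_rep <= p_slow_true) -- the pattern found in the experiments.
```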
Affiliation(s)
- Ryoji Onagawa: Faculty of Science and Engineering, Waseda University, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan.
- Kazutoshi Kudo: Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan.
- Katsumi Watanabe: Faculty of Science and Engineering, Waseda University, Tokyo, Japan.
4. Turner W, Sexton C, Hogendoorn H. Neural mechanisms of visual motion extrapolation. Neurosci Biobehav Rev 2024; 156:105484. PMID: 38036162. DOI: 10.1016/j.neubiorev.2023.105484.
Abstract
Because neural processing takes time, the brain only has delayed access to sensory information. When localising moving objects this is problematic, as an object will have moved on by the time its position has been determined. Here, we consider predictive motion extrapolation as a fundamental delay-compensation strategy. From a population-coding perspective, we outline how extrapolation can be achieved by a forwards shift in the population-level activity distribution. We identify general mechanisms underlying such shifts, involving various asymmetries which facilitate the targeted 'enhancement' and/or 'dampening' of population-level activity. We classify these on the basis of their potential implementation (intra- vs inter-regional processes) and consider specific examples in different visual regions. We consider how motion extrapolation can be achieved during inter-regional signaling, and how asymmetric connectivity patterns which support extrapolation can emerge spontaneously from local synaptic learning rules. Finally, we consider how more abstract 'model-based' predictive strategies might be implemented. Overall, we present an integrative framework for understanding how the brain determines the real-time position of moving objects, despite neural delays.
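The population-level "forwards shift" can be sketched in a few lines: an activity bump over position-tuned units, passed through an asymmetric connectivity kernel, peaks ahead of the stimulus. The Gaussian bump, kernel width and the +2 offset are arbitrary illustrative choices, not parameters from the reviewed models.

```python
import numpy as np

positions = np.arange(100)  # preferred positions of a population of units
bump = np.exp(-0.5 * ((positions - 50) / 3.0) ** 2)  # activity for a stimulus at 50

# Asymmetric kernel: each unit drives neighbours slightly 'ahead' of it in
# the motion direction more strongly than those behind.
offsets = np.arange(-10, 11)
kernel = np.exp(-0.5 * ((offsets - 2) / 3.0) ** 2)  # peak shifted by +2
kernel /= kernel.sum()

shifted = np.convolve(bump, kernel, mode="same")

peak_before = int(np.argmax(bump))    # stimulus position
peak_after = int(np.argmax(shifted))  # extrapolated position, ahead of it
```

The asymmetry in the kernel is the whole trick: a symmetric kernel would only blur the bump in place, whereas the skewed one moves its peak forward, compensating (part of) the processing delay.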
Affiliation(s)
- William Turner: Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia.
- Hinze Hogendoorn: Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia.
5. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. PMID: 37695359. DOI: 10.1007/s00422-023-00975-8.
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
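The detection principle — delaying each input so that a temporal motif becomes a coincidence — can be sketched as follows. This is a bare-bones illustration with hand-set delays and a simple sum-and-threshold readout, rather than the trained time-invariant logistic regression of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

T, delays = 200, np.array([0, 3, 6])  # per-input synaptic delays (time steps)
spikes = (rng.random((3, T)) < 0.01).astype(float)  # background spikes

# Embed one motif: input i fires delays[i] steps before t = 100, so that
# after each train is delayed, the three spikes coincide at t = 100.
t_ref = 100
for i, d in enumerate(delays):
    spikes[i, t_ref - d] = 1.0

def detect(spikes, delays):
    """Postsynaptic drive: each input shifted by its heterogeneous delay,
    then summed -- a linear readout over delayed spike trains."""
    n, T = spikes.shape
    drive = np.zeros(T)
    for i, d in enumerate(delays):
        drive[d:] += spikes[i, : T - d]
    return drive

drive = detect(spikes, delays)
t_detect = int(np.argmax(drive))  # time of maximal coincidence
```

The heterogeneous delays turn a temporally extended pattern into a synchronous volley at the readout, which is why the drive peaks exactly at the motif time despite the background activity.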
Affiliation(s)
- Antoine Grimaldi: Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France.
- Laurent U Perrinet: Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005 Marseille, France.
6. Li JS, Sarma AA, Sejnowski TJ, Doyle JC. Internal feedback in the cortical perception-action loop enables fast and accurate behavior. Proc Natl Acad Sci U S A 2023; 120:e2300445120. PMID: 37738297. PMCID: PMC10523540. DOI: 10.1073/pnas.2300445120.
Abstract
Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control, drawing on control theory, have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception-action loop. However, the sensorimotor loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backward, from motor toward sensory areas. This internal feedback is ubiquitous in neural sensorimotor systems, and we show how internal feedback compensates internal delays. This is accomplished by filtering out self-generated and other predictable changes so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components, effectively compressing the sensory input to more efficiently use feedforward pathways: Tracts of fast, giant neurons necessarily convey less accurate signals than tracts with many smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function (how different parts of the cortex control different parts of the body), and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological, and behavioral observations, including motor signals in the visual cortex, heterogeneous kinetics of sensory receptors, and the presence of giant cells in the cortex of humans as well as internal feedback patterns and unexplained heterogeneity in neural systems.
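The role of internal feedback in compensating sensorimotor delay can be illustrated with a Smith-predictor-style toy controller. This is a standard control-theory sketch, not the paper's cortical model; the plant, gain and delay values are arbitrary. The naive controller acts on stale observations, while the internal-feedback controller replays its outgoing commands through a forward model to estimate the current state.

```python
from collections import deque

# Plant: x' = u (first-order integrator, discretised); observations of x
# arrive only after `delay` steps.
dt, delay, gain, target = 0.1, 8, 2.5, 1.0

def run(internal_feedback, T=300):
    x, err = 0.0, 0.0
    obs = deque([0.0] * delay, maxlen=delay)   # observations in transit
    cmds = deque([0.0] * delay, maxlen=delay)  # commands in flight
    for _ in range(T):
        y = obs[0]                             # stale sensory sample
        if internal_feedback:
            # Forward model: integrate the commands issued since y was
            # sensed to predict the *current* state.
            x_pred = y
            for u_past in cmds:
                x_pred += dt * u_past
            u = gain * (target - x_pred)
        else:
            u = gain * (target - y)            # act on stale feedback
        x += dt * u
        obs.append(x)
        cmds.append(u)
        err += (x - target) ** 2
    return err / T

err_internal = run(True)   # converges smoothly
err_naive = run(False)     # the delay destabilises the loop
```

With these values the naive delayed loop oscillates, whereas routing an efference copy through the forward model effectively removes the delay from the loop, so `err_internal` is far smaller than `err_naive`.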
Affiliation(s)
- Jing Shuang Li: Control and Dynamical Systems, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125.
- Anish A. Sarma: Control and Dynamical Systems, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125; School of Medicine, Vanderbilt University, Nashville, TN 37232.
- Terrence J. Sejnowski: Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037; Department of Neurobiology, Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093.
- John C. Doyle: Control and Dynamical Systems, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125.
7. Floegel M, Kasper J, Perrier P, Kell CA. How the conception of control influences our understanding of actions. Nat Rev Neurosci 2023; 24:313-329. PMID: 36997716. DOI: 10.1038/s41583-023-00691-z.
Abstract
Wilful movement requires neural control. Commonly, neural computations are thought to generate motor commands that bring the musculoskeletal system - that is, the plant - from its current physical state into a desired physical state. The current state can be estimated from past motor commands and from sensory information. Modelling movement on the basis of this concept of plant control strives to explain behaviour by identifying the computational principles for control signals that can reproduce the observed features of movements. From an alternative perspective, movements emerge in a dynamically coupled agent-environment system from the pursuit of subjective perceptual goals. Modelling movement on the basis of this concept of perceptual control aims to identify the controlled percepts and their coupling rules that can give rise to the observed characteristics of behaviour. In this Perspective, we discuss a broad spectrum of approaches to modelling human motor control and their notions of control signals, internal models, handling of sensory feedback delays and learning. We focus on the influence that the plant control and the perceptual control perspective may have on decisions when modelling empirical data, which may in turn shape our understanding of actions.
Affiliation(s)
- Mareike Floegel: Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany.
- Johannes Kasper: Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany.
- Pascal Perrier: Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France.
- Christian A Kell: Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany.
8. Jérémie JN, Perrinet LU. Ultrafast image categorization in biology and neural models. Vision (Basel) 2023; 7(2):29. PMID: 37092462. PMCID: PMC10123664. DOI: 10.3390/vision7020029.
Abstract
Humans are able to categorize images very efficiently, in particular to detect the presence of an animal very quickly. Recently, deep learning algorithms based on convolutional neural networks (CNNs) have achieved higher than human accuracy for a wide range of visual categorization tasks. However, the tasks on which these artificial networks are typically trained and evaluated tend to be highly specialized and do not generalize well, e.g., accuracy drops after image rotation. In this respect, biological visual systems are more flexible and efficient than artificial systems for more general tasks, such as recognizing an animal. To further the comparison between biological and artificial neural networks, we re-trained the standard VGG 16 CNN on two independent tasks that are ecologically relevant to humans: detecting the presence of an animal or an artifact. We show that re-training the network achieves a human-like level of performance, comparable to that reported in psychophysical tasks. In addition, we show that the categorization is better when the outputs of the models are combined. Indeed, animals (e.g., lions) tend to be less present in photographs that contain artifacts (e.g., buildings). Furthermore, these re-trained models were able to reproduce some unexpected behavioral observations from human psychophysics, such as robustness to rotation (e.g., an upside-down or tilted image) or to a grayscale transformation. Finally, we quantified the number of CNN layers required to achieve such performance and showed that good accuracy for ultrafast image categorization can be achieved with only a few layers, challenging the belief that image recognition requires deep sequential analysis of visual objects. We hope to extend this framework to biomimetic deep neural architectures designed for ecological tasks, but also to guide future model-based psychophysical experiments that would deepen our understanding of biological vision.
9. Precise spiking motifs in neurobiological and neuromorphic data. Brain Sci 2022; 13(1):68. PMID: 36672049. PMCID: PMC9856822. DOI: 10.3390/brainsci13010068.
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on the one hand, binary, existing or not without further detail, and, on the other hand, able to occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology if we are to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could deliver significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
10. Mazzaglia P, Verbelen T, Çatal O, Dhoedt B. The free energy principle for perception and action: A deep learning perspective. Entropy (Basel) 2022; 24(2):301. PMID: 35205595. PMCID: PMC8871280. DOI: 10.3390/e24020301.
Abstract
The free energy principle, and its corollary active inference, constitute a bio-inspired theory that assumes biological agents act to remain in a restricted set of preferred states of the world, i.e., they minimize their free energy. Under this principle, biological agents learn a generative model of the world and plan actions that will maintain them in a homeostatic state satisfying their preferences. This framework lends itself to being realized in silico, as it comprises important aspects that make it computationally affordable, such as variational inference and amortized planning. In this work, we investigate how deep learning can be used to design and realize artificial agents based on active inference: we present a deep-learning-oriented account of the free energy principle, survey works relevant to both the machine learning and active inference areas, and discuss the design choices involved in the implementation process. This manuscript probes newer perspectives for the active inference framework, grounding its theoretical aspects in more pragmatic affairs, and offers a practical guide for active inference newcomers and a starting point for deep learning practitioners who would like to investigate implementations of the free energy principle.
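For readers new to the framework, the perception half of free-energy minimisation reduces, in a Gaussian toy model, to a few lines of gradient descent. The numbers below are arbitrary, and in this conjugate Gaussian case the minimiser can be checked against the exact Bayesian posterior.

```python
import numpy as np

# Generative model: s ~ N(mu_p, s2_p), o | s ~ N(s, s2_l).
# Recognition density: q(s) = N(mu_q, s2_q). All numbers illustrative.
mu_p, s2_p = 0.0, 1.0   # prior
s2_l = 0.5              # likelihood variance
o = 2.0                 # observation

def free_energy(mu_q, s2_q):
    """F = KL(q || prior) - E_q[log p(o|s)], computed in closed form."""
    kl = 0.5 * (np.log(s2_p / s2_q) + (s2_q + (mu_q - mu_p) ** 2) / s2_p - 1.0)
    ell = -0.5 * (np.log(2 * np.pi * s2_l) + ((o - mu_q) ** 2 + s2_q) / s2_l)
    return kl - ell

# Gradient descent on F over the mean of q (variance held fixed for brevity).
mu_q, s2_q, lr = 0.0, 0.3, 0.1
for _ in range(200):
    grad = (mu_q - mu_p) / s2_p - (o - mu_q) / s2_l  # dF/dmu_q
    mu_q -= lr * grad

# Exact Bayesian posterior mean for this conjugate model (precision-weighted
# average of prior and observation):
mu_post = (mu_p / s2_p + o / s2_l) / (1 / s2_p + 1 / s2_l)
```

Minimising free energy over the recognition mean recovers the precision-weighted Bayesian estimate: perception-as-inference in its simplest possible form.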
11. Parr T, Limanowski J, Rawji V, Friston K. The computational neurology of movement under active inference. Brain 2021; 144:1799-1818. PMID: 33704439. PMCID: PMC8320263. DOI: 10.1093/brain/awab085.
Abstract
We propose a computational neurology of movement based on the convergence of theoretical neurobiology and clinical neurology. A significant development in the former is the idea that we can frame brain function as a process of (active) inference, in which the nervous system makes predictions about its sensory data. These predictions depend upon an implicit predictive (generative) model used by the brain. This means neural dynamics can be framed as generating actions to ensure sensations are consistent with these predictions-and adjusting predictions when they are not. We illustrate the significance of this formulation for clinical neurology by simulating a clinical examination of the motor system using an upper limb coordination task. Specifically, we show how tendon reflexes emerge naturally under the right kind of generative model. Through simulated perturbations, pertaining to prior probabilities of this model's variables, we illustrate the emergence of hyperreflexia and pendular reflexes, reminiscent of neurological lesions in the corticospinal tract and cerebellum. We then turn to the computational lesions causing hypokinesia and deficits of coordination. This in silico lesion-deficit analysis provides an opportunity to revisit classic neurological dichotomies (e.g. pyramidal versus extrapyramidal systems) from the perspective of modern approaches to theoretical neurobiology-and our understanding of the neurocomputational architecture of movement control based on first principles.
Affiliation(s)
- Thomas Parr: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK.
- Jakub Limanowski: Faculty of Psychology and Center for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, Dresden, Germany.
- Vishal Rawji: Department of Clinical and Movement Neurosciences, Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK.
- Karl Friston: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London WC1N 3BG, UK.
12. Parr T, Sajid N, Da Costa L, Mirza MB, Friston KJ. Generative models for active vision. Front Neurorobot 2021; 15:651432. PMID: 33927605. PMCID: PMC8076554. DOI: 10.3389/fnbot.2021.651432.
Abstract
The active visual system comprises the visual cortices, cerebral attention networks, and oculomotor system. While fascinating in its own right, it is also an important model for sensorimotor networks in general. A prominent approach to studying this system is active inference-which assumes the brain makes use of an internal (generative) model to predict proprioceptive and visual input. This approach treats action as ensuring sensations conform to predictions (i.e., by moving the eyes) and posits that visual percepts are the consequence of updating predictions to conform to sensations. Under active inference, the challenge is to identify the form of the generative model that makes these predictions-and thus directs behavior. In this paper, we provide an overview of the generative models that the brain must employ to engage in active vision. This means specifying the processes that explain retinal cell activity and proprioceptive information from oculomotor muscle fibers. In addition to the mechanics of the eyes and retina, these processes include our choices about where to move our eyes. These decisions rest upon beliefs about salient locations, or the potential for information gain and belief-updating. A key theme of this paper is the relationship between "looking" and "seeing" under the brain's implicit generative model of the visual world.
Affiliation(s)
- Thomas Parr: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom.
- Noor Sajid: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom.
- Lancelot Da Costa: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom; Department of Mathematics, Imperial College London, London, United Kingdom.
- M. Berk Mirza: Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom.
- Karl J. Friston: Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, London, United Kingdom.
13. Sajid N, Holmes E, Hope TM, Fountas Z, Price CJ, Friston KJ. Simulating lesion-dependent functional recovery mechanisms. Sci Rep 2021; 11:7475. PMID: 33811259. PMCID: PMC8018968. DOI: 10.1038/s41598-021-87005-4.
Abstract
Functional recovery after brain damage varies widely and depends on many factors, including lesion site and extent. When a neuronal system is damaged, recovery may occur by engaging residual (e.g., perilesional) components. When damage is extensive, recovery depends on the availability of other intact neural structures that can reproduce the same functional output (i.e., degeneracy). A system's response to damage may occur rapidly, require learning or both. Here, we simulate functional recovery from four different types of lesions, using a generative model of word repetition that comprised a default premorbid system and a less used alternative system. The synthetic lesions (i) completely disengaged the premorbid system, leaving the alternative system intact, (ii) partially damaged both premorbid and alternative systems, and (iii) limited the experience-dependent plasticity of both. The results, across 1000 trials, demonstrate that (i) a complete disconnection of the premorbid system naturally invoked the engagement of the other and that (ii) incomplete damage to both systems had a much more devastating long-term effect on model performance; they also illustrate (iii) the effect of reducing learning capacity within each system. These findings contribute to formal frameworks for interpreting the effect of different types of lesions.
Affiliation(s)
- Noor Sajid: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
- Emma Holmes: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
- Thomas M Hope: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
- Zafeirios Fountas: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK; Huawei 2012 Laboratories, London, UK.
- Cathy J Price: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
- Karl J Friston: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
14. Daucé E, Albiges P, Perrinet LU. A dual foveal-peripheral visual processing model implements efficient saccade selection. J Vis 2020; 20(8):22. PMID: 38755789. PMCID: PMC7443118. DOI: 10.1167/jov.20.8.22.
Abstract
We develop a visuomotor model that implements visual search as a focal accuracy-seeking policy, with the target's position and category drawn independently from a common generative process. Consistent with the anatomical separation between the ventral and dorsal pathways, the model is composed of two pathways that respectively infer what to see and where to look. The "What" network is a classical deep learning classifier that only processes a small region around the center of fixation, providing a "foveal" accuracy. In contrast, the "Where" network processes the full visual field in a biomimetic fashion, using a log-polar retinotopic encoding, which is preserved up to the action selection level. In our model, the foveal accuracy is used as a monitoring signal to train the "Where" network, much like in the "actor/critic" framework. After training, the "Where" network provides an "accuracy map" that serves to guide the eye toward peripheral objects. Finally, the comparison of both networks' accuracies amounts to either selecting a saccade or keeping the eye focused at the center to identify the target. We test this setup on a simple task of finding a digit in a large, cluttered image. Our simulation results demonstrate the effectiveness of this approach, increasing by one order of magnitude the radius of the visual field within which the agent can detect and recognize a target, either with a single saccade or with several. Importantly, our log-polar treatment of the visual information exploits the strong compression rate performed at the sensory level, providing ways to implement visual search in a sublinear fashion, in contrast with mainstream computer vision.
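The log-polar compression at the heart of the "Where" pathway is easy to make concrete: sample the visual field at eccentricities that grow geometrically, so a fixed number of samples covers a radius that grows exponentially. The grid sizes below are arbitrary, not the paper's.

```python
import numpy as np

def log_polar_grid(n_ecc=8, n_theta=12, r_min=1.0, r_max=64.0):
    """Centres of a log-polar sampling lattice: eccentricities grow
    geometrically, mimicking the fovea-to-periphery compression."""
    radii = r_min * (r_max / r_min) ** (np.arange(n_ecc) / (n_ecc - 1))
    thetas = 2 * np.pi * np.arange(n_theta) / n_theta
    xs = np.outer(radii, np.cos(thetas))
    ys = np.outer(radii, np.sin(thetas))
    return radii, xs, ys

radii, xs, ys = log_polar_grid()

# Constant ratio between successive eccentricities: spacing between samples
# grows with eccentricity (cortical magnification). 8 x 12 = 96 samples
# cover a disc of radius 64, versus ~pi * 64**2 ~ 12900 Cartesian pixels --
# the compression that makes sublinear visual search possible.
ratios = radii[1:] / radii[:-1]
```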
Collapse
Affiliation(s)
- Emmanuel Daucé
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France
| | - Pierre Albiges
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France
| | - Laurent U Perrinet
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, Marseille, France
- https://laurentperrinet.github.io/
| |
Collapse
|
15
|
Pasturel C, Montagnini A, Perrinet LU. Humans adapt their anticipatory eye movements to the volatility of visual motion properties. PLoS Comput Biol 2020; 16:e1007438. [PMID: 32282790 PMCID: PMC7179935 DOI: 10.1371/journal.pcbi.1007438] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Revised: 04/23/2020] [Accepted: 02/27/2020] [Indexed: 12/20/2022] Open
Abstract
Animal behavior constantly adapts to changes, for example when the statistical properties of the environment change unexpectedly. For an agent that interacts with this volatile setting, it is important to react accurately and as quickly as possible. It has already been shown that when a random sequence of motion ramps of a visual target is biased to one direction (e.g. right or left), human observers adapt their eye movements to accurately anticipate the target's expected direction. Here, we prove that this ability extends to a volatile environment where the probability bias could change at random switching times. In addition, we also recorded the explicit prediction of the next outcome as reported by observers using a rating scale. Both results were compared to the estimates of a probabilistic agent that is optimal in relation to the assumed generative model. Compared to the classical leaky integrator model, we found a better match between our probabilistic agent and the behavioral responses, both for the anticipatory eye movements and the explicit task. Furthermore, by controlling the level of preference between exploitation and exploration in the model, we were able to fit for each individual's experimental dataset the most likely level of volatility and analyze inter-individual variability across participants. These results prove that in such an unstable environment, human observers can still represent an internal belief about the environmental contingencies, and use this representation both for sensory-motor control and for explicit judgments. This work offers an innovative approach to more generically test the diversity of human cognitive abilities in uncertain and dynamic environments.
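The contrast the study draws between a leaky integrator and a volatility-aware probabilistic observer can be caricatured in a few lines. This is an illustrative sketch only, not the authors' model: it replaces their full hierarchical agent with a Beta-Bernoulli observer whose evidence decays toward a flat prior at a fixed hazard rate, and all parameter values are invented.

```python
import random

def track_bias(blocks=((0.75, 300), (0.25, 300)), hazard=0.02, leak=0.05, seed=0):
    """Estimate a switching direction bias from binary outcomes.

    Two observers run in parallel: a classical leaky integrator, and a
    Beta-Bernoulli observer whose pseudo-counts decay toward the flat prior
    at a constant hazard rate, letting it re-adapt after an unsignalled
    switch of the underlying probability bias.
    """
    rng = random.Random(seed)
    a = b = 1.0          # Beta pseudo-counts of the forgetful Bayesian observer
    p_leaky = 0.5        # state of the leaky integrator
    bayes, leaky = [], []
    for p_true, n in blocks:
        for _ in range(n):
            x = 1 if rng.random() < p_true else 0
            # decay evidence toward the prior (hazard), then add the new outcome
            a = (1.0 - hazard) * a + hazard * 1.0 + x
            b = (1.0 - hazard) * b + hazard * 1.0 + (1 - x)
            p_leaky += leak * (x - p_leaky)
            bayes.append(a / (a + b))
            leaky.append(p_leaky)
    return bayes, leaky
```

Both estimates follow the bias from 0.75 down to 0.25 after the switch; the hazard rate plays the role of the assumed volatility, trading adaptation speed against estimate stability.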
Collapse
Affiliation(s)
- Chloé Pasturel
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille Univ, CNRS, Marseille, France
| | - Anna Montagnini
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille Univ, CNRS, Marseille, France
| | - Laurent Udo Perrinet
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille Univ, CNRS, Marseille, France
| |
Collapse
|
16
|
Active inference under visuo-proprioceptive conflict: Simulation and empirical results. Sci Rep 2020; 10:4010. [PMID: 32132646 PMCID: PMC7055248 DOI: 10.1038/s41598-020-61097-w] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Accepted: 02/17/2020] [Indexed: 12/13/2022] Open
Abstract
It has been suggested that the brain controls hand movements via internal models that rely on visual and proprioceptive cues about the state of the hand. In active inference formulations of such models, the relative influence of each modality on action and perception is determined by how precise (reliable) it is expected to be. The ‘top-down’ affordance of expected precision to a particular sensory modality is associated with attention. Here, we asked whether increasing attention to (i.e., the precision of) vision or proprioception would enhance performance in a hand-target phase matching task, in which visual and proprioceptive cues about hand posture were incongruent. We show that in a simple simulated agent—based on predictive coding formulations of active inference—increasing the expected precision of vision or proprioception improved task performance (target matching with the seen or felt hand, respectively) under visuo-proprioceptive conflict. Moreover, we show that this formulation captured the behaviour and self-reported attentional allocation of human participants performing the same task in a virtual reality environment. Together, our results show that selective attention can balance the impact of (conflicting) visual and proprioceptive cues on action—rendering attention a key mechanism for a flexible body representation for action.
Collapse
|
17
|
Kim S, Park J, Lee J. Effect of Prior Direction Expectation on the Accuracy and Precision of Smooth Pursuit Eye Movements. Front Syst Neurosci 2019; 13:71. [PMID: 32038182 PMCID: PMC6988807 DOI: 10.3389/fnsys.2019.00071] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 11/11/2019] [Indexed: 12/23/2022] Open
Abstract
The integration of sensory with top–down cognitive signals for generating appropriate sensory–motor behaviors is an important issue in understanding the brain’s information processes. Recent studies have demonstrated that the interplay between sensory and high-level signals in oculomotor behavior could be explained by Bayesian inference. Specifically, prior knowledge for motion speed introduces a bias in the speed of smooth pursuit eye movements. The other important prediction of Bayesian inference is variability reduction by prior expectation; however, there is insufficient evidence in oculomotor behaviors to support this prediction. In the present study, we trained monkeys to switch the prior expectation about motion direction and independently controlled the strength of the motion stimulus. Under identical sensory stimulus conditions, we tested if prior knowledge about the motion direction reduced the variability of open-loop smooth pursuit eye movements. We observed a significant reduction when the prior expectation was strong; this was consistent with the prediction of Bayesian inference. Taking advantage of the open-loop smooth pursuit, we investigated the temporal dynamics of the prior's effect on pursuit direction bias and variability. This analysis demonstrated that the strength of the sensory evidence depended not only on the strength of the sensory stimulus but also on the time required for the pursuit system to form a neural sensory representation. Finally, we demonstrated that the changes in directional bias and variability induced by prior knowledge were quantitatively explained by the Bayesian observer model.
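The two Bayesian predictions tested here, bias toward the prior and reduced posterior variability, both fall out of precision-weighted Gaussian fusion. As an illustrative sketch (not the authors' observer model; the numbers are invented), combining a prior over motion direction with a sensory likelihood gives:

```python
def fuse(mu_prior, var_prior, mu_sens, var_sens):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian likelihood."""
    w = var_sens / (var_prior + var_sens)          # weight given to the prior mean
    mu_post = w * mu_prior + (1.0 - w) * mu_sens
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_sens)
    return mu_post, var_post
```

For a strong prior at 0 deg (variance 1) and weak sensory evidence at 10 deg (variance 4), the posterior mean is pulled to 2 deg and the posterior variance drops to 0.8, below that of either cue alone: the same directional bias plus variability reduction the pursuit data exhibit.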
Collapse
Affiliation(s)
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| | - Jeongjun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| | - Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, South Korea
| |
Collapse
|
18
|
Alamia A, VanRullen R. Alpha oscillations and traveling waves: Signatures of predictive coding? PLoS Biol 2019; 17:e3000487. [PMID: 31581198 PMCID: PMC6776260 DOI: 10.1371/journal.pbio.3000487] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 08/30/2019] [Indexed: 12/30/2022] Open
Abstract
Predictive coding is a key mechanism to understand the computational processes underlying brain functioning: in a hierarchical network, higher levels predict the activity of lower levels, and the unexplained residuals (i.e., prediction errors) are passed back to higher layers. Because of its recursive nature, we wondered whether predictive coding could be related to brain oscillatory dynamics. First, we show that a simple 2-level predictive coding model of visual cortex, with physiological communication delays between levels, naturally gives rise to alpha-band rhythms, similar to experimental observations. Then, we demonstrate that a multilevel version of the same model can explain the occurrence of oscillatory traveling waves across levels, both forward (during visual stimulation) and backward (during rest). Remarkably, the predictions of our model are matched by the analysis of 2 independent electroencephalography (EEG) datasets, in which we observed oscillatory traveling waves in both directions.
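The core mechanism, a two-level predictive coding loop that rings at a frequency set by its communication delays, can be caricatured with a discrete-time toy. This is not the authors' rate model: the gain and delay values are arbitrary (and deliberately underdamped to make the ringing obvious), but the qualitative point survives, a delayed prediction/error loop oscillates rather than settling monotonically.

```python
def two_level_pc(gain=0.5, delay=2, steps=60, s=1.0):
    """Two-level predictive coding loop with a transmission delay on each link.

    The higher level integrates delayed prediction errors; the error unit
    compares the constant input with a delayed top-down prediction. The
    round-trip delay makes the error signal oscillate.
    """
    r = [0.0] * (steps + 1)     # higher-level representation
    e = [0.0] * steps           # prediction error at the lower level
    for t in range(steps):
        pred = r[t - delay] if t >= delay else 0.0   # delayed top-down prediction
        e[t] = s - pred
        err = e[t - delay] if t >= delay else 0.0    # delayed bottom-up error
        r[t + 1] = r[t] + gain * err
    return e
```

With a physiological inter-areal delay in the 10 ms range, a loop of this kind cycles on the order of tens of milliseconds, which is the paper's route from communication delays to alpha-band rhythms.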
Collapse
Affiliation(s)
- Andrea Alamia
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Université de Toulouse, Toulouse, France
| | - Rufin VanRullen
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Université de Toulouse, Toulouse, France
| |
Collapse
|
19
|
Damasse JB, Perrinet LU, Madelain L, Montagnini A. Reinforcement effects in anticipatory smooth eye movements. J Vis 2018; 18:14. [PMID: 30347101 DOI: 10.1167/18.11.14] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
When predictive information about target motion is available, anticipatory smooth pursuit eye movements (aSPEM) are consistently generated before target appearance, thereby reducing the typical sensorimotor delay between target motion onset and foveation. By manipulating the probability for target motion direction, we were able to bias the direction and mean velocity of aSPEM. This suggests that motion-direction expectancy has a strong effect on the initiation of anticipatory movements. To further understand the nature of anticipatory smooth eye movements, we investigated different effects of reinforcement on aSPEM. In a first experiment, the reinforcement was contingent to a particular anticipatory behavior. A monetary reward was associated to a criterion-matching anticipatory velocity as estimated online during the gap before target motion onset. Our results showed a small but significant effect of behavior-contingent monetary reward on aSPEM. In a second experiment, the proportion of rewarded trials was manipulated across motion directions (right vs. left) independently from participants' behavior. Our results indicate that a bias in expected reward does not systematically affect anticipatory eye movements. Overall, these findings strengthen the notion that anticipatory eye movements can be considered as an operant behavior (similar to visually guided ones), whereas the expectancy for a noncontingent reward cannot efficiently bias them.
Collapse
Affiliation(s)
- Jean-Bernard Damasse
- Aix Marseille Université, CNRS, Institut de Neurosciences de la Timone UMR 7289, Marseille, France
| | - Laurent U Perrinet
- Aix Marseille Université, CNRS, Institut de Neurosciences de la Timone UMR 7289, Marseille, France
| | - Laurent Madelain
- University of Lille Nord de France, CNRS, SCALAB UMR 9193, Lille, France
| | - Anna Montagnini
- Aix Marseille Université, CNRS, Institut de Neurosciences de la Timone UMR 7289, Marseille, France
| |
Collapse
|
20
|
Hogendoorn H, Burkitt AN. Predictive Coding with Neural Transmission Delays: A Real-Time Temporal Alignment Hypothesis. eNeuro 2019; 6:ENEURO.0412-18.2019. [PMID: 31064839 PMCID: PMC6506824 DOI: 10.1523/eneuro.0412-18.2019] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2018] [Revised: 03/18/2019] [Accepted: 03/20/2019] [Indexed: 11/29/2022] Open
Abstract
Hierarchical predictive coding is an influential model of cortical organization, in which sequential hierarchical levels are connected by backward connections carrying predictions, as well as forward connections carrying prediction errors. To date, however, predictive coding models have largely neglected to take into account that neural transmission itself takes time. For a time-varying stimulus, such as a moving object, this means that backward predictions become misaligned with new sensory input. We present an extended model implementing both forward and backward extrapolation mechanisms that realigns backward predictions to minimize prediction error. This realignment has the consequence that neural representations across all hierarchical levels become aligned in real time. Using visual motion as an example, we show that the model is neurally plausible, that it is consistent with evidence of extrapolation mechanisms throughout the visual hierarchy, that it predicts several known motion-position illusions in human observers, and that it provides a solution to the temporal binding problem.
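The realignment idea, extrapolating a delayed signal forward by its own transmission delay so every level represents "now", reduces in the simplest (constant-velocity, single-link) case to one line of arithmetic. The sketch below is illustrative only; names and numbers are invented, not the paper's simulation.

```python
from collections import deque

def track(velocity=2.0, delay=3, steps=12):
    """A downstream level receives position `delay` steps late; extrapolating
    the delayed sample by velocity * delay realigns it with real time."""
    line = deque([0.0] * (delay + 1), maxlen=delay + 1)   # transmission line
    true, lagged, aligned = [], [], []
    for t in range(steps):
        pos = velocity * t                    # object on a constant-velocity path
        line.append(pos)
        delayed = line[0]                     # the sample from `delay` steps ago
        true.append(pos)
        lagged.append(delayed)
        aligned.append(delayed + velocity * delay)   # forward extrapolation
    return true, lagged, aligned
```

Once the line is full, the naive representation trails the object by `velocity * delay` while the extrapolated one matches the true position exactly; the paper's contribution is doing this at every level of a hierarchy, in both the forward and backward directions.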
Collapse
Affiliation(s)
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria 3010, Australia
- Helmholtz Institute, Department of Experimental Psychology, Utrecht University, 3512 JE, Utrecht, The Netherlands
| | - Anthony N Burkitt
- NeuroEngineering Laboratory, Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria 3010, Australia
| |
Collapse
|
21
|
Parr T, Markovic D, Kiebel SJ, Friston KJ. Neuronal message passing using Mean-field, Bethe, and Marginal approximations. Sci Rep 2019; 9:1889. [PMID: 30760782 PMCID: PMC6374414 DOI: 10.1038/s41598-018-38246-3] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Accepted: 12/19/2018] [Indexed: 01/08/2023] Open
Abstract
Neuronal computations rely upon local interactions across synapses. For a neuronal network to perform inference, it must integrate information from locally computed messages that are propagated among elements of that network. We review the form of two popular (Bayesian) message passing schemes and consider their plausibility as descriptions of inference in biological networks. These are variational message passing and belief propagation - each of which is derived from a free energy functional that relies upon different approximations (mean-field and Bethe respectively). We begin with an overview of these schemes and illustrate the form of the messages required to perform inference using Hidden Markov Models as generative models. Throughout, we use factor graphs to show the form of the generative models and of the messages they entail. We consider how these messages might manifest neuronally and simulate the inferences they perform. While variational message passing offers a simple and neuronally plausible architecture, it falls short of the inferential performance of belief propagation. In contrast, belief propagation allows exact computation of marginal posteriors at the expense of the architectural simplicity of variational message passing. As a compromise between these two extremes, we offer a third approach - marginal message passing - that features a simple architecture, while approximating the performance of belief propagation. Finally, we link formal considerations to accounts of neurological and psychiatric syndromes in terms of aberrant message passing.
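On a chain-structured Hidden Markov Model, the exact marginals that belief propagation computes reduce to forward filtering, which makes a compact reference point for the approximate schemes the paper compares. The sketch below is an illustrative two-state example (not the paper's factor-graph implementation); matrix conventions are invented.

```python
def forward_filter(obs, trans, emit, prior):
    """Exact filtered marginals p(state_t | o_1..t) for a 2-state HMM.

    trans[j][i] = p(next state j | current state i); emit[o][j] = p(o | state j).
    On a chain, belief propagation yields these marginals exactly; mean-field
    and marginal message passing approximate them.
    """
    belief = list(prior)
    out = []
    for o in obs:
        # prediction message: push the current belief through the transitions
        pred = [sum(trans[j][i] * belief[i] for i in range(2)) for j in range(2)]
        # likelihood message: weight by p(o | state), then normalise
        post = [emit[o][j] * pred[j] for j in range(2)]
        z = sum(post)
        belief = [p / z for p in post]
        out.append(belief)
    return out
```

With sticky transitions and a moderately informative likelihood, repeated consistent observations drive the marginal monotonically toward the favoured state, the behaviour any sound message-passing scheme should reproduce.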
Collapse
Affiliation(s)
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, WC1N 3BG, UK.
| | - Dimitrije Markovic
- Chair of Neuroimaging, Psychology Department, Technische Universität Dresden, Dresden, Germany
| | - Stefan J Kiebel
- Chair of Neuroimaging, Psychology Department, Technische Universität Dresden, Dresden, Germany
| | - Karl J Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London, WC1N 3BG, UK
| |
Collapse
|
22
|
Affiliation(s)
- Peter A. White
- School of Psychology, Cardiff University, Cardiff, Wales, UK
| |
Collapse
|
24
|
Parr T, Friston KJ. The Discrete and Continuous Brain: From Decisions to Movement-And Back Again. Neural Comput 2018; 30:2319-2347. [PMID: 29894658 PMCID: PMC6115199 DOI: 10.1162/neco_a_01102] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
To act upon the world, creatures must change continuous variables such as muscle length or chemical concentration. In contrast, decision making is an inherently discrete process, involving the selection among alternative courses of action. In this article, we consider the interface between the discrete and continuous processes that translate our decisions into movement in a Newtonian world—and how movement informs our decisions. We do so by appealing to active inference, with a special focus on the oculomotor system. Within this exemplar system, we argue that the superior colliculus is well placed to act as a discrete-continuous interface. Interestingly, when the neuronal computations within the superior colliculus are formulated in terms of active inference, we find that many aspects of its neuroanatomy emerge from the computations it must perform in this role.
Collapse
Affiliation(s)
- Thomas Parr
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, WC1N 3BG, U.K.
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, WC1N 3BG, U.K.
| |
Collapse
|
25
|
Markkula G, Boer E, Romano R, Merat N. Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering. BIOLOGICAL CYBERNETICS 2018; 112:181-207. [PMID: 29453689 PMCID: PMC6002515 DOI: 10.1007/s00422-017-0743-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2017] [Accepted: 12/16/2017] [Indexed: 06/07/2023]
Abstract
A conceptual and computational framework is proposed for modelling of human sensorimotor control and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency and extends on existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical data from a driving simulator are used in model-fitting analyses to test some of the framework's main theoretical predictions: it is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise control adjustments, than as continuous control. Results on the possible roles of sensory prediction in control adjustment amplitudes, and of evidence accumulation mechanisms in control onset timing, show trends that match the theoretical predictions; these warrant further investigation. The results for the accumulation-based model align with other recent literature, in a possibly converging case against the type of threshold mechanisms that are often assumed in existing models of intermittent control.
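The framework's central loop, accumulate prediction-error evidence until a bound, then issue a discrete stepwise adjustment, can be sketched in a noiseless one-dimensional toy. This is illustrative only (not the paper's steering model, which includes motor primitives and sensory prediction); every parameter value here is invented.

```python
def intermittent_control(x0=1.0, threshold=0.5, dt=0.1, gain=0.8, steps=100):
    """Regulate x toward 0 via intermittent, evidence-triggered adjustments."""
    x, acc = x0, 0.0
    adjustments, trace = [], []
    for t in range(steps):
        error = 0.0 - x                 # prediction error: desired minus actual
        acc += abs(error) * dt          # accumulate evidence for acting
        if acc >= threshold:            # bound reached: decide to adjust
            x += gain * error           # discrete, ballistic correction
            adjustments.append(t)
            acc = 0.0                   # reset the accumulator
        trace.append(x)
    return trace, adjustments
```

The resulting trajectory is a staircase rather than a smooth curve, with corrections arriving more often when the error is large, the signature that the driving-simulator analyses test against continuous-control models.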
Collapse
Affiliation(s)
- Gustav Markkula
- Institute for Transport Studies, University of Leeds, Leeds, UK.
| | - Erwin Boer
- Institute for Transport Studies, University of Leeds, Leeds, UK
| | - Richard Romano
- Institute for Transport Studies, University of Leeds, Leeds, UK
| | - Natasha Merat
- Institute for Transport Studies, University of Leeds, Leeds, UK
| |
Collapse
|
26
|
Parr T, Friston KJ. Active inference and the anatomy of oculomotion. Neuropsychologia 2018; 111:334-343. [PMID: 29407941 PMCID: PMC5884328 DOI: 10.1016/j.neuropsychologia.2018.01.041] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Revised: 01/07/2018] [Accepted: 01/29/2018] [Indexed: 02/01/2023]
Abstract
Given that eye movement control can be framed as an inferential process, how are the requisite forces generated to produce anticipated or desired fixation? Starting from a generative model based on simple Newtonian equations of motion, we derive a variational solution to this problem and illustrate the plausibility of its implementation in the oculomotor brainstem. We show, through simulation, that the Bayesian filtering equations that implement 'planning as inference' can generate both saccadic and smooth pursuit eye movements. Crucially, the associated message passing maps well onto the known connectivity and neuroanatomy of the brainstem - and the changes in these messages over time are strikingly similar to single unit recordings of neurons in the corresponding nuclei. Furthermore, we show that simulated lesions to axonal pathways reproduce eye movement patterns of neurological patients with damage to these tracts.
Collapse
Affiliation(s)
- Thomas Parr
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
| |
Collapse
|
27
|
Petzschner FH, Weber LAE, Gard T, Stephan KE. Computational Psychosomatics and Computational Psychiatry: Toward a Joint Framework for Differential Diagnosis. Biol Psychiatry 2017; 82:421-430. [PMID: 28619481 DOI: 10.1016/j.biopsych.2017.05.012] [Citation(s) in RCA: 99] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/01/2017] [Revised: 04/14/2017] [Accepted: 05/15/2017] [Indexed: 12/17/2022]
Abstract
This article outlines how a core concept from theories of homeostasis and cybernetics, the inference-control loop, may be used to guide differential diagnosis in computational psychiatry and computational psychosomatics. In particular, we discuss 1) how conceptualizing perception and action as inference-control loops yields a joint computational perspective on brain-world and brain-body interactions and 2) how the concrete formulation of this loop as a hierarchical Bayesian model points to key computational quantities that inform a taxonomy of potential disease mechanisms. We consider the utility of this perspective for differential diagnosis in concrete clinical applications.
Collapse
Affiliation(s)
- Frederike H Petzschner
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
| | - Lilian A E Weber
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
| | - Tim Gard
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Center for Complementary and Integrative Medicine, University Hospital Zurich, Zurich, Switzerland
| | - Klaas E Stephan
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Max Planck Institute for Metabolism Research, Cologne, Germany; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom.
| |
Collapse
|
28
|
Affiliation(s)
- M. W. Spratling
- Department of Informatics, King's College London, London, UK
| |
Collapse
|
29
|
Khoei MA, Masson GS, Perrinet LU. The Flash-Lag Effect as a Motion-Based Predictive Shift. PLoS Comput Biol 2017; 13:e1005068. [PMID: 28125585 PMCID: PMC5268412 DOI: 10.1371/journal.pcbi.1005068] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2015] [Accepted: 07/21/2016] [Indexed: 11/18/2022] Open
Abstract
Due to its inherent neural delays, the visual system has an outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral and psychophysical evidences strongly suggest that position coding is extrapolated using an explicit and reliable representation of object’s motion but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial in our understanding of the dynamics of visual processing, a theory is still missing to explain the different facets of this visual illusion. Here, we reconsider several of the key aspects of the flash-lag effect in order to explore the role of motion upon neural coding of objects’ position. First, we formalize the problem using a Bayesian modeling framework which includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compared it with human perception under a broad range of different psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays and illuminate the more general question of the dynamical representation at the present time of spatial information in the visual pathways.
Visual illusions are powerful tools to explore the limits and constraints of human perception. One of them has received considerable empirical and theoretical interests: the so-called “flash-lag effect”. When a visual stimulus moves along a continuous trajectory, it may be seen ahead of its veridical position with respect to an unpredictable event such as a punctuate flash. This illusion tells us something important about the visual system: contrary to classical computers, neural activity travels at a relatively slow speed. It is largely accepted that the resulting delays cause this perceived spatial lag of the flash. Still, after three decades of debates, there is no consensus regarding the underlying mechanisms. Herein, we re-examine the original hypothesis that this effect may be caused by the extrapolation of the stimulus’ motion that is naturally generated in order to compensate for neural delays. Contrary to classical models, we propose a novel theoretical framework, called parodiction, that optimizes this process by explicitly using the precision of both sensory and predicted motion. Using numerical simulations, we show that the parodiction theory subsumes many of the previously proposed models and empirical studies. More generally, the parodiction hypothesis proposes that neural systems implement generic neural computations that can systematically compensate the existing neural delays in order to represent the predicted visual scene at the present time. It calls for new experimental approaches to directly explore the relationships between neural delays and predictive coding.
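The extrapolation account of the illusion has a one-line arithmetic core: a moving object's represented position is shifted forward by its velocity times the processing delay, while a flash, carrying no motion signal, cannot be shifted. The sketch below is a deliberately minimal caricature (not the paper's probabilistic "parodiction" model, which weighs this shift by motion precision); the velocity and delay values are invented.

```python
def flash_lag(x0=0.0, velocity=8.0, delay=0.08):
    """Perceived offset between a moving object and a flash that were
    physically aligned at position x0 at flash time, under full motion
    extrapolation across a fixed processing delay (seconds)."""
    moving_percept = x0 + velocity * delay   # motion-based predictive shift
    flash_percept = x0                       # nothing to extrapolate
    return moving_percept - flash_percept
```

The predicted lag scales linearly with speed (here, 8 deg/s over an 80 ms delay gives 0.64 deg), matching the qualitative speed dependence of the illusion; the full model additionally attenuates the shift when motion evidence is imprecise.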
Collapse
Affiliation(s)
- Mina A. Khoei
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
| | - Guillaume S. Masson
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
| | - Laurent U. Perrinet
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
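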
| |
Collapse
|
30
|
Pio-Lopez L, Nizard A, Friston K, Pezzulo G. Active inference and robot control: a case study. J R Soc Interface 2016; 13:rsif.2016.0616. [PMID: 27683002 PMCID: PMC5046960 DOI: 10.1098/rsif.2016.0616] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Accepted: 09/01/2016] [Indexed: 11/12/2022] Open
Abstract
Active inference is a general framework for perception and action that is gaining prominence in computational and systems neuroscience but is less known outside these fields. Here, we discuss a proof-of-principle implementation of the active inference scheme for the control of the 7-DoF arm of a (simulated) PR2 robot. By manipulating visual and proprioceptive noise levels, we show under which conditions robot control under the active inference scheme is accurate. Besides accurate control, our analysis of the internal system dynamics (e.g. the dynamics of the hidden states that are inferred during the inference) sheds light on key aspects of the framework such as the quintessentially multimodal nature of control and the differential roles of proprioception and vision. In the discussion, we consider the potential importance of being able to implement active inference in robots. In particular, we briefly review the opportunities for modelling psychophysiological phenomena such as sensory attenuation and related failures of gain control, of the sort seen in Parkinson's disease. We also consider the fundamental difference between active inference and optimal control formulations, showing that in the former the heavy lifting shifts from solving a dynamical inverse problem to creating deep forward or generative models with dynamics, whose attracting sets prescribe desired behaviours.
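The control scheme's signature move, acting so that sensations come to match a belief that is itself drawn toward a goal-encoding prior, can be caricatured with a single noiseless joint. This is an illustrative sketch, not the 7-DoF PR2 implementation; the gains and precisions are invented, and the generative model is reduced to static identity mappings.

```python
def reach(target=1.0, steps=400, k_mu=0.1, k_a=0.1,
          pi_prop=1.0, pi_vis=1.0, pi_prior=0.5):
    """Minimal active-inference reaching loop (one joint, noiseless).

    Perception updates the belief `mu` by descending precision-weighted
    prediction errors; action changes the true angle `x` so that
    proprioception comes to match the belief. Because the belief is also
    pulled toward the prior (the goal), the goal is fulfilled by action
    rather than issued as a motor command.
    """
    x = 0.0    # true joint angle (hidden state of the world)
    mu = 0.0   # the agent's belief about the angle
    for _ in range(steps):
        o_prop, o_vis = x, x    # noiseless proprioceptive and visual input
        mu += k_mu * (pi_prop * (o_prop - mu)
                      + pi_vis * (o_vis - mu)
                      + pi_prior * (target - mu))
        x += k_a * pi_prop * (mu - o_prop)   # action cancels proprioceptive error
    return x, mu
```

Raising `pi_prop` relative to `pi_vis` shifts control toward proprioception and vice versa, which is the precision manipulation the paper uses to probe the multimodal nature of control; note there is no inverse model anywhere in the loop.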
Affiliation(s)
- Léo Pio-Lopez
- Pascal Institute, Clermont University, Clermont-Ferrand, France; Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Ange Nizard
- Pascal Institute, Clermont University, Clermont-Ferrand, France
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
31
Adams RA, Bauer M, Pinotsis D, Friston KJ. Dynamic causal modelling of eye movements during pursuit: Confirming precision-encoding in V1 using MEG. Neuroimage 2016; 132:175-189. [PMID: 26921713 PMCID: PMC4862965 DOI: 10.1016/j.neuroimage.2016.02.055] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Received: 09/14/2015] [Revised: 02/15/2016] [Accepted: 02/17/2016] [Indexed: 01/06/2023]
Abstract
This paper shows that it is possible to estimate the subjective precision (inverse variance) of Bayesian beliefs during oculomotor pursuit. Subjects viewed a sinusoidal target, with or without random fluctuations in its motion. Eye trajectories and magnetoencephalographic (MEG) data were recorded concurrently. The target was periodically occluded, such that its reappearance caused a visual evoked response field (ERF). Dynamic causal modelling (DCM) was used to fit models of eye trajectories and the ERFs. The DCM for pursuit was based on predictive coding and active inference, and predicts subjects' eye movements based on their (subjective) Bayesian beliefs about target (and eye) motion. The precisions of these hierarchical beliefs can be inferred from behavioural (pursuit) data. The DCM for MEG data used an established biophysical model of neuronal activity that includes parameters for the gain of superficial pyramidal cells, which is thought to encode precision at the neuronal level. Previous studies (using DCM of pursuit data) suggest that noisy target motion increases subjective precision at the sensory level: i.e., subjects attend more to the target's sensory attributes. We compared (noisy motion-induced) changes in the synaptic gain based on the modelling of MEG data to changes in subjective precision estimated using the pursuit data. We demonstrate that imprecise target motion increases the gain of superficial pyramidal cells in V1 (across subjects). Furthermore, increases in sensory precision – inferred by our behavioural DCM – correlate with the increase in gain in V1, across subjects. This is a step towards a fully integrated model of brain computations, cortical responses and behaviour that may provide a useful clinical tool in conditions like schizophrenia.
Highlights:
- The brain encodes states of the world probabilistically with means and precisions.
- Precision (inverse variance) may be encoded by the synaptic gain of pyramidal cells.
- We estimate subjects' sensory precision using a model of oculomotor pursuit and DCM.
- We estimate subjects' synaptic gain in V1 using DCM of MEG data during pursuit.
- Estimates of synaptic gain in V1 and sensory precision are significantly correlated.
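The link this abstract draws between precision and gain can be illustrated with the standard precision-weighted average from Bayesian cue combination. This is a generic sketch, not the paper's DCM, and the numbers are invented:

```python
# Posterior mean of a Gaussian prior combined with a Gaussian sensory
# sample, each weighted by its precision (inverse variance). Raising the
# sensory precision plays the role of raising the gain on sensory
# prediction errors: the estimate is pulled toward the data.

def posterior_mean(prior_mu, prior_pi, sense, sense_pi):
    # Precision-weighted average of prior belief and sensory evidence.
    return (prior_pi * prior_mu + sense_pi * sense) / (prior_pi + sense_pi)

low = posterior_mean(prior_mu=0.0, prior_pi=1.0, sense=2.0, sense_pi=1.0)
high = posterior_mean(prior_mu=0.0, prior_pi=1.0, sense=2.0, sense_pi=4.0)
print(low, high)  # 1.0 1.6
```

Quadrupling the sensory precision moves the estimate from 1.0 to 1.6, i.e. closer to the sensory sample at 2.0, which is the computational counterpart of the V1 gain increase the study reports.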
Affiliation(s)
- Rick A Adams
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
- Markus Bauer
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK; School of Psychology, University Park, Nottingham University, Nottingham, NG7 2RD, UK.
- Dimitris Pinotsis
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
32
Spratling MW. A review of predictive coding algorithms. Brain Cogn 2016; 112:92-97. [PMID: 26809759 DOI: 10.1016/j.bandc.2015.11.003] [Citation(s) in RCA: 141] [Impact Index Per Article: 17.6] [Received: 05/21/2015] [Revised: 11/09/2015] [Accepted: 11/13/2015] [Indexed: 10/22/2022]
Abstract
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding, which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.
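As a concrete instance of the shared recipe this abstract identifies (fit a generative model to sensory data by minimising prediction error), here is a single-layer linear toy in the Rao-Ballard spirit. It is an illustrative sketch only; the review covers five distinct algorithms, and this stands in for none of them exactly.

```python
import numpy as np

# Single-layer linear predictive coding. A representation r generates a
# top-down prediction W @ r; the residual (bottom-up prediction error)
# drives inference on r until the model fits the sensory input.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))        # fixed generative (prediction) weights
r_true = np.array([1.0, -0.5, 2.0])
x = W @ r_true                     # noiseless sensory input for clarity

r = np.zeros(3)                    # initial representation
for _ in range(2000):
    err = x - W @ r                # prediction error
    r += 0.05 * (W.T @ err)        # error-driven update (gradient descent
                                   # on the squared prediction error)
print(np.round(r, 3))
```

Stacking such layers, with errors passed up and predictions passed down, yields the hierarchical cortical variants the review compares; the algorithms then differ mainly in how the error is computed and how the weights themselves are learned.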
Affiliation(s)
- M W Spratling
- King's College London, Department of Informatics, London, UK.
33
Abstract
This paper considers communication in terms of inference about the behaviour of others (and our own behaviour). It is based on the premise that our sensations are largely generated by other agents like ourselves. This means, we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent - who is modelling you - can be finessed if you both possess the same model. In other words, the sensations caused by others and oneself are generated by the same process. This leads to a view of communication based upon a narrative that is shared by agents who are exchanging sensory signals. Crucially, this narrative transcends agency - and simply involves intermittently attending to and attenuating sensory input. Attending to sensations enables the shared narrative to predict the sensations generated by another (i.e. to listen), while attenuating sensory input enables one to articulate the narrative (i.e. to speak). This produces a reciprocal exchange of sensory signals that, formally, induces a generalised synchrony between internal (neuronal) brain states generating predictions in both agents. We develop the arguments behind this perspective, using an active (Bayesian) inference framework and offer some simulations (of birdsong) as proof of principle.
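The "generalised synchrony" claim has a simple dynamical skeleton: two systems with identical internal dynamics, coupled through an exchanged signal, converge onto a shared trajectory. The toy below is a generic illustration (an undamped rotation standing in for the shared narrative, with one-way "listening"), not the paper's birdsong simulation:

```python
import math

# Two agents with identical internal dynamics: a sustained rotation on a
# circle, standing in for a shared generative model of an ongoing
# narrative. Agent B attends to the signal it receives from A and is
# nudged toward it, so the two trajectories synchronise.

def rotate(x, y, theta=0.3):
    # Shared dynamics: rotation preserves the rhythm's amplitude.
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

ax, ay = 1.0, 0.0      # agent A's internal state
bx, by = -1.0, 0.5     # agent B starts somewhere quite different
for _ in range(200):
    ax, ay = rotate(ax, ay)
    bx, by = rotate(bx, by)
    # B "listens": its state is drawn toward A's emitted signal
    bx += 0.2 * (ax - bx)
    by += 0.2 * (ay - by)

print(math.hypot(ax - bx, ay - by) < 1e-9)  # True: states have converged
```

Because both agents run the same dynamics, the coupling error contracts at every step while the shared oscillation itself persists, which is the toy analogue of two speakers settling into one narrative.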
Affiliation(s)
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom.
- Christopher Frith
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, United Kingdom.
34
Adams RA, Aponte E, Marshall L, Friston KJ. Active inference and oculomotor pursuit: the dynamic causal modelling of eye movements. J Neurosci Methods 2015; 242:1-14. [PMID: 25583383 PMCID: PMC4346275 DOI: 10.1016/j.jneumeth.2015.01.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Received: 06/25/2014] [Revised: 12/30/2014] [Accepted: 01/03/2015] [Indexed: 01/01/2023]
Abstract
Highlights:
- We use a normative (Bayes optimal) model of oculomotor pursuit.
- We average the empirical responses of subjects performing a pursuit paradigm.
- We invert these responses using the pursuit model and dynamic causal modelling.
- We thereby estimate the precision of subjects’ Bayesian beliefs from their pursuit.
- This could be used to quantify abnormal precision encoding in schizophrenia.
Background: This paper introduces a new paradigm that allows one to quantify the Bayesian beliefs evidenced by subjects during oculomotor pursuit. Subjects’ eye tracking responses to a partially occluded sinusoidal target were recorded non-invasively and averaged. These response averages were then analysed using dynamic causal modelling (DCM). In DCM, observed responses are modelled using biologically plausible generative or forward models – usually biophysical models of neuronal activity.
New method: Our key innovation is to use a generative model based on a normative (Bayes-optimal) model of active inference to model oculomotor pursuit in terms of subjects’ beliefs about how visual targets move and how their oculomotor system responds. Our aim here is to establish the face validity of the approach, by manipulating the content and precision of sensory information – and examining the ensuing changes in the subjects’ implicit beliefs. These beliefs are inferred from their eye movements using the normative model.
Results: We show that on average, subjects respond to an increase in the ‘noise’ of target motion by increasing sensory precision in their models of the target trajectory. In other words, they attend more to the sensory attributes of a noisier stimulus. Conversely, subjects only change kinetic parameters in their model but not precision, in response to increased target speed.
Conclusions: Using this technique one can estimate the precisions of subjects’ hierarchical Bayesian beliefs about target motion. We hope to apply this paradigm to subjects with schizophrenia, whose pursuit abnormalities may result from the abnormal encoding of precision.
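The logic of "inverting" a pursuit model to read off a subjective precision can be caricatured without any DCM machinery. In this deliberately minimal stand-in, the gain law, the grid, and all numbers are invented: a tracking gain pi / (pi + 1) links precision to pursuit of a sinusoidal target, and the precision is recovered by searching for the best-fitting simulated trace.

```python
import math

# Toy model inversion: generate a pursuit trace from a "true" precision,
# then recover that precision by grid search over candidate model fits.
# (Illustrative only; the paper fits a full active inference model.)

def pursuit_trace(pi, n=50):
    gain = pi / (pi + 1.0)       # more precise beliefs -> tighter tracking
    return [gain * math.sin(0.2 * t) for t in range(n)]

true_pi = 3.0
data = pursuit_trace(true_pi)    # the "observed" (noiseless) eye trace

def sse(pi):
    # Sum of squared errors between model prediction and observed trace.
    return sum((m - d) ** 2 for m, d in zip(pursuit_trace(pi), data))

candidates = [0.5 * k for k in range(1, 21)]   # pi from 0.5 to 10.0
best = min(candidates, key=sse)
print(best)  # 3.0
```

Real DCM replaces the grid search with gradient-based (variational) optimisation and the toy gain law with a hierarchical generative model, but the estimation problem has the same shape: find the belief parameters under which the model reproduces the behaviour.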
Affiliation(s)
- Rick A Adams
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.
- Eduardo Aponte
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK; Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wilfriedstr. 6, 8032 Zurich, Switzerland.
- Louise Marshall
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK; Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, 33 Queen Square, London WC1N 3BG, UK.
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK.