1
Loosen AM, Seow TXF, Hauser TU. Consistency within change: Evaluating the psychometric properties of a widely used predictive-inference task. Behav Res Methods 2024;56:7410-7426. PMID: 38844601; PMCID: PMC11362202; DOI: 10.3758/s13428-024-02427-y.
Abstract
Rapid adaptation to sudden changes in the environment is a hallmark of flexible human behaviour. Many computational, neuroimaging, and even clinical investigations studying this cognitive process have relied on a behavioural paradigm known as the predictive-inference task. However, the psychometric quality of this task has never been examined, leaving unanswered whether it is indeed suited to capture behavioural variation at the within- and between-subject level. Using a large-scale test-retest design (T1: N = 330; T2: N = 219), we assessed the internal (internal consistency) and temporal (test-retest reliability) stability of the task's most used measures. We show that the main measures capturing flexible belief and behavioural adaptation yield good internal consistency and overall satisfactory test-retest reliability. However, some more complex markers of flexible behaviour show lower psychometric quality. Our findings have implications for the large corpus of previous studies using this task and provide clear guidance as to which measures should and should not be used in future studies.
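The two reliability notions named above can be made concrete with a generic sketch (this is not the authors' analysis code; their permutation-based split-half and ICC procedures are not reproduced here, and all inputs below are placeholders). It computes an odd/even split-half estimate with the Spearman-Brown correction and a simple Pearson test-retest correlation for a per-subject summary score.

    import numpy as np

    def split_half_reliability(trial_scores):
        """Odd/even split-half consistency with the Spearman-Brown correction.
        trial_scores: (n_subjects, n_trials) array of a per-trial task measure
        (e.g., single-trial learning rates); hypothetical input."""
        odd = trial_scores[:, 0::2].mean(axis=1)
        even = trial_scores[:, 1::2].mean(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)  # Spearman-Brown step-up

    def test_retest_reliability(score_t1, score_t2):
        """Pearson correlation of the same summary score at T1 and T2."""
        return np.corrcoef(score_t1, score_t2)[0, 1]

    rng = np.random.default_rng(0)
    t1_trials = rng.normal(size=(330, 200))   # placeholder T1 data
    t2_scores = rng.normal(size=219)          # placeholder T2 summary scores
    print(split_half_reliability(t1_trials))
    print(test_retest_reliability(t1_trials[:219].mean(axis=1), t2_scores))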
Affiliation(s)
- Alisa M Loosen
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK.
- Wellcome Centre for Human Neuroimaging, University College London, London, UK.
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Tricia X F Seow
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Tobias U Hauser
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Department of Psychiatry and Psychotherapy, Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- German Center for Mental Health (DZPG), Tübingen, Germany
2
Howlett JR, Paulus MP. Out of control: computational dynamic control dysfunction in stress- and anxiety-related disorders. Discov Ment Health 2024;4:5. PMID: 38236488; PMCID: PMC10796870; DOI: 10.1007/s44192-023-00058-x.
Abstract
Control theory, which has played a central role in technological progress over the last 150 years, has also yielded critical insights into biology and neuroscience. Recently, there has been a surging interest in integrating control theory with computational psychiatry. Here, we review the state of the field of using control theory approaches in computational psychiatry and show that recent research has mapped a neural control circuit consisting of frontal cortex, parietal cortex, and the cerebellum. This basic feedback control circuit is modulated by estimates of reward and cost via the basal ganglia as well as by arousal states coordinated by the insula, dorsal anterior cingulate cortex, amygdala, and locus coeruleus. One major approach within the broader field of control theory, known as proportional-integral-derivative (PID) control, has shown promise as a model of human behavior that enables precise and reliable estimates of underlying control parameters at the individual level. These control parameters correlate with self-reported fear and with both structural and functional variation in affect-related brain regions. This suggests that dysfunctional engagement of stress and arousal systems may suboptimally modulate parameters of domain-general goal-directed control algorithms, impairing performance in complex tasks involving movement, cognition, and affect. Future directions include clarifying the causal role of control deficits in stress- and anxiety-related disorders and developing clinically useful tools based on insights from control theory.
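To illustrate the control scheme the review centres on, here is a minimal discrete-time PID controller applied to a one-dimensional tracking task. This is a generic textbook formulation, not the behavioural model or parameter-estimation procedure reviewed by the authors; the plant and gain values are invented.

    import numpy as np

    def pid_track(target, kp, ki, kd, dt=0.05, n_steps=400):
        """Track a moving target with u = kp*e + ki*integral(e) + kd*de/dt,
        where e = target - position. All parameter values are illustrative."""
        pos, integ, prev_err = 0.0, 0.0, 0.0
        trace = []
        for step in range(n_steps):
            err = target(step * dt) - pos
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv   # PID control signal
            pos += u * dt                            # first-order plant: u acts as a velocity
            prev_err = err
            trace.append(pos)
        return np.array(trace)

    trajectory = pid_track(target=lambda t: np.sin(t), kp=2.0, ki=0.5, kd=0.1)

In the individual-differences setting described above, kp, ki, and kd would be fitted to each participant's tracking behaviour rather than chosen by hand.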
Affiliation(s)
- Jonathon R Howlett
- VA San Diego Healthcare System, 3350 La Jolla Village Dr, San Diego, CA, 92161, USA.
- Department of Psychiatry, University of California San Diego, La Jolla, CA, USA.
3
Abstract
Mood is an integrative and diffuse affective state that is thought to exert a pervasive effect on cognition and behavior. At the same time, mood itself is thought to fluctuate slowly as a product of feedback from interactions with the environment. Here we present a new computational theory of the valence of mood, the Integrated Advantage model, that seeks to account for this bidirectional interaction. Adopting theoretical formalisms from reinforcement learning, we propose to conceptualize the valence of mood as a leaky integral of an agent's appraisals of the Advantage of its actions. This model generalizes and extends previous models of mood wherein affective valence was conceptualized as a moving average of reward prediction errors. We give a full theoretical derivation of the Integrated Advantage model and provide a functional explanation of how an integrated-Advantage variable could be deployed adaptively by a biological agent to accelerate learning in complex and/or stochastic environments. Specifically, drawing on stochastic optimization theory, we propose that an agent can utilize our hypothesized form of mood to approximate a momentum-based update to its behavioral policy, thereby facilitating rapid learning of optimal actions. We then show how this model of mood provides a principled and parsimonious explanation for a number of contextual effects on mood from the affective science literature, including expectation- and surprise-related effects, counterfactual effects from information about foregone alternatives, action-typicality effects, and action/inaction asymmetry.
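A deliberately simplified numerical sketch of the idea summarised above: mood is a leaky integral of appraisals of the advantage of chosen actions (outcome relative to the value expected under the current policy). The bandit environment, parameter values, and variable names are invented for illustration, and the momentum-based policy update derived in the paper is omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    n_arms, n_trials = 2, 500
    reward_prob = np.array([0.3, 0.7])      # invented two-armed bandit
    q = np.zeros(n_arms)                    # action values
    mood = 0.0                              # valence of mood
    alpha, lam, beta = 0.1, 0.9, 3.0        # learning rate, mood leak, softmax inverse temperature

    mood_trace = []
    for t in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax policy
        a = rng.choice(n_arms, p=p)
        r = float(rng.random() < reward_prob[a])
        v = p @ q                                       # value expected under the current policy
        advantage = r - v                               # sampled advantage of the chosen action
        mood = lam * mood + (1 - lam) * advantage       # leaky integration -> mood valence
        q[a] += alpha * (r - q[a])                      # ordinary delta-rule value update
        mood_trace.append(mood)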
Affiliation(s)
- Yael Niv
- Princeton Neuroscience Institute and Department of Psychology
4
Sosa R, Alcalá E. The nervous system as a solution for implementing closed negative feedback control loops. J Exp Anal Behav 2022;117:279-300. PMID: 35119112; DOI: 10.1002/jeab.736.
Abstract
Behavior can be regarded as the output of a system (action), as a function linking stimulus to response (reaction), or as an abstraction of the bidirectional relationship between the environment and the organism (interaction). When considering the latter possibility, a relevant question arises concerning how an organism can materially and continuously implement such a relationship during its lifetime in order to perpetuate itself. The feedback control approach has taken up the task of answering just that question. Over the last several decades, this approach has progressed and has started to be recognized as a paradigm shift, superseding certain canonical notions in mainstream behavior analysis, cognitive psychology, and even neuroscience. In this paper, we describe the main features of feedback control theory and its associated techniques, concentrating on its critiques of behavior analysis, as well as the commonalities they share. While some of feedback control theory's major critiques of behavior analysis arise from the fact that the two approaches focus on different levels of organization, we believe that some are legitimate and meaningful. Moreover, feedback control theory seems to blend with neurobiology more smoothly than canonical behavior analysis, which subsists only in a scattered handful of fields. If this paradigm shift truly takes place, behavior analysts, whether they accept or reject this new currency, should be mindful of the basics of the feedback control approach.
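For readers new to the formalism being contrasted with behavior analysis, a closed negative feedback loop can be written in a few lines: a comparator computes the error between a reference and the sensed variable, the effector acts against that error, and the environment adds disturbances. This is a generic toy in the spirit of perceptual control formulations, not a model taken from the paper.

    import numpy as np

    def closed_loop(reference=10.0, gain=0.8, n_steps=200, noise_sd=0.5, seed=0):
        """Negative feedback: action opposes the error between the sensed
        variable and the reference, so the loop settles near the reference
        despite random disturbances. All values are illustrative."""
        rng = np.random.default_rng(seed)
        sensed, history = 0.0, []
        for _ in range(n_steps):
            disturbance = rng.normal(0.0, noise_sd)   # environmental perturbation
            error = reference - sensed                # comparator
            action = gain * error                     # effector output
            sensed = sensed + action + disturbance    # feedback through the environment
            history.append(sensed)
        return np.array(history)

    trace = closed_loop()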
Affiliation(s)
- Emmanuel Alcalá
- Instituto Tecnológico de Estudios Superiores de Occidente, Guadalajara, México
5
Guastello SJ, Futch W, Mirabito L, Green D, Marsicek L, Witty B. Heuristics associated with forecasting chaotic events: a rare cognitive ability. Theor Issues Ergon Sci 2020. DOI: 10.1080/1463922x.2020.1818001.
Affiliation(s)
- William Futch
- Department of Psychology, Marquette University, Milwaukee, WI, USA
- Lucas Mirabito
- Department of Psychology, Marquette University, Milwaukee, WI, USA
- Dominique Green
- Department of Psychology, Marquette University, Milwaukee, WI, USA
- Laura Marsicek
- Department of Psychology, Marquette University, Milwaukee, WI, USA
- Brittany Witty
- Department of Psychology, Marquette University, Milwaukee, WI, USA
6
Brain dynamics for confidence-weighted learning. PLoS Comput Biol 2020;16:e1007935. PMID: 32484806; PMCID: PMC7292419; DOI: 10.1371/journal.pcbi.1007935.
Abstract
Learning in a changing, uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use a confidence-weighting principle to regulate learning: for a given surprise, the update is smaller when the confidence about the prediction was higher. Prior behavioral evidence indicates that human learning adheres to this confidence-weighting principle. Here, we explored the human brain dynamics subtending the confidence-weighting of learning using magnetoencephalography (MEG). During our volatile probability learning task, subjects' confidence reports conformed with Bayesian inference. MEG revealed several stimulus-evoked brain responses whose amplitude reflected surprise, and some of them were further shaped by confidence: surprise amplified the stimulus-evoked response whereas confidence dampened it. Confidence about predictions also modulated several aspects of the brain state: pupil-linked arousal and beta-range (15–30 Hz) oscillations. The brain state in turn modulated specific stimulus-evoked surprise responses following the confidence-weighting principle. Our results thus indicate that there exist, in the human brain, signals reflecting surprise that are dampened by confidence in a way that is appropriate for learning according to Bayesian inference. They also suggest a mechanism for confidence-weighted learning: confidence about predictions would modulate intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations.

Learning in a changing and uncertain world is difficult. In this context, a discrepancy between my current belief and new observations may reflect random fluctuations (e.g., my commute train is unexpectedly late, but that happens sometimes); if so, I should ignore this discrepancy and not erratically change my belief. However, this discrepancy could also denote a profound change (e.g., the train company changed and is less reliable); in this case, I should promptly revise my current belief. Human learning is adaptive: we change how much we learn from new observations and, in particular, we promote flexibility when facing profound changes. A mathematical analysis of the problem shows that we should increase flexibility when the confidence in our current belief is low, which occurs when a change is suspected. Here, I show that human learners entertain rational confidence levels during the learning of changing probabilities. This confidence modulates intrinsic properties of the brain state (oscillatory activity and neuromodulation), which in turn amplifies or reduces, depending on whether confidence is low or high, the neural responses to discrepant observations. This confidence-weighting mechanism could underpin adaptive learning.
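A generic sketch of the confidence-weighting principle described above: the update to a predicted probability is scaled down when confidence in the current prediction is high and scaled up after surprising outcomes. The heuristic confidence rule below is an illustrative stand-in, not the Bayesian ideal observer used in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Volatile Bernoulli sequence: the hidden probability jumps at occasional change points.
    true_p, outcomes = 0.8, []
    for t in range(600):
        if rng.random() < 0.02:
            true_p = rng.random()
        outcomes.append(float(rng.random() < true_p))

    p_hat, confidence = 0.5, 1.0            # prediction and (unitless) confidence
    estimates = []
    for y in outcomes:
        surprise = abs(y - p_hat)           # crude surprise proxy, not Shannon surprise
        lr = 1.0 / (1.0 + confidence)       # higher confidence -> smaller update
        p_hat += lr * (y - p_hat)
        # Confidence builds up with consistent evidence and collapses after surprises.
        confidence = max(0.1, confidence + 0.2 - 1.5 * surprise)
        estimates.append(p_hat)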
7
8
Cassanello CR, Ostendorf F, Rolfs M. A generative learning model for saccade adaptation. PLoS Comput Biol 2019;15:e1006695. PMID: 31398185; PMCID: PMC6703699; DOI: 10.1371/journal.pcbi.1006695.
Abstract
Plasticity in the oculomotor system ensures that saccadic eye movements reliably meet their visual goal: to bring regions of interest into foveal, high-acuity vision. Here, we present a comprehensive description of sensorimotor learning in saccades. We induced continuous adaptation of saccade amplitudes using a double-step paradigm, in which participants saccade to a peripheral target stimulus, which then undergoes a surreptitious, intra-saccadic shift (ISS) as the eyes are in flight. In our experiments, the ISS followed a systematic variation, increasing or decreasing from one saccade to the next as a sinusoidal function of the trial number. Over a large range of frequencies, we confirm that adaptation gain shows (1) a periodic response, reflecting the frequency of the ISS with a delay of a number of trials, and (2) a simultaneous drift towards lower saccade gains. We then show that state-space-based linear time-invariant systems (LTIS) represent suitable generative models for this evolution of saccade gain over time. This state-equation algorithm computes the prediction of an internal (or hidden-state) variable by learning from recent feedback errors, and it can be compared to experimentally observed adaptation gain. The algorithm also includes a forgetting rate that quantifies per-trial leaks in the adaptation gain, as well as a systematic, non-error-based bias. Finally, we study how the parameters of the generative models depend on features of the ISS. Driven by a sinusoidal disturbance, the state equation admits an exact analytical solution that expresses the parameters of the phenomenological description as functions of those of the generative model. Together with statistical model selection criteria, we use these correspondences to characterize and refine the structure of compatible state-equation models. We discuss the relation of these findings to established results and suggest that they may guide further design of experimental research across domains of sensorimotor adaptation.
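A compact simulation in the spirit of the state-space (LTIS) account described above: a hidden adaptation state is updated from the most recent error, leaks at a forgetting rate, and carries a small non-error-based bias, while the intra-saccadic shift (ISS) varies sinusoidally over trials. Parameter values and the error convention are illustrative, not fitted values from the paper.

    import numpy as np

    def simulate_ltis(n_trials=600, freq=1/60, amp=0.15,
                      learn=0.08, forget=0.02, bias=-0.0005):
        """x[t+1] = (1 - forget) * x[t] + learn * error[t] + bias,
        with error[t] = iss[t] - x[t] (the uncompensated shift)."""
        trials = np.arange(n_trials)
        iss = amp * np.sin(2 * np.pi * freq * trials)   # sinusoidal intra-saccadic shift
        x = np.zeros(n_trials)                          # adaptation state (gain change)
        for t in range(n_trials - 1):
            error = iss[t] - x[t]
            x[t + 1] = (1 - forget) * x[t] + learn * error + bias
        return iss, x

    iss, adaptation = simulate_ltis()
    # The state tracks the sinusoid with a lag of several trials and carries a small
    # downward offset from the forgetting and bias terms, qualitatively echoing the
    # periodic response and gain decrease described in the abstract.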
Affiliation(s)
- Carlos R. Cassanello
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
- Florian Ostendorf
- Department of Neurology, Charité – University Medicine Berlin, Berlin, Germany
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Berlin, Germany
9
Baltieri M, Buckley CL. PID control as a process of active inference with linear generative models. Entropy (Basel) 2019;21:257. PMID: 33266972; PMCID: PMC7514737; DOI: 10.3390/e21030257.
Abstract
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information and control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to existing control-theoretical approaches routinely used to study and explain biological systems. For example, PID (proportional-integral-derivative) control has recently been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and of robust adaptation in biochemical networks. In this work, we show how PID controllers can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when using approximate linear generative models of the world. This more general interpretation also provides a new perspective on traditional problems of PID controllers, such as parameter tuning and the need to balance the performance and robustness of a controller. Specifically, we show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating different prediction errors in the free energy functional.
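One schematic way to convey the flavour of this correspondence (a hedged paraphrase, not the paper's derivation): under a linear Gaussian generative model the Laplace-approximated free energy is a precision-weighted sum of squared prediction errors, and acting by gradient descent on it accumulates a precision-weighted error, so the precisions take on the role of controller gains.

    % Schematic free energy for a linear Gaussian model with observation y,
    % belief \mu, desired value \rho, and precisions \pi_y, \pi_\rho:
    F(\mu, y) \approx \tfrac{1}{2}\,\pi_y\,(y - \mu)^2
                + \tfrac{1}{2}\,\pi_\rho\,(\mu - \rho)^2 + \mathrm{const.}

    % Acting by gradient descent on F (assuming, for illustration, dy/da = 1):
    \dot{a} = -\kappa\,\frac{\partial F}{\partial a} = -\kappa\,\pi_y\,(y - \mu)

    % The action therefore integrates a precision-weighted error (integral-like
    % control); proportional and derivative-like terms appear when higher temporal
    % orders (generalized coordinates) of y and \mu are included, with their
    % precisions acting as the corresponding gains.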
Affiliation(s)
- Manuel Baltieri
- EASY Group—Sussex Neuroscience, Department of Informatics, University of Sussex, Brighton BN1 9RH, UK