1. Ging-Jehli NR, Kuhn M, Blank JM, Chanthrakumar P, Steinberger DC, Yu Z, Herrington TM, Dillon DG, Pizzagalli DA, Frank MJ. Cognitive Signatures of Depressive and Anhedonic Symptoms and Affective States Using Computational Modeling and Neurocognitive Testing. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 2024; 9:726-736. PMID: 38401881; PMCID: PMC11227402; DOI: 10.1016/j.bpsc.2024.02.005
Abstract
BACKGROUND: Deeper phenotyping may improve our understanding of depression. Because depression is heterogeneous, extracting cognitive signatures associated with severity of depressive symptoms, anhedonia, and affective states is a promising approach.
METHODS: Sequential sampling models decomposed behavior from an adaptive approach-avoidance conflict task into computational parameters quantifying latent cognitive signatures. Fifty unselected participants completed clinical scales and the approach-avoidance conflict task by either approaching or avoiding trials offering monetary rewards and electric shocks.
RESULTS: Decision dynamics were best captured by a sequential sampling model with linear collapsing boundaries varying by net offer values, and with drift rates varying by trial-specific reward and aversion, reflecting net evidence accumulation toward approach or avoidance. Unlike conventional behavioral measures, these computational parameters revealed distinct associations with self-reported symptoms. Specifically, passive avoidance tendencies, indexed by starting point biases, were associated with greater severity of depressive symptoms (R = 0.34, p = .019) and anhedonia (R = 0.49, p = .001). Depressive symptoms were also associated with slower encoding and response execution, indexed by nondecision time (R = 0.37, p = .011). Higher reward sensitivity for offers with negative net values, indexed by drift rates, was linked to more sadness (R = 0.29, p = .042) and lower positive affect (R = -0.33, p = .022). Conversely, higher aversion sensitivity was associated with more tension (R = 0.33, p = .025). Finally, less cautious response patterns, indexed by boundary separation, were linked to more negative affect (R = -0.40, p = .005).
CONCLUSIONS: We demonstrated the utility of multidimensional computational phenotyping, which could be applied to clinical samples to improve characterization and treatment selection.
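The winning model's core mechanics, a drift process with a bias-shifted starting point running toward linearly collapsing bounds, can be sketched in a few lines. This is an illustrative simulation only, not the authors' fitted model; the parameter names and values (`drift`, `bound0`, `collapse_rate`, `start_bias`) are hypothetical:

```python
import numpy as np

def simulate_collapsing_bound_ddm(drift, bound0, collapse_rate, start_bias=0.0,
                                  ndt=0.3, dt=0.001, max_t=5.0, rng=None):
    """One trial of a diffusion process between linearly collapsing bounds.

    Returns (choice, rt): choice is +1 (upper, e.g. approach) or -1 (lower,
    e.g. avoid); rt adds a fixed non-decision time for encoding/motor stages.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = start_bias * bound0        # starting-point bias, as a fraction of the bound
    t = 0.0
    while t < max_t:
        bound = max(bound0 - collapse_rate * t, 0.01)   # linear collapse
        if x >= bound:
            return +1, t + ndt
        if x <= -bound:
            return -1, t + ndt
        x += drift * dt + np.sqrt(dt) * rng.normal()    # noisy evidence step
        t += dt
    return (1 if x > 0 else -1), max_t + ndt            # forced guess at timeout

rng = np.random.default_rng(0)
trials = [simulate_collapsing_bound_ddm(drift=1.0, bound0=1.5, collapse_rate=0.4,
                                        rng=rng) for _ in range(200)]
```

Setting `start_bias < 0` shifts choices toward avoidance without touching the drift rate, which is how a starting-point bias can capture passive avoidance separately from reward/aversion sensitivity.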
Affiliation(s)
- Nadja R Ging-Jehli
- Carney Institute for Brain Science, Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, Rhode Island.
- Manuel Kuhn
- Center for Depression, Anxiety and Stress Research, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Jacob M Blank
- Center for Depression, Anxiety and Stress Research, McLean Hospital, Belmont, Massachusetts
- Pranavan Chanthrakumar
- Carney Institute for Brain Science, Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, Rhode Island; Warren Alpert Medical School of Brown University, Providence, Rhode Island
- David C Steinberger
- Center for Depression, Anxiety and Stress Research, McLean Hospital, Belmont, Massachusetts
- Zeyang Yu
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Todd M Herrington
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Daniel G Dillon
- Center for Depression, Anxiety and Stress Research, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Diego A Pizzagalli
- Center for Depression, Anxiety and Stress Research, McLean Hospital, Belmont, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
- Michael J Frank
- Carney Institute for Brain Science, Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, Rhode Island
2. Rmus M, Pan TF, Xia L, Collins AGE. Artificial neural networks for model identification and parameter estimation in computational cognitive models. PLoS Comput Biol 2024; 20:e1012119. PMID: 38748770; PMCID: PMC11132492; DOI: 10.1371/journal.pcbi.1012119
Abstract
Computational cognitive models have been used extensively to formalize cognitive processes. Model parameters offer a simple way to quantify individual differences in how humans process information. Similarly, model comparison allows researchers to identify which theories, embedded in different models, provide the best accounts of the data. Cognitive modeling relates models to data with statistical tools that often rely on computing or estimating the likelihood of the data under the model. However, this likelihood is computationally intractable for a substantial number of models. These models may embody reasonable theories of cognition, but are often under-explored due to the limited range of tools available to relate them to data. We contribute to filling this gap in a simple way using artificial neural networks (ANNs) to map data directly onto model identity and parameters, bypassing likelihood estimation. We test our instantiation of an ANN as a cognitive model fitting tool on classes of cognitive models with strong inter-trial dependencies (such as reinforcement learning models), which offer unique challenges to most methods. We show that we can adequately perform both parameter estimation and model identification using our ANN approach, including for models that cannot be fit using traditional likelihood-based methods. We further discuss our work in the context of ongoing research leveraging simulation-based approaches to parameter estimation and model identification, and how these approaches broaden the class of cognitive models researchers can quantitatively investigate.
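The core idea, replacing likelihood evaluation with a network trained to regress parameters directly from simulated behavior, can be illustrated with a toy Q-learning agent and an off-the-shelf multilayer perceptron. This is a minimal sketch of the general approach, not the authors' architecture; the bandit settings, network size, and parameter ranges are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_rl(alpha, beta, n_trials=100, rng=None):
    """Simulate a Q-learning agent on a two-armed bandit (reward probs 0.8 / 0.2)."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.zeros(2)
    data = np.zeros((n_trials, 2))        # per-trial (choice, reward)
    for t in range(n_trials):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))   # softmax over 2 arms
        c = int(rng.random() < p1)
        r = float(rng.random() < (0.8 if c == 1 else 0.2))
        q[c] += alpha * (r - q[c])        # delta-rule update
        data[t] = (c, r)
    return data.ravel()                   # flatten to a feature vector

rng = np.random.default_rng(1)
n_sims = 1500
alphas = rng.uniform(0.05, 0.95, n_sims)
betas = rng.uniform(1.0, 8.0, n_sims)
X = np.stack([simulate_rl(a, b, rng=rng) for a, b in zip(alphas, betas)])
y = np.column_stack([alphas, betas])

# Train an MLP to map raw choice/reward sequences directly to parameters,
# sidestepping likelihood computation entirely.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X[:1200], y[:1200])
r_alpha = np.corrcoef(y[1200:, 0], net.predict(X[1200:])[:, 0])[0, 1]
```

In practice one would use far more simulated datasets, sequence-aware architectures, and careful held-out evaluation; the point is only that the network learns a mapping from raw trial sequences to parameters without ever computing a likelihood.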
Affiliation(s)
- Milena Rmus
- Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America
- Ti-Fen Pan
- Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America
- Liyu Xia
- Department of Mathematics, University of California, Berkeley, Berkeley, California, United States of America
- Anne G. E. Collins
- Department of Psychology, University of California, Berkeley, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
3. Grahek I, Leng X, Musslick S, Shenhav A. Control adjustment costs limit goal flexibility: Empirical evidence and a computational account. bioRxiv 2024:2023.08.22.554296. PMID: 37662382; PMCID: PMC10473589; DOI: 10.1101/2023.08.22.554296
Abstract
A cornerstone of human intelligence is the ability to flexibly adjust our cognition and behavior as our goals change. For instance, achieving some goals requires efficiency, while others require caution. Adapting to these changing goals requires corresponding adjustments in cognitive control (e.g., levels of attention, response thresholds). However, adjusting our control to meet new goals comes at a cost: we are better at achieving a goal in isolation than when transitioning between goals. The source of these control adjustment costs remains poorly understood, and the bulk of our understanding of such costs comes from settings in which participants transition between discrete task sets, rather than performance goals. Across four experiments, we show that adjustments in continuous control states incur a performance cost, and that a dynamical systems model can explain the source of these costs. Participants performed a single cognitively demanding task under varying performance goals (e.g., to be fast or to be accurate). We modeled control allocation to include a dynamic process of adjusting from one's current control state to a target state for a given performance goal. By incorporating inertia into this adjustment process, our model accounts for our empirical findings that people under-shoot their target control state more (i.e., exhibit larger adjustment costs) when (a) goals switch rather than remain fixed over a block (Study 1); (b) target control states are more distant from one another (Study 2); (c) less time is given to adjust to the new goal (Study 3); and (d) they anticipate having to switch goals more frequently (Study 4). Our findings characterize the costs of adjusting control to meet changing goals, and show that these costs can emerge directly from cognitive control dynamics. In so doing, they shed new light on the sources of and constraints on flexibility in human goal-directed behavior.
Affiliation(s)
- Ivan Grahek
- Department of Cognitive, Linguistic, and Psychological Sciences; Carney Institute for Brain Science; Brown University; Providence, RI, USA
- Xiamin Leng
- Department of Cognitive, Linguistic, and Psychological Sciences; Carney Institute for Brain Science; Brown University; Providence, RI, USA
- Sebastian Musslick
- Department of Cognitive, Linguistic, and Psychological Sciences; Carney Institute for Brain Science; Brown University; Providence, RI, USA
- Institute of Cognitive Science; Osnabrück University; Osnabrück, Germany
- Amitai Shenhav
- Department of Cognitive, Linguistic, and Psychological Sciences; Carney Institute for Brain Science; Brown University; Providence, RI, USA
4. Rmus M, Pan TF, Xia L, Collins AGE. Artificial neural networks for model identification and parameter estimation in computational cognitive models. bioRxiv 2024:2023.09.14.557793. PMID: 37767088; PMCID: PMC10521012; DOI: 10.1101/2023.09.14.557793
Abstract
Computational cognitive models have been used extensively to formalize cognitive processes. Model parameters offer a simple way to quantify individual differences in how humans process information. Similarly, model comparison allows researchers to identify which theories, embedded in different models, provide the best accounts of the data. Cognitive modeling relates models to data with statistical tools that often rely on computing or estimating the likelihood of the data under the model. However, this likelihood is computationally intractable for a substantial number of models. These models may embody reasonable theories of cognition, but are often under-explored due to the limited range of tools available to relate them to data. We contribute to filling this gap in a simple way using artificial neural networks (ANNs) to map data directly onto model identity and parameters, bypassing likelihood estimation. We test our instantiation of an ANN as a cognitive model fitting tool on classes of cognitive models with strong inter-trial dependencies (such as reinforcement learning models), which offer unique challenges to most methods. We show that we can adequately perform both parameter estimation and model identification using our ANN approach, including for models that cannot be fit using traditional likelihood-based methods. We further discuss our work in the context of ongoing research leveraging simulation-based approaches to parameter estimation and model identification, and how these approaches broaden the class of cognitive models researchers can quantitatively investigate.
5. Murrow M, Holmes WR. PyBEAM: A Bayesian approach to parameter inference for a wide class of binary evidence accumulation models. Behav Res Methods 2024; 56:2636-2656. PMID: 37550470; DOI: 10.3758/s13428-023-02162-w
Abstract
Many decision-making theories are encoded in a class of processes known as evidence accumulation models (EAM). These assume that noisy evidence stochastically accumulates until a set threshold is reached, triggering a decision. One of the most successful and widely used of this class is the Diffusion Decision Model (DDM). The DDM however is limited in scope and does not account for processes such as evidence leakage, changes of evidence, or time varying caution. More complex EAMs can encode a wider array of hypotheses, but are currently limited by computational challenges. In this work, we develop the Python package PyBEAM (Bayesian Evidence Accumulation Models) to fill this gap. Toward this end, we develop a general probabilistic framework for predicting the choice and response time distributions for a general class of binary decision models. In addition, we have heavily computationally optimized this modeling process and integrated it with PyMC, a widely used Python package for Bayesian parameter estimation. This 1) substantially expands the class of EAM models to which Bayesian methods can be applied, 2) reduces the computational time to do so, and 3) lowers the entry fee for working with these models. Here we demonstrate the concepts behind this methodology, its application to parameter recovery for a variety of models, and apply it to a recently published data set to demonstrate its practical use.
Affiliation(s)
- Matthew Murrow
- Department of Physics and Astronomy, Vanderbilt University, 6301 Stevenson Science Center, Nashville, 37212, TN, USA
- William R Holmes
- Cognitive Science Program and Department of Mathematics, Indiana University, 1001 E. 10th St., Bloomington, 47405, IN, USA.
6. Nunez MD, Fernandez K, Srinivasan R, Vandekerckhove J. A tutorial on fitting joint models of M/EEG and behavior to understand cognition. Behav Res Methods 2024. PMID: 38409458; DOI: 10.3758/s13428-023-02331-x
Abstract
We present motivation and practical steps necessary to find parameter estimates of joint models of behavior and neural electrophysiological data. This tutorial is written for researchers wishing to build joint models of human behavior and scalp and intracranial electroencephalographic (EEG) or magnetoencephalographic (MEG) data, and more specifically those researchers who seek to understand human cognition. Although these techniques could easily be applied to animal models, the focus of this tutorial is on human participants. Joint modeling of M/EEG and behavior requires some knowledge of existing computational and cognitive theories, M/EEG artifact correction, M/EEG analysis techniques, cognitive modeling, and programming for statistical modeling implementation. This paper seeks to give an introduction to these techniques as they apply to estimating parameters from neurocognitive models of M/EEG and human behavior, and to evaluate model results and compare models. Due to our research and knowledge on the subject matter, our examples in this paper will focus on testing specific hypotheses in human decision-making theory. However, most of the motivation and discussion of this paper applies across many modeling procedures and applications. We provide Python (and linked R) code examples in the tutorial and appendix. Readers are encouraged to try the exercises at the end of the document.
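A common form of such a joint model links a single-trial neural measure to a diffusion-model parameter, for example letting drift rate vary with trial-wise EEG amplitude. The generative side of that linkage can be sketched as follows; the linking coefficients `b0`, `b1` and the simulated EEG measure are hypothetical, and real inference would use the Bayesian tooling the tutorial describes rather than this forward simulation:

```python
import numpy as np

def ddm_trial(v, a=1.2, ndt=0.3, dt=0.002, max_t=5.0, rng=None):
    """One fixed-bound diffusion trial; returns (upper_hit, rt)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < a and t < max_t:
        x += v * dt + np.sqrt(dt) * rng.normal()
        t += dt
    return int(x >= a), t + ndt

rng = np.random.default_rng(7)
b0, b1 = 1.0, 0.8              # hypothetical neural-to-drift linking coefficients
eeg = rng.normal(size=400)     # standardized single-trial EEG amplitudes
trials = [ddm_trial(b0 + b1 * z, rng=rng) for z in eeg]
rts = np.array([rt for _, rt in trials])
r = np.corrcoef(eeg, -rts)[0, 1]   # stronger neural signal -> faster responses
```

Inference inverts this generative story: given observed choices, RTs, and EEG, estimate b0 and b1 with uncertainty, which is what the joint Bayesian models in the tutorial formalize.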
Affiliation(s)
- Michael D Nunez
- Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands.
- Kianté Fernandez
- Department of Psychology, University of California, Los Angeles, CA, USA
- Ramesh Srinivasan
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Department of Biomedical Engineering, University of California, Irvine, CA, USA
- Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Joachim Vandekerckhove
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Department of Statistics, University of California, Irvine, CA, USA
7. Rasanan AHH, Rad JA, Sewell DK. Are there jumps in evidence accumulation, and what, if anything, do they reflect psychologically? An analysis of Lévy Flights models of decision-making. Psychon Bull Rev 2024; 31:32-48. PMID: 37528276; DOI: 10.3758/s13423-023-02284-4
Abstract
According to existing theories of simple decision-making, decisions are initiated by continuously sampling and accumulating perceptual evidence until a threshold value has been reached. Many models, such as the diffusion decision model, assume a noisy accumulation process, described mathematically as a stochastic Wiener process with Gaussian distributed noise. Recently, an alternative account of decision-making has been proposed in the Lévy Flights (LF) model, in which accumulation noise is characterized by a heavy-tailed power-law distribution, controlled by a parameter, [Formula: see text]. The LF model produces sudden large "jumps" in evidence accumulation that are not produced by the standard Wiener diffusion model, which some have argued provide better fits to data. It remains unclear, however, whether jumps in evidence accumulation have any real psychological meaning. Here, we investigate the conjecture by Voss et al. (Psychonomic Bulletin & Review, 26(3), 813-832, 2019) that jumps might reflect sudden shifts in the source of evidence people rely on to make decisions. We reason that if jumps are psychologically real, we should observe systematic reductions in jumps as people become more practiced with a task (i.e., as people converge on a stable decision strategy with experience). We fitted five versions of the LF model to behavioral data from a study by Evans and Brown (Psychonomic Bulletin & Review, 24(2), 597-606, 2017), using a five-layer deep inference neural network for parameter estimation. The analysis revealed systematic reductions in jumps as a function of practice, such that the LF model more closely approximated the standard Wiener model over time. This trend could not be attributed to other sources of parameter variability, speaking against the possibility of trade-offs with other model parameters. Our analysis suggests that jumps in the LF model might be capturing strategy instability exhibited by relatively inexperienced observers early on in task performance. We conclude that further investigation of a potential psychological interpretation of jumps in evidence accumulation is warranted.
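The contrast between Wiener and Lévy accumulation comes down to the increment distribution: Gaussian noise when the stability parameter α = 2, heavy-tailed α-stable noise when α < 2, which occasionally injects large jumps into the evidence path. A minimal simulation sketch of that idea (not the authors' fitting pipeline; the dt**(1/α) scale is a standard discretization approximation):

```python
import numpy as np
from scipy.stats import levy_stable

def levy_accumulator(drift, alpha, a=1.0, dt=0.005, max_t=3.0, rng=None):
    """Accumulate evidence with alpha-stable increments until +/-a is crossed.

    alpha = 2 recovers Gaussian (Wiener-like) diffusion; alpha < 2 gives
    heavy-tailed noise, producing occasional large jumps in the evidence path.
    Returns (choice, rt); choice 0 means no boundary was crossed by max_t.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = int(max_t / dt)
    noise = levy_stable.rvs(alpha, 0.0, scale=dt ** (1.0 / alpha),
                            size=n, random_state=rng)
    path = np.cumsum(drift * dt + noise)
    crossed = np.nonzero(np.abs(path) >= a)[0]
    if crossed.size == 0:
        return 0, max_t
    k = crossed[0]
    return (1 if path[k] > 0 else -1), (k + 1) * dt

rng = np.random.default_rng(3)
levy_trials = [levy_accumulator(drift=2.0, alpha=1.5, rng=rng) for _ in range(100)]
```

Fitting α alongside the usual diffusion parameters is what lets the LF model detect jump-like behavior; the paper's finding is that fitted α moves toward 2 (the Wiener case) as participants gain practice.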
Affiliation(s)
- Amir Hosein Hadian Rasanan
- Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Faculty of Psychology, University of Basel, Basel, Switzerland
- Jamal Amani Rad
- Department of Cognitive Modeling, Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- David K Sewell
- School of Psychology, The University of Queensland, St Lucia, QLD 4072, Brisbane, Australia.
8. Hemmatian B, Varshney LR, Pi F, Barbey AK. The utilitarian brain: Moving beyond the Free Energy Principle. Cortex 2024; 170:69-79. PMID: 38135613; DOI: 10.1016/j.cortex.2023.11.013
Abstract
The Free Energy Principle (FEP) is a normative computational framework for iterative reduction of prediction error and uncertainty through perception-intervention cycles that has been presented as a potential unifying theory of all brain functions (Friston, 2006). Any theory hoping to unify the brain sciences must be able to explain the mechanisms of decision-making, an important cognitive faculty, without the addition of independent, irreducible notions. This challenge has been accepted by several proponents of the FEP (Friston, 2010; Gershman, 2019). We evaluate attempts to reduce decision-making to the FEP, using Lucas' (2005) meta-theory of the brain's contextual constraints as a guidepost. We find reductive variants of the FEP for decision-making unable to explain behavior in certain types of diagnostic, predictive, and multi-armed bandit tasks. We trace the shortcomings to the core theory's lack of an adequate notion of subjective preference or "utility", a concept central to decision-making and grounded in the brain's biological reality. We argue that any attempts to fully reduce utility to the FEP would require unrealistic assumptions, making the principle an unlikely candidate for unifying brain science. We suggest that researchers instead attempt to identify contexts in which either informational or independent reward constraints predominate, delimiting the FEP's area of applicability. To encourage this type of research, we propose a two-factor formal framework that can subsume any FEP model and allows experimenters to compare the contributions of informational versus reward constraints to behavior.
Affiliation(s)
- Babak Hemmatian
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, USA
- Lav R Varshney
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, USA; Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, USA
- Frederick Pi
- Department of Cognitive Science, University of California San Diego, USA
- Aron K Barbey
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, USA; Center for Brain, Biology and Behavior, University of Nebraska Lincoln, USA.
9. Jalalian P, Golubickis M, Sharma Y, Neil Macrae C. Learning about me and you: Only deterministic stimulus associations elicit self-prioritization. Conscious Cogn 2023; 116:103602. PMID: 37952404; DOI: 10.1016/j.concog.2023.103602
Abstract
Self-relevant material has been shown to be prioritized over stimuli relating to others (e.g., friend, stranger), generating benefits in attention, memory, and decision-making. What is not yet understood, however, is whether the conditions under which self-related knowledge is acquired impact the emergence of self-bias. To address this matter, here we used an associative-learning paradigm in combination with a stimulus-classification task to explore the effects of different learning experiences (i.e., deterministic vs. probabilistic) on self-prioritization. The results revealed an effect of prior learning on task performance, with self-prioritization emerging only when participants acquired target-related associations (i.e., self vs. friend) under conditions of certainty (vs. uncertainty). A further computational (i.e., drift diffusion model) analysis indicated that differences in the efficiency of stimulus processing (i.e., rate of information uptake) underpinned this self-prioritization effect. The implications of these findings for accounts of self-function are considered.
Affiliation(s)
- Parnian Jalalian
- School of Psychology, University of Aberdeen, King's College, Aberdeen, Scotland, UK.
- Marius Golubickis
- School of Psychology, University of Aberdeen, King's College, Aberdeen, Scotland, UK
- Yadvi Sharma
- School of Psychology, University of Aberdeen, King's College, Aberdeen, Scotland, UK
- C Neil Macrae
- School of Psychology, University of Aberdeen, King's College, Aberdeen, Scotland, UK
10. Kang I, Molenaar D, Ratcliff R. A Modeling Framework to Examine Psychological Processes Underlying Ordinal Responses and Response Times of Psychometric Data. Psychometrika 2023; 88:940-974. PMID: 37171779; DOI: 10.1007/s11336-023-09902-z
Abstract
This article presents a joint modeling framework of ordinal responses and response times (RTs) for the measurement of latent traits. We integrate cognitive theories of decision-making and confidence judgments with psychometric theories to model individual-level measurement processes. The model development starts with the sequential sampling framework which assumes that when an item is presented, a respondent accumulates noisy evidence over time to respond to the item. Several cognitive and psychometric theories are reviewed and integrated, leading us to three psychometric process models with different representations of the cognitive processes underlying the measurement. We provide simulation studies that examine parameter recovery and show the relationships between latent variables and data distributions. We further test the proposed models with empirical data measuring three traits related to motivation. The results show that all three models provide reasonably good descriptions of observed response proportions and RT distributions. Also, different traits favor different process models, which implies that psychological measurement processes may have heterogeneous structures across traits. Our process of model building and examination illustrates how cognitive theories can be incorporated into psychometric model development to shed light on the measurement process, which has had little attention in traditional psychometric models.
Affiliation(s)
- Inhan Kang
- Yonsei University, 403 Widang Hall, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.
- Roger Ratcliff
- The Ohio State University, 212 Psychology Building, 1835 Neil Avenue, Columbus, 43210, OH, USA
11. Smith PL. "Reliable organisms from unreliable components" revisited: the linear drift, linear infinitesimal variance model of decision making. Psychon Bull Rev 2023; 30:1323-1359. PMID: 36720804; PMCID: PMC10482797; DOI: 10.3758/s13423-022-02237-3
Abstract
Diffusion models of decision making, in which successive samples of noisy evidence are accumulated to decision criteria, provide a theoretical solution to von Neumann's (1956) problem of how to increase the reliability of neural computation in the presence of noise. I introduce and evaluate a new neurally-inspired dual diffusion model, the linear drift, linear infinitesimal variance (LDLIV) model, which embodies three features often thought to characterize neural mechanisms of decision making. The accumulating evidence is intrinsically positively-valued, saturates at high intensities, and is accumulated for each alternative separately. I present explicit integral-equation predictions for the response time distribution and choice probabilities for the LDLIV model and compare its performance on two benchmark sets of data to three other models: the standard diffusion model and two dual diffusion models composed of racing Wiener processes, one between absorbing and reflecting boundaries and one with absorbing boundaries only. The LDLIV model and the standard diffusion model performed similarly to one another, although the standard diffusion model is more parsimonious, and both performed appreciably better than the other two dual diffusion models. I argue that accumulation of noisy evidence by a diffusion process and drift rate variability are both expressions of how the cognitive system solves von Neumann's problem, by aggregating noisy representations over time and over elements of a neural population. I also argue that models that do not solve von Neumann's problem do not address the main theoretical question that historically motivated research in this area.
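The dual diffusion architecture can be sketched as two independent accumulators racing to a common absorbing bound, optionally reflected at zero so evidence stays positive. This is an illustrative race-model sketch under simplifying assumptions, not the LDLIV model itself (which additionally has intensity-dependent drift and infinitesimal variance, and is solved by integral equations rather than simulation):

```python
import numpy as np

def racing_diffusion(v1, v2, a=1.0, dt=0.001, max_t=5.0, reflect=True, rng=None):
    """Two racing Wiener accumulators; the first to reach bound `a` wins.

    With reflect=True the accumulators are kept non-negative (a reflecting
    boundary at zero), mimicking intrinsically positive neural evidence.
    Returns (winner_index, rt).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(2)
    drifts = np.array([v1, v2])
    t = 0.0
    while t < max_t:
        x += drifts * dt + np.sqrt(dt) * rng.normal(size=2)
        if reflect:
            x = np.abs(x)                 # reflect at zero
        t += dt
        if x.max() >= a:
            return int(np.argmax(x)), t
    return int(np.argmax(x)), max_t       # forced choice at timeout

rng = np.random.default_rng(5)
race_trials = [racing_diffusion(2.0, 0.5, rng=rng) for _ in range(200)]
```

Reflecting at zero (`x = np.abs(x)`) keeps the evidence positively valued, one of the neural constraints the paper builds into its accumulator comparisons.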
Affiliation(s)
- Philip L Smith
- Melbourne School of Psychological Sciences, The University of Melbourne, Vic., Melbourne, 3010, Australia.
12. Hashemi M, Vattikonda AN, Jha J, Sip V, Woodman MM, Bartolomei F, Jirsa VK. Amortized Bayesian inference on generative dynamical network models of epilepsy using deep neural density estimators. Neural Netw 2023; 163:178-194. PMID: 37060871; DOI: 10.1016/j.neunet.2023.03.040
Abstract
Whole-brain modeling of epilepsy combines personalized anatomical data with dynamical models of abnormal activities to generate spatio-temporal seizure patterns as observed in brain imaging data. Such a parametric simulator is equipped with a stochastic generative process, which itself provides the basis for inference and prediction of the local and global brain dynamics affected by disorders. However, the calculation of likelihood function at whole-brain scale is often intractable. Thus, likelihood-free algorithms are required to efficiently estimate the parameters pertaining to the hypothetical areas, ideally including the uncertainty. In this study, we introduce the simulation-based inference for the virtual epileptic patient model (SBI-VEP), enabling us to amortize the approximate posterior of the generative process from a low-dimensional representation of whole-brain epileptic patterns. The state-of-the-art deep learning algorithms for conditional density estimation are used to readily retrieve the statistical relationships between parameters and observations through a sequence of invertible transformations. We show that the SBI-VEP is able to efficiently estimate the posterior distribution of parameters linked to the extent of the epileptogenic and propagation zones from sparse intracranial electroencephalography recordings. The presented Bayesian methodology can deal with non-linear latent dynamics and parameter degeneracy, paving the way for fast and reliable inference on brain disorders from neuroimaging modalities.
|
13
|
Self-judgment dissected: A computational modeling analysis of self-referential processing and its relationship to trait mindfulness facets and depression symptoms. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2023; 23:171-189. [PMID: 36168080 PMCID: PMC9931629 DOI: 10.3758/s13415-022-01033-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/29/2022] [Indexed: 11/08/2022]
Abstract
Cognitive theories of depression, and mindfulness theories of well-being, converge on the notion that self-judgment plays a critical role in mental health. However, these theories have rarely been tested via tasks and computational modeling analyses that can disentangle the information processes operative in self-judgments. We applied a drift-diffusion computational model to the self-referential encoding task (SRET), collected before and after an 8-week mindfulness intervention (n = 96). A drift-rate regression parameter representing positive- relative to negative-self-referential judgment strength related positively to mindful awareness and inversely to depression, both at baseline and over time; however, this parameter did not significantly relate to the interaction between mindful awareness and nonjudgmentalness. At the level of individual depression symptoms, at baseline a spectrum of symptoms correlated inversely with the drift-rate regression parameter, suggesting that many distinct depression symptoms relate to valenced self-judgment between subjects. By contrast, over the intervention, changes in only a smaller subset of anhedonia-related depression symptoms showed substantial relationships with this parameter. Both behavioral and model-derived measures showed modest split-half and test-retest correlations. Results support cognitive theories that implicate self-judgment in depression, and mindfulness theories, which imply that mindful awareness should lead to more positive self-views.
|
14
|
Govindarajan LN, Calvert JS, Parker SR, Jung M, Darie R, Miranda P, Shaaya E, Borton DA, Serre T. Fast inference of spinal neuromodulation for motor control using amortized neural networks. J Neural Eng 2022; 19:10.1088/1741-2552/ac9646. [PMID: 36174534 PMCID: PMC9668352 DOI: 10.1088/1741-2552/ac9646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 09/29/2022] [Indexed: 11/12/2022]
Abstract
Objective. Epidural electrical stimulation (EES) has emerged as an approach to restore motor function following spinal cord injury (SCI). However, identifying optimal EES parameters presents a significant challenge due to the complex and stochastic nature of muscle control and the combinatorial explosion of possible parameter configurations. Here, we describe a machine-learning approach that leverages modern deep neural networks to learn bidirectional mappings between the space of permissible EES parameters and target motor outputs.
Approach. We collected data from four sheep implanted with two 24-contact EES electrode arrays on the lumbosacral spinal cord. Muscle activity was recorded from four bilateral hindlimb electromyography (EMG) sensors. We introduce a general learning framework to identify EES parameters capable of generating desired patterns of EMG activity. Specifically, we first amortize spinal sensorimotor computations in a forward neural network model that learns to predict motor outputs based on EES parameters. Then, we employ a second neural network as an inverse model, which reuses the amortized knowledge learned by the forward model to guide the selection of EES parameters.
Main results. We found that neural networks can functionally approximate spinal sensorimotor computations by accurately predicting EMG outputs based on EES parameters. The generalization capability of the forward model critically benefited our inverse model. We successfully identified novel EES parameters, in under 20 min, capable of producing desired target EMG recruitment during in vivo testing. Furthermore, we discovered potential functional redundancies within the spinal sensorimotor networks by identifying unique EES parameters that result in similar motor outcomes. Together, these results suggest that our framework is well-suited to probe spinal circuitry and control muscle recruitment in a completely data-driven manner.
Significance. We successfully identify novel EES parameters within minutes, capable of producing desired EMG recruitment. Our approach is data-driven, subject-agnostic, automated, and orders of magnitude faster than manual approaches.
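The forward/inverse amortization scheme described here can be miniaturized to a few lines: learn (or tabulate) a forward map from stimulation parameters to a motor output, then invert it by searching the parameter space against a target. In this sketch the response function, parameter ranges, and grid are invented for illustration, and a lookup table stands in for a trained forward network; it is not the sheep EMG model:

```python
def true_response(amp, freq):
    # Hypothetical ground-truth recruitment curve: saturating in amplitude,
    # peaked around 40 Hz in frequency (purely invented for this sketch).
    return (amp / (1 + amp)) * max(0.0, 1 - abs(freq - 40) / 40)

# "Forward model": here just a lookup over a simulation grid,
# standing in for an amortized neural network.
grid = [(a / 10, f) for a in range(1, 31) for f in range(10, 91, 5)]
forward = {(a, f): true_response(a, f) for a, f in grid}

def inverse(target):
    # Inverse model: search the amortized forward map for the
    # stimulation parameters whose predicted output best matches the target.
    return min(forward, key=lambda k: abs(forward[k] - target))

amp, freq = inverse(0.5)
print(amp, freq)  # parameters predicted to evoke the target recruitment
```

The point of amortization is that the expensive part (building `forward`) is paid once; each new target then costs only a cheap search or a single inverse-network evaluation.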
Affiliation(s)
- Lakshmi Narasimhan Govindarajan
- Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA
- Carney Institute for Brain Science, Brown University, Providence RI USA
- Minju Jung
- Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA
- Carney Institute for Brain Science, Brown University, Providence RI USA
- Radu Darie
- School of Engineering, Brown University, Providence RI USA
- Elias Shaaya
- Department of Neurosurgery, Brown University and Rhode Island Hospital, Providence RI USA
- David A. Borton
- Carney Institute for Brain Science, Brown University, Providence RI USA
- School of Engineering, Brown University, Providence RI USA
- Center for Neurorestoration and Neurotechnology, Department of Veterans Affairs, Providence RI USA
- Thomas Serre
- Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA
- Carney Institute for Brain Science, Brown University, Providence RI USA
|
15
|
Persistent activity in human parietal cortex mediates perceptual choice repetition bias. Nat Commun 2022; 13:6015. [PMID: 36224207 PMCID: PMC9556658 DOI: 10.1038/s41467-022-33237-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 09/08/2022] [Indexed: 11/09/2022] Open
Abstract
Humans and other animals tend to repeat or alternate their previous choices, even when judging sensory stimuli presented in a random sequence. It is unclear if and how sensory, associative, and motor cortical circuits produce these idiosyncratic behavioral biases. Here, we combined behavioral modeling of a visual perceptual decision with magnetoencephalographic (MEG) analyses of neural dynamics, across multiple regions of the human cerebral cortex. We identified distinct history-dependent neural signals in motor and posterior parietal cortex. Gamma-band activity in parietal cortex tracked previous choices in a sustained fashion, and biased evidence accumulation toward choice repetition; sustained beta-band activity in motor cortex inversely reflected the previous motor action, and biased the accumulation starting point toward alternation. The parietal, not motor, signal mediated the impact of the previous choice on the current one and reflected individual differences in choice repetition. In sum, parietal cortical signals seem to play a key role in shaping choice sequences.
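The mechanism the modeling isolates, a starting-point shift toward the previous choice producing repetition bias, is easy to reproduce in simulation. Below is a schematic discretized drift-diffusion model, not the authors' fitted model; the bias magnitude and all other parameter values are illustrative:

```python
import random

def ddm_trial(rng, drift=0.0, start_bias=0.0, bound=1.0, dt=0.01, noise=1.0):
    # Accumulate noisy evidence from a (possibly biased) starting point
    # until one of the two bounds is crossed; return choice and RT.
    x, t = start_bias, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (1 if x > 0 else -1), t

rng = random.Random(7)
prev, repeats, n = 1, 0, 2000
for _ in range(n):
    # Starting point shifted toward the previous choice -> repetition bias,
    # even with zero drift and random stimuli.
    choice, _ = ddm_trial(rng, drift=0.0, start_bias=0.2 * prev)
    repeats += (choice == prev)
    prev = choice
print(repeats / n)  # noticeably above 0.5
```

Shifting the drift (rather than the starting point) toward the previous choice also produces repetition, but with a different RT signature, which is what lets model fitting distinguish the parietal from the motor contribution.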
|
16
|
Fengler A, Bera K, Pedersen ML, Frank MJ. Beyond Drift Diffusion Models: Fitting a Broad Class of Decision and Reinforcement Learning Models with HDDM. J Cogn Neurosci 2022; 34:1780-1805. [PMID: 35939629 DOI: 10.1162/jocn_a_01902] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Computational modeling has become a central aspect of research in the cognitive neurosciences. As the field matures, it is increasingly important to move beyond standard models to quantitatively assess models with richer dynamics that may better reflect underlying cognitive and neural processes. For example, sequential sampling models (SSMs) are a general class of models of decision-making intended to capture processes jointly giving rise to RT distributions and choice data in n-alternative choice paradigms. A number of model variations are of theoretical interest, but empirical data analysis has historically been tied to a small subset for which likelihood functions are analytically tractable. Advances in methods designed for likelihood-free inference have recently made it computationally feasible to consider a much larger spectrum of SSMs. In addition, recent work has motivated the combination of SSMs with reinforcement learning models, which had historically been considered in separate literatures. Here, we provide a significant addition to the widely used HDDM Python toolbox and include a tutorial for how users can easily fit and assess a wide (user-extensible) variety of SSMs and how they can be combined with reinforcement learning models. The extension comes with batteries included: model visualization tools, posterior predictive checks, and the ability to link trial-wise neural signals with model parameters via hierarchical Bayesian regression.
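To give a flavor of the richer SSM variants for which likelihoods are not analytically tractable, the simulator below implements a two-bound model with linearly collapsing bounds. This is a hand-rolled sketch in plain Python, not the HDDM API, and the parameter values (drift, initial bound, collapse rate) are arbitrary:

```python
import random

def collapsing_bound_trial(rng, drift=0.5, bound0=1.5, collapse=0.5, dt=0.01):
    # Linearly collapsing bounds: b(t) = bound0 - collapse * t, floored so
    # the trial always terminates; late responses need less evidence.
    x, t = 0.0, 0.0
    while True:
        b = max(bound0 - collapse * t, 0.05)
        if abs(x) >= b:
            return (1 if x > 0 else 0), t
        x += drift * dt + rng.gauss(0.0, dt ** 0.5)
        t += dt

rng = random.Random(1)
trials = [collapsing_bound_trial(rng) for _ in range(1000)]
acc = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(acc, mean_rt)
```

Because no closed-form likelihood exists for such variants, toolboxes like the one described here fit them via learned likelihood approximations trained on exactly this kind of simulator output.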
|
17
|
Boelts J, Lueckmann JM, Gao R, Macke JH. Flexible and efficient simulation-based inference for models of decision-making. eLife 2022; 11:77220. [PMID: 35894305 PMCID: PMC9374439 DOI: 10.7554/elife.77220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2022] [Accepted: 07/26/2022] [Indexed: 11/22/2022] Open
Abstract
Inferring parameters of computational models that capture experimental data is a central task in cognitive neuroscience. Bayesian statistical inference methods usually require the ability to evaluate the likelihood of the model—however, for many models of interest in cognitive neuroscience, the associated likelihoods cannot be computed efficiently. Simulation-based inference (SBI) offers a solution to this problem by only requiring access to simulations produced by the model. Previously, Fengler et al. introduced likelihood approximation networks (LANs, Fengler et al., 2021) which make it possible to apply SBI to models of decision-making but require billions of simulations for training. Here, we provide a new SBI method that is substantially more simulation efficient. Our approach, mixed neural likelihood estimation (MNLE), trains neural density estimators on model simulations to emulate the simulator and is designed to capture both the continuous (e.g., reaction times) and discrete (choices) data of decision-making models. The likelihoods of the emulator can then be used to perform Bayesian parameter inference on experimental data using standard approximate inference methods like Markov Chain Monte Carlo sampling. We demonstrate MNLE on two variants of the drift-diffusion model and show that it is substantially more efficient than LANs: MNLE achieves similar likelihood accuracy with six orders of magnitude fewer training simulations and is significantly more accurate than LANs when both are trained with the same budget. Our approach enables researchers to perform SBI on custom-tailored models of decision-making, leading to fast iteration of model design for scientific discovery.
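The core move of MNLE, learn a cheap emulator of the simulator's mixed choice/RT likelihood from simulations and then reuse it for inference, can be caricatured with a histogram emulator over discretized (choice, RT) cells. This is a deliberately crude stand-in for a trained neural density estimator; the simulator, bin edges, and candidate grid are all illustrative:

```python
import math, random

def sim_trial(rng, drift, bound=1.0, dt=0.01):
    # Minimal drift-diffusion simulator returning (choice, RT).
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (1 if x > 0 else 0), t

RT_EDGES = (0.5, 1.0, 2.0)

def cell(c, t):
    # Discretize a (choice, RT) pair into a table cell.
    return c, sum(t > e for e in RT_EDGES)

def emulate_likelihood(drift, rng, n_sim=2000):
    # "Emulator": empirical cell probabilities estimated from simulations.
    counts = {}
    for _ in range(n_sim):
        k = cell(*sim_trial(rng, drift))
        counts[k] = counts.get(k, 0) + 1
    return {k: v / n_sim for k, v in counts.items()}

rng = random.Random(3)
data = [sim_trial(rng, drift=0.8) for _ in range(300)]  # "observed" data
best = None
for drift in (0.0, 0.8, 1.6):
    lik = emulate_likelihood(drift, rng)
    ll = sum(math.log(lik.get(cell(c, t), 1e-6)) for c, t in data)
    if best is None or ll > best[1]:
        best = (drift, ll)
print(best[0])  # the emulator-based fit recovers the generating drift
```

MNLE's advantage over this caricature is that the neural emulator is smooth in both the parameters and the data, so the learned likelihood can be plugged directly into gradient-based or MCMC posterior inference instead of a coarse grid.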
Affiliation(s)
- Jan Boelts
- University of Tübingen, Tübingen, Germany
|
18
|
Boehm U, Cox S, Gantner G, Stevenson R. Efficient numerical approximation of a non-regular Fokker-Planck equation associated with first-passage time distributions. BIT. NUMERICAL MATHEMATICS 2022; 62:1355-1382. [PMID: 36415672 PMCID: PMC9674775 DOI: 10.1007/s10543-022-00914-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 02/28/2022] [Indexed: 06/16/2023]
Abstract
In neuroscience, the distribution of a decision time is modelled by means of a one-dimensional Fokker-Planck equation with time-dependent boundaries and space-time-dependent drift. Efficient approximation of the solution to this equation is required, e.g., for model evaluation and parameter fitting. However, the prescribed boundary conditions lead to a strong singularity and thus to slow convergence of numerical approximations. In this article we demonstrate that the solution can be related to the solution of a parabolic PDE on a rectangular space-time domain with homogeneous initial and boundary conditions by transformation and subtraction of a known function. We verify that the solution of the new PDE is indeed more regular than the solution of the original PDE and proceed to discretize the new PDE using a space-time minimal residual method. We also demonstrate that the solution depends analytically on the parameters determining the boundaries as well as the drift. This justifies the use of a sparse tensor product interpolation method to approximate the PDE solution for various parameter ranges. The predicted convergence rates of the minimal residual method and that of the interpolation method are supported by numerical simulations.
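The naive baseline against which this work improves is a direct finite-difference discretization of the Fokker-Planck equation with absorbing boundaries, whose accuracy suffers precisely from the boundary singularity near the initial condition. The sketch below is a textbook explicit scheme on a fixed rectangle, not the authors' space-time minimal residual method; grid sizes and parameters are illustrative:

```python
import math

def fp_first_passage(drift=0.5, D=0.5, L=1.0, nx=41, dt=5e-4, T=2.0):
    # Explicit finite differences for p_t = -drift * p_x + D * p_xx on [-L, L]
    # with absorbing boundaries p(+-L, t) = 0 and a narrow Gaussian start.
    dx = 2 * L / (nx - 1)
    xs = [-L + i * dx for i in range(nx)]
    p = [math.exp(-x * x / (2 * 0.01)) for x in xs]  # near-delta initial density
    mass = sum(p) * dx
    p = [v / mass for v in p]
    p_up = 0.0
    for _ in range(int(T / dt)):
        q = p[:]
        for i in range(1, nx - 1):
            adv = -drift * (q[i + 1] - q[i - 1]) / (2 * dx)
            dif = D * (q[i + 1] - 2 * q[i] + q[i - 1]) / (dx * dx)
            p[i] = q[i] + dt * (adv + dif)
        # outgoing diffusive flux at the upper bound is the first-passage density
        p_up += dt * D * p[-2] / dx
    return p_up

p_up = fp_first_passage()
print(p_up)  # with positive drift, most probability mass exits at +L
```

Note the stability constraint dt <= dx^2 / (2D) for the explicit scheme; the slow convergence near the singular initial condition is what motivates the regularizing transformation and subtraction studied in the article.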
Affiliation(s)
- Udo Boehm
- Department of Psychology, University of Amsterdam, PO Box 15906, 1001 NK Amsterdam, The Netherlands
- Sonja Cox
- Korteweg–de Vries (KdV) Institute for Mathematics, University of Amsterdam, PO Box 94248, 1090 GE Amsterdam, The Netherlands
- Gregor Gantner
- Institute of Analysis and Scientific Computing, TU Wien, Wiedner Hauptstraße 8-10, 1040 Vienna, Austria
- Rob Stevenson
- Korteweg–de Vries (KdV) Institute for Mathematics, University of Amsterdam, PO Box 94248, 1090 GE Amsterdam, The Netherlands
|
19
|
Collins AGE, Shenhav A. Advances in modeling learning and decision-making in neuroscience. Neuropsychopharmacology 2022; 47:104-118. [PMID: 34453117 PMCID: PMC8617262 DOI: 10.1038/s41386-021-01126-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Revised: 07/14/2021] [Accepted: 07/22/2021] [Indexed: 02/07/2023]
Abstract
An organism's survival depends on its ability to learn about its environment and to make adaptive decisions in the service of achieving the best possible outcomes in that environment. To study the neural circuits that support these functions, researchers have increasingly relied on models that formalize the computations required to carry them out. Here, we review the recent history of computational modeling of learning and decision-making, and how these models have been used to advance understanding of prefrontal cortex function. We discuss how such models have advanced from their origins in basic algorithms of updating and action selection to increasingly account for complexities in the cognitive processes required for learning and decision-making, and the representations over which they operate. We further discuss how a deeper understanding of the real-world complexities in these computations has shed light on the fundamental constraints on optimal behavior, and on the complex interactions between corticostriatal pathways to determine such behavior. The continuing and rapid development of these models holds great promise for understanding the mechanisms by which animals adapt to their environments, and what leads to maladaptive forms of learning and decision-making within clinical populations.
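The "basic algorithms of updating and action selection" from which this modeling tradition grew fit in a few lines: a delta-rule (Rescorla-Wagner/Q-learning) value update paired with softmax action selection, here on a two-armed bandit. Reward probabilities, learning rate, and inverse temperature are textbook illustrative values:

```python
import math, random

def softmax_choice(rng, q, beta=3.0):
    # Softmax (logit) action selection over current value estimates.
    ps = [math.exp(beta * v) for v in q]
    r, acc = rng.random() * sum(ps), 0.0
    for a, p in enumerate(ps):
        acc += p
        if r <= acc:
            return a
    return len(ps) - 1

rng = random.Random(0)
p_reward = [0.8, 0.2]          # arm 0 is objectively better
q, alpha = [0.0, 0.0], 0.1
picks = [0, 0]
for _ in range(1000):
    a = softmax_choice(rng, q)
    reward = 1.0 if rng.random() < p_reward[a] else 0.0
    q[a] += alpha * (reward - q[a])   # delta-rule / prediction-error update
    picks[a] += 1
print(picks[0] / sum(picks))   # the better arm dominates as values converge
```

The review's arc is from this two-parameter skeleton to models with richer state representations, hierarchical control, and corticostriatal circuit constraints, but each elaboration keeps this update-and-select loop at its core.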
Affiliation(s)
- Anne G E Collins
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
- Amitai Shenhav
- Department of Cognitive, Linguistic, & Psychological Sciences and Carney Institute for Brain Science, Brown University, Providence, RI, USA.
|
20
|
Rac-Lubashevsky R, Frank MJ. Analogous computations in working memory input, output and motor gating: Electrophysiological and computational modeling evidence. PLoS Comput Biol 2021; 17:e1008971. [PMID: 34097689 PMCID: PMC8211210 DOI: 10.1371/journal.pcbi.1008971] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 06/17/2021] [Accepted: 04/17/2021] [Indexed: 12/19/2022] Open
Abstract
Adaptive cognitive control involves a hierarchical cortico-striatal gating system that supports selective updating, maintenance, and retrieval of useful cognitive and motor information. Here, we developed a task that independently manipulates selective gating operations into working memory (input gating), out of working memory (output gating), and of responses (motor gating) and tested the neural dynamics and computational principles that support them. Increases in gating demands, captured by gate switches, were expressed by distinct EEG correlates at each gating level that evolved dynamically in partially overlapping time windows. Further, categorical representations of specific maintained items and of motor responses could be decoded from EEG when the corresponding gate was switching, thereby linking gating operations to prioritization. Finally, gate switching at all levels was related to increases in the motor decision threshold as quantified by the drift diffusion model. Together these results support the notion that cognitive gating operations scaffold on top of mechanisms involved in motor gating.
Affiliation(s)
- Rachel Rac-Lubashevsky
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Michael J. Frank
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
|
21
|
Feltgen Q, Daunizeau J. An Overcomplete Approach to Fitting Drift-Diffusion Decision Models to Trial-By-Trial Data. Front Artif Intell 2021; 4:531316. [PMID: 33898982 PMCID: PMC8064018 DOI: 10.3389/frai.2021.531316] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Accepted: 02/17/2021] [Indexed: 11/13/2022] Open
Abstract
Drift-diffusion models (DDMs) are becoming a standard in the field of computational neuroscience. They extend models from signal detection theory by proposing a simple mechanistic explanation for the observed relationship between decision outcomes and reaction times (RTs). In brief, they assume that decisions are triggered once the accumulated evidence in favor of a particular alternative option has reached a predefined threshold. Fitting a DDM to empirical data then allows one to interpret observed group or condition differences in terms of a change in the underlying model parameters. However, current approaches only yield reliable parameter estimates in specific situations (cf. fixed drift rates vs. drift rates varying over trials). In addition, they become computationally unfeasible when more general DDM variants are considered (e.g., with collapsing bounds). In this note, we propose a fast and efficient approach to parameter estimation that relies on fitting a "self-consistency" equation that RTs fulfill under the DDM. This effectively bypasses the computational bottleneck of standard DDM parameter estimation approaches, at the cost of estimating the trial-specific neural noise variables that perturb the underlying evidence accumulation process. For the purpose of behavioral data analysis, these act as nuisance variables and render the model "overcomplete," which is finessed using a variational Bayesian system identification scheme. However, for the purpose of neural data analysis, estimates of neural noise perturbation terms are a desirable (and unique) feature of the approach. Using numerical simulations, we show that this "overcomplete" approach matches the performance of current parameter estimation approaches for simple DDM variants, and outperforms them for more complex DDM variants. Finally, we demonstrate the added value of the approach when applied to a recent value-based decision-making experiment.
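The "self-consistency" idea can be made concrete: in a discretized DDM, the evidence accumulated by the response time equals the (signed) bound, so the RT, the parameters, and the trial-specific noise draws jointly satisfy one algebraic equation. The snippet below is only a toy verification of that identity on one simulated trial, not the variational scheme itself; all parameter values are illustrative:

```python
import random

def ddm_with_noise(rng, drift=0.6, bound=1.0, dt=0.01):
    # Simulate one trial while recording every noise perturbation.
    x, noises = 0.0, []
    while abs(x) < bound:
        eta = rng.gauss(0.0, dt ** 0.5)
        noises.append(eta)
        x += drift * dt + eta
    return x, noises

rng = random.Random(5)
x_final, noises = ddm_with_noise(rng)
rt = len(noises) * 0.01
# Self-consistency: final evidence = drift * RT + accumulated noise,
# which in turn sits at the crossed bound (up to discretization overshoot).
recon = 0.6 * rt + sum(noises)
print(abs(recon - x_final))  # essentially zero in the discrete scheme
```

The estimation approach runs this logic in reverse: treating the per-trial noise terms as unknowns, the observed RTs constrain parameters and noise jointly through this equation, which is what makes the model "overcomplete."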
Affiliation(s)
- Q. Feltgen
- Paris Brain Institute (ICM), Sorbonne Université, Inserm, CNRS, Hôpital Pitié‐Salpêtrière, Paris, France
- J. Daunizeau
- Paris Brain Institute (ICM), Sorbonne Université, Inserm, CNRS, Hôpital Pitié‐Salpêtrière, Paris, France
- ETH, Zurich, Switzerland
|