1
Schumacher L, Bürkner PC, Voss A, Köthe U, Radev ST. Neural superstatistics for Bayesian estimation of dynamic cognitive models. Sci Rep 2023; 13:13778. PMID: 37612320; PMCID: PMC10447473; DOI: 10.1038/s41598-023-40278-3.
Abstract
Mathematical models of cognition are often memoryless and ignore potential fluctuations of their parameters. However, human cognition is inherently dynamic. Thus, we propose to augment mechanistic cognitive models with a temporal dimension and estimate the resulting dynamics from a superstatistics perspective. Such a model entails a hierarchy between a low-level observation model and a high-level transition model: the observation model describes the local behavior of a system, and the transition model specifies how the parameters of the observation model evolve over time. To overcome the estimation challenges posed by the complexity of superstatistical models, we develop and validate a simulation-based deep learning method for Bayesian inference that can recover both time-varying and time-invariant parameters. We first benchmark our method against two existing frameworks capable of estimating time-varying parameters. We then apply our method to fit a dynamic version of the diffusion decision model to long time series of human response time data. Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model, and that the erroneous assumption of static or homogeneous parameters can hide important temporal information.
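The two-level hierarchy described in this abstract can be illustrated with a minimal simulation: a Gaussian random walk over trials plays the role of the high-level transition model, and a Wiener diffusion to one of two boundaries plays the role of the low-level observation model. All function names and parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import random, math

def simulate_dynamic_ddm(n_trials=200, dt=0.001, boundary=1.0,
                         drift0=1.0, drift_sd=0.05, seed=1):
    """Toy superstatistical diffusion model: the drift rate follows a
    Gaussian random walk across trials (transition model), while each
    trial is a standard diffusion to +/- boundary (observation model).
    Returns a list of (response time, choice) tuples."""
    rng = random.Random(seed)
    drift = drift0
    data = []
    for _ in range(n_trials):
        # transition model: the drift rate evolves slowly over trials
        drift += rng.gauss(0.0, drift_sd)
        # observation model: accumulate noisy evidence until a boundary
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + rng.gauss(0.0, math.sqrt(dt))
            t += dt
        data.append((t, 1 if x >= boundary else 0))
    return data

trials = simulate_dynamic_ddm()
```

A static diffusion model corresponds to setting `drift_sd=0`; comparing the two directly shows the temporal structure that a homogeneous-parameter fit would average away.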
Affiliation(s)
- Lukas Schumacher
- Institute of Psychology, Heidelberg University, Heidelberg, Germany
- Andreas Voss
- Institute of Psychology, Heidelberg University, Heidelberg, Germany
- Ullrich Köthe
- Computer Vision and Learning Lab, Heidelberg University, Heidelberg, Germany
- Stefan T Radev
- Cluster of Excellence STRUCTURES, Heidelberg University, Heidelberg, Germany
2
Balsdon T, Verdonck S, Loossens T, Philiastides MG. Secondary motor integration as a final arbiter in sensorimotor decision-making. PLoS Biol 2023; 21:e3002200. PMID: 37459392; PMCID: PMC10393169; DOI: 10.1371/journal.pbio.3002200.
Abstract
Sensorimotor decision-making is believed to involve a process of accumulating sensory evidence over time. While current theories posit a single accumulation process prior to planning an overt motor response, we propose an active role for motor processes in decision formation via a secondary, leaky motor accumulation stage. The motor leak adapts the "memory" with which this secondary accumulator reintegrates the primary accumulated sensory evidence, thereby adjusting the temporal smoothing of the motor evidence and, correspondingly, the lag between the primary and motor accumulators. We compare this framework against several single-accumulator variants using formal model comparison, fitting choices and response times in a task where human observers made categorical decisions about a noisy sequence of images under different speed-accuracy trade-off instructions. We show that adjustment of the leak in the secondary motor accumulator, rather than boundary adjustments (which control the amount of evidence required for decision commitment), provides the better description of behavior across conditions. Importantly, we derive neural correlates of these two integration processes from electroencephalography data recorded during the same task and show that they adhere to the neural response profiles predicted by the model. This framework thus provides a neurobiologically plausible description of sensorimotor decision-making that captures emerging evidence of the active role of motor processes in choice behavior.
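The two-stage architecture can be sketched with a few lines of discretized dynamics: a primary accumulator that perfectly integrates evidence, and a secondary accumulator that leakily reintegrates the primary trace until it reaches a commitment threshold. Parameter values, the Euler discretization, and the function name are illustrative assumptions, not the fitted model from the paper.

```python
def leaky_motor_integration(evidence, leak=0.2, dt=0.01, threshold=1.5):
    """Primary accumulator sums sensory evidence; a secondary leaky
    accumulator reintegrates the primary trace. The leak sets the
    temporal smoothing, and hence the lag, of the motor stage.
    Returns the two traces up to the motor threshold crossing."""
    primary, motor = 0.0, 0.0
    primary_trace, motor_trace = [], []
    for e in evidence:
        primary += e * dt                        # perfect sensory integration
        motor += (primary - leak * motor) * dt   # leaky reintegration of primary
        primary_trace.append(primary)
        motor_trace.append(motor)
        if motor >= threshold:                   # decision commitment
            break
    return primary_trace, motor_trace
```

Raising `leak` shortens the motor stage's effective memory, which mimics the speed-emphasis adjustment the authors argue for, as opposed to changing `threshold` (the boundary-adjustment account).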
Affiliation(s)
- Tarryn Balsdon
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
- Stijn Verdonck
- Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Tim Loossens
- Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Marios G Philiastides
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
3
Kolvoort IR, Temme N, van Maanen L. The Bayesian Mutation Sampler Explains Distributions of Causal Judgments. Open Mind (Camb) 2023; 7:318-349. PMID: 37416078; PMCID: PMC10320818; DOI: 10.1162/opmi_a_00080.
Abstract
One consistent finding in the causal reasoning literature is that causal judgments are rather variable. In particular, distributions of probabilistic causal judgments tend not to be normal and are often not centered on the normative response. As an explanation for these response distributions, we propose that people engage in 'mutation sampling' when confronted with a causal query and integrate this information with prior information about that query. The Mutation Sampler model (Davis & Rehder, 2020) posits that we approximate probabilities using a sampling process, explaining the average responses of participants on a wide variety of tasks. Careful analysis, however, shows that its predicted response distributions do not match empirical distributions. We develop the Bayesian Mutation Sampler (BMS) which extends the original model by incorporating the use of generic prior distributions. We fit the BMS to experimental data and find that, in addition to average responses, the BMS explains multiple distributional phenomena including the moderate conservatism of the bulk of responses, the lack of extreme responses, and spikes of responses at 50%.
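The core mechanism — approximating a probability from a handful of samples and combining that count with a generic prior — can be illustrated with a deliberately simplified toy. This sketch collapses the mutation-sampling step into direct Bernoulli sampling and assumes a symmetric Beta(2, 2) prior; the function name, sample size, and prior parameters are all illustrative, not the BMS as specified in the paper.

```python
import random

def bms_style_judgment(true_prob, n_samples=7, prior_a=2.0, prior_b=2.0, seed=0):
    """Toy version of the Bayesian Mutation Sampler idea: approximate a
    probability with a small number of samples, then combine the sample
    count with a generic symmetric Beta prior. The small sample plus the
    prior pull judgments toward 0.5 (conservatism) and make truly
    extreme responses (0 or 1) impossible."""
    rng = random.Random(seed)
    k = sum(rng.random() < true_prob for _ in range(n_samples))
    # posterior mean of Beta(prior_a + k, prior_b + n_samples - k)
    return (prior_a + k) / (prior_a + prior_b + n_samples)
```

Even when every sample succeeds (`k == n_samples`), the judgment tops out at 9/11 ≈ 0.82 under these settings, reproducing the lack of extreme responses the abstract highlights.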
Affiliation(s)
- Ivar R. Kolvoort
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Nina Temme
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Leendert van Maanen
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
4
Radev ST, Mertens UK, Voss A, Ardizzone L, Köthe U. BayesFlow: Learning Complex Stochastic Models With Invertible Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:1452-1466. PMID: 33338021; DOI: 10.1109/tnnls.2020.3042395.
Abstract
Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks that we call BayesFlow. The method uses simulations to learn a global estimator for the probabilistic mapping from observed data to underlying model parameters. A neural network pretrained in this way can then, without additional training or optimization, infer full posteriors on arbitrarily many real data sets involving the same model family. In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to modeling scenarios where standard inference techniques with handcrafted summary statistics fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science, and ecology. We argue that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated.
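The amortization principle — pay a one-off simulation and training cost, then estimate parameters for any number of new datasets with no further optimization — can be demonstrated with a deliberately crude stand-in. The sketch below uses a closed-form linear regression from a hand-crafted summary statistic to the generating parameter; this is an assumption-laden simplification for illustration only, not BayesFlow's invertible network or learned summary network.

```python
import random, statistics

def train_amortized_estimator(n_sims=2000, n_obs=50, seed=3):
    """Amortized point estimation in miniature: simulate (theta, data)
    pairs from the prior and model, regress theta on a summary of the
    data once, and return the fitted mapping. Applying it to new
    datasets requires no re-training."""
    rng = random.Random(seed)
    thetas, summaries = [], []
    for _ in range(n_sims):
        theta = rng.uniform(-3.0, 3.0)                 # draw from the prior
        data = [rng.gauss(theta, 1.0) for _ in range(n_obs)]
        thetas.append(theta)
        summaries.append(statistics.fmean(data))       # hand-crafted summary
    # closed-form least squares: theta_hat = a * summary + b
    mx, my = statistics.fmean(summaries), statistics.fmean(thetas)
    sxx = sum((x - mx) ** 2 for x in summaries)
    sxy = sum((x - mx) * (y - my) for x, y in zip(summaries, thetas))
    a = sxy / sxx
    b = my - a * mx
    def estimate(dataset):          # amortized: no optimization at inference time
        return a * statistics.fmean(dataset) + b
    return estimate

estimate = train_amortized_estimator()
```

BayesFlow replaces the linear map with an invertible network that outputs full posteriors, and replaces `statistics.fmean` with a summary network learned jointly with it; the upfront-cost-then-free-inference structure is the same.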
5
Maaß SC, de Jong J, van Maanen L, van Rijn H. Conceptually plausible Bayesian inference in interval timing. R Soc Open Sci 2021; 8:201844. PMID: 34457319; PMCID: PMC8371368; DOI: 10.1098/rsos.201844.
Abstract
In a world that is uncertain and noisy, perception makes use of optimization procedures that rely on the statistical properties of previous experiences. A well-known example of this phenomenon is the central tendency effect observed in many psychophysical modalities. For example, in interval timing tasks, previous experiences influence the current percept, pulling behavioural responses towards the mean. In Bayesian observer models, these previous experiences are typically modelled by unimodal statistical distributions, referred to as the prior. Here, we critically assess the validity of the assumptions underlying these models and propose a model that allows for more flexible, yet conceptually more plausible, modelling of empirical distributions. By representing previous experiences as a mixture of lognormal distributions, this model can be parametrized to mimic different unimodal distributions and thus extends previous instantiations of Bayesian observer models. We fit the mixture lognormal model to published interval timing data of healthy young adults and a clinical population of aged mild cognitive impairment patients and age-matched controls, and demonstrate that this model better explains behavioural data and provides new insights into the mechanisms that underlie the behaviour of a memory-affected clinical population.
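The central tendency effect under a mixture-of-lognormals prior can be reproduced with a simple grid-based observer: multiply the mixture prior by a Gaussian measurement likelihood and report the posterior mean. The grid bounds, noise level, and component parameters below are illustrative assumptions, not values fitted in the paper.

```python
import math

def lognorm_pdf(x, mu, sigma):
    """Density of a lognormal with log-mean mu and log-sd sigma."""
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def bayesian_timing_estimate(measurement, weights, mus, sigmas, noise_sd=0.1):
    """Grid-based Bayesian observer for interval timing: the prior over
    durations is a mixture of lognormals, the likelihood is Gaussian
    measurement noise, and the reported estimate is the posterior mean,
    which is pulled toward the bulk of the prior (central tendency)."""
    grid = [0.2 + 0.001 * i for i in range(1800)]   # durations 0.2-2.0 s
    post = []
    for t in grid:
        prior = sum(w * lognorm_pdf(t, m, s)
                    for w, m, s in zip(weights, mus, sigmas))
        like = math.exp(-(measurement - t) ** 2 / (2 * noise_sd ** 2))
        post.append(prior * like)
    z = sum(post)
    return sum(t * p for t, p in zip(grid, post)) / z
```

With a single component the model reduces to a standard unimodal Bayesian observer; adding components with different weights lets the same machinery mimic the more flexible empirical priors the paper argues for.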
Affiliation(s)
- Sarah C. Maaß
- Department of Experimental Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
- Behavioral and Cognitive Neurosciences, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
- Aging and Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Leipziger Straße 44, 39120 Magdeburg, Germany
- Joost de Jong
- Department of Experimental Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
- Behavioral and Cognitive Neurosciences, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
- Leendert van Maanen
- Department of Experimental Psychology, Utrecht University, Heidelberglaan 1, 3584 CS Utrecht, The Netherlands
- Hedderik van Rijn
- Department of Experimental Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
- Behavioral and Cognitive Neurosciences, University of Groningen, Grote Kruisstraat 2/1, 9712TS Groningen, The Netherlands
6
Fengler A, Govindarajan LN, Chen T, Frank MJ. Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience. eLife 2021; 10:e65074. PMID: 33821788; PMCID: PMC8102064; DOI: 10.7554/elife.65074.
Abstract
In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
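The structure of the approach — invest a one-off simulation budget to build an approximate likelihood, then reuse it for arbitrarily many evaluations — can be illustrated with a histogram-based stand-in. The sketch below tabulates an empirical likelihood on a parameter grid; this assumes a simple scalar-output simulator and substitutes a smoothed histogram for the paper's trained neural network, so treat every name and value as illustrative.

```python
import random, math

def approximate_likelihood(simulator, theta_grid, n_sims=2000,
                           bins=30, lo=0.0, hi=5.0, seed=7):
    """Histogram stand-in for a likelihood approximation network: for
    each parameter value, simulate the model many times (one-off cost)
    and bin the outcomes into a smoothed empirical density. The returned
    log-likelihood function can then be evaluated on any dataset with no
    further simulation."""
    rng = random.Random(seed)
    width = (hi - lo) / bins
    table = {}
    for theta in theta_grid:
        counts = [0] * bins
        for _ in range(n_sims):
            x = simulator(theta, rng)
            counts[min(bins - 1, max(0, int((x - lo) / width)))] += 1
        # add-one smoothing keeps every bin density positive for the log
        table[theta] = [(c + 1) / (n_sims + bins) / width for c in counts]
    def loglik(theta, data):
        dens = table[theta]
        return sum(math.log(dens[min(bins - 1, max(0, int((x - lo) / width)))])
                   for x in data)
    return loglik

# example: an exponential "model" with rate parameter theta
loglik = approximate_likelihood(lambda th, r: r.expovariate(th), [0.5, 1.0, 2.0])
```

The LANs in the paper replace the lookup table with a neural network that interpolates smoothly across the parameter space, which is what makes the approach usable inside hierarchical samplers; the amortized cost structure is the same.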
Affiliation(s)
- Alexander Fengler
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, United States
- Carney Institute for Brain Science, Brown University, Providence, United States
- Lakshmi N Govindarajan
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, United States
- Carney Institute for Brain Science, Brown University, Providence, United States
- Tony Chen
- Psychology and Neuroscience Department, Boston College, Chestnut Hill, United States
- Michael J Frank
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, United States
- Carney Institute for Brain Science, Brown University, Providence, United States