1
Sekulovski N, Marsman M, Wagenmakers EJ. A Good check on the Bayes factor. Behav Res Methods 2024. PMID: 39231912. DOI: 10.3758/s13428-024-02491-4.
Abstract
Bayes factor hypothesis testing provides a powerful framework for assessing the evidence in favor of competing hypotheses. To obtain Bayes factors, statisticians often require advanced, non-standard tools, making it important to confirm that the methodology is computationally sound. This paper seeks to validate Bayes factor calculations by applying two theorems attributed to Alan Turing and Jack Good. The procedure entails simulating data sets under two hypotheses, calculating Bayes factors, and assessing whether their expected values align with theoretical expectations. We illustrate this method with an ANOVA example and a network psychometrics application, demonstrating its efficacy in detecting calculation errors and confirming the computational correctness of the Bayes factor results. This structured validation approach aims to provide researchers with a tool to enhance the credibility of Bayes factor hypothesis testing, fostering more robust and trustworthy scientific inferences.
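The check rests on a classic result attributed to Turing and Good: when data sets are simulated under H0, the expected value of BF10 is exactly 1, so a simulated average far from 1 signals a computational error. Below is a minimal sketch of this check using a conjugate binomial test (point null θ = 0.5 versus a uniform Beta(1,1) prior), chosen because its Bayes factor has a closed form; the paper's own examples use ANOVA and network psychometrics instead.

```python
import math
import random

def bf10_binomial(x, n):
    # Marginal likelihood under H1 with a uniform Beta(1,1) prior on theta:
    # C(n,x) * B(x+1, n-x+1) simplifies to 1 / (n + 1) for every x.
    m1 = 1.0 / (n + 1)
    # Likelihood of the data under the point null H0: theta = 0.5.
    m0 = math.comb(n, x) * 0.5 ** n
    return m1 / m0

def good_check(n_trials=10, n_sims=100_000, seed=1):
    # Simulate data under H0 and average BF10. By the Good/Turing
    # result, a correct Bayes factor computation gives an average
    # close to 1; a systematic deviation flags an error.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        x = sum(rng.random() < 0.5 for _ in range(n_trials))
        total += bf10_binomial(x, n_trials)
    return total / n_sims

print(round(good_check(), 2))  # close to 1.0
```

The same recipe applies to any Bayes factor: only the simulation of data under each hypothesis and the BF computation itself change.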
Affiliation(s)
- Nikola Sekulovski
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands.
- Maarten Marsman
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
2
Love J, Gronau QF, Palmer G, Eidels A, Brown SD. In human-machine trust, humans rely on a simple averaging strategy. Cogn Res Princ Implic 2024; 9:58. PMID: 39218841. PMCID: PMC11366733. DOI: 10.1186/s41235-024-00583-5.
Abstract
With the growing role of artificial intelligence (AI) in our lives, attention is increasingly turning to the way that humans and AI work together. A key aspect of human-AI collaboration is how people integrate judgements or recommendations from machine agents when they differ from their own judgements. We investigated trust in human-machine teaming using a perceptual judgement task based on the judge-advisor system. Participants (n = 89) estimated a perceptual quantity, then received a recommendation from a machine agent. The participants then made a second response which combined their first estimate and the machine's recommendation. The degree to which participants shifted their second response in the direction of the recommendation provided a measure of their trust in the machine agent. We analysed the role of advice distance in people's willingness to change their judgements. When a recommendation falls a long way from their initial judgement, do people come to doubt their own judgement, trusting the recommendation more, or do they doubt the machine agent, trusting the recommendation less? We found that although some participants exhibited these behaviours, the most common response was neither of these tendencies; a simple model based on averaging accounted best for participants' trust behaviour. We discuss implications for theories of trust and human-machine teaming.
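The averaging account can be stated in terms of the standard judge-advisor "weight of advice" measure. The sketch below (with made-up numbers, not the authors' data or analysis code) shows the signature the paper reports: pure averaging pins the weight at 0.5 regardless of how far the advice falls from the initial judgement, whereas distance-sensitive trust would make the weight vary.

```python
def weight_of_advice(initial, advice, final):
    """Standard judge-advisor measure: 0 = ignore the advice, 1 = adopt it."""
    return (final - initial) / (advice - initial)

def averaged(initial, advice):
    # The simple averaging strategy the paper finds most common.
    return (initial + advice) / 2.0

# Averaging yields a weight of exactly 0.5 at every advice distance.
for advice in (55.0, 80.0, 200.0):
    final = averaged(50.0, advice)
    print(weight_of_advice(50.0, advice, final))  # 0.5 each time
```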
Affiliation(s)
- Jonathon Love
- Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia.
- Quentin F Gronau
- Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia
- Gemma Palmer
- Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia
- Ami Eidels
- Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia
- Scott D Brown
- Psychological Sciences, University of Newcastle, University Drive, Callaghan, NSW, 2308, Australia
3
Nunez MD, Fernandez K, Srinivasan R, Vandekerckhove J. A tutorial on fitting joint models of M/EEG and behavior to understand cognition. Behav Res Methods 2024; 56:6020-6050. PMID: 38409458. PMCID: PMC11335833. DOI: 10.3758/s13428-023-02331-x.
Abstract
We present the motivation and practical steps necessary to find parameter estimates of joint models of behavior and neural electrophysiological data. This tutorial is written for researchers wishing to build joint models of human behavior and scalp or intracranial electroencephalographic (EEG) or magnetoencephalographic (MEG) data, and more specifically those researchers who seek to understand human cognition. Although these techniques could easily be applied to animal models, the focus of this tutorial is on human participants. Joint modeling of M/EEG and behavior requires some knowledge of existing computational and cognitive theories, M/EEG artifact correction, M/EEG analysis techniques, cognitive modeling, and programming for statistical modeling implementation. This paper seeks to introduce these techniques as they apply to estimating parameters from neurocognitive models of M/EEG and human behavior, and to show how to evaluate model results and compare models. Reflecting our own research focus, the examples in this paper test specific hypotheses in human decision-making theory; however, most of the motivation and discussion applies across many modeling procedures and applications. We provide Python (and linked R) code examples in the tutorial and appendix, and readers are encouraged to try the exercises at the end of the document.
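As a toy illustration of the kind of "directed" linkage such joint models formalize, the sketch below assumes a single-trial EEG amplitude linearly modulates a latent drift rate (all names and parameter values are hypothetical). For illustration only, the latent drift is treated as directly observed and recovered by ordinary least squares; the tutorial's actual implementation embeds the linkage in a hierarchical Bayesian model where the drift is inferred from behavior.

```python
import random

def simulate_joint(n=500, b0=1.0, b1=0.5, seed=7):
    # Hypothetical single-trial linkage: drift_i = b0 + b1 * eeg_i + noise.
    rng = random.Random(seed)
    eeg, drift = [], []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)              # standardized EEG amplitude
        d = b0 + b1 * e + rng.gauss(0.0, 0.1)  # noisy latent drift rate
        eeg.append(e)
        drift.append(d)
    return eeg, drift

def ols(x, y):
    # Closed-form simple linear regression to recover the linkage.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)

eeg, drift = simulate_joint()
b0_hat, b1_hat = ols(eeg, drift)
print(round(b0_hat, 1), round(b1_hat, 1))  # approximately 1.0 and 0.5
```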
Affiliation(s)
- Michael D Nunez
- Psychological Methods, University of Amsterdam, Amsterdam, The Netherlands.
- Kianté Fernandez
- Department of Psychology, University of California, Los Angeles, CA, USA
- Ramesh Srinivasan
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Department of Biomedical Engineering, University of California, Irvine, CA, USA
- Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Joachim Vandekerckhove
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Department of Statistics, University of California, Irvine, CA, USA
4
Tanis CC, Heathcote A, Zrubka M, Matzke D. A hybrid approach to dynamic cognitive psychometrics. Behav Res Methods 2024; 56:5647-5666. PMID: 38200240. PMCID: PMC11335914. DOI: 10.3758/s13428-023-02295-y.
Abstract
Dynamic cognitive psychometrics measures mental capacities based on the way behavior unfolds over time. It does so using models of psychological processes whose validity is grounded in research from experimental psychology and the neurosciences. However, these models can sometimes have undesirable measurement properties. We propose a "hybrid" modeling approach that achieves good measurement by blending process-based and descriptive components. We demonstrate the utility of this approach in the stop-signal paradigm, in which participants make a series of speeded choices, but occasionally are required to withhold their response when a "stop signal" occurs. The stop-signal paradigm is widely used to measure response inhibition based on a modeling framework that assumes a race between processes triggered by the choice and the stop stimuli. However, the key index of inhibition, the latency of the stop process (i.e., stop-signal reaction time), is not directly observable, and is poorly estimated when the choice and the stop runners are both modeled by psychologically realistic evidence-accumulation processes. We show that using a descriptive account of the stop process, while retaining a realistic account of the choice process, simultaneously enables good measurement of both stop-signal reaction time and the psychological factors that determine choice behavior. We show that this approach, when combined with hierarchical Bayesian estimation, is effective even in a complex choice task that requires participants to perform only a relatively modest number of test trials.
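For contrast with the model-based estimation discussed above, stop-signal reaction time is classically estimated with the nonparametric "integration" method: find the go-RT quantile matching the probability of responding on stop trials, then subtract the stop-signal delay. The sketch below implements that classic estimator with toy numbers; it is not the hybrid race model the paper proposes.

```python
def ssrt_integration(go_rts, p_respond, ssd):
    """Integration-method estimate of stop-signal RT: the go-RT
    quantile at the probability of responding on stop trials,
    minus the stop-signal delay (SSD)."""
    rts = sorted(go_rts)
    idx = min(int(p_respond * len(rts)), len(rts) - 1)
    return rts[idx] - ssd

# Toy example: evenly spaced go RTs from 300 to 700 ms, with
# responses on half of the stop trials at SSD = 200 ms. The median
# go RT is 500 ms, so estimated SSRT = 500 - 200 = 300 ms.
go_rts = [300 + 2 * i for i in range(201)]  # 300, 302, ..., 700 ms
print(ssrt_integration(go_rts, 0.5, 200))   # 300
```

The paper's point is that this latency is poorly constrained when both racers are realistic evidence-accumulation processes, motivating a descriptive stop process alongside a realistic choice process.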
Affiliation(s)
- Charlotte C Tanis
- Department of Psychology, University of Amsterdam, Postbus 15916, 1001 NK, Amsterdam, Netherlands
- Andrew Heathcote
- Department of Psychology, University of Amsterdam, Postbus 15916, 1001 NK, Amsterdam, Netherlands
- Department of Psychology, University of Newcastle, Newcastle, Australia
- Mark Zrubka
- Department of Psychology, University of Amsterdam, Postbus 15916, 1001 NK, Amsterdam, Netherlands
- Dora Matzke
- Department of Psychology, University of Amsterdam, Postbus 15916, 1001 NK, Amsterdam, Netherlands.
5
Leow LA, Marcos A, Nielsen E, Sewell D, Ballard T, Dux PE, Filmer HL. Dopamine Alters the Effect of Brain Stimulation on Decision-Making. J Neurosci 2023; 43:6909-6919. PMID: 37648451. PMCID: PMC10573748. DOI: 10.1523/jneurosci.1140-23.2023.
Abstract
Noninvasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), show promise in treating a range of psychiatric and neurologic conditions. However, optimization of such applications requires a better understanding of how tDCS alters cognition and behavior. Existing evidence implicates dopamine in tDCS alterations of brain activity and plasticity; however, there is as yet no causal evidence for a role of dopamine in tDCS effects on cognition and behavior. Here, in a preregistered, double-blinded study, we examined how pharmacologically manipulating dopamine altered the effect of tDCS on the speed-accuracy trade-off, which taps ubiquitous strategic operations. Cathodal tDCS was delivered over the left prefrontal cortex and the superior medial frontal cortex before participants (N = 62, 24 males, 38 females) completed a dot-motion task, making judgments on the direction of a field of moving dots under instructions to emphasize speed, accuracy, or both. We leveraged computational modeling to uncover how our interventions altered latent decisional processes driving the speed-accuracy trade-off. We show that dopamine in combination with tDCS (but not tDCS alone, nor dopamine alone) not only impaired decision accuracy but also impaired discriminability, which suggests that these manipulations altered the encoding or representation of discriminative evidence. This is, to the best of our knowledge, the first direct evidence implicating dopamine in the way tDCS affects cognition and behavior.

SIGNIFICANCE STATEMENT: tDCS can improve cognitive and behavioral impairments in clinical conditions; however, a better understanding of its mechanisms is required to optimize future clinical applications. Here, using a pharmacological approach to manipulate brain dopamine levels in healthy adults, we demonstrate a role for dopamine in the effects of tDCS on the speed-accuracy trade-off, a strategic cognitive process ubiquitous in many contexts. In doing so, we provide direct evidence implicating dopamine in the way tDCS affects cognition and behavior.
Affiliation(s)
- Li-Ann Leow
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- Anjeli Marcos
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- Esteban Nielsen
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- David Sewell
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- Timothy Ballard
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- Paul E Dux
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
- Hannah L Filmer
- School of Psychology, University of Queensland, St Lucia, Brisbane QLD 4072 Australia
6
Using cognitive modeling to examine the effects of competition on strategy and effort in races and tournaments. Psychon Bull Rev 2022. DOI: 10.3758/s13423-022-02213-x.
Abstract
We investigated the effects of two types of competition, races and tournaments (as well as an individual challenge and a do-your-best condition), on two different aspects of performance: effort and strategy. In our experiment, 100 undergraduate participants completed a simple cognitive task under four experimental conditions (in a repeated-measures design) based on different types of competitions and challenges. We used the Linear Ballistic Accumulator to quantify the effects of competition on strategy and effort. The results reveal that competition produced changes in strategy rather than effort, and that trait competitiveness had minimal impact on how people responded to competition. This suggests that individuals are more likely to adjust their strategy in competitions, and that the uncertainty created by different competition types influences the direction of these strategy adjustments.
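In the Linear Ballistic Accumulator, strategy and effort map onto distinct parameters: the response threshold captures caution (strategy), while drift rates capture processing rate (effort), which is what lets the model separate the two effects. A minimal simulation sketch of the model, with hypothetical parameter values rather than the paper's estimates:

```python
import random

def lba_trial(drifts, b=1.0, A=0.5, t0=0.2, s=0.25, rng=random):
    # One trial of the Linear Ballistic Accumulator: each accumulator
    # starts at a uniform point in [0, A] and rises linearly at a
    # normally distributed drift rate; the first to reach threshold
    # b determines the response, plus non-decision time t0.
    best = None
    for choice, v in enumerate(drifts):
        d = rng.gauss(v, s)
        while d <= 0:                # resample non-positive drifts
            d = rng.gauss(v, s)
        k = rng.uniform(0.0, A)
        t = (b - k) / d
        if best is None or t < best[1]:
            best = (choice, t)
    return best[0], best[1] + t0     # (response, RT in seconds)

rng = random.Random(3)
trials = [lba_trial([1.2, 0.8], rng=rng) for _ in range(2000)]
acc = sum(1 for c, _ in trials if c == 0) / len(trials)
print(acc)  # the higher-drift accumulator wins most trials
```

Raising `b` slows responses and raises accuracy (a caution/strategy change); raising the drift rates speeds responses at a given accuracy (an effort change).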
7
Ehrhardt SE, Ballard T, Wards Y, Mattingley JB, Dux PE, Filmer HL. tDCS augments decision-making efficiency in an intensity dependent manner: A training study. Neuropsychologia 2022; 176:108397. DOI: 10.1016/j.neuropsychologia.2022.108397.
8
Retzler C, Boehm U, Cai J, Cochrane A, Manning C. Prior information use and response caution in perceptual decision-making: No evidence for a relationship with autistic-like traits. Q J Exp Psychol (Hove) 2021; 74:1953-1965. PMID: 33998332. PMCID: PMC8450985. DOI: 10.1177/17470218211019939.
Abstract
Interpreting the world around us requires integrating incoming sensory signals with prior information. Autistic individuals have been proposed to rely less on prior information and to make more cautious responses than non-autistic individuals. Here, we investigated whether these purported features of autistic perception vary as a function of autistic-like traits in the general population. We used a diffusion model framework, whereby decisions are modelled as noisy evidence accumulation processes towards one of two bounds. Within this framework, prior information can bias the starting point of the evidence accumulation process. Our pre-registered hypotheses were that higher autistic-like traits would relate to reduced starting point bias caused by prior information and to increased response caution (wider boundary separation). A total of 222 participants discriminated the direction of coherent motion stimuli as quickly and accurately as possible. Stimuli were preceded by a neutral cue (a square) or a directional cue (an arrow); 80% of the directional cues validly predicted the upcoming motion direction. We modelled accuracy and response time data using a hierarchical Bayesian model in which the starting point varied with cue condition. We found no evidence for our hypotheses, with starting point bias and response caution seemingly unrelated to Adult Autism Spectrum Quotient (AQ) scores. Alongside future research applying this paradigm to autistic individuals, our findings will help refine theories regarding the role of prior information and altered decision-making strategies in autistic perception. Our study also has implications for models of bias in perceptual decision-making, as the most plausible model was one that incorporated bias in both decision-making and sensory processing.
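A starting-point bias of this kind is easy to see in a discretized diffusion simulation. The sketch below uses hypothetical parameter values (it is not the authors' hierarchical model): shifting the start point toward the cued boundary increases the proportion of responses at that boundary, which is exactly the cue effect the study measured.

```python
import random

def ddm_trial(drift, z=0.5, a=1.0, dt=0.001, s=1.0, rng=random):
    # Discretized diffusion: evidence starts at z * a (z = 0.5 is
    # unbiased; a valid directional cue shifts z toward the cued
    # boundary) and accumulates until it hits 0 or a.
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += drift * dt + s * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return (1 if x >= a else 0), t  # (boundary hit, decision time)

rng = random.Random(11)
biased = [ddm_trial(1.0, z=0.65, rng=rng)[0] for _ in range(500)]
neutral = [ddm_trial(1.0, z=0.50, rng=rng)[0] for _ in range(500)]
# The biased start point should produce more upper-boundary hits.
print(sum(biased), sum(neutral))
```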
Affiliation(s)
- Chris Retzler
- Department of Psychology, University of Huddersfield, Huddersfield, UK
- Udo Boehm
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Jing Cai
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Aimee Cochrane
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Catherine Manning
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
9
Tran NH, van Maanen L, Heathcote A, Matzke D. Systematic Parameter Reviews in Cognitive Modeling: Towards a Robust and Cumulative Characterization of Psychological Processes in the Diffusion Decision Model. Front Psychol 2021; 11:608287. PMID: 33584443. PMCID: PMC7874054. DOI: 10.3389/fpsyg.2020.608287.
Abstract
Parametric cognitive models are increasingly popular tools for analyzing data obtained from psychological experiments. One of the main goals of such models is to formalize psychological theories using parameters that represent distinct psychological processes. We argue that systematic quantitative reviews of parameter estimates can make an important contribution to robust and cumulative cognitive modeling. Parameter reviews can benefit model development and model assessment by providing valuable information about the expected parameter space, and can facilitate the more efficient design of experiments. Importantly, parameter reviews provide crucial, if not indispensable, information for the specification of informative prior distributions in Bayesian cognitive modeling. From the Bayesian perspective, prior distributions are an integral part of a model, reflecting cumulative theoretical knowledge about plausible values of the model's parameters (Lee, 2018). In this paper we illustrate how systematic parameter reviews can be implemented to generate informed prior distributions for the Diffusion Decision Model (DDM; Ratcliff and McKoon, 2008), the most widely used model of speeded decision making. We surveyed the published literature on empirical applications of the DDM, extracted the reported parameter estimates, and synthesized this information in the form of prior distributions. Our parameter review establishes a comprehensive reference resource for plausible DDM parameter values in various experimental paradigms that can guide future applications of the model. Based on the challenges we faced during the parameter review, we formulate a set of general and DDM-specific suggestions aiming to increase reproducibility and the information gained from the review process.
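The final synthesis step can be as simple as moment-matching a normal prior to the collected estimates. The sketch below uses hypothetical drift-rate values purely for illustration; the paper's actual review covers many parameters, paradigms, and more careful distributional choices.

```python
import math

def informed_prior(estimates):
    # Synthesize published parameter estimates into a normal prior
    # by moment matching: use the sample mean and standard
    # deviation as the prior's location and scale.
    n = len(estimates)
    mu = sum(estimates) / n
    sd = math.sqrt(sum((e - mu) ** 2 for e in estimates) / (n - 1))
    return mu, sd

# Hypothetical drift-rate estimates gathered from published DDM fits.
published_drifts = [1.8, 2.4, 2.1, 3.0, 2.6, 1.9, 2.2]
mu, sd = informed_prior(published_drifts)
print(round(mu, 2))  # 2.29
```

The resulting Normal(mu, sd) could then serve as an informed prior on drift rate in a new Bayesian DDM application of the same paradigm.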
Affiliation(s)
- N.-Han Tran
- Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Leendert van Maanen
- Department of Experimental Psychology, Utrecht University, Utrecht, Netherlands
- Andrew Heathcote
- Department of Psychology, University of Tasmania, Hobart, TAS, Australia
- Dora Matzke
- Psychological Methods, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands