1. Parker S, Ramsey R. What can evidence accumulation modelling tell us about human social cognition? Q J Exp Psychol (Hove) 2024; 77:639-655. PMID: 37154622; PMCID: PMC10880422; DOI: 10.1177/17470218231176950.
Abstract
Evidence accumulation models are a family of computational models that provide an account of speeded decision-making. These models have been used extensively and successfully within the cognitive psychology literature, allowing inferences about the psychological processes underlying cognition that are sometimes unavailable in a traditional analysis of accuracy or reaction time (RT). Despite this, there have been only a few applications of these models within the domain of social cognition. In this article, we explore several ways in which the study of human social information processing would benefit from evidence accumulation modelling. We begin with a brief overview of the evidence accumulation framework and its past success within cognitive psychology. We then highlight five ways in which social cognitive research would benefit from an evidence accumulation approach: (1) greater specification of assumptions, (2) unambiguous comparisons across blocked task conditions, (3) quantifying and comparing the magnitude of effects in standardised measures, (4) a novel approach for studying individual differences, and (5) improved reproducibility and accessibility. These points are illustrated using examples from the domain of social attention. Finally, we outline several methodological and practical considerations that should help researchers use evidence accumulation models productively. Ultimately, evidence accumulation modelling offers a well-developed, accessible, and commonly understood framework that supports inferences about cognition that may otherwise be out of reach in a traditional analysis of accuracy and RT. This approach therefore has the potential to substantially revise our understanding of social cognition.
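The framework this abstract describes can be illustrated with a minimal simulation (a generic sketch, not code from the article): a two-boundary drift-diffusion process in which noisy evidence accumulates until it crosses a decision threshold, jointly producing a choice and an RT. The parameter names (drift rate, boundary separation, non-decision time) follow common convention in this literature; the values are illustrative assumptions.

```python
import random

def simulate_ddm(drift=0.3, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0, seed=None):
    """Simulate one trial of a two-boundary drift-diffusion process.

    Evidence starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it reaches +boundary (correct) or -boundary
    (error). Returns (choice, reaction_time_in_seconds).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    choice = "correct" if x >= boundary else "error"
    return choice, ndt + t  # non-decision time covers encoding and motor stages

# Higher drift -> faster, more accurate decisions; wider boundary -> slower,
# more cautious responding. This joint account of accuracy and RT is what a
# traditional analysis of either measure alone cannot provide.
trials = [simulate_ddm(seed=i) for i in range(500)]
accuracy = sum(c == "correct" for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Fitting such a model to data (rather than simulating it, as here) is what lets researchers decompose condition differences into distinct psychological parameters.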
Affiliation(s)
- Samantha Parker
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Richard Ramsey
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
2. Robinson MM, DeStefano IC, Vul E, Brady TF. How do people build up visual memory representations from sensory evidence? Revisiting two classic models of choice. J Math Psychol 2023; 117:102805. PMID: 38957571; PMCID: PMC11219025; DOI: 10.1016/j.jmp.2023.102805.
Abstract
In many decision tasks, we have a set of alternative choices and are faced with the problem of how to use our latent beliefs and preferences about each alternative to make a single choice. Cognitive and decision models typically presume that beliefs and preferences are distilled to a scalar latent strength for each alternative, but it is also critical to model how people use these latent strengths to choose a single alternative. Most models follow one of two traditions to establish this link. Modern psychophysics and memory researchers make use of signal detection theory, assuming that latent strengths are perturbed by noise and that the highest resulting signal is selected. By contrast, many modern decision-theoretic modelling and machine learning approaches use the softmax function (which is based on Luce's choice axiom; Luce, 1959) to give some weight to non-maximal-strength alternatives. Despite the prominence of these two theories of choice, current approaches rarely address the connection between them, and the choice of one or the other appears more motivated by the tradition in the relevant literature than by theoretical or empirical reasons to prefer one theory to the other. The goal of the current work is to revisit this topic by elucidating which of these two models provides a better characterization of latent processes in m-alternative decision tasks, with a particular focus on memory tasks. In a set of visual memory experiments, we show that, within the same experimental design, the softmax parameter β varies across m alternatives, whereas the parameter d′ of the signal-detection model is stable. Together, our findings indicate that replacing softmax with signal-detection link models would yield more generalizable predictions across changes in task structure. More ambitiously, the invariance of signal-detection model parameters across different tasks suggests that the parametric assumptions of these models may be more than just a mathematical convenience and may reflect something real about human decision-making.
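The two linking rules contrasted in this abstract can be sketched computationally (a generic illustration, not the authors' code): softmax converts latent strengths into choice probabilities analytically, while the signal-detection max rule perturbs each strength with Gaussian noise and selects the largest sample. The strength values and the β and d′ settings below are assumptions chosen for illustration.

```python
import math
import random

def softmax_choice_probs(strengths, beta=1.0):
    """Luce/softmax link: P(choose i) is proportional to exp(beta * strength_i)."""
    exps = [math.exp(beta * s) for s in strengths]
    z = sum(exps)
    return [e / z for e in exps]

def sdt_max_rule_probs(strengths, sigma=1.0, n_sim=20_000, seed=0):
    """Signal-detection link: each alternative's strength is perturbed by
    Gaussian noise and the alternative with the largest sample is chosen.
    Choice probabilities are estimated by Monte Carlo simulation."""
    rng = random.Random(seed)
    counts = [0] * len(strengths)
    for _ in range(n_sim):
        samples = [s + rng.gauss(0, sigma) for s in strengths]
        counts[samples.index(max(samples))] += 1
    return [c / n_sim for c in counts]

# One target (strength d') among m-1 lures (strength 0), as in an
# m-alternative forced-choice memory task:
m, d_prime = 4, 1.5
strengths = [d_prime] + [0.0] * (m - 1)
p_soft = softmax_choice_probs(strengths, beta=1.0)
p_sdt = sdt_max_rule_probs(strengths)
```

Varying `m` while holding the latent strengths fixed is the kind of manipulation the paper uses to ask which link function's parameter stays invariant across task structure.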
Affiliation(s)
- Edward Vul
- University of California, San Diego, United States of America
3. Robinson MM, Brady TF. A quantitative model of ensemble perception as summed activation in feature space. Nat Hum Behav 2023; 7:1638-1651. PMID: 37402880; PMCID: PMC10810262; DOI: 10.1038/s41562-023-01602-z.
Abstract
Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, there are few computational models that provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens a venue for building and comparing models of distinct memory processes and representations.
Affiliation(s)
- Maria M Robinson
- Psychology Department, University of California, San Diego, La Jolla, CA, USA.
- Timothy F Brady
- Psychology Department, University of California, San Diego, La Jolla, CA, USA.
4. Cruz N. Conceptual clarity and empirical testability: commentary on Knauff and Gazzo Castañeda (2022). Think Reason 2022. DOI: 10.1080/13546783.2022.2112757.
Affiliation(s)
- Nicole Cruz
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
5. Intelligence IS Cognitive Flexibility: Why Multilevel Models of Within-Individual Processes Are Needed to Realise This. J Intell 2022; 10:49. PMID: 35997405; PMCID: PMC9397005; DOI: 10.3390/jintelligence10030049.
Abstract
Despite substantial evidence for the link between an individual’s intelligence and successful life outcomes, questions about what defines intelligence have remained the focus of heated dispute. The most common approach to understanding intelligence has been to investigate what performance on tests of intellect is and is not associated with. This psychometric approach, based on correlations and factor analysis, is deficient. In this review, we aim to substantiate why classic psychometrics, which focuses on between-person accounts, will necessarily provide a limited account of intelligence until theoretical considerations of within-person accounts are incorporated. First, we consider the impact of entrenched psychometric presumptions that support the status quo and impede alternative views. Second, we review the importance of process-theories, which are critical for any serious attempt to build a within-person account of intelligence. Third, features of dynamic tasks are reviewed, and we outline how static tasks can be modified to target within-person processes. Finally, we explain how multilevel models are conceptually and psychometrically well-suited to building and testing within-individual notions of intelligence, which at its core, we argue, is cognitive flexibility. We conclude by describing an application of these ideas in the context of microworlds as a case study.
6. Haeffel GJ. Psychology needs to get tired of winning. R Soc Open Sci 2022; 9:220099. PMID: 35754994; PMCID: PMC9214288; DOI: 10.1098/rsos.220099.
Abstract
Psychological science is on an extraordinary winning streak. A review of the published literature shows that nearly all study hypotheses are supported. This means that either all the theories are correct, or the literature is biased towards positive findings. Results from large-scale replication projects and the prevalence of questionable research practices indicate the latter. This is a problem because science progresses from being wrong. For decades, there have been calls for better theories and the adoption of a strong inference approach to science. However, there is little reason to believe that psychological science is ready to change. Although recent developments like the open science movement have improved transparency and replicability, they have not addressed psychological science's method-oriented (rather than problem-oriented) mindset. Psychological science still does not embrace the scientific method of developing theories, conducting critical tests of those theories, detecting contradictory results, and revising (or disposing of) the theories accordingly. In this article, I review why psychologists must embrace being wrong and how the Registered Report format might be one strategy for stopping psychology's winning streak.
Affiliation(s)
- Gerald J. Haeffel
- Department of Psychology, University of Notre Dame, Notre Dame, IN 46556, USA
7. Theoretical false positive psychology. Psychon Bull Rev 2022; 29:1751-1775. PMID: 35501547; DOI: 10.3758/s13423-022-02098-w.
Abstract
A fundamental goal of scientific research is to generate true positives (i.e., authentic discoveries). Statistically, a true positive is a significant finding for which the underlying effect size (δ) is greater than 0, whereas a false positive is a significant finding for which δ equals 0. However, the null hypothesis of no difference (δ = 0) may never be strictly true because innumerable nuisance factors can introduce small effects for theoretically uninteresting reasons. If δ never equals zero, then with sufficient power, every experiment would yield a significant result. Yet running studies with higher power by increasing sample size (N) is one of the most widely agreed upon reforms to increase replicability. Moreover, and perhaps not surprisingly, the idea that psychology should attach greater value to small effect sizes is gaining currency. Increasing N without limit makes sense for purely measurement-focused research, where the magnitude of δ itself is of interest, but it makes less sense for theory-focused research, where the truth status of the theory under investigation is of interest. Increasing power to enhance replicability will increase true positives at the level of the effect size (statistical true positives) while increasing false positives at the level of theory (theoretical false positives). With too much power, the cumulative foundation of psychological science would consist largely of nuisance effects masquerading as theoretically important discoveries. Positive predictive value at the level of theory is maximized by using an optimal N, one that is neither too small nor too large.
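The abstract's core point, that a tiny nuisance effect (δ close to, but not exactly, 0) will reach statistical significance given a large enough N, can be checked with a quick power calculation. This is a generic illustration, not the authors' analysis; the δ value below is an assumption.

```python
from statistics import NormalDist

def power_one_sample(delta, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test when the true
    standardized effect size is `delta` and the sample size is `n`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = delta * n ** 0.5  # noncentrality: the expected z-statistic
    return (1 - NormalDist().cdf(z_crit - ncp)) + NormalDist().cdf(-z_crit - ncp)

# A theoretically trivial nuisance effect of delta = 0.02 is almost never
# significant at N = 100 but becomes near-certain to be "discovered" as N grows:
for n in (100, 10_000, 100_000):
    print(n, round(power_one_sample(0.02, n), 3))
```

At δ = 0, power collapses to the nominal α, which is why, under a strict null, large N helps; the paper's argument is precisely that the strict null may never hold.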
8. Cerebral Polymorphisms for Lateralisation: Modelling the Genetic and Phenotypic Architectures of Multiple Functional Modules. Symmetry (Basel) 2022. DOI: 10.3390/sym14040814.
Abstract
Recent fMRI and fTCD studies have found that functional modules for aspects of language, praxis, and visuo-spatial functioning, while typically left, left and right hemispheric respectively, frequently show atypical lateralisation. Studies with increasing numbers of modules and participants are finding increasing numbers of module combinations, which here are termed cerebral polymorphisms—qualitatively different lateral organisations of cognitive functions. Polymorphisms are more frequent in left-handers than right-handers, but it is far from the case that right-handers all show the lateral organisation of modules described in introductory textbooks. In computational terms, this paper extends the original, monogenic McManus DC (dextral-chance) model of handedness and language dominance to multiple functional modules, and to a polygenic DC model compatible with the molecular genetics of handedness, and with the biology of visceral asymmetries found in primary ciliary dyskinesia. Distributions of cerebral polymorphisms are calculated for families and twins, and consequences and implications of cerebral polymorphisms are explored for explaining aphasia due to cerebral damage, as well as possible talents and deficits arising from atypical inter- and intra-hemispheric modular connections. The model is set in the broader context of the testing of psychological theories, of issues of laterality measurement, of mutation-selection balance, and the evolution of brain and visceral asymmetries.
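The monogenic DC model that this paper extends can be sketched as a simulation (a generic illustration, not the paper's model; the phenotype probabilities follow the common textbook presentation of the dextral-chance model and should be treated as assumptions):

```python
import random

# Common presentation of the monogenic dextral-chance (DC) model: the D
# allele directs right-handedness, the C allele leaves lateralisation to
# chance. The exact probabilities below are illustrative assumptions.
P_LEFT = {"DD": 0.0, "CD": 0.25, "CC": 0.5}

def handedness(genotype, rng):
    """Sample a handedness phenotype for a genotype ('DD', 'CD', or 'CC')."""
    return "left" if rng.random() < P_LEFT[genotype] else "right"

def simulate_population(freq_c=0.2, n=100_000, seed=1):
    """Estimate the population rate of left-handedness, assuming
    Hardy-Weinberg genotype frequencies for a C allele at frequency freq_c."""
    rng = random.Random(seed)
    left = 0
    for _ in range(n):
        alleles = sorted("C" if rng.random() < freq_c else "D" for _ in range(2))
        genotype = "".join(alleles)  # 'CC', 'CD', or 'DD'
        if handedness(genotype, rng) == "left":
            left += 1
    return left / n

# With freq_c = 0.2 the model predicts roughly 10% left-handers:
# P(CC)*0.5 + P(CD)*0.25 = 0.04*0.5 + 0.32*0.25 = 0.10
rate = simulate_population()
```

The paper's polygenic extension replaces the single locus with many loci and applies chance lateralisation independently to multiple functional modules, which is what generates the combinatorial space of cerebral polymorphisms.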
9. Regenwetter M, Robinson MM, Wang C. Four Internal Inconsistencies in Tversky and Kahneman’s (1992) Cumulative Prospect Theory Article: A Case Study in Ambiguous Theoretical Scope and Ambiguous Parsimony. Adv Methods Pract Psychol Sci 2022. DOI: 10.1177/25152459221074653.
Abstract
Scholars heavily rely on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested into also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman’s findings in their own test of loss aversion provide evidence that at least half of their participants, in turn, violated their theory in that study. We highlight a combination of conflicting findings in the original article that make it ambiguous to evaluate both cumulative prospect theory’s scope and its parsimony on the authors’ own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.
Affiliation(s)
- Michel Regenwetter
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Department of Political Science, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Department of Electrical & Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Maria M. Robinson
- Department of Psychology, University of California San Diego, La Jolla, California
- Cihang Wang
- Department of Economics, University of Illinois at Urbana-Champaign, Urbana, Illinois
10. Nguyen V, Versyp O, Cox C, Fusaroli R. A systematic review and Bayesian meta-analysis of the development of turn taking in adult-child vocal interactions. Child Dev 2022; 93:1181-1200. PMID: 35305028; DOI: 10.1111/cdev.13754.
Abstract
Fluent conversation requires temporal organization between conversational exchanges. By performing a systematic review and Bayesian multi-level meta-analysis, we map the trajectory of infants' turn-taking abilities over the course of early development (0 to 70 months). We synthesize the evidence from 26 studies (78 estimates from 429 unique infants, of which at least 152 are female) reporting response latencies in infant-adult dyadic interactions. The data were collected between 1975 and 2019, exclusively in North America and Europe. Infants took on average circa 1 s to respond, and the evidence of changes in response over time was inconclusive. Infants' response latencies are related to those of their adult conversational partners: an increase of 1 s in adult response latency (e.g., 400 to 1400 ms) would be related to an increase of over 1 s in infant response latency (from 600 to 1857 ms). These results highlight the dynamic reciprocity involved in the temporal organization of turn-taking. Based on these results, we provide recommendations for future avenues of enquiry: studies should analyze how turn-by-turn exchanges develop on a longitudinal timescale, with rich assessment of infants' linguistic and social development.
Affiliation(s)
- Vivian Nguyen
- Psychology, Ghent University, Gent, Belgium; Department of Linguistic, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Otto Versyp
- Psychology, Ghent University, Gent, Belgium; Department of Linguistic, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Christopher Cox
- Department of Linguistic, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark; The Interacting Minds Center, Aarhus University, Aarhus, Denmark; Department of Language & Linguistic Science, University of York, York, UK
- Riccardo Fusaroli
- Department of Linguistic, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark; The Interacting Minds Center, Aarhus University, Aarhus, Denmark; Linguistic Data Consortium, University of Pennsylvania, Philadelphia, Pennsylvania, USA
11. Ledgerwood A, Hudson SKTJ, Lewis NA, Maddox KB, Pickett CL, Remedios JD, Cheryan S, Diekman AB, Dutra NB, Goh JX, Goodwin SA, Munakata Y, Navarro DJ, Onyeador IN, Srivastava S, Wilkins CL. The Pandemic as a Portal: Reimagining Psychological Science as Truly Open and Inclusive. Perspect Psychol Sci 2022; 17:937-959. PMID: 35235485; DOI: 10.1177/17456916211036654.
Abstract
Psychological science is at an inflection point: The COVID-19 pandemic has exacerbated inequalities that stem from our historically closed and exclusive culture. Meanwhile, reform efforts to change the future of our science are too narrow in focus to fully succeed. In this article, we call on psychological scientists (focusing specifically on those who use quantitative methods in the United States as one context for such conversations) to begin reimagining our discipline as fundamentally open and inclusive. First, we discuss whom our discipline was designed to serve and how this history produced the inequitable reward and support systems we see today. Second, we highlight how current institutional responses to address worsening inequalities are inadequate, as well as how our disciplinary perspective may both help and hinder our ability to craft effective solutions. Third, we take a hard look in the mirror at the disconnect between what we ostensibly value as a field and what we actually practice. Fourth and finally, we lead readers through a roadmap for reimagining psychological science in whatever roles and spaces they occupy, from an informal discussion group in a department to a formal strategic planning retreat at a scientific society.
Affiliation(s)
- Amanda B Diekman
- Department of Psychological and Brain Sciences, Indiana University
- Natalia B Dutra
- Laboratory of Evolution of Human Behavior, Department of Physiology and Behavior, Federal University of Rio Grande do Norte
- Jin X Goh
- Department of Psychology, Colby College
- Stephanie A Goodwin
- Department of Psychology, Wright State University; Department of Social Sciences, Stevens Institute of Technology
- Yuko Munakata
- Department of Psychology, University of California, Davis