1. Mayrhofer R, Büchner IC, Hevesi J. The quantitative paradigm and the nature of the human mind. The replication crisis as an epistemological crisis of quantitative psychology in view of the ontic nature of the psyche. Front Psychol 2024; 15:1390233. PMID: 39328812; PMCID: PMC11424412; DOI: 10.3389/fpsyg.2024.1390233
Abstract
Many suggestions for dealing with the so-called replication crisis in psychology revolve around the idea that better and more complex statistical-mathematical tools or stricter procedures are required in order to obtain reliable findings and prevent cheating or publication biases. While these aspects may play an exacerbating role, we interpret the replication crisis primarily as an epistemological crisis in psychology caused by an inadequate fit between the ontic nature of the psyche and the quantitative approach. On the basis of the philosophers of science Karl Popper, Thomas Kuhn, and Imre Lakatos, we suggest that the replication crisis is therefore a symptom of a fundamental problem in psychology, but at the same time it is also an opportunity to advance psychology as a science. As a first step, against the background of Popper's Critical Rationalism, the replication crisis is interpreted as an opportunity to eliminate inaccurate theories from the pool of theories and to correct problematic developments. Continuing this line of thought, in an interpretation along the lines of Thomas Kuhn, the replication crisis might signify a model drift or even model crisis, thus possibly heralding a new paradigm in psychology. The reasons for this are located in the structure of academic psychology on the basis of Lakatos's assumption about how sciences operate. Accordingly, one hard core that lies at the very basis of psychology may be found in the assumption that the human psyche can and is to be understood in quantitative terms. For this to be possible, the ontic structure of the psyche, i.e., its very nature, must also in some way be quantitatively constituted. Hence, the replication crisis suggests that the ontic structure of the psyche in some way (also) contains a non-quantitative dimension that can only be grasped incompletely or fragmentarily using quantitative research methods. Fluctuating and inconsistent results in psychology could therefore also be the expression of a mismatch between the ontic level of the object of investigation and the epistemic level of the investigation.
Affiliation(s)
- Roland Mayrhofer
- Department of Psychology, University of Regensburg, Regensburg, Germany
- Isabel C Büchner
- Department of Psychology, University of Regensburg, Regensburg, Germany
- Judit Hevesi
- Department of Psychology, University of Regensburg, Regensburg, Germany
2. Schweinsberg M, Thau S, Pillutla M. Research-Problem Validity in Primary Research: Precision and Transparency in Characterizing Past Knowledge. Perspect Psychol Sci 2023; 18:1230-1243. PMID: 36745743; PMCID: PMC10475212; DOI: 10.1177/17456916221144990
Abstract
Four validity types evaluate the approximate truth of inferences communicated by primary research. However, current validity frameworks ignore the truthfulness of empirical inferences that are central to research-problem statements. Problem statements contrast a review of past research with other knowledge that extends, contradicts, or calls into question specific features of past research. Authors communicate empirical inferences, or quantitative judgments, about the frequency (e.g., "few," "most") and variability (e.g., "on the one hand," "on the other hand") in their reviews of existing theories, measures, samples, or results. We code a random sample of primary research articles and show that 83% of quantitative judgments in our sample are vague and do not have a transparent origin, making it difficult to assess their validity. We review validity threats of current practices. We propose that documenting the literature search, reporting how the search was coded, and quantifying the search results facilitates more precise judgments and makes their origin transparent. This practice enables research questions that are more closely tied to the existing body of knowledge and allows for more informed evaluations of the contribution of primary research articles, their design choices, and how they advance knowledge. We discuss potential limitations of our proposed framework.
3. Sarafoglou A, Kovacs M, Bakos B, Wagenmakers EJ, Aczel B. A survey on how preregistration affects the research workflow: better science but more work. R Soc Open Sci 2022; 9:211997. PMID: 35814910; PMCID: PMC9257590; DOI: 10.1098/rsos.211997
Abstract
The preregistration of research protocols and analysis plans is a main reform innovation to counteract confirmation bias in the social and behavioural sciences. While theoretical reasons to preregister are frequently discussed in the literature, the individually experienced advantages and disadvantages of this method remain largely unexplored. The goal of this exploratory study was to identify the perceived benefits and challenges of preregistration from the researcher's perspective. To this end, we surveyed 355 researchers, 299 of whom had used preregistration in their own work. The researchers indicated the experienced or expected effects of preregistration on their workflow. The results show that experiences and expectations are mostly positive. Researchers in our sample believe that implementing preregistration improves or is likely to improve the quality of their projects. Criticism of preregistration is primarily related to the increase in work-related stress and the overall duration of the project. While the benefits outweighed the challenges for the majority of researchers with preregistration experience, this was not the case for the majority of researchers without preregistration experience. The experienced advantages and disadvantages identified in our survey could inform future efforts to improve preregistration and thus help the methodology gain greater acceptance in the scientific community.
Affiliation(s)
- Marton Kovacs
- Doctoral School of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
- Institute of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
- Bence Bakos
- Institute of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
- Balazs Aczel
- Institute of Psychology, ELTE Eotvos Lorand University, Budapest, Hungary
4. Scheel AM, Tiokhin L, Isager PM, Lakens D. Why Hypothesis Testers Should Spend Less Time Testing Hypotheses. Perspect Psychol Sci 2021; 16:744-755. PMID: 33326363; PMCID: PMC8273364; DOI: 10.1177/1745691620966795
Abstract
For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable. In response to the replication crisis, reforms in psychology have focused on formalizing procedures for testing hypotheses. These reforms were necessary and influential. However, as an unexpected consequence, psychological scientists have begun to realize that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound "derivation chain" between test and theory is counterproductive. Instead, various nonconfirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative. Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and the functional form of those relationships, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognized and incentivized as a crucial goal in itself. In this article, we discuss how shifting the focus to nonconfirmatory research can tie together many loose ends of psychology's reform movement and help us to develop strong, testable theories, as Paul Meehl urged.
Affiliation(s)
- Anne M. Scheel
- Human-Technology Interaction Group, Eindhoven University of Technology
- Leonid Tiokhin
- Human-Technology Interaction Group, Eindhoven University of Technology
- Peder M. Isager
- Human-Technology Interaction Group, Eindhoven University of Technology
- Daniël Lakens
- Human-Technology Interaction Group, Eindhoven University of Technology
5. A decade of theory as reflected in Psychological Science (2009-2019). PLoS One 2021; 16:e0247986. PMID: 33667242; PMCID: PMC7935264; DOI: 10.1371/journal.pone.0247986
Abstract
The dominant belief is that science progresses by testing theories and moving towards theoretical consensus. While it is implicitly assumed that psychology operates in this manner, critical discussions claim that the field suffers from a lack of cumulative theory. To examine this paradox, we analysed research published in Psychological Science from 2009-2019 (N = 2,225). We found mentions of 359 theories in-text; most were referred to only once. Only 53.66% of all manuscripts included the word theory, and only 15.33% explicitly claimed to test predictions derived from theories. We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building. These data provide insight into the kinds of research psychologists are conducting and raise questions about the role of theory in the psychological sciences.
6. Del Giudice M, Gangestad SW. A Traveler's Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions. Adv Methods Pract Psychol Sci 2021. DOI: 10.1177/2515245920954925
Abstract
Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
7.
Abstract
The COVID-19 pandemic points to the need for scientists to pool their efforts in order to understand this disease and respond to the ensuing crisis. Other global challenges also require such scientific cooperation. Yet in academic institutions, reward structures and incentives are based on systems that primarily fuel the competition between (groups of) scientific researchers. Competition between individual researchers, research groups, research approaches, and scientific disciplines is seen as an important selection mechanism and driver of academic excellence. These expected benefits of competition have come to define the organizational culture in academia. There are clear indications that the overreliance on competitive models undermines cooperative exchanges that might lead to higher quality insights. This damages the well-being and productivity of individual researchers and impedes efforts towards collaborative knowledge generation. Insights from social and organizational psychology on the side effects of relying on performance targets, prioritizing the achievement of success over the avoidance of failure, and emphasizing self-interest and efficiency, clarify implicit mechanisms that may spoil valid attempts at transformation. The analysis presented here elucidates that a broader change in the academic culture is needed to truly benefit from current attempts to create more open and collaborative practices for cumulative knowledge generation.
8. Grounded procedures: A proximate mechanism for the psychology of cleansing and other physical actions. Behav Brain Sci 2020; 44:e1. PMID: 32390575; DOI: 10.1017/s0140525x20000308
Abstract
Experimental work has revealed causal links between physical cleansing and various psychological variables. Empirically, how robust are they? Theoretically, how do they operate? Major prevailing accounts focus on morality or disgust, capturing a subset of cleansing effects, but cannot easily handle cleansing effects in non-moral, non-disgusting contexts. Building on grounded views on cognitive processes and known properties of mental procedures, we propose grounded procedures of separation as a proximate mechanism underlying cleansing effects. This account differs from prevailing accounts in terms of explanatory kind, interpretive parsimony, and predictive scope. Its unique and falsifiable predictions have received empirical support: Cleansing attenuates or eliminates otherwise observed influences of prior events (1) across domains and (2) across valences. (3) Cleansing manipulations produce stronger effects the more strongly they engage sensorimotor capacities. (4) Reversing the causal arrow, motivation for cleansing is triggered more readily by negative than positive entities. (5) Conceptually similar effects extend to other physical actions of separation. On the flipside, grounded procedures of connection are also observed. Together, separation and connection organize prior findings relevant to multiple perspectives (e.g., conceptual metaphor, sympathetic magic) and open up new questions. Their predictions are more generalizable than the specific mappings in conceptual metaphors, but more fine-grained than the broad assumptions of grounded cognition. This intermediate level of analysis sheds light on the interplay between mental and physical processes.
9. Adversarial alignment enables competing models to engage in cooperative theory building toward cumulative science. Proc Natl Acad Sci U S A 2020; 117:7561-7567. PMID: 32170010; DOI: 10.1073/pnas.1906720117
Abstract
Crises in science concern not only methods, statistics, and results but also, theory development. Beyond the indispensable refinement of tools and procedures, resolving crises would also benefit from a deeper understanding of the concepts and processes guiding research. Usually, theories compete, and some lose, incentivizing destruction of seemingly opposing views. This does not necessarily contribute to accumulating insights, and it may incur collateral damage (e.g., impairing cognitive processes and collegial relations). To develop a more constructive model, we built on adversarial collaboration, which integrates incompatible results into agreed-on new empirical research to test competing hypotheses [D. Kahneman, Am. Psychol. 58, 723-730 (2003)]. Applying theory and evidence from the behavioral sciences, we address the group dynamic complexities of adversarial interactions between scientists. We illustrate the added value of considering these in an "adversarial alignment" that addressed competing conceptual frameworks from five different theories of social evaluation. Negotiating a joint framework required two preconditions and several guidelines. First, we reframed our interactions from competitive rivalry to cooperative pursuit of a joint goal, and second, we assumed scientific competence and good intentions, enabling cooperation toward that goal. Then, we applied five rules for successful multiparty negotiations: 1) leveling the playing field, 2) capitalizing on curiosity, 3) producing measurable progress, 4) working toward mutual gain, and 5) being aware of the downside alternative. Together, these guidelines can encourage others to create conditions that allow for theoretical alignments and develop cumulative science.
10. Szollosi A, Kellen D, Navarro DJ, Shiffrin R, van Rooij I, Van Zandt T, Donkin C. Is Preregistration Worthwhile? Trends Cogn Sci 2019; 24:94-95. PMID: 31892461; DOI: 10.1016/j.tics.2019.11.009
Affiliation(s)
- Aba Szollosi
- University of New South Wales, Kensington, Australia
- Chris Donkin
- University of New South Wales, Kensington, Australia
11. Dunn BD, O'Mahen H, Wright K, Brown G. A commentary on research rigour in clinical psychological science: How to avoid throwing out the innovation baby with the research credibility bath water in the depression field. Behav Res Ther 2019; 120:103417. DOI: 10.1016/j.brat.2019.103417
12.
13. Benning SD, Bachrach RL, Smith EA, Freeman AJ, Wright AGC. The registration continuum in clinical science: A guide toward transparent practices. J Abnorm Psychol 2019; 128:528-540. PMID: 31368732; PMCID: PMC6677163; DOI: 10.1037/abn0000451
Abstract
Clinical scientists can use a continuum of registration efforts that vary in their disclosure and timing relative to data collection and analysis. Broadly speaking, registration benefits investigators by offering stronger, more powerful tests of theory with particular methods in tandem with better control of long-run false positive error rates. Registration helps clinical researchers in thinking through tensions between bandwidth and fidelity that surround recruiting participants, defining clinical phenotypes, handling comorbidity, treating missing data, and analyzing rich and complex data. In particular, registration helps record and justify the reasons behind specific study design decisions, though it also provides the opportunity to register entire decision trees with specific endpoints. Creating ever more faithful registrations and standard operating procedures may offer alternative methods of judging a clinical investigator's scientific skill and eminence because study registration increases the transparency of clinical researchers' work.
Affiliation(s)
- Rachel L Bachrach
- Interdisciplinary Addiction Program for Education and Research, Center for Health Equity and Research Promotion, Mental Illness Research, Education and Clinical Center, VA Pittsburgh Healthcare System
- Edward A Smith
- Department of Psychology, University of Nevada, Las Vegas
14. Renger D, Mommert A, Renger S, Miché M, Simon B. Voicing One's Ideas: Intragroup Respect as an Antecedent of Assertive Behavior. Basic Appl Soc Psych 2019. DOI: 10.1080/01973533.2018.1542306
15. Wai J, Halpern DF. The Impact of Changing Norms on Creativity in Psychological Science. Perspect Psychol Sci 2018; 13:466-472. PMID: 29961414; DOI: 10.1177/1745691618773326
Abstract
The open science or credibility revolution has divided psychologists on whether and how the "policy" change of preregistration and similar requirements will affect the quality and creativity of future research. We provide a brief history of how norms have rapidly changed and how news and social media are beginning to "disrupt" academic science. We note a variety of benefits, including more confidence in research findings, but there are possible costs as well, including a reduction in the number of studies conducted because of an increased workload required by new policies. We begin to craft a study to evaluate the short- and long-term impacts of these changing norms on creativity in psychological science, run into some possible roadblocks, and hope others will build on this idea. This policy change can be evaluated in the short term but will ultimately need to be evaluated decades from now. Long-term evaluations are rare, yet this is the ultimate measure of creative scientific advance. Our conclusion supports the goals and procedures for creating a more open science.
Affiliation(s)
- Jonathan Wai
- Department of Education Reform, University of Arkansas
- Diane F Halpern
- Department of Psychology, Claremont McKenna College
- Minerva Schools at KGI