1
Fitzpatrick BG, Gorman DM, Trombatore C. Impact of redefining statistical significance on P-hacking and false positive rates: An agent-based model. PLoS One 2024; 19:e0303262. doi: 10.1371/journal.pone.0303262. PMID: 38753677; PMCID: PMC11098386.
Abstract
In recent years, concern has grown about the inappropriate application and interpretation of P values, especially the use of P<0.05 to denote "statistical significance" and the practice of P-hacking to produce results below this threshold and selectively report them in publications. Such behavior is said to be a major contributor to the large number of false and non-reproducible discoveries in academic journals. In response, it has been proposed that the threshold for statistical significance be changed from 0.05 to 0.005. The aim of the current study was to use an evolutionary agent-based model, comprising researchers who test hypotheses and strive to increase their publication rates, to explore the impact of a 0.005 P value threshold on P-hacking and published false positive rates. Three scenarios were examined: one in which researchers tested a single hypothesis, one in which they tested multiple hypotheses using a P<0.05 threshold, and one in which they tested multiple hypotheses using a P<0.005 threshold. Effect sizes were varied across models, and output was assessed in terms of researcher effort, number of hypotheses tested, number of publications, and the published false positive rate. The results supported the view that a more stringent P value threshold can reduce the rate of published false positive results. Researchers still engaged in P-hacking under the new threshold, but the effort they expended increased substantially and their overall productivity fell, resulting in a decline in the published false positive rate. Compared with other proposed interventions to improve the academic publishing system, changing the P value threshold has the advantage of being relatively easy to implement, and it could be monitored and enforced with minimal effort by journal editors and peer reviewers.
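The core arithmetic behind this abstract's claim can be illustrated without the authors' full evolutionary model: under the null hypothesis p-values are uniformly distributed, so a researcher who tests several hypotheses and reports any result below the threshold inflates the false positive rate to 1-(1-α)^k. A minimal sketch (function name and parameters are illustrative, not taken from the paper):

```python
import random

def published_fp_rate(alpha, n_hypotheses, n_researchers=100_000, seed=42):
    """Fraction of researchers with no true effect who still obtain
    at least one 'significant' result across n_hypotheses tests.

    Under the null hypothesis each p-value is Uniform(0, 1), so we
    draw p-values directly instead of simulating full experiments.
    """
    rng = random.Random(seed)
    published = sum(
        1 for _ in range(n_researchers)
        if any(rng.random() < alpha for _ in range(n_hypotheses))
    )
    return published / n_researchers

# P-hacking over five hypotheses at the two thresholds:
lax = published_fp_rate(alpha=0.05, n_hypotheses=5)      # analytically 1 - 0.95**5  ~ 0.226
strict = published_fp_rate(alpha=0.005, n_hypotheses=5)  # analytically 1 - 0.995**5 ~ 0.025
print(f"p < 0.05:  {lax:.3f}")
print(f"p < 0.005: {strict:.3f}")
```

This reproduces only the threshold arithmetic; the paper's model additionally evolves researcher effort and productivity, which is why its published false positive rates differ from these raw figures.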
Affiliation(s)
- Ben G. Fitzpatrick
- Department of Mathematics, Loyola Marymount University, Los Angeles, California, United States of America
- Tempest Technologies, Los Angeles, California, United States of America
- Dennis M. Gorman
- Department of Epidemiology & Biostatistics, School of Public Health, Texas A&M University, College Station, Texas, United States of America
- Caitlin Trombatore
- Department of Mathematics, Loyola Marymount University, Los Angeles, California, United States of America
2
Marcu GM, Dumbravă A, Băcilă IC, Szekely-Copîndean RD, Zăgrean AM. Increasing Value and Reducing Waste of Research on Neurofeedback Effects in Post-traumatic Stress Disorder: A State-of-the-Art-Review. Appl Psychophysiol Biofeedback 2024; 49:23-45. doi: 10.1007/s10484-023-09610-5. PMID: 38151684.
Abstract
Post-Traumatic Stress Disorder (PTSD) is often considered challenging to treat due to factors that contribute to its complexity. In the last decade, more attention has been paid to non-pharmacological or non-psychological therapies for PTSD, including neurofeedback (NFB). NFB is a promising non-invasive technique targeting specific brainwave patterns associated with psychiatric symptomatology. By learning to regulate brain activity in a closed-loop paradigm, individuals can improve their functionality while reducing symptom severity. However, owing to its lax regulation and heterogeneous legal status across different countries, the degree to which it has scientific support as a psychiatric treatment remains controversial. In this state-of-the-art review, we searched PubMed, Cochrane Central, Web of Science, Scopus, and MEDLINE and identified meta-analyses and systematic reviews exploring the efficacy of NFB for PTSD. We included seven systematic reviews, of which three included meta-analyses (32 studies and 669 participants) that targeted NFB as an intervention while addressing a single condition, PTSD. We used A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 and the criteria described by Cristea and Naudet (Behav Res Therapy 123:103479, 2019, https://doi.org/10.1016/j.brat.2019.103479) to identify sources of research waste and of increasing value in biomedical research. The seven assessed reviews were of very poor overall quality (five rated critically low, one low, one moderate, and none high) and showed multiple sources of waste, while opening opportunities for increasing value in the NFB literature. Our research shows that it remains unclear whether NFB training is significantly beneficial in treating PTSD. The quality of the investigated literature is low and maintains a persistent uncertainty over numerous points that are highly important for deciding whether an intervention has clinical efficacy.
Just as importantly, none of the reviews we appraised explored statistical power, referred to open data of the included studies, or adjusted their pooled effect sizes for publication bias and risk of bias. Based on the obtained results, we identified some recurrent sources of waste (such as a lack of research decisions based on sound questions, or failure to use an appropriate methodology in a fully transparent, unbiased, and useable manner) and proposed some directions for increasing value (homogeneity and consensus) in designing and reporting research on NFB interventions in PTSD.
Affiliation(s)
- Gabriela Mariana Marcu
- Division of Physiology and Neuroscience, Department of Functional Sciences, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Department of Psychology, "Lucian Blaga" University of Sibiu, Sibiu, Romania
- Andrei Dumbravă
- George I.M. Georgescu Institute of Cardiovascular Diseases, Iaşi, Romania
- Alexandru Ioan Cuza University of Iaşi, Iaşi, Romania
- Ionuţ-Ciprian Băcilă
- Scientific Research Group in Neuroscience, "Dr. Gheorghe Preda" Clinical Psychiatry Hospital, Sibiu, Romania
- Faculty of Medicine, "Lucian Blaga" University of Sibiu, Sibiu, Romania
- Raluca Diana Szekely-Copîndean
- Scientific Research Group in Neuroscience, "Dr. Gheorghe Preda" Clinical Psychiatry Hospital, Sibiu, Romania
- Department of Social and Human Research, Romanian Academy - Cluj-Napoca Branch, Cluj-Napoca, Romania
- Ana-Maria Zăgrean
- Division of Physiology and Neuroscience, Department of Functional Sciences, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
3
Luijken K, Lohmann A, Alter U, Claramunt Gonzalez J, Clouth FJ, Fossum JL, Hesen L, Huizing AHJ, Ketelaar J, Montoya AK, Nab L, Nijman RCC, Penning de Vries BBL, Tibbe TD, Wang YA, Groenwold RHH. Replicability of simulation studies for the investigation of statistical methods: the RepliSims project. R Soc Open Sci 2024; 11:231003. doi: 10.1098/rsos.231003. PMID: 38234442; PMCID: PMC10791519.
Abstract
Results of simulation studies evaluating the performance of statistical methods can have a major impact on the way empirical research is implemented. However, there is so far limited evidence on the replicability of simulation studies. Eight highly cited statistical simulation studies were selected, and their replicability was assessed by teams of replicators with formal training in quantitative methodology. The teams used the information in the original publications to write simulation code with the aim of replicating the results. The primary outcome was the feasibility of replication based on the information reported in the original publications and supplementary materials. Replicability varied greatly: some original studies provided detailed information leading to almost perfect replication of results, whereas other studies did not provide enough information to implement any of the reported simulations. Factors facilitating replication included availability of code, detailed reporting or visualization of data-generating procedures and methods, and replicator expertise. Replicability of statistical simulation studies was mainly impeded by lack of information and by the limited sustainability of information sources. We encourage researchers publishing simulation studies to transparently report all relevant implementation details, either in the research paper itself or in easily accessible supplementary material, and to make their simulation code publicly available via permanent links.
Affiliation(s)
- K. Luijken
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Epidemiology, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A. Lohmann
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- U. Alter
- Department of Psychology, York University, Toronto, Ontario, Canada
- J. Claramunt Gonzalez
- Methodology and Statistics Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
- F. J. Clouth
- Department of Methodology and Statistics, Tilburg University, Tilburg, The Netherlands
- Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands
- J. L. Fossum
- Department of Psychology, University of California, Los Angeles, CA, USA
- Department of Psychology, Seattle Pacific University, Seattle, WA, USA
- L. Hesen
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- A. H. J. Huizing
- TNO (Netherlands Organization for Applied Scientific Research), Expertise Group Child Health, Leiden, The Netherlands
- J. Ketelaar
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- A. K. Montoya
- Department of Psychology, University of California, Los Angeles, CA, USA
- L. Nab
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- R. C. C. Nijman
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- B. B. L. Penning de Vries
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, The Netherlands
- T. D. Tibbe
- Department of Psychology, University of California, Los Angeles, CA, USA
- Y. A. Wang
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- R. H. H. Groenwold
- Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, The Netherlands
4
Smaldino PE, McElreath R. Correction to: 'The natural selection of bad science' (2016) by Paul E. Smaldino and Richard McElreath. R Soc Open Sci 2023; 10:231026. doi: 10.1098/rsos.231026. PMID: 37680497; PMCID: PMC10480692.
Abstract
This corrects the article DOI: 10.1098/rsos.160384.