1.
Turoman N, Heyard R, Schwab S, Furrer E, Vergauwe E, Held L. Using an expert survey and user feedback to construct PRECHECK: A checklist to evaluate preprints on COVID-19 and beyond. F1000Res 2024; 12:588. PMID: 38983445; PMCID: PMC11231630; DOI: 10.12688/f1000research.129814.3.
Abstract
Background The quality of COVID-19 preprints should be considered with great care, as their contents can influence public policy. Surprisingly little has been done to calibrate the public's evaluation of preprints and their contents. The PRECHECK project aimed to generate a tool to teach and guide scientifically literate non-experts to critically evaluate preprints, on COVID-19 and beyond. Methods To create the checklist, we applied a four-step procedure consisting of an initial internal review, an external review by a pool of experts (methodologists, meta-researchers/experts on preprints, journal editors, and science journalists), a final internal review, and a preliminary implementation stage. For the external review step, experts rated the relevance of each element of the checklist on five-point Likert scales and provided written feedback. After each internal review round, we applied the checklist to a small set of high-quality preprints, drawn from an online list of milestone research works on COVID-19, and low-quality preprints, which were eventually retracted, to verify whether the checklist could discriminate between the two categories. Results At the external review step, 26 of the 54 contacted experts responded. The final checklist contained four elements (research question, study type, transparency and integrity, and limitations), each with 'superficial' and 'deep' evaluation levels. When both levels were used, the checklist was effective at discriminating between the small sets of high- and low-quality preprints. Its usability for the assessment and discussion of preprints was confirmed in workshops with Bachelor's students in Psychology and Medicine and with science journalists. Conclusions We created a simple, easy-to-use tool to help scientifically literate non-experts navigate preprints with a critical mind and to facilitate discussions within, for example, a beginner-level lecture on research methods. We believe that our checklist has the potential to help guide our target audience's decisions about the quality of preprints on COVID-19, and that this potential extends beyond COVID-19.
Affiliation(s)
- Nora Turoman: Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Rachel Heyard: Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland; Department of Biostatistics at the Epidemiology Biostatistics and Prevention Institute (EPBI), University of Zurich, Zurich, Switzerland
- Simon Schwab: Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland; Department of Biostatistics at the Epidemiology Biostatistics and Prevention Institute (EPBI), University of Zurich, Zurich, Switzerland
- Eva Furrer: Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland; Department of Biostatistics at the Epidemiology Biostatistics and Prevention Institute (EPBI), University of Zurich, Zurich, Switzerland
- Evie Vergauwe: Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Leonhard Held: Center for Reproducible Science (CRS), University of Zurich, Zurich, Switzerland; Department of Biostatistics at the Epidemiology Biostatistics and Prevention Institute (EPBI), University of Zurich, Zurich, Switzerland
2.
Tong J, Luo C, Sun Y, Duan R, Saine ME, Lin L, Peng Y, Lu Y, Batra A, Pan A, Wang O, Li R, Marks-Anglin A, Yang Y, Zuo X, Liu Y, Bian J, Kimmel SE, Hamilton K, Cuker A, Hubbard RA, Xu H, Chen Y. Confidence score: a data-driven measure for inclusive systematic reviews considering unpublished preprints. J Am Med Inform Assoc 2024; 31:809-819. PMID: 38065694; PMCID: PMC10990515; DOI: 10.1093/jamia/ocad248.
Abstract
OBJECTIVES COVID-19, since its emergence in December 2019, has globally impacted research. Over 360,000 COVID-19-related manuscripts have been published on PubMed and on preprint servers such as medRxiv and bioRxiv, with preprints comprising about 15% of all manuscripts. Yet, the role and impact of preprints on COVID-19 research and evidence synthesis remain uncertain. MATERIALS AND METHODS We propose a novel data-driven method for assigning weights to individual preprints in systematic reviews and meta-analyses. This weight, termed the "confidence score", is obtained using the survival cure model, also known as the survival mixture model, which takes into account the time elapsed between the posting and publication of a preprint, as well as metadata such as the number of citations within the first 2 weeks, sample size, and study type. RESULTS Using 146 preprints on COVID-19 therapeutics posted from the beginning of the pandemic through April 30, 2021, we validated the confidence scores, showing an area under the curve of 0.95 (95% CI, 0.92-0.98). Through a use case on the effectiveness of hydroxychloroquine, we demonstrated how these scores can be incorporated practically into meta-analyses to properly weight preprints. DISCUSSION It is important to note that our method does not aim to replace existing measures of study quality but rather serves as a supplementary measure that overcomes some limitations of current approaches. CONCLUSION Our proposed confidence score has the potential to improve systematic reviews of evidence related to COVID-19 and other clinical conditions by providing a data-driven approach to including unpublished manuscripts.
Affiliation(s)
- Jiayi Tong: The Center for Health Analytics and Synthesis of Evidence (CHASE), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104, United States; Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Chongliang Luo: Division of Public Health Sciences, Washington University School of Medicine in St Louis, St Louis, MO 63110, United States
- Yifei Sun: Department of Biostatistics, Columbia University, New York City, NY 10032, United States
- Rui Duan: Department of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University, Cambridge, MA 02115, United States
- M Elle Saine: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Lifeng Lin: Department of Epidemiology and Biostatistics, University of Arizona, Tucson, AZ 85724, United States
- Yifan Peng: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY 11101, United States
- Yiwen Lu: The Center for Health Analytics and Synthesis of Evidence (CHASE), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104, United States; Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States; The Graduate Group in Applied Mathematics and Computational Science, School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA 19104, United States
- Anchita Batra: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Anni Pan: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Olivia Wang: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Ruowang Li: Department of Computational Biomedicine, Cedars-Sinai Medical Center, West Hollywood, CA, United States
- Arielle Marks-Anglin: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Yuchen Yang: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Xu Zuo: McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States
- Yulun Liu: Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States
- Jiang Bian: Department of Health Outcomes & Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL 32611, United States
- Stephen E Kimmel: Department of Epidemiology, College of Public Health & Health Professions and College of Medicine, University of Florida, Gainesville, FL 32610, United States
- Keith Hamilton: Department of Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States
- Adam Cuker: Department of Medicine and Department of Pathology & Laboratory Medicine, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Rebecca A Hubbard: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States
- Hua Xu: Section of Biomedical Informatics & Data Science, Yale School of Medicine, New Haven, CT 06510, United States
- Yong Chen: The Center for Health Analytics and Synthesis of Evidence (CHASE), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104, United States; Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, The University of Pennsylvania, Philadelphia, PA 19104, United States; The Graduate Group in Applied Mathematics and Computational Science, School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA 19104, United States; Leonard Davis Institute of Health Economics, Penn Medicine, Philadelphia, PA 19104, United States; Center for Evidence-based Practice (CEP), Philadelphia, PA 19104, United States; Penn Institute for Biomedical Informatics (IBI), Philadelphia, PA 19104, United States
3.
Davidson M, Evrenoglou T, Graña C, Chaimani A, Boutron I. Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials. BMC Med Res Methodol 2024; 24:9. PMID: 38212714; PMCID: PMC10782611; DOI: 10.1186/s12874-023-02136-8.
Abstract
BACKGROUND Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between the preprint and the subsequent journal article of COVID-19 randomized controlled trials. METHODS The study used data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles. Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers, with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article. RESULTS The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint-article RCTs were included in the analysis. The median (interquartile range) delay between preprint and journal article was 121 (73-187) days, and the median sample size was 150 (71-464) participants; 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as 'some concerns' for 80% of RCTs. We found that 81 preprint-article pairs of RCTs were consistent for all reported outcomes. Nine RCTs had at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the effect estimate. Furthermore, six RCTs had at least one outcome missing in the journal article, and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found. CONCLUSIONS Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added to and deleted from some journal articles.
Affiliation(s)
- Mauricia Davidson: Center for Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité and Université Sorbonne Paris Nord, INRAE, Inserm, Hôpital Hôtel-Dieu, 1 Place du Parvis Notre-Dame, Paris, F-75004, France
- Theodoros Evrenoglou: Center for Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité and Université Sorbonne Paris Nord, INRAE, Inserm, Hôpital Hôtel-Dieu, 1 Place du Parvis Notre-Dame, Paris, F-75004, France
- Carolina Graña: Center for Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité and Université Sorbonne Paris Nord, INRAE, Inserm, Hôpital Hôtel-Dieu, 1 Place du Parvis Notre-Dame, Paris, F-75004, France; Centre d'Epidémiologie Clinique, AP-HP, Hôpital Hôtel Dieu, Paris, F-75004, France; Cochrane France, Paris, France
- Anna Chaimani: Center for Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité and Université Sorbonne Paris Nord, INRAE, Inserm, Hôpital Hôtel-Dieu, 1 Place du Parvis Notre-Dame, Paris, F-75004, France; Cochrane France, Paris, France
- Isabelle Boutron: Center for Research in Epidemiology and Statistics (CRESS-U1153), Université Paris Cité and Université Sorbonne Paris Nord, INRAE, Inserm, Hôpital Hôtel-Dieu, 1 Place du Parvis Notre-Dame, Paris, F-75004, France; Centre d'Epidémiologie Clinique, AP-HP, Hôpital Hôtel Dieu, Paris, F-75004, France; Cochrane France, Paris, France
4.
Lipworth W, Kerridge I, Stewart C, Silva D, Upshur R. The Fragility of Scientific Rigour and Integrity in "Sped up Science": Research Misconduct, Bias, and Hype and in the COVID-19 Pandemic. J Bioeth Inq 2023; 20:607-616. PMID: 38064166; DOI: 10.1007/s11673-023-10289-w.
Abstract
During the early years of the COVID-19 pandemic, preclinical and clinical research were sped up and scaled up in both the public and private sectors and in partnerships between them. This resulted in some extraordinary advances, but it also raised a range of issues regarding the ethics, rigour, and integrity of scientific research, academic publication, and public communication. Many of the failures of scientific rigour and integrity that occurred during the pandemic were exacerbated by the rush to generate, disseminate, and implement research findings, which not only created opportunities for unscrupulous actors but also compromised the methodological, peer-review, and advisory processes that would usually identify sub-standard research and prevent compromised clinical or policy-level decisions. While it would be tempting to attribute these failures of science and its translation solely to the "unprecedented" circumstances of the COVID-19 pandemic, the reality is that they preceded the pandemic and will continue to arise once it is over. Existing strategies for promoting scientific rigour and integrity need to be made more rigorous, better integrated into research training and institutional cultures, and more sophisticated. They might also need to be modified or supplemented with other strategies that are fit for purpose not only in public health emergencies but in any research that is sped up and scaled up to address urgent unmet medical needs.
Affiliation(s)
- W Lipworth: Department of Philosophy, Macquarie University, Sydney, NSW, Australia
- I Kerridge: Department of Philosophy, Macquarie University, Sydney, NSW, Australia; Royal North Shore Hospital and Sydney Health Ethics, University of Sydney, Sydney, NSW, Australia
- C Stewart: Sydney Law School, University of Sydney, Sydney, NSW, Australia
- D Silva: Sydney Health Ethics, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- R Upshur: Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
5.
Héroux M, Diong J, Bye E, Fisher G, Robertson L, Butler A, Gandevia S. Poor statistical reporting, inadequate data presentation and spin persist despite Journal awareness and updated Information for Authors. F1000Res 2023; 12:1483. PMID: 38434651; PMCID: PMC10905014; DOI: 10.12688/f1000research.142841.1.
Abstract
Sound reporting of research results is fundamental to good science. Unfortunately, poor reporting is common and does not improve with editorial educational strategies. We investigated whether publicly highlighting poor reporting at a journal can lead to improved reporting practices. We also investigated whether reporting practices that are required or strongly encouraged in a journal's Information for Authors are enforced by journal editors and staff. A 2016 audit highlighted poor reporting practices in the Journal of Neurophysiology. In August 2016 and 2018, the American Physiological Society updated the Information for Authors, which included the introduction of several required or strongly encouraged reporting practices. We audited Journal of Neurophysiology papers published in 2019 and 2020 (downloaded through the library of the University of New South Wales) on reporting items selected from the 2016 audit, the newly introduced reporting practices, and items from previous audits. Summary statistics (means, counts) were used to summarize audit results. In total, 580 papers were audited. Compared to the 2016 audit, several reporting practices remained unchanged or worsened. For example, 60% of papers erroneously reported standard errors of the mean, 23% of papers included undefined measures of variability, 40% of papers failed to define a statistical threshold for their tests, and 64% of papers reporting p-values between 0.05 and 0.1 misinterpreted them as statistical trends. As for the newly introduced reporting practices, required practices were consistently adhered to by 34% to 37% of papers, while strongly encouraged practices were consistently adhered to by 9% to 26% of papers. Adherence to the other audited reporting practices was comparable to our previous audits. Publicly highlighting poor reporting practices did little to improve research reporting. Similarly, requiring or strongly encouraging reporting practices was only partly effective. Although the present audit focused on a single journal, this is likely not an isolated case. Stronger, more strategic measures are required to improve poor research reporting.
Affiliation(s)
- Martin Héroux: School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, 2052, Australia; Neuroscience Research Australia, Sydney, NSW, 2031, Australia
- Joanna Diong: Neuroscience Research Australia, Sydney, NSW, 2031, Australia; School of Biomedical Sciences, The University of Sydney, Sydney, New South Wales, 2006, Australia
- Elizabeth Bye: School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, 2052, Australia; Neuroscience Research Australia, Sydney, NSW, 2031, Australia
- Georgia Fisher: Neuroscience Research Australia, Sydney, NSW, 2031, Australia; Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, New South Wales, 2109, Australia
- Lucy Robertson: Neuroscience Research Australia, Sydney, NSW, 2031, Australia; School of Clinical Medicine, University of New South Wales, Sydney, New South Wales, 2031, Australia
- Annie Butler: School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, 2052, Australia; Neuroscience Research Australia, Sydney, NSW, 2031, Australia
- Simon Gandevia: Neuroscience Research Australia, Sydney, NSW, 2031, Australia; School of Clinical Medicine, University of New South Wales, Sydney, New South Wales, 2031, Australia
6.
Zannad F, Crea F, Keaney J, Spencer S, Hill JA, Pfeffer MA, Pocock S, Raderschadt E, Ross JS, Sacks CA, Van Spall HGC, Winslow R, Jessup M. Rapid, accurate publication and dissemination of clinical trial results: benefits and challenges. Eur Heart J 2023; 44:4220-4229. PMID: 37165687; DOI: 10.1093/eurheartj/ehad279.
Abstract
Large-scale clinical trials are essential in cardiology and require rapid, accurate publication and dissemination. Whereas conference presentations, press releases, and social media disseminate information quickly and often receive considerable coverage by mainstream and healthcare media, they lack detail, may emphasize selected data, and can be open to misinterpretation. Preprint servers speed access to research manuscripts awaiting acceptance for publication by a journal, but these articles are not formally peer-reviewed and sometimes overstate the findings. Publication of trial results in a major journal is very demanding, but the use of existing checklists can help accelerate the process. In case of rejection, procedures such as easing formatting requirements and possibly carrying peer review over to other journals could speed resubmission. Secondary publications can help maximize the benefits of clinical trials; publications of secondary endpoints and subgroup analyses further define treatment effects and the patient populations most likely to benefit. These rely on data access, and although data sharing is becoming more common, many challenges remain. Beyond publication in medical journals, there is a need for wider knowledge dissemination to maximize impact on clinical practice. This might be facilitated through plain-language summary publications. Social media, websites, mainstream news outlets, and other publications, although not peer-reviewed, are important sources of medical information for both the public and clinicians. This underscores the importance of ensuring that the information is understandable, accessible, balanced, and trustworthy. This report is based on discussions held in December 2021 at the 18th Global Cardiovascular Clinical Trialists meeting, involving a panel of editors of some of the top medical journals, as well as members of the lay press, industry, and clinical trialists.
Affiliation(s)
- Faiez Zannad: Université de Lorraine, INSERM, CIC 1439, Institut Lorrain du Coeur et des Vaisseaux, CHU 54500, Vandoeuvre-lès-Nancy, France
- Filippo Crea: Department of Cardiovascular and Pneumological Sciences, Catholic University of the Sacred Heart, Rome 00168, Italy
- John Keaney: Division of Cardiovascular Medicine, Heart and Vascular Center, Brigham and Women's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Joseph A Hill: Department of Internal Medicine and Department of Molecular Biology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Marc A Pfeffer: Cardiovascular Division, Brigham and Women's Hospital, and Harvard Medical School, Boston, MA 02115, USA
- Stuart Pocock: Department of Medical Statistics, London School of Hygiene & Tropical Medicine, London, WC1E 7HT, UK
- Emma Raderschadt: Global Medical Affairs, Boehringer Ingelheim, Siegburg, 55218, Germany
- Joseph S Ross: Department of Medicine, Yale School of Medicine, New Haven, 06510, USA
- Harriette G C Van Spall: Department of Medicine, and Department of Health Research Methods, Evidence, and Impact, McMaster University; Population Health Research Institute; Research Institute of St. Joseph's, Hamilton, ON L8N 4A6, Canada
7.
Oh Y, Jung YJ, Sujata P, Kim M, Yon DK, Lee SW, Cho K, Koyanagi A, Dai Z, Smith L, Shin JI, Kim E. Spin in randomized controlled trials of pharmacology in COVID-19: A systematic review. Account Res 2023:1-19. PMID: 37818630; DOI: 10.1080/08989621.2023.2269083.
Abstract
Spin, defined as the misrepresentation of a study's results, can undermine the validity of scientific findings. To explore the manifestation of spin and identify the factors affecting spin in COVID-19 RCTs, a systematic review was performed using PubMed/Medline, National Institutes of Health, EMBASE, Cochrane, and Web of Science. RCTs on pharmacotherapy for COVID-19 with nonsignificant primary outcomes published in 2020 were included. Twenty-one abstracts (33.9%) and 28 main texts (45.2%) were found to contain spin in at least one section. In the conclusion sections of the main texts, additional spin strategies embellishing the findings were found that did not appear in the abstracts. More factors influencing the level of spin were found in abstracts than in main texts, but the levels of spin in abstracts were mostly comparable to those in the main texts. Although the common factors affecting the manifestation of spin in both the main texts and the abstracts were sample size and type of journal, further research is required to determine multicollinearity between significant factors and the manifestation of spin.
Affiliation(s)
- Yunkyoung Oh: Data Science, Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul, South Korea
- Youn-Joo Jung: Data Science, Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul, South Korea
- Purja Sujata: Data Science, Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul, South Korea
- Minji Kim: Data Science, Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul, South Korea
- Dong Keon Yon: Centre for Digital Health, Medical Science Research Institute, Kyung Hee University Medical Centre, Seoul, Republic of Korea
- Seung Won Lee: Sungkyunkwan University School of Medicine, Suwon, Korea
- Kyuyeon Cho: Yonsei University College of Medicine, Seoul, Korea
- Ai Koyanagi: Research and Development Unit, Parc Sanitari Sant Joan de Déu, CIBERSAM, Barcelona, Spain; ICREA, Barcelona, Spain
- Zhaoli Dai: College of Medicine and Public Health, Flinders University, South Australia; School of Pharmacy, The University of Sydney, Sydney, Australia
- Lee Smith: Centre for Health, Performance, and Wellbeing, Anglia Ruskin University, Cambridge, UK
- Jae Il Shin: Department of Paediatrics, Yonsei University College of Medicine, Seoul, Korea
- Eunyoung Kim: Data Science, Evidence-Based and Clinical Research Laboratory, Department of Health, Social and Clinical Pharmacy, College of Pharmacy, Chung-Ang University, Seoul, South Korea; The Graduate School of Pharmaceutical Industry Management, Chung-Ang University, Seoul, South Korea
8.
Sommer I, Sunder-Plassmann V, Ratajczak P, Emprechtinger R, Dobrescu A, Griebler U, Gartlehner G. Full publication of preprint articles in prevention research: an analysis of publication proportions and results consistency. Sci Rep 2023; 13:17034. PMID: 37813909; PMCID: PMC10562443; DOI: 10.1038/s41598-023-44291-4.
Abstract
There is concern that preprint articles will increase the amount of scientifically invalid work available. The objectives of this study were to determine the proportion of prevention preprints published within 12 months, the consistency of effect estimates and conclusions between preprint and published articles, and the reasons for the nonpublication of preprints. Of the 329 prevention preprints that met our eligibility criteria, almost half (48.9%) were published in a peer-reviewed journal within 12 months of being posted. While 16.8% of published preprints showed some change in the magnitude of the primary outcome effect estimate, 4.4% were classified as having a major change. The style or wording of the conclusion changed in 42.2%, and the content in 3.1%. Preprints on chemoprevention, those with a cross-sectional design, and those with public and noncommercial funding had the highest probabilities of publication. The main reasons for the nonpublication of preprints were journal rejection or lack of time. The reliability of preprint articles for evidence-based decision-making is questionable: less than half of preprint articles on prevention research are published in a peer-reviewed journal within 12 months, and significant changes in effect sizes and/or conclusions are still possible during the peer-review process.
Affiliation(s)
- Isolde Sommer: Cochrane Austria, Department for Evidence-Based Medicine and Evaluation, Danube University Krems, Krems, Austria
- Vincent Sunder-Plassmann: Cochrane Austria, Department for Evidence-Based Medicine and Evaluation, Danube University Krems, Krems, Austria
- Piotr Ratajczak: Department of Pharmacoeconomics and Social Pharmacy, Poznań University of Medical Sciences, Poznań, Poland
- Andreea Dobrescu: Cochrane Austria, Department for Evidence-Based Medicine and Evaluation, Danube University Krems, Krems, Austria
- Ursula Griebler: Cochrane Austria, Department for Evidence-Based Medicine and Evaluation, Danube University Krems, Krems, Austria
- Gerald Gartlehner: Cochrane Austria, Department for Evidence-Based Medicine and Evaluation, Danube University Krems, Krems, Austria; RTI International, Research Triangle Park, NC, USA
9
Davidson M, Evrenoglou T, Graña C, Chaimani A, Boutron I. No evidence of important difference in summary treatment effects between COVID-19 preprints and peer-reviewed publications: a meta-epidemiological study. J Clin Epidemiol 2023; 162:90-97. [PMID: 37634703 DOI: 10.1016/j.jclinepi.2023.08.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 04/21/2023] [Revised: 08/02/2023] [Accepted: 08/21/2023] [Indexed: 08/29/2023]
Abstract
OBJECTIVES Preprints became a major source of research communication during the COVID-19 pandemic. We aimed to evaluate whether summary treatment effect estimates differ between preprint and peer-reviewed journal trials. STUDY DESIGN AND SETTING A meta-epidemiological study. Data were derived from the COVID-NMA living systematic review (covid-nma.com) up to July 20, 2022. We identified all meta-analyses evaluating pharmacological treatments vs. standard of care or placebo for patients with COVID-19 that included at least one preprint and one peer-reviewed journal article. Differences in effect estimates between preprint and peer-reviewed journal trials were estimated by the ratio of odds ratios (ROR); ROR <1 indicated larger effects in preprint trials. RESULTS Thirty-seven meta-analyses including 114 trials (44 preprints and 70 peer-reviewed publications) were selected. The median number of randomized controlled trials (RCTs) per meta-analysis was 2 (interquartile range [IQR], 2-4; maximum, 11), and the median sample size of RCTs was 199 (IQR, 99-478). Overall, there was no statistically significant difference in summary effect estimates between preprint and peer-reviewed journal trials (ROR, 0.88; 95% CI, 0.71-1.09; I2 = 17.8%; τ2 = 0.06). CONCLUSION We did not find an important difference between summary treatment effects of preprints and summary treatment effects of peer-reviewed publications. Systematic reviewers and guideline developers should assess preprint inclusion individually, accounting for risk of bias and completeness of reporting.
Affiliation(s)
- Mauricia Davidson
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Center for Research in Epidemiology and Statistics (CRESS), F-75004 Paris, France.
- Theodoros Evrenoglou
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Center for Research in Epidemiology and Statistics (CRESS), F-75004 Paris, France
- Carolina Graña
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Center for Research in Epidemiology and Statistics (CRESS), F-75004 Paris, France; Centre d'Epidémiologie Clinique, AP-HP, Hôpital Hôtel Dieu, F-75004 Paris, France; Cochrane France, Paris, France
- Anna Chaimani
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Center for Research in Epidemiology and Statistics (CRESS), F-75004 Paris, France; Cochrane France, Paris, France
- Isabelle Boutron
- Université Paris Cité and Université Sorbonne Paris Nord, Inserm, INRAE, Center for Research in Epidemiology and Statistics (CRESS), F-75004 Paris, France; Centre d'Epidémiologie Clinique, AP-HP, Hôpital Hôtel Dieu, F-75004 Paris, France; Cochrane France, Paris, France
10
Blatch-Jones AJ, Recio Saucedo A, Giddins B. The use and acceptability of preprints in health and social care settings: A scoping review. PLoS One 2023; 18:e0291627. [PMID: 37713422 PMCID: PMC10503772 DOI: 10.1371/journal.pone.0291627] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/27/2023] [Accepted: 09/04/2023] [Indexed: 09/17/2023]
Abstract
BACKGROUND Preprints are open and accessible scientific manuscripts or reports that are shared publicly, through a preprint server, before being submitted to a journal. The value and importance of preprints have grown since their contribution during the public health emergency of the COVID-19 pandemic. Funders and publishers are establishing their position on the use of preprints, in grant applications and publishing models. However, the evidence supporting the use and acceptability of preprints varies across funders, publishers, and researchers. This scoping review explored the current evidence on the use and acceptability of preprints in health and social care settings by publishers, funders, and the research community throughout the research lifecycle. METHODS A scoping review was undertaken with no study or language limits. The search strategy was limited to the last five years (2017-2022) to capture changes influenced by COVID-19 (e.g., accelerated use and role of preprints in research). The review included international literature, including grey literature, and two databases were searched: Scopus and Web of Science (24 August 2022). RESULTS 379 titles and abstracts and 193 full text articles were assessed for eligibility. Ninety-eight articles met eligibility criteria and were included for full extraction. For barriers and challenges, 26 statements were grouped under four main themes (e.g., volume/growth of publications, quality assurance/trustworthiness, risks associated with credibility, and validation). For benefits and value, 34 statements were grouped under six themes (e.g., openness/transparency, increased visibility/credibility, open review process, open research, democratic process/systems, increased productivity/opportunities). CONCLUSIONS Preprints provide opportunities for rapid dissemination but there is a need for clear policies and guidance from journals, publishers, and funders.
Cautionary measures are needed to maintain the quality and value of preprints, paying particular attention to how findings are translated to the public. More research is needed to address some of the uncertainties addressed in this review.
Affiliation(s)
- Amanda Jane Blatch-Jones
- National Institute for Health and Care Research (NIHR) Coordinating Centre, School of Healthcare Enterprise and Innovation, University of Southampton, Southampton, Hampshire, United Kingdom
- Alejandra Recio Saucedo
- National Institute for Health and Care Research (NIHR) Coordinating Centre, School of Healthcare Enterprise and Innovation, University of Southampton, Southampton, Hampshire, United Kingdom
- Beth Giddins
- National Institute for Health and Care Research (NIHR) Coordinating Centre, School of Healthcare Enterprise and Innovation, University of Southampton, Southampton, Hampshire, United Kingdom
11
Stoll M, Lindner S, Marquardt B, Salholz-Hillel M, DeVito NJ, Klemperer D, Lieb K. Completeness and consistency of primary outcome reporting in COVID-19 publications in the early pandemic phase: a descriptive study. BMC Med Res Methodol 2023; 23:173. [PMID: 37516878 PMCID: PMC10385884 DOI: 10.1186/s12874-023-01991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/30/2022] [Accepted: 07/13/2023] [Indexed: 07/31/2023]
Abstract
BACKGROUND The COVID-19 pandemic saw a steep increase in the number of rapidly published scientific studies, especially early in the pandemic. Some have suggested COVID-19 trial reporting is of lower quality than typical reports, but there is limited evidence for this in terms of primary outcome reporting. The objective of this study was to assess the prevalence of completely defined primary outcomes reported in registry entries, preprints, and journal articles, and to assess consistent primary outcome reporting between these sources. METHODS This is a descriptive study of a cohort of registered interventional clinical trials for the treatment and prevention of COVID-19, drawn from the DIssemination of REgistered COVID-19 Clinical Trials (DIRECCT) study dataset. The main outcomes are: 1) Prevalence of complete primary outcome reporting; 2) Prevalence of consistent primary outcome reporting between registry entry and preprint as well as registry entry and journal article pairs. RESULTS We analyzed 87 trials with 116 corresponding publications (87 registry entries, 53 preprints and 63 journal articles). All primary outcomes were completely defined in 47/87 (54%) registry entries, 31/53 (58%) preprints and 44/63 (70%) journal articles. All primary outcomes were consistently reported in 13/53 (25%) registry-preprint pairs and 27/63 (43%) registry-journal article pairs. No primary outcome was specified in 13/53 (25%) preprints and 8/63 (13%) journal articles. In this sample, complete primary outcome reporting occurred more frequently in trials with vs. without involvement of pharmaceutical companies (76% vs. 45%), and in RCTs vs. other study designs (68% vs. 49%). The same pattern was observed for consistent primary outcome reporting (with vs. without pharma: 56% vs. 12%, RCT vs. other: 43% vs. 22%). 
CONCLUSIONS In COVID-19 trials in the early phase of the pandemic, all primary outcomes were completely defined in 54%, 58%, and 70% of registry entries, preprints and journal articles, respectively. Only 25% of preprints and 43% of journal articles reported primary outcomes consistent with registry entries.
Affiliation(s)
- Marlene Stoll
- Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany.
- Leibniz Institute for Resilience Research (LIR), Mainz, Germany.
- Saskia Lindner
- Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Bernd Marquardt
- Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Maia Salholz-Hillel
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- Nicholas J DeVito
- Bennett Institute for Applied Data Science, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- David Klemperer
- Ostbayerische Technische Hochschule Regensburg, Regensburg, Germany
- Klaus Lieb
- Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Leibniz Institute for Resilience Research (LIR), Mainz, Germany
12
Dobolyi K, Sieniawski GP, Dobolyi D, Goldfrank J, Hampel-Arias Z. Hindsight2020: Characterizing Uncertainty in the COVID-19 Scientific Literature. Disaster Med Public Health Prep 2023; 17:e437. [PMID: 37489527 DOI: 10.1017/dmp.2023.82] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 07/26/2023]
Abstract
Following emerging, re-emerging, and endemic pathogen outbreaks, the rush to publish and the risk of data misrepresentation, misinterpretation, and even misinformation put an even greater onus on methodological rigor, which includes revisiting initial assumptions as new evidence becomes available. This study sought to understand how and when early evidence emerges and evolves when addressing different types of recurring pathogen-related questions. By applying claim-matching by means of deep-learning natural language processing (NLP) of coronavirus disease 2019 (COVID-19) scientific literature against a set of expert-curated evidence, patterns in timing across different COVID-19 questions and answers were identified, to build a framework for characterizing uncertainty in emerging infectious disease (EID) research over time. COVID-19 was chosen as a use case for this framework given the large and accessible datasets curated for scientists during the beginning of the pandemic. Timing patterns in reliably answering broad COVID-19 questions often do not align with general publication patterns, but early expert-curated evidence was generally stable. Because instability in answers often occurred within the first 2 to 6 months for specific COVID-19 topics, public health officials could apply more conservative policies at the start of future pandemics, to be revised as evidence stabilizes.
Affiliation(s)
- Kinga Dobolyi
- George Washington University, Department of Computer Science, Washington, DC, USA
- Joseph Goldfrank
- George Washington University, Department of Computer Science, Washington, DC, USA
13
Paul M. SPINning in infectious diseases. Clin Microbiol Infect 2023:S1198-743X(23)00197-0. [PMID: 37116862 DOI: 10.1016/j.cmi.2023.04.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/24/2023] [Revised: 04/22/2023] [Accepted: 04/22/2023] [Indexed: 04/30/2023]
Affiliation(s)
- Mical Paul
- Rambam Health Care Campus and the Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel.
14
Bai AD, Jiang Y, Nguyen DL, Lo CKL, Stefanova I, Guo K, Wang F, Zhang C, Sayeau K, Garg A, Loeb M. Comparison of Preprint Postings of Randomized Clinical Trials on COVID-19 and Corresponding Published Journal Articles: A Systematic Review. JAMA Netw Open 2023; 6:e2253301. [PMID: 36705921 DOI: 10.1001/jamanetworkopen.2022.53301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/28/2023]
Abstract
IMPORTANCE Randomized clinical trials (RCTs) on COVID-19 are increasingly being posted as preprints before publication in a scientific, peer-reviewed journal. OBJECTIVE To assess time to journal publication for COVID-19 RCT preprints and to compare differences between pairs of preprints and corresponding journal articles. EVIDENCE REVIEW This systematic review used a meta-epidemiologic approach to conduct a literature search using the World Health Organization COVID-19 database and Embase to identify preprints published between January 1 and December 31, 2021. This review included RCTs with human participants and research questions regarding the treatment or prevention of COVID-19. For each preprint, a literature search was done to locate the corresponding journal article. Two independent reviewers read the full text, extracted data, and assessed risk of bias using the Cochrane Risk of Bias 2 tool. Time to publication was analyzed using a Cox proportional hazards regression model. Differences between preprint and journal article pairs in terms of outcomes, analyses, results, or conclusions were described. Statistical analysis was performed on October 17, 2022. FINDINGS This study included 152 preprints. As of October 1, 2022, 119 of 152 preprints (78.3%) had been published in journals. The median time to publication was 186 days (range, 17-407 days). In a multivariable model, larger sample size and low risk of bias were associated with journal publication. With a sample size of less than 200 as the reference, sample sizes of 201 to 1000 and greater than 1000 had hazard ratios (HRs) of 1.23 (95% CI, 0.80-1.91) and 2.19 (95% CI, 1.36-3.53) for publication, respectively. With high risk of bias as the reference, medium-risk articles with some concerns for bias had an HR of 1.77 (95% CI, 1.02-3.09); those with a low risk of bias had an HR of 3.01 (95% CI, 1.71-5.30). 
Of the 119 published preprints, there were differences in terms of outcomes, analyses, results, or conclusions in 65 studies (54.6%). The main conclusion in the preprint contradicted the conclusion in the journal article for 2 studies (1.7%). CONCLUSIONS AND RELEVANCE These findings suggest that there is a substantial time lag from preprint posting to journal publication. Preprints with smaller sample sizes and high risk of bias were less likely to be published. Finally, although differences in terms of outcomes, analyses, results, or conclusions were observed for preprint and journal article pairs in most studies, the main conclusion remained consistent for the majority of studies.
Affiliation(s)
- Anthony D Bai
- Division of Infectious Diseases, Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Yunbo Jiang
- Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- David L Nguyen
- Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Carson K L Lo
- Division of Infectious Diseases, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Kevin Guo
- Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Frank Wang
- Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Cindy Zhang
- Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Kyle Sayeau
- Mental Health and Addictions Care Program, Kingston Health Sciences Centre, Kingston, Ontario, Canada
- Akhil Garg
- Department of Medicine, Queen's University, Kingston, Ontario, Canada
- Mark Loeb
- Division of Infectious Diseases, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Division of Medical Microbiology, Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
15
Delardas O, Giannos P. How COVID-19 Affected the Journal Impact Factor of High Impact Medical Journals: Bibliometric Analysis. J Med Internet Res 2022; 24:e43089. [PMID: 36454727 PMCID: PMC9778719 DOI: 10.2196/43089] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Received: 09/29/2022] [Revised: 11/21/2022] [Accepted: 11/30/2022] [Indexed: 12/05/2022]
Abstract
BACKGROUND Journal impact factor (IF) is the leading method of scholarly assessment in today's research world, influencing where scholars submit their research and where funders distribute their resources. COVID-19, one of the most serious health crises, resulted in an unprecedented surge of publications across all areas of knowledge. An important question is whether COVID-19 affected the gold standard of scholarly assessment. OBJECTIVE In this paper, we aimed to comprehensively compare the productivity trends of COVID-19 and non-COVID-19 literature as well as track their evolution and scholarly impact across 3 consecutive calendar years. METHODS We took as an example 6 high-impact medical journals (Annals of Internal Medicine [Annals], The British Medical Journal [The BMJ], Journal of the American Medical Association [JAMA], The Lancet, Nature Medicine [NatMed], and The New England Journal of Medicine [NEJM]) and searched the literature using the Web of Science database for manuscripts published between January 1, 2019, and December 31, 2021. To assess the scholarly impact of COVID-19 and non-COVID-19 literature, we calculated their annual IFs and percentage changes. Thereafter, we estimated the citation probability of COVID-19 and non-COVID-19 publications along with their rates of publication and citation by journal. RESULTS A significant increase in IF change was seen for COVID-19 manuscripts, relative to non-COVID-19 ones, from 2019 to 2020 (P=.002; Annals: 283%; The BMJ: 199%; JAMA: 208%; The Lancet: 392%; NatMed: 111%; and NEJM: 196%) and to 2021 (P=.007; Annals: 41%; The BMJ: 90%; JAMA: 6%; The Lancet: 22%; NatMed: 53%; and NEJM: 72%). The likelihood of highly cited publications was significantly increased for COVID-19 manuscripts between 2019 and 2021 (Annals: z=3.4, P<.001; The BMJ: z=4.0, P<.001; JAMA: z=3.8, P<.001; The Lancet: z=3.5, P<.001; NatMed: z=5.2, P<.001; and NEJM: z=4.7, P<.001).
The publication and citation rates of COVID-19 publications followed a positive trajectory, as opposed to those of non-COVID-19 publications. The citation rate for COVID-19 publications peaked by the second quarter of 2020, while the publication rate peaked approximately a year later. CONCLUSIONS The rapid surge of COVID-19 publications emphasized the capacity of scientific communities to respond to a global health emergency, yet inflated IFs create ambiguity in their use as benchmark tools for assessing scholarly impact. The immediate implication is a loss in the value of and trust in journal IFs as metrics of research and scientific rigor perceived by academia and society. Loss of confidence in procedures employed by highly reputable publishers may incentivize authors to exploit the publication process by monopolizing their research on COVID-19 and encourage them to publish in journals with predatory behavior.
Affiliation(s)
- Orestis Delardas
- Promotion of Emerging and Evaluative Research Society, London, United Kingdom
- Panagiotis Giannos
- Promotion of Emerging and Evaluative Research Society, London, United Kingdom
- Department of Life Sciences, Faculty of Natural Sciences, Imperial College London, London, United Kingdom
16
Janda G, Khetpal V, Shi X, Ross JS, Wallach JD. Comparison of Clinical Study Results Reported in medRxiv Preprints vs Peer-reviewed Journal Articles. JAMA Netw Open 2022; 5:e2245847. [PMID: 36484989 PMCID: PMC9856222 DOI: 10.1001/jamanetworkopen.2022.45847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/13/2022]
Abstract
IMPORTANCE Preprints have been widely adopted to enhance the timely dissemination of research across many scientific fields. Concerns remain that early, public access to preliminary medical research has the potential to propagate misleading or faulty research that has been conducted or interpreted in error. OBJECTIVE To evaluate the concordance among study characteristics, results, and interpretations described in preprints of clinical studies posted to medRxiv that are subsequently published in peer-reviewed journals (preprint-journal article pairs). DESIGN, SETTING, AND PARTICIPANTS This cross-sectional study assessed all preprints describing clinical studies that were initially posted to medRxiv in September 2020 and subsequently published in a peer-reviewed journal as of September 15, 2022. MAIN OUTCOMES AND MEASURES For preprint-journal article pairs describing clinical trials, observational studies, and meta-analyses that measured health-related outcomes, the sample size, primary end points, corresponding results, and overarching conclusions were abstracted and compared. Sample size and results from primary end points were considered concordant if they had exact numerical equivalence. RESULTS Among 1399 preprints first posted on medRxiv in September 2020, a total of 1077 (77.0%) had been published as of September 15, 2022, a median of 6 months (IQR, 3-8 months) after preprint posting. Of the 547 preprint-journal article pairs describing clinical trials, observational studies, or meta-analyses, 293 (53.6%) were related to COVID-19. Of the 535 pairs reporting sample sizes in both sources, 462 (86.4%) were concordant; 43 (58.9%) of the 73 pairs with discordant sample sizes had larger samples in the journal publication. There were 534 pairs (97.6%) with concordant and 13 pairs (2.4%) with discordant primary end points. 
Of the 535 pairs with numerical results for the primary end points, 434 (81.1%) had concordant primary end point results; 66 of the 101 discordant pairs (65.3%) had effect estimates that were in the same direction and were statistically consistent. Overall, 526 pairs (96.2%) had concordant study interpretations, including 82 of the 101 pairs (81.2%) with discordant primary end point results. CONCLUSIONS AND RELEVANCE Most clinical studies posted as preprints on medRxiv and subsequently published in peer-reviewed journals had concordant study characteristics, results, and final interpretations. With more than three-fourths of preprints published in journals within 24 months, these results may suggest that many preprints report findings that are consistent with the final peer-reviewed publications.
Affiliation(s)
- Vishal Khetpal
- Department of Medicine, Warren Alpert Medical School of Brown University, Providence, Rhode Island
- Xiaoting Shi
- Department of Environmental Health Sciences, Yale School of Public Health, New Haven, Connecticut
- Joseph S. Ross
- Section of General Medicine and the National Clinician Scholars Program, Department of Internal Medicine, Yale School of Medicine, New Haven, Connecticut
- Center for Outcomes Research and Evaluation, Yale–New Haven Health System, New Haven, Connecticut
- Department of Health Policy and Management, Yale School of Public Health, New Haven, Connecticut
- Joshua D. Wallach
- Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, Georgia
17
Nelson L, Ye H, Schwenn A, Lee S, Arabi S, Hutchins BI. Robustness of evidence reported in preprints during peer review. Lancet Glob Health 2022; 10:e1684-e1687. [PMID: 36240832 PMCID: PMC9553196 DOI: 10.1016/s2214-109x(22)00368-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/19/2022] [Revised: 08/02/2022] [Accepted: 08/10/2022] [Indexed: 11/07/2022]
Abstract
Scientists have expressed concern that the risk of flawed decision making is increased through the use of preprint data that might change after undergoing peer review. This Health Policy paper assesses how COVID-19 evidence presented in preprints changes after review. We quantified attrition dynamics of more than 1000 epidemiological estimates first reported in 100 preprints matched to their subsequent peer-reviewed journal publications. Point estimate values changed an average of 6% during review; the correlation between estimate values before and after review was high (0·99) and there was no systematic trend. Expert peer-review scores of preprint quality were not related to eventual publication in a peer-reviewed journal. Uncertainty was reduced during peer review, with CIs narrowing by 7% on average. These results support the use of preprints, a component of biomedical research literature, in decision making. These results can also help inform the use of preprints during the ongoing COVID-19 pandemic and future disease outbreaks.
Affiliation(s)
- Lindsay Nelson
- Information School, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA
- Honghan Ye
- Department of Statistics, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA
- Anna Schwenn
- Information School, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA
- Shinhyo Lee
- Information School, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA
- Salsabil Arabi
- Information School, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA
- B Ian Hutchins
- Information School, School of Computer, Data and Information Sciences, College of Letters and Science, University of Wisconsin-Madison, Madison, WI, USA.
18
Zeraatkar D, Pitre T, Leung G, Cusano E, Agarwal A, Khalid F, Escamilla Z, Cooper MA, Ghadimi M, Wang Y, Verdugo-Paiva F, Rada G, Kum E, Qasim A, Bartoszko JJ, Siemieniuk RAC, Patel C, Guyatt G, Brignardello-Petersen R. Consistency of covid-19 trial preprints with published reports and impact for decision making: retrospective review. BMJ Medicine 2022; 1:e000309. [PMID: 36936583 PMCID: PMC9951374 DOI: 10.1136/bmjmed-2022-000309] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 07/05/2022] [Accepted: 08/30/2022] [Indexed: 12/04/2022]
Abstract
Objective To assess the trustworthiness (ie, complete and consistent reporting of key methods and results between preprint and published trial reports) and impact (ie, effects of preprints on meta-analytic estimates and the certainty of evidence) of preprint trial reports during the covid-19 pandemic. Design Retrospective review. Data sources World Health Organization covid-19 database and the Living Overview of the Evidence (L-OVE) covid-19 platform by the Epistemonikos Foundation (up to 3 August 2021). Main outcome measures Comparison of characteristics of covid-19 trials with and without preprints, estimates of time to publication of covid-19 preprints, and description of differences in reporting of key methods and results between preprints and their later publications. For the effects of eight treatments on mortality and mechanical ventilation, the study comprised meta-analyses including preprints and excluding preprints at one, three, and six months after the first trial addressing the treatment became available either as a preprint or publication (120 meta-analyses in total, 60 of which included preprints and 60 of which excluded preprints) and assessed the certainty of evidence using the GRADE framework. Results Of 356 trials included in the study, 101 were only available as preprints, 181 as journal publications, and 74 as preprints first and subsequently published in journals. The median time to publication of preprints was about six months. Key methods and results showed few important differences between trial preprints and their subsequent published reports. Apart from two (3.3%) of 60 comparisons, point estimates were consistent between meta-analyses including preprints versus those excluding preprints as to whether they indicated benefit, no appreciable effect, or harm. 
For nine (15%) of 60 comparisons, the rating of the certainty of evidence was different when preprints were included versus excluded: the certainty of evidence including preprints was higher in four comparisons and lower in five comparisons. Conclusion No compelling evidence indicates that preprints provide results that are inconsistent with published papers. Preprints remain the only source of findings of many trials for several months, an unsuitable length of time in a health emergency that is not conducive to treating patients with timely evidence. The inclusion of preprints could affect the results of meta-analyses and the certainty of evidence. Evidence users should be encouraged to consider data from preprints.
Affiliation(s)
- Dena Zeraatkar
- Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
- Ellen Cusano
- Internal Medicine Residency Program, University of Calgary Cumming School of Medicine, Calgary, AB, Canada
- Arnav Agarwal
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Matthew Adam Cooper
- Department of Medicine, University of Alberta Faculty of Medicine and Dentistry, Edmonton, AB, Canada
- Ying Wang
- Department of Pharmacy, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Francisca Verdugo-Paiva
- Epistemonikos Foundation, Santiago, Chile
- UC Evidence Centre, Cochrane Chile Associated Centre, Pontificia Universidad Católica de Chile, Santiago, Chile
- Elena Kum
- Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
- Anila Qasim
- Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
- Chirag Patel
- Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Romina Brignardello-Petersen
- Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
- Faculty of Dentistry, University of Chile, Santiago, Chile
19
Kapp P, Esmail L, Ghosn L, Ravaud P, Boutron I. Transparency and reporting characteristics of COVID-19 randomized controlled trials. BMC Med 2022; 20:363. [PMID: 36154932 PMCID: PMC9510360 DOI: 10.1186/s12916-022-02567-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/19/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND In the context of the COVID-19 pandemic, randomized controlled trials (RCTs) are essential to support clinical decision-making. We aimed (1) to assess and compare the reporting characteristics of RCTs between preprints and peer-reviewed publications and (2) to assess whether reporting improves after the peer review process for all preprints subsequently published in peer-reviewed journals. METHODS We searched the Cochrane COVID-19 Study Register and L·OVE COVID-19 platform to identify all reports of RCTs assessing pharmacological treatments of COVID-19, up to May 2021. We extracted indicators of transparency (e.g., trial registration, data sharing intentions) and assessed the completeness of reporting (i.e., some important CONSORT items, conflict of interest, ethical approval) using a standardized data extraction form. We also identified paired reports published in preprint and peer-reviewed publications. RESULTS We identified 251 trial reports: 121 (48%) were first published in peer-reviewed journals, and 130 (52%) were first published as preprints. Transparency was poor. About half of trials were prospectively registered (n = 140, 56%); 38% (n = 95) made their full protocols available, and 29% (n = 72) provided access to their statistical analysis plan report. A data sharing statement was reported in 68% (n = 170) of the reports, of which 91% stated their willingness to share. Completeness of reporting was low: only 32% (n = 81) of trials completely defined the pre-specified primary outcome measures; 57% (n = 143) reported the process of allocation concealment. Overall, 51% (n = 127) adequately reported the results for the primary outcomes, while only 14% (n = 36) of trials adequately described harms. Primary outcome(s) reported in trial registries and published reports were inconsistent in 49% (n = 104) of trials; of them, only 15% (n = 16) disclosed outcome switching in the report.
There were no major differences between preprints and peer-reviewed publications. Of the 130 RCTs published as preprints, 78 were subsequently published in a peer-reviewed journal. There was no major improvement after the journal peer review process for most items. CONCLUSIONS Transparency, completeness, and consistency of reporting of COVID-19 clinical trials were insufficient both in preprints and peer-reviewed publications. A comparison of paired reports published in preprint and peer-reviewed publication did not indicate major improvement.
Affiliation(s)
- Philipp Kapp
- Université Paris Cité, Inserm, INRAE, Centre of Research in Epidemiology and Statistics (CRESS), F-75004, Paris, France
- Centre d'Épidémiologie Clinique, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Cochrane France, F-75004, Paris, France
- Institute for Evidence in Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, D-79110, Freiburg, Germany
- Laura Esmail
- Université Paris Cité, Inserm, INRAE, Centre of Research in Epidemiology and Statistics (CRESS), F-75004, Paris, France
- Centre d'Épidémiologie Clinique, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Cochrane France, F-75004, Paris, France
- Lina Ghosn
- Université Paris Cité, Inserm, INRAE, Centre of Research in Epidemiology and Statistics (CRESS), F-75004, Paris, France
- Centre d'Épidémiologie Clinique, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Cochrane France, F-75004, Paris, France
- Philippe Ravaud
- Université Paris Cité, Inserm, INRAE, Centre of Research in Epidemiology and Statistics (CRESS), F-75004, Paris, France
- Centre d'Épidémiologie Clinique, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Cochrane France, F-75004, Paris, France
- Isabelle Boutron
- Université Paris Cité, Inserm, INRAE, Centre of Research in Epidemiology and Statistics (CRESS), F-75004, Paris, France
- Centre d'Épidémiologie Clinique, AP-HP, Hôpital Hôtel-Dieu, F-75004, Paris, France
- Cochrane France, F-75004, Paris, France
20
Gehanno JF, Grosjean J, Darmoni SJ, Rollin L. Reliability of citations of medRxiv preprints in articles published on COVID-19 in the world leading medical journals. PLoS One 2022; 17:e0264661. [PMID: 35947594 PMCID: PMC9365132 DOI: 10.1371/journal.pone.0264661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 07/25/2022] [Indexed: 11/18/2022] Open
Abstract
Introduction Preprints have been widely cited during the COVID-19 pandemic, even in the major medical journals. However, since subsequent publication of a preprint is not always mentioned in preprint repositories, some may be inappropriately cited or quoted. Our objectives were to assess the reliability of preprint citations in articles on COVID-19, to assess the rate of publication of preprints cited in these articles, and to compare, if relevant, the content of the preprints to their published version. Methods Articles published on COVID-19 in 2020 in The BMJ, The Lancet, JAMA, and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google, and Google Scholar to assess if and when the preprint had been published in a peer-reviewed journal. Published articles were screened to assess if the title, data, or conclusions were identical to the preprint version. Results Among the 205 research articles on COVID-19 published by the four major medical journals in 2020, 60 (29.3%) cited at least one medRxiv preprint. Among the 182 preprints cited, 124 were published in a peer-reviewed journal, 51 (41.1%) before the citing article was published online and 73 (58.9%) later. There were differences in the title, the data, or the conclusion between the preprint cited and the published version for nearly half of them. medRxiv did not mention the journal publication for 53 (42.7%) of these preprints. Conclusions More than a quarter of preprint citations were inappropriate, since the preprints were in fact already published at the time of publication of the citing article, often with different content. Authors and editors should check the accuracy of the citations and of the quotations of preprints before publishing manuscripts that cite them.
Affiliation(s)
- Jean-Francois Gehanno
- Department of Occupational Medicine, Rouen University Hospital, Rouen, France
- Inserm, Rouen University, Sorbonne University, University of Paris 13, Laboratory of Medical Informatics and Knowledge Engineering in e-Health, LIMICS, Paris, France
- Julien Grosjean
- Inserm, Rouen University, Sorbonne University, University of Paris 13, Laboratory of Medical Informatics and Knowledge Engineering in e-Health, LIMICS, Paris, France
- Department of Biomedical Informatics, Rouen University Hospital, Rouen, France
- Stefan J. Darmoni
- Inserm, Rouen University, Sorbonne University, University of Paris 13, Laboratory of Medical Informatics and Knowledge Engineering in e-Health, LIMICS, Paris, France
- Department of Biomedical Informatics, Rouen University Hospital, Rouen, France
- Laetitia Rollin
- Department of Occupational Medicine, Rouen University Hospital, Rouen, France
- Inserm, Rouen University, Sorbonne University, University of Paris 13, Laboratory of Medical Informatics and Knowledge Engineering in e-Health, LIMICS, Paris, France
21
Collins A, Alexander R. Reproducibility of COVID-19 pre-prints. Scientometrics 2022; 127:4655-4673. [PMID: 35813409 PMCID: PMC9252536 DOI: 10.1007/s11192-022-04418-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Accepted: 05/20/2022] [Indexed: 01/24/2023]
Abstract
To examine the reproducibility of COVID-19 research, we create a dataset of pre-prints posted to arXiv, bioRxiv, and medRxiv between 28 January 2020 and 30 June 2021 that are related to COVID-19. We extract the text from these pre-prints and parse them looking for keyword markers signaling the availability of the data and code underpinning the pre-print. For the pre-prints that are in our sample, we are unable to find markers of either open data or open code for 75% of those on arXiv, 67% of those on bioRxiv, and 79% of those on medRxiv.
22
Itani D, Lababidi G, Itani R, El Ghoul T, Hamade L, Hijazi ARA, Khabsa J, Akl EA. Reporting of funding and conflicts of interest improved from preprints to peer-reviewed publications of biomedical research. J Clin Epidemiol 2022; 149:146-153. [PMID: 35738307 DOI: 10.1016/j.jclinepi.2022.06.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 05/31/2022] [Accepted: 06/14/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVES To assess changes in the reporting of funding and conflicts of interest (COI) in biomedical research between preprint server publications and their corresponding versions in peer-reviewed journals. STUDY DESIGN We selected preprint servers publishing exclusively biomedical research. From these, we screened articles by order of publication date and identified 200 preprints first published in 2020 with subsequent versions in peer-reviewed journals. We judged eligibility and extracted data about authorship, funding, and COI in duplicate and independently. We performed descriptive statistics. RESULTS A quarter of the studies added at least one author to the peer-reviewed version. Most studies reported funding in both versions (87%), and a quarter of these added at least one funder to the peer-reviewed version. Eighteen studies (9%) reported funding only in the peer-reviewed version. A majority of studies reported COI in both versions (69%) and 5% of these had authors reporting more COI in the peer-reviewed version. A minority of studies (23%) reported COI only in the peer-reviewed version. None of the studies justified any changes in authorship, funding, or COI. CONCLUSION Reporting of funding and COI improved in peer-reviewed versions. However, substantive percentages of studies added authors, funders, and COI disclosures in their peer-reviewed versions.
Affiliation(s)
- Dima Itani
- Faculty of Arts and Sciences, American University of Beirut, Beirut, Lebanon
- Ghena Lababidi
- Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Rola Itani
- Faculty of Medicine, American University of Beirut, Beirut, Lebanon
- Tala El Ghoul
- Faculty of Health Sciences, American University of Beirut, Beirut, Lebanon
- Lama Hamade
- Faculty of Arts and Sciences, American University of Beirut, Beirut, Lebanon
- Ayat R A Hijazi
- Department of Family Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Joanne Khabsa
- Clinical Research Institute, American University of Beirut Medical Center, Beirut, Lebanon
- Elie A Akl
- Department of Internal Medicine, American University of Beirut, Beirut, Lebanon; Department of Health Research Methods, Evidence, and Impact (HEI), McMaster University, Hamilton, ON, Canada
23
Izcovich A, Ragusa MA, Tortosa F, Marzio MAL, Agnoletti C, Bengolea A, Ceirano A, Espinosa F, Saavedra E, Sanguine V, Tassara A, Cid C, Catalano HN, Agarwal A, Foroutan F, Rada G. Correction: Prognostic factors for severity and mortality in patients infected with COVID-19: A systematic review. PLoS One 2022; 17:e0269291. [PMID: 35617192 PMCID: PMC9135219 DOI: 10.1371/journal.pone.0269291] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022] Open
Abstract
[This corrects the article DOI: 10.1371/journal.pone.0241955.].
24
Brierley L, Nanni F, Polka JK, Dey G, Pálfy M, Fraser N, Coates JA. Tracking changes between preprint posting and journal publication during a pandemic. PLoS Biol 2022; 20:e3001285. [PMID: 35104285 PMCID: PMC8806067 DOI: 10.1371/journal.pbio.3001285] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 10/28/2021] [Indexed: 12/20/2022] Open
Abstract
Amid the Coronavirus Disease 2019 (COVID-19) pandemic, preprints in the biomedical sciences are being posted and accessed at unprecedented rates, drawing widespread attention from the general public, press, and policymakers for the first time. This phenomenon has sharpened long-standing questions about the reliability of information shared prior to journal peer review. Does the information shared in preprints typically withstand the scrutiny of peer review, or are conclusions likely to change in the version of record? We assessed preprints from bioRxiv and medRxiv that had been posted and subsequently published in a journal through April 30, 2020, representing the initial phase of the pandemic response. We utilised a combination of automatic and manual annotations to quantify how an article changed between the preprinted and published version. We found that the total number of figure panels and tables changed little between preprint and published articles. Moreover, the conclusions of 7.2% of non-COVID-19-related and 17.2% of COVID-19-related abstracts undergo a discrete change by the time of publication, but the majority of these changes do not qualitatively change the conclusions of the paper.
Affiliation(s)
- Liam Brierley
- Department of Health Data Science, University of Liverpool, Liverpool, United Kingdom
- Gautam Dey
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Máté Pálfy
- The Company of Biologists, Histon, Cambridge, United Kingdom
- Jonathon Alexis Coates
- William Harvey Research Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
25
Misra DP, Ravindran V. Preprint publications: waste in haste or pragmatic progress? J R Coll Physicians Edinb 2021; 51:324-326. [PMID: 34882126 DOI: 10.4997/jrcpe.2021.401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Affiliation(s)
- Durga Prasanna Misra
- Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences (SGPGIMS), Lucknow, India