1. Andaur Navarro CL, Damen JAA, Ghannad M, Dhiman P, van Smeden M, Reitsma JB, Collins GS, Riley RD, Moons KGM, Hooft L. SPIN-PM: a consensus framework to evaluate the presence of spin in studies on prediction models. J Clin Epidemiol 2024;170:111364. [PMID: 38631529] [DOI: 10.1016/j.jclinepi.2024.111364]
Abstract
OBJECTIVES To develop a framework to identify and evaluate spin practices and their facilitators in studies on clinical prediction models, regardless of the modeling technique. STUDY DESIGN AND SETTING We followed a three-phase consensus process: (1) a premeeting literature review to generate items for inclusion; (2) a series of structured meetings in which a panel of experienced researchers discussed comments and exchanged viewpoints on the items to be included; and (3) a postmeeting review of the final list of items and examples. Through this iterative consensus process, a framework was derived once all panel members agreed. RESULTS The consensus process involved a panel of eight researchers and resulted in SPIN-Prediction Models, which consists of two categories of spin (misleading interpretation and misleading transportability) and, within these categories, two forms of spin (spin practices and facilitators of spin). We provide criteria and examples. CONCLUSION We propose this guidance to facilitate not only accurate reporting but also accurate interpretation and extrapolation of clinical prediction models, which will likely improve the reporting quality of subsequent research and reduce research waste.
Affiliation(s)
- Constanza L Andaur Navarro
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
- Johanna A A Damen
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Mona Ghannad
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Paula Dhiman
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Johannes B Reitsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Richard D Riley
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham B15 2TT, UK
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Lotty Hooft
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands; Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
2. Armond ACV, Cobey KD, Moher D. Key concepts in clinical epidemiology: research integrity definitions and challenges. J Clin Epidemiol 2024;171:111367. [PMID: 38642717] [DOI: 10.1016/j.jclinepi.2024.111367]
Abstract
Research integrity is guided by a set of principles to ensure research reliability and rigor. It serves as a pillar to uphold society's trust in science and foster scientific progress. However, over the past two decades, a surge in research integrity concerns, including fraudulent research, reproducibility challenges, and questionable practices, has raised critical questions about the reliability of scientific outputs, particularly in biomedical research. In the biomedical sciences, any breaches in research integrity could potentially lead to a domino effect impacting patient care, medical interventions, and the broader implementation of healthcare policies. Addressing these breaches requires measures such as rigorous research methods, transparent reporting, and changing the research culture. Institutional support through clear guidelines, robust training, and mentorship is crucial to fostering a culture of research integrity. However, structural and institutional factors, including research incentives and recognition systems, play an important role in research behavior. Therefore, promoting research integrity demands a collective effort from all stakeholders to maintain public trust in the scientific community and ensure the reliability of science. Here we discuss definitions and principles and their implications for the biomedical sciences, and propose actionable steps to foster research integrity.
Affiliation(s)
- Anna Catharina V Armond
- Metaresearch and Open Science Program, University of Ottawa Heart Institute, Ottawa, Canada.
- Kelly D Cobey
- Metaresearch and Open Science Program, University of Ottawa Heart Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
- David Moher
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Centre for Journalology, Ottawa Hospital Research Institute, Ottawa, Canada
3. Marcu GM, Dumbravă A, Băcilă IC, Szekely-Copîndean RD, Zăgrean AM. Increasing Value and Reducing Waste of Research on Neurofeedback Effects in Post-traumatic Stress Disorder: A State-of-the-Art Review. Appl Psychophysiol Biofeedback 2024;49:23-45. [PMID: 38151684] [DOI: 10.1007/s10484-023-09610-5]
Abstract
Post-Traumatic Stress Disorder (PTSD) is often considered challenging to treat due to factors that contribute to its complexity. In the last decade, more attention has been paid to non-pharmacological or non-psychological therapies for PTSD, including neurofeedback (NFB). NFB is a promising non-invasive technique targeting specific brainwave patterns associated with psychiatric symptomatology. By learning to regulate brain activity in a closed-loop paradigm, individuals can improve their functionality while reducing symptom severity. However, owing to its lax regulation and heterogeneous legal status across different countries, the degree to which it has scientific support as a psychiatric treatment remains controversial. In this state-of-the-art review, we searched PubMed, Cochrane Central, Web of Science, Scopus, and MEDLINE and identified meta-analyses and systematic reviews exploring the efficacy of NFB for PTSD. We included seven systematic reviews, of which three included meta-analyses (32 studies and 669 participants) that targeted NFB as an intervention while addressing a single condition: PTSD. We used A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 and the criteria described by Cristea and Naudet (Behav Res Therapy 123:103479, 2019, https://doi.org/10.1016/j.brat.2019.103479) to identify sources of research waste and ways of increasing value in biomedical research. The seven assessed reviews had overall extremely poor quality scores (five critically low, one low, one moderate, and none high) and multiple sources of waste, while opening opportunities for increasing value in the NFB literature. Our research shows that it remains unclear whether NFB training is significantly beneficial in treating PTSD. The quality of the investigated literature is low and maintains a persistent uncertainty over numerous points that are highly important for deciding whether an intervention has clinical efficacy. Just as importantly, none of the reviews we appraised explored statistical power, referred to open data of the included studies, or adjusted their pooled effect sizes for publication bias and risk of bias. Based on these results, we identified some recurrent sources of waste (such as a lack of research decisions based on sound questions, or failure to use an appropriate methodology in a fully transparent, unbiased, and useable manner) and proposed some directions for increasing value (homogeneity and consensus) in designing and reporting research on NFB interventions in PTSD.
Affiliation(s)
- Gabriela Mariana Marcu
- Division of Physiology and Neuroscience, Department of Functional Sciences, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania.
- Department of Psychology, "Lucian Blaga" University of Sibiu, Sibiu, Romania.
- Andrei Dumbravă
- George I.M. Georgescu Institute of Cardiovascular Diseases, Iaşi, Romania
- Alexandru Ioan Cuza University of Iaşi, Iaşi, Romania
- Ionuţ-Ciprian Băcilă
- Scientific Research Group in Neuroscience, "Dr. Gheorghe Preda" Clinical Psychiatry Hospital, Sibiu, Romania
- Faculty of Medicine, "Lucian Blaga" University of Sibiu, Sibiu, Romania
- Raluca Diana Szekely-Copîndean
- Scientific Research Group in Neuroscience, "Dr. Gheorghe Preda" Clinical Psychiatry Hospital, Sibiu, Romania
- Department of Social and Human Research, Romanian Academy - Cluj-Napoca Branch, Cluj-Napoca, Romania
- Ana-Maria Zăgrean
- Division of Physiology and Neuroscience, Department of Functional Sciences, Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
4. Baba A, Smith M, Potter BK, Chan AW, Moher D, Offringa M. Guidelines for reporting pediatric and child health clinical trial protocols and reports: study protocol for SPIRIT-Children and CONSORT-Children. Trials 2024;25:96. [PMID: 38287439] [PMCID: PMC10826142] [DOI: 10.1186/s13063-024-07948-7]
Abstract
BACKGROUND Despite the critical importance of clinical trials to provide evidence about the effects of intervention for children and youth, a paucity of published high-quality pediatric clinical trials persists. Sub-optimal reporting of key trial elements necessary to critically appraise and synthesize findings is prevalent. To harmonize and provide guidance for reporting in pediatric controlled clinical trial protocols and reports, reporting guideline extensions to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) and Consolidated Standards of Reporting Trials (CONSORT) guidelines specific to pediatrics are being developed: SPIRIT-Children (SPIRIT-C) and CONSORT-Children (CONSORT-C). METHODS The development of SPIRIT-C/CONSORT-C will be informed by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) method for reporting guideline development in the following stages: (1) generation of a preliminary list of candidate items, informed by (a) items developed during initial development efforts and child-relevant items from recent published SPIRIT and CONSORT extensions, (b) two systematic reviews and an environmental scan of the literature, and (c) workshops with young people; (2) an international Delphi study, in which a wide range of panelists will vote on the inclusion or exclusion of candidate items on a nine-point Likert scale; (3) a consensus meeting to discuss items that have not reached consensus in the Delphi study and to "lock" the checklist items; (4) pilot testing of items and definitions to ensure that they are understandable, useful, and applicable; and (5) a final project meeting to discuss each item in the context of pilot test results. Key partners, including young people (ages 12-24 years) and family caregivers (e.g., parents) with lived experience of pediatric clinical trials, and individuals with expertise and involvement in pediatric trials, will be involved throughout the project. SPIRIT-C/CONSORT-C will be disseminated through publications, academic conferences, and endorsement by pediatric journals and relevant research networks and organizations. DISCUSSION SPIRIT-C/CONSORT-C may serve as resources to facilitate the comprehensive reporting needed to understand pediatric clinical trial protocols and reports, which may improve transparency within pediatric clinical trials and reduce research waste. TRIAL REGISTRATION The development of these reporting guidelines is registered with the EQUATOR Network: SPIRIT-Children (https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-clinical-trials-protocols/#35) and CONSORT-Children (https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-clinical-trials/#CHILD).
Affiliation(s)
- Ami Baba
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Maureen Smith
- Patient Partner, Canadian Organization for Rare Disorders, Ottawa, ON, Canada
- Beth K Potter
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- An-Wen Chan
- Department of Medicine, Women's College Research Institute, University of Toronto, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- David Moher
- Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Martin Offringa
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, 686 Bay Street, Toronto, ON M5G 0A4, Canada
5. Kahkasha. Clinical Trials in Palliative Care: Need for Serious Reckoning. JCO Glob Oncol 2023;9:e2300181. [PMID: 37824801] [DOI: 10.1200/go.23.00181]
Abstract
We explore five crucial insights on clinical trials in palliative care. From setting impactful priorities to ethical funding, we delve into the heart of informed patient care and discover how "Ground Shots" can drive meaningful change, and why collaboration …
Affiliation(s)
- Kahkasha
- All India Institute of Medical Sciences, Deoghar, India
6. Harrer M, Cuijpers P, Schuurmans LKJ, Kaiser T, Buntrock C, van Straten A, Ebert D. Evaluation of randomized controlled trials: a primer and tutorial for mental health researchers. Trials 2023;24:562. [PMID: 37649083] [PMCID: PMC10469910] [DOI: 10.1186/s13063-023-07596-3]
Abstract
BACKGROUND Considered one of the highest levels of evidence, results of randomized controlled trials (RCTs) remain an essential building block in mental health research. They are frequently used to confirm that an intervention "works" and to guide treatment decisions. Given their importance in the field, it is concerning that the quality of many RCT evaluations in mental health research remains poor. Common errors range from inadequate missing data handling and inappropriate analyses (e.g., baseline randomization tests or analyses of within-group changes) to undue interpretation of trial results and insufficient reporting. These deficiencies pose a threat to the robustness of mental health research and its impact on patient care. Many of these issues may be avoided in the future if mental health researchers are provided with a better understanding of what constitutes a high-quality RCT evaluation. METHODS In this primer article, we give an introduction to core concepts and caveats of clinical trial evaluations in mental health research. We also show how to implement current best practices using open-source statistical software. RESULTS Drawing on Rubin's potential outcome framework, we describe how RCTs put us in a privileged position to study causality by ensuring that the potential outcomes of the randomized groups become exchangeable. We discuss how missing data can threaten the validity of our results if dropouts systematically differ from non-dropouts, introduce trial estimands as a way to align analyses with the goals of the evaluation, and explain how to set up an appropriate analysis model to test the treatment effect at one or several assessment points. A novice-friendly tutorial is provided alongside this primer. It lays out concepts in greater detail and showcases how to implement techniques using the statistical software R, based on a real-world RCT dataset. DISCUSSION Many problems of RCTs already arise at the design stage, and we examine some avoidable and unavoidable "weak spots" of this design in mental health research. For instance, we discuss how lack of prospective registration can give way to issues like outcome switching and selective reporting, how allegiance biases can inflate effect estimates, review recommendations and challenges in blinding patients in mental health RCTs, and describe problems arising from underpowered trials. Lastly, we discuss why not all randomized trials necessarily have limited external validity and examine how RCTs relate to ongoing efforts to personalize mental health care.
Affiliation(s)
- Mathias Harrer
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany.
- Clinical Psychology and Psychotherapy, Institute for Psychology, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany.
- Pim Cuijpers
- Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- WHO Collaborating Centre for Research and Dissemination of Psychological Interventions, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Lea K J Schuurmans
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany
- Tim Kaiser
- Methods and Evaluation/Quality Assurance, Freie Universität Berlin, Berlin, Germany
- Claudia Buntrock
- Institute of Social Medicine and Health Systems Research (ISMHSR), Medical Faculty, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Annemieke van Straten
- Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- David Ebert
- Psychology and Digital Mental Health Care, Technical University Munich, Georg-Brauchle-Ring 60-62, Munich, 80992, Germany
7. Doi SA, Abdulmajeed J. Angry scientists, angry analysts and angry novelists. Diabetologia 2023;66:1580-1583. [PMID: 37212887] [DOI: 10.1007/s00125-023-05917-4]
Affiliation(s)
- Suhail A Doi
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar.
- Royal College of Physicians of Edinburgh, Edinburgh, UK.
- Jazeel Abdulmajeed
- Department of Population Medicine, College of Medicine, QU Health, Qatar University, Doha, Qatar
- Primary Health Care Corporation, Doha, Qatar
8. Hughes K, Ford H, Thangaratinam S, Brennecke S, Mol BW, Wang R. Diagnosis or prognosis? An umbrella review of mid-trimester cervical length and spontaneous preterm birth. BJOG 2023;130:866-879. [PMID: 36871557] [PMCID: PMC10953024] [DOI: 10.1111/1471-0528.17443]
Abstract
BACKGROUND Cervical length is widely used to assess a woman's risk of spontaneous preterm birth (SPTB). OBJECTIVES To summarise and critically appraise the evidence from systematic reviews on the prognostic capacity of transvaginal sonographic cervical length in the second trimester in asymptomatic women with singleton or twin pregnancy. SEARCH STRATEGY Searches were performed in Medline, Embase, CINAHL and grey literature from 1 January 1995 to 6 July 2021, including keywords 'cervical length', 'preterm birth', 'obstetric labour, premature', 'review' and others, without language restriction. SELECTION CRITERIA We included systematic reviews including women who did not receive treatments to reduce SPTB risk. DATA COLLECTION AND ANALYSIS From 2472 articles, 14 systematic reviews were included. Summary statistics were independently extracted by two reviewers, tabulated and analysed descriptively. The ROBIS tool was used to evaluate risk of bias of included systematic reviews. MAIN RESULTS Twelve reviews performed meta-analyses: two were reported as systematic reviews of prognostic factor studies, ten used diagnostic test accuracy methodology. Ten systematic reviews were at high or unclear risk of bias. Meta-analyses reported up to 80 combinations of cervical length, gestational age at measurement and definition of preterm birth. Cervical length was consistently associated with SPTB, with a likelihood ratio for a positive test of 1.70-142. CONCLUSIONS The ability of cervical length to predict SPTB is a prognostic research question; systematic reviews typically analysed diagnostic test accuracy. Individual participant data meta-analysis using prognostic factor research methods is recommended to better quantify how well transvaginal ultrasonographic cervical length can predict SPTB.
Affiliation(s)
- Kelly Hughes
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, Victoria, Australia
- Heather Ford
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, Victoria, Australia
- Shakila Thangaratinam
- WHO Collaborating Centre for Women's Health, Institute of Translational Medicine, University of Birmingham, Birmingham, UK
- Shaun Brennecke
- Department of Obstetrics and Gynaecology, The University of Melbourne, Melbourne, Victoria, Australia
- Department of Maternal-Fetal Medicine & Pregnancy Research Centre, Royal Women's Hospital, Melbourne, Victoria, Australia
- Ben W. Mol
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, Victoria, Australia
- Aberdeen Centre for Women's Health Research, School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Rui Wang
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, Victoria, Australia
9. Lozada-Martinez ID, Ealo-Cardona CI, Marrugo-Ortiz AC, Picón-Jaimes YA, Cabrera-Vargas LF, Narvaez-Rojas AR. Meta-research studies in surgery: a field that should be encouraged to assess and improve the quality of surgical evidence. Int J Surg 2023;109:1823-1824. [PMID: 37144675] [PMCID: PMC10389356] [DOI: 10.1097/js9.0000000000000422]
Affiliation(s)
- Ivan D. Lozada-Martinez
- Department of Graduate Studies in Health Sciences, Epidemiology Program, Universidad Autónoma de Bucaramanga, Bucaramanga
- Luis F. Cabrera-Vargas
- Medical and Surgical Research Center, Future Surgeons Chapter, Colombian Surgery Association, Bogotá, Colombia
- Alexis R. Narvaez-Rojas
- International Coalition on Surgical Research, Faculty of Medical Sciences, Universidad Nacional Autónoma de Nicaragua, Managua, Nicaragua
- DeWitt Daughtry Family Department of Surgery, Breast Surgical Oncology Division and Jackson Health System/University of Miami Miller School of Medicine, Miami, Florida, USA
10. Clift AK, Dodwell D, Lord S, Petrou S, Brady M, Collins GS, Hippisley-Cox J. Development and internal-external validation of statistical and machine learning models for breast cancer prognostication: cohort study. BMJ 2023;381:e073800. [PMID: 37164379] [PMCID: PMC10170264] [DOI: 10.1136/bmj-2022-073800]
Abstract
OBJECTIVE To develop a clinically useful model that estimates the 10 year risk of breast cancer related mortality in women (self-reported female sex) with breast cancer of any stage, comparing results from regression and machine learning approaches. DESIGN Population based cohort study. SETTING QResearch primary care database in England, with individual level linkage to the national cancer registry, Hospital Episodes Statistics, and national mortality registers. PARTICIPANTS 141 765 women aged 20 years and older with a diagnosis of invasive breast cancer between 1 January 2000 and 31 December 2020. MAIN OUTCOME MEASURES Four model building strategies comprising two regression (Cox proportional hazards and competing risks regression) and two machine learning (XGBoost and an artificial neural network) approaches. Internal-external cross validation was used for model evaluation. Random effects meta-analysis that pooled estimates of discrimination and calibration metrics, calibration plots, and decision curve analysis were used to assess model performance, transportability, and clinical utility. RESULTS During a median 4.16 years (interquartile range 1.76-8.26) of follow-up, 21 688 breast cancer related deaths and 11 454 deaths from other causes occurred. Restricting to 10 years maximum follow-up from breast cancer diagnosis, 20 367 breast cancer related deaths occurred during a total of 688 564.81 person years. The crude breast cancer mortality rate was 295.79 per 10 000 person years (95% confidence interval 291.75 to 299.88). Predictors varied for each regression model, but both Cox and competing risks models included age at diagnosis, body mass index, smoking status, route to diagnosis, hormone receptor status, cancer stage, and grade of breast cancer. The Cox model's random effects meta-analysis pooled estimate for Harrell's C index was the highest of any model at 0.858 (95% confidence interval 0.853 to 0.864, and 95% prediction interval 0.843 to 0.873). It appeared acceptably calibrated on calibration plots. The competing risks regression model had good discrimination (pooled Harrell's C index 0.849 (0.839 to 0.859, and 0.821 to 0.876)), and evidence of systematic miscalibration on summary metrics was lacking. The machine learning models had acceptable discrimination overall (Harrell's C index: XGBoost 0.821 (0.813 to 0.828, and 0.805 to 0.837); neural network 0.847 (0.835 to 0.858, and 0.816 to 0.878)), but had more complex patterns of miscalibration and more variable regional and stage specific performance. Decision curve analysis suggested that the Cox and competing risks regression models tested may have higher clinical utility than the two machine learning approaches. CONCLUSION In women with breast cancer of any stage, using the predictors available in this dataset, regression based methods had better and more consistent performance compared with machine learning approaches and may be worthy of further evaluation for potential clinical use, such as for stratified follow-up.
Affiliation(s)
- Ash Kieran Clift
- Cancer Research UK Oxford Centre, Oxford, UK
- Nuffield Department of Primary Care Health Sciences, Radcliffe Primary Care Building, Radcliffe Observatory Quarter, University of Oxford, Oxford OX2 6GG, UK
- David Dodwell
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Simon Lord
- Department of Oncology, University of Oxford, Oxford, UK
- Stavros Petrou
- Nuffield Department of Primary Care Health Sciences, Radcliffe Primary Care Building, Radcliffe Observatory Quarter, University of Oxford, Oxford OX2 6GG, UK
- Michael Brady
- Department of Oncology, University of Oxford, Oxford, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Julia Hippisley-Cox
- Nuffield Department of Primary Care Health Sciences, Radcliffe Primary Care Building, Radcliffe Observatory Quarter, University of Oxford, Oxford OX2 6GG, UK
11. Paul M. SPINning in infectious diseases. Clin Microbiol Infect 2023:S1198-743X(23)00197-0. [PMID: 37116862] [DOI: 10.1016/j.cmi.2023.04.023]
Affiliation(s)
- Mical Paul
- Rambam Health Care Campus and The Ruth and Bruce Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
12. Seidler AL, Hunter KE, Barba A, Aberoumand M, Libesman S, Williams JG, Shrestha N, Aagerup J, Gyte G, Montgomery A, Duley L, Askie L. Optimizing cord management for each preterm baby - Challenges of collating individual participant data and recommendations for future collaborative research. Semin Perinatol 2023:151740. [PMID: 37019711] [DOI: 10.1016/j.semperi.2023.151740]
Abstract
The optimal cord management strategy at birth for each preterm baby is still unknown, despite more than 100 randomized controlled trials (RCTs) undertaken on this question. To address this, we brought together all RCTs examining cord management strategies at preterm birth in the iCOMP (individual participant data on COrd Management at Preterm birth) Collaboration, to perform an individual participant data network meta-analysis. In this paper, we describe the trials and tribulations around obtaining individual participant data to resolve controversies around cord clamping, and we derive key recommendations for future collaborative research in perinatology. To reliably answer outstanding questions, future cord management research needs to be collaborative and coordinated, by aligning core protocol elements, ensuring quality and reporting standards are met, and carefully considering and reporting on vulnerable sub-populations. The iCOMP Collaboration is an example of the power of collaboration to address priority research questions, and ultimately improve neonatal outcomes worldwide.
Affiliation(s)
- Anna Lene Seidler
- Senior Research Fellow, NHMRC Clinical Trials Centre, University of Sydney, Australia.
- Kylie E Hunter
- Human Mvt, Senior Evidence Analyst, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Angie Barba
- Senior Evidence Analyst, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Mason Aberoumand
- Evidence Analyst, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Sol Libesman
- Post Doctoral Research Associate, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Jonathan G Williams
- BMedBiotech, Evidence Analyst, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Nipun Shrestha
- Post Doctoral Research Associate, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Jannik Aagerup
- Research Administration Officer, NHMRC Clinical Trials Centre, University of Sydney, Australia
- Gill Gyte
- Consumer Editor, Cochrane Pregnancy and Childbirth, University of Liverpool, UK
- Alan Montgomery
- Professor of Medical Statistics and Clinical Trials, Nottingham Clinical Trials Unit, University of Nottingham, UK
- Lisa Askie
- MPH FAHMS FHEA, University of Sydney, Australia
13
Braun T, Kopkow C. Research Integrity – Teil 1: Verantwortungsvolle Forschungspraktiken und Transparenz [Research integrity – Part 1: Responsible research practices and transparency]. PHYSIOSCIENCE 2023. [DOI: 10.1055/a-1982-2858] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
14
We need to talk about nonprobability samples. Trends Ecol Evol 2023; 38:521-531. [PMID: 36775795 DOI: 10.1016/j.tree.2023.01.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 01/13/2023] [Accepted: 01/17/2023] [Indexed: 02/12/2023]
Abstract
In most circumstances, probability sampling is the only way to ensure unbiased inference about population quantities where a complete census is not possible. As we enter the era of 'big data', however, nonprobability samples, whose sampling mechanisms are unknown, are undergoing a renaissance. We explain why the use of nonprobability samples can lead to spurious conclusions, and why seemingly large nonprobability samples can be (effectively) very small. We also review some recent controversies surrounding the use of nonprobability samples in biodiversity monitoring. These points notwithstanding, we argue that nonprobability samples can be useful, provided that their limitations are assessed, mitigated where possible and clearly communicated. Ecologists can learn much from other disciplines on each of these fronts.
15
Morrell W, Gelinas L, Zarin D, Bierer BE. Ensuring the Scientific Value and Feasibility of Clinical Trials: A Qualitative Interview Study. AJOB Empir Bioeth 2023; 14:99-110. [PMID: 36599052 DOI: 10.1080/23294515.2022.2160510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
BACKGROUND Ethical and scientific principles require that clinical trials address an important question and have the resources needed to complete the study. However, there are no clear standards for review that would ensure that these principles are upheld. METHODS We conducted semi-structured interviews with a convenience sample of nineteen experts in clinical trial design, conduct, and/or oversight to elucidate current practice and identify areas of need with respect to ensuring the scientific value and feasibility of clinical trials prior to initiation and while ongoing. We used a priori coding and grounded theory to analyze the data, and the constant comparative method to induce higher-order themes. RESULTS Interviewees perceived determination of scientific value as the responsibility of the investigator and, secondarily, of other parties who review or oversee research. Interviewees reported that ongoing trials are rarely reevaluated in light of emerging evidence from external sources, that such evaluation is complex, and that there would be value in developing standards for monitoring and evaluating evidence systematically. Investigators, institutional review boards (IRBs), and/or data monitoring committees (DMCs) could undertake these responsibilities. Feasibility assessments are performed but are typically inadequate; potential solutions are unclear. CONCLUSIONS There are three domains where current approaches are suboptimal and further guidance is needed. First, it is often unclear who has the responsibility for conducting scientific review, whether the investigator, IRB, and/or DMC. Second, the standards for scientific review (e.g., appropriate search terms, data sources, and analytic plan) should be defined. Third, guidance is needed on the evaluation of ongoing studies in light of potentially new and evolving evidence, with particular reference to evidence from outside the trial itself.
Affiliation(s)
- Walker Morrell
- Multi-Regional Clinical Trials Center, Brigham & Women's Hospital and Harvard, Cambridge, MA, USA
- Luke Gelinas
- Multi-Regional Clinical Trials Center, Brigham & Women's Hospital and Harvard, Cambridge, MA, USA; Advarra IRB, Columbia, MD, USA
- Deborah Zarin
- Multi-Regional Clinical Trials Center, Brigham & Women's Hospital and Harvard, Cambridge, MA, USA
- Barbara E Bierer
- Multi-Regional Clinical Trials Center, Brigham & Women's Hospital and Harvard, Cambridge, MA, USA; Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA

16
Riley RD, Cole TJ, Deeks J, Kirkham JJ, Morris J, Perera R, Wade A, Collins GS. On the 12th Day of Christmas, a Statistician Sent to Me . . . BMJ 2022; 379:e072883. [PMID: 36593578 PMCID: PMC9844255 DOI: 10.1136/bmj-2022-072883] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
- Richard D Riley
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Tim J Cole
- UCL Great Ormond Street Institute of Child Health, London, UK
- Jon Deeks
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Jamie J Kirkham
- Centre for Biostatistics, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Rafael Perera
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Angie Wade
- UCL Great Ormond Street Institute of Child Health, London, UK
- Gary S Collins
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK

17
Byrne JA, Park Y, Richardson RAK, Pathmendra P, Sun M, Stoeger T. Protection of the human gene research literature from contract cheating organizations known as research paper mills. Nucleic Acids Res 2022; 50:12058-12070. [PMID: 36477580 PMCID: PMC9757046 DOI: 10.1093/nar/gkac1139] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 11/08/2022] [Accepted: 11/14/2022] [Indexed: 12/12/2022] Open
Abstract
Human gene research generates new biology insights with translational potential, yet few studies have considered the health of the human gene literature. The accessibility of human genes for targeted research, combined with unreasonable publication pressures and recent developments in scholarly publishing, may have created a market for low-quality or fraudulent human gene research articles, including articles produced by contract cheating organizations known as paper mills. This review summarises the evidence that paper mills contribute to the human gene research literature at scale and outlines why targeted gene research may be particularly vulnerable to systematic research fraud. To raise awareness of targeted gene research from paper mills, we highlight features of problematic manuscripts and publications that can be detected by gene researchers and/or journal staff. As improved awareness and detection could drive the further evolution of paper mill-supported publications, we also propose changes to academic publishing to more effectively deter and correct problematic publications at scale. In summary, the threat of paper mill-supported gene research highlights the need for all researchers to approach the literature with a more critical mindset, and demand publications that are underpinned by plausible research justifications, rigorous experiments and fully transparent reporting.
Affiliation(s)
- Jennifer A Byrne
- To whom correspondence should be addressed. Tel: +61 2 4920 4135;
- Yasunori Park
- School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Reese A K Richardson
- Department of Chemical and Biological Engineering, Northwestern University, Evanston, USA
- Pranujan Pathmendra
- School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Mengyi Sun
- Department of Chemical and Biological Engineering, Northwestern University, Evanston, USA
- Thomas Stoeger
- To whom correspondence should be addressed.

18
The role of data sharing in survey dropout: a study among scientists as respondents. JOURNAL OF DOCUMENTATION 2022. [DOI: 10.1108/jd-06-2022-0135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Purpose One of the currently debated changes in scientific practice is the implementation of data sharing requirements for peer-reviewed publication, to increase the transparency and intersubjective verifiability of results. However, data sharing does not yet seem to be a fully adopted behavior among researchers. The theory of planned behavior has repeatedly been applied to explain the drivers of data sharing from the perspective of data donors (researchers). However, data sharing can also be viewed from another perspective: that of survey participants. The research questions (RQs) for this study were as follows: (1) Does data sharing increase participants' nonresponse? (2) Does data sharing influence participants' response behavior? The purpose of this paper is to address these issues. Design/methodology/approach To answer the RQs, a mixed methods approach was applied, consisting of a qualitative prestudy and a quantitative survey including an experimental component. The latter was a two-group setup with an intervention group (A) and a control group (B). List-based recruiting of members of the Medical Faculty of the University of Freiburg was applied for 15 days. For exploratory data analysis of dropouts and nonresponse, we used Fisher's exact tests and binary logistic regressions. Findings In sum, we recorded 197 cases for Group A and 198 cases for Group B. We found no systematic group differences regarding response bias or dropout. Furthermore, we gained insights into the experiences our sample had with data sharing: half of our sample had already requested data from other researchers or shared data at the request of other researchers. Data repositories, however, were used less frequently: 28% of our respondents had used data from repositories and 19% had stored data in a repository. Originality/value To the authors' knowledge, this is the first study that includes researchers as survey subjects in investigating the effect of data sharing on their response patterns.
19
Now Is the Time to Bring a Common but Unpopular Noncommunicable Disease into Focus: Peripheral Arterial Disease Takes Limbs and Lives, but It Must Also Touch Our Hearts! J Clin Med 2022; 11:jcm11195737. [PMID: 36233605 PMCID: PMC9573182 DOI: 10.3390/jcm11195737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Accepted: 09/27/2022] [Indexed: 11/17/2022] Open
Abstract
We have all learned a great deal from the ongoing pandemic that has already taken more than five million lives in less than three years [...]
20
Chicco D, Jurman G. The ABC recommendations for validation of supervised machine learning results in biomedical sciences. Front Big Data 2022; 5:979465. [PMID: 36238654 PMCID: PMC9552836 DOI: 10.3389/fdata.2022.979465] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 09/12/2022] [Indexed: 12/23/2022] Open
Affiliation(s)
- Davide Chicco
- Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, ON, Canada
- *Correspondence: Davide Chicco
- Giuseppe Jurman
- Data Science for Health Unit, Fondazione Bruno Kessler, Trento, Italy

21
Cirugía coronaria y ¿evidencia? científica [Coronary surgery and scientific "evidence"?]. CIRUGIA CARDIOVASCULAR 2022. [DOI: 10.1016/j.circv.2022.09.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
22
Quality or quantity? Questions on the growth of global scientific production. Int J Surg 2022; 105:106862. [PMID: 36031070 DOI: 10.1016/j.ijsu.2022.106862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 08/20/2022] [Indexed: 11/21/2022]
23
Hardwicke TE, Thibault RT, Kosie JE, Tzavella L, Bendixen T, Handcock SA, Köneke VE, Ioannidis JPA. Post-publication critique at top-ranked journals across scientific disciplines: a cross-sectional assessment of policies and practice. ROYAL SOCIETY OPEN SCIENCE 2022; 9:220139. [PMID: 36039285 PMCID: PMC9399707 DOI: 10.1098/rsos.220139] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 07/19/2022] [Indexed: 06/15/2023]
Abstract
Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two-hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500-1200 words) and time-to-submit (median 12, IQR 4-26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400-550 words) and time-to-submit (median 4, IQR 4-6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.
Affiliation(s)
- Tom E. Hardwicke
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT Amsterdam, The Netherlands
- Robert T. Thibault
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
- School of Psychological Science, University of Bristol, Bristol, UK
- Jessica E. Kosie
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Theiss Bendixen
- Department of the Study of Religion, Aarhus University, Aarhus, Denmark
- Sarah A. Handcock
- Florey Department of Neuroscience and Mental Health, University of Melbourne, Melbourne, Australia
- John P. A. Ioannidis
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
- Departments of Medicine, Epidemiology and Population Health, Biomedical Data Science, and Statistics, Stanford University, Stanford, CA, USA
- Meta-Research Innovation Center Berlin (METRIC-B), QUEST Center for Transforming Biomedical Research, Berlin Institute of Health, Charité – Universitätsmedizin Berlin, Berlin, Germany

24
Reynard C, Jenkins D, Martin GP, Kontopantelis E, Body R. Is your clinical prediction model past its sell by date? Arch Emerg Med 2022; 39:956-958. [DOI: 10.1136/emermed-2021-212224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 06/22/2022] [Indexed: 11/04/2022]
25
Questionable Research Practices, Low Statistical Power, and Other Obstacles to Replicability: Why Preclinical Neuroscience Research Would Benefit from Registered Reports. eNeuro 2022; 9:9/4/ENEURO.0017-22.2022. [PMID: 35922130 PMCID: PMC9351632 DOI: 10.1523/eneuro.0017-22.2022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 05/22/2022] [Accepted: 05/31/2022] [Indexed: 02/03/2023] Open
Abstract
Replicability, the degree to which a previous scientific finding can be repeated in a distinct set of data, has been considered an integral component of institutionalized scientific practice since its inception several hundred years ago. In the past decade, large-scale replication studies have demonstrated that replicability is far from favorable across multiple scientific fields. Here, I evaluate this literature and describe contributing factors, including the prevalence of questionable research practices (QRPs), misunderstanding of p-values, and low statistical power. I subsequently discuss how these issues manifest specifically in preclinical neuroscience research. I conclude that these problems are multifaceted and difficult to solve, relying on the actions of early- and late-career researchers, funding sources, academic publishers, and others. I assert that any viable solution to the problem of substandard replicability must include changing academic incentives, with the adoption of registered reports being the most immediately impactful and pragmatic strategy. For animal research in particular, comprehensive reporting guidelines that document potential sources of sensitivity for experimental outcomes are an essential addition.
26
Serio CD, Malgaroli A, Ferrari P, Kenett RS. The reproducibility of COVID-19 data analysis: paradoxes, pitfalls, and future challenges. PNAS NEXUS 2022; 1:pgac125. [PMID: 36741433 PMCID: PMC9896906 DOI: 10.1093/pnasnexus/pgac125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/08/2022] [Indexed: 02/07/2023]
Abstract
In the midst of the COVID-19 experience, we learned an important scientific lesson: knowledge acquisition and information quality in medicine depend more on "data quality" than on "data quantity." The large number of COVID-19 reports, published in a very short time, demonstrated that the most advanced statistical and computational tools cannot properly overcome the poor quality of acquired data. The main evidence for this observation comes from the poor reproducibility of results. Indeed, understanding the data generation process is fundamental when investigating scientific questions such as prevalence, immunity, transmissibility, and susceptibility. Most COVID-19 studies are case reports based on "nonprobability" sampling and do not adhere to the general principles of controlled experimental designs. Data collected in this way suffer from many limitations when used to derive clinical conclusions, including confounding factors, measurement errors, and selection bias. Each of these elements represents a source of uncertainty, which is often ignored or assumed to provide an unbiased random contribution. Inference from large medical datasets is also affected by data protection policies that, while protecting patients' privacy, are likely to consistently reduce the usefulness of big data in achieving fundamental goals such as effective and efficient data integration. This limits the degree of generalizability of scientific studies and leads to paradoxical and conflicting conclusions. We provide such examples from assessing the role of risk factors. In conclusion, new paradigms and new design schemes are needed in order to reach inferential conclusions that are meaningful and informative when dealing with data collected during emergencies like COVID-19.
Affiliation(s)
- Clelia Di Serio
- Vita-Salute San Raffaele University, UniSR, Milan, Italy
- University Centre of Statistics in the Biomedical Sciences CUSSB, UniSR, Milan, Italy
- Biomedical Faculty, Università della Svizzera Italiana, Lugano, Switzerland
- Paolo Ferrari
- Biomedical Faculty, Università della Svizzera Italiana, Lugano, Switzerland
- Ente Ospedaliero Cantonale, Lugano, Switzerland
- Clinical School, University of New South Wales, Sydney, Australia
- Ron S Kenett
- KPA, Samuel Neaman Institute, Technion, Haifa, Israel
- University of Turin, Turin, Italy

27
Prosepe I, Groenwold RHH, Knevel R, Pajouheshnia R, van Geloven N. The Disconnect Between Development and Intended Use of Clinical Prediction Models for Covid-19: A Systematic Review and Real-World Data Illustration. FRONTIERS IN EPIDEMIOLOGY 2022; 2:899589. [PMID: 38455309 PMCID: PMC10910889 DOI: 10.3389/fepid.2022.899589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 05/23/2022] [Indexed: 03/09/2024]
Abstract
Background The SARS-CoV-2 pandemic has boosted the appearance of clinical prediction models in the medical literature. Many of these models aim to provide guidance for decision making on treatment initiation. Special consideration of how to account for post-baseline treatments is needed when developing such models. We examined how post-baseline treatment was handled in published Covid-19 clinical prediction models, and we illustrated how much estimated risks may differ according to how treatment is handled. Methods First, we reviewed 33 Covid-19 prognostic models published in the literature in the period up to 5 May 2020. We extracted: (1) the reported intended use of the model; (2) how treatment was incorporated during model development; and (3) whether the chosen analysis strategy was in agreement with the intended use. Second, we used nationwide Dutch data on hospitalized patients who tested positive for SARS-CoV-2 in 2020 to illustrate how estimated mortality risks differ when using four different analysis strategies to model ICU treatment. Results Of the 33 papers, 21 (64%) showed misalignment between intended use and analysis strategy, 7 (21%) were unclear about the estimated risk, and only 5 (15%) showed clear alignment between intended use and analysis strategy. We showed with real data how different approaches to post-baseline treatment yield different estimated mortality risks, ranging between 33% and 46% for a 75-year-old patient with two medical conditions. Conclusions Misalignment between intended use and analysis strategy is common in reported Covid-19 clinical prediction models. This can lead to considerable underestimation or overestimation of the intended risks.
Affiliation(s)
- Ilaria Prosepe
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands
- Rolf H. H. Groenwold
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands
- Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, Netherlands
- Rachel Knevel
- Department of Rheumatology, Leiden University Medical Center, Leiden, Netherlands
- Romin Pajouheshnia
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, Netherlands
- Nan van Geloven
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands

28
Xu J, Guo Y, Wang F, Xu H, Lucero R, Bian J, Prosperi M. Protocol for the development of a reporting guideline for causal and counterfactual prediction models in biomedicine. BMJ Open 2022; 12:e059715. [PMID: 35725267 PMCID: PMC9214357 DOI: 10.1136/bmjopen-2021-059715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
INTRODUCTION While there are guidelines for reporting on observational studies (eg, Strengthening the Reporting of Observational Studies in Epidemiology, Reporting of Studies Conducted Using Observational Routinely Collected Health Data Statement), estimation of causal effects from both observational data and randomised experiments (eg, A Guideline for Reporting Mediation Analyses of Randomised Trials and Observational Studies, Consolidated Standards of Reporting Trials, PATH) and on prediction modelling (eg, Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis), none is purposely made for deriving and validating models from observational data to predict counterfactuals for individuals on one or more possible interventions, on the basis of given (or inferred) causal structures. This paper describes methods and processes that will be used to develop a Reporting Guideline for Causal and Counterfactual Prediction Models (PRECOG). METHODS AND ANALYSIS PRECOG will be developed following published guidance from the Enhancing the Quality and Transparency of Health Research (EQUATOR) network and will comprise five stages. Stage 1 will be meetings of a working group every other week with rotating external advisors (active until stage 5). Stage 2 will comprise a systematic review of literature on counterfactual prediction modelling for biomedical sciences (registered in Prospective Register of Systematic Reviews). In stage 3, a computer-based, real-time Delphi survey will be performed to consolidate the PRECOG checklist, involving experts in causal inference, epidemiology, statistics, machine learning, informatics and protocols/standards. Stage 4 will involve the write-up of the PRECOG guideline based on the results from the prior stages. Stage 5 will seek the peer-reviewed publication of the guideline, the scoping/systematic review and dissemination. 
ETHICS AND DISSEMINATION The study will follow the principles of the Declaration of Helsinki. The study has been registered in EQUATOR and approved by the University of Florida's Institutional Review Board (#202200495). Informed consent will be obtained from the working groups and the Delphi survey participants. The dissemination of PRECOG and its products will be done through journal publications, conferences, websites and social media.
Affiliation(s)
- Jie Xu
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, Florida, USA
- Yi Guo
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, Florida, USA
- Fei Wang
- Department of Population Health Sciences, Weill Cornell Medical College, Cornell University, New York City, New York, USA
- Hua Xu
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Robert Lucero
- School of Nursing, University of California - Los Angeles, Los Angeles, California, USA
- Jiang Bian
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, Florida, USA
- Mattia Prosperi
- Department of Epidemiology, University of Florida, Gainesville, Florida, USA

29
Mott A, McDaid C, Hewitt C, Kirkham JJ. Interventions for improving the design and conduct of scientific research: A scoping review protocol. NIHR OPEN RESEARCH 2022; 2:4. [PMID: 37881299 PMCID: PMC10593266 DOI: 10.3310/nihropenres.13252.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/09/2022] [Indexed: 10/27/2023]
Abstract
Background Research waste is prevalent in many scientific fields despite a number of initiatives to improve research practices. Interventions to improve practice are often implemented without evaluating their effectiveness. It is therefore important to identify the interventions that have been evaluated, to assess how they have been evaluated, and to identify areas where further research is required. Objectives A scoping review will be undertaken to assess which interventions aimed at researchers or research teams to improve research design and conduct have been evaluated. This review will also consider when in the research pathway these interventions are implemented, what aspects of research design or conduct are being targeted, and who is implementing these interventions. Methods Interventions which aim to improve the design or conduct of research will be eligible for inclusion. The review will not include interventions aimed at hypothetical research projects or interventions implemented without evaluation. The following sources will be searched: MEDLINE, EMBASE, ERIC, HMIC, EconLit, Social Policy and Practice, ProQuest theses, and MetaArXiv. Hand searching of references and citations of included studies will also be undertaken. Searches will be limited to articles published in the last 10 years. Data extraction will be completed using a data extraction template developed for this review. Results will be tabulated by type of intervention, research stage, and outcome. A narrative review will also be provided addressing each of the objectives.
Affiliation(s)
- Andrew Mott
- York Trials Unit, University of York, York, North Yorkshire, YO10 5DD, UK
- Catriona McDaid
- York Trials Unit, University of York, York, North Yorkshire, YO10 5DD, UK
- Catherine Hewitt
- York Trials Unit, University of York, York, North Yorkshire, YO10 5DD, UK
- Jamie J Kirkham
- Centre for Biostatistics, The University of Manchester, Manchester, M13 9PL, UK

30
Mitchell EJ, Sprange K, Treweek S, Nixon E. Value and engagement: what can clinical trials learn from techniques used in not-for-profit marketing? Trials 2022; 23:457. [PMID: 35655239 PMCID: PMC9164393 DOI: 10.1186/s13063-022-06417-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/18/2022] [Indexed: 11/10/2022] Open
Abstract
Marketing is a core business function in commercial companies but is also frequently used by not-for-profit organisations. Marketing focuses on understanding what people value in order to make choices about engaging with a product or service: a concept also key to understanding why people may choose to engage with a clinical trial. Understanding the needs and values of stakeholders, whether they are participants, staff at recruiting sites or policy-makers, is critical for a clinical trial to be a success. As many trials fail to recruit and retain participants, perhaps it is time for us to consider approaches from other disciplines. Though clinical trial teams may consider evidence- and non-evidence-based recruitment and retention strategies, this is rarely done in a systematic, streamlined way and is often in response to challenges once the trial has started. In this short commentary, we argue the need for a formal marketing approach to be applied to clinical trials, from the outset, as a potential way to prevent recruitment and retention problems.
Affiliation(s)
- E J Mitchell, Nottingham Clinical Trials Unit, Applied Health Research Building, University Park, University of Nottingham, Nottingham, NG7 2RD, UK
- K Sprange, Nottingham Clinical Trials Unit, Applied Health Research Building, University Park, University of Nottingham, Nottingham, NG7 2RD, UK
- S Treweek, Health Services Research Unit, Health Sciences Building, University of Aberdeen, Aberdeen, AB25 2ZD, UK
- E Nixon, Nottingham University Business School, Jubilee Campus, University of Nottingham, Nottingham, NG8 1BB, UK
31
Pirosca S, Shiely F, Clarke M, Treweek S. Tolerating bad health research: the continuing scandal. Trials 2022; 23:458. [PMID: 35655288] [PMCID: PMC9161194] [DOI: 10.1186/s13063-022-06415-5]
Abstract
BACKGROUND At the 2015 REWARD/EQUATOR conference on research waste, the late Doug Altman revealed that his only regret about his 1994 BMJ paper 'The scandal of poor medical research' was that he used the word 'poor' rather than 'bad'. But how much research is bad? And what would improve things? MAIN TEXT We focus on randomised trials and look at scale, participants and cost. We randomly selected up to two quantitative intervention reviews published by all clinical Cochrane Review Groups between May 2020 and April 2021. Data including the risk of bias, number of participants, intervention type and country were extracted for all trials included in the selected reviews. Trials at high risk of bias were classed as bad. The cost of bad trials was estimated using published estimates of trial cost per participant. We identified 96 reviews authored by 546 reviewers from 49 clinical Cochrane Review Groups that included 1659 trials done in 84 countries. Of the 1640 trials providing risk of bias information, 1013 (62%) were at high risk of bias (bad), 494 (30%) at unclear risk, and 133 (8%) at low risk of bias. Bad trials were spread across all clinical areas and all countries. Well over 220,000 participants (or 56% of all participants) were in bad trials. Our low estimate of the cost of bad trials was £726 million; our high estimate was over £8 billion. We have five recommendations: trials should be neither funded (1) nor given ethical approval (2) unless they have a statistician and methodologist; trialists should use a risk of bias tool at design (3); more statisticians and methodologists should be trained and supported (4); there should be more funding into applied methodology research and infrastructure (5). CONCLUSIONS Most randomised trials are bad and most trial participants will be in one. The research community has tolerated this for decades. This has to stop: we need to put rigour and methodology where it belongs - at the centre of our science.
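As a back-of-envelope illustration of how such a cost range arises, the totals can be reproduced by multiplying the participants in bad trials by a per-participant cost. The per-participant figures below are back-calculated assumptions for illustration, not necessarily the published inputs the authors used.

```python
# Back-of-envelope check of the cost estimate in the abstract above.
# Per-participant costs are illustrative back-calculations (GBP), not
# necessarily the published figures the authors used.
participants_in_bad_trials = 220_000   # "well over 220,000" per the abstract

cost_low_per_participant = 3_300       # assumed low per-participant cost
cost_high_per_participant = 36_400     # assumed high per-participant cost

low_total = participants_in_bad_trials * cost_low_per_participant
high_total = participants_in_bad_trials * cost_high_per_participant

print(f"low estimate:  £{low_total / 1e6:,.0f} million")   # ~£726 million
print(f"high estimate: £{high_total / 1e9:,.1f} billion")  # ~£8.0 billion
```

The point of the sketch is that the headline range is driven almost entirely by the uncertainty in the per-participant cost, which spans an order of magnitude.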
Affiliation(s)
- Stefania Pirosca, Health Services Research Unit, University of Aberdeen, Foresterhill, Aberdeen, AB25 2ZD, UK
- Frances Shiely, Trials Research and Methodologies Unit, HRB Clinical Research Facility, University College Cork, Cork, Ireland; School of Public Health, University College Cork, Cork, Ireland
- Mike Clarke, Northern Ireland Methodology Hub, Queen's University Belfast, Belfast, UK
- Shaun Treweek, Health Services Research Unit, University of Aberdeen, Foresterhill, Aberdeen, AB25 2ZD, UK
32
Bradley SH, DeVito NJ, Lloyd KE, Logullo P, Butler JE. Improving medical research in the United Kingdom. BMC Res Notes 2022; 15:165. [PMID: 35562775] [PMCID: PMC9100293] [DOI: 10.1186/s13104-022-06050-y]
Abstract
Poor quality medical research causes serious harms by misleading healthcare professionals and policymakers, decreasing trust in science and medicine, and wasting public funds. Here we outline underlying problems including insufficient transparency, dysfunctional incentives, and reporting biases. We make the following recommendations to address these problems: Journals and funders should ensure authors fulfil their obligation to share detailed study protocols, analytical code, and (as far as possible) research data. Funders and journals should incentivise uptake of registered reports and establish funding pathways which integrate evaluation of funding proposals with initial peer review of registered reports. A mandatory national register of interests for all those who are involved in medical research in the UK should be established, with an expectation that individuals maintain the accuracy of their declarations and regularly update them. Funders and institutions should stop using metrics such as citations and journal impact factor to assess research and researchers and should instead evaluate based on quality, reproducibility, and societal value. Employers and non-academic training programmes for health professionals (clinicians hired for patient care, not to do research) should not select based on number of research publications. Promotions based on publication should be restricted to those hired to do research.
Affiliation(s)
- Stephen H. Bradley, Leeds Institute of Health Sciences, University of Leeds, Worsley Building, Leeds, LS2 9JT, UK
- Nicholas J. DeVito, The DataLab and Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, New Radcliffe House, 2nd floor, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
- Kelly E. Lloyd, Leeds Institute of Health Sciences, University of Leeds, Worsley Building, Leeds, LS2 9JT, UK
- Patricia Logullo, UK EQUATOR Centre, Centre for Statistics in Medicine, NDORMS, University of Oxford, Windmill Road, Oxford, OX3 7LD, UK
- Jessica E. Butler, Centre for Health Data Science, University of Aberdeen, Aberdeen, AB25 2ZD, UK
33
Sauerbrei W, Haeussler T, Balmford J, Huebner M. Structured reporting to improve transparency of analyses in prognostic marker studies. BMC Med 2022; 20:184. [PMID: 35546237] [PMCID: PMC9095054] [DOI: 10.1186/s12916-022-02304-5]
Abstract
BACKGROUND Factors contributing to the lack of understanding of research studies include poor reporting practices, such as selective reporting of statistically significant findings or insufficient methodological details. Systematic reviews have shown that prognostic factor studies continue to be poorly reported, even for important aspects such as the effective sample size. The REMARK reporting guidelines support researchers in reporting key aspects of tumor marker prognostic studies. The REMARK profile was proposed to augment these guidelines to aid in structured reporting, with an emphasis on including all aspects of the analyses conducted. METHODS A systematic search of prognostic factor studies was conducted, and fifteen studies published in 2015 were selected, three from each of five oncology journals. A paper was eligible for selection if it included survival outcomes and multivariable models were used in the statistical analyses. For each study, we summarized the key information in a REMARK profile consisting of details about the patient population with available variables and follow-up data, and a list of all analyses conducted. RESULTS Structured profiles allow an easy assessment of whether the reporting of a study merely has weaknesses or is poor because many relevant details are missing. Studies had incomplete reporting of the exclusion of patients, missing information about the number of events, or lacked details about statistical analyses, e.g., subgroup analyses in small populations without any information about the number of events. The profiles exhibited severe weaknesses in the reporting of more than 50% of the studies. The quality of the analyses was not assessed, but some profiles exhibit several deficits at a glance. CONCLUSIONS A substantial proportion of prognostic factor studies are poorly reported and analyzed, with severe consequences for related systematic reviews and meta-analyses. We consider inadequate reporting of single studies to be one of the most important reasons that the clinical relevance of most markers is still unclear after years of research and dozens of publications. We conclude that structured reporting is an important step toward improving the quality of prognostic marker research, and we discuss its role in the context of selective reporting, meta-analysis, study registration, predefined statistical analysis plans, and improvement of marker research.
Affiliation(s)
- Willi Sauerbrei, Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Freiburg, Germany
- Tim Haeussler, Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Freiburg, Germany
- James Balmford, Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Freiburg, Germany
- Marianne Huebner, Department of Statistics and Probability, Michigan State University, East Lansing, MI, USA
34
McCradden MD, Anderson JA, Stephenson EA, Drysdale E, Erdman L, Goldenberg A, Zlotnik Shaul R. A research ethics framework for the clinical translation of healthcare machine learning. Am J Bioeth 2022; 22:8-22. [PMID: 35048782] [DOI: 10.1080/15265161.2021.2013977]
Abstract
The application of artificial intelligence and machine learning (ML) technologies in healthcare has immense potential to improve the care of patients. While there are some emerging practices surrounding responsible ML, as well as regulatory frameworks, the traditional role of research ethics oversight has been relatively unexplored with regard to its relevance for clinical ML. In this paper, we provide a comprehensive research ethics framework that can be applied to the systematic inquiry of ML research across its development cycle. The pathway consists of three stages: (1) exploratory, hypothesis-generating data access; (2) silent period evaluation; (3) prospective clinical evaluation. We connect each stage to its literature and ethical justification and suggest adaptations to traditional paradigms to suit ML while maintaining ethical rigor and the protection of individuals. This pathway can accommodate a multitude of research designs, from observational to controlled trials, and the stages can apply individually to a variety of ML applications.
Affiliation(s)
- Melissa D McCradden, Department of Bioethics, The Hospital for Sick Children; Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning; Division of Clinical & Public Health, Dalla Lana School of Public Health
- James A Anderson, Department of Bioethics, The Hospital for Sick Children; Institute of Health Policy, Management and Evaluation, University of Toronto
- Elizabeth A Stephenson, Labatt Family Heart Centre and Department of Pediatrics, The Hospital for Sick Children
- Erik Drysdale, Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Lauren Erdman, Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning; Vector Institute; Department of Computer Science, University of Toronto
- Anna Goldenberg, Department of Bioethics, The Hospital for Sick Children; Vector Institute; Department of Computer Science, University of Toronto; CIFAR
- Randi Zlotnik Shaul, Department of Bioethics, Department of Pediatrics, and Child Health Evaluative Sciences, The Hospital for Sick Children
35
Sofi-Mahmudi A, Raittio E. Transparency of COVID-19-related research in dental journals. Front Oral Health 2022; 3:871033. [PMID: 35464778] [PMCID: PMC9019132] [DOI: 10.3389/froh.2022.871033]
Abstract
OBJECTIVE We aimed to assess the adherence to transparency practices (data availability, code availability, statements of protocol registration, and conflicts of interest and funding disclosures) and the FAIRness (Findable, Accessible, Interoperable, and Reusable) of shared data from open-access COVID-19-related articles published in dental journals available from the Europe PubMed Central (PMC) database. METHODS We searched and exported all COVID-19-related open-access articles from PubMed-indexed dental journals available in the Europe PMC database in 2020 and 2021. We detected transparency indicators with a validated and automated tool developed to extract the indicators from the downloaded articles. Basic journal- and article-related information was retrieved from the PMC database. Then, for those articles which had shared data, we assessed their accordance with FAIR data principles using the F-UJI online tool (f-uji.net). RESULTS Of 650 available articles published in 59 dental journals, 74% provided a conflict of interest disclosure, 40% a funding disclosure, and 4% were preregistered. One study shared raw data (0.15%) and no study shared code. Transparent practices were more common in articles published in journals with higher impact factors, and in 2020 than in 2021. Adherence to the FAIR principles in the only paper that shared data was moderate. CONCLUSION While the majority of the papers had a COI disclosure, the prevalence of the other transparency practices was far from the acceptable level. A much stronger commitment to open science practices, particularly to preregistration, data and code sharing, is needed from all stakeholders.
Affiliation(s)
- Ahmad Sofi-Mahmudi, Seqiz Health Network, Kurdistan University of Medical Sciences, Sanandaj, Iran; Cochrane Iran Associate Centre, National Institute for Medical Research Development, Tehran, Iran
- Eero Raittio, Institute of Dentistry, University of Eastern Finland, Kuopio, Finland
36
Minogue V, Morrissey M, Terres A. Supporting researchers in knowledge translation and dissemination of their research to increase usability and impact. Qual Life Res 2022; 31:2959-2968. [PMID: 35303224] [DOI: 10.1007/s11136-022-03122-1]
Abstract
PURPOSE One of the key areas of delivery of the 'Action Plan for Health Research 2019-2029' for the Health Service Executive (HSE) in Ireland is adding value and using data and knowledge, including health-related quality of life (HRQoL), for improved health care, service delivery, and better population health and wellbeing. The development of governance, management, and support frameworks and mechanisms will provide a structure for ensuring research is relevant to the organisation's service plan, is well designed, has a clear plan for dissemination and translation of knowledge, and minimises research waste. Developing a process for the translation, dissemination, and impact of research is part of the approach to improving translation of research into practice and aligning it with knowledge gaps. A project was undertaken to develop a clear, unified, universally applicable approach for the translation, dissemination, and impact of research undertaken by HSE staff and commissioned, sponsored, or hosted by the organisation. This included the development of guidance, training, and information for researchers. METHODS Through an iterative process, an interdisciplinary working group of experts in knowledge translation (KT), implementation science, quality improvement, and research management identified KT frameworks and tools to form a KT, dissemination, and impact process for the HSE. This involved a literature review, screening of 247 KT theories, models, and frameworks (TMFs), review of 18 TMFs selected as usable and applicable to the HSE, selection of 11 for further review, and final review of 6 TMFs in a consensus workshop. An anonymous online survey of HSE researchers, consisting of a mixture of multiple-choice and free-text questions, was undertaken to inform the development of the guidance and training. RESULTS A pilot of the KT process and guidance, involving HSE researchers testing its use at various stages of their research, demonstrated the need to guide researchers through planning, stakeholder engagement, and disseminating research knowledge, and to provide information that could easily be understood by novice as well as more experienced researchers. A survey of all active researchers across the organisation identified their support and knowledge requirements and led to the development of accompanying guidance to support researchers in the use of the process. Researchers of all levels reported that they struggled to engage with stakeholders, including evidence users and policy makers, to optimise the impact of their research. They wanted tools that would support better engagement and maximise the value of KT. As a result of the project, a range of information, guidance, and training resources has been developed. CONCLUSION KT is a complex area and researchers need support to ensure they maximise the value of their research. The KT process outlined enables the distilling of a clear message, provides a process to engage with stakeholders and create a plan that incorporates local and political context, and can show a means to evaluate how much the findings are applied in practice. This is a beneficial application of KT in the field of patient-reported outcomes. In implementing this work, we have reinforced the message that stakeholder engagement is crucial from the start of the research study and increases engagement in, and ownership of, the research knowledge.
Affiliation(s)
- Virginia Minogue, Strategy and Research, HSE Research and Development, Jervis House, Jervis Street, Dublin 1, Ireland
- Mary Morrissey, HSE Research and Evidence, Strategy and Research, 4th Floor, Jervis House, Jervis Street, Dublin 1, Ireland
- Ana Terres, Strategy and Research, HSE, Jervis House, Jervis Street, Dublin 1, Ireland
37
Sharma PD, Cotton PM. “New Year’s resolution for 2022: Incontrovertible evidence in medical research”. Trop Doct 2022; 52:2. [DOI: 10.1177/00494755221075682]
Affiliation(s)
- Michael Cotton, Professor of Clinical Practice, Consultant Surgeon, Quai Santé, Montreux, Switzerland
38
Reynolds PS. Between two stools: preclinical research, reproducibility, and statistical design of experiments. BMC Res Notes 2022; 15:73. [PMID: 35189946] [PMCID: PMC8862533] [DOI: 10.1186/s13104-022-05965-w]
Abstract
Translation of animal-based preclinical research is hampered by issues of poor validity and reproducibility. Unfortunately, preclinical research has ‘fallen between the stools’ of competing study design traditions. Preclinical studies are often characterised by small sample sizes, large variability, and ‘problem’ data. Although Fisher-type designs with randomisation and blocking are appropriate and have been vigorously promoted, structured statistically-based designs are almost unknown. Traditional analysis methods are commonly misapplied, and basic terminology and principles of inference testing misinterpreted. Problems are compounded by the lack of adequate statistical training for researchers, and the failure of statistical educators to account for the unique demands of preclinical research. The solution is a return to the basics: statistical education tailored to non-statistician investigators, with clear communication of statistical concepts, and curricula that address design and data issues specific to preclinical research. Statistics curricula should focus on statistics as process: data sampling and study design before analysis and inference. Properly-designed and analysed experiments are a matter of ethics as much as procedure. Shifting the focus of statistical education from rote hypothesis testing to sound methodology will reduce the number of animals wasted in noninformative experiments and increase the overall scientific quality and value of published research.
39
Errington TM, Mathur M, Soderberg CK, Denis A, Perfito N, Iorns E, Nosek BA. Investigating the replicability of preclinical cancer biology. eLife 2021; 10:e71601. [PMID: 34874005] [PMCID: PMC8651293] [DOI: 10.7554/elife.71601]
Abstract
Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary - the replication was either a success or a failure - and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
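The effect-size comparison method described above can be sketched as follows, with invented effect sizes standing in for the project's data.

```python
import statistics

# Illustrative sketch of the effect-size comparison method: what share of
# replication effects were smaller than the originals, and how much smaller
# the median replication effect was. Effect sizes here are made up.
original_effects =    [1.20, 0.80, 2.50, 0.60, 1.50]
replication_effects = [0.15, 0.10, 0.40, 0.70, 0.20]

# share of replications whose effect size is smaller than the original
n_smaller = sum(r < o for r, o in zip(replication_effects, original_effects))
share_smaller = n_smaller / len(original_effects)

# how much smaller the median replication effect is than the median original
median_shrinkage = 1 - (statistics.median(replication_effects)
                        / statistics.median(original_effects))

print(f"{share_smaller:.0%} of replication effects were smaller")
print(f"median replication effect was {median_shrinkage:.0%} smaller")
```

Note that the two summaries answer different questions: the first counts pairwise comparisons, while the second compares the central tendency of the two distributions.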
Affiliation(s)
- Maya Mathur, Quantitative Sciences Unit, Stanford University, Stanford, United States
- Brian A Nosek, Center for Open Science, Charlottesville, United States; University of Virginia, Charlottesville, United States
40
Evidence Supporting Anesthesiology Guidelines: Comment. Anesthesiology 2021; 135:1162-1163. [PMID: 34610095] [DOI: 10.1097/aln.0000000000004018]
41
Kurth T. Continuing to advance epidemiology. Front Epidemiol 2021; 1:782374. [PMID: 38455238] [PMCID: PMC10910999] [DOI: 10.3389/fepid.2021.782374]
Affiliation(s)
- Tobias Kurth, Institute of Public Health, Charité - Universitätsmedizin Berlin, Berlin, Germany
42
Reynard C, Martin GP, Kontopantelis E, Jenkins DA, Heagerty A, McMillan B, Jafar A, Garlapati R, Body R. Advanced cardiovascular risk prediction in the emergency department: updating a clinical prediction model - a large database study protocol. Diagn Progn Res 2021; 5:16. [PMID: 34620253] [PMCID: PMC8499458] [DOI: 10.1186/s41512-021-00105-7]
Abstract
BACKGROUND Patients presenting with chest pain represent a large proportion of attendances to emergency departments. In these patients clinicians often consider the diagnosis of acute myocardial infarction (AMI), the timely recognition and treatment of which is clinically important. Clinical prediction models (CPMs) have been used to enhance early diagnosis of AMI. The Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid is currently in clinical use across Greater Manchester. CPMs have been shown to deteriorate over time through calibration drift. We aim to assess potential calibration drift with T-MACS and compare methods for updating the model. METHODS We will use routinely collected electronic data from patients who were treated using T-MACS at two large NHS hospitals. This is estimated to include approximately 14,000 patient episodes spanning June 2016 to October 2020. The primary outcome of acute myocardial infarction will be sourced from NHS Digital's admitted patient care dataset. We will assess the calibration drift of the existing model and the benefit of updating the CPM by model recalibration, model extension and dynamic updating. These models will be validated by bootstrapping and one-step-ahead prequential testing. We will evaluate predictive performance using calibration plots and c-statistics. We will also examine the reclassification of predicted probability with the updated T-MACS model. DISCUSSION CPMs are widely used in modern medicine, but are vulnerable to deteriorating calibration over time. Ongoing refinement using routinely collected electronic data will inevitably be more efficient than deriving and validating new models. In this analysis we will seek to exemplify methods for updating CPMs to protect the initial investment of time and effort. If successful, the updating methods could be used to continually refine the algorithm used within T-MACS, maintaining or even improving predictive performance over time.
TRIAL REGISTRATION ISRCTN41008456.
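The simplest of the updating methods named in this protocol, model recalibration, can be sketched as an intercept update: keep the existing model's coefficients frozen and re-estimate only the baseline risk on new data. Everything below (the stand-in model, the size of the drift, and the synthetic data) is an assumption for illustration; it is not the T-MACS model.

```python
import math
import random

# Sketch of logistic intercept recalibration on synthetic data: an existing
# prediction model's linear predictor is kept fixed, and only a calibration
# intercept is re-estimated to correct drifted baseline risk.
random.seed(1)

def old_model_logit(x):
    # stand-in for the frozen, previously derived clinical prediction model
    return -2.0 + 1.5 * x

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# new data in which baseline risk has drifted upward by +0.8 on the logit scale
data = []
for _ in range(5000):
    x = random.gauss(0, 1)
    y = 1 if random.random() < sigmoid(old_model_logit(x) + 0.8) else 0
    data.append((old_model_logit(x), y))

# intercept update: find a such that predictions sigmoid(lp + a) match the
# observed outcomes on average (1-D Newton-Raphson on the score equation)
a = 0.0
for _ in range(50):
    score = sum(y - sigmoid(lp + a) for lp, y in data)
    info = sum(sigmoid(lp + a) * (1 - sigmoid(lp + a)) for lp, _ in data)
    a += score / info

print(f"estimated calibration intercept: {a:.2f}")  # close to the true drift of 0.8
```

Model extension and dynamic updating go further, re-estimating slopes or adding predictors, but they follow the same pattern: refit against the old model's linear predictor rather than rebuilding from scratch.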
Affiliation(s)
- Charles Reynard, Division of Cardiovascular Sciences, University of Manchester, Manchester, UK; Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
- Glen P. Martin, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Evangelos Kontopantelis, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- David A. Jenkins, Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Anthony Heagerty, Division of Cardiovascular Sciences, University of Manchester, Manchester, UK
- Brian McMillan, Centre for Primary Care and Health Services Research, Division of Population Health, Health Services Research and Primary Care, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Anisa Jafar, Humanitarian and Conflict Response Institute, University of Manchester, Manchester, UK
- Rajendar Garlapati, Emergency Department, Royal Blackburn Hospital, East Lancashire Hospitals NHS Trust, Burnley, UK
- Richard Body, Division of Cardiovascular Sciences, University of Manchester, Manchester, UK; Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
43
Jesus-Silva SGD, Antonio ACP. Research integrity in times of pandemic. Revista Ciências em Saúde 2021. [DOI: 10.21876/rcshci.v11i3.1220]
Abstract
In 1994, Douglas Graham Altman, one of the greatest statisticians of all time, wrote: "We need less research, better research, and research done for good reasons". Twenty-seven years ago, Altman pointed out that the system favored unscientific behavior and that "bad science" was easy to publish, highlighting the financial implications of this amount of poorly designed research, with erroneous statistical methods, unrepresentative samples, or fraud. The covid-19 pandemic has once again put clinical research in check. The pressure for urgent responses was unprecedented. Knowledge of the origin of the virus, the transmission dynamics, the pathophysiology of the disease, and effective pharmacological and non-pharmacological measures would be counted in lives, as well as in economies and governments.
|
44
|
Impellizzeri FM, McCall A, van Smeden M. Why methods matter in a meta-analysis: a reappraisal showed inconclusive injury preventive effect of Nordic hamstring exercise. J Clin Epidemiol 2021; 140:111-124. [PMID: 34520846 DOI: 10.1016/j.jclinepi.2021.09.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 08/24/2021] [Accepted: 09/07/2021] [Indexed: 01/06/2023]
Abstract
OBJECTIVES The Nordic hamstring exercise (NHE) has been strongly recommended to reduce hamstring injuries in previous meta-analyses (50% reduction in risk of injury). To underline the importance and impact of adopting appropriate methodology for evidence synthesis, we revisited the study selection and reanalyzed and updated the findings of the most recent meta-analysis. STUDY DESIGN AND SETTING Only randomized controlled trials (RCTs) using NHE in at least one of the prevention arms were selected. Summary effects (risk ratios, RR) for the original studies included in the earlier meta-analysis, and for newly identified studies (update), were re-estimated under the random-effects model and presented with 95% confidence intervals (CI) and prediction intervals (PI). Tentative recommendations were provided according to the Grading of Recommendations Assessment, Development and Evaluation approach. RESULTS Only five of the 15 studies included in the earlier meta-analysis were RCTs that randomized to NHE. Our update revealed one additional RCT. The point estimate (RR) for the five previously included RCTs was 0.56 (95% CI, 0.20-1.52; 95% PI, 0.06-5.14 parametric and 0.13-1.80 nonparametric). After the update, the RR was 0.59 (95% CI, 0.27-1.29; 95% PI, 0.10-3.29 parametric and 0.17-1.52 nonparametric). CONCLUSION Contrary to the conclusions of a recent meta-analysis, as well as earlier meta-analyses, under this more appropriate methodology the evidence underpinning the protective effect of NHE remains inconclusive and is mostly derived from RCTs at high risk of bias. At best, only a conditional recommendation can be provided (for soccer), and future RCTs are warranted.
Affiliation(s)
| | - Alan McCall
- University of Technology, Faculty of Health, Sydney, New South Wales, Australia; Arsenal Performance and Research Team, Arsenal Football Club, London, UK
| | - Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
| |
|