1. Liu S, Bourgeois FT, Narang C, Dunn AG. A comparison of machine learning methods to find clinical trials for inclusion in new systematic reviews from their PROSPERO registrations prior to searching and screening. Res Synth Methods 2024; 15:73-85. PMID: 37749068; PMCID: PMC10872991; DOI: 10.1002/jrsm.1672.
Abstract
Searching for trials is a key task in systematic reviews and a focus of automation. Previous approaches required knowing examples of relevant trials in advance, and most methods focus on published trial articles. To complement existing tools, we compared methods for finding relevant trial registrations given an International Prospective Register of Systematic Reviews (PROSPERO) entry, where no relevant trials have yet been screened for inclusion. We compared SciBERT-based (an extension of Bidirectional Encoder Representations from Transformers) PICO extraction, MetaMap, and term-based representations using an imperfect dataset mined from 3632 PROSPERO entries connected to a subset of 65,662 trial registrations and 65,834 trial articles known to be included in systematic reviews. Performance was measured by the median rank and recall by rank of trials that were eventually included in the published systematic reviews. When ranking trial registrations relative to PROSPERO entries, 296 trial registrations needed to be screened to identify half of the relevant trials, and the best-performing approach used a basic term-based representation. When ranking trial articles relative to PROSPERO entries, 162 trial articles needed to be screened to identify half of the relevant trials, and the best-performing approach again used a term-based representation. The results show that MetaMap and term-based representations outperformed approaches that included PICO extraction for this use case. They suggest that, when starting from a PROSPERO entry with no trials yet screened for inclusion, automated methods can reduce workload, but additional processes are still needed to efficiently identify trial registrations or trial articles that meet the inclusion criteria of a systematic review.
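The evaluation described in this abstract ranks candidate trial registrations against a PROSPERO entry and reports the median rank and recall by rank of the relevant trials. A minimal sketch of those two metrics, using hypothetical trial IDs rather than the authors' data or code:

```python
from statistics import median

def rank_metrics(ranked_ids, relevant_ids, k):
    """Return the 1-based ranks of relevant items in a ranked candidate
    list, their median rank, and recall@k (fraction of relevant items
    appearing in the top k positions)."""
    positions = [i + 1 for i, doc in enumerate(ranked_ids) if doc in relevant_ids]
    med = median(positions) if positions else None
    recall_at_k = sum(1 for p in positions if p <= k) / len(relevant_ids)
    return positions, med, recall_at_k

# Toy example: 3 relevant trials hidden among 10 ranked candidates.
ranked = ["t3", "t7", "t1", "t9", "t2", "t5", "t8", "t4", "t6", "t0"]
relevant = {"t1", "t2", "t4"}
positions, med, r5 = rank_metrics(ranked, relevant, k=5)
print(positions, med, r5)  # positions [3, 5, 8], median 5, recall@5 = 2/3
```

The "296 registrations screened to find half of the relevant trials" result corresponds to the median rank over a whole collection of review-to-trial queries.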
Affiliation(s)
- Shifeng Liu: Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Florence T Bourgeois: Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA
- Claire Narang: Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
- Adam G Dunn: Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
2. Barker DH, Bie R, Steingrimsson JA. Addressing Systematic Missing Data in the Context of Causally Interpretable Meta-analysis. Prev Sci 2023; 24:1648-1658. PMID: 37726579; DOI: 10.1007/s11121-023-01586-2.
Abstract
Evidence synthesis involves drawing conclusions from trial samples that may differ from the target population of interest, and there is often heterogeneity among trials in sample characteristics, treatment implementation, study design, and assessment of covariates. Stitching together this patchwork of evidence requires subject-matter knowledge, a clearly defined target population, and guidance on how to weigh evidence from different trials. Transportability analysis has provided formal identifiability conditions required to make unbiased causal inference in the target population. In this manuscript, we review these conditions along with an additional assumption required to address systematic missing data. The identifiability conditions highlight the importance of accounting for differences in treatment effect modifiers between the populations underlying the trials and the target population. We perform simulations to evaluate the bias of conventional random effect models and multiply imputed estimates using the pooled trials sample and describe causal estimators that explicitly address trial-to-target differences in key covariates in the context of systematic missing data. Results indicate that the causal transportability estimators are unbiased when treatment effect modifiers are accounted for in the analyses. Results also highlight the importance of carefully evaluating identifiability conditions for each trial to reduce bias due to differences in participant characteristics between trials and the target population. Bias can be limited by adjusting for covariates that are strongly correlated with missing treatment effect modifiers, including data from trials that do not differ from the target on treatment modifiers, and removing trials that do differ from the target and did not assess a modifier.
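The transportability adjustment described in this abstract reweights trial evidence so that the distribution of treatment effect modifiers matches the target population rather than the trial sample. A toy standardization sketch of that idea, with hypothetical effect sizes and prevalences (not the authors' estimator or simulation):

```python
def transported_ate(stratum_effects, target_weights):
    """Standardize stratum-specific treatment effects to a target
    population: weight each stratum (a level of an effect modifier)
    by its prevalence in the target rather than in the trial."""
    assert abs(sum(target_weights.values()) - 1.0) < 1e-9
    return sum(stratum_effects[s] * w for s, w in target_weights.items())

# Hypothetical: the effect is 2.0 when modifier X=0 and 5.0 when X=1.
# The trial had P(X=1)=0.3, but the target population has P(X=1)=0.6,
# so the naive trial average understates the target-population effect.
effects = {0: 2.0, 1: 5.0}
trial_ate = transported_ate(effects, {0: 0.7, 1: 0.3})   # trial mix: 2.9
target_ate = transported_ate(effects, {0: 0.4, 1: 0.6})  # target mix: 3.8
print(trial_ate, target_ate)
```

This is the simplest case of the identifiability conditions the abstract highlights: the adjustment is only unbiased when all effect modifiers that differ between trial and target are measured and accounted for.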
Affiliation(s)
- David H Barker: Department of Psychiatry and Human Behavior, The Warren Alpert Medical School of Brown University, Providence, RI, USA; Bradley Hasbro Children's Research Center, Providence, RI, USA
- Ruofan Bie: Department of Biostatistics, Brown University, Providence, RI, USA
3. Wu Y, Sun Y, Liu Y, Levis B, Krishnan A, He C, Neupane D, Patten SB, Cuijpers P, Ziegelstein RC, Benedetti A, Thombs BD. Depression screening tool accuracy individual participant data meta-analyses: data contribution was associated with multiple factors. J Clin Epidemiol 2023; 162:63-71. PMID: 37619800; DOI: 10.1016/j.jclinepi.2023.08.006.
Abstract
OBJECTIVES To examine the proportion of eligible primary studies that contributed data, study characteristics associated with data contribution, and reasons for noncontribution, using diagnostic test accuracy individual participant data meta-analysis (IPDMA) data sets from the DEPRESsion Screening Data project. STUDY DESIGN AND SETTING We reviewed data set contributions from four IPDMAs. A multivariable logistic regression model was fitted to evaluate study factors associated with data contribution. RESULTS Of 456 eligible studies from the four included IPDMAs, 295 (65%) contributed data. More recent year of publication and higher journal impact factor were associated with greater odds of data contribution. Studies conducted in Europe (excluding the United Kingdom), Oceania, Canada, the Middle East, Africa, or Central or South America (reference = the United States), studies that recruited from inpatient care or nonmedical settings (reference = outpatient), and studies that reported screening accuracy results or drew negative conclusions (reference = positive conclusions) were more likely to contribute data. Studies of the Geriatric Depression Scale (reference = the Patient Health Questionnaire) and studies lacking funding information were negatively associated with data contribution. Over 80% of noncontributions were due to authors being unreachable or data being unavailable. CONCLUSION The study identified factors associated with data contribution that may inform future efforts to promote data contribution to IPDMAs.
Affiliation(s)
- Yin Wu: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Psychiatry, McGill University, Montreal, Quebec, Canada
- Ying Sun: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Yi Liu: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Brooke Levis: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Centre for Prognosis Research, School of Medicine, Keele University, Staffordshire, UK
- Ankur Krishnan: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Chen He: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Dipika Neupane: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Scott B Patten: Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Pim Cuijpers: Department of Clinical, Neuro and Developmental Psychology, Amsterdam Public Health Research Institute, Vrije Universiteit, Amsterdam, The Netherlands
- Roy C Ziegelstein: Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Andrea Benedetti: Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada; Respiratory Epidemiology and Clinical Research Unit, McGill University Health Centre, Montreal, Quebec, Canada; Department of Medicine, McGill University, Montreal, Quebec, Canada
- Brett D Thombs: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Psychiatry, McGill University, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada; Department of Medicine, McGill University, Montreal, Quebec, Canada; Department of Psychology, McGill University, Montreal, Quebec, Canada; Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada; Biomedical Ethics Unit, McGill University, Montreal, Quebec, Canada
4. Pellen C, Le Louarn A, Spurrier-Bernard G, Decullier E, Chrétien JM, Rosenthal E, Le Goff G, Moher D, Ioannidis JPA, Naudet F. Ten (not so) simple rules for clinical trial data-sharing. PLoS Comput Biol 2023; 19:e1010879. PMID: 36893146; PMCID: PMC9997951; DOI: 10.1371/journal.pcbi.1010879.
Abstract
Clinical trial data-sharing is seen as an imperative for research integrity and is increasingly encouraged or even required by funders, journals, and other stakeholders. However, early experiences with data-sharing have been disappointing because it is not always conducted properly. Health data are sensitive and not always easy to share in a responsible way. We propose ten rules for researchers wishing to share their data. These rules cover most of the elements to be considered when starting the commendable process of clinical trial data-sharing:
- Rule 1: Abide by local legal and regulatory data protection requirements
- Rule 2: Anticipate the possibility of clinical trial data-sharing before obtaining funding
- Rule 3: Declare your intent to share data in the registration step
- Rule 4: Involve research participants
- Rule 5: Determine the method of data access
- Rule 6: Remember there are several other elements to share
- Rule 7: Do not proceed alone
- Rule 8: Deploy optimal data management to ensure that the data shared is useful
- Rule 9: Minimize risks
- Rule 10: Strive for excellence
Affiliation(s)
- Claude Pellen: Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, CIC 1414 (Centre d'Investigation Clinique de Rennes), Rennes, France
- Anne Le Louarn: GCS CNCR (Comité National de Coordination de la Recherche), Paris, France
- Evelyne Decullier: Hospices Civils de Lyon, Pôle Santé Publique, Service REC, Lyon, France; Université de Lyon, Lyon, France
- Eric Rosenthal: ANRS|Maladies infectieuses émergentes, PariSanté Campus, Paris, France
- David Moher: Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- John P. A. Ioannidis: Departments of Medicine, Epidemiology and Population Health, Biomedical Data Science, and Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America
- Florian Naudet: Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, CIC 1414 (Centre d'Investigation Clinique de Rennes), Rennes, France; Institut Universitaire de France (IUF), Paris, France
5. Siebert M, Gaba J, Renault A, Laviolle B, Locher C, Moher D, Naudet F. Data-sharing and re-analysis for main studies assessed by the European Medicines Agency: a cross-sectional study on European Public Assessment Reports. BMC Med 2022; 20:177. PMID: 35590360; PMCID: PMC9119701; DOI: 10.1186/s12916-022-02377-2.
Abstract
BACKGROUND Transparency and reproducibility are expected to be normative practices in clinical trials used for decision-making on marketing authorisations for new medicines. This registered report introduces a cross-sectional study aiming to assess inferential reproducibility for main trials assessed by the European Medicines Agency. METHODS Two researchers independently identified all studies on new medicines, biosimilars and orphan medicines given approval by the European Commission between January 2017 and December 2019, categorised as 'main studies' in the European Public Assessment Reports (EPARs). Sixty-two of these studies were randomly sampled. One researcher retrieved the individual patient data (IPD) for these studies and prepared a dossier for each study, containing the IPD, the protocol and information on the conduct of the study. A second researcher who had no access to study reports used the dossier to run an independent re-analysis of each trial. All results of these re-analyses were reported in terms of each study's conclusions, p-values, effect sizes and changes from the initial protocol. A team of two researchers not involved in the re-analysis compared results of the re-analyses with published results of the trial. RESULTS Two hundred ninety-two main studies in 173 EPARs were identified. Among the 62 studies randomly sampled, we received IPD for 10 trials. The median number of days between data request and data receipt was 253 (interquartile range 182-469). For these ten trials, we identified 23 distinct primary outcomes for which the conclusions were reproduced in all re-analyses. Therefore, 10/62 trials (16% [95% confidence interval 8% to 28%]) were reproduced, as the 52 studies without available data were considered non-reproducible. There was no change from the original study protocol regarding the primary outcome in any of these ten studies. Spin was observed in the report of one study. CONCLUSIONS Despite their results supporting decisions that affect millions of people's health across the European Union, most main studies used in EPARs lack transparency and their results are not reproducible for external researchers. Re-analyses of the few trials with available data showed very good inferential reproducibility. TRIAL REGISTRATION https://osf.io/mcw3t/.
Affiliation(s)
- Maximilian Siebert: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France
- Jeanne Gaba: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France
- Alain Renault: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France
- Bruno Laviolle: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France
- Clara Locher: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France
- David Moher: Center for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- Florian Naudet: Univ Rennes, CHU Rennes, Inserm, CIC 1414 (Centre d'Investigation Clinique de Rennes), F-35000, Rennes, France; Univ Rennes, CHU Rennes, Inserm, Irset (Institut de recherche en santé, environnement et travail), UMR_S 1085, F-35000, Rennes, France; Clinical Investigation Center (Inserm 1414) and Adult Psychiatry Department, Rennes University Hospital, Rennes, France
6. Liu S, Bourgeois FT, Dunn AG. Identifying unreported links between ClinicalTrials.gov trial registrations and their published results. Res Synth Methods 2022; 13:342-352. PMID: 34970844; PMCID: PMC9090946; DOI: 10.1002/jrsm.1545.
Abstract
A substantial proportion of trial registrations are not linked to corresponding published articles, limiting analyses and new tools. Our aim was to develop a method for finding articles that report the results of trials registered on ClinicalTrials.gov when they do not include metadata links. We used a set of 27,280 trial registration and article pairs to train and evaluate methods for identifying missing links in both directions: from articles to registrations and from registrations to articles. We trained a classifier with six distance metrics as feature representations to rank the correct article or registration, using recall@K to evaluate performance and compare with baseline methods. When identifying links from registrations to published articles, the classifier ranked the correct article first (recall@1) among 378,048 articles in 80.8% of evaluation cases, compared with 34.9% for the baseline method. Recall@10 was 85.1% compared with 60.7% for the baseline. When predicting links from articles to registrations, recall@1 was 83.4% for the classifier and 39.8% for the baseline; recall@10 was 89.5% compared with 65.8%. The proposed method improves on our baseline document-similarity method enough to be feasible for identifying missing links in practice. Given a ClinicalTrials.gov registration, a user checking 10 ranked articles can expect to identify the matching article in at least 85% of cases, if the trial has been published. The proposed method can be used to improve the coupling of ClinicalTrials.gov and PubMed, with applications in automating systematic review and evidence synthesis processes.
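recall@K, the metric reported in this abstract, is the fraction of queries whose single correct match appears among the top K ranked candidates. A minimal sketch with hypothetical ranks (not the authors' classifier or data):

```python
def recall_at_k(ranks, k):
    """ranks: for each query (e.g. a trial registration), the 1-based
    rank of its single correct match (None if it fell outside the
    candidate list). Returns the fraction of queries whose correct
    match appears in the top k."""
    hits = sum(1 for r in ranks if r is not None and r <= k)
    return hits / len(ranks)

# Hypothetical ranks of the correct article for 10 registrations.
ranks = [1, 1, 3, 12, None, 1, 2, 9, 1, 4]
print(recall_at_k(ranks, 1), recall_at_k(ranks, 10))  # 0.4 0.8
```

A recall@10 of 0.8 on this toy data means a user checking the top 10 ranked articles would find the matching article for 8 of the 10 registrations.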
Affiliation(s)
- Shifeng Liu: Faculty of Medicine and Health, The University of Sydney, Biomedical Informatics and Digital Health, School of Medical Sciences, Sydney, New South Wales, Australia
- Florence T Bourgeois: Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA; Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA
- Adam G Dunn: Faculty of Medicine and Health, The University of Sydney, Biomedical Informatics and Digital Health, School of Medical Sciences, Sydney, New South Wales, Australia; Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, USA
7. Wieseler B, McGauran N. From publication bias to lost in information: why we need a central public portal for clinical trial data. BMJ Evid Based Med 2022; 27:74-76. PMID: 33303480; PMCID: PMC8961768; DOI: 10.1136/bmjebm-2020-111566.
Affiliation(s)
- Beate Wieseler: Institute for Quality and Efficiency in Health Care, Köln, Germany
- Natalie McGauran: Institute for Quality and Efficiency in Health Care, Köln, Germany
8. McGauran N, Wieseler B. Centralised Full Access to Clinical Study Data Can Support Unbiased Guideline Development, Continuing Medical Education, and Patient Information. J Eur CME 2021; 10:1989172. PMID: 34868731; PMCID: PMC8635651; DOI: 10.1080/21614083.2021.1989172.
Affiliation(s)
- Natalie McGauran: Communications Department, Institute for Quality and Efficiency in Health Care (IQWIG), Cologne, Germany
- Beate Wieseler: Drug Assessment Department, Institute for Quality and Efficiency in Health Care (IQWIG), Cologne, Germany
9. Tan AC, Askie LM, Hunter KE, Barba A, Simes RJ, Seidler AL. Data sharing-trialists' plans at registration, attitudes, barriers and facilitators: A cohort study and cross-sectional survey. Res Synth Methods 2021; 12:641-657. PMID: 34057290; DOI: 10.1002/jrsm.1500.
Abstract
Data unavailability impedes research transparency and is a major problem for individual participant data (IPD) meta-analyses as it reduces statistical power, increases risk of bias, and may even preclude completion. The primary objectives of this study were to determine IPD sharing plans reported in recently registered clinical trial registration records, how data sharing commitment relates to clinical trial characteristics, and principal investigators' attitudes, motivations and barriers to data sharing. The secondary objective was to derive recommendations to overcome identified barriers to data sharing. This was a retrospective cohort study of all interventional trials registered on the Australian New Zealand Clinical Trials Registry (ANZCTR) from 1 December 2018 to 30 November 2019, and an online cross-sectional survey of their principal investigators. In the cohort study of all clinical trials registered on the ANZCTR in the study period (n = 1517), commitment to share data was low (22%, 329/1517). In the cross-sectional survey (n = 281, 23% response rate), principal investigators showed strong support for the concept of data sharing (77%, 216/281) but a substantially lower intention to actually share data from their clinical trials (40%, 111/281). Major barriers to data sharing included lacking informed consent to share data, protecting participant confidentiality and preventing misinterpretation of data or misleading secondary analyses. There is a gap between high in-principle support for data sharing, and low in-practice intention from investigators to share data from their own clinical trials. Multiple pathways exist to bridge this gap by addressing the identified barriers to data sharing.
Affiliation(s)
- Aidan Christopher Tan: NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Lisa M Askie: NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Angie Barba: NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Robert John Simes: NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
- Anna Lene Seidler: NHMRC Clinical Trials Centre, University of Sydney, Sydney, New South Wales, Australia
10. When Evidence Goes "Missing in Action": Implications for Patient Management in Cardiac Surgery. J Extra Corpor Technol 2020; 52:126-134. PMID: 32669739; DOI: 10.1182/ject-2000020.
Abstract
Best-practice clinical decision-making for patient blood management (PBM) and transfusion in cardiac surgery requires high-quality, timely information. However, evidence will be misleading if published information lags too far behind evolving practice, or if trial results are biased, incomplete, or unreported. The result is that providers are deprived of accurate data, and patients will not receive the best possible care. Publicly accessible trial registries provide information for structured audits of reporting compliance and for appraisal of evidence attrition and distortion. Trials related to blood management and transfusion in cardiac surgery that were registered in ClinicalTrials.gov were evaluated for relevance, reliability, transparency, timeliness, and prevalence of unreported results. Evidence was considered to have "disappeared" if no results were posted to the registry and no related PubMed publications were available by July 2019. Data were summarized by descriptive statistics. A total of 181 registered trials were surveyed; 52% were prospectively registered. The most commonly reported primary outcomes were laboratory surrogate measures (34%). Patient- and practice-relevant outcomes (mortality/major morbidity, 7%; transfusion, 27%; major bleeding, 28%) were less common. Only seven studies posted results to the registry within the mandated 12 months from study completion; the median time to posting was 17 (interquartile range [IQR] 13, 37) months. Trial results for 58% were unreported 3-9 years after trial completion. A staggering amount of clinical trial evidence for PBM in cardiac surgery is missing from publicly accessible records and the literature. Investigators must be incentivized to report all results promptly and completely. Penalties for noncompliance are already in place and should be enforced. Simplified information linkage, centralized and routine audit cycles, and prioritization of robust "living" reviews may be more positive motivators. Implementation will require a sea change in the prevailing culture of research reporting, plus coordinated efforts of clinicians, applied statisticians, information technology specialists, and research librarians.
11. Dunn AG, Bourgeois FT. Is it time for computable evidence synthesis? J Am Med Inform Assoc 2020; 27:972-975. PMID: 32337600; DOI: 10.1093/jamia/ocaa035.
Abstract
Efforts aimed at increasing the pace of evidence synthesis have been primarily focused on the use of published articles, but these are a relatively delayed, incomplete, and at times biased source of study results data. Compared to those in bibliographic databases, structured results data available in trial registries may be more timely, complete, and accessible, but these data remain underutilized. Key advantages of using structured results data include the potential to automatically monitor the accumulation of relevant evidence and use it to signal when a systematic review requires updating, as well as to prospectively assign trials to already published reviews. Shifting focus to emerging sources of structured trial data may provide the impetus to build a more proactive and efficient system of continuous evidence surveillance.
Affiliation(s)
- Adam G Dunn: Centre for Health Informatics, Macquarie University, Sydney, New South Wales, Australia; Discipline of Biomedical Informatics and Digital Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, United States
- Florence T Bourgeois: Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts, United States; Departments of Pediatrics and Emergency Medicine, Harvard Medical School, Boston, Massachusetts, United States
12. Azar M, Benedetti A, Riehm KE, Imran M, Krishnan A, Chiovitti M, Sanchez T, Shrier I, Thombs BD. Individual participant data meta-analyses (IPDMA): data contribution was associated with trial corresponding author country, publication year, and journal impact factor. J Clin Epidemiol 2020; 124:16-23. PMID: 32298776; DOI: 10.1016/j.jclinepi.2020.03.026.
Abstract
OBJECTIVES The objectives were to determine the proportion of eligible randomized controlled trials (RCTs) that contributed data to individual participant data meta-analyses (IPDMAs) and to explore associated factors. STUDY DESIGN AND SETTING IPDMAs with ≥10 eligible RCTs were identified by searching MEDLINE, EMBASE, CINAHL, and Cochrane from May 1, 2015 to February 13, 2017. Mixed-effect logistic regression was used to identify factors associated with data contribution. RESULTS Of 774 eligible RCTs from 35 included IPDMAs, 517 (67%, 95% confidence interval [CI]: 63%-70%) contributed data. Compared with RCTs from journals with low impact factors (0-2.4), RCTs from journals with higher impact factors were more likely to contribute data: impact factor 5.0-9.9, odds ratio [OR]: 2.6, 95% CI: 1.37-4.86; impact factor 10.0-19.9, OR: 5.7, 95% CI: 3.0-10.8; impact factor >20.0, OR: 4.6, 95% CI: 1.9-11.4. RCTs from the United Kingdom were more likely to contribute data than those from the United States (reference; OR: 2.4, 95% CI: 1.3-4.6). The odds of contribution also increased with publication year (OR per year: 1.05, 95% CI: 1.02-1.09). CONCLUSION The country where RCTs are conducted, the impact factor of the journal where RCTs are published, and RCT publication year were associated with data contribution in IPDMAs with ≥10 eligible RCTs.
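The odds ratios reported in this abstract come from a mixed-effect logistic regression; for a single binary factor, the unadjusted odds ratio and its Wald confidence interval can be computed directly from a 2x2 table. A sketch with hypothetical counts (not the study's data or its multivariable model):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & contributed data, b = exposed & did not,
    c = unexposed & contributed data, d = unexposed & did not."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(or_) - z * se_log_or)
    hi = exp(log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: trials in high-impact journals (60 contributed,
# 20 did not) vs low-impact journals (40 contributed, 40 did not).
or_, lo, hi = odds_ratio_ci(60, 20, 40, 40)
print(or_, lo, hi)  # OR = 3.0, CI roughly (1.5, 5.9)
```

A regression coefficient in the fitted model corresponds to log(OR) for that factor, adjusted for the other covariates; this unadjusted table version only illustrates the interpretation of the reported ORs and CIs.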
Affiliation(s)
- Marleine Azar: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- Andrea Benedetti: Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada; Respiratory Epidemiology and Clinical Research Unit, McGill University Health Centre, Montreal, Quebec, Canada
- Kira E Riehm: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Mahrukh Imran: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- Ankur Krishnan: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- Matthew Chiovitti: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Tatiana Sanchez: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada
- Ian Shrier: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada
- Brett D Thombs: Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, Montreal, Quebec, Canada; Department of Psychiatry, McGill University, Montreal, Quebec, Canada; Department of Medicine, McGill University, Montreal, Quebec, Canada; Department of Psychology, McGill University, Montreal, Quebec, Canada; Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada; Biomedical Ethics Unit, McGill University, Montreal, Quebec, Canada
13. Lalu M, Leung GJ, Dong YY, Montroy J, Butler C, Auer RC, Fergusson DA. Mapping the preclinical to clinical evidence and development trajectory of the oncolytic virus talimogene laherparepvec (T-VEC): a systematic review. BMJ Open 2019; 9:e029475. PMID: 31796474. PMCID: PMC7003485. DOI: 10.1136/bmjopen-2019-029475.
Abstract
OBJECTIVE This study aimed to conduct a systematic review of preclinical and clinical evidence to chart the successful trajectory of talimogene laherparepvec (T-VEC) from the bench to the clinic. DESIGN This study was a systematic review. The primary outcome of interest was the efficacy of treatment, determined by complete response. Abstract and full-text selection as well as data extraction were done by two independent reviewers. The Cochrane risk of bias tool was used to assess the risk of bias in studies. SETTING Embase, Embase Classic, and Ovid MEDLINE were searched from inception until May 2016, capturing T-VEC's development trajectory up to its approval in 2015. PARTICIPANTS Preclinical and clinical controlled comparison studies, as well as observational studies. INTERVENTIONS T-VEC for the treatment of any malignancy. RESULTS 8852 records were screened, and five preclinical (n=150 animals) and seven clinical studies (n=589 patients) were included. T-VEC's efficacy decreased markedly as studies moved from the laboratory to patients and as studies became more methodologically rigorous. Preclinical studies reported complete regression rates up to 100% for injected tumours and 80% for contralateral tumours, while the highest efficacy seen in the clinical setting was a 24% complete response rate, with one study reporting a complete response rate of 0%. We were unable to reliably assess safety due to the lack of reporting, as well as the heterogeneity of adverse event definitions. All preclinical studies had a high or unclear risk of bias, and all clinical studies were at high risk of bias in at least one domain. CONCLUSIONS Our findings illustrate that even successful biotherapeutics may not demonstrate a clear translational road map. This emphasises the need for greater rigour and transparency along the translational pathway. PROSPERO REGISTRATION NUMBER CRD42016043541.
Affiliation(s)
- Manoj Lalu: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute; Department of Anesthesiology and Pain Medicine, The Ottawa Hospital; Regenerative Medicine Program and Department of Cellular and Molecular Medicine, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Garvin J Leung: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute; Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Yuan Yi Dong: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute; Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Joshua Montroy: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Claire Butler: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Rebecca C Auer: Department of Surgery, The Ottawa Hospital; Cancer Therapeutics Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Dean A Fergusson: BLUEPRINT Translational Research Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute; Faculty of Medicine and School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
14. Zarin DA, Fain KM, Dobbins HD, Tse T, Williams RJ. 10-Year Update on Study Results Submitted to ClinicalTrials.gov. N Engl J Med 2019; 381:1966-1974. PMID: 31722160. PMCID: PMC8591666. DOI: 10.1056/nejmsr1907644.
Affiliation(s)
- Deborah A Zarin, Kevin M Fain, Heather D Dobbins, Tony Tse, Rebecca J Williams: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD
15. Li R, Sim I. How Clinical Trial Data Sharing Platforms Can Advance the Study of Biomarkers. J Law Med Ethics 2019; 47:369-373. PMID: 31560635. DOI: 10.1177/1073110519876165.
Abstract
Although data sharing platforms host diverse data types, the features of these platforms are well suited to facilitating biomarker research. Given the current state of biomarker discovery, an innovative paradigm for accelerating it is to use platforms such as Vivli to leverage researchers' abilities to integrate certain classes of biomarkers.
Affiliation(s)
- Rebecca Li, PhD: Vivli Center for Global Clinical Research Data; Center for Bioethics, Harvard Medical School
- Ida Sim, MD, PhD: Vivli Center for Global Clinical Research Data; University of California, San Francisco
16. Kuntz RE, Antman EM, Califf RM, Ingelfinger JR, Krumholz HM, Ommaya A, Peterson ED, Ross JS, Waldstreicher J, Wang SV, Zarin DA, Whicher DM, Siddiqi SM, Lopez MH. Individual Patient-Level Data Sharing for Continuous Learning: A Strategy for Trial Data Sharing. NAM Perspect 2019; 2019:201906b. PMID: 34532668. DOI: 10.31478/201906b.
17. Bashir R, Surian D, Dunn AG. The risk of conclusion change in systematic review updates can be estimated by learning from a database of published examples. J Clin Epidemiol 2019; 110:42-49. PMID: 30849512. DOI: 10.1016/j.jclinepi.2019.02.015.
Abstract
OBJECTIVES To determine which systematic review characteristics are needed to estimate the risk of conclusion change in systematic review updates. STUDY DESIGN AND SETTING We applied classification trees (a machine learning method) to model the risk of conclusion change in systematic review updates, using pairs of systematic reviews and their updates as samples. The classifiers were constructed using a set of features extracted from systematic reviews and the relevant trials added in published updates. Model performance was measured by recall, precision, and area under the receiver operating characteristic curve (AUC). RESULTS We identified 63 pairs of systematic reviews and updates, of which 20 (32%) exhibited a change in conclusion in their updates. A classifier using information about new trials exhibited the highest performance (AUC: 0.71; recall: 0.75; precision: 0.43) compared with a classifier that used fewer features (AUC: 0.65; recall: 0.75; precision: 0.39). CONCLUSION When estimating the risk of conclusion change in systematic review updates, information about the sizes of trials that will be added in an update is most useful. Future tools aimed at signaling conclusion change risks would benefit from complementary tools that automate screening of relevant trials.
Affiliation(s)
- Rabia Bashir: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales 2109, Australia
- Didi Surian: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales 2109, Australia
- Adam G Dunn: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales 2109, Australia; Computational Health Informatics Program, Boston Children's Hospital, Boston, MA 02115, USA
18. Bashir R, Dunn AG. Software engineering principles address current problems in the systematic review ecosystem. J Clin Epidemiol 2019; 109:136-141. PMID: 30582972. DOI: 10.1016/j.jclinepi.2018.12.014.
Abstract
Systematic reviewers are simultaneously unable to produce systematic reviews fast enough to keep up with the availability of new trial evidence while overproducing systematic reviews that are unlikely to change practice because they are redundant or biased. Although the transparency and completeness of trial reporting have improved with changes in policy and new technologies, systematic reviews have not yet benefited from the same level of effort. We found that new methods and tools used to automate aspects of systematic review processes have focused on improving the efficiency of individual systematic reviews rather than the efficiency of the entire ecosystem of systematic review production. We use software engineering principles to review challenges and opportunities for improving the interoperability, integrity, efficiency, and maintainability of the systematic review ecosystem. We conclude by recommending ways to improve access to structured systematic review results. Major opportunities for improving systematic reviews will come from new tools and changes in policy focused on doing the right systematic reviews rather than just doing more of them faster.
Affiliation(s)
- Rabia Bashir: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Adam G Dunn: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
19. Tse T, Fain KM, Zarin DA. How to avoid common problems when using ClinicalTrials.gov in research: 10 issues to consider. BMJ 2018; 361:k1452. PMID: 29802130. PMCID: PMC5968400. DOI: 10.1136/bmj.k1452.
Abstract
ClinicalTrials.gov, a repository of information about clinical studies and their results, together with specialised search tools, provides a unique window into the clinical research enterprise, which includes all initiated, ongoing, and completed or terminated clinical studies. Researchers are increasingly using information from the database to assess research reporting practices, or to characterise the clinical research enterprise. Conducting valid analyses requires an understanding of both the capabilities and limitations of the database (that is, intrinsic factors) as well as reporting policies and other factors external to the database that influence the types of studies in ClinicalTrials.gov in a specified time. This article discusses 10 key issues that researchers need to consider when using the database to conduct research.
Affiliation(s)
- Tony Tse, Kevin M Fain, Deborah A Zarin: National Library of Medicine, National Institutes of Health, Department of Health and Human Services, Bethesda, MD 20894, USA
20. Goldstein A, Venker E, Weng C. Evidence appraisal: a scoping review, conceptual framework, and research agenda. J Am Med Inform Assoc 2018; 24:1192-1203. PMID: 28541552. DOI: 10.1093/jamia/ocx050.
Abstract
Objective Critical appraisal of clinical evidence promises to help prevent, detect, and address flaws related to study importance, ethics, validity, applicability, and reporting. These research issues are of growing concern. The purpose of this scoping review is to survey the current literature on evidence appraisal to develop a conceptual framework and an informatics research agenda. Methods We conducted an iterative literature search of Medline for discussion or research on the critical appraisal of clinical evidence. After title and abstract review, 121 articles were included in the analysis. We performed qualitative thematic analysis to describe the evidence appraisal architecture and its issues and opportunities. From this analysis, we derived a conceptual framework and an informatics research agenda. Results We identified 68 themes in 10 categories. This analysis revealed that the practice of evidence appraisal is quite common but is rarely subjected to documentation, organization, validation, integration, or uptake. This is related to underdeveloped tools, scant incentives, and insufficient acquisition of appraisal data and transformation of the data into usable knowledge. Discussion The gaps in acquiring appraisal data, transforming the data into actionable information and knowledge, and ensuring its dissemination and adoption can be addressed with proven informatics approaches. Conclusions Evidence appraisal faces several challenges, but implementing an informatics research agenda would likely help realize the potential of evidence appraisal for improving the rigor and value of clinical evidence.
Affiliation(s)
- Andrew Goldstein: Department of Biomedical Informatics, Columbia University, New York, NY, USA
- Eric Venker: Department of Medicine, Columbia University, New York, NY, USA
- Chunhua Weng: Department of Biomedical Informatics, Columbia University, New York, NY, USA
21. Naudet F, Sakarovitch C, Janiaud P, Cristea I, Fanelli D, Moher D, Ioannidis JPA. Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in The BMJ and PLOS Medicine. BMJ 2018; 360:k400. PMID: 29440066. PMCID: PMC5809812. DOI: 10.1136/bmj.k400.
Abstract
OBJECTIVES To explore the effectiveness of data sharing by randomized controlled trials (RCTs) in journals with a full data sharing policy and to describe potential difficulties encountered in the process of performing reanalyses of the primary outcomes. DESIGN Survey of published RCTs. SETTING PubMed/Medline. ELIGIBILITY CRITERIA RCTs that had been submitted and published by The BMJ and PLOS Medicine subsequent to the adoption of data sharing policies by these journals. MAIN OUTCOME MEASURE The primary outcome was data availability, defined as the eventual receipt of complete data with clear labelling. Primary outcomes were reanalyzed to assess to what extent studies were reproduced. Difficulties encountered were described. RESULTS 37 RCTs (21 from The BMJ and 16 from PLOS Medicine) published between 2013 and 2016 met the eligibility criteria. 17/37 (46%, 95% confidence interval 30% to 62%) satisfied the definition of data availability, and 14 of the 17 (82%, 59% to 94%) were fully reproduced on all their primary outcomes. Among the remaining RCTs, errors were identified in two, although the reanalyses reached conclusions similar to the originals, and one paper did not provide enough information in the Methods section to reproduce the analyses. Difficulties identified included problems in contacting corresponding authors and their lack of resources for preparing the datasets. In addition, there was a range of different data sharing practices across study groups. CONCLUSIONS Data availability was not optimal in two journals with a strong policy for data sharing. When investigators shared data, most reanalyses largely reproduced the original results. Data sharing practices need to become more widespread and streamlined to allow meaningful reanalyses and reuse of data. TRIAL REGISTRATION Open Science Framework osf.io/c4zke.
Affiliation(s)
- Florian Naudet: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, USA
- Charlotte Sakarovitch: Quantitative Sciences Unit, Division of Biomedical Informatics Research, Department of Medicine, Stanford University, Stanford, CA, USA
- Perrine Janiaud: Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, USA
- Ioana Cristea: METRICS, Stanford University; Department of Clinical Psychology and Psychotherapy, Babes-Bolyai University, Romania
- Daniele Fanelli: METRICS, Stanford University; Department of Methodology, London School of Economics and Political Science, UK
- David Moher: METRICS, Stanford University; Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- John P A Ioannidis: METRICS, Stanford University; Departments of Medicine, of Health Research and Policy, of Biomedical Data Science, and of Statistics, Stanford University, Stanford, California, USA
22.
Abstract
This study summarizes responses to trial registration fields on the ClinicalTrials.gov website asking users if they plan to share individual participant data and what information they would be willing to share.
Affiliation(s)
- Annice Bergeris, Tony Tse, Deborah A Zarin: National Library of Medicine, National Institutes of Health, Bethesda, Maryland
23. Unreported links between trial registrations and published articles were identified using document similarity measures in a cross-sectional analysis of ClinicalTrials.gov. J Clin Epidemiol 2017; 95:94-101. PMID: 29277557. DOI: 10.1016/j.jclinepi.2017.12.007.
Abstract
OBJECTIVES Trial registries can be used to measure reporting biases and support systematic reviews, but 45% of registrations do not provide a link to the article reporting on the trial. We evaluated the use of document similarity methods to identify unreported links between ClinicalTrials.gov and PubMed. STUDY DESIGN AND SETTING We extracted terms and concepts from a data set of 72,469 ClinicalTrials.gov registrations and 276,307 PubMed articles and tested methods for ranking articles across 16,005 reported links and 90 manually identified unreported links. Performance was measured by the median rank of matching articles and the proportion of unreported links that could be found by screening ranked candidate articles in order. RESULTS The best-performing concept-based representation produced a median rank of 3 (interquartile range [IQR] 1-21) for reported links and 3 (IQR 1-19) for the manually identified unreported links, while term-based representations produced a median rank of 2 (IQR 1-20) for reported links and 2 (IQR 1-12) for unreported links. The matching article was ranked first for 40% of registrations, and screening 50 candidate articles per registration identified 86% of the unreported links. CONCLUSION Leveraging the growth in the corpus of reported links between ClinicalTrials.gov and PubMed, we found that document similarity methods can assist in the identification of unreported links between trial registrations and corresponding articles.
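The ranking task this abstract describes (score every candidate article against a registration, then screen candidates in rank order) can be illustrated with a small self-contained sketch. This is not the study's implementation; the TF-IDF weighting, whitespace tokenization, and function names are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dict term -> weight) for a list of documents."""
    tokenized = [Counter(doc.lower().split()) for doc in docs]
    n = len(docs)
    df = Counter()  # document frequency of each term
    for counts in tokenized:
        df.update(counts.keys())
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in counts.items()} for counts in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_articles(registration_text, article_texts):
    """Return article indices ordered from most to least similar to the registration."""
    vecs = tfidf_vectors([registration_text] + article_texts)
    reg_vec, art_vecs = vecs[0], vecs[1:]
    scores = [(cosine(reg_vec, v), i) for i, v in enumerate(art_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]
```

A screening workflow would then inspect `rank_articles(...)` output in order until the matching article is found, which is how "median rank" and "candidates screened" are measured.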
24. Coiera E, Choong MK, Tsafnat G, Hibbert P, Runciman WB. Linking quality indicators to clinical trials: an automated approach. Int J Qual Health Care 2017; 29:571-578. PMID: 28651340. PMCID: PMC5890874. DOI: 10.1093/intqhc/mzx076.
Abstract
OBJECTIVE Quality improvement of health care requires robust, measurable indicators to track performance. However, identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. DESIGN A set of 522 quality of care indicators for 22 common conditions drawn from the CareTrack study were automatically mapped to outcome measures reported in 13,971 trials from ClinicalTrials.gov. INTERVENTION Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity. MAIN OUTCOME MEASURE Number of care indicators that mapped to outcome measures in clinical trials. RESULTS While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these 522, 50 of 70 (71%) Level I and II evidence-based indicators, and 268 of 370 (72%) Level V (consensus-based) indicators could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but were missed by our method. CONCLUSIONS We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small portion could not, and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence.
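The Levenshtein edit distance ratio used above has a standard form: one minus the edit distance divided by the length of the longer string. A minimal sketch follows; the helper names are assumptions, not the study's code, and the study's matching thresholds are not reported here.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity_ratio(a, b):
    """Edit-distance ratio in [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

For example, `similarity_ratio("pain score", "pain scores")` is high because one insertion separates the two phrases, which is the property that lets near-identical indicator and outcome phrases be matched.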
Affiliation(s)
- Enrico Coiera: Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine and Health Science, Macquarie University, Sydney, Australia
- Miew Keen Choong: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Guy Tsafnat: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Peter Hibbert: Centre for Health Informatics, Macquarie University, Sydney, Australia; Centre for Population Health Research, University of South Australia, Adelaide, South Australia
- William B Runciman: Centre for Health Informatics, Macquarie University, Sydney, Australia; Centre for Population Health Research, University of South Australia; Australian Patient Safety Foundation, Adelaide, South Australia
25. Wu T, Bian Z, Shang H, Li Y. Innovation of clinical trials in China: Commentary on the publication of "CONSORT extension for Chinese herbal medicine formulas 2017: Recommendations, explanation, and elaboration". J Evid Based Med 2017; 10:155-162. PMID: 28857507. DOI: 10.1111/jebm.12268.
Affiliation(s)
- Taixiang Wu: Chinese Evidence-based Medicine, West China Hospital, Sichuan University, Chengdu 610041, P.R. China
- Zhaoxiang Bian: Hong Kong Chinese Medicine Clinical Study Centre, School of Chinese Medicine, Hong Kong Baptist University, Hong Kong 999077, P.R. China
- Hongcai Shang: Key Laboratory of Chinese Internal Medicine of Ministry of Education, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing 100700, P.R. China
- Youping Li: Chinese Evidence-based Medicine, West China Hospital, Sichuan University, Chengdu 610041, P.R. China
26. Bashir R, Bourgeois FT, Dunn AG. A systematic review of the processes used to link clinical trial registrations to their published results. Syst Rev 2017; 6:123. PMID: 28669351. PMCID: PMC5494826. DOI: 10.1186/s13643-017-0518-3.
Abstract
BACKGROUND Studies measuring the completeness and consistency of trial registration and reporting rely on linking registries with bibliographic databases. In this systematic review, we quantified the processes used to identify these links. METHODS PubMed and Embase databases were searched from inception to May 2016 for studies linking trial registries with bibliographic databases. The processes used to establish these links were categorised as automatic when the registration identifier was available in the bibliographic database or publication, or manual when linkage required inference or contacting of trial investigators. The number of links identified by each process was extracted where available. Linear regression was used to determine whether the proportions of links available via automatic processes had increased over time. RESULTS In 43 studies that examined cohorts of registry entries, 24 used automatic and manual processes to find articles; 3 only automatic; and 11 only manual (5 did not specify). Twelve studies reported results for both manual and automatic processes and showed that a median of 23% (range 13-42%) included automatic links to articles, while 17% (range 5-42%) of registry entries required manual processes to find articles. There was no evidence that the proportion of registry entries with automatic links had increased (R² = 0.02, p = 0.36). In 39 studies that examined cohorts of articles, 21 used automatic and manual processes; 9 only automatic; and 2 only manual (7 did not specify). Sixteen studies reported numbers for automatic and manual processes and indicated that a median of 49% (range 8-97%) of articles had automatic links to registry entries, and 10% (range 0-28%) required manual processes to find registry entries. There was no evidence that the proportion of articles with automatic links to registry entries had increased (R² = 0.01, p = 0.73). CONCLUSIONS The linkage of trial registries to their corresponding publications continues to require extensive manual processes. We did not find that the use of automatic linkage has increased over time. Further investigation is needed to inform approaches that will ensure publications are properly linked to trial registrations, thus enabling efficient monitoring of trial reporting.
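The time-trend tests above report an R² from ordinary least squares regression of link proportions on time. A minimal OLS sketch of that computation follows; the function name and the example data are illustrative, not taken from the study.

```python
def linear_trend(years, proportions):
    """Ordinary least-squares slope and R-squared for a simple time trend."""
    n = len(years)
    mx = sum(years) / n
    my = sum(proportions) / n
    sxx = sum((x - mx) ** 2 for x in years)                       # variance term of x
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, proportions))  # covariance term
    syy = sum((y - my) ** 2 for y in proportions)                 # variance term of y
    slope = sxy / sxx
    r2 = (sxy * sxy) / (sxx * syy) if syy else 0.0
    return slope, r2
```

An R² near zero, as in the two results above, means publication year explains almost none of the variation in the proportion of automatic links.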
Affiliation(s)
- Rabia Bashir: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW 2109, Australia
- Florence T Bourgeois: Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, USA; Departments of Pediatrics and Emergency Medicine, Harvard Medical School, Boston, MA, USA
- Adam G Dunn: Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW 2109, Australia
27
Coady SA, Mensah GA, Wagner EL, Goldfarb ME, Hitchcock DM, Giffen CA. Use of the National Heart, Lung, and Blood Institute Data Repository. N Engl J Med 2017; 376:1849-1858. [PMID: 28402243 PMCID: PMC5665376 DOI: 10.1056/nejmsa1603542] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
BACKGROUND Research on data sharing from clinical trials has focused on elucidating perceptions, barriers, and attitudes among trialists and study participants with respect to sharing data. However, little information exists regarding utilization or associated publication of articles once clinical trial data have been widely shared. METHODS We analyzed administrative records of investigator requests for data access, linked publications, and bibliometrics to describe the use of the National Heart, Lung, and Blood Institute data repository. RESULTS From January 2000 through May 2016, a total of 370 investigators requested data from 1 or more clinical trials. Requests for trial data have been increasing, with 195 investigators (53%) initiating requests during the last 4.4 years of the study period. The predominant reason for requesting data was post hoc secondary analysis of new questions (72%), followed by analytic or statistical approaches to clinical trials (9%) and meta-analyses or pooled study research (7%). Of 172 requests with online project descriptions, only 2 requests were initiated for reanalysis of primary-outcome findings. Data from 88 of 100 available clinical trials were requested at least once, and the median time from repository availability to first request was 235 days. A total of 277 articles were published on the basis of data from 47 trials. Citation metrics from 224 articles indicated that half of the publications have cumulative citations that rank in the top 34% normalized for subject category and year of publication. CONCLUSIONS Demand for trial data for secondary analysis has been increasing. Requesting data for the a priori purpose of reanalysis or verification of original findings was rare.
Affiliation(s)
- Sean A Coady, George A Mensah, Elizabeth L Wagner, Miriam E Goldfarb, Denise M Hitchcock, Carol A Giffen
- From the Division of Cardiovascular Sciences (S.A.C.), the Center for Translation Research and Implementation Science (G.A.M.), and the Division of Blood Diseases and Resources (E.L.W.), National Heart, Lung, and Blood Institute, Bethesda, and Information Management Services, Calverton (M.E.G., D.M.H., C.A.G.) - both in Maryland
28
Pansieri C, Pandolfini C, Bonati M. Clinical trial registries: more international, converging efforts are needed. Trials 2017; 18:86. [PMID: 28241781 PMCID: PMC5329910 DOI: 10.1186/s13063-017-1836-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2016] [Accepted: 01/30/2017] [Indexed: 12/03/2022] Open
Abstract
Clinical trial registries are being increasingly acknowledged worldwide. We searched for potentially trustworthy online registries that are not already included in the International Clinical Trials Registry Platform to evaluate whether other useful trial data sources exist and whether they could be consulted, since the search strategy within this platform has recently been questioned. Fifty-nine registries were initially identified, and 11 of them fit the criteria applied and were analyzed for quality and usability. Four additional, potentially reliable registries were identified that researchers could exploit in order to obtain a more global view of the issue being investigated.
Affiliation(s)
- Claudia Pansieri, Chiara Pandolfini, Maurizio Bonati
- Department of Public Health, Laboratory for Mother and Child Health, IRCCS-Istituto di Ricerche Farmacologiche "Mario Negri", Via Giuseppe la Masa 19, 20156, Milan, Italy
29
Wieseler B. Beyond journal publications – a new format for the publication of clinical trials. Z Evid Fortbild Qual Gesundhwes 2017; 120:3-8. [DOI: 10.1016/j.zefq.2016.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 11/11/2016] [Accepted: 11/14/2016] [Indexed: 10/20/2022]
30
Zarin DA, Tse T, Williams RJ, Rajakannan T. Update on Trial Registration 11 Years after the ICMJE Policy Was Established. N Engl J Med 2017; 376:383-391. [PMID: 28121511 PMCID: PMC5813248 DOI: 10.1056/nejmsr1601330] [Citation(s) in RCA: 146] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
In the decade following the journal editors’ trial registration policy, a global trial reporting system (TRS) has arisen to supplement journal publication by increasing the transparency and accountability of the clinical research enterprise (CRE), ultimately advancing evidence-based medicine. Trial registration is a foundational component of the TRS. In this article, we assess the impact of trial registration on the CRE with respect to two key goals: (1) establishing a publicly accessible, structured public record of all trials and (2) ensuring access to date-stamped protocol details that change during a study. After characterizing the international trial registry landscape, we summarize the published evidence of the impact of registration laws and policies on the CRE to date. We present three analyses using ClinicalTrials.gov registration data to illustrate approaches for assessing and monitoring the TRS: (1) timing of registration (i.e., prior to trial initiation [prospective] or after trial initiation [retrospective or “late”]); (2) degree of specificity and consistency of registered primary outcome measures compared with descriptions in study protocols and published articles; and (3) a survey of the published literature to characterize how ClinicalTrials.gov data have been used in research on the CRE. These findings suggest that, while the TRS is largely moving toward its goals, key stakeholders need to do more in the next decade.
Affiliation(s)
- Deborah A Zarin, Tony Tse, Rebecca J Williams, Thiyagu Rajakannan
- From the National Library of Medicine, National Institutes of Health, Department of Health and Human Services, Bethesda, MD
31
Barbui C. Sharing all types of clinical data and harmonizing journal standards. BMC Med 2016; 14:63. [PMID: 27038634 PMCID: PMC4818917 DOI: 10.1186/s12916-016-0612-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Accepted: 03/30/2016] [Indexed: 12/02/2022] Open
Abstract
Despite recent efforts to enforce policies requiring the sharing of data underlying clinical findings, current policies of biomedical journals remain largely heterogeneous. As this heterogeneity does not optimally serve the cause of data sharing, a first step towards better harmonization would be the requirement of a data sharing statement for all clinical studies and not simply for randomized studies. Although the publication of a data sharing statement does not imply that all data is made readily available, such a policy would swiftly implement a cultural change in the definition of scientific outputs. Currently, a scientific output only corresponds to a study report published in a medical journal, while in the near future it might consist of all materials described in the manuscript, including all relevant raw data. When such a cultural shift has been achieved, the logical conclusion would be for biomedical journals to require authors to make all data fully available without restriction as a condition for publication.
Affiliation(s)
- Corrado Barbui
- WHO Collaborating Centre for Research and Training in Mental Health and Service Evaluation, Section of Psychiatry, University of Verona, Verona, Italy
32
Abstract
In this month’s editorial, the PLOS Medicine editors note recent progress towards data sharing as the community norm in medical research and the barriers that remain.