1.
Chalkou K, Hamza T, Benkert P, Kuhle J, Zecca C, Simoneau G, Pellegrini F, Manca A, Egger M, Salanti G. Combining randomized and non-randomized data to predict heterogeneous effects of competing treatments. Res Synth Methods 2024; 15:641-656. PMID: 38501273; DOI: 10.1002/jrsm.1717.
Abstract
Some patients benefit from a treatment while others may benefit less or not at all. We previously developed a two-stage network meta-regression prediction model that synthesizes randomized trials and evaluates how treatment effects vary across patient characteristics. In this article, we extended this model to combine different types of data in different formats: aggregate data (AD) and individual participant data (IPD) from randomized and non-randomized evidence. In the first stage, a prognostic model is developed to predict the baseline risk of the outcome using a large cohort study. In the second stage, we recalibrated this prognostic model to improve our predictions for patients enrolled in randomized trials. In the third stage, we used the baseline risk as an effect modifier in a network meta-regression model combining AD and IPD from randomized clinical trials to estimate heterogeneous treatment effects. We illustrated the approach in the re-analysis of a network of studies comparing three drugs for relapsing-remitting multiple sclerosis. Several patient characteristics influence the baseline risk of relapse, which in turn modifies the effect of the drugs. The proposed model makes personalized predictions for health outcomes under several treatment options and encompasses all relevant randomized and non-randomized evidence.
Affiliation(s)
Konstantina Chalkou
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Department of Clinical Research, University of Bern, Bern, Switzerland
Tasnim Hamza
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
Pascal Benkert
- Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
Jens Kuhle
- Multiple Sclerosis Centre, Neurologic Clinic and Policlinic, Department of Head, Spine and Neuromedicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Multiple Sclerosis Centre, Neurologic Clinic and Policlinic, Department of Biomedicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Multiple Sclerosis Centre, Neurologic Clinic and Policlinic, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
- Research Center for Clinical Neuroimmunology and Neuroscience (RC2NB), University Hospital, University of Basel, Basel, Switzerland
Chiara Zecca
- Multiple Sclerosis Center, Neurocenter of Southern Switzerland, EOC, Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
Andrea Manca
- Centre for Health Economics, University of York, York, UK
Matthias Egger
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
Georgia Salanti
- Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
2.
Moran JL, Linden A. Problematic meta-analyses: Bayesian and frequentist perspectives on combining randomized controlled trials and non-randomized studies. BMC Med Res Methodol 2024; 24:99. PMID: 38678213; PMCID: PMC11056075; DOI: 10.1186/s12874-024-02215-4.
Abstract
PURPOSE In the literature, the propriety of the meta-analytic treatment effect produced by combining randomized controlled trials (RCT) and non-randomized studies (NRS) is questioned, given the inherent confounding in NRS that may bias the meta-analysis. The current study compared an implicitly principled pooled Bayesian meta-analytic treatment effect with that of frequentist pooling of RCT and NRS to determine how well each approach handled the NRS bias. MATERIALS & METHODS Critical-care meta-analyses with binary outcomes, reflecting the importance of such outcomes in critical-care practice, that combined RCT and NRS were identified electronically. Bayesian pooled treatment effects and 95% credible intervals (BCrI), posterior model probabilities indicating model plausibility, and Bayes factors (BF) were estimated using an informative heavy-tailed heterogeneity prior (half-Cauchy). A Bayes factor > 3 indicated preference for pooling RCT and NRS; a Bayes factor < 0.333 indicated the converse. All pooled frequentist treatment effects and 95% confidence intervals (FCI) were re-estimated using the popular DerSimonian-Laird (DSL) random effects model. RESULTS Fifty meta-analyses were identified (2009-2021), reporting pooled estimates in 44; 29 were pharmaceutical-therapeutic and 21 were non-pharmaceutical-therapeutic. Re-computed pooled DSL FCI excluded the null (OR or RR = 1) in 86% (43/50). In 18 meta-analyses there was agreement between FCI and BCrI in excluding the null. In 23 meta-analyses where FCI excluded the null, BCrI embraced the null. BF supported a pooled model in 27 meta-analyses and separate models in 4. The highest density of the posterior model probabilities for 0.333 < Bayes factor < 1 was 0.8. CONCLUSIONS In the current meta-analytic cohort, an integrated and multifaceted Bayesian approach gave support to including NRS in a pooled-estimate model. Conversely, caution should attend the reporting of naïve frequentist pooled meta-analytic treatment effects that combine RCT and NRS.
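The DerSimonian-Laird random-effects pooling used here to re-compute the frequentist estimates can be sketched in a few lines. A minimal illustration with hypothetical log-odds-ratio data, not the study's actual meta-analytic cohort:

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooling of effect estimates
    y (e.g., log odds ratios) with within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                # fixed-effect weights
    y_fe = (w * y).sum() / w.sum()             # fixed-effect pooled mean
    q = (w * (y - y_fe) ** 2).sum()            # Cochran's Q statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # method-of-moments heterogeneity
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    mu = (w_re * y).sum() / w_re.sum()         # pooled random-effects estimate
    se = w_re.sum() ** -0.5
    return mu, (mu - 1.96 * se, mu + 1.96 * se), tau2

# hypothetical studies: log-OR estimates and their variances
mu, ci, tau2 = dersimonian_laird([0.4, 0.1, 0.7, 0.3], [0.04, 0.09, 0.05, 0.02])
```

A Bayesian analogue replaces the method-of-moments τ² with a full heterogeneity prior (the authors used a half-Cauchy) and reports credible rather than confidence intervals.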
Affiliation(s)
John L Moran
- The Queen Elizabeth Hospital, Woodville, SA, 5011, Australia
Ariel Linden
- Department of Medicine, School of Medicine, University of California, San Francisco, USA
3.
McLennan S, Nussbaumer-Streit B, Hemkens LG, Briel M. Barriers and Facilitating Factors for Conducting Systematic Evidence Assessments in Academic Clinical Trials. JAMA Netw Open 2021; 4:e2136577. PMID: 34846522; PMCID: PMC8634056; DOI: 10.1001/jamanetworkopen.2021.36577.
Abstract
IMPORTANCE A systematic assessment of existing research should justify the conduct and inform the design of new clinical research but is often lacking. There is little research on the barriers to and factors facilitating systematic evidence assessments. OBJECTIVE To examine the practices and attitudes of Swiss stakeholders and international funders regarding conducting systematic evidence assessments in academic clinical trials. DESIGN, SETTING, AND PARTICIPANTS In this qualitative study, individual semistructured interviews were conducted between February and August 2020 with 48 Swiss stakeholders (27 primary investigators, 9 funders and sponsors, 6 clinical trial support organizations, and 6 ethics committee members) and between January and March 2021 with 9 international funders of clinical trials from North America and Europe with a reputation for requiring systematic evidence synthesis in applications for academic clinical trials. MAIN OUTCOMES AND MEASURES The main outcomes were the practices and attitudes of Swiss stakeholders and international funders regarding conducting systematic evidence assessments in academic clinical trials. Interviews were analyzed using conventional content analysis. RESULTS Of the 57 participants, 40 (70.2%) were male. Participants universally acknowledged that a comprehensive understanding of the previous evidence is important but reported wide variation regarding how this should be achieved. Participants reported that the conduct of formal systematic reviews was currently not expected before most clinical trials, but most international funders reported expecting a systematic search for the existing evidence. Although all participants reported time and resources as barriers to conducting systematic reviews, the Swiss research ecosystem was reported to be less supportive of a systematic approach than international settings.
CONCLUSIONS AND RELEVANCE In this qualitative study, Swiss stakeholders and international funders generally agreed that new clinical trials should be justified by a systematic evidence assessment but that barriers on individual, organizational, and political levels kept them from implementing it. More explicit requirements from funders appear to be needed to clarify the required level of comprehensiveness in summarizing existing evidence for different types of clinical trials.
Affiliation(s)
Stuart McLennan
- Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
Barbara Nussbaumer-Streit
- Cochrane Austria, Department for Evidence-based Medicine and Evaluation, Danube University Krems, Krems, Austria
Lars G. Hemkens
- Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland
- Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California
- Meta-Research Innovation Center Berlin, Berlin Institute of Health, Berlin, Germany
Matthias Briel
- Department of Clinical Research, Basel Institute for Clinical Epidemiology and Biostatistics, University of Basel and University Hospital Basel, Basel, Switzerland
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
4.
Graves N, Mitchell BG, Otter JA, Kiernan M. The cost-effectiveness of temporary single-patient rooms to reduce risks of healthcare-associated infection. J Hosp Infect 2021; 116:21-28. PMID: 34246721; DOI: 10.1016/j.jhin.2021.07.003.
Abstract
BACKGROUND The use of single rooms for patient isolation often forms part of a wider bundle to prevent certain healthcare-associated infections (HAIs) in hospitals. Demand for single rooms often exceeds what is available, and the use of temporary isolation rooms may help resolve this. Changes to infection prevention practice should be supported by evidence showing that cost-effectiveness is plausible and likely. AIM To perform a cost-effectiveness evaluation of adopting temporary single rooms in UK National Health Service (NHS) hospitals. METHODS We modelled the cost-effectiveness of adding a temporary single-patient isolation room to the current infection prevention efforts of an NHS hospital. Primary outcomes were the expected changes to total costs and life-years from an NHS perspective. FINDINGS The mean expected incremental cost per life-year gained (LYG) is £5,829. The probability that adoption is cost-effective against a £20,000 threshold per additional LYG is 93%; for a £13,000 threshold the probability is 87%. The conclusions are robust to scenarios for key model parameters. If a temporary single-patient isolation room reduces risks of HAI by 16.5%, then an adoption decision is more likely to be cost-effective than not. Our estimate of the effectiveness reflects guidelines and reasonable assumptions, and the theoretical rationale is strong. CONCLUSION Despite uncertainties about the effectiveness of temporary isolation rooms for reducing risks of HAI, there is some evidence that an adoption decision is likely to be cost-effective for the NHS setting. Prospective studies will be useful to reduce this source of uncertainty.
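The decision metrics reported here (an ICER per life-year gained and the probability of cost-effectiveness at a willingness-to-pay threshold) can be illustrated with a probabilistic sketch. The cost and effect distributions below are hypothetical placeholders, not the study's model inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical posterior draws of incremental cost (£) and life-years gained
d_cost = rng.normal(1200.0, 400.0, 10_000)
d_lyg = rng.normal(0.2, 0.08, 10_000)

# incremental cost-effectiveness ratio (ratio of mean cost to mean effect)
icer = d_cost.mean() / d_lyg.mean()

# net monetary benefit at a £20,000-per-LYG threshold;
# adoption is cost-effective in draws where NMB > 0
nmb = 20_000 * d_lyg - d_cost
p_cost_effective = (nmb > 0).mean()
```

The probability of cost-effectiveness is simply the share of simulated scenarios in which the net monetary benefit is positive, which is how statements like "93% at a £20,000 threshold" are produced.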
Affiliation(s)
N Graves
- Health Services & Systems Research, Duke-NUS Medical School, Singapore
B G Mitchell
- School of Nursing and Midwifery, University of Newcastle, Ourimbah, NSW, Australia
J A Otter
- National Institute for Healthcare Research Health Protection Research Unit (NIHR HPRU) in HCAI and AMR, Imperial College London & Public Health England, Hammersmith Hospital, London, UK
M Kiernan
- Gama Healthcare Ltd, Hemel Hempstead, UK
5.
Kim KS, Chung JH, Park HJ, Shin WJ, Lee BH, Lee SW. Quality Assessment and Relevant Clinical Impact of Randomized Controlled Trials of Varicocele: Next Step to Good-Quality Randomized Controlled Trial of Varicocele Treatment. World J Mens Health 2021; 40:290-298. PMID: 34169678; PMCID: PMC8987142; DOI: 10.5534/wjmh.200167.
Abstract
Purpose To assess the quality of randomized controlled trials (RCTs) on varicocele published from 1979 to 2017. Materials and Methods We searched for original RCTs on varicocele published between 1979 and 2017. The Jadad scale, van Tulder scale, and Cochrane Collaboration Risk of Bias Tool were used to analyze RCT quality over time. The effects of funding source, Institutional Review Board (IRB) approval, and intervention type on RCT quality were assessed. Treatment parameters of varicocele were also analyzed. Results Blinding and allocation concealment were described in 25.9% and 9.4% of RCTs, respectively. Both tended to increase over time, although a sharp dip in allocation concealment was observed in 2010-2017. Jadad scores increased steadily from 1979 to 2017 (1.28±0.59 to 2.19±1.10, p<0.01). Van Tulder scores also tended to increase from 1979 to 2017 (4.21±0.94 to 5.58±1.58, p<0.01). RCTs with funding statements had higher Jadad (funded vs. unfunded, 3.25±0.50 vs. 1.70±0.97; p<0.01) and van Tulder (7.25±1.26 vs. 4.81±1.26; p<0.01) scores than unfunded RCTs. IRB approval and intervention type were associated with better quality. Conclusions The number of RCTs on varicocele increased from 1979 to 2017, and their quality improved over time with increasing IRB approval, funding, and multicenter trials. Most RCTs on varicocele reported the use of surgical treatment. RCTs of surgical treatments face inherent limitations in meeting the methodological requirements of a well-conducted RCT, but their quality has improved over time.
Affiliation(s)
Kyu Shik Kim
- Department of Urology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Korea
Jae Hoon Chung
- Department of Urology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
Hyung Joon Park
- Department of Anesthesiology and Pain Medicine, Hanyang University Guri Hospital, Hanyang University College of Medicine, Guri, Korea
Woo Jong Shin
- Department of Anesthesiology and Pain Medicine, Hanyang University Guri Hospital, Hanyang University College of Medicine, Guri, Korea
Bum Hyun Lee
- Department of Urban Design and Information, Sungkyul University College of Engineering, Anyang, Korea
Seung Wook Lee
- Department of Urology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Korea
6.
Davies AL, Galla T. Degree irregularity and rank probability bias in network meta-analysis. Res Synth Methods 2020; 12:316-332. PMID: 32935913; DOI: 10.1002/jrsm.1454.
Abstract
Network meta-analysis (NMA) is a statistical technique for the comparison of treatment options. Outcomes of Bayesian NMA include estimates of treatment effects, and the probabilities that each treatment is ranked best, second best and so on. How exactly network topology affects the accuracy and precision of these outcomes is not fully understood. Here we carry out a simulation study and find that disparity in the number of trials involving different treatments leads to a systematic bias in estimated rank probabilities. This bias is associated with an increased variation in the precision of treatment effect estimates. Using ideas from the theory of complex networks, we define a measure of "degree irregularity" to quantify asymmetry in the number of studies involving each treatment. Our simulations indicate that more regular networks have more precise treatment effect estimates and smaller bias of rank probabilities. Conversely, these topological effects are not observed for the accuracy of treatment effect estimates. This reinforces the importance of taking into account multiple measures, rather than making decisions based on a single metric. We also find that degree regularity is a better indicator for the accuracy and precision of parameter estimates in NMA than both the total number of studies in a network and the disparity in the number of trials per comparison. These results have implications for planning future trials. We demonstrate that choosing trials which reduce the network's irregularity can improve the precision and accuracy of parameter estimates from NMA.
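The rank probabilities studied here are computed from posterior draws of the treatment effects. A minimal sketch with hypothetical posterior samples (assuming larger effects are better), in which one treatment is estimated less precisely, mimicking a treatment involved in fewer trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical posterior draws for 3 treatments (columns A, B, C);
# B and C share the same mean effect, but B is less precisely estimated
draws = rng.normal(loc=[0.2, 0.5, 0.5], scale=[0.1, 0.3, 0.1], size=(10_000, 3))

# rank each treatment within every draw (0 = best, i.e. largest effect)
ranks = (-draws).argsort(axis=1).argsort(axis=1)

# P(treatment ranked best) = share of draws in which it has rank 0
p_best = (ranks == 0).mean(axis=0)
```

Because rank probabilities depend on the full shape of the posterior rather than the point estimates alone, unequal precision across treatments, as arises in irregular networks, shifts them systematically; that is the bias the simulation study quantifies.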
Affiliation(s)
Annabel L Davies
- Theoretical Physics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester, UK
Tobias Galla
- Theoretical Physics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester, UK
- Instituto de Física Interdisciplinar y Sistemas Complejos, IFISC (CSIC-UIB), Campus Universitat Illes Balears, Palma de Mallorca, Spain
7.
Rogozińska E, Gargon E, Olmedo-Requena R, Asour A, Cooper NAM, Vale CL, van’t Hooft J. Methods used to assess outcome consistency in clinical studies: A literature-based evaluation. PLoS One 2020; 15:e0235485. PMID: 32639999; PMCID: PMC7343158; DOI: 10.1371/journal.pone.0235485.
Abstract
Evaluation studies of outcomes used in clinical research and their consistency are appearing more frequently in the literature as a key part of core outcome set (COS) development. Current guidance suggests such evaluation studies should use systematic review methodology as their default. We aimed to examine the methods used. We searched the Core Outcome Measures in Effectiveness Trials (COMET) database (up to May 2019), supplementing it with additional resources. We included evaluation studies of outcome consistency in clinical studies across health subjects and used a subset of A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 (items 1-9) to assess their methods. Of 93 included evaluation studies of outcome consistency (90 full reports, three summaries), 91% (85/93) reported performing literature searches in at least one bibliographic database, and 79% (73/93) were labelled as a "systematic review". The evaluations varied in how far they satisfied the AMSTAR 2 criteria: 81/93 (87%) had implemented PICO in the research question, whereas only 5/93 (6%) had included an exclusions list. None of the evaluation studies explained how inconsistency of outcomes was detected; nevertheless, 80/90 (88%) concluded that there was inconsistency in individual outcomes (66%, 55/90) or outcome domains (20%, 18/90). Methods used in evaluation studies of outcome consistency in clinical studies differed considerably. Despite frequently being labelled as a "systematic review", their adoption of systematic review methodology is selective. While the impact on COS development is unknown, authors of these studies should refrain from labelling them as "systematic reviews" and focus on ensuring that the methods used to generate the different outcomes and outcome domains are reported transparently.
Affiliation(s)
Ewelina Rogozińska
- Meta-Analysis Group, Institute of Clinical Trials and Methodology, MRC Clinical Trials Unit at UCL, London, England, United Kingdom
- Women’s Health Research Unit, Queen Mary University of London, London, England, United Kingdom
Elizabeth Gargon
- Department of Biostatistics, University of Liverpool, Liverpool, England, United Kingdom
Rocío Olmedo-Requena
- Department of Preventive Medicine and Public Health, School of Medicine, University of Granada, Granada, Spain
- Consortium for Biomedical Research in Epidemiology and Public Health (CIBERESP), Madrid, Spain
- Instituto de Investigación Biosanitaria ibs.GRANADA, Granada, Spain
Amani Asour
- Women’s Health Research Unit, Queen Mary University of London, London, England, United Kingdom
Natalie A. M. Cooper
- Women’s Health Research Unit, Queen Mary University of London, London, England, United Kingdom
Claire L. Vale
- Meta-Analysis Group, Institute of Clinical Trials and Methodology, MRC Clinical Trials Unit at UCL, London, England, United Kingdom
Janneke van’t Hooft
- Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America
8.
Ashokka B, Dong C, Law LSC, Liaw SY, Chen FG, Samarasekera DD. A BEME systematic review of teaching interventions to equip medical students and residents in early recognition and prompt escalation of acute clinical deteriorations: BEME Guide No. 62. Med Teach 2020; 42:724-737. PMID: 32493155; DOI: 10.1080/0142159x.2020.1763286.
Abstract
Background: Current educational interventions and teaching for acute deteriorations seem to address acute care learning in discrete segments. Technology-enhanced and team-training methodologies are in vogue and well studied in the nursing profession, but teaching avenues for junior 'doctors in training' appear to be a lacuna. Aims: This BEME systematic review was designed to (1) appraise the existing published evidence on educational interventions intended to teach 'doctors in training' early recognition and prompt escalation of acute clinical deteriorations and (2) synthesise the evidence and evaluate educational effectiveness. Methodology: The method applied was a descriptive, justification and clarification review. Databases searched included PubMed, PsycINFO, Science Direct and Scopus for original research and grey literature, with no restrictions on year or language. Abstract review, full-text decisions and data extraction were completed by two primary coders, with final consensus by a third reviewer. Results: After removal of 905 duplicates, 5592 titles and abstracts were screened. After exclusion of 5555 studies, 37 full-text articles were chosen for coding; 22 studies met the final criteria of educational effectiveness and relevance to acute care. Educational platforms varied from didactics to blended learning approaches, small-group teaching sessions, simulations, live and cadaveric tissue training, virtual environments and in situ team-based training. Translational outcomes, with reductions in long-term (up to 3-6 years) morbidity and mortality and with financial savings, were reported by 18% (4/22) of studies. Interprofessional training was reported in 41% (9/22) of studies. Recent evidence demonstrated the effectiveness of virtual-environment and mobile game-based learning. Conclusions: There were significant improvements in teaching initiatives, with a focus on observable behaviours and translational real-patient outcomes. Serious game-based learning and virtual multi-user collaborative environments might enhance individual learners' cognitive deliberate practice. An acute care learning continuum with programmatic acute care portfolios could be a promise for the future.
Affiliation(s)
Sok Ying Liaw
- Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore
Fun Gee Chen
- Anaesthesia, National University of Singapore, Singapore
9.
Ferguson KD, McCann M, Katikireddi SV, Thomson H, Green MJ, Smith DJ, Lewsey JD. Evidence synthesis for constructing directed acyclic graphs (ESC-DAGs): a novel and systematic method for building directed acyclic graphs. Int J Epidemiol 2020; 49:322-329. PMID: 31325312; PMCID: PMC7124493; DOI: 10.1093/ije/dyz150.
Abstract
Background Directed acyclic graphs (DAGs) are popular tools for identifying appropriate adjustment strategies for epidemiological analysis. However, a lack of direction on how to build them is problematic. As a solution, we propose using a combination of evidence synthesis strategies and causal inference principles to integrate the DAG-building exercise within the review stages of research projects. We demonstrate this idea by introducing a novel protocol: ‘Evidence Synthesis for Constructing Directed Acyclic Graphs’ (ESC-DAGs). Methods ESC-DAGs operates on empirical studies identified by a literature search, ideally a novel systematic review or review of systematic reviews. It involves three key stages: (i) the conclusions of each study are ‘mapped’ into a DAG; (ii) the causal structures in these DAGs are systematically assessed using several causal inference principles and are corrected accordingly; (iii) the resulting DAGs are then synthesised into one or more ‘integrated DAGs’. This demonstration article didactically applies ESC-DAGs to the literature on parental influences on offspring alcohol use during adolescence. Conclusions ESC-DAGs is a practical, systematic and transparent approach for developing DAGs from background knowledge. These DAGs can then direct primary data analysis and DAG-based sensitivity analysis. ESC-DAGs has a modular design to allow researchers who are experienced DAG users to both use and improve upon the approach. It is also accessible to researchers with limited experience of DAGs or evidence synthesis.
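The integrated DAG that a protocol like ESC-DAGs produces is ultimately used to read off adjustment sets for analysis. A toy sketch (not the ESC-DAGs protocol itself) of encoding a DAG as edges, verifying acyclicity, and taking the exposure's parents as a backdoor adjustment set, which is sufficient in this simple graph; the variable names are hypothetical, not those of the paper's alcohol-use application:

```python
# toy DAG encoded as directed edges (cause, effect); hypothetical variables
edges = {
    ("family_ses", "parental_monitoring"),
    ("family_ses", "adolescent_alcohol_use"),
    ("parental_monitoring", "adolescent_alcohol_use"),
}

def is_acyclic(edges):
    """Kahn's algorithm: a directed graph is acyclic iff every node
    can be removed in topological order."""
    nodes = {n for edge in edges for n in edge}
    indegree = {n: 0 for n in nodes}
    for _, child in edges:
        indegree[child] += 1
    frontier = [n for n in nodes if indegree[n] == 0]
    removed = 0
    while frontier:
        node = frontier.pop()
        removed += 1
        for parent, child in edges:
            if parent == node:
                indegree[child] -= 1
                if indegree[child] == 0:
                    frontier.append(child)
    return removed == len(nodes)

def parents(node, edges):
    return {p for p, c in edges if c == node}

# adjusting for the exposure's parents blocks the backdoor path through
# family_ses in this toy graph
adjustment_set = parents("parental_monitoring", edges)
```

In practice tools such as dagitty perform this adjustment-set search on the integrated DAG; the point of the sketch is only that a DAG, once built, is a data structure one can query mechanically.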
Affiliation(s)
Karl D Ferguson
- MRC / CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Mark McCann
- MRC / CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Hilary Thomson
- MRC / CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Michael J Green
- MRC / CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
Daniel J Smith
- Mental Health and Wellbeing, University of Glasgow, Glasgow, UK
James D Lewsey
- Health Economics and Health Technology Assessment, University of Glasgow, Glasgow, UK
10.
Kim KS, Chung JH, Lee SW. Randomized controlled trials on erectile dysfunction: quality assessment and relevant clinical impact (2007–2018). Int J Impot Res 2020; 32:213-220. DOI: 10.1038/s41443-019-0143-x.
11.
Ferguson KD, McCann M, Katikireddi SV, Thomson H, Green MJ, Smith DJ, Lewsey JD. Corrigendum to: Evidence synthesis for constructing directed acyclic graphs (ESC-DAGs): a novel and systematic method for building directed acyclic graphs. Int J Epidemiol 2020; 49:353. PMID: 31665296; PMCID: PMC8015970; DOI: 10.1093/ije/dyz220.
12.
Cook JA, Julious SA, Sones W, Hampson LV, Hewitt C, Berlin JA, Ashby D, Emsley R, Fergusson DA, Walters SJ, Wilson EC, MacLennan G, Stallard N, Rothwell JC, Bland M, Brown L, Ramsay CR, Cook A, Armstrong D, Altman D, Vale LD. Practical help for specifying the target difference in sample size calculations for RCTs: the DELTA 2 five-stage study, including a workshop. Health Technol Assess 2019; 23:1-88. PMID: 31661431; PMCID: PMC6843113; DOI: 10.3310/hta23600.
Abstract
BACKGROUND The randomised controlled trial is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to its design is a calculation of the number of participants needed (the sample size) for the trial. The sample size is typically calculated by specifying the magnitude of the difference in the primary outcome between the intervention effects for the population of interest. This difference is called the 'target difference' and should be appropriate for the principal estimand of interest and determined by the primary aim of the study. The target difference between treatments should be considered realistic and/or important by one or more key stakeholder groups. OBJECTIVE The objective of the report is to provide practical help on the choice of target difference used in the sample size calculation for a randomised controlled trial for researchers and funder representatives. METHODS The Difference ELicitation in TriAls2 (DELTA2) recommendations and advice were developed through a five-stage process, which included two literature reviews of existing funder guidance and recent methodological literature; a Delphi process to engage with a wider group of stakeholders; a 2-day workshop; and finalising the core document. RESULTS Advice is provided for definitive trials (Phase III/IV studies). Methods for choosing the target difference are reviewed. To aid those new to the topic, and to encourage better practice, 10 recommendations are made regarding choosing the target difference and undertaking a sample size calculation. Recommended reporting items for trial proposal, protocols and results papers under the conventional approach are also provided. Case studies reflecting different trial designs and covering different conditions are provided. Alternative trial designs and methods for choosing the sample size are also briefly considered. 
CONCLUSIONS Choosing an appropriate sample size is crucial if a study is to inform clinical practice. The number of patients recruited into the trial needs to be sufficient to answer the objectives; however, the number should not be higher than necessary to avoid unnecessary burden on patients and wasting precious resources. The choice of the target difference is a key part of this process under the conventional approach to sample size calculations. This document provides advice and recommendations to improve practice and reporting regarding this aspect of trial design. Future work could extend the work to address other less common approaches to the sample size calculations, particularly in terms of appropriate reporting items. FUNDING Funded by the Medical Research Council (MRC) UK and the National Institute for Health Research as part of the MRC-National Institute for Health Research Methodology Research programme.
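Under the conventional approach the report describes, the chosen target difference feeds directly into a standard sample size formula. A minimal sketch for a two-arm comparison of means (normal approximation; the numbers are illustrative, not a DELTA2 recommendation):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(target_difference, sd, alpha=0.05, power=0.90):
    """Sample size per arm for a two-sample comparison of means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance level
    z_beta = z(power)            # desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / target_difference) ** 2)

# detecting a standardized target difference of 0.5 SD with 90% power
n = n_per_arm(target_difference=0.5, sd=1.0)  # 85 per arm
```

Halving the target difference roughly quadruples the required sample size, which is why the choice of target difference dominates the calculation and why DELTA2 devotes a whole guidance document to it.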
Affiliation(s)
- Jonathan A Cook
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Steven A Julious
- Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- William Sones
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Lisa V Hampson
- Statistical Methodology and Consulting, Novartis Pharma AG, Basel, Switzerland
- Catherine Hewitt
- York Trials Unit, Department of Health Sciences, University of York, York, UK
- Deborah Ashby
- Imperial Clinical Trials Unit, Imperial College London, London, UK
- Richard Emsley
- Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Dean A Fergusson
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Stephen J Walters
- Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Edward Cf Wilson
- Cambridge Centre for Health Services Research, Cambridge Clinical Trials Unit, University of Cambridge, Cambridge, UK
- Health Economics Group, Norwich Medical School, University of East Anglia, Norwich, UK
- Graeme MacLennan
- Centre for Healthcare Randomised Trials, University of Aberdeen, Aberdeen, UK
- Nigel Stallard
- Warwick Medical School, Statistics and Epidemiology, University of Warwick, Coventry, UK
- Joanne C Rothwell
- Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Martin Bland
- Department of Health Sciences, University of York, York, UK
- Louise Brown
- MRC Clinical Trials Unit, Institute of Clinical Trials and Methodology, University College London, London, UK
- Craig R Ramsay
- Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Andrew Cook
- Wessex Institute, University of Southampton, Southampton, UK
- David Armstrong
- School of Population Health and Environmental Sciences, King's College London, London, UK
- Douglas Altman
- Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Luke D Vale
- Health Economics Group, Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
13
Debray TP, de Jong VM, Moons KG, Riley RD. Evidence synthesis in prognosis research. Diagn Progn Res 2019; 3:13. [PMID: 31338426] [PMCID: PMC6621956] [DOI: 10.1186/s41512-019-0059-4]
Abstract
Over the past few years, evidence synthesis has become essential to investigate and improve the generalizability of medical research findings. This strategy often involves a meta-analysis to formally summarize quantities of interest, such as relative treatment effect estimates. The use of meta-analysis methods is, however, less straightforward in prognosis research because substantial variation exists in research objectives, analysis methods and the level of reported evidence. We present a gentle overview of statistical methods that can be used to summarize data of prognostic factor and prognostic model studies. We discuss how aggregate data, individual participant data, or a combination thereof can be combined through meta-analysis methods. Recent examples are provided throughout to illustrate the various methods.
Affiliation(s)
- Thomas P.A. Debray
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, Utrecht, 3584 CG The Netherlands
- Cochrane Netherlands, University Medical Center Utrecht, Universiteitsweg 100, Utrecht, 3584 CG The Netherlands
- Valentijn M.T. de Jong
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, Utrecht, 3584 CG The Netherlands
- Karel G.M. Moons
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, Utrecht, 3584 CG The Netherlands
- Cochrane Netherlands, University Medical Center Utrecht, Universiteitsweg 100, Utrecht, 3584 CG The Netherlands
- Richard D. Riley
- Research Institute for Primary Care & Health Sciences, Keele University, Staffordshire, ST5 5BG UK
14
Jones HE, Ades AE, Sutton AJ, Welton NJ. Use of a random effects meta-analysis in the design and analysis of a new clinical trial. Stat Med 2018; 37:4665-4679. [PMID: 30187505] [PMCID: PMC6484819] [DOI: 10.1002/sim.7948]
Abstract
In designing a randomized controlled trial, it has been argued that trialists should consider existing evidence about the likely intervention effect. One approach is to form a prior distribution for the intervention effect based on a meta‐analysis of previous studies and then power the trial on its ability to affect the posterior distribution in a Bayesian analysis. Alternatively, methods have been proposed to calculate the power of the trial to influence the “pooled” estimate in an updated meta‐analysis. These two approaches can give very different results if the existing evidence is heterogeneous, summarised using a random effects meta‐analysis. We argue that the random effects mean will rarely represent the trialist's target parameter, and so, it will rarely be appropriate to power a trial based on its impact upon the random effects mean. Furthermore, the random effects mean will not generally provide an appropriate prior distribution. More appropriate alternatives include the predictive distribution and shrinkage estimate for the most similar study. Consideration of the impact of the trial on the entire random effects distribution might sometimes be appropriate. We describe how beliefs about likely sources of heterogeneity have implications for how the previous evidence should be used and can have a profound impact on the expected power of the new trial. We conclude that the likely causes of heterogeneity among existing studies need careful consideration. In the absence of explanations for heterogeneity, we suggest using the predictive distribution from the meta‐analysis as the basis for a prior distribution for the intervention effect.
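The predictive distribution the authors recommend can be sketched in a few lines of standard meta-analysis arithmetic; a minimal illustration using the DerSimonian-Laird estimator and a normal approximation (the function name is ours, and the paper's own interval would use a t-distribution rather than the normal quantile shown here):

```python
from math import sqrt
from statistics import NormalDist

def random_effects_predictive(yi, vi):
    """DerSimonian-Laird random-effects meta-analysis of study effects
    `yi` with within-study variances `vi`. Returns the pooled mean, the
    between-study variance tau^2, and the SD of the normal-approximation
    predictive distribution N(mu, tau^2 + se(mu)^2) for a new study's
    true effect."""
    k = len(yi)
    w = [1 / v for v in vi]                                # fixed-effect weights
    ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # DL estimator, truncated at 0
    w_re = [1 / (v + tau2) for v in vi]                    # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
    se_mu = sqrt(1 / sum(w_re))
    pred_sd = sqrt(tau2 + se_mu ** 2)                      # predictive SD for a new study
    return mu, tau2, pred_sd

mu, tau2, pred_sd = random_effects_predictive([0.1, 0.5], [0.01, 0.01])
z = NormalDist().inv_cdf(0.975)
print((mu - z * pred_sd, mu + z * pred_sd))  # approximate 95% predictive interval
```

The point the abstract makes falls out of the last line: when tau^2 is large, the predictive SD is much wider than se(mu), so a prior based on the random-effects mean alone would grossly overstate what is known about the next trial's effect.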
Affiliation(s)
- Hayley E Jones
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- A E Ades
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Alex J Sutton
- Department of Health Sciences, University of Leicester, Leicester, UK
- Nicky J Welton
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
15
Martina R, Jenkins D, Bujkiewicz S, Dequen P, Abrams K. The inclusion of real world evidence in clinical development planning. Trials 2018; 19:468. [PMID: 30157904] [PMCID: PMC6116448] [DOI: 10.1186/s13063-018-2769-2]
Abstract
BACKGROUND When designing studies it is common to search the literature to investigate variability estimates to use in sample size calculations. Proprietary data of previously designed trials in a particular indication are also used to obtain estimates of variability. Estimates of treatment effects are typically obtained from randomised controlled clinical trials (RCTs). Based on the observed estimates of treatment effect, variability and the minimum clinically relevant difference to detect, the sample size for a subsequent trial is estimated. However, data from real world evidence (RWE) studies, such as observational studies and other interventional studies in patients in routine clinical practice, are not widely used in a systematic manner when designing studies. In this paper, we propose a framework for inclusion of RWE in planning of a clinical development programme. METHODS In our proposed approach, all evidence, from both RCTs and RWE (i.e. from studies in routine clinical practice), available at the time of designing of a new clinical trial is combined in a Bayesian network meta-analysis (NMA). The results can be used to inform the design of the next clinical trial in the programme. The NMA was performed at key milestones, such as at the end of the phase II trial and prior to the design of key phase III studies. To illustrate the methods, we designed an alternative clinical development programme in multiple sclerosis using RWE through clinical trial simulations. RESULTS Inclusion of RWE in the NMA and the resulting trial simulations demonstrated that 284 patients per arm were needed to achieve 90% power to detect effects of predetermined size in the TRANSFORMS study. For the FREEDOMS and FREEDOMS II clinical trials, 189 patients per arm were required. Overall there was a reduction in sample size of at least 40% across the three phase III studies, which translated to a time savings of at least 6 months for the undertaking of the fingolimod phase III programme.
CONCLUSION The use of RWE resulted in a reduced sample size of the pivotal phase III studies, which led to substantial time savings compared to the approach of sample size calculations without RWE.
Affiliation(s)
- Reynaldo Martina
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- Department of Biostatistics, University of Liverpool, 1-5 Brownlow Street, Liverpool, UK
- David Jenkins
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- School of Health Sciences, University of Manchester, Oxford Road, Manchester, UK
- Sylwia Bujkiewicz
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- Pascale Dequen
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- Evidence Synthesis/Health Economics, Visible Analytics Ltd., Union Way, Oxon, UK
- Keith Abrams
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- on behalf of GetReal Workpackage 1
- Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- Department of Biostatistics, University of Liverpool, 1-5 Brownlow Street, Liverpool, UK
- School of Health Sciences, University of Manchester, Oxford Road, Manchester, UK
- Evidence Synthesis/Health Economics, Visible Analytics Ltd., Union Way, Oxon, UK
16
Salanti G, Nikolakopoulou A, Sutton AJ, Reichenbach S, Trelle S, Naci H, Egger M. Planning a future randomized clinical trial based on a network of relevant past trials. Trials 2018; 19:365. [PMID: 29996869] [PMCID: PMC6042258] [DOI: 10.1186/s13063-018-2740-2]
Abstract
Background The important role of network meta-analysis of randomized clinical trials in health technology assessment and guideline development is increasingly recognized. This approach has the potential to obtain conclusive results earlier than with new standalone trials or conventional, pairwise meta-analyses. Methods Network meta-analyses can also be used to plan future trials. We introduce a four-step framework that aims to identify the optimal design for a new trial that will update the existing evidence while minimizing the required sample size. The new trial designed within this framework does not need to include all competing interventions and comparisons of interest and can contribute direct and indirect evidence to the updated network meta-analysis. We present the method by virtually planning a new trial to compare biologics in rheumatoid arthritis and a new trial to compare two drugs for relapsing-remitting multiple sclerosis. Results A trial design based on updating the evidence from a network meta-analysis of relevant previous trials may require a considerably smaller sample size to reach the same conclusion compared with a trial designed and analyzed in isolation. Challenges of the approach include the complexity of the methodology and the need for a coherent network meta-analysis of previous trials with little heterogeneity. Conclusions When used judiciously, conditional trial design could significantly reduce the required resources for a new study and prevent experimentation with an unnecessarily large number of participants.
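The core idea of sizing a new trial conditional on the existing pooled evidence can be caricatured in a few lines; a deliberately simplified sketch (function name and assumptions are ours: fixed-effect inverse-variance updating, a continuous outcome, and a new trial that simply reproduces the current pooled mean, rather than the paper's full conditional-power calculation over a network):

```python
from math import ceil
from statistics import NormalDist

def extra_patients_needed(m0, prec0, sigma2, alpha=0.05):
    """Smallest total sample size for a new balanced two-arm trial such
    that the fixed-effect inverse-variance update of an existing pooled
    estimate (mean m0, precision prec0 = 1/var) reaches conventional
    two-sided significance, assuming the new trial estimates exactly m0.
    Illustrative only."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    prec_needed = z ** 2 / m0 ** 2 - prec0   # extra precision still required
    if prec_needed <= 0:
        return 0                             # existing evidence already conclusive
    # A balanced two-arm trial of n patients in total estimates the mean
    # difference with variance 4*sigma2/n, i.e. precision n/(4*sigma2).
    return ceil(4 * sigma2 * prec_needed)

print(extra_patients_needed(m0=0.3, prec0=30, sigma2=1.0))
```

Even this caricature shows the abstract's headline result: because the new trial inherits the precision already accumulated in the network, it can be far smaller than a trial powered in isolation for the same effect.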
Affiliation(s)
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Adriani Nikolakopoulou
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Alex J Sutton
- Department of Health Sciences, College of Medicine, Biological Sciences and Psychology, University of Leicester, Leicester, UK
- Stephan Reichenbach
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Department of Rheumatology, Immunology and Allergiology, University Hospital, University of Bern, Bern, Switzerland
- Sven Trelle
- CTU Bern, University of Bern, Bern, Switzerland
- Huseyin Naci
- LSE Health, Department of Health Policy, London School of Economics and Political Science, London, UK
- Matthias Egger
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
17
Tam WWS, Lo KKH, Khalechelvam P, Seah J, Goh SYS. Is the information of systematic reviews published in nursing journals up-to-date? a cross-sectional study. BMC Med Res Methodol 2017; 17:151. [PMID: 29178832] [PMCID: PMC5702238] [DOI: 10.1186/s12874-017-0432-3]
Abstract
BACKGROUND An up-to-date systematic review is important for researchers to decide whether to embark on new research or continue supporting ongoing studies. The aim of this study is to examine the time taken between the last search, submission, acceptance and publication dates of systematic reviews published in nursing journals. METHODS Nursing journals indexed in Journal Citation Reports were first identified. Thereafter, systematic reviews published in these journals in 2014 were extracted from three databases. The quality of the systematic reviews was evaluated using the AMSTAR tool. The last search, submission, acceptance, online publication and full publication dates and other characteristics of the systematic reviews were recorded. The time taken between the five dates was then computed. Descriptive statistics were used to summarize the time differences; non-parametric statistics were used to examine the association between the time taken from the last search to full publication and other potential factors, including funding support, submission during holiday periods, number of records retrieved from the database search, inclusion of a meta-analysis, and quality of the review. RESULTS A total of 107 nursing journals were included in this study, from which 1070 articles were identified through the database search. After screening for eligibility, 202 systematic reviews were included in the analysis. The quality of these reviews was low, with a median score of 3 out of 11. A total of 172 (85.1%), 72 (35.6%), 153 (75.7%) and 149 (73.8%) systematic reviews provided their last search, submission, acceptance and online publication dates, respectively. The median numbers of days taken from the last search to acceptance and to full publication were, respectively, 393 (IQR: 212-609) and 669 (427-915), whereas that from submission to full publication was 365 (243-486). Moreover, the median numbers of days from the last search to submission and from submission to online publication were 167.5 (53.5-427) and 153 (92-212), respectively. No significant associations were found between these time lags and the potential factors examined. CONCLUSION The median time from the last search to acceptance for systematic reviews published in nursing journals was 393 days. Readers of systematic reviews are advised to check the date of the last search to ensure that up-to-date evidence is consulted for effective clinical decision-making.
Affiliation(s)
- Wilson W. S. Tam
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore, 117597 Singapore
- Kenneth K. H. Lo
- 4/F, JC School of Public Health and Primary Care, The Chinese University of Hong Kong, Shatin, HKSAR Hong Kong
- Parames Khalechelvam
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore, 117597 Singapore
- Joey Seah
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore, 117597 Singapore
- Shawn Y. S. Goh
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, Level 2, Clinical Research Centre, Block MD11, 10 Medical Drive, Singapore, 117597 Singapore
18
Clayton GL, Smith IL, Higgins JPT, Mihaylova B, Thorpe B, Cicero R, Lokuge K, Forman JR, Tierney JF, White IR, Sharples LD, Jones HE. The INVEST project: investigating the use of evidence synthesis in the design and analysis of clinical trials. Trials 2017; 18:219. [PMID: 28506284] [PMCID: PMC5433067] [DOI: 10.1186/s13063-017-1955-y]
Abstract
BACKGROUND When designing and analysing clinical trials, using previous relevant information, perhaps in the form of evidence syntheses, can reduce research waste. We conducted the INVEST (INVestigating the use of Evidence Synthesis in the design and analysis of clinical Trials) survey to summarise the current use of evidence synthesis in trial design and analysis, to capture opinions of trialists and methodologists on such use, and to understand any barriers. METHODS Our sampling frame was all delegates attending the International Clinical Trials Methodology Conference in November 2015. Respondents were asked to indicate (1) their views on the use of evidence synthesis in trial design and analysis, (2) their own use during the past 10 years and (3) the three greatest barriers to use in practice. RESULTS Of approximately 638 attendees of the conference, 106 (17%) completed the survey, half of whom were statisticians. Support was generally high for using a description of previous evidence, a systematic review or a meta-analysis in trial design. Generally, respondents did not seem to be using evidence syntheses as often as they felt they should. For example, only 50% (42/84 relevant respondents) had used a meta-analysis to inform whether a trial is needed compared with 74% (62/84) indicating that this is desirable. Only 6% (5/81 relevant respondents) had used a value of information analysis to inform sample size calculations versus 22% (18/81) indicating support for this. Surprisingly large numbers of participants indicated support for, and previous use of, evidence syntheses in trial analysis. For example, 79% (79/100) of respondents indicated that external information about the treatment effect should be used to inform aspects of the analysis. The greatest perceived barrier to using evidence synthesis methods in trial design or analysis was time constraints, followed by a belief that the new trial was the first in the area. 
CONCLUSIONS Evidence syntheses can be resource-intensive, but their use in informing the design, conduct and analysis of clinical trials is widely considered desirable. We advocate additional research, training and investment in resources dedicated to ways in which evidence syntheses can be undertaken more efficiently, offering the potential for cost savings in the long term.
Affiliation(s)
- Gemma L. Clayton
- School of Social and Community Medicine, Faculty of Health Sciences, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS UK
- Isabelle L. Smith
- Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, UK
- Julian P. T. Higgins
- School of Social and Community Medicine, Faculty of Health Sciences, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS UK
- Borislava Mihaylova
- Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Benjamin Thorpe
- Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, UK
- Robert Cicero
- Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, UK
- Kusal Lokuge
- Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Julia R. Forman
- Cambridge Clinical Trials Unit, University of Cambridge, Cambridge, UK
- Ian R. White
- MRC Biostatistics Unit, Cambridge Institute of Public Health, Cambridge, UK
- Hayley E. Jones
- School of Social and Community Medicine, Faculty of Health Sciences, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS UK
19
Chow JTY, Lam K, Naeem A, Akanda ZZ, Si FF, Hodge W. The pathway to RCTs: how many roads are there? Examining the homogeneity of RCT justification. Trials 2017; 18:51. [PMID: 28148278] [PMCID: PMC5288880] [DOI: 10.1186/s13063-017-1804-z]
Abstract
Background Randomized controlled trials (RCTs) form the foundational background of modern medical practice. They are considered the highest quality of evidence, and their results help inform decisions concerning drug development and use, preventive therapies, and screening programs. However, the inputs used to justify conducting an RCT have not been studied. Methods We reviewed the MEDLINE and EMBASE databases across six specialties (Ophthalmology, Otorhinolaryngology (ENT), General Surgery, Psychiatry, Obstetrics-Gynecology (OB-GYN), and Internal Medicine) and randomly chose 25 RCTs from each specialty except for Otorhinolaryngology (20 studies) and Internal Medicine (28 studies). For each RCT, we recorded information relating to the justification for conducting RCTs, such as the average study size cited, number of studies cited, and types of studies cited. The justification varied widely both within and between specialties. Results For Ophthalmology and OB-GYN, the average study sizes cited were around 1100 patients, whereas they were around 500 patients for Psychiatry and General Surgery. Between specialties, the average number of studies cited ranged from around 4.5 for ENT to around 10 for Ophthalmology, but the standard deviations were large, indicating that there was even more discrepancy within each specialty. When standardizing by the sample size of the RCT, some of the discrepancies between and within specialties can be explained, but not all. On average, Ophthalmology papers cited review articles the most (2.96 studies per RCT) compared to fewer than 1.5 studies per RCT for all other specialties. Conclusions The justifications for RCTs vary widely both within and between specialties, and the justification for conducting RCTs is not standardized.
Affiliation(s)
- Jeffrey Tin Yu Chow
- Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Kevin Lam
- Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Abdul Naeem
- Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Zarique Z Akanda
- Faculty of Science, The University of Western Ontario, London, Canada
- Francie Fengqin Si
- Department of Ophthalmology, Ivey Eye Institute, St. Joseph's Health Care London, London, Canada
- William Hodge
- Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario, London, Canada
- Department of Ophthalmology, Ivey Eye Institute, St. Joseph's Health Care London, London, Canada
20
Quality analysis of randomized controlled trials in the International Journal of Impotence Research: quality assessment and relevant clinical impact. Int J Impot Res 2016; 29:65-69. [DOI: 10.1038/ijir.2016.48]
21
Kelly LE, Davies EH, Saint-Raymond A, Tomasi P, Offringa M. Important issues in the justification of a control treatment in paediatric drug trials. Arch Dis Child 2016; 101:962-7. [PMID: 27052950] [DOI: 10.1136/archdischild-2016-310644]
Abstract
OBJECTIVE The value of comparative effectiveness trials in informing clinical and policy decisions depends heavily on the choice of control arm (comparator). Our objective is to identify challenges in comparator reasoning and to determine justification criteria for selecting a control arm in paediatric clinical trials. DESIGN A literature search was completed to identify existing sources of guidance on comparator selection. Subsequently, we reviewed a randomly selected sample of comparators selected for paediatric investigation plans (PIPs) adopted by the Paediatric Committee of the European Medicines Agency in 2013. We gathered descriptive information and evaluated their review process to identify challenges and compromises between regulators and sponsors with regard to the selection of the comparator. A tool to help investigators justify the selection of active controls and placebo arms was developed using the existing literature and empirical data. RESULTS Justifying comparator selection was a challenge in 28% of PIPs. The following challenging paediatric issues in the decision-making process were identified: use of off-label medications as comparators, ethical and safe use of placebo, duration of placebo use, an undefined optimal dosing strategy, lack of age-appropriate safety and efficacy data, and drug dosing not supported by extrapolation of safety/efficacy evidence from other populations. CONCLUSIONS In order to generate trials that will inform clinical decision-making and support marketing authorisations, researchers must systematically and transparently justify their selection of the comparator arm for their study. This report highlights key areas for justification in the choice of comparator in paediatric clinical trials.
Affiliation(s)
- Lauren E Kelly
- Child Health and Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
- Martin Offringa
- Child Health and Evaluative Sciences, The Hospital for Sick Children, Toronto, Canada
22
Imberger G, Thorlund K, Gluud C, Wetterslev J. False-positive findings in Cochrane meta-analyses with and without application of trial sequential analysis: an empirical review. BMJ Open 2016; 6:e011890. [PMID: 27519923] [PMCID: PMC4985805] [DOI: 10.1136/bmjopen-2016-011890]
Abstract
OBJECTIVE Many published meta-analyses are underpowered. We explored the role of trial sequential analysis (TSA) in assessing the reliability of conclusions in underpowered meta-analyses. METHODS We screened The Cochrane Database of Systematic Reviews and selected 100 meta-analyses with a binary outcome, a negative result and sufficient power. We defined a negative result as one where the 95% CI for the effect included 1.00, a positive result as one where the 95% CI did not include 1.00, and sufficient power as the required information size for 80% power, 5% type 1 error, relative risk reduction of 10% or number needed to treat of 100, and control event proportion and heterogeneity taken from the included studies. We re-conducted the meta-analyses, using conventional cumulative techniques, to measure how many false positives would have occurred if these meta-analyses had been updated after each new trial. For each false positive, we performed TSA, using three different approaches. RESULTS We screened 4736 systematic reviews to find 100 meta-analyses that fulfilled our inclusion criteria. Using conventional cumulative meta-analysis, false positives were present in seven of the meta-analyses (7%, 95% CI 3% to 14%), occurring more than once in three. The total number of false positives was 14 and TSA prevented 13 of these (93%, 95% CI 68% to 98%). In a post hoc analysis, we found that Cochrane meta-analyses that are negative are 1.67 times more likely to be updated (95% CI 0.92 to 2.68) than those that are positive. CONCLUSIONS We found false positives in 7% (95% CI 3% to 14%) of the included meta-analyses. Owing to limitations of external validity and to the decreased likelihood of updating positive meta-analyses, the true proportion of false positives in meta-analysis is probably higher. TSA prevented 93% of the false positives (95% CI 68% to 98%).
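The "required information size" criterion that underlies the power assessment described above can be sketched with the standard formula for a binary outcome; a minimal heuristic illustration (the function name is ours, and a full trial sequential analysis would also apply sequential monitoring boundaries, not just the information size):

```python
from math import ceil
from statistics import NormalDist

def required_information_size(pc, rrr, alpha=0.05, power=0.8, diversity=0.0):
    """Heuristic required information size (total patients across trials)
    for a meta-analysis with a binary outcome: a two-arm sample size
    calculation for control event proportion `pc` and relative risk
    reduction `rrr`, optionally inflated for between-trial diversity D^2
    as in trial sequential analysis."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    pe = pc * (1 - rrr)                 # intervention event proportion
    pbar = (pc + pe) / 2                # average event proportion
    delta = pc - pe                     # risk difference to detect
    n = 4 * (z_a + z_b) ** 2 * pbar * (1 - pbar) / delta ** 2
    return ceil(n / (1 - diversity))    # diversity-adjusted information size

# the abstract's design parameters: RRR 10%, alpha 5%, power 80%
print(required_information_size(pc=0.2, rrr=0.1))
```

The example makes the abstract's premise tangible: detecting a 10% relative risk reduction at a 20% control event rate needs roughly twelve thousand patients, far more than most meta-analyses accumulate, which is why spuriously "positive" updates along the way are common.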
Affiliation(s)
- Georgina Imberger
- Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen, Denmark
- Department of Anaesthesia & Perioperative Medicine, Monash University, Melbourne, Victoria, Australia
- Kristian Thorlund
- Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen, Denmark
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
- Christian Gluud
- Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen, Denmark
- Jørn Wetterslev
- Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen, Denmark
23

24
Efthimiou O, Debray TPA, van Valkenhoef G, Trelle S, Panayidou K, Moons KGM, Reitsma JB, Shang A, Salanti G. GetReal in network meta-analysis: a review of the methodology. Res Synth Methods 2016; 7:236-63. [PMID: 26754852] [DOI: 10.1002/jrsm.1195]
Abstract
Pairwise meta-analysis is an established statistical tool for synthesizing evidence from multiple trials, but it is informative only about the relative efficacy of two specific interventions. The usefulness of pairwise meta-analysis is thus limited in real-life medical practice, where many competing interventions may be available for a certain condition and studies informing some of the pairwise comparisons may be lacking. This commonly encountered scenario has led to the development of network meta-analysis (NMA). In the last decade, several applications, methodological developments, and empirical studies in NMA have been published, and the area is thriving as its relevance to public health is increasingly recognized. This article presents a review of the relevant literature on NMA methodology aiming to pinpoint the developments that have appeared in the field. Copyright © 2016 John Wiley & Sons, Ltd.
Affiliation(s)
- Orestis Efthimiou: Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
- Thomas P A Debray: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Gert van Valkenhoef: Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Sven Trelle: Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland; CTU Bern, Department of Clinical Research, University of Bern, Bern, Switzerland
- Klea Panayidou: Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Karel G M Moons: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Johannes B Reitsma: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Georgia Salanti: Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
25
Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol 2015; 15:108. [PMID: 26715462] [PMCID: PMC4696153] [DOI: 10.1186/s12874-015-0102-2]
Abstract
Background Chalmers and Glasziou's 2014 paper recommends that research funding bodies mandate that proposals for additional primary research be built on systematic reviews of existing evidence showing what is already known. Jones et al. found that 11 (23%) of 48 trials funded during 2006-8 by the National Institute for Health Research Health Technology Assessment (NIHR HTA) Programme did not reference a systematic review. That study did not explore the reasons why trials did not reference a systematic review, nor did it consider trials using other evidence in the absence of a systematic review. Referencing a systematic review may not be possible in certain circumstances, for instance if no systematic review addresses the question being proposed in the trial. The current study extended Jones' study by exploring the reasons why trials did not reference a systematic review, and included a more recent cohort of trials funded in 2013 to determine whether there were any changes in the referencing or use of systematic reviews. Methods Two cohorts of NIHR HTA randomised controlled trials were included. Cohort I included the same trials as Jones et al. (with the exception of one trial, which was discontinued). Cohort II included NIHR HTA trials funded in 2013. Data extraction was undertaken independently by two reviewers using full applications and trial protocols. Descriptive statistics were used, and no formal statistical analyses were conducted. Results Five (11%) of the 47 trials funded during 2006-2008 did not reference a systematic review; all five had justified reasons for not doing so. All trials from Cohort II referenced a systematic review. A quarter of all trials with a preceding systematic review used a primary outcome different from those stated in the reviews. Conclusions The NIHR requires that proposals for new primary research be justified by existing evidence, and the findings of this study confirm adherence to this requirement, with a high rate of applications using systematic reviews. Electronic supplementary material The online version of this article (doi:10.1186/s12874-015-0102-2) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Sheetal Bhurke: Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Andrew Cook: National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK
- Anna Tallant: National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Amanda Young: Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- Elaine Williams: National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK
- James Raftery: Wessex Institute, University of Southampton Alpha House, University of Southampton Science Park, Southampton, SO16 7NS, UK; National Institute for Health Research (NIHR) Evaluation, Trials and Studies Coordinating Centre (NETSCC), University of Southampton, Southampton, SO16 7NS, UK; University of Southampton and University Hospital Southampton NHS Foundation Trusts, Southampton, UK
26
Debray TPA, Moons KGM, van Valkenhoef G, Efthimiou O, Hummel N, Groenwold RHH, Reitsma JB. Get real in individual participant data (IPD) meta-analysis: a review of the methodology. Res Synth Methods 2015; 6:293-309. [PMID: 26287812] [PMCID: PMC5042043] [DOI: 10.1002/jrsm.1160]
Abstract
Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research question. We conducted a literature review to provide an overview of methods for performing an IPD-MA using evidence from clinical trials or non-randomized studies when investigating treatment efficacy. With this review, we aim to assist researchers in choosing the appropriate methods and provide recommendations on their implementation when planning and conducting an IPD-MA.
Affiliation(s)
- Thomas P. A. Debray: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Karel G. M. Moons: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Gert van Valkenhoef: Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Orestis Efthimiou: Department of Hygiene and Epidemiology, School of Medicine, University of Ioannina, Ioannina, Greece
- Noemi Hummel: Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Rolf H. H. Groenwold: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Johannes B. Reitsma: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands; The Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
27
Chevance A, Schuster T, Steele R, Ternès N, Platt RW. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk. J Clin Epidemiol 2015; 68:1138-43. [PMID: 26092287] [DOI: 10.1016/j.jclinepi.2015.05.030]
Abstract
OBJECTIVES The robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. STUDY DESIGN AND SETTING A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis, and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. RESULTS In the example data, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Moreover, for some previously inconclusive meta-analyses, a study update might yield a statistically significant increase in kidney injury risk associated with higher statin exposure. CONCLUSIONS The illustrated contour approach should become a standard tool for assessing the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question.
Affiliation(s)
- Aurélie Chevance: Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, 1020 Pine Ave W., Montreal, Quebec, Canada H3A 1A2
- Tibor Schuster: Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, 1020 Pine Ave W., Montreal, Quebec, Canada H3A 1A2
- Russell Steele: Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Jewish General Hospital, 3755 Cote Ste-Catherine, H-461, Montreal, Quebec, Canada H3T 1E2; Department of Mathematics and Statistics, McGill University, Burnside Hall, 805 Sherbrooke Street West, Montreal, Quebec, Canada H3A 0B9
- Nils Ternès: Service de biostatistique et d'épidémiologie, Gustave Roussy, 39 rue Camille Desmoulins, Villejuif, France; CESP Centre for Research in Epidemiology and Population Health, INSERM U1018, Paris-Sud University, 12 avenue Paul Vaillant Couturier, Villejuif, France
- Robert W Platt: Department of Epidemiology, Biostatistics, and Occupational Health, McGill University, 1020 Pine Ave W., Montreal, Quebec, Canada H3A 1A2; Department of Pediatrics, McGill University, 1001 Decarie Boulevard, Montreal, Quebec, Canada H4A 3J1
28
[Research in health care professions - value for care delivery as the focus]. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2014; 108 Suppl 1:S2-3. [PMID: 25458394] [DOI: 10.1016/j.zefq.2014.09.009]
29
Rhodes KM, Turner RM, Higgins JPT. Predictive distributions were developed for the extent of heterogeneity in meta-analyses of continuous outcome data. J Clin Epidemiol 2014; 68:52-60. [PMID: 25304503] [PMCID: PMC4270451] [DOI: 10.1016/j.jclinepi.2014.08.012]
Abstract
OBJECTIVES Estimation of between-study heterogeneity is problematic in small meta-analyses. Bayesian meta-analysis is beneficial because it allows incorporation of external evidence on heterogeneity. To facilitate this, we provide empirical evidence on the likely heterogeneity between studies in meta-analyses relating to specific research settings. STUDY DESIGN AND SETTING Our analyses included 6,492 continuous-outcome meta-analyses within the Cochrane Database of Systematic Reviews. We investigated the influence of meta-analysis settings on heterogeneity by modeling study data from all meta-analyses on the standardized mean difference scale. Meta-analysis setting was described according to outcome type, intervention comparison type, and medical area. Predictive distributions for between-study variance expected in future meta-analyses were obtained, which can be used directly as informative priors. RESULTS Among outcome types, heterogeneity was found to be lowest in meta-analyses of obstetric outcomes. Among intervention comparison types, heterogeneity was lowest in meta-analyses comparing two pharmacologic interventions. Predictive distributions are reported for different settings. In two example meta-analyses, incorporating external evidence led to a more precise heterogeneity estimate. CONCLUSION Heterogeneity was influenced by meta-analysis characteristics. Informative priors for between-study variance were derived for each specific setting. Our analyses thus assist the incorporation of realistic prior information into meta-analyses including few studies.
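Using such a predictive distribution as an informative prior can be sketched with a small grid computation. Everything below is illustrative: the prior parameters and study data are hypothetical placeholders, not the setting-specific values reported in the paper, and the model is a basic normal random-effects likelihood with the pooled mean profiled out.

```python
import math

def profile_loglik_tau2(tau2, y, v):
    # Profile log-likelihood of the between-study variance tau^2 in a
    # normal random-effects model, with the pooled mean profiled out.
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(0.5 * math.log(wi) - 0.5 * wi * (yi - mu) ** 2
               for wi, yi in zip(w, y))

def lognormal_logpdf(x, m, s):
    # Log-density of a log-normal prior for tau^2: log(tau^2) ~ Normal(m, s^2)
    return (-math.log(x * s * math.sqrt(2.0 * math.pi))
            - (math.log(x) - m) ** 2 / (2.0 * s ** 2))

def posterior_mean_tau2(y, v, prior_m, prior_s, grid_max=2.0, steps=2000):
    # Grid approximation to the posterior mean of tau^2 under the
    # informative log-normal prior (placeholder values, see lead-in).
    grid = [grid_max * (i + 1) / steps for i in range(steps)]
    log_post = [profile_loglik_tau2(t, y, v) + lognormal_logpdf(t, prior_m, prior_s)
                for t in grid]
    top = max(log_post)                                  # stabilise exp()
    weights = [math.exp(lp - top) for lp in log_post]
    return sum(t * wgt for t, wgt in zip(grid, weights)) / sum(weights)

# Three small hypothetical studies (standardized mean differences and
# within-study variances) combined with an illustrative prior:
y = [0.10, 0.35, -0.05]
v = [0.04, 0.05, 0.06]
tau2_hat = posterior_mean_tau2(y, v, prior_m=-2.0, prior_s=1.5)
```

With only three studies the prior dominates, which is exactly the situation the paper's setting-specific predictive distributions are meant to help with.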
Affiliation(s)
- Kirsty M Rhodes: MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge, CB2 0SR, UK
- Rebecca M Turner: MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge, CB2 0SR, UK
- Julian P T Higgins: School of Social and Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK; Centre for Reviews and Dissemination, A/B Block, Alcuin College, University of York, York, YO10 5DD, UK
30
Cook JA, Hislop JM, Altman DG, Briggs AH, Fayers PM, Norrie JD, Ramsay CR, Harvey IM, Vale LD. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: two surveys of trialists' practice. Clin Trials 2014; 11:300-308. [PMID: 24603006] [DOI: 10.1177/1740774514521907]
Abstract
BACKGROUND Central to the design of a randomised controlled trial (RCT) is a calculation of the number of participants needed. This is typically achieved by specifying a target difference, which enables the trial to identify a difference of a particular magnitude should one exist. Seven methods have been proposed for formally determining what the target difference should be. In practice, however, it may be driven by convenience or some other informal basis. It is unclear how aware the trialist community is of these formal methods or whether they are used. PURPOSE To determine current practice regarding the specification of the target difference by surveying trialists. METHODS Two surveys were conducted: (1) members of the Society for Clinical Trials (SCT), who were invited to complete an online survey through the society's email distribution list and were asked about their awareness of, use of, and willingness to recommend methods; (2) leading UK- and Ireland-based trialists, to whom the survey was sent via UK Clinical Research Collaboration registered Clinical Trials Units, Medical Research Council UK Hubs for Trial Methodology Research, and the Research Design Services of the National Institute for Health Research. The second survey also included questions about the most recent trial developed by the respondent's group. RESULTS Survey 1: of the 1182 members on the SCT membership email distribution list, 180 responses were received (15%). Awareness of methods ranged from 69 (38%) for health economic methods to 162 (90%) for the pilot study method. Willingness to recommend among those who had used a particular method ranged from 56% for the opinion-seeking method to 89% for the review of evidence-base method. Survey 2: of the 61 surveys sent out, 34 (56%) responses were received. Awareness of methods ranged from 33 (97%) for the review of evidence-base and pilot methods to 14 (41%) for the distribution method. The highest level of willingness to recommend among users was for the anchor method (87%). Based upon the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. LIMITATIONS The response rates achieved were relatively low despite the surveys being short, well presented, and having utilised reminders. CONCLUSION Substantial variations in practice exist, with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches for sample size estimation may increase both awareness and use of appropriate formal methods.
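Whichever method fixes the target difference, the calculation that follows is usually the standard normal-approximation formula. The sketch below is illustrative only (a continuous outcome with two equal arms; not taken from the paper) and shows how directly the target difference drives the sample size:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(target_diff, sd, alpha=0.05, power=0.80):
    # Standard two-arm sample-size formula for a continuous outcome:
    # n per arm = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / target_diff^2
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / target_diff ** 2)

# A target difference of 0.5 standard deviations at 80% power, 5% alpha:
n = n_per_arm(0.5, 1.0)
```

Because the target difference enters squared in the denominator, halving it roughly quadruples the required sample size, which is why the choice of method for specifying it matters so much.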
Affiliation(s)
- Jonathan A Cook: Health Services Research Unit, University of Aberdeen, Aberdeen, UK; Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Jennifer M Hislop: Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
- Doug G Altman: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Andrew H Briggs: Health Economics and Health Technology Assessment, University of Glasgow, Glasgow, UK
- Peter M Fayers: Population Health, University of Aberdeen, Aberdeen, UK; Department of Cancer Research and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- John D Norrie: Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Craig R Ramsay: Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Ian M Harvey: Faculty of Medicine and Health Sciences, University of East Anglia, Norwich, UK
- Luke D Vale: Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
31
Colosia A, Njue A, Trask PC, Olivares R, Khan S, Abbe A, Police R, Wang J, Ruiz-Soto R, Kaye JA, Awan F. Clinical efficacy and safety in relapsed/refractory diffuse large B-cell lymphoma: a systematic literature review. Clinical Lymphoma Myeloma & Leukemia 2014; 14:343-355.e6. [PMID: 24768510] [DOI: 10.1016/j.clml.2014.02.012]
Abstract
This systematic literature review was designed to assess information on the clinical efficacy and safety of interventions used in the treatment of refractory or relapsed diffuse large B-cell lymphoma (R/R DLBCL) and to perform a meta-analysis if possible. We searched databases (PubMed, EMBASE, and the Cochrane Library) for English-language articles from 1997 to August 2, 2012, as well as conference abstracts, bibliographic reference lists, and the ClinicalTrials.gov database, for phase II to IV studies with results. Studies had to report on patients with R/R DLBCL who were not eligible for high-dose therapy (HDT) with stem cell transplantation (SCT) (autologous or allogeneic). Mixed-type non-Hodgkin lymphoma (NHL) studies were required to report R/R DLBCL outcomes separately. We identified 55 studies that presented outcomes data separately for patients with R/R DLBCL. Of 7 comparative studies, only 4 were randomized controlled trials (RCTs). In the 2 RCTs with a common regimen, the patient populations differed too greatly to perform a valid meta-analysis. The 48 single-arm studies identified were typically small (n < 50 in most), with 31% reporting median progression-free survival (PFS) or overall survival (OS) specifically for the R/R DLBCL population. In these studies, median OS ranged from 4 to 13 months. The small number of RCTs in R/R DLBCL precludes identifying optimal treatments. Small sample size, infrequent reporting of OS and PFS separated by histologic type, and limited information on patient characteristics also hinder comparison of results. Randomized studies are needed to demonstrate which current therapies have advantages for improving survival and other important clinical outcomes in patients with R/R DLBCL.
Affiliation(s)
- Ann Colosia: RTI Health Solutions, Research Triangle Park, NC
- Annete Njue: RTI Health Solutions, Didsbury, Manchester, United Kingdom
- Peter C Trask: Global Evidence and Value Development, Sanofi, Cambridge, MA
- Robert Olivares: Global Evidence and Value Development, Sanofi, Chilly-Mazarin, France
- Shahnaz Khan: RTI Health Solutions, Research Triangle Park, NC
- Adeline Abbe: Global Evidence and Value Development, Sanofi, Chilly-Mazarin, France
- Jianmin Wang: RTI Health Solutions, Research Triangle Park, NC
32
Debray TPA, Koffijberg H, Nieboer D, Vergouwe Y, Steyerberg EW, Moons KGM. Meta-analysis and aggregation of multiple published prediction models. Stat Med 2014; 33:2341-62. [PMID: 24752993] [DOI: 10.1002/sim.6080]
Abstract
Published clinical prediction models are often ignored during the development of novel prediction models despite similarities in populations and intended usage. The plethora of prediction models that arise from this practice may still perform poorly when applied in other populations. Incorporating prior evidence might improve the accuracy of prediction models and potentially make them more generalizable. Unfortunately, aggregation of prediction models is not straightforward, and methods to combine differently specified models are currently lacking. We propose two approaches for aggregating previously published prediction models when a validation dataset is available: model averaging and stacked regressions. These approaches yield user-friendly stand-alone models that are adjusted for the new validation data. Both approaches rely on weighting to account for model performance and between-study heterogeneity but adopt a different rationale (averaging versus combination) to combine the models. We illustrate their implementation in a clinical example and compare them with established methods for prediction modeling in a series of simulation studies. Results from the clinical datasets and simulation studies demonstrate that aggregation yields prediction models with better discrimination and calibration in the vast majority of scenarios, and results in equivalent performance (compared with developing a novel model from scratch) when validation datasets are relatively large. In conclusion, model aggregation is a promising strategy when several prediction models are available from the literature and a validation dataset is at hand. The aggregation methods do not require existing models to have similar predictors and can be applied when relatively few data are at hand.
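The stacked-regressions idea can be illustrated with a toy version: given predictions from two previously published models on a validation dataset, choose combination weights by least squares. This sketch deliberately omits the paper's refinements (performance- and heterogeneity-based weighting, and any constraints on the weights) and assumes a continuous outcome:

```python
def stack_two_models(pred1, pred2, y):
    # Toy stacked regression for two models, no intercept: choose weights
    # (w1, w2) minimizing sum((y - w1*p1 - w2*p2)^2) via the closed-form
    # 2x2 normal equations.
    a = sum(p * p for p in pred1)                     # sum p1^2
    b = sum(p * q for p, q in zip(pred1, pred2))      # sum p1*p2
    d = sum(q * q for q in pred2)                     # sum p2^2
    r1 = sum(p, ) if False else sum(p * t for p, t in zip(pred1, y))
    r2 = sum(q * t for q, t in zip(pred2, y))
    det = a * d - b * b                               # assumes non-collinear predictions
    w1 = (d * r1 - b * r2) / det
    w2 = (a * r2 - b * r1) / det
    return w1, w2

# If the validation outcome coincides with model 1's predictions,
# stacking recovers weights close to (1, 0):
w1, w2 = stack_two_models([1.0, 2.0, 3.0, 4.0],
                          [2.0, 1.0, 4.0, 3.0],
                          [1.0, 2.0, 3.0, 4.0])
```

In practice one would also guard against collinear model predictions (where the normal equations become singular) and, as in the stacking literature, often constrain the weights to be non-negative.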
Affiliation(s)
- Thomas P A Debray: Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
33
Kulinskaya E, Wood J. Trial sequential methods for meta-analysis. Res Synth Methods 2013; 5:212-20. [PMID: 26052847] [DOI: 10.1002/jrsm.1104]
Abstract
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual difficulties arise in the random effects model. One approach applying sequential meta-analysis to design is 'trial sequential analysis', developed by Wetterslev, Thorlund, Brok, Gluud and others from the Copenhagen Trial Unit. In trial sequential analysis, information size is based on the required sample size of a single new trial, which, in the random effects model, is obtained by simply inflating it in comparison with fixed effects meta-analysis. However, this is not sufficient as, depending on the amount of heterogeneity, a minimum of several new trials may be indicated, and the total number of new patients needed may be substantially reduced by planning an even larger number of small trials. We provide explicit formulae to determine the requisite minimum number of trials and their sample sizes within this framework, which also exemplify the conceptual difficulties referred to. We illustrate all these points with two practical examples, including the well-known meta-analysis of magnesium for myocardial infarction.
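The point that heterogeneity bounds the number of new trials, and not just the number of new patients, can be made concrete with a small calculation. The sketch below is a simplification (a two-arm mean difference with a common, assumed within-trial variance), not the explicit formulae derived in the paper:

```python
import math

def min_new_trials(tau2, target_se):
    # With between-trial variance tau^2, each new trial contributes at most
    # 1/tau^2 precision no matter how large it is, so the number of new
    # trials has a lower bound (attained only in the limit of huge trials).
    target_precision = 1.0 / target_se ** 2
    return math.ceil(tau2 * target_precision)

def patients_per_trial(tau2, sigma2, target_se, k):
    # Per-trial sample size n (two equal arms, per-arm outcome variance
    # sigma2) so that k trials reach the target standard error:
    #   k / (tau2 + 4*sigma2/n) = 1 / target_se^2
    target_precision = 1.0 / target_se ** 2
    per_trial_var = k / target_precision - tau2
    if per_trial_var <= 0:
        raise ValueError("k too small for this tau^2 and target SE")
    return math.ceil(4 * sigma2 / per_trial_var)
```

With tau² = 0.01, sigma² = 1 and a target standard error of 0.05, at least four new trials are needed however large each one is; six trials would need 800 patients each (4800 in total), whereas twelve trials of 200 patients (2400 in total) reach the same target, illustrating the abstract's point that planning a larger number of small trials can substantially reduce the total number of new patients.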
Affiliation(s)
- Elena Kulinskaya: School of Computing Sciences, University of East Anglia, Norwich, UK
34
Jo JK, Autorino R, Chung JH, Kim KS, Lee JW, Baek EJ, Lee SW. Randomized controlled trials in endourology: a quality assessment. J Endourol 2013; 27:1055-60. [PMID: 23767666] [DOI: 10.1089/end.2013.0036]
Abstract
PURPOSE To analyze the quality of studies reporting randomized clinical trials (RCTs) in the field of endourology. MATERIALS AND METHODS RCTs published in the Journal of Endourology from 1993 until 2011 were identified. The Jadad scale, van Tulder scale, and Cochrane Collaboration Risk of Bias Tool (CCRBT) were used to assess the quality of the studies. The review period was divided into early (1993-1999), mid (2000-2005), and late (2006-2011) terms. Studies were categorized by country of origin, subject matter, single- vs multicenter setting, Institutional Review Board (IRB) approval and funding support, and blinding vs nonblinding. RESULTS In total, 3339 articles had been published during the defined review period, of which 165 reported an RCT. There was a significant increase in the number of RCTs published over time, with 18 (2.81%), 43 (4.88%), and 104 (5.72%) studies identified in the early, mid, and late term, respectively (P=0.009). Nevertheless, there was no difference in the quality of reporting, as assessed with the Jadad scale, van Tulder scale, or CCRBT, between the three study terms. On the other hand, significant differences were found between the early, mid, and late terms in both the number of high-quality RCTs that used blinding methodology and the number that had IRB review. CONCLUSION There has been a growing number of Journal of Endourology publications reporting on RCTs over the last two decades. However, the quality of reporting for these studies remains suboptimal. Researchers should focus on a more appropriate description of key features of any given RCT, such as randomization and allocation methods, as well as disclosure of IRB review and financial support.
Affiliation(s)
- Jung Ki Jo: Department of Urology, Hanyang University College of Medicine, Seoul, Korea
35
Beller EM, Chen JKH, Wang ULH, Glasziou PP. Are systematic reviews up-to-date at the time of publication? Syst Rev 2013; 2:36. [PMID: 23714302] [PMCID: PMC3674908] [DOI: 10.1186/2046-4053-2-36]
Abstract
BACKGROUND Systematic reviews provide a synthesis of evidence for practitioners, for clinical practice guideline developers, and for those designing and justifying primary research. Having an up-to-date and comprehensive review is therefore important. Our main objective was to determine the recency of systematic reviews at the time of their publication, as measured by the time from last search date to publication. We also wanted to study the time from search date to acceptance, and from acceptance to publication, and to measure the proportion of systematic reviews with recorded information on search dates and information sources in the abstract and full text of the review. METHODS A descriptive analysis of published systematic reviews indexed in Medline in 2009, 2010 and 2011, with three reviewers independently extracting data. RESULTS Of the 300 systematic reviews included, 271 (90%) provided the date of search in the full-text article, but only 141 (47%) stated this in the abstract. The median (standard error; minimum to maximum) survival time from last search to acceptance was 5.1 (0.58; 0 to 43.8) months (95% confidence interval = 3.9 to 6.2), and from last search to first publication it was 8.0 (0.35; 0 to 46.7) months (95% confidence interval = 7.3 to 8.7). Of the 300 reviews, 295 (98%) stated which databases had been searched, but only 181 (60%) stated the databases in the abstract. Most researchers searched three (35%) or four (21%) databases. The top three most used databases were MEDLINE (79%), the Cochrane Library (76%), and EMBASE (64%). CONCLUSIONS Being able to identify comprehensive, up-to-date reviews is important to clinicians, guideline groups, and those designing clinical trials. This study demonstrates that some reviews have a considerable delay between search and publication, but only 47% of systematic review abstracts stated the last search date and 60% stated the databases that had been searched. Improvements in the quality of abstracts of systematic reviews, and ways to shorten the review and revision processes to make review publication more rapid, are needed.
Affiliation(s)
- Elaine M Beller
- Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD 4229, Australia
36
Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol 2013; 13:50. [PMID: 23530582] [PMCID: PMC3621166] [DOI: 10.1186/1471-2288-13-50]
Abstract
BACKGROUND A systematic review, with or without a meta-analysis, should be undertaken to determine whether the research question of interest has already been answered before a new trial begins. There has been limited research on how systematic reviews are used within the design of new trials; the aim of this study was to investigate how systematic reviews of earlier trials are used in the planning and design of new randomised trials. METHODS Documentation from the application process for all randomised trials funded by the National Institute for Health Research Health Technology Assessment (NIHR HTA) between 2006 and 2008 was obtained. This included the commissioning brief (if appropriate), outline application, minutes of the Board meeting in which the outline application was discussed, full application, detailed project description, referee comments, investigator response to referee comments, Board minutes on the full application, and the trial protocol. Data were extracted on references to systematic reviews and how any such reviews had been used in the planning and design of the trial. RESULTS 50 randomised trials were funded by NIHR HTA during this period and documentation was available for 48 of these. The cohort was predominantly individually randomised parallel trials aiming to detect superiority between two treatments for a single primary outcome. 37 trials (77.1%) referenced a systematic review within the application, and 20 of these (41.7% of the total) used information contained in the systematic review in the design or planning of the new trial. The main areas in which systematic reviews were used were the selection or definition of an outcome to be measured in the trial (7 of 37, 18.9%), the sample size calculation (7, 18.9%), the duration of follow-up (8, 21.6%), and the approach to describing adverse events (9, 24.3%). Boards did not comment on the presence, absence, or use of systematic reviews in any application.
CONCLUSIONS Systematic reviews were referenced in most funded applications but just over half of these used the review to inform the design. There is an expectation from funders that applicants will use a systematic review to justify the need for a new trial but no expectation regarding further use of a systematic review to aid planning and design of the trial. Guidelines for applicants and funders should be developed to promote the use of systematic reviews in the design and planning of randomised trials, to optimise delivery of new studies informed by the most up-to-date evidence base and to minimise waste in research.
Collapse
Affiliation(s)
- Ashley P Jones
- Department of Biostatistics, Faculty of Health & Life Sciences University of Liverpool, Brownlow Street, Liverpool L69 3GS, UK.
37
Hinchliffe SR, Crowther MJ, Phillips RS, Sutton AJ. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy. Res Synth Methods 2012; 4:156-68. [DOI: 10.1002/jrsm.1066]
Affiliation(s)
- Sally R. Hinchliffe, Biostatistics Group, Department of Health Sciences, University of Leicester, Leicester, UK
- Michael J. Crowther, Biostatistics Group, Department of Health Sciences, University of Leicester, Leicester, UK
- Robert S. Phillips, Regional Department of Paediatric Haematology/Oncology, St James's Hospital, Leeds, UK
- Alex J. Sutton, Biostatistics Group, Department of Health Sciences, University of Leicester, Leicester, UK
38
Korhonen A, Hakulinen-Viitanen T, Jylhä V, Holopainen A. Meta-synthesis and evidence-based health care - a method for systematic review. Scand J Caring Sci 2012; 27:1027-34. [DOI: 10.1111/scs.12003]
Affiliation(s)
- Anne Korhonen, Nursing Research Foundation, Helsinki, Finland; Finnish Centre for Evidence-Based Health Care: an Affiliated Centre of the Joanna Briggs Institute, Helsinki, Finland
- Tuovi Hakulinen-Viitanen, Finnish Centre for Evidence-Based Health Care: an Affiliated Centre of the Joanna Briggs Institute, Helsinki, Finland; National Institute for Health and Welfare, Helsinki, Finland
- Virpi Jylhä, Nursing Research Foundation, Helsinki, Finland; Finnish Centre for Evidence-Based Health Care: an Affiliated Centre of the Joanna Briggs Institute, Helsinki, Finland
- Arja Holopainen, Nursing Research Foundation, Helsinki, Finland; Finnish Centre for Evidence-Based Health Care: an Affiliated Centre of the Joanna Briggs Institute, Helsinki, Finland
39
McCarron CE, Pullenayegum EM, Thabane L, Goeree R, Tarride JE. The impact of using informative priors in a Bayesian cost-effectiveness analysis: an application of endovascular versus open surgical repair for abdominal aortic aneurysms in high-risk patients. Med Decis Making 2012; 33:437-50. [PMID: 23054366] [DOI: 10.1177/0272989x12458457]
Abstract
BACKGROUND Bayesian methods have been proposed as a way of synthesizing all available evidence to inform decision making. However, few practical applications of the use of Bayesian methods for combining patient-level data (i.e., trial) with additional evidence (e.g., literature) exist in the cost-effectiveness literature. The objective of this study was to compare a Bayesian cost-effectiveness analysis using informative priors to a standard non-Bayesian nonparametric method to assess the impact of incorporating additional information into a cost-effectiveness analysis. METHODS Patient-level data from a previously published nonrandomized study were analyzed using traditional nonparametric bootstrap techniques and bivariate normal Bayesian models with vague and informative priors. Two different types of informative priors were considered to reflect different valuations of the additional evidence relative to the patient-level data (i.e., "face value" and "skeptical"). The impact of using different distributions and valuations was assessed in a sensitivity analysis. Models were compared in terms of incremental net monetary benefit (INMB) and cost-effectiveness acceptability frontiers (CEAFs). RESULTS The bootstrapping and Bayesian analyses using vague priors provided similar results. The most pronounced impact of incorporating the informative priors was the increase in estimated life years in the control arm relative to what was observed in the patient-level data alone. Consequently, the incremental difference in life years originally observed in the patient-level data was reduced, and the INMB and CEAF changed accordingly. CONCLUSIONS The results of this study demonstrate the potential impact and importance of incorporating additional information into an analysis of patient-level data, suggesting this could alter decisions as to whether a treatment should be adopted and whether more information should be acquired.
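The mechanics behind the reported prior sensitivity can be illustrated with a univariate conjugate normal update — a much simpler setting than the paper's bivariate cost-effectiveness model, and all numbers below are hypothetical:

```python
import math

def posterior_normal(prior_mean, prior_sd, data_mean, data_se):
    """Conjugate normal-normal update: combine a prior with a data likelihood
    summarised by (mean, standard error), both on the same scale."""
    w_prior = 1.0 / prior_sd ** 2
    w_data = 1.0 / data_se ** 2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, math.sqrt(post_var)

# Hypothetical numbers: an incremental effect estimated from the patient-level
# data alone, versus an informative "face value" prior built from literature.
data_mean, data_se = 0.40, 0.25          # trial-only estimate
vague = posterior_normal(0.0, 100.0, data_mean, data_se)
informative = posterior_normal(0.10, 0.15, data_mean, data_se)

print(vague)        # essentially the data alone
print(informative)  # pulled towards the prior, with a smaller posterior sd
```

The informative prior both shifts the posterior mean towards the external evidence and tightens the interval — the same qualitative behaviour the authors report for the control-arm life years.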
Affiliation(s)
- C Elizabeth McCarron, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph’s Healthcare–Hamilton, Hamilton, Ontario, Canada
- Eleanor M Pullenayegum, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Biostatistics Unit, St. Joseph’s Healthcare–Hamilton, Hamilton, Ontario, Canada
- Lehana Thabane, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Biostatistics Unit, St. Joseph’s Healthcare–Hamilton, Hamilton, Ontario, Canada
- Ron Goeree, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph’s Healthcare–Hamilton, Hamilton, Ontario, Canada
- Jean-Eric Tarride, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph’s Healthcare–Hamilton, Hamilton, Ontario, Canada
40
Valkenhoef GV, Tervonen T, Brock BD, Hillege H. Deficiencies in the transfer and availability of clinical trials evidence: a review of existing systems and standards. BMC Med Inform Decis Mak 2012; 12:95. [PMID: 22947211] [PMCID: PMC3534489] [DOI: 10.1186/1472-6947-12-95]
Abstract
Background Decisions concerning drug safety and efficacy are generally based on pivotal evidence provided by clinical trials. Unfortunately, finding the relevant clinical trials is difficult and their results are only available in text-based reports. Systematic reviews aim to provide a comprehensive overview of the evidence in a specific area, but may not provide the data required for decision making. Methods We review and analyze the existing information systems and standards for aggregate level clinical trials information from the perspective of systematic review and evidence-based decision making. Results The technology currently used has major shortcomings, which cause deficiencies in the transfer, traceability and availability of clinical trials information. Specifically, data available to decision makers is insufficiently structured, and consequently the decisions cannot be properly traced back to the underlying evidence. Regulatory submission, trial publication, trial registration, and systematic review produce unstructured datasets that are insufficient for supporting evidence-based decision making. Conclusions The current situation is a hindrance to policy decision makers as it prevents fully transparent decision making and the development of more advanced decision support systems. Addressing the identified deficiencies would enable more efficient, informed, and transparent evidence-based medical decision making.
Affiliation(s)
- Gert van Valkenhoef
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands.
41
Debray TPA, Koffijberg H, Lu D, Vergouwe Y, Steyerberg EW, Moons KGM. Incorporating published univariable associations in diagnostic and prognostic modeling. BMC Med Res Methodol 2012; 12:121. [PMID: 22883206] [PMCID: PMC3548751] [DOI: 10.1186/1471-2288-12-121]
Abstract
BACKGROUND Diagnostic and prognostic literature is overwhelmed with studies reporting univariable predictor-outcome associations. Currently, methods to incorporate such information in the construction of a prediction model are underdeveloped and unfamiliar to many researchers. METHODS This article aims to improve upon an adaptation method originally proposed by Greenland (1987) and Steyerberg (2000) to incorporate previously published univariable associations in the construction of a novel prediction model. The proposed method improves upon the variance estimation component by reconfiguring the adaptation process in established theory and making it more robust. Different variants of the proposed method were tested in a simulation study, where performance was measured by comparing estimated associations with their predefined values according to the Mean Squared Error and coverage of the 90% confidence intervals. RESULTS Results demonstrate that performance of estimated multivariable associations considerably improves for small datasets where external evidence is included. Although the error of estimated associations decreases with increasing amount of individual participant data, it does not disappear completely, even in very large datasets. CONCLUSIONS The proposed method to aggregate previously published univariable associations with individual participant data in the construction of a novel prediction model outperforms established approaches and is especially worthwhile when relatively limited individual participant data are available.
Affiliation(s)
- Thomas P A Debray
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands.
42
Saramago P, Sutton AJ, Cooper NJ, Manca A. Mixed treatment comparisons using aggregate and individual participant level data. Stat Med 2012; 31:3516-36. [PMID: 22764016] [DOI: 10.1002/sim.5442]
Abstract
Mixed treatment comparisons (MTC) extend the traditional pair-wise meta-analytic framework to synthesize information on more than two interventions. Although most MTCs use aggregate data (AD), a proportion of the evidence base might be available at the individual level (IPD). We develop a series of novel Bayesian statistical MTC models to allow for the simultaneous synthesis of IPD and AD, potentially incorporating study and individual level covariates. The effectiveness of different interventions to increase the provision of functioning smoke alarms in households with children was used as a motivating dataset. This included 20 studies (11 AD and 9 IPD), including 11 500 participants. Incorporating the IPD into the network allowed the inclusion of information on subject level covariates, which produced markedly more accurate treatment-covariate interaction estimates than an analysis solely on the AD from all studies. Including evidence at the IPD level in the MTC is desirable when exploring participant level covariates; even when IPD is available only for a fraction of the studies. Such modelling may not only reduce inconsistencies within networks of trials but also assist the estimation of intervention subgroup effects to guide more individualised treatment decisions.
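The two-stage intuition — estimate the treatment-covariate interaction inside each IPD study, then pool it with the interactions that AD studies report — can be sketched outside the Bayesian framework the authors use. This is a simplified frequentist analogue on simulated data, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def interaction_from_ipd(t, x, y):
    """OLS fit of y ~ 1 + t + x + t:x; return the interaction
    coefficient and its standard error."""
    X = np.column_stack([np.ones_like(t), t, x, t * x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[3], float(np.sqrt(cov[3, 3]))

# Simulated IPD studies with a true treatment-by-covariate interaction of 0.5
estimates, ses = [], []
for _ in range(3):
    n = 400
    t = rng.integers(0, 2, n).astype(float)   # treatment indicator
    x = rng.normal(size=n)                    # participant-level covariate
    y = 1.0 + 0.3 * t + 0.2 * x + 0.5 * t * x + rng.normal(size=n)
    b, se = interaction_from_ipd(t, x, y)
    estimates.append(b)
    ses.append(se)

# AD studies contribute only a reported interaction estimate and its SE
estimates += [0.45, 0.60]
ses += [0.20, 0.25]

# Fixed-effect inverse-variance pooling across both evidence types
w = 1.0 / np.asarray(ses) ** 2
pooled = float(np.sum(w * np.asarray(estimates)) / np.sum(w))
print(round(pooled, 2))
```

The IPD studies carry much smaller standard errors for the interaction than the AD summaries, which is the mechanism behind the "markedly more accurate treatment-covariate interaction estimates" the abstract describes.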
Affiliation(s)
- Pedro Saramago
- Centre for Health Economics, University of York, York, UK.
43
Langan D, Higgins JPT, Gregory W, Sutton AJ. Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis. J Clin Epidemiol 2012; 65:511-9. [PMID: 22342263] [DOI: 10.1016/j.jclinepi.2011.10.009]
Abstract
OBJECTIVE We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. STUDY DESIGN AND SETTING A number of augmentations are proposed to one of the most widely used graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. RESULTS The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. CONCLUSION The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the updating prioritization of a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration.
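For a fixed-effect meta-analysis, the significance contour has a closed form: given a hypothetical new study's standard error, one can solve directly for the effect size that puts the updated pooled z-statistic on the significance boundary. A minimal sketch with made-up effect sizes (not the paper's examples):

```python
import math

def contour_effect(effects, ses, new_se, z_crit=1.96):
    """Effect a hypothetical new study (with standard error new_se) would
    need for the updated fixed-effect pooled estimate to sit exactly at
    z = z_crit.  Pooled z = (S + w_new * y) / sqrt(W + w_new), solved for y."""
    w = [1 / s ** 2 for s in ses]
    W = sum(w)
    S = sum(wi * yi for wi, yi in zip(w, effects))
    w_new = 1 / new_se ** 2
    return (z_crit * math.sqrt(W + w_new) - S) / w_new

# Toy meta-analysis of log odds ratios, currently non-significant
effects = [0.10, 0.30, -0.05, 0.20]
ses = [0.20, 0.25, 0.30, 0.15]
y_needed = contour_effect(effects, ses, new_se=0.20)

# Check: adding that study puts the pooled z right on the boundary
w = [1 / s ** 2 for s in ses] + [1 / 0.20 ** 2]
y = effects + [y_needed]
z = sum(wi * yi for wi, yi in zip(w, y)) / math.sqrt(sum(w))
print(round(z, 2))  # 1.96
```

Sweeping `new_se` over the funnel plot's vertical axis traces the full contour; the paper works with this idea graphically rather than as a single solved point.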
Affiliation(s)
- Dean Langan
- Clinical Trials Research Unit (CTRU), University of Leeds, 71-75 Clarendon Road, Leeds, West Yorkshire, LS2 9JT, UK.
44
Rockers PC, Feigl AB, Røttingen JA, Fretheim A, de Ferranti D, Lavis JN, Melberg HO, Bärnighausen T. Study-design selection criteria in systematic reviews of effectiveness of health systems interventions and reforms: a meta-review. Health Policy 2012; 104:206-14. [DOI: 10.1016/j.healthpol.2011.12.007]
45
Viechtbauer W. Learning from the past: refining the way we study treatments. J Clin Epidemiol 2010; 63:980-2. [DOI: 10.1016/j.jclinepi.2010.04.004]
46
Empirical assessment suggests that existing evidence could be used more fully in designing randomized controlled trials. J Clin Epidemiol 2010; 63:983-91. [DOI: 10.1016/j.jclinepi.2010.01.022]
47
Thorlund K, Anema A, Mills E. Interpreting meta-analysis according to the adequacy of sample size. An example using isoniazid chemoprophylaxis for tuberculosis in purified protein derivative negative HIV-infected individuals. Clin Epidemiol 2010; 2:57-66. [PMID: 20865104] [PMCID: PMC2943189] [DOI: 10.2147/clep.s9242]
Abstract
Objective: To illustrate the utility of statistical monitoring boundaries in meta-analysis, to provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size, and to propose a simple method for determining how many patients need to be randomized in a future trial before a meta-analysis can be deemed conclusive. Study design and setting: Prospective meta-analysis of randomized clinical trials (RCTs) that evaluated the effectiveness of isoniazid chemoprophylaxis versus placebo for preventing the incidence of tuberculosis disease among human immunodeficiency virus (HIV)-positive individuals testing purified protein derivative negative. Assessment of meta-analysis precision using trial sequential analysis (TSA) with Lan-DeMets monitoring boundaries, and sample size determination for a future trial to make the meta-analysis conclusive according to the thresholds set by the monitoring boundaries. Results: The meta-analysis included nine trials comprising 2,911 trial participants and yielded a relative risk of 0.74 (95% CI, 0.53–1.04, P = 0.082, I2 = 0%). To deem the meta-analysis conclusive according to the thresholds set by the monitoring boundaries, a future RCT would need to randomize 3,800 participants. Conclusion: Statistical monitoring boundaries provide a framework for interpreting meta-analysis according to the adequacy of sample size and project the required sample size for a future RCT to make a meta-analysis conclusive.
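The required information size underlying such monitoring boundaries is analogous to a single-trial sample-size calculation, optionally inflated for between-trial diversity. A rough sketch — the exact TSA adjustment differs, and the event rates here are illustrative, not the paper's:

```python
from math import ceil
from statistics import NormalDist

def required_information_size(p_control, rrr, alpha=0.05, power=0.80,
                              diversity=0.0):
    """Total patients a meta-analysis needs (analogous to a single-trial
    two-proportion sample size) to detect a relative risk reduction rrr,
    optionally inflated by 1/(1 - D^2) for between-trial diversity."""
    p_exp = p_control * (1 - rrr)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    n_per_arm = ((z_a + z_b) ** 2
                 * (p_control * (1 - p_control) + p_exp * (1 - p_exp))
                 / (p_control - p_exp) ** 2)
    return ceil(2 * n_per_arm / (1 - diversity))

# Illustrative: 10% control event rate, 25% relative risk reduction
print(required_information_size(0.10, 0.25))  # → 4003
```

With I2 = 0% in the meta-analysis above, no diversity inflation applies; a nonzero `diversity` enlarges the required size accordingly.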
Affiliation(s)
- Kristian Thorlund
- Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
48
McCarron CE, Pullenayegum EM, Thabane L, Goeree R, Tarride JE. The importance of adjusting for potential confounders in Bayesian hierarchical models synthesising evidence from randomised and non-randomised studies: an application comparing treatments for abdominal aortic aneurysms. BMC Med Res Methodol 2010; 10:64. [PMID: 20618973] [PMCID: PMC2916004] [DOI: 10.1186/1471-2288-10-64]
Abstract
BACKGROUND Informing health care decision making may necessitate the synthesis of evidence from different study designs (e.g., randomised controlled trials, non-randomised/observational studies). Methods for synthesising different types of studies have been proposed, but their routine use requires development of approaches to adjust for potential biases, especially among non-randomised studies. The objective of this study was to extend a published Bayesian hierarchical model to adjust for bias due to confounding in synthesising evidence from studies with different designs. METHODS In this new methodological approach, study estimates were adjusted for potential confounders using differences in patient characteristics (e.g., age) between study arms. The new model was applied to synthesise evidence from randomised and non-randomised studies from a published review comparing treatments for abdominal aortic aneurysms. We compared the results of the Bayesian hierarchical model adjusted for differences in study arms with: 1) unadjusted results, 2) results adjusted using aggregate study values and 3) two methods for downweighting the potentially biased non-randomised studies. Sensitivity of the results to alternative prior distributions and the inclusion of additional covariates were also assessed. RESULTS In the base case analysis, the estimated odds ratio was 0.32 (0.13, 0.76) for the randomised studies alone and 0.57 (0.41, 0.82) for the non-randomised studies alone. The unadjusted result for the two types combined was 0.49 (0.21, 0.98). Adjusted for differences between study arms, the estimated odds ratio was 0.37 (0.17, 0.77), representing a shift towards the estimate for the randomised studies alone. Adjustment for aggregate values resulted in an estimate of 0.60 (0.28, 1.20). The two methods used for downweighting gave odds ratios of 0.43 (0.18, 0.89) and 0.35 (0.16, 0.76), respectively. Point estimates were robust but credible intervals were wider when using vaguer priors.
CONCLUSIONS Covariate adjustment using aggregate study values does not account for covariate imbalances between treatment arms and downweighting may not eliminate bias. Adjustment using differences in patient characteristics between arms provides a systematic way of adjusting for bias due to confounding. Within the context of a Bayesian hierarchical model, such an approach could facilitate the use of all available evidence to inform health policy decisions.
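One downweighting idea the abstract refers to — inflating the variance of potentially biased non-randomised estimates before inverse-variance pooling — can be illustrated from the summary statistics reported above. The bias variance of 0.10 is an arbitrary illustration, and this simple fixed-effect pool is a sketch, not the authors' hierarchical model:

```python
import math

def pool_or(log_ors, variances):
    """Fixed-effect inverse-variance pooling on the log odds ratio scale."""
    w = [1 / v for v in variances]
    m = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    return math.exp(m)

# Summary estimates reconstructed from the 95% CIs in the abstract:
# randomised OR 0.32 (0.13-0.76), non-randomised OR 0.57 (0.41-0.82)
log_ors = [math.log(0.32), math.log(0.57)]
variances = [((math.log(0.76) - math.log(0.13)) / (2 * 1.96)) ** 2,
             ((math.log(0.82) - math.log(0.41)) / (2 * 1.96)) ** 2]

naive = pool_or(log_ors, variances)
# Downweight the non-randomised estimate by adding an assumed bias variance
inflated = [variances[0], variances[1] + 0.10]
adjusted = pool_or(log_ors, inflated)
print(round(naive, 2), round(adjusted, 2))  # → 0.53 0.45
```

As in the paper, downweighting pulls the combined estimate away from the precise but potentially confounded non-randomised result and towards the randomised one; how far it moves depends entirely on the assumed bias variance.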
Affiliation(s)
- C Elizabeth McCarron, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
- Eleanor M Pullenayegum, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Biostatistics Unit, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
- Lehana Thabane, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Biostatistics Unit, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
- Ron Goeree, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
- Jean-Eric Tarride, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada; Programs for Assessment of Technology in Health (PATH) Research Institute, St. Joseph's Healthcare Hamilton, Hamilton, Ontario, Canada
49
Janszky J, Kovacs N, Gyimesi C, Fogarasi A, Doczi T, Wiebe S. Epilepsy surgery, antiepileptic drug trials, and the role of evidence. Epilepsia 2010; 51:1004-9. [DOI: 10.1111/j.1528-1167.2010.02566.x]