1. Pittelkow MM, de Vries YA, Monden R, Bastiaansen JA, van Ravenzwaaij D. Comparing the evidential strength for psychotropic drugs: a Bayesian meta-analysis. Psychol Med 2021; 51:2752-2761. PMID: 34620261; PMCID: PMC8640368; DOI: 10.1017/s0033291721003950.
Abstract
Approval and prescription of psychotropic drugs should be informed by the strength of evidence for efficacy. Using a Bayesian framework, we examined (1) whether psychotropic drugs are supported by substantial evidence (at the time of approval by the Food and Drug Administration), and (2) whether there are systematic differences across drug groups. Data from short-term, placebo-controlled phase II/III clinical trials for 15 antipsychotics, 16 antidepressants for depression, nine antidepressants for anxiety, and 20 drugs for attention deficit hyperactivity disorder (ADHD) were extracted from FDA reviews. Bayesian model-averaged meta-analysis was performed and strength of evidence was quantified (i.e. BF_BMA). Strength of evidence and trialling varied between drugs. Median evidential strength was extreme for ADHD medication (BF_BMA = 1820.4), moderate for antipsychotics (BF_BMA = 365.4), and considerably lower and more frequently classified as weak or moderate for antidepressants for depression (BF_BMA = 94.2) and anxiety (BF_BMA = 49.8). Varying median effect sizes (ES_schizophrenia = 0.45, ES_depression = 0.30, ES_anxiety = 0.37, ES_ADHD = 0.72), sample sizes (N_schizophrenia = 324, N_depression = 218, N_anxiety = 254, N_ADHD = 189.5), and numbers of trials (k_schizophrenia = 3, k_depression = 5.5, k_anxiety = 3, k_ADHD = 2) might account for these differences. Although most drugs were supported by strong evidence at the time of approval, some had only moderate or ambiguous evidence. These results show the need for more systematic quantification and classification of statistical evidence for psychotropic drugs. Evidential strength should be communicated transparently and clearly to clinical decision makers.
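The Bayes factors above quantify how strongly the pooled trial data favour a treatment effect over the null. The paper's BF_BMA comes from Bayesian model-averaged meta-analysis, which is beyond a short snippet; as a simplified, hypothetical illustration of the underlying idea (not the authors' method), a fixed-effect Bayes factor for a pooled effect estimate can be computed with the Savage-Dickey density ratio under a normal prior:

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bf10_savage_dickey(d, se, prior_var=1.0):
    """Bayes factor for H1 (delta ~ N(0, prior_var)) versus H0 (delta = 0),
    given a pooled effect estimate d with standard error se.

    Savage-Dickey ratio: BF10 = prior density at 0 / posterior density at 0.
    """
    post_var = 1.0 / (1.0 / prior_var + 1.0 / se ** 2)  # conjugate normal update
    post_mean = post_var * d / se ** 2
    return normal_pdf(0.0, 0.0, prior_var) / normal_pdf(0.0, post_mean, post_var)
```

For example, a pooled standardised effect of 0.5 with standard error 0.1 yields a BF10 far above the conventional "extreme evidence" threshold of 100, while a pooled effect of 0 at the same precision yields a BF10 below 1 (evidence favouring the null).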
Affiliation(s)
- Merle-Marie Pittelkow
  - Department of Psychometrics and Statistics, University of Groningen, Groningen, the Netherlands
- Ymkje Anna de Vries
  - Department of Developmental Psychology, University of Groningen, Groningen, the Netherlands
  - Interdisciplinary Center Psychopathology and Emotion Regulation, Department of Psychiatry, University Medical Center Groningen, Groningen, the Netherlands
- Rei Monden
  - Interdisciplinary Center Psychopathology and Emotion Regulation, Department of Psychiatry, University Medical Center Groningen, Groningen, the Netherlands
  - Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Jojanneke A. Bastiaansen
  - Interdisciplinary Center Psychopathology and Emotion Regulation, Department of Psychiatry, University Medical Center Groningen, Groningen, the Netherlands
  - Department of Education and Research, Friesland Mental Health Care Services, Leeuwarden, the Netherlands
- Don van Ravenzwaaij
  - Department of Psychometrics and Statistics, University of Groningen, Groningen, the Netherlands
2. Chen J, Ho M, Lee K, Song Y, Fang Y, Goldstein BA, He W, Irony T, Jiang Q, van der Laan M, Lee H, Lin X, Meng Z, Mishra-Kalyani P, Rockhold F, Wang H, White R. The Current Landscape in Biostatistics of Real-World Data and Evidence: Clinical Study Design and Analysis. Stat Biopharm Res 2021. DOI: 10.1080/19466315.2021.1883474.
Affiliation(s)
- Jie Chen
  - Overland Pharmaceuticals, Inc., Dover, DE
- Kwan Lee
  - Janssen Research and Development, Spring House, PA
- Yixin Fang
  - Global Medical Affairs Statistics, Data and Statistical Sciences, AbbVie, North Chicago, IL
- Benjamin A Goldstein
  - Duke Clinical Research Institute and Duke University Medical Center, Duke University, Durham, NC
- Weili He
  - Global Medical Affairs Statistics, Data and Statistical Sciences, AbbVie, North Chicago, IL
- Xiwu Lin
  - Janssen Research and Development, Spring House, PA
- Frank Rockhold
  - Global Medical Affairs Statistics, Data and Statistical Sciences, AbbVie, North Chicago, IL
- Hongwei Wang
  - Global Medical Affairs Statistics, Data and Statistical Sciences, AbbVie, North Chicago, IL
3. Hoendervangers S, Burbach JPM, Lacle MM, Koopman M, van Grevenstein WMU, Intven MPW, Verkooijen HM. Pathological Complete Response Following Different Neoadjuvant Treatment Strategies for Locally Advanced Rectal Cancer: A Systematic Review and Meta-analysis. Ann Surg Oncol 2020; 27:4319-4336. PMID: 32524461; PMCID: PMC7497700; DOI: 10.1245/s10434-020-08615-2.
Abstract
Background Pathological complete response (pCR) following neoadjuvant treatment for locally advanced rectal cancer (LARC) is associated with better survival, less local recurrence, and less distant failure. Furthermore, pCR indicates that the rectum may have been preserved. This meta-analysis gives an overview of available neoadjuvant treatment strategies for LARC and analyzes how these perform in achieving pCR compared with the standard of care. Methods PubMed, Embase, and Cochrane Central bibliographic databases were searched. Randomized controlled trials in which patients received neoadjuvant treatment for MRI-staged nonmetastatic resectable LARC were included. The primary outcome was pCR, defined as ypT0N0. A meta-analysis of studies comparing an intervention with standard fluoropyrimidine-based chemoradiation (CRT) was performed. Results Of the 17 articles included in the systematic review, 11 were used for the meta-analysis. Addition of oxaliplatin to fluoropyrimidine-based CRT resulted in significantly more pCR than fluoropyrimidine-based CRT alone (OR 1.46), but at the expense of more grade ≥ 3 toxicity. Other treatment strategies, including consolidation/induction chemotherapy and short-course radiotherapy (SCRT), did not improve pCR rates. None of the included trials reported a benefit in local control or overall survival. Five-year disease-free survival was significantly worse after SCRT-delay than after CRT (59% vs. 75.1%, HR 1.93). Conclusions All included trials fail to deliver high-level evidence of an improvement in pCR over standard fluoropyrimidine-based CRT. The addition of oxaliplatin might result in more pCR, but at the expense of more toxicity; furthermore, this benefit does not translate into less local recurrence or improved survival.
Affiliation(s)
- S Hoendervangers
  - Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
  - Department of Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- J P M Burbach
  - Department of Surgery, MC Leeuwarden, Leeuwarden, The Netherlands
- M M Lacle
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- M Koopman
  - Department of Medical Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- M P W Intven
  - Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- H M Verkooijen
  - Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
4. Behrend MR, Basáñez MG, Hamley JID, Porco TC, Stolk WA, Walker M, de Vlas SJ. Modelling for policy: The five principles of the Neglected Tropical Diseases Modelling Consortium. PLoS Negl Trop Dis 2020; 14:e0008033. PMID: 32271755; PMCID: PMC7144973; DOI: 10.1371/journal.pntd.0008033.
Affiliation(s)
- Matthew R. Behrend
  - Neglected Tropical Diseases, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
  - Blue Well 8, Seattle, Washington, United States of America
- María-Gloria Basáñez
  - MRC Centre for Global Infectious Disease Analysis and London Centre for Neglected Tropical Disease Research, Department of Infectious Disease Epidemiology, Imperial College London, London, United Kingdom
- Jonathan I. D. Hamley
  - MRC Centre for Global Infectious Disease Analysis and London Centre for Neglected Tropical Disease Research, Department of Infectious Disease Epidemiology, Imperial College London, London, United Kingdom
- Travis C. Porco
  - Francis I. Proctor Foundation for Research in Ophthalmology, Department of Epidemiology and Biostatistics, and Department of Ophthalmology, University of California, San Francisco, United States of America
- Wilma A. Stolk
  - Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, the Netherlands
- Martin Walker
  - London Centre for Neglected Tropical Disease Research, Department of Pathobiology and Population Sciences, Royal Veterinary College, Hatfield, Hertfordshire, United Kingdom
  - London Centre for Neglected Tropical Disease Research and Department of Infectious Disease Epidemiology, Imperial College London, London, United Kingdom
- Sake J. de Vlas
  - Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, the Netherlands
5. Cook JA, Julious SA, Sones W, Hampson LV, Hewitt C, Berlin JA, Ashby D, Emsley R, Fergusson DA, Walters SJ, Wilson EC, MacLennan G, Stallard N, Rothwell JC, Bland M, Brown L, Ramsay CR, Cook A, Armstrong D, Altman D, Vale LD. Practical help for specifying the target difference in sample size calculations for RCTs: the DELTA2 five-stage study, including a workshop. Health Technol Assess 2019; 23:1-88. PMID: 31661431; PMCID: PMC6843113; DOI: 10.3310/hta23600.
Abstract
BACKGROUND The randomised controlled trial is widely considered the gold-standard study design for comparing the effectiveness of health interventions. Central to its design is the calculation of the number of participants needed (the sample size). The sample size is typically calculated by specifying the magnitude of the difference in the primary outcome between the intervention effects for the population of interest. This difference is called the 'target difference'; it should be appropriate for the principal estimand of interest, determined by the primary aim of the study, and considered realistic and/or important by one or more key stakeholder groups. OBJECTIVE To provide researchers and funder representatives with practical help on choosing the target difference used in the sample size calculation for a randomised controlled trial. METHODS The Difference ELicitation in TriAls 2 (DELTA2) recommendations and advice were developed through a five-stage process, which included two literature reviews (of existing funder guidance and of recent methodological literature), a Delphi process to engage a wider group of stakeholders, a 2-day workshop, and finalisation of the core document. RESULTS Advice is provided for definitive trials (phase III/IV studies). Methods for choosing the target difference are reviewed. To aid those new to the topic, and to encourage better practice, 10 recommendations are made regarding choosing the target difference and undertaking a sample size calculation. Recommended reporting items for trial proposals, protocols and results papers under the conventional approach are also provided, along with case studies reflecting different trial designs and covering different conditions. Alternative trial designs and methods for choosing the sample size are briefly considered.
CONCLUSIONS Choosing an appropriate sample size is crucial if a study is to inform clinical practice. The number of patients recruited into the trial needs to be sufficient to answer the objectives, but no higher than necessary, to avoid unnecessary burden on patients and the waste of precious resources. The choice of the target difference is a key part of this process under the conventional approach to sample size calculations. This document provides advice and recommendations to improve practice and reporting in this aspect of trial design. Future work could extend these recommendations to other, less common approaches to sample size calculation, particularly in terms of appropriate reporting items. FUNDING Funded by the Medical Research Council (MRC) UK and the National Institute for Health Research (NIHR) as part of the MRC-NIHR Methodology Research programme.
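Once the target difference and the outcome's standard deviation have been specified under the conventional approach described above, the per-arm sample size for a two-arm parallel-group trial with a continuous outcome follows from the usual normal-approximation formula, n = 2*sigma^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2. A minimal sketch (illustrative only, not code from the report):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(target_diff, sd, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-arm parallel
    trial with a continuous outcome and a two-sided test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile corresponding to the desired power
    return ceil(2 * (sd * (z_alpha + z_beta) / target_diff) ** 2)
```

A standardised target difference of 0.5 at 90% power and two-sided alpha = 0.05 gives 85 participants per arm; the exact t-test calculation is slightly larger.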
Affiliation(s)
- Jonathan A Cook
  - Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Steven A Julious
  - Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- William Sones
  - Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Lisa V Hampson
  - Statistical Methodology and Consulting, Novartis Pharma AG, Basel, Switzerland
- Catherine Hewitt
  - York Trials Unit, Department of Health Sciences, University of York, York, UK
- Deborah Ashby
  - Imperial Clinical Trials Unit, Imperial College London, London, UK
- Richard Emsley
  - Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Dean A Fergusson
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Stephen J Walters
  - Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Edward CF Wilson
  - Cambridge Centre for Health Services Research, Cambridge Clinical Trials Unit, University of Cambridge, Cambridge, UK
  - Health Economics Group, Norwich Medical School, University of East Anglia, Norwich, UK
- Graeme MacLennan
  - Centre for Healthcare Randomised Trials, University of Aberdeen, Aberdeen, UK
- Nigel Stallard
  - Statistics and Epidemiology, Warwick Medical School, University of Warwick, Coventry, UK
- Joanne C Rothwell
  - Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Martin Bland
  - Department of Health Sciences, University of York, York, UK
- Louise Brown
  - MRC Clinical Trials Unit, Institute of Clinical Trials and Methodology, University College London, London, UK
- Craig R Ramsay
  - Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Andrew Cook
  - Wessex Institute, University of Southampton, Southampton, UK
- David Armstrong
  - School of Population Health and Environmental Sciences, King's College London, London, UK
- Douglas Altman
  - Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Luke D Vale
  - Health Economics Group, Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
6. Martina R, Jenkins D, Bujkiewicz S, Dequen P, Abrams K. The inclusion of real world evidence in clinical development planning. Trials 2018; 19:468. PMID: 30157904; PMCID: PMC6116448; DOI: 10.1186/s13063-018-2769-2.
Abstract
BACKGROUND When designing studies, it is common to search the literature for variability estimates to use in sample size calculations; proprietary data from previously designed trials in a particular indication are also used to obtain estimates of variability. Estimates of treatment effects are typically obtained from randomised controlled clinical trials (RCTs). Based on the observed estimates of treatment effect and variability, and the minimum clinically relevant difference to detect, the sample size for a subsequent trial is estimated. However, data from real world evidence (RWE) studies, such as observational studies and other interventional studies of patients in routine clinical practice, are not widely used in a systematic manner when designing studies. In this paper, we propose a framework for the inclusion of RWE in the planning of a clinical development programme. METHODS In our proposed approach, all evidence available at the time of designing a new clinical trial, from both RCTs and RWE (i.e. studies in routine clinical practice), is combined in a Bayesian network meta-analysis (NMA). The results can be used to inform the design of the next clinical trial in the programme. The NMA was performed at key milestones, such as at the end of the phase II trial and prior to the design of key phase III studies. To illustrate the methods, we designed an alternative clinical development programme in multiple sclerosis using RWE through clinical trial simulations. RESULTS Inclusion of RWE in the NMA and the resulting trial simulations showed that 284 patients per arm were needed to achieve 90% power to detect effects of predetermined size in the TRANSFORMS study; for the FREEDOMS and FREEDOMS II clinical trials, 189 patients per arm were required. Overall, there was a reduction in sample size of at least 40% across the three phase III studies, which translated into a time saving of at least 6 months for the fingolimod phase III programme.
CONCLUSION The use of RWE reduced the sample size of the pivotal phase III studies, leading to substantial time savings compared with sample size calculations performed without RWE.
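The framework above embeds RWE in a full Bayesian network meta-analysis, which requires MCMC machinery; the core mechanism, external evidence sharpening the effect estimate used for planning, can be sketched with a conjugate normal update in which a hypothetical RWE summary acts as an informative prior on the treatment effect (a simplified illustration, not the paper's NMA):

```python
def update_with_rwe(rwe_mean, rwe_se, trial_mean, trial_se):
    """Combine an external (real-world) effect estimate, treated as an
    informative normal prior, with a new trial estimate via conjugate
    normal updating. Returns the posterior mean and standard error."""
    prior_prec = 1.0 / rwe_se ** 2   # precision contributed by the RWE prior
    lik_prec = 1.0 / trial_se ** 2   # precision contributed by the trial
    post_var = 1.0 / (prior_prec + lik_prec)
    post_mean = post_var * (prior_prec * rwe_mean + lik_prec * trial_mean)
    return post_mean, post_var ** 0.5
```

The posterior standard error is always smaller than the trial's alone; making planned estimates more precise in this way is the mechanism behind the reduced phase III sample sizes reported above.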
Affiliation(s)
- Reynaldo Martina
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
  - Department of Biostatistics, University of Liverpool, 1-5 Brownlow Street, Liverpool, UK
- David Jenkins
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
  - School of Health Sciences, University of Manchester, Oxford Road, Manchester, UK
- Sylwia Bujkiewicz
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- Pascale Dequen
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
  - Evidence Synthesis/Health Economics, Visible Analytics Ltd., Union Way, Oxon, UK
- Keith Abrams
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
- on behalf of GetReal Workpackage 1
  - Department of Health Sciences, University of Leicester, University Road, Leicester, UK
  - Department of Biostatistics, University of Liverpool, 1-5 Brownlow Street, Liverpool, UK
  - School of Health Sciences, University of Manchester, Oxford Road, Manchester, UK
  - Evidence Synthesis/Health Economics, Visible Analytics Ltd., Union Way, Oxon, UK
7. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, Lancaster GA. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. BMJ 2016; 355:i5239. PMID: 27777223; PMCID: PMC5076380; DOI: 10.1136/bmj.i5239.
Affiliation(s)
- Sandra M Eldridge
  - Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Claire L Chan
  - Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Michael J Campbell
  - School of Health and Related Research, University of Sheffield, Sheffield, UK
- Christine M Bond
  - Centre of Academic Primary Care, University of Aberdeen, Aberdeen, Scotland, UK
- Sally Hopewell
  - Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Lehana Thabane
  - Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
- Gillian A Lancaster
  - Department of Mathematics and Statistics, Lancaster University, Lancaster, UK
8. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, Lancaster GA. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud 2016; 2:64. PMID: 27965879; PMCID: PMC5154046; DOI: 10.1186/s40814-016-0105-8.
Abstract
The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to those in randomised controlled trials (RCTs) assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added. 
The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26-item checklist, a separate checklist for the abstract, a template for a CONSORT flow diagram for these studies, and an explanation of the changes made, with supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor's note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.
Affiliation(s)
- Sandra M. Eldridge
  - Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Claire L. Chan
  - Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
- Michael J. Campbell
  - School of Health and Related Research, University of Sheffield, Sheffield, UK
- Christine M. Bond
  - Centre of Academic Primary Care, University of Aberdeen, Aberdeen, Scotland, UK
- Sally Hopewell
  - Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Lehana Thabane
  - Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada
9. Burke DL, Ensor J, Riley RD. Meta-analysis using individual participant data: one-stage and two-stage approaches, and why they may differ. Stat Med 2016; 36:855-875. PMID: 27747915; PMCID: PMC5297998; DOI: 10.1002/sim.7141.
Abstract
Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself. We illustrate the concepts with recently published IPD meta-analyses, summarise key statistical software and provide recommendations for future IPD meta-analyses.
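In the two-stage approach described above, the second stage reduces to a conventional aggregate-data meta-analysis of the per-study estimates. A minimal sketch of that stage using the DerSimonian-Laird random-effects estimator (one common choice among the methods the paper surveys; the study-level estimates and standard errors are assumed to come from stage one):

```python
import math

def pool_two_stage(effects, ses):
    """Stage two of a two-stage IPD meta-analysis: inverse-variance pooling
    of per-study effect estimates with DerSimonian-Laird heterogeneity.
    Returns the pooled estimate and its standard error."""
    w = [1.0 / s ** 2 for s in ses]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re))
```

With identical study estimates, the heterogeneity estimate tau^2 is zero and the pooled result reduces to the fixed-effect answer.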
Affiliation(s)
- Danielle L Burke
  - Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, UK
- Joie Ensor
  - Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, UK
- Richard D Riley
  - Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, UK