1
Hong H, Liu L, Mojtabai R, Stuart EA. Calibrated meta-analysis to estimate the efficacy of mental health treatments in target populations: an application to paliperidone trials for treatment of schizophrenia. BMC Med Res Methodol 2023;23:150. PMID: 37365521. DOI: 10.1186/s12874-023-01958-w.
Abstract
BACKGROUND: Meta-analyses can be a powerful tool, but they need to calibrate for potential unrepresentativeness of the included trials relative to a target population. Estimating target population average treatment effects (TATE) in meta-analyses is important for understanding how treatments perform in well-defined target populations. This study estimated the TATE of paliperidone palmitate in patients with schizophrenia using a meta-analysis that combined individual patient trial data with target population data.
METHODS: We conducted a meta-analysis with data from four randomized clinical trials and target population data from the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Efficacy was measured using the Positive and Negative Syndrome Scale (PANSS). Weights to equate the trial participants and the target population were calculated by comparing baseline characteristics between the trials and CATIE. A calibrated weighted meta-analysis with random effects was performed to estimate the TATE of paliperidone compared to placebo.
RESULTS: A total of 1,738 patients were included in the meta-analysis, along with 1,458 patients in CATIE. After weighting, the covariate distributions of the trial participants and the target population were similar. Compared to placebo, paliperidone palmitate was associated with a significant reduction of the PANSS total score under both the unweighted (mean difference 9.07 [4.43, 13.71]) and the calibrated weighted (mean difference 6.15 [2.22, 10.08]) meta-analysis.
CONCLUSIONS: The effect of paliperidone palmitate compared with placebo is slightly smaller in the target population than that estimated directly from the unweighted meta-analysis. The representativeness of the trial samples included in a meta-analysis relative to a target population should be assessed and incorporated properly to obtain the most reliable evidence of treatment effects in target populations.
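The central computational step is to derive weights that make the trial participants' baseline covariates resemble the target population, and then to pool the weighted per-trial effects with a random-effects model. The sketch below is a minimal illustration, assuming a logistic-regression weighting model, DerSimonian-Laird pooling, and made-up variable names; it is not the authors' implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def calibration_weights(trial_df, target_df, covariates):
    """Odds weights that move the trial sample toward the target population.

    Stacks trial and target records, models P(target membership | covariates),
    and returns odds weights for the trial participants, normalized to the
    trial sample size. All column names are illustrative assumptions.
    """
    X = pd.concat([trial_df[covariates], target_df[covariates]], ignore_index=True)
    in_target = np.r_[np.zeros(len(trial_df)), np.ones(len(target_df))]
    model = LogisticRegression(max_iter=1000).fit(X, in_target)
    p = model.predict_proba(trial_df[covariates])[:, 1]
    w = p / (1 - p)
    return w * len(trial_df) / w.sum()

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-trial mean differences."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    tau2 = max(0.0, (q - (len(effects) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (variances + tau2)
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

In a setting like the paper's, the weighted mean difference and its variance would be computed within each trial using these weights, and the per-trial estimates pooled with the random-effects step.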
Affiliation(s)
- Hwanhee Hong
- Department of Biostatistics and Bioinformatics, School of Medicine, Duke University, 2424 Erwin Road, Ste 1105, Durham, NC, 27705, USA
- Lu Liu
- Department of Biostatistics and Bioinformatics, School of Medicine, Duke University, 2424 Erwin Road, Ste 1105, Durham, NC, 27705, USA
- Ramin Mojtabai
- Department of Mental Health, Bloomberg School of Public Health, Johns Hopkins University, 615 N. Wolfe Street, Baltimore, MD, 21205, USA
- Elizabeth A Stuart
- Department of Mental Health, Bloomberg School of Public Health, Johns Hopkins University, 615 N. Wolfe Street, Baltimore, MD, 21205, USA
2
Gancz NN, Forster SE. Threats to external validity in the neuroprediction of substance use treatment outcomes. Am J Drug Alcohol Abuse 2023;49:5-20. PMID: 36099534. PMCID: PMC9974755. DOI: 10.1080/00952990.2022.2116712.
Abstract
Background: Tools predicting individual relapse risk would be invaluable for informing clinical decision-making (e.g., level of care) in substance use treatment. Studies of neuroprediction, the use of neuromarkers to predict individual outcomes, have the dual potential to create such tools and to inform etiological models leading to new treatments. However, financial limitations, statistical power demands, and related factors encourage restrictive selection criteria, yielding samples that do not fully represent the target population. This problem may be further compounded by a lack of statistical optimism correction in neuroprediction research, resulting in predictive models that are overfit to already-restricted samples.
Objectives: This systematic review aims to identify potential threats to external validity related to restrictive selection criteria and underutilization of optimism correction in the existing neuroprediction literature targeting substance use treatment outcomes.
Methods: Sixty-seven studies of neuroprediction in substance use treatment were identified, and details of sample selection criteria and statistical optimism correction were extracted.
Results: Most publications reported restrictive selection criteria, for example excluding psychiatric comorbidities (94% of publications) and substance use comorbidities (69% of publications), that would rule out a considerable portion of the treatment population. Furthermore, only 21% of publications reported optimism correction.
Conclusion: Restrictive selection criteria and underutilization of optimism correction are common in the existing literature and may limit the generalizability of identified neural predictors to the target population whose treatment they would ultimately inform. Greater attention to the inclusivity and generalizability of addiction neuroprediction research, as well as new opportunities provided through open science initiatives, has the potential to address this issue.
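Optimism correction estimates how much a model's apparent performance is inflated by developing and evaluating it on the same (here, already restricted) sample. The sketch below shows one standard approach, Harrell-style bootstrap optimism correction for a binary outcome, using scikit-learn and an AUC metric; the model, metric, and variable names are generic assumptions, not code from any of the reviewed studies.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, model=None, n_boot=200, seed=0):
    """Bootstrap optimism-corrected AUC (X, y are NumPy arrays)."""
    rng = np.random.default_rng(seed)
    if model is None:
        model = LogisticRegression(max_iter=1000)

    # Apparent performance: fit and evaluate on the full sample.
    fitted = clone(model).fit(X, y)
    apparent = roc_auc_score(y, fitted.predict_proba(X)[:, 1])

    optimism = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # bootstrap resample
        boot = clone(model).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)           # inflation from refitting

    return apparent - np.mean(optimism)                # corrected estimate
```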
Affiliation(s)
- Naomi N. Gancz
- VA Pittsburgh Healthcare System, VISN 4 Mental Illness Research, Education, & Clinical Center (MIRECC)
- University of California, Los Angeles, Department of Psychology
- Sarah E. Forster
- VA Pittsburgh Healthcare System, VISN 4 Mental Illness Research, Education, & Clinical Center (MIRECC)
3
Remiro-Azócar A. Two-stage matching-adjusted indirect comparison. BMC Med Res Methodol 2022;22:217. PMID: 35941551. PMCID: PMC9358807. DOI: 10.1186/s12874-022-01692-9.
Abstract
BACKGROUND: Anchored covariate-adjusted indirect comparisons inform reimbursement decisions where there are no head-to-head trials between the treatments of interest, the studies share a common comparator arm, and there are patient-level data limitations. Matching-adjusted indirect comparison (MAIC), based on propensity score weighting, is the most widely used covariate-adjusted indirect comparison method in health technology assessment. MAIC has poor precision and is inefficient when the effective sample size after weighting is small.
METHODS: A modular extension to MAIC, termed two-stage matching-adjusted indirect comparison (2SMAIC), is proposed. It uses two parametric models: one estimates the treatment assignment mechanism in the study with individual patient data (IPD), and the other estimates the trial assignment mechanism. The first model produces inverse probability weights that are combined with the odds weights produced by the second model. The resulting weights seek to balance covariates between treatment arms and across studies. A simulation study provides proof of principle in an indirect comparison performed across two randomized trials; 2SMAIC can nevertheless be applied where the IPD trial is observational, by including potential confounders in the treatment assignment model. The simulation study also explores, for the first time, the use of weight truncation in combination with MAIC.
RESULTS: Despite randomization being enforced and the true treatment assignment mechanism being known in the IPD trial, 2SMAIC yields improved precision and efficiency with respect to MAIC in all scenarios, while maintaining similarly low levels of bias. The two-stage approach is effective when sample sizes in the IPD trial are low, as it controls for chance imbalances in prognostic baseline covariates between study arms. It is less effective when overlap between the trials' target populations is poor and the extremity of the weights is high. In these scenarios, truncation leads to substantial precision and efficiency gains but induces considerable bias. The combination of the two-stage approach with truncation produces the largest precision and efficiency improvements.
CONCLUSIONS: Two-stage approaches to MAIC can increase precision and efficiency with respect to the standard approach by adjusting for empirical imbalances in prognostic covariates in the IPD trial. Further modules could be incorporated for additional variance reduction or to account for missingness and non-compliance in the IPD trial.
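A schematic reading of the two weighting stages: stage one fits a treatment assignment model within the IPD trial and forms inverse probability of treatment weights; stage two forms MAIC odds weights whose weighted covariate means match the aggregate means reported for the comparator trial (the standard Signorovitch method-of-moments estimator); the final weight is their product. The sketch below is a minimal illustration under assumed inputs, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

def maic_weights(X_ipd, target_means):
    """Stage 2: MAIC trial-assignment odds weights (method of moments).

    Minimizes sum(exp(Xc @ alpha)) over alpha, where Xc is the IPD covariate
    matrix centered at the comparator trial's aggregate means; the resulting
    weights balance covariate means across the two trials.
    """
    Xc = np.asarray(X_ipd, float) - np.asarray(target_means, float)
    objective = lambda a: np.sum(np.exp(Xc @ a))
    alpha = minimize(objective, np.zeros(Xc.shape[1]), method="BFGS").x
    return np.exp(Xc @ alpha)

def two_stage_maic_weights(X_ipd, treated, target_means):
    """2SMAIC sketch: inverse-probability-of-treatment weights (stage 1)
    multiplied by MAIC odds weights (stage 2)."""
    ps = LogisticRegression(max_iter=1000).fit(X_ipd, treated).predict_proba(X_ipd)[:, 1]
    iptw = np.where(np.asarray(treated) == 1, 1 / ps, 1 / (1 - ps))  # treatment assignment
    return iptw * maic_weights(X_ipd, target_means)                  # trial assignment
```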
Affiliation(s)
- Antonio Remiro-Azócar
- Medical Affairs Statistics, Bayer plc, 400 South Oak Way, Reading, UK
- Department of Statistical Science, University College London, 1-19 Torrington Place, London, UK
4
Susukida R, Amin-Esmaeili M, Mayo-Wilson E, Mojtabai R. Data management in substance use disorder treatment research: Implications from data harmonization of National Institute on Drug Abuse-funded randomized controlled trials. Clin Trials 2020;18:215-225. DOI: 10.1177/1740774520972687.
Abstract
Background: Secondary analysis of data from completed randomized controlled trials is a critical and efficient way to maximize the potential benefits of past research. De-identified primary data from completed randomized controlled trials have become increasingly available in recent years; however, the lack of standardized data products is a major barrier to further use of these valuable data. Pre-statistical harmonization of data structures, variables, and codebooks across randomized controlled trials would facilitate secondary data analysis, including meta-analyses and comparative effectiveness studies. We describe a pre-statistical data harmonization initiative to standardize de-identified primary data from substance use disorder treatment randomized controlled trials funded by the National Institute on Drug Abuse and available on the National Institute on Drug Abuse Data Share website.
Methods: Standardized datasets and codebooks with consistent data structures, variable names, labels, and definitions were developed for 36 completed randomized controlled trials. Common data domains were identified to bundle data files from individual randomized controlled trials according to relevant concepts. Variables were harmonized if at least two randomized controlled trials used the same instruments. The structure of the harmonized data was determined based on feedback from clinical trialists and substance use disorder research experts.
Results: We created a harmonized database of variables across the 36 randomized controlled trials, with a built-in label and a brief definition for each variable. Data files from the randomized controlled trials have been consistently categorized into eight domains (enrollment, demographics, adherence, adverse events, physical health measures, mental-behavioral-cognitive health measures, self-reported substance use measures, and biologic substance use measures). Standardized codebooks and concordance tables have also been developed to help identify instruments and variables of interest more easily.
Conclusion: The harmonized data from randomized controlled trials of substance use disorder treatments can promote future secondary analysis of completed trials, allowing data from multiple randomized controlled trials to be combined and providing guidance for future randomized controlled trials in substance use disorder treatment research.
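Harmonization of this kind is typically driven by a concordance table that maps each trial's raw variable names onto common names, labels, and domains. The short pandas sketch below shows one generic way such a table could be applied; the column names and table layout are illustrative assumptions, not the actual structure of the NIDA Data Share products.

```python
import pandas as pd

def harmonize_trial(raw: pd.DataFrame, concordance: pd.DataFrame, trial_id: str) -> pd.DataFrame:
    """Rename one trial's raw variables to the harmonized names.

    `concordance` is assumed to have columns: trial_id, raw_name,
    harmonized_name, domain (an illustrative layout).
    """
    mapping = (concordance.loc[concordance["trial_id"] == trial_id]
                          .set_index("raw_name")["harmonized_name"]
                          .to_dict())
    out = raw.rename(columns=mapping)
    out["trial_id"] = trial_id          # keep provenance for pooled analyses
    return out

# Usage sketch: stack several harmonized trials into one analysis file,
# keeping only the variables present in every trial.
# pooled = pd.concat(
#     [harmonize_trial(df, concordance, tid) for tid, df in trials.items()],
#     join="inner", ignore_index=True)
```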
Affiliation(s)
- Ryoko Susukida
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Masoumeh Amin-Esmaeili
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Iranian National Center for Addiction Studies (INCAS), Tehran University of Medical Sciences, Tehran, Iran
- Evan Mayo-Wilson
- Department of Epidemiology and Biostatistics, Indiana University School of Public Health–Bloomington, Bloomington, IN, USA
- Ramin Mojtabai
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
5
He Z, Tang X, Yang X, Guo Y, George TJ, Charness N, Quan Hem KB, Hogan W, Bian J. Clinical Trial Generalizability Assessment in the Big Data Era: A Review. Clin Transl Sci 2020;13:675-684. PMID: 32058639. PMCID: PMC7359942. DOI: 10.1111/cts.12764.
Abstract
Clinical studies, especially randomized controlled trials, are essential for generating evidence for clinical practice. However, generalizability is a long-standing concern when applying trial results to real-world patients. Generalizability assessment is thus important; nevertheless, it is not consistently practiced. We performed a systematic review to understand the practice of generalizability assessment. We identified 187 relevant articles and systematically organized these studies in a taxonomy with three dimensions: (i) data availability (i.e., before or after the trial: a priori vs. a posteriori generalizability); (ii) result outputs (i.e., score vs. non-score); and (iii) populations of interest. We further reported disease areas, underrepresented subgroups, and the types of data used to profile target populations. We observed an increasing trend of generalizability assessments, but fewer than 30% of studies reported positive generalizability results. Because a priori generalizability can be assessed using only study design information (primarily eligibility criteria), it gives investigators a golden opportunity to adjust the study design before the trial starts. Nevertheless, fewer than 40% of the studies in our review assessed a priori generalizability. With the wide adoption of electronic health record systems, rich real-world patient databases are increasingly available for generalizability assessment; however, informatics tools are lacking to support the adoption of generalizability assessment practice.
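The simplest non-score form of a priori generalizability assessment applies a trial's eligibility criteria to a real-world patient table (for example, an EHR extract) and reports the fraction of patients, overall and by subgroup, who would qualify. A minimal sketch with made-up criteria and column names:

```python
import pandas as pd

def eligible_fraction(patients: pd.DataFrame, criteria) -> float:
    """Share of a real-world cohort meeting all eligibility criteria."""
    mask = pd.Series(True, index=patients.index)
    for rule in criteria:            # each rule: DataFrame -> boolean Series
        mask &= rule(patients)
    return mask.mean()

# Hypothetical criteria mirroring typical trial restrictions.
criteria = [
    lambda df: df["age"].between(18, 65),
    lambda df: ~df["has_psychiatric_comorbidity"],   # boolean column assumed
    lambda df: df["egfr"] >= 60,
]
# print(eligible_fraction(ehr_cohort, criteria))                                    # overall
# print(ehr_cohort.groupby("sex").apply(lambda g: eligible_fraction(g, criteria)))  # by subgroup
```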
Affiliation(s)
- Zhe He
- School of Information, Florida State University, Tallahassee, Florida, USA
- Xiang Tang
- Department of Statistics, Florida State University, Tallahassee, Florida, USA
- Xi Yang
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
- Yi Guo
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
- Thomas J George
- Hematology & Oncology, Department of Medicine, College of Medicine, University of Florida, Gainesville, Florida, USA
- Neil Charness
- Department of Psychology, Florida State University, Tallahassee, Florida, USA
- Kelsa Bartley Quan Hem
- Calder Memorial Library, Miller School of Medicine, University of Miami, Miami, Florida, USA
- William Hogan
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
- Jiang Bian
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
6
Susukida R, Crum RM, Hong H, Stuart EA, Mojtabai R. Comparing pharmacological treatments for cocaine dependence: Incorporation of methods for enhancing generalizability in meta-analytic studies. Int J Methods Psychiatr Res 2018;27:e1609. PMID: 29464791. PMCID: PMC6103900. DOI: 10.1002/mpr.1609.
Abstract
OBJECTIVES: Few head-to-head comparisons of cocaine dependence medications exist, and combining data from different randomized controlled trials (RCTs) is fraught with methodological challenges, including the limited generalizability of RCT findings. This study applied a novel meta-analytic approach to data on cocaine dependence medications.
METHODS: Data from four placebo-controlled RCTs (Reserpine, Modafinil, Buspirone, and Ondansetron) were obtained from the National Institute on Drug Abuse Clinical Trials Network (n = 456). The RCT samples were weighted to resemble treatment-seeking patients (Treatment Episode Data Set-Admissions) and individuals with cocaine dependence in the general population (National Survey on Drug Use and Health). We synthesized the generalized outcomes with pairwise meta-analysis using individual-level data and compared the generalized outcomes across the four RCTs with network meta-analysis using study-level data.
RESULTS: Weighting the data by the National Survey on Drug Use and Health generalizability weight made the overall population effect on retention significantly larger than the RCT sample effect. However, there was no significant difference between the population effect and the RCT sample effect on abstinence. Weighting changed the ranking of effectiveness across treatments.
CONCLUSIONS: Applying generalizability weights in meta-analytic studies is feasible and potentially provides a useful tool for assessing the comparative effectiveness of treatments for substance use disorders in target populations.
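The weighting step is conceptually the same as in the calibrated meta-analysis in entry 1 above, with one practical wrinkle: the target datasets (TEDS-A, NSDUH) carry their own sampling or record weights, which can enter the sample-membership model as sample weights. The sketch below is a hedged illustration with assumed column names, not the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def survey_anchored_weights(rct: pd.DataFrame, survey: pd.DataFrame,
                            covariates, survey_weight_col: str):
    """Odds weights making the RCT sample resemble a weighted survey target.

    Fits P(in survey | covariates) on stacked data, using the survey's own
    sampling weights (RCT rows get weight 1), then returns odds weights for
    the RCT participants. Column names are illustrative assumptions.
    """
    X = pd.concat([rct[covariates], survey[covariates]], ignore_index=True)
    in_survey = np.r_[np.zeros(len(rct)), np.ones(len(survey))]
    sw = np.r_[np.ones(len(rct)), survey[survey_weight_col].to_numpy()]
    m = LogisticRegression(max_iter=1000).fit(X, in_survey, sample_weight=sw)
    p = m.predict_proba(rct[covariates])[:, 1]
    return p / (1 - p)

# Usage sketch: weighted vs. unweighted arm difference in retention within one RCT.
# w = survey_anchored_weights(rct, nsduh, covs, "analysis_weight")
# trt = rct["arm"].eq("active").to_numpy()
# ret = rct["retained"].to_numpy()
# weighted_diff = np.average(ret[trt], weights=w[trt]) - np.average(ret[~trt], weights=w[~trt])
```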
Affiliation(s)
- Ryoko Susukida
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Mental Health Policy and Evaluation, National Institute of Mental Health, National Center of Neurology and Psychiatry, Kodaira, Tokyo, Japan
- Rosa M Crum
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Hwanhee Hong
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Elizabeth A Stuart
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Ramin Mojtabai
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA