1
McCandlish JA, Ayer T, Chhatwal J. Cost-Effectiveness and Value-of-Information Analysis Using Machine Learning-Based Metamodeling: A Case of Hepatitis C Treatment. Med Decis Making 2023; 43:68-77. [PMID: 36113098] [DOI: 10.1177/0272989x221125418]
Abstract
BACKGROUND Metamodels can address some of the limitations of complex simulation models by formulating a mathematical relationship between input parameters and simulation model outcomes. Our objective was to develop and compare the performance of a machine learning (ML)-based metamodel against a conventional metamodeling approach in replicating the findings of a complex simulation model. METHODS We constructed 3 ML-based metamodels using random forest, support vector regression, and artificial neural networks and a linear regression-based metamodel from a previously validated microsimulation model of the natural history of hepatitis C virus (HCV) infection consisting of 40 input parameters. Outcomes of interest included societal costs and quality-adjusted life-years (QALYs), the incremental cost-effectiveness ratio (ICER) of HCV treatment versus no treatment, the cost-effectiveness acceptability curve (CEAC), and the expected value of perfect information (EVPI). We evaluated metamodel performance using root mean squared error (RMSE) and Pearson's R2 on the normalized data. RESULTS The R2 values for the linear regression metamodel for QALYs without treatment, QALYs with treatment, societal cost without treatment, societal cost with treatment, and ICER were 0.92, 0.98, 0.85, 0.92, and 0.60, respectively. The corresponding R2 values for our ML-based metamodels were 0.96, 0.97, 0.90, 0.95, and 0.49 for support vector regression; 0.99, 0.83, 0.99, 0.99, and 0.82 for artificial neural network; and 0.99, 0.99, 0.99, 0.99, and 0.98 for random forest. Similar trends were observed for RMSE. The CEAC and EVPI curves produced by the random forest metamodel matched the results of the simulation output more closely than the linear regression metamodel. CONCLUSIONS ML-based metamodels generally outperformed traditional linear regression metamodels at replicating results from complex simulation models, with random forest metamodels performing best.
HIGHLIGHTS Decision-analytic models are frequently used by policy makers and other stakeholders to assess the impact of new medical technologies and interventions. However, complex models can impose limitations on conducting probabilistic sensitivity analysis and value-of-information analysis, and may not be suitable for developing online decision-support tools. Metamodels, which accurately formulate a mathematical relationship between input parameters and model outcomes, can replicate complex simulation models and address the above limitations. The machine learning-based random forest model can outperform linear regression in replicating the findings of a complex simulation model. Such a metamodel can be used for conducting cost-effectiveness and value-of-information analyses or developing online decision support tools.
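The workflow these highlights describe (sample inputs, run the simulation once per sample, fit a cheap surrogate, score it with R2) can be sketched in a few lines. The quadratic "simulation model", its coefficients, and the single input parameter are invented stand-ins for the 40-parameter HCV microsimulation:

```python
import random

def simulation_model(x):
    """Toy stand-in for an expensive microsimulation: QALYs as a
    nonlinear function of a single input parameter."""
    return 10.0 + 2.0 * x + 0.5 * x * x

# Step 1: sample input parameters and run the "simulation" once per sample.
random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [simulation_model(x) for x in xs]

# Step 2: fit a linear metamodel y ~ a + b*x by ordinary least squares.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def metamodel(x):
    return a + b * x

# Step 3: assess metamodel performance with R^2, as in the abstract above.
ss_res = sum((y - metamodel(x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1.0 - ss_res / ss_tot
```

An ML learner such as a random forest would replace the OLS fit in step 2; the surrounding sample-fit-score loop is the same.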
Affiliation(s)
- Turgay Ayer
- Georgia Institute of Technology, Atlanta, Georgia
- Jagpreet Chhatwal
- Massachusetts General Hospital Institute for Technology Assessment, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
2
Zhong H, Brandeau ML, Yazdi GE, Wang J, Nolen S, Hagan L, Thompson WW, Assoumou SA, Linas BP, Salomon JA. Metamodeling for Policy Simulations with Multivariate Outcomes. Med Decis Making 2022; 42:872-884. [PMID: 35735216] [PMCID: PMC9452454] [DOI: 10.1177/0272989x221105079]
Abstract
PURPOSE Metamodels are simplified approximations of more complex models that can be used as surrogates for the original models. Challenges in using metamodels for policy analysis arise when there are multiple correlated outputs of interest. We develop a framework for metamodeling with policy simulations to accommodate multivariate outcomes. METHODS We combine 2 algorithm adaptation methods (multitarget stacking and regression chain with maximum correlation) with different base learners including linear regression (LR), elastic net (EE) with second-order terms, Gaussian process regression (GPR), random forests (RFs), and neural networks. We optimize integrated models using variable selection and hyperparameter tuning. We compare the accuracy, efficiency, and interpretability of different approaches. As an example application, we develop metamodels to emulate a microsimulation model of testing and treatment strategies for hepatitis C in correctional settings. RESULTS Output variables from the simulation model were correlated (average ρ = 0.58). Without multioutput algorithm adaptation methods, in-sample fit (measured by R2) ranged from 0.881 for LR to 0.987 for GPR. The multioutput algorithm adaptation method increased R2 by an average 0.002 across base learners. Variable selection and hyperparameter tuning increased R2 by 0.009. Simpler models such as LR, EE, and RF required minimal training and prediction time. LR and EE had advantages in model interpretability, and we considered methods for improving the interpretability of other models. CONCLUSIONS In our example application, the choice of base learner had the largest impact on R2; multioutput algorithm adaptation and variable selection and hyperparameter tuning had a modest impact.
Although advantages and disadvantages of specific learning algorithms may vary across different modeling applications, our framework for metamodeling in policy analyses with multivariate outcomes has broad applicability to decision analysis in health and medicine.
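A minimal sketch of the regression-chain idea, assuming a plain least-squares base learner and two synthetic, correlated outcomes (the coefficients and noise levels are invented): each link is trained on the observed upstream outcome but predicts with the chained estimate.

```python
import random

random.seed(1)
n = 300
xs = [random.uniform(0.0, 1.0) for _ in range(n)]
y1 = [2.0 * x + random.gauss(0.0, 0.1) for x in xs]                       # output 1
y2 = [1.0 + x + 0.5 * a + random.gauss(0.0, 0.1) for x, a in zip(xs, y1)]  # output 2 depends on output 1

def ols(X, y):
    """Least squares via normal equations (X already includes an intercept column)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    rhs = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        rhs[c], rhs[p] = rhs[p], rhs[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            rhs[r] -= f * rhs[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (rhs[c] - sum(A[c][j] * beta[j] for j in range(c + 1, k))) / A[c][c]
    return beta

# Link 1 of the chain: predict y1 from x alone.
b1 = ols([[1.0, x] for x in xs], y1)
y1_hat = [b1[0] + b1[1] * x for x in xs]

# Link 2: trained on the *observed* y1, but predicts using the chained estimate y1_hat.
b2 = ols([[1.0, x, a] for x, a in zip(xs, y1)], y2)
y2_hat = [b2[0] + b2[1] * x + b2[2] * a for x, a in zip(xs, y1_hat)]
```

The paper's framework orders the chain by maximum correlation and swaps in richer base learners; the chaining mechanic is the same.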
Affiliation(s)
- Huaiyang Zhong
- Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Margaret L Brandeau
- Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Golnaz Eftekhari Yazdi
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Jianing Wang
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Shayla Nolen
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- William W Thompson
- Division of Viral Hepatitis, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Sabrina A Assoumou
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Benjamin P Linas
- Section of Infectious Diseases, Department of Medicine, Boston Medical Center, Boston, MA, USA
- Joshua A Salomon
- Center for Health Policy and Center for Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
3
Malloy GSP, Brandeau ML. When Is Mass Prophylaxis Cost-Effective for Epidemic Control? A Comparison of Decision Approaches. Med Decis Making 2022; 42:1052-1063. [PMID: 35591754] [DOI: 10.1177/0272989x221098409]
Abstract
BACKGROUND For certain communicable disease outbreaks, mass prophylaxis of uninfected individuals can curtail new infections. When an outbreak emerges, decision makers could benefit from methods to quickly determine whether mass prophylaxis is cost-effective. We consider 2 approaches: a simple decision model and machine learning meta-models. The motivating example is plague in Madagascar. METHODS We use a susceptible-exposed-infectious-removed (SEIR) epidemic model to derive a decision rule based on the fraction of the population infected, effective reproduction ratio, infection fatality rate, quality-adjusted life-year loss associated with death, prophylaxis effectiveness and cost, time horizon, and willingness-to-pay threshold. We also develop machine learning meta-models of a detailed model of plague in Madagascar using logistic regression, random forest, and neural network models. In numerical experiments, we compare results using the decision rule and the meta-models to results obtained using the simulation model. We vary the initial fraction of the population infected, the effective reproduction ratio, the intervention start date and duration, and the cost of prophylaxis. LIMITATIONS We assume homogeneous mixing and no negative side effects due to antibiotic prophylaxis. RESULTS The simple decision rule matched the SEIR model outcome in 85.4% of scenarios. Using data for a 2017 plague outbreak in Madagascar, the decision rule correctly indicated that mass prophylaxis was not cost-effective. The meta-models were significantly more accurate, with an accuracy of 92.8% for logistic regression, 95.8% for the neural network model, and 96.9% for the random forest model. CONCLUSIONS A simple decision rule using minimal information about an outbreak can accurately evaluate the cost-effectiveness of mass prophylaxis for outbreak mitigation. 
Meta-models of a complex disease simulation can achieve higher accuracy but with greater computational and data requirements and less interpretability. HIGHLIGHTS We use a susceptible-exposed-infectious-removed model and net monetary benefit to derive a simple decision rule to evaluate the cost-effectiveness of mass prophylaxis. We use the example of plague in Madagascar to compare the performance of the analytically derived decision rule to that of machine learning meta-models trained on a stochastic dynamic transmission model. We assess the accuracy of each approach for different combinations of disease dynamics and intervention scenarios. The machine learning meta-models are more accurate predictors of mass prophylaxis cost-effectiveness. However, the simple decision rule is also accurate and may be a preferred substitute in low-resource settings.
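The ingredients of such a rule (an SEIR epidemic plus a net-monetary-benefit comparison) can be sketched as follows. This is not the paper's derived rule: the discrete-time SEIR, the fixed incubation and infectious periods, and the per-capita NMB arithmetic are illustrative assumptions.

```python
def seir_attack_fraction(r0, incubation_days, infectious_days, days=730, i0=1e-4):
    """Minimal discrete-time SEIR with homogeneous mixing (as the abstract
    assumes); returns the cumulative fraction of the population ever infected."""
    s, e, i = 1.0 - i0, 0.0, i0
    beta = r0 / infectious_days
    for _ in range(days):
        new_e = beta * s * i          # S -> E
        new_i = e / incubation_days   # E -> I
        new_r = i / infectious_days   # I -> R
        s -= new_e
        e += new_e - new_i
        i += new_i - new_r
    return 1.0 - s

def prophylaxis_nmb_per_capita(r0, coverage, effectiveness, ifr,
                               qaly_loss_death, cost_per_person, wtp):
    """Illustrative net-monetary-benefit screen: monetized QALY value of
    deaths averted by mass prophylaxis minus its per-capita cost.
    Prophylaxis is modeled crudely as scaling down the reproduction number."""
    base = seir_attack_fraction(r0, 3.0, 5.0)
    reduced = seir_attack_fraction(r0 * (1.0 - coverage * effectiveness), 3.0, 5.0)
    deaths_averted = (base - reduced) * ifr
    return wtp * deaths_averted * qaly_loss_death - cost_per_person
```

Mass prophylaxis is flagged cost-effective when the returned NMB is positive; the meta-models in the paper learn this yes/no label directly from simulation runs.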
Affiliation(s)
- Giovanni S P Malloy
- Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Margaret L Brandeau
- Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
4
Weyant C, Brandeau ML. Personalization of Medical Treatment Decisions: Simplifying Complex Models while Maintaining Patient Health Outcomes. Med Decis Making 2022; 42:450-460. [PMID: 34416832] [PMCID: PMC8858337] [DOI: 10.1177/0272989x211037921]
Abstract
BACKGROUND Personalizing medical treatments based on patient-specific risks and preferences can improve patient health. However, models to support personalized treatment decisions are often complex and difficult to interpret, limiting their clinical application. METHODS We present a new method, using machine learning to create meta-models, for simplifying complex models for personalizing medical treatment decisions. We consider simple interpretable models, interpretable ensemble models, and noninterpretable ensemble models. We use variable selection with a penalty for patient-specific risks and/or preferences that are difficult, risky, or costly to obtain. We interpret the meta-models to the extent permitted by their model architectures. We illustrate our method by applying it to simplify a previously developed model for personalized selection of antipsychotic drugs for patients with schizophrenia. RESULTS The best simplified interpretable, interpretable ensemble, and noninterpretable ensemble models contained at most half the number of patient-specific risks and preferences compared with the original model. The simplified models achieved 60.5% (95% credible interval [crI]: 55.2-65.4), 60.8% (95% crI: 55.5-65.7), and 83.8% (95% crI: 80.8-86.6), respectively, of the net health benefit of the original model (quality-adjusted life-years gained). Important variables in all models were similar and made intuitive sense. Computation time for the meta-models was orders of magnitude less than for the original model. LIMITATIONS The simplified models share the limitations of the original model (e.g., potential biases). CONCLUSIONS Our meta-modeling method is disease- and model-agnostic and can be used to simplify complex models for personalization, allowing for variable selection in addition to improved model interpretability and computational performance. Simplified models may be more likely to be adopted in clinical settings and can help improve equity in patient outcomes.
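One way to picture the penalized variable selection described above, as a toy filter rather than the authors' actual procedure (the candidate variables, their marginal benefits, and their collection burdens are hypothetical):

```python
def select_variables(candidates, penalty):
    """Sketch of variable selection with a collection penalty: keep a
    patient-specific variable only if its marginal net-health-benefit
    contribution outweighs the penalized burden of eliciting it."""
    return [name for name, marginal_nhb, burden in candidates
            if marginal_nhb - penalty * burden > 0]

# Hypothetical variables: (name, marginal benefit in QALYs, collection burden).
candidates = [
    ("prior_response",    0.30, 1.0),  # cheap to ask, very informative
    ("genotype",          0.05, 4.0),  # costly test, modest benefit
    ("preference_weight", 0.12, 1.5),
]
kept = select_variables(candidates, penalty=0.04)
```

Raising `penalty` shrinks the kept set, trading net health benefit for a simpler, cheaper-to-use meta-model, which is the trade-off the abstract quantifies.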
Affiliation(s)
- Christopher Weyant
- Department of Management Science and Engineering, Stanford University, Stanford, California, USA
- Margaret L. Brandeau
- Department of Management Science and Engineering, Stanford University, Stanford, California, USA
5
Degeling K, IJzerman MJ, Lavieri MS, Strong M, Koffijberg H. Introduction to Metamodeling for Reducing Computational Burden of Advanced Analyses with Health Economic Models: A Structured Overview of Metamodeling Methods in a 6-Step Application Process. Med Decis Making 2020; 40:348-363. [PMID: 32428428] [PMCID: PMC7754830] [DOI: 10.1177/0272989x20912233]
Abstract
Metamodels can be used to reduce the computational burden associated with computationally demanding analyses of simulation models, although applications within health economics are still scarce. Besides a lack of awareness of their potential within health economics, the absence of guidance on the conceivably complex and time-consuming process of developing and validating metamodels may contribute to their limited uptake. To address these issues, this article introduces metamodeling to the wider health economic audience and presents a process for applying metamodeling in this context, including suitable methods and directions for their selection and use. General (i.e., non-health economic specific) metamodeling literature, clinical prediction modeling literature, and a previously published literature review were exploited to consolidate a process and to identify candidate metamodeling methods. Methods were considered applicable to health economics if they are able to account for mixed (i.e., continuous and discrete) input parameters and continuous outcomes. Six steps were identified as relevant for applying metamodeling methods within health economics: 1) the identification of a suitable metamodeling technique, 2) simulation of data sets according to a design of experiments, 3) fitting of the metamodel, 4) assessment of metamodel performance, 5) conducting the required analysis using the metamodel, and 6) verification of the results. Different methods are discussed to support each step, including their characteristics, directions for use, key references, and relevant R and Python packages. To address challenges regarding metamodeling methods selection, a first guide was developed toward using metamodels to reduce the computational burden of analyses of health economic models. 
This guidance may increase applications of metamodeling in health economics, enabling increased use of state-of-the-art analyses (e.g., value of information analysis) with computationally burdensome simulation models.
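The 6 steps can be arranged as a small driver function; the one-dimensional input, random (rather than designed) sampling, and linear base learner below are simplifying assumptions, not recommendations from the article.

```python
import random

def metamodel_workflow(simulator, fit, n_train=100, n_verify=20, seed=0):
    """Sketch of the 6-step process: the technique is chosen up front via
    the `fit` callable (step 1); the remaining steps run in order."""
    rng = random.Random(seed)
    # Step 2: simulate a data set according to a design of experiments
    # (here: simple random sampling of one input on [0, 1]).
    X = [rng.uniform(0.0, 1.0) for _ in range(n_train)]
    Y = [simulator(x) for x in X]
    # Step 3: fit the metamodel.
    predict = fit(X, Y)
    # Step 4: assess metamodel performance (in-sample R^2).
    my = sum(Y) / len(Y)
    r2 = 1.0 - (sum((y - predict(x)) ** 2 for x, y in zip(X, Y))
                / sum((y - my) ** 2 for y in Y))
    # Step 5: conduct the required analysis cheaply on the metamodel
    # (here: many evaluations, as a probabilistic sensitivity analysis would need).
    psa = [predict(rng.uniform(0.0, 1.0)) for _ in range(10000)]
    # Step 6: verify a handful of metamodel predictions against fresh simulator runs.
    errs = [abs(predict(x) - simulator(x))
            for x in (rng.uniform(0.0, 1.0) for _ in range(n_verify))]
    return r2, sum(psa) / len(psa), max(errs)

def linear_fit(X, Y):
    """A deliberately simple base learner (ordinary least squares in one input)."""
    mx, my = sum(X) / len(X), sum(Y) / len(Y)
    b = (sum((x - mx) * (y - my) for x, y in zip(X, Y))
         / sum((x - mx) ** 2 for x in X))
    a = my - b * mx
    return lambda x: a + b * x
```

Swapping `linear_fit` for a Gaussian process or neural network fit changes only step 1; the article's point is that steps 2-6 stay the same.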
Affiliation(s)
- Koen Degeling
- Health Technology and Services Research Department, Faculty of Behavioural, Management and Social Sciences, Technical Medical Centre, University of Twente, Enschede, Overijssel, the Netherlands
- Cancer Health Services Research Unit, School of Population and Global Health, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
- Maarten J. IJzerman
- Victorian Comprehensive Cancer Centre, Melbourne, Australia
- Health Technology and Services Research Department, Faculty of Behavioural, Management and Social Sciences, Technical Medical Centre, University of Twente, Enschede, Overijssel, the Netherlands
- Cancer Health Services Research Unit, School of Population and Global Health, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
- Mariel S. Lavieri
- Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA
- Mark Strong
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, England, UK
- Hendrik Koffijberg
- Health Technology and Services Research Department, Faculty of Behavioural, Management and Social Sciences, Technical Medical Centre, University of Twente, Enschede, Overijssel, the Netherlands
6
Simulation Modeling and Metamodeling to Inform National and International HIV Policies for Children and Adolescents. J Acquir Immune Defic Syndr 2019; 78 Suppl 1:S49-S57. [PMID: 29994920] [PMCID: PMC6042862] [DOI: 10.1097/qai.0000000000001749]
Abstract
Objective and Approach: Computer-based simulation models serve an important purpose in informing HIV care for children and adolescents. We review current model-based approaches to informing pediatric and adolescent HIV estimates and guidelines. Findings: Clinical disease simulation models and epidemiologic models are used to inform global and regional estimates of numbers of children and adolescents living with HIV and in need of antiretroviral therapy, to develop normative guidelines addressing strategies for diagnosis and treatment of HIV in children, and to forecast future need for pediatric and adolescent antiretroviral therapy formulations and commodities. To improve current model-generated estimates and policy recommendations, better country-level and regional-level data are needed about children living with HIV, as are improved data about survival and treatment outcomes for children with perinatal HIV infection as they age into adolescence and adulthood. In addition, novel metamodeling and value of information methods are being developed to improve the transparency of model methods and results, as well as to allow users to more easily tailor model-based analyses to their own settings. Conclusions: Substantial progress has been made in using models to estimate the size of the pediatric and adolescent HIV epidemic, to inform the development of guidelines for children and adolescents affected by HIV, and to support targeted implementation of policy recommendations to maximize impact. Ongoing work will address key limitations and further improve these model-based projections.
7
Alam MF, Briggs A. Artificial neural network metamodel for sensitivity analysis in a total hip replacement health economic model. Expert Rev Pharmacoecon Outcomes Res 2019; 20:629-640. [PMID: 31491359] [DOI: 10.1080/14737167.2019.1665512]
Abstract
Objectives: Metamodels have been used to approximate complex simulations and have many applications in sensitivity analysis, optimization, etc. However, their use in health economics is very limited, and the application of an artificial neural network (ANN) metamodel with a health economic model has not previously been investigated. The study introduces ANN as a metamodeling method to conduct sensitivity analysis in a total hip replacement decision analytical model and compares its performance with two other counterparts. Methods: First, a nonlinear factor screening method was adopted to screen out unimportant factors from the simulation. Second, an ANN was developed using the important variables to approximate the simulation. Performance of the ANN metamodel was then compared with its Gaussian process (GP) and multiple linear regression (MLR) counterparts. Results: Out of 31 input variables, the factor screening method identified 12 as important. ANN metamodels showed the best predictive capability in terms of the performance measures (mean squared error of prediction, MSEP, and mean absolute percentage deviation, MAPD) used for predicting both costs and quality-adjusted life years (QALYs) for the two prostheses. Conclusion: The study provides a methodological development in sensitivity analysis and demonstrates that an ANN metamodel is a potential approximation method for computationally expensive health economic simulations.
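The two performance measures named above have standard textbook forms, sketched here (the paper may normalize them differently):

```python
def msep(actual, predicted):
    """Mean squared error of prediction (MSEP)."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mapd(actual, predicted):
    """Mean absolute percentage deviation (MAPD), in percent.
    Assumes no actual value is zero."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

Lower values of both indicate a metamodel that tracks the simulation's cost and QALY outputs more closely.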
Affiliation(s)
- M Fasihul Alam
- Department of Public Health, College of Health Sciences, Qatar University, Doha, Qatar
- Andrew Briggs
- HEHTA, Institute of Health & Wellbeing, University of Glasgow, Glasgow, UK
8
Sai A, Vivas-Valencia C, Imperiale TF, Kong N. Multiobjective Calibration of Disease Simulation Models Using Gaussian Processes. Med Decis Making 2019; 39:540-552. [PMID: 31375053] [DOI: 10.1177/0272989x19862560]
Abstract
Background. Developing efficient procedures of model calibration, which entails matching model predictions to observed outcomes, has gained increasing attention. With faithful but complex simulation models established for cancer diseases, key parameters of cancer natural history can be investigated for possible fits, which can subsequently inform optimal prevention and treatment strategies. When multiple calibration targets exist, one approach to identifying optimal parameters relies on the Pareto frontier. However, computational burdens associated with higher-dimensional parameter spaces require a metamodeling approach. The goal of this work is to explore multiobjective calibration using Gaussian process regression (GPR) with an eye toward how multiple goodness-of-fit (GOF) criteria identify Pareto-optimal parameters. Methods. We applied GPR, a metamodeling technique, to estimate colorectal cancer (CRC)-related prevalence rates simulated from a microsimulation model of CRC natural history, known as the Colon Modeling Open Source Tool (CMOST). We embedded GPR metamodels within a Pareto optimization framework to identify best-fitting parameters for age-, adenoma-, and adenoma staging-dependent transition probabilities and risk factors. The Pareto frontier approach is demonstrated using genetic algorithms with both sum-of-squared errors (SSEs) and Poisson deviance GOF criteria. Results. The GPR metamodel is able to approximate CMOST outputs accurately on 2 separate parameter sets. Both GOF criteria are able to identify different best-fitting parameter sets on the Pareto frontier. The SSE criterion emphasizes the importance of age-specific adenoma progression parameters, while the Poisson criterion prioritizes adenoma-specific progression parameters. Conclusion. Different GOF criteria assert different components of the CRC natural history. 
The combination of multiobjective optimization and nonparametric regression, along with diverse GOF criteria, can advance the calibration process by identifying optimal regions of the underlying parameter landscape.
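The Pareto step, with both goodness-of-fit criteria minimized, reduces to a nondominated filter; a generic sketch follows (not CMOST-specific code), with the usual Poisson deviance alongside it:

```python
import math

def pareto_front(points):
    """Return the nondominated points when both criteria are minimized
    (e.g., sum-of-squared errors and Poisson deviance per parameter set)."""
    front = []
    for i, (a1, a2) in enumerate(points):
        dominated = any(
            (b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
            for j, (b1, b2) in enumerate(points) if j != i)
        if not dominated:
            front.append((a1, a2))
    return front

def poisson_deviance(observed, expected):
    """Poisson deviance goodness of fit; terms with observed == 0
    reduce to 2 * expected."""
    total = 0.0
    for o, e in zip(observed, expected):
        total += 2.0 * ((o * math.log(o / e) if o > 0 else 0.0) - (o - e))
    return total
```

In the paper a genetic algorithm searches the (metamodel-predicted) criterion space; the filter above is the defining test each candidate parameter set must pass.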
Affiliation(s)
- Aditya Sai
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Thomas F Imperiale
- Indiana University School of Medicine, Indiana University, Indianapolis, IN, USA; Richard A. Roudebush VA Medical Center, Indianapolis, IN, USA; Regenstrief Institute, Indianapolis, IN, USA
- Nan Kong
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
9
Degeling K, IJzerman M, Koffijberg H. A scoping review of metamodeling applications and opportunities for advanced health economic analyses. Expert Rev Pharmacoecon Outcomes Res 2018; 19:181-187. [DOI: 10.1080/14737167.2019.1548279]
Affiliation(s)
- K. Degeling
- Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands
- M.J. IJzerman
- Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands
- Cancer Health Services Research Unit, School of Population and Global Health, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia
- Victorian Comprehensive Cancer Centre, Melbourne, Australia
- H. Koffijberg
- Health Technology and Services Research Department, Technical Medical Centre, University of Twente, Enschede, the Netherlands
10
Koffijberg H, Rothery C, Chalkidou K, Grutters J. Value of Information Choices that Influence Estimates: A Systematic Review of Prevailing Considerations. Med Decis Making 2018; 38:888-900. [DOI: 10.1177/0272989x18797948]
Abstract
Background. Although value of information (VOI) analyses are increasingly advocated and used for research prioritization and reimbursement decisions, the interpretation and usefulness of VOI outcomes depend critically on the underlying choices and assumptions used in the analysis. In this article, we present a structured overview of all items reported in literature to potentially influence VOI outcomes. Use of this overview increases awareness and transparency of choices and assumptions underpinning VOI outcomes. Methods. A systematic literature review was performed to identify aspects of VOI analyses that were found to potentially influence VOI outcomes. Identified aspects were grouped to develop a structured overview. Explanations were defined for all items included in the overview. Results. We retrieved 687 unique papers, of which 71 original papers and 8 reviews were included. In the full text of these 79 papers, 16 aspects were found that may influence VOI outcomes. These aspects related to the underlying evidence (bias, synthesis, heterogeneity, correlation), uncertainty (structural, future pricing), model (relevance, approach, population), choices in VOI calculation (estimation technique, implementation level, population size, perspective), and aspects specifically for assessing the value of future study designs (reversal costs, efficient estimator). These aspects were aggregated into 7 items to provide a structured overview. Conclusion. The developed overview should increase awareness of key choices underlying VOI analysis and facilitate structured reporting of such choices and interpretation of the ensuing VOI outcomes by researchers and policy makers. Use of this overview should improve prioritization and reimbursement decisions.
Affiliation(s)
- Hendrik Koffijberg
- Department of Health Technology & Services Research, Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Claire Rothery
- Centre for Health Economics, University of York, Heslington, York, UK
- Kalipso Chalkidou
- Global Health and Development Group, Institute for Global Health Innovation, Imperial College London, London, UK
- Janneke Grutters
- Department for Health Evidence, Radboud University Medical Center, Nijmegen, Gelderland, The Netherlands
11
Lee Y, Park JS. Model selection algorithm in Gaussian process regression for computer experiments. Communications for Statistical Applications and Methods 2017. [DOI: 10.5351/csam.2017.24.4.383]
Affiliation(s)
- Youngsaeng Lee
- Department of Statistics, Chonnam National University, Korea
- Jeong-Soo Park
- Department of Statistics, Chonnam National University, Korea
12
Rafia R, Brennan A, Madan J, Collins K, Reed MWR, Lawrence G, Robinson T, Greenberg D, Wyld L. Modeling the Cost-Effectiveness of Alternative Upper Age Limits for Breast Cancer Screening in England and Wales. Value in Health 2016; 19:404-412. [PMID: 27325332] [DOI: 10.1016/j.jval.2015.06.006]
Abstract
BACKGROUND Currently in the United Kingdom, the National Health Service (NHS) Breast Screening Programme invites all women for triennial mammography between the ages of 47 and 73 years (the extension to 47-50 and 70-73 years is currently being examined as part of a randomized controlled trial). The benefits and harms of screening in women 70 years and older, however, are less well documented. OBJECTIVES The aim of this study was to examine whether extending screening to women older than 70 years would represent a cost-effective use of NHS resources and to identify the upper age limit to which screening mammography should be extended in England and Wales. METHODS We built a mathematical model to assess the impact of screening policies on cancer diagnosis and subsequent management. The model has two parts: a natural history model of the progression of breast cancer up to discovery and a postdiagnosis model of treatment, recurrence, and survival. The natural history model was calibrated to available data and compared against published literature. The management of breast cancer at diagnosis was taken from registry data and valued using official UK tariffs. RESULTS The model estimated that screening would lead to overdiagnosis in 6.2% of screen-detected women at the age of 72 years, increasing up to 37.9% at the age of 90 years. Under commonly quoted willingness-to-pay thresholds in the United Kingdom, our study suggests that an extension of screening up to the age of 78 years represents a cost-effective strategy. CONCLUSIONS This study provides encouraging findings to support the extension of the screening program to older ages and suggests that further extension of the UK NHS Breast Screening Programme up to age 78 years, beyond the current upper age limit of 73 years, could be cost-effective according to current NHS willingness-to-pay thresholds.
Affiliation(s)
- Rachid Rafia
- School of Health and Related Research, Health Economics and Decision Science, University of Sheffield, Sheffield, UK.
- Alan Brennan
- School of Health and Related Research, Health Economics and Decision Science, University of Sheffield, Sheffield, UK.
- Jason Madan
- Division of Health Sciences, Warwick Medical School, Coventry, UK.
- Karen Collins
- Faculty of Health and Wellbeing, Sheffield Hallam University, Sheffield, UK.
- Malcolm W R Reed
- Brighton and Sussex Medical School, University of Sussex, Brighton, UK.
- Gill Lawrence
- Breast Screening QA Reference Centre, West Midlands Cancer Intelligence Unit, Public Health Building, The University of Birmingham, Birmingham, UK.
- Thompson Robinson
- Department of Cardiovascular Sciences, University of Leicester, Leicester, UK.
- David Greenberg
- Eastern Cancer Registration and Information Centre (ECRIC), Unit C, Cambridge, UK.
- Lynda Wyld
- Academic Unit of Surgical Oncology, Royal Hallamshire Hospital, University of Sheffield, Sheffield, UK.
13
Menzies NA. An Efficient Estimator for the Expected Value of Sample Information. Med Decis Making 2015; 36:308-20. [PMID: 25911600 DOI: 10.1177/0272989x15583495] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2014] [Accepted: 03/09/2015] [Indexed: 11/16/2022]
Abstract
BACKGROUND Conventional estimators for the expected value of sample information (EVSI) are computationally expensive or limited to specific analytic scenarios. I describe a novel approach that allows efficient EVSI computation for a wide range of study designs and is applicable to models of arbitrary complexity. METHODS The posterior parameter distribution produced by a hypothetical study is estimated by reweighting existing draws from the prior distribution. EVSI can then be estimated using a conventional probabilistic sensitivity analysis, with no further model evaluations and with a simple sequence of calculations (Algorithm 1). A refinement to this approach (Algorithm 2) uses smoothing techniques to improve accuracy. Algorithm performance was compared with the conventional EVSI estimator (2-level Monte Carlo integration) and an alternative developed by Brennan and Kharroubi (BK), in a cost-effectiveness case study. RESULTS Compared with the conventional estimator, Algorithm 2 exhibited a root mean square error (RMSE) 8%-17% lower, with 3-4 orders of magnitude fewer model evaluations. Algorithm 1 produced results similar to those of the conventional estimator when study evidence was weak but underestimated EVSI when study evidence was strong. Compared with the BK estimator, the proposed algorithms reduced RMSE by 18%-38% in most analytic scenarios, with 40 times fewer model evaluations; Algorithm 1 again performed poorly in the context of strong study evidence. All methods were sensitive to the number of samples in the outer loop of the simulation. CONCLUSIONS The proposed algorithms remove two major challenges for estimating EVSI: the difficulty of estimating the posterior parameter distribution given hypothetical study data and the need for many model evaluations to obtain stable and unbiased results. These approaches make EVSI estimation feasible for a wide range of analytic scenarios.
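The reweighting idea behind Algorithm 1 can be sketched in a few lines: draws from the existing prior PSA sample are reweighted by the likelihood of each simulated study dataset, so the posterior expected net benefit requires no further model evaluations. The decision model, prior, and study design below are invented for illustration and are not the paper's case study.

```python
import math
import random

random.seed(1)

# --- Toy decision model (an assumption for illustration; not the paper's
# case study). "Treat" has a net benefit depending on an uncertain response
# probability p; "no treat" has net benefit 0.
WTP = 20_000           # willingness to pay per QALY
COST = 5_000           # treatment cost
M = 2_000              # size of the prior PSA sample

def net_benefit_treat(p):
    return WTP * 0.5 * p - COST        # 0.5 QALYs gained per responder

def beta_draw(a, b):
    """Beta(a, b) via its two-gamma construction (stdlib only)."""
    x = random.gammavariate(a, 1.0)
    y = random.gammavariate(b, 1.0)
    return x / (x + y)

prior_p = [beta_draw(5, 5) for _ in range(M)]      # prior: p ~ Beta(5, 5)
nb_treat = [net_benefit_treat(p) for p in prior_p]

# Value of the best decision under current information
ev_current = max(sum(nb_treat) / M, 0.0)

def evsi(n, n_datasets=300):
    """Algorithm 1 sketch: for each simulated study dataset, reweight the
    existing prior draws by their likelihood -- no new model evaluations."""
    total = 0.0
    for _ in range(n_datasets):
        p_true = random.choice(prior_p)                       # "true" p
        k = sum(random.random() < p_true for _ in range(n))   # study data
        # Binomial likelihood up to a constant (the constant cancels below)
        w = [p**k * (1.0 - p)**(n - k) for p in prior_p]
        sw = sum(w)
        post_nb = sum(wi * nb for wi, nb in zip(w, nb_treat)) / sw
        total += max(post_nb, 0.0)        # value of the best posterior decision
    return total / n_datasets - ev_current

print(f"EVSI(n=10)  = {evsi(10):7.1f}")
print(f"EVSI(n=100) = {evsi(100):7.1f}")
```

Because the toy decision is near the margin of cost effectiveness, the EVSI is substantial and grows with the study size, approaching the EVPI as n increases.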
Affiliation(s)
- Nicolas A Menzies
- Department of Global Health and Population and the Center for Health Decision Science, Harvard University, Boston, MA (NAM)
14
Strong M, Oakley JE, Brennan A. Estimating multiparameter partial expected value of perfect information from a probabilistic sensitivity analysis sample: a nonparametric regression approach. Med Decis Making 2014; 34:311-26. [PMID: 24246566 PMCID: PMC4819801 DOI: 10.1177/0272989x13505910] [Citation(s) in RCA: 160] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Accepted: 08/27/2013] [Indexed: 11/17/2022]
Abstract
The partial expected value of perfect information (EVPI) quantifies the expected benefit of learning the values of uncertain parameters in a decision model. Partial EVPI is commonly estimated via a 2-level Monte Carlo procedure in which parameters of interest are sampled in an outer loop, and then conditional on these, the remaining parameters are sampled in an inner loop. This is computationally demanding and may be difficult if correlation between input parameters results in conditional distributions that are hard to sample from. We describe a novel nonparametric regression-based method for estimating partial EVPI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method is applicable in a model of any complexity and with any specification of input parameter distribution. We describe the implementation of the method via 2 nonparametric regression modeling approaches, the Generalized Additive Model and the Gaussian process. We demonstrate in 2 case studies the superior efficiency of the regression method over the 2-level Monte Carlo method. R code is made available to implement the method.
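The regression trick described above can be sketched directly: regress net benefit on the parameter(s) of interest using the existing PSA sample, then compare the expectation of the per-draw best fitted value against the value of the current best decision. Here a simple binned-means "regressogram" stands in for the GAM and Gaussian-process smoothers the paper actually uses, and the two-parameter linear model is an invented toy.

```python
import random

random.seed(7)

# --- Toy two-strategy model (invented for illustration): the incremental
# net benefit (INB) of strategy B vs. A depends on two uncertain parameters.
N = 20_000
theta1 = [random.gauss(0.0, 1.0) for _ in range(N)]   # parameter of interest
theta2 = [random.gauss(0.0, 1.0) for _ in range(N)]   # remaining uncertainty
inb = [1000.0 * t1 + 800.0 * t2 for t1, t2 in zip(theta1, theta2)]

# Value of the best decision under current information
ev_current = max(sum(inb) / N, 0.0)

def binned_fit(x, y, n_bins=50):
    """Regressogram: estimate E[y | x] by averaging y within bins of x.
    A crude stand-in for the paper's GAM / Gaussian-process smoothers."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / width), n_bins - 1)
        sums[b] += yi
        counts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return [means[min(int((xi - lo) / width), n_bins - 1)] for xi in x]

fitted = binned_fit(theta1, inb)                      # ~ E[INB | theta1]
partial_evpi = sum(max(f, 0.0) for f in fitted) / N - ev_current

print(f"Partial EVPI for theta1: {partial_evpi:.0f}")
```

For this toy model the analytic partial EVPI is 1000/sqrt(2*pi), roughly 399, and the single-loop estimate lands close to it without any inner-loop model reruns.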
Affiliation(s)
- Mark Strong
- School of Health and Related Research (ScHARR), University of Sheffield, 30 Regent Street, Sheffield S1 4DA, UK.
- Jeremy E. Oakley
- School of Mathematics and Statistics, University of Sheffield, Sheffield, UK (JEO)
- Alan Brennan
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK (MS, AB)
15
Tuffaha HW, Gordon LG, Scuffham PA. Value of information analysis in oncology: the value of evidence and evidence of value. J Oncol Pract 2013; 10:e55-62. [PMID: 24194511 DOI: 10.1200/jop.2013.001108] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
PURPOSE Value of information (VOI) analysis is a novel systematic approach for assessing whether there is sufficient evidence to support regulatory approval of new technologies, estimating the value of additional research, informing trial design, and setting research priorities. This article reviews the use of VOI methods in oncology and identifies the potential applications of VOI in this field. METHODS A systematic literature search was undertaken to identify studies explicitly reporting VOI analyses for interventions directed at cancer management. Articles published from 2000 onward addressing prevention, screening, diagnosis, treatment, or symptom management in oncology were selected. RESULTS A total of 35 articles were included in the review; most were published after 2006. The main cancers addressed were breast (n = 10; 29%), prostate (n = 5; 14%), lung (n = 5; 14%), and colorectal (n = 3; 9%). The VOI analyses were of an applied nature in 31 studies (89%). In the applied studies, VOI was used to characterize decision uncertainty in all studies and to inform future research focus in 16 (52%). Additionally, one article (3%) addressed the value of optimal trial design, and one article (3%) reported the use of VOI methods to prioritize research. CONCLUSION The application of VOI analysis in oncology is growing but remains limited. Benefits in oncology research and practice will potentially be optimized with an increase in the application of VOI methods to inform decision making, optimal trial design, and research prioritization in this field.
16
Mohseninejad L, van Baal PHM, van den Berg M, Buskens E, Feenstra T. Value of information analysis from a societal perspective: a case study in prevention of major depression. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2013; 16:490-497. [PMID: 23796282 DOI: 10.1016/j.jval.2012.12.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2011] [Revised: 10/30/2012] [Accepted: 12/21/2012] [Indexed: 06/02/2023]
Abstract
OBJECTIVES Productivity losses usually have a considerable impact on cost-effectiveness estimates, while their estimated values are often relatively uncertain. Parameters related to these indirect costs therefore play a role in setting priorities for future research from a societal perspective. Until now, however, value of information analyses have usually applied a health care perspective, so the effect of productivity losses has rarely been investigated. The aim of the current study was therefore to investigate the effects of including or excluding productivity costs in value of information analysis. METHODS Expected value of perfect information (EVPI) analysis was performed in a cost-effectiveness evaluation of prevention from both societal and health care perspectives, allowing the two perspectives to be compared. Priorities for future research were determined by partial EVPI. The case study was a program to prevent major depression in patients with subthreshold depression: opportunistic screening followed by minimal contact psychotherapy. RESULTS The EVPI indicated that, regardless of perspective, further research is potentially worthwhile. Partial EVPI results underlined the importance of productivity losses when a societal perspective was considered. Furthermore, priority setting for future research differed according to perspective. CONCLUSIONS The results illustrate that advice for future research will differ between a health care and a societal perspective; hence, value of information analysis should be adjusted to the perspective relevant to the decision makers involved. The outcomes underline the need to carefully choose the perspective suitable for the decision problem at hand.
Affiliation(s)
- Leyla Mohseninejad
- Department of Epidemiology, Unit Health Technology Assessment, University Medical Center Groningen, The University of Groningen, Groningen, The Netherlands.
17
Steuten L, van de Wetering G, Groothuis-Oudshoorn K, Retèl V. A systematic and critical review of the evolving methods and applications of value of information in academia and practice. PHARMACOECONOMICS 2013; 31:25-48. [PMID: 23329591 DOI: 10.1007/s40273-012-0008-3] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
OBJECTIVE This article provides a systematic and critical review of the evolving methods and applications of value of information (VOI) in academia and practice and discusses where future research needs to be directed. METHODS Published VOI studies were identified by conducting a computerized search on Scopus and ISI Web of Science from 1980 until December 2011 using pre-specified search terms. Only full-text papers that outlined and discussed VOI methods for medical decision making, and studies that applied VOI and explicitly discussed the results with a view to informing healthcare decision makers, were included. The included papers were divided into methodological and applied papers, based on the aim of the study. RESULTS A total of 118 papers were included, of which 50% (n = 59) are methodological. A rapidly accumulating literature base on VOI is observed from 1999 onwards for methodological papers and from 2005 onwards for applied papers. Expected value of sample information (EVSI) is the preferred VOI method to inform decision making regarding specific future studies, but real-life applications of EVSI remain scarce. Methodological challenges to VOI are numerous and include the high computational demands, dealing with non-linear models and interdependency between parameters, estimation of effective time horizons and patient populations, and structural uncertainties. CONCLUSION VOI analysis receives increasing attention in both the methodological and applied literature, but challenges to applying VOI in real-life decision making remain. For many technical and methodological challenges to VOI, analytic solutions have been proposed in the literature, including leaner methods for VOI. Further research should also focus on the needs of decision makers regarding VOI.
Affiliation(s)
- Lotte Steuten
- Department of Health Technology and Services Research, University of Twente, PO Box 217, 7500 AE, Enschede, The Netherlands.
18
McCullagh L, Walsh C, Barry M. Value-of-information analysis to reduce decision uncertainty associated with the choice of thromboprophylaxis after total hip replacement in the Irish healthcare setting. PHARMACOECONOMICS 2012; 30:941-959. [PMID: 22667458 DOI: 10.2165/11591510-000000000-00000] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
BACKGROUND The National Centre for Pharmacoeconomics, in collaboration with the Health Services Executive, considers the cost effectiveness of all new medicines introduced into Ireland. Health Technology Assessments (HTAs) are conducted in accordance with the existing agreed Irish HTA guidelines, which do not specify a formal analysis of value of information (VOI). OBJECTIVE The aim of this study was to demonstrate the benefits of using VOI analysis in decreasing decision uncertainty and to examine the viability of applying these techniques as part of the formal HTA process for reimbursement purposes within the Irish healthcare system. METHOD The evaluation was conducted from the Irish health payer perspective. A lifetime model evaluated the cost effectiveness of rivaroxaban, dabigatran etexilate and enoxaparin sodium for the prophylaxis of venous thromboembolism after total hip replacement. The expected value of perfect information (EVPI) was determined directly from the probabilistic sensitivity analysis (PSA). Population-level EVPI (PEVPI) was determined by scaling up the EVPI according to the decision incidence. The expected value of perfect parameter information (EVPPI) was calculated for three model parameter subsets: probabilities, preference weights and direct medical costs. RESULTS In the base-case analysis, rivaroxaban dominated both dabigatran etexilate and enoxaparin sodium. PSA indicated that rivaroxaban had the highest probability of being the most cost-effective strategy over a threshold range of €0-€100 000 per QALY. At a threshold of €45 000 per QALY, the probability that rivaroxaban was the most cost-effective strategy was 67%. At the same threshold, assuming a 10-year decision time horizon, the PEVPI was €11.96 million, and the direct medical costs subset had the highest EVPPI value (€9.00 million at a population level).
To decrease uncertainty, a more detailed costing study was undertaken. In the subsequent analysis, rivaroxaban continued to dominate both comparators and, in the PSA, continued to have the highest probability of being optimal over the threshold range €0-€100 000 per QALY. At €45 000 per QALY, the probability that rivaroxaban was the most cost-effective strategy increased to 80%, the 10-year PEVPI decreased to €3.58 million, and the population value associated with the direct medical costs subset fell to €1.72 million. CONCLUSION This increase in the probability of cost effectiveness, coupled with a substantially reduced potential opportunity loss, could influence a decision maker's confidence in making a reimbursement decision. Following discussions with the decision maker, we now intend to incorporate VOI into our HTA process.
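The scaling from per-decision EVPI to population-level EVPI described in the abstract is a short discounted sum over the cohorts expected to face the decision. The incidence, horizon, and discount rate below are assumed for illustration and are not taken from the study.

```python
# PEVPI sketch: per-decision EVPI scaled to the population facing the
# decision, summing discounted annual cohorts over the time horizon.
def population_evpi(evpi_per_patient, annual_incidence, years, discount=0.04):
    return sum(evpi_per_patient * annual_incidence / (1.0 + discount) ** t
               for t in range(years))

# Illustrative numbers (assumed; not the study's values):
print(f"10-year PEVPI: {population_evpi(150.0, 1_000, 10):,.0f}")
```

With a per-patient EVPI of 150, 1000 decisions per year, and a 4% discount rate, the 10-year PEVPI is about 1.27 million, i.e. roughly 8.4 times the undiscounted annual value.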
Affiliation(s)
- Laura McCullagh
- National Centre for Pharmacoeconomics, St James's Hospital, Dublin, Ireland.
19
Naveršnik K, Rojnik K. Handling input correlations in pharmacoeconomic models. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2012; 15:540-9. [PMID: 22583465 DOI: 10.1016/j.jval.2011.12.008] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2011] [Revised: 12/11/2011] [Accepted: 12/16/2011] [Indexed: 05/04/2023]
Abstract
OBJECTIVES Probabilistic uncertainty analysis is a common means of evaluating pharmacoeconomic models and exploring decision uncertainty. Uncertain parameters are assigned probability distributions and analyses performed by Monte Carlo simulation. Correlations between input parameters are rarely accounted for despite recommendations from several guidelines. By outlining theoretical reasons for including correlations and showing numerous examples of existing correlations, we appeal to the analyst to consider input dependencies. Our objective is to review the available methods to do so, give technical details on implementation and show, by using examples of published studies, the effect input correlations have on model outputs. METHODS A hierarchy of methods for dealing with correlations in Monte Carlo simulation is presented and used. The choice of method depends on the amount of information available on dependency and consists of functional modeling, joint distributions/copulas, and coupling of marginal distributions. RESULTS We induced input correlation with various methods and showed that in most cases the choice of optimal decision remained the same as in the independent scenario. There was, however, a significant change in the value of further information because of inducing input correlations. The results were similar for various dependency structures and were mainly a function of the strength of correlation, as measured by the linear correlation coefficient. CONCLUSION Probabilistic uncertainty analysis reflects joint uncertainty across input parameters only when dependence among input parameters is accounted for.
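The "coupling of marginal distributions" approach in the hierarchy above can be sketched with a Gaussian copula: correlated standard normals are transformed to uniforms and then pushed through each marginal's inverse CDF, preserving the marginals while inducing dependence. The lognormal cost and exponential event-rate marginals, and the correlation of 0.7, are invented for illustration.

```python
import math
import random

random.seed(3)

def std_normal_cdf(z):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def correlated_pair(rho):
    """Draw one (cost, event-rate) pair whose dependence comes from a
    Gaussian copula with normal-scale correlation rho. The lognormal and
    exponential marginals are assumptions for illustration."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    cost = math.exp(7.0 + 0.5 * z1)              # lognormal cost marginal
    u2 = std_normal_cdf(z2)                      # copula uniform
    rate = -math.log(1.0 - u2) / 0.1             # Exponential(0.1) inverse CDF
    return cost, rate

n = 20_000
costs, rates = zip(*(correlated_pair(0.7) for _ in range(n)))

# Pearson correlation of the generated inputs: clearly positive, though
# attenuated below 0.7 because the marginal transforms are nonlinear.
mc, mr = sum(costs) / n, sum(rates) / n
cov = sum((c - mc) * (r - mr) for c, r in zip(costs, rates)) / n
sc = (sum((c - mc) ** 2 for c in costs) / n) ** 0.5
sr = (sum((r - mr) ** 2 for r in rates) / n) ** 0.5
print(f"sample Pearson correlation: {cov / (sc * sr):.2f}")
```

Feeding such coupled draws into a PSA, instead of independent draws, is what allows the joint uncertainty (and hence the value-of-information results discussed above) to reflect input dependence.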
Affiliation(s)
- Klemen Naveršnik
- Lek Pharmaceuticals d.d., Sandoz Development Center Ljubljana, Ljubljana, Slovenia.