1
Heath A, Baio G, Manolopoulou I, Welton NJ. Value of Information for Clinical Trial Design: The Importance of Considering All Relevant Comparators. PHARMACOECONOMICS 2024; 42:479-486. [PMID: 38583100 DOI: 10.1007/s40273-024-01372-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/05/2024] [Indexed: 04/08/2024]
Abstract
Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design as it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice for research; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight key differences between the research design approach based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
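The effect the authors warn about can be reproduced with a toy Monte Carlo EVPI calculation (the net-benefit distributions below are hypothetical, not the paper's examples): omitting a plausible comparator from the decision model changes the apparent value of further research.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical net monetary benefit draws for three comparators.
# C is best on average, but B is sometimes optimal, so omitting B
# from the model changes the value of resolving uncertainty.
nb = np.column_stack([
    rng.normal(0.0, 1.0, n),    # A: standard care
    rng.normal(0.4, 2.0, n),    # B: highly uncertain novel treatment
    rng.normal(0.5, 1.0, n),    # C: moderately uncertain treatment
])

def evpi(nb):
    """EVPI = E[max_t NB_t] - max_t E[NB_t]."""
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

evpi_full = evpi(nb)              # all three comparators retained
evpi_no_b = evpi(nb[:, [0, 2]])   # comparator B omitted

print(f"EVPI (all comparators): {evpi_full:.3f}")
print(f"EVPI (B omitted):       {evpi_no_b:.3f}")
```

With B omitted, the analysis understates the value of information, which is the kind of misleading VOI result the abstract describes.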
Affiliation(s)
- Anna Heath
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Statistical Science, University College London, London, UK
- Gianluca Baio
- Department of Statistical Science, University College London, London, UK
- Nicky J Welton
- Bristol Medical School, University of Bristol, Bristol, UK
2
Jackson CH, Baio G, Heath A, Strong M, Welton NJ, Wilson EC. Value of Information Analysis in Models to Inform Health Policy. ANNUAL REVIEW OF STATISTICS AND ITS APPLICATION 2022; 9:95-118. [PMID: 35415193 PMCID: PMC7612603 DOI: 10.1146/annurev-statistics-040120-010730] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Value of information (VoI) is a decision-theoretic approach to estimating the expected benefits from collecting further information of different kinds, in scientific problems based on combining one or more sources of data. VoI methods can assess the sensitivity of models to different sources of uncertainty and help to set priorities for further data collection. They have been widely applied in healthcare policy making, but the ideas are general to a range of evidence synthesis and decision problems. This article gives a broad overview of VoI methods, explaining the principles behind them, the range of problems that can be tackled with them, and how they can be implemented, and discusses the ongoing challenges in the area.
Affiliation(s)
- Gianluca Baio
- Department of Statistical Science, University College London, London WC1E 6BT, United Kingdom
- Anna Heath
- The Hospital for Sick Children, Toronto, Ontario M5G 1X8, Canada
- Mark Strong
- School of Health and Related Research, University of Sheffield, Sheffield S1 4DA, United Kingdom
- Nicky J. Welton
- Bristol Medical School (PHS), University of Bristol, Bristol BS8 1QU, United Kingdom
3
Heath A, Strong M, Glynn D, Kunst N, Welton NJ, Goldhaber-Fiebert JD. Simulating Study Data to Support Expected Value of Sample Information Calculations: A Tutorial. Med Decis Making 2022; 42:143-155. [PMID: 34388954 PMCID: PMC8793320 DOI: 10.1177/0272989x211026292] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 05/20/2021] [Indexed: 12/13/2022]
Abstract
The expected value of sample information (EVSI) can be used to prioritize avenues for future research and design studies that support medical decision making and offer value for money spent. EVSI is calculated based on 3 key elements. Two of these, a probabilistic model-based economic evaluation and updating model uncertainty based on simulated data, have been frequently discussed in the literature. By contrast, the third element, simulating data from the proposed studies, has received little attention. This tutorial contributes to bridging this gap by providing a step-by-step guide to simulating study data for EVSI calculations. We discuss a general-purpose algorithm for simulating data and demonstrate its use to simulate 3 different outcome types. We then discuss how to induce correlations in the generated data, how to adjust for common issues in study implementation such as missingness and censoring, and how individual patient data from previous studies can be leveraged to undertake EVSI calculations. For all examples, we provide comprehensive code written in the R language and, where possible, Excel spreadsheets in the supplementary materials. This tutorial facilitates practical EVSI calculations and allows EVSI to be used to prioritize research and design studies.
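The tutorial's general-purpose algorithm — draw parameters from their current distribution, then simulate one plausible study dataset per draw — can be sketched for a binary outcome as follows (the prior and sample size are hypothetical; the tutorial itself provides R code and spreadsheets):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: probabilistic analysis sample for the focal parameter
# (a response probability with a Beta prior -- hypothetical numbers).
n_sim = 5000
p_response = rng.beta(a=8, b=12, size=n_sim)

# Step 2: for each parameter draw, simulate the summary statistic of the
# proposed study (a single-arm trial with a binary outcome).
n_patients = 150
simulated_responders = rng.binomial(n=n_patients, p=p_response)

# Each simulated count is one plausible future study result; EVSI methods
# then update the decision model with each simulated dataset in turn.
print(simulated_responders[:5])
```

The same two-step pattern extends to continuous and time-to-event outcomes by swapping the sampling distribution in step 2.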
Affiliation(s)
- Anna Heath
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Statistical Science, University College London, London, UK
- Mark Strong
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- David Glynn
- Centre for Health Economics, University of York, York, UK
- Natalia Kunst
- Harvard Medical School & Harvard Pilgrim Health Care Institute, Harvard University, Boston, MA, USA
- Nicky J. Welton
- School of Social and Community Medicine, University of Bristol, Bristol, UK
- Jeremy D. Goldhaber-Fiebert
- Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
4
Abstract
BACKGROUND The expected value of sample information (EVSI) calculates the value of collecting additional information through a research study with a given design. However, standard EVSI analyses do not account for the slow and often incomplete implementation of the treatment recommendations that follow research. Thus, standard EVSI analyses do not correctly capture the value of the study. Previous research has developed measures to calculate the research value while adjusting for implementation challenges, but these measures are themselves difficult to estimate. METHODS Based on a method that assumes the implementation level is related to the strength of evidence in favor of the treatment, 2 implementation-adjusted EVSI calculation methods are developed. These novel methods circumvent the need for analytical calculations, which were restricted to settings in which normality could be assumed. The first method uses computationally demanding nested simulations, based on the definition of the implementation-adjusted EVSI. The second method adapts the moment matching method, a recently developed efficient EVSI computation method, to adjust for imperfect implementation. The implementation-adjusted EVSI is then calculated with the 2 methods across 3 examples. RESULTS The estimates from the 2 methods differ by at most 6% in all examples. The efficient computation method is between 6 and 60 times faster than the nested simulation method across the examples and could be used in practice. CONCLUSIONS This article permits the calculation of an implementation-adjusted EVSI using realistic assumptions. The efficient estimation method is accurate and can estimate the implementation-adjusted EVSI in practice. By adapting standard EVSI estimation methods, adjustments for imperfect implementation can be made with the same computational cost as a standard EVSI analysis.
HIGHLIGHTS
- Standard expected value of sample information (EVSI) analyses do not account for the fact that treatment implementation following research is often slow and incomplete, meaning they incorrectly capture the value of the study.
- Two methods, based on nested Monte Carlo sampling and the moment matching EVSI calculation method, are developed to adjust EVSI calculations for imperfect implementation when the speed and level of implementation of a new treatment depend on the strength of evidence in favor of the treatment.
- The 2 methods we develop provide similar estimates for the implementation-adjusted EVSI.
- Our methods extend current EVSI calculation algorithms and thus require limited additional computational complexity.
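A minimal sketch of the evidence-dependent implementation idea, in a conjugate normal setting with hypothetical numbers (the logistic uptake rule below is an illustrative stand-in for the paper's model, and the adjusted quantity is a simplified analogue, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normal-normal setting: incremental net benefit (INB) of a
# new treatment over standard care, with standard care's net benefit at 0.
mu0, sd0 = 500.0, 1000.0   # prior mean and sd of the INB
sd_study = 400.0           # sampling sd of the proposed study's INB estimate

n_sim = 50_000
inb_true = rng.normal(mu0, sd0, n_sim)      # plausible true INBs
inb_hat = rng.normal(inb_true, sd_study)    # simulated study estimates

# Conjugate (normal-normal) posterior given each simulated estimate.
post_var = 1.0 / (1.0 / sd0**2 + 1.0 / sd_study**2)
post_mean = post_var * (mu0 / sd0**2 + inb_hat / sd_study**2)

# Standard EVSI assumes the apparently better option is fully adopted.
evsi_std = np.maximum(post_mean, 0.0).mean() - max(mu0, 0.0)

# Evidence-dependent implementation (illustrative): uptake of the new
# treatment grows with the strength of evidence, here a logistic function
# of the standardised posterior mean, so only part of the gain is realised.
uptake = 1.0 / (1.0 + np.exp(-post_mean / np.sqrt(post_var)))
evsi_adj = (uptake * post_mean).mean() - max(mu0, 0.0)

print(f"standard EVSI:               {evsi_std:.0f}")
print(f"implementation-adjusted EVSI: {evsi_adj:.0f}")
```

Because uptake is always below 1, the adjusted value never exceeds the standard EVSI, which is the qualitative point the abstract makes.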
Affiliation(s)
- Anna Heath
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Statistical Science, University College London, London, UK
5
Heath A, Pechlivanoglou P. Prioritizing Research in an Era of Personalized Medicine: The Potential Value of Unexplained Heterogeneity. Med Decis Making 2022; 42:649-660. [PMID: 35023403 PMCID: PMC9189719 DOI: 10.1177/0272989x211072858] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Background Clinical care is moving from a “one size fits all” approach to a setting in which treatment decisions are based on individual treatment response, needs, preferences, and risk. Research into personalized treatment strategies aims to discover currently unknown markers that identify individuals who would benefit from treatments that are nonoptimal at the population level. Before investing in research to identify these markers, it is important to assess whether such research has the potential to generate value. Thus, this article aims to develop a framework to prioritize research into the development of new personalized treatment strategies by creating a set of measures that assess the value of personalizing care based on a set of unknown patient characteristics. Methods Generalizing ideas from the value of heterogeneity framework, we demonstrate 3 measures that assess the value of developing personalized treatment strategies. The first measure identifies the potential value of personalizing medicine within a given disease area. The next 2 measures highlight specific research priorities and subgroup structures that would lead to improved patient outcomes from the personalization of treatment decisions. Results We graphically present the 3 measures to perform sensitivity analyses around the key drivers of value, in particular, the correlation between the individual treatment benefits across the available treatment options. We illustrate these 3 measures using a previously published decision model and discuss how they can direct research in personalized medicine. Conclusion We discuss 3 measures that form the basis of a novel framework to prioritize research into novel personalized treatment strategies. Our novel framework ensures that research targets personalized treatment strategies that have high potential to improve patient outcomes and health system efficiency.
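The first measure — the potential value of personalization — can be read as the expected gain from giving each individual their best treatment rather than the population-optimal one. A sketch with hypothetical bivariate-normal individual benefits shows how the correlation between individual treatment benefits, which the abstract flags as a key driver, governs this value:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

def value_of_personalisation(rho, mu=(0.0, 0.1), sd=(1.0, 1.0)):
    """Expected gain from assigning each individual their best of two
    treatments, relative to giving everyone the treatment that is best
    on average, when individual benefits have correlation rho.
    (Hypothetical means and sds; Monte Carlo estimate.)"""
    cov = [[sd[0]**2, rho * sd[0] * sd[1]],
           [rho * sd[0] * sd[1], sd[1]**2]]
    b = rng.multivariate_normal(mu, cov, size=n)
    return b.max(axis=1).mean() - max(mu)

for rho in (-0.5, 0.0, 0.9):
    print(f"rho={rho:+.1f}: potential value {value_of_personalisation(rho):.3f}")
```

The value falls as the correlation rises: when the two treatments benefit the same people, there is little left to gain from personalizing.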
Affiliation(s)
- Anna Heath
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Department of Statistical Science, University College London, London, UK
- Petros Pechlivanoglou
- Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
6
Flight L, Julious S, Brennan A, Todd S. Expected Value of Sample Information to Guide the Design of Group Sequential Clinical Trials. Med Decis Making 2021; 42:461-473. [PMID: 34859693 PMCID: PMC9005835 DOI: 10.1177/0272989x211045036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Introduction Adaptive designs allow changes to an ongoing trial based on prespecified early examinations of accrued data. Opportunities are potentially being missed to incorporate health economic considerations into the design of these studies. Methods We describe how to estimate the expected value of sample information for group sequential design adaptive trials. We operationalize this approach in a hypothetical case study using data from a pilot trial. We report the expected value of sample information and expected net benefit of sampling results for 5 design options for the future full-scale trial including the fixed-sample-size design and the group sequential design using either the Pocock stopping rule or the O’Brien-Fleming stopping rule with 2 or 5 analyses. We considered 2 scenarios relating to 1) using the cost-effectiveness model with a traditional approach to the health economic analysis and 2) adjusting the cost-effectiveness analysis to incorporate the bias-adjusted maximum likelihood estimates of trial outcomes to account for the bias that can be generated in adaptive trials. Results The case study demonstrated that the methods developed could be successfully applied in practice. The results showed that the O’Brien-Fleming stopping rule with 2 analyses was the most efficient design with the highest expected net benefit of sampling in the case study. Conclusions Cost-effectiveness considerations are unavoidable in budget-constrained, publicly funded health care systems, and adaptive designs can provide an alternative to costly fixed-sample-size designs. We recommend that when planning a clinical trial, expected value of sample information methods be used to compare possible adaptive and nonadaptive trial designs, with appropriate adjustment, to help justify the choice of design characteristics and ensure the cost-effective use of research funding.
Affiliation(s)
- Laura Flight
- School of Health and Related Research (ScHARR), University of Sheffield, Regent Court, 30 Regent Street, Sheffield S1 4DA, UK
- Steven Julious
- School of Health and Related Research, University of Sheffield, Sheffield, UK
- Alan Brennan
- School of Health and Related Research, University of Sheffield, Sheffield, UK
- Susan Todd
- Department of Mathematics and Statistics, University of Reading, Reading, UK
7
Probabilistic threshold analysis by pairwise stochastic approximation for decision-making under uncertainty. Sci Rep 2021; 11:19671. [PMID: 34608224 PMCID: PMC8490445 DOI: 10.1038/s41598-021-99089-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 09/20/2021] [Indexed: 11/30/2022] Open
Abstract
The concept of probabilistic parameter threshold analysis has recently been introduced as a form of probabilistic sensitivity analysis for decision-making under uncertainty, in particular for health economic evaluations that compare two or more alternative treatments while accounting for uncertainty in outcomes and costs. In this paper we formulate probabilistic threshold analysis as a root-finding problem involving conditional expectations and propose a pairwise stochastic approximation algorithm to search for the threshold value below and above which the conditionally optimal decision option changes. Numerical experiments on both a simple synthetic test case and a chemotherapy Markov model illustrate the effectiveness of the proposed algorithm, which requires none of the accurate estimation or approximation of conditional expectations that existing approaches rely upon. We also introduce a new measure, the decision switching probability, for probabilistic sensitivity analysis.
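The underlying root-finding scheme can be sketched with a classic Robbins-Monro recursion on a synthetic linear problem (this is not the paper's pairwise algorithm, just the stochastic-approximation idea it builds on):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic model: the noisy incremental net benefit of option A over
# option B when a threshold parameter (e.g., willingness to pay) is x.
# Its expectation is g(x) = 2*x - 1, so the decision switches at x* = 0.5.
def noisy_inb(x):
    return 2.0 * x - 1.0 + rng.normal(0.0, 1.0)

# Robbins-Monro stochastic approximation: step towards the root of g
# using only noisy evaluations, with diminishing step sizes a_k = 1/k.
x = 0.0
for k in range(1, 50_001):
    x -= (1.0 / k) * noisy_inb(x)

print(f"estimated threshold: {x:.3f}")   # close to the true value 0.5
```

The appeal, as in the paper, is that no explicit estimate of the conditional expectation g(x) is ever formed; only noisy single evaluations are needed.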
8
Fang W, Wang Z, Giles MB, Jackson CH, Welton NJ, Andrieu C, Thom H. Multilevel and Quasi Monte Carlo Methods for the Calculation of the Expected Value of Partial Perfect Information. Med Decis Making 2021; 42:168-181. [PMID: 34231446 PMCID: PMC8777326 DOI: 10.1177/0272989x211026305] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The expected value of partial perfect information (EVPPI) provides an upper bound on the value of collecting further evidence on a set of inputs to a cost-effectiveness decision model. Standard Monte Carlo estimation of EVPPI is computationally expensive as it requires nested simulation. Alternatives based on regression approximations to the model have been developed but are not practicable when the number of uncertain parameters of interest is large and when parameter estimates are highly correlated. The error associated with the regression approximation is difficult to determine, while MC allows the bias and precision to be controlled. In this article, we explore the potential of quasi Monte Carlo (QMC) and multilevel Monte Carlo (MLMC) estimation to reduce the computational cost of estimating EVPPI by reducing the variance compared with MC while preserving accuracy. We also develop methods to apply QMC and MLMC to EVPPI, addressing particular challenges that arise where Markov chain Monte Carlo (MCMC) has been used to estimate input parameter distributions. We illustrate the methods using 2 examples: a simplified decision tree model for treatments for depression and a complex Markov model for treatments to prevent stroke in atrial fibrillation, both of which use MCMC inputs. We compare the performance of QMC and MLMC with MC and the approximation techniques of generalized additive model (GAM) regression, Gaussian process (GP) regression, and integrated nested Laplace approximations (INLA-GP). We found QMC and MLMC to offer substantial computational savings when parameter sets are large and correlated and when the EVPPI is large. We also found that GP and INLA-GP were biased in those situations, whereas GAM cannot estimate EVPPI for large parameter sets.
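For context, the standard nested Monte Carlo estimator that QMC and MLMC aim to accelerate can be sketched on a toy two-treatment model with a known EVPPI (all distributions hypothetical, chosen so the answer is analytic):

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy model: net benefit of T1 is 0; net benefit of T2 is phi + psi,
# with phi the parameter of interest and psi an independent nuisance
# parameter. EVPPI for phi is then E[max(phi, 0)] - max(E[phi], 0).
mu_phi = 0.1

def evppi_nested(n_outer=5000, n_inner=2000):
    # Outer loop: draw the parameter of interest.
    phi = rng.normal(mu_phi, 1.0, n_outer)
    # Inner loop: expected net benefit of T2 given phi (average over psi).
    inner = np.array([np.mean(p + rng.normal(0.0, 1.0, n_inner))
                      for p in phi])
    return np.maximum(inner, 0.0).mean() - max(mu_phi, 0.0)

est = evppi_nested()
print(f"nested MC EVPPI: {est:.3f} (analytic value ≈ 0.351)")
```

The cost is n_outer × n_inner model evaluations; QMC replaces the random draws with low-discrepancy sequences and MLMC spreads the inner samples across accuracy levels, reducing variance within this same nested structure.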
Affiliation(s)
- Wei Fang
- Mathematical Institute, University of Oxford, Oxford, Oxfordshire, UK
- Zhenru Wang
- Mathematical Institute, University of Oxford, Oxford, Oxfordshire, UK
- Michael B Giles
- Mathematical Institute, University of Oxford, Oxford, Oxfordshire, UK
- Chris H Jackson
- MRC Biostatistics Unit, University of Cambridge, Cambridge, Cambridgeshire, UK
- Nicky J Welton
- Population Health Science, Bristol Medical School, University of Bristol, Bristol, UK
- Howard Thom
- Population Health Science, Bristol Medical School, University of Bristol, Bristol, UK
9
Affiliation(s)
- Haitham Tuffaha
- The Centre for the Business and Economics of Health, The University of Queensland, Brisbane, QLD, Australia.
10
Fairley M, Cipriano LE, Goldhaber-Fiebert JD. Optimal Allocation of Research Funds under a Budget Constraint. Med Decis Making 2020; 40:797-814. [PMID: 32845233 DOI: 10.1177/0272989x20944875] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Purpose. Health economic evaluations that include the expected value of sample information support implementation decisions as well as decisions about further research. However, just as decision makers must consider portfolios of implementation spending, they must also identify the optimal portfolio of research investments. Methods. Under a fixed research budget, a decision maker determines which studies to fund; additional budget allocated to one study to increase the study sample size implies less budget available to collect information to reduce decision uncertainty in other implementation decisions. We employ a budget-constrained portfolio optimization framework in which the decisions are whether to invest in a study and at what sample size. The objective is to maximize the sum of the studies' population expected net benefit of sampling (ENBS). We show how to determine the optimal research portfolio and study-specific levels of investment. We demonstrate our framework with a stylized example to illustrate solution features and a real-world application using 6 published cost-effectiveness analyses. Results. Among the studies selected for nonzero investment, the optimal sample size occurs at the point at which the marginal population ENBS divided by the marginal cost of additional sampling is the same for all studies. Compared with standard ENBS optimization without a research budget constraint, optimal budget-constrained sample sizes are typically smaller but allow more studies to be funded. Conclusions. The budget constraint for research studies directly implies that the optimal sample size for additional research is not the point at which the ENBS is maximized for individual studies. A portfolio optimization approach can yield higher total ENBS. Ultimately, there is a maximum willingness to pay for incremental information that determines optimal sample sizes.
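The optimality condition described in the results — equal marginal population ENBS per marginal dollar across funded studies — can be illustrated with a greedy allocation over hypothetical concave population-EVSI curves (all numbers illustrative; this is a sketch of the idea, not the paper's optimization framework):

```python
import numpy as np

# Hypothetical concave population-EVSI curves for three candidate studies:
# evsi_i(n) = v_i * (1 - exp(-n / k_i)); each extra participant costs c_i.
v = np.array([5e6, 3e6, 8e6])            # maximum achievable value per study
k = np.array([400.0, 150.0, 900.0])      # how quickly each curve saturates
c = np.array([2000.0, 1500.0, 2500.0])   # cost per participant

def marginal_value(n):
    """d evsi_i / d n at the current sample sizes n."""
    return (v / k) * np.exp(-n / k)

budget = 3e6
step = 10                     # allocate participants in blocks of 10
n = np.zeros(3)
spent = 0.0
while True:
    # Value per extra dollar for each study; fund the best one next.
    ratio = marginal_value(n) / c
    best = int(np.argmax(ratio))
    cost = step * c[best]
    # Stop when the budget is exhausted or sampling no longer pays off
    # (marginal value per dollar at or below 1, i.e., marginal ENBS <= 0).
    if spent + cost > budget or ratio[best] <= 1.0:
        break
    n[best] += step
    spent += cost

print("allocated sample sizes:", n, "spend:", spent)
```

Because each block of funding goes to the study with the highest marginal value per dollar, the funded studies end up with approximately equal ratios, and the binding budget yields smaller sample sizes than unconstrained ENBS maximization, mirroring the paper's conclusions.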
Affiliation(s)
- Michael Fairley
- Department of Management Science and Engineering, Stanford University, Stanford, CA, USA
- Lauren E Cipriano
- Ivey Business School and the Department of Epidemiology and Biostatistics at Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Jeremy D Goldhaber-Fiebert
- Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
11
Kunst N, Wilson ECF, Glynn D, Alarid-Escudero F, Baio G, Brennan A, Fairley M, Goldhaber-Fiebert JD, Jackson C, Jalal H, Menzies NA, Strong M, Thom H, Heath A. Computing the Expected Value of Sample Information Efficiently: Practical Guidance and Recommendations for Four Model-Based Methods. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2020; 23:734-742. [PMID: 32540231 PMCID: PMC8183576 DOI: 10.1016/j.jval.2020.02.010] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Revised: 12/19/2019] [Accepted: 02/11/2020] [Indexed: 05/09/2023]
Abstract
Value of information (VOI) analyses can help policy makers make informed decisions about whether to conduct and how to design future studies. Historically a computationally expensive method to compute the expected value of sample information (EVSI) restricted the use of VOI to simple decision models and study designs. Recently, 4 EVSI approximation methods have made such analyses more feasible and accessible. Members of the Collaborative Network for Value of Information (ConVOI) compared the inputs, the analyst's expertise and skills, and the software required for the 4 recently developed EVSI approximation methods. Our report provides practical guidance and recommendations to help inform the choice between the 4 efficient EVSI estimation methods. More specifically, this report provides: (1) a step-by-step guide to the methods' use, (2) the expertise and skills required to implement the methods, and (3) method recommendations based on the features of decision-analytic problems.
Affiliation(s)
- Natalia Kunst
- Department of Health Management and Health Economics, University of Oslo, Oslo, Norway; Yale University School of Medicine, New Haven, CT, USA; Department of Epidemiology and Biostatistics, Amsterdam UMC, Amsterdam, The Netherlands; LINK Medical Research, Oslo, Norway
- Edward C F Wilson
- Health Economics Group, Norwich Medical School, University of East Anglia, Norwich, England, UK
- Alan Brennan
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, England, UK
- Michael Fairley
- Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
- Jeremy D Goldhaber-Fiebert
- Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
- Chris Jackson
- MRC Biostatistics Unit, University of Cambridge, Cambridge, England, UK
- Hawre Jalal
- University of Pittsburgh, Pittsburgh, PA, USA
- Nicolas A Menzies
- Harvard TH Chan School of Public Health, Harvard University, Boston, MA, USA
- Mark Strong
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, England, UK
- Anna Heath
- University College London, London, England, UK; The Hospital for Sick Children, Toronto, ON, Canada; University of Toronto, Toronto, ON, Canada
12
Heath A, Kunst N, Jackson C, Strong M, Alarid-Escudero F, Goldhaber-Fiebert JD, Baio G, Menzies NA, Jalal H. Calculating the Expected Value of Sample Information in Practice: Considerations from 3 Case Studies. Med Decis Making 2020; 40:314-326. [PMID: 32297840 PMCID: PMC7968749 DOI: 10.1177/0272989x20912402] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Background. Investing efficiently in future research to improve policy decisions is an important goal. Expected value of sample information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when using a complex decision model or when optimizing over study sample sizes and designs. Recently, several more efficient EVSI approximation methods have been developed. However, these approximation methods have not been compared, and therefore their comparative performance across different examples has not been explored. Methods. We compared 4 EVSI methods using 3 previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared. Results. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems where we wish to compare multiple proposed sample sizes, when the proposed sample size is large, or when the health economic model is computationally expensive. Conclusions. As all the evaluated methods gave estimates similar to those given by traditional Monte Carlo, we suggest that EVSI can now be efficiently computed with confidence in realistic examples. No systematically superior EVSI computation method exists as the properties of the different methods depend on the underlying health economic model, data generation process, and user expertise.
Affiliation(s)
- Anna Heath
- The Hospital for Sick Children, Toronto, ON, Canada
- University of Toronto, Toronto, ON, Canada
- University College London, London, UK
- Natalia Kunst
- Department of Health Management and Health Economics, Institute of Health and Society, Faculty of Medicine, University of Oslo, Oslo, Norway
- Cancer Outcomes, Public Policy and Effectiveness Research (COPPER) Center, Yale University School of Medicine and Yale Cancer Center, New Haven, CT, USA
- Department of Epidemiology and Biostatistics, Amsterdam UMC, Amsterdam, the Netherlands
- LINK Medical Research, Oslo, Norway
- Mark Strong
- School of Health and Related Research, University of Sheffield, Sheffield, UK
- Jeremy D Goldhaber-Fiebert
- Stanford Health Policy, Centers for Health Policy and Primary Care and Outcomes Research, Stanford University, Stanford, CA, USA
- Hawre Jalal
- University of Pittsburgh, Pittsburgh, PA, USA
13
Rothery C, Strong M, Koffijberg HE, Basu A, Ghabri S, Knies S, Murray JF, Sanders Schmidler GD, Steuten L, Fenwick E. Value of Information Analytical Methods: Report 2 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force. VALUE IN HEALTH : THE JOURNAL OF THE INTERNATIONAL SOCIETY FOR PHARMACOECONOMICS AND OUTCOMES RESEARCH 2020; 23:277-286. [PMID: 32197720 PMCID: PMC7373630 DOI: 10.1016/j.jval.2020.01.004] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 01/16/2020] [Indexed: 05/19/2023]
Abstract
The allocation of healthcare resources among competing priorities requires an assessment of the expected costs and health effects of investing resources in the activities and of the opportunity cost of the expenditure. To date, much effort has been devoted to assessing the expected costs and health effects, but there remains an important need to also reflect the consequences of uncertainty in resource allocation decisions and the value of further research to reduce uncertainty. Decision making with uncertainty may turn out to be suboptimal, resulting in health loss. Consequently, there may be value in reducing uncertainty, through the collection of new evidence, to better inform resource decisions. This value can be quantified using value of information (VOI) analysis. This report from the ISPOR VOI Task Force describes methods for computing 4 VOI measures: the expected value of perfect information, expected value of partial perfect information (EVPPI), expected value of sample information (EVSI), and expected net benefit of sampling (ENBS). Several methods exist for computing EVPPI and EVSI, and this report provides guidance on selecting the most appropriate method based on the features of the decision problem. The report provides a number of recommendations for good practice when planning, undertaking, or reviewing VOI analyses. The software needed to compute VOI is discussed, and areas for future research are highlighted.
Affiliation(s)
- Claire Rothery
- Centre for Health Economics, University of York, York, England, UK
- Mark Strong
- School of Health and Related Research, University of Sheffield, Sheffield, England, UK
- Hendrik Erik Koffijberg
- Department of Health Technology & Services Research, Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Anirban Basu
- The Comparative Health Outcomes, Policy, and Economics Institute, School of Pharmacy, University of Washington, Seattle, WA, USA
- Salah Ghabri
- French National Authority for Health, Paris, France
- Saskia Knies
- National Health Care Institute (Zorginstituut Nederland), Diemen, The Netherlands
- Gillian D Sanders Schmidler
- Duke-Margolis Center for Health Policy, Duke Clinical Research Institute and Department of Population Health Sciences, Duke University, Durham, NC, USA
| | | | | |
14
Jones DA, Smith J, Mei XW, Hawkins MA, Maughan T, van den Heuvel F, Mee T, Kirkby K, Kirkby N, Gray A. A systematic review of health economic evaluations of proton beam therapy for adult cancer: Appraising methodology and quality. Clin Transl Radiat Oncol 2020; 20:19-26. [PMID: 31754652] [PMCID: PMC6854069] [DOI: 10.1016/j.ctro.2019.10.007]
Abstract
BACKGROUND AND PURPOSE With high treatment costs and limited capacity, decisions on which adult patients to treat with proton beam therapy (PBT) must be based on the relative value compared to the current standard of care. Cost-utility analyses (CUAs) are the gold-standard method for doing this. We aimed to appraise the methodology and quality of CUAs in this area. MATERIALS AND METHODS We performed a systematic review of the literature to identify CUA studies of PBT in adult disease using MEDLINE, EMBASE, EconLIT, NHS Economic Evaluation Database (NHS EED), Web of Science, and the Tufts Medical Center Cost-Effectiveness Analysis Registry from 1st January 2010 up to 6th June 2018. General characteristics, information relating to modelling approaches, and methodological quality were extracted and synthesized narratively. RESULTS Seven PBT CUA studies in adult disease were identified. Without randomised controlled trials to inform the comparative effectiveness of PBT, studies used either results from single-arm studies, or dose-response models derived from radiobiological and epidemiological studies of PBT. Costing methods varied widely. The assessment of model quality highlighted a lack of transparency in the identification of model parameters, and absence of external validation of model outcomes. Furthermore, appropriate assessment of uncertainty was often deficient. CONCLUSION In order to foster credibility, future CUA studies must be more systematic in their approach to evidence synthesis and expansive in their consideration of uncertainties in light of the lack of clinical evidence.
Affiliation(s)
- David A. Jones: Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, UK
- Joel Smith: Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, UK
- Xue W. Mei: Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, UK
- Tim Maughan: CRUK/MRC Oxford Institute for Radiation Oncology, Oxford, UK
- Frank van den Heuvel: CRUK/MRC Oxford Institute for Radiation Oncology, Oxford, UK; Department of Haematology/Oncology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Thomas Mee: Division of Cancer Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Karen Kirkby: Division of Cancer Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Norman Kirkby: Division of Cancer Sciences, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Alastair Gray: Health Economics Research Centre, Nuffield Department of Population Health, University of Oxford, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Oxford, UK
15
Cook JA, Julious SA, Sones W, Hampson LV, Hewitt C, Berlin JA, Ashby D, Emsley R, Fergusson DA, Walters SJ, Wilson EC, MacLennan G, Stallard N, Rothwell JC, Bland M, Brown L, Ramsay CR, Cook A, Armstrong D, Altman D, Vale LD. Practical help for specifying the target difference in sample size calculations for RCTs: the DELTA 2 five-stage study, including a workshop. Health Technol Assess 2019; 23:1-88. [PMID: 31661431] [PMCID: PMC6843113] [DOI: 10.3310/hta23600]
Abstract
BACKGROUND The randomised controlled trial is widely considered to be the gold standard study for comparing the effectiveness of health interventions. Central to its design is a calculation of the number of participants needed (the sample size) for the trial. The sample size is typically calculated by specifying the magnitude of the difference in the primary outcome between the intervention effects for the population of interest. This difference is called the 'target difference' and should be appropriate for the principal estimand of interest and determined by the primary aim of the study. The target difference between treatments should be considered realistic and/or important by one or more key stakeholder groups. OBJECTIVE The objective of the report is to provide practical help on the choice of target difference used in the sample size calculation for a randomised controlled trial for researchers and funder representatives. METHODS The Difference ELicitation in TriAls2 (DELTA2) recommendations and advice were developed through a five-stage process, which included two literature reviews of existing funder guidance and recent methodological literature; a Delphi process to engage with a wider group of stakeholders; a 2-day workshop; and finalising the core document. RESULTS Advice is provided for definitive trials (Phase III/IV studies). Methods for choosing the target difference are reviewed. To aid those new to the topic, and to encourage better practice, 10 recommendations are made regarding choosing the target difference and undertaking a sample size calculation. Recommended reporting items for trial proposal, protocols and results papers under the conventional approach are also provided. Case studies reflecting different trial designs and covering different conditions are provided. Alternative trial designs and methods for choosing the sample size are also briefly considered. CONCLUSIONS Choosing an appropriate sample size is crucial if a study is to inform clinical practice. The number of patients recruited into the trial needs to be sufficient to answer the objectives; however, the number should not be higher than necessary to avoid unnecessary burden on patients and wasting precious resources. The choice of the target difference is a key part of this process under the conventional approach to sample size calculations. This document provides advice and recommendations to improve practice and reporting regarding this aspect of trial design. Future work could extend the work to address other less common approaches to the sample size calculations, particularly in terms of appropriate reporting items. FUNDING Funded by the Medical Research Council (MRC) UK and the National Institute for Health Research as part of the MRC-National Institute for Health Research Methodology Research programme.
Affiliation(s)
- Jonathan A Cook: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Steven A Julious: Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- William Sones: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Lisa V Hampson: Statistical Methodology and Consulting, Novartis Pharma AG, Basel, Switzerland
- Catherine Hewitt: York Trials Unit, Department of Health Sciences, University of York, York, UK
- Deborah Ashby: Imperial Clinical Trials Unit, Imperial College London, London, UK
- Richard Emsley: Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Dean A Fergusson: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Stephen J Walters: Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Edward CF Wilson: Cambridge Centre for Health Services Research, Cambridge Clinical Trials Unit, University of Cambridge, Cambridge, UK; Health Economics Group, Norwich Medical School, University of East Anglia, Norwich, UK
- Graeme MacLennan: Centre for Healthcare Randomised Trials, University of Aberdeen, Aberdeen, UK
- Nigel Stallard: Warwick Medical School, Statistics and Epidemiology, University of Warwick, Coventry, UK
- Joanne C Rothwell: Medical Statistics Group, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Martin Bland: Department of Health Sciences, University of York, York, UK
- Louise Brown: MRC Clinical Trials Unit, Institute of Clinical Trials and Methodology, University College London, London, UK
- Craig R Ramsay: Health Services Research Unit, University of Aberdeen, Aberdeen, UK
- Andrew Cook: Wessex Institute, University of Southampton, Southampton, UK
- David Armstrong: School of Population Health and Environmental Sciences, King's College London, London, UK
- Douglas Altman: Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Luke D Vale: Health Economics Group, Institute of Health & Society, Newcastle University, Newcastle upon Tyne, UK
16
Jutkowitz E, Alarid-Escudero F, Kuntz KM, Jalal H. The Curve of Optimal Sample Size (COSS): A Graphical Representation of the Optimal Sample Size from a Value of Information Analysis. Pharmacoeconomics 2019; 37:871-877. [PMID: 30761461] [PMCID: PMC6556417] [DOI: 10.1007/s40273-019-00770-z]
Abstract
Value of information (VOI) analysis quantifies the opportunity cost associated with decision uncertainty, and thus informs the value of collecting further information to avoid this cost. VOI can inform study design, optimal sample size selection, and research prioritization. Recent methodological advances have reduced the computational burden of conducting VOI analysis and have made it easier to evaluate the expected value of sample information, the expected net benefit of sampling, and the optimal sample size of a study design (n*). The volume of VOI analyses being published is increasing, and there is now a need for VOI studies to conduct sensitivity analyses on VOI-specific parameters. In this practical application, we introduce the curve of optimal sample size (COSS), which is a graphical representation of n* over a range of willingness-to-pay thresholds and VOI parameters (example data and R code are provided). In a single figure, the COSS presents summary data for decision makers to determine the sample size that optimizes research funding given their operating characteristics. The COSS also presents variation in the optimal sample size given variability or uncertainty in VOI parameters. The COSS represents an efficient and additional approach for summarizing results from a VOI analysis.
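A hypothetical sketch of the calculation underlying a single point on such a curve (every figure below is invented, and the saturating form EVPI * n / (n + n0) is a common stand-in for EVSI(n), not the paper's model): the expected net benefit of sampling at each candidate sample size is the population EVSI minus trial costs, and n* is its maximiser. The COSS then traces n* as the willingness-to-pay threshold and other VOI parameters vary.

```python
import numpy as np

# Stylised ENBS calculation; all figures are hypothetical.
evpi_pp = 1_000.0          # per-person EVPI at the chosen WTP threshold
n0 = 150.0                 # assumed prior "effective sample size"
population = 50_000        # discounted population affected by the decision
fixed_cost = 500_000.0     # trial set-up cost
cost_per_patient = 2_000.0

n = np.arange(10, 5_000)                  # candidate sample sizes
evsi_pp = evpi_pp * n / (n + n0)          # per-person EVSI at each n (assumed form)
enbs = population * evsi_pp - (fixed_cost + cost_per_patient * n)
n_star = int(n[np.argmax(enbs)])          # optimal sample size for this threshold
```

Repeating this maximisation over a grid of willingness-to-pay thresholds, and plotting n* against the threshold, yields one curve of the COSS.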
Affiliation(s)
- Eric Jutkowitz: Department of Health Services, Policy and Practice, Brown University School of Public Health, Providence, RI, USA
- Fernando Alarid-Escudero: Drug Policy Program, Center for Research and Teaching in Economics (CIDE)-CONACyT, 20313, Aguascalientes, AGS, Mexico
- Karen M Kuntz: Division of Health Policy and Management, University of Minnesota School of Public Health, Minneapolis, MN, USA
- Hawre Jalal: Division of Health Policy and Management, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA, USA
17
Heath A, Manolopoulou I, Baio G. Estimating the Expected Value of Sample Information across Different Sample Sizes Using Moment Matching and Nonlinear Regression. Med Decis Making 2019; 39:346-358. [PMID: 31161867] [DOI: 10.1177/0272989x19837983]
Abstract
Background. The expected value of sample information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty about the parameters underlying a health economic model. This has potential as a tool for trial design; the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimizing over a large number of different designs. Methods. This article develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment-matching approach to EVSI estimation to optimize over different sample sizes for the underlying trial while retaining a similar computational cost to a single EVSI estimate. This extension calculates the posterior variance of the net monetary benefit across alternative sample sizes and then uses Bayesian nonlinear regression to estimate the EVSI across these sample sizes. Results. A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. This example also highlights how different trial designs can be compared using the EVSI. Conclusion. The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realize the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models. Limitations. Our method involves rerunning the health economic model, which can be more computationally expensive than some recent alternatives, especially in complex models.
Collapse
Affiliation(s)
- Anna Heath: The Hospital for Sick Children, Toronto, Canada; University of Toronto, Toronto, Canada
- Gianluca Baio: Department of Statistical Science, University College London, London, UK
18
Jackson C, Presanis A, Conti S, De Angelis D. Value of Information: Sensitivity Analysis and Research Design in Bayesian Evidence Synthesis. J Am Stat Assoc 2019; 114:1436-1449. [PMID: 32165869] [PMCID: PMC7034331] [DOI: 10.1080/01621459.2018.1562932]
Abstract
Suppose we have a Bayesian model that combines evidence from several different sources. We want to know which model parameters most affect the estimate or decision from the model, or which of the parameter uncertainties drive the decision uncertainty. Furthermore, we want to prioritize what further data should be collected. These questions can be addressed by Value of Information (VoI) analysis, in which we estimate expected reductions in loss from learning specific parameters or collecting data of a given design. We describe the theory and practice of VoI for Bayesian evidence synthesis, using and extending ideas from health economics, computer modeling and Bayesian design. The methods are general to a range of decision problems including point estimation and choices between discrete actions. We apply them to a model for estimating prevalence of HIV infection, combining indirect information from surveys, registers, and expert beliefs. This analysis shows which parameters contribute most of the uncertainty about each prevalence estimate, and the expected improvements in precision from specific amounts of additional data. These benefits can be traded with the costs of sampling to determine an optimal sample size. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
19
Heath A, Baio G. Calculating the Expected Value of Sample Information Using Efficient Nested Monte Carlo: A Tutorial. Value Health 2018; 21:1299-1304. [PMID: 30442277] [DOI: 10.1016/j.jval.2018.05.004]
Abstract
OBJECTIVE The expected value of sample information (EVSI) quantifies the economic benefit of reducing uncertainty in a health economic model by collecting additional information. This has the potential to improve the allocation of research budgets. Despite this, practical EVSI evaluations are limited partly due to the computational cost of estimating this value using the gold-standard nested simulation methods. Recently, however, Heath et al. developed an estimation procedure that reduces the number of simulations required for this gold-standard calculation. Up to this point, this new method has been presented in purely technical terms. STUDY DESIGN This study presents the practical application of this new method to aid its implementation. We use a worked example to illustrate the key steps of the EVSI estimation procedure before discussing its optimal implementation using a practical health economic model. METHODS The worked example is based on a three-parameter linear health economic model. The more realistic model evaluates the cost-effectiveness of a new chemotherapy treatment, which aims to reduce the number of side effects experienced by patients. We use a Markov model structure to evaluate the health economic profile of experiencing side effects. RESULTS This EVSI estimation method offers accurate estimation within a feasible computation time (seconds rather than days), even for more complex model structures. The EVSI estimation is more accurate if a greater number of nested samples are used, even for a fixed computational cost. CONCLUSIONS This new method reduces the computational cost of estimating the EVSI by nested simulation.
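In outline, nested EVSI estimation averages the value of the optimal decision over simulated future datasets. The toy sketch below is not the authors' worked example: it uses an invented conjugate normal model for the incremental net benefit, so the inner simulation of the gold-standard nested scheme collapses to a closed-form posterior mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy conjugate model: the incremental net benefit theta has prior
# N(mu0, s0^2); a trial of size n reports a sample mean with sd s/sqrt(n).
# All numbers are invented for illustration.
mu0, s0, s, n = 500.0, 2_000.0, 8_000.0, 100
n_outer = 50_000

current_value = max(mu0, 0.0)  # decide now: adopt iff expected INB > 0

# Outer loop: simulate future trial results from the prior predictive.
theta = rng.normal(mu0, s0, n_outer)
xbar = rng.normal(theta, s / np.sqrt(n))

# Conjugacy gives the posterior mean in closed form; in the general
# nested scheme this step is an inner Monte Carlo loop per dataset.
prec0, prec_d = 1.0 / s0**2, n / s**2
post_mean = (prec0 * mu0 + prec_d * xbar) / (prec0 + prec_d)

# EVSI: expected value of deciding after the trial minus deciding now.
evsi = np.maximum(post_mean, 0.0).mean() - current_value
```

In realistic health economic models the posterior mean has no closed form, which is exactly why the inner loop is expensive and efficient nested schemes such as the one in this tutorial matter.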
Affiliation(s)
- Anna Heath: Department of Statistical Science, University College London, London, UK
- Gianluca Baio: Department of Statistical Science, University College London, London, UK
20
Rabideau DJ, Pei PP, Walensky RP, Zheng A, Parker RA. Implementing Generalized Additive Models to Estimate the Expected Value of Sample Information in a Microsimulation Model: Results of Three Case Studies. Med Decis Making 2018; 38:189-199. [PMID: 29117791] [PMCID: PMC5771838] [DOI: 10.1177/0272989x17732973]
Abstract
BACKGROUND The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). METHODS For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. RESULTS For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. CONCLUSION Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
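A schematic of the regression idea for EVPPI (all numbers invented; a cubic polynomial stands in here for a proper GAM smoother such as those in R's mgcv or Python's pyGAM): regress the PSA net-benefit samples on the parameter of interest, then take expectations of the fitted conditional means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented PSA output: theta is the parameter of interest; the rest of
# the model contributes independent noise to the incremental net benefit.
n_sims = 20_000
theta = rng.normal(0.0, 1.0, n_sims)
inb = 1_000.0 * theta + rng.normal(0.0, 2_000.0, n_sims)

# Regression step: a cubic polynomial stands in for the GAM smoother.
coef = np.polyfit(theta, inb, deg=3)
fitted = np.polyval(coef, theta)  # estimate of E[INB | theta]

# EVPPI: expected value of knowing theta before deciding, minus the
# value of the best decision under current information.
evppi = np.maximum(fitted, 0.0).mean() - max(inb.mean(), 0.0)
```

The key saving is that the regression reuses the single set of PSA runs, avoiding the inner loop of the conventional nested Monte Carlo estimator.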
Affiliation(s)
- Pamela P. Pei: Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, MA, USA
- Rochelle P. Walensky: Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Division of Infectious Diseases, Massachusetts General Hospital, Boston, MA, USA; Division of Infectious Diseases, Brigham and Women’s Hospital, Boston, MA, USA
- Amy Zheng: Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, MA, USA
- Robert A. Parker: Biostatistics Center, Massachusetts General Hospital, Boston, MA, USA; Medical Practice Evaluation Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
21
Heath A, Manolopoulou I, Baio G. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching. Med Decis Making 2017; 38:163-173. [PMID: 29126364] [DOI: 10.1177/0272989x17738515]
Abstract
BACKGROUND The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. METHODS We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. RESULTS This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. CONCLUSION We have developed a new calculation method for the EVSI which is computationally efficient and accurate. LIMITATIONS This novel method relies on some additional simulation and so can be expensive in models with a large computational cost.
Affiliation(s)
- Anna Heath: Department of Statistical Science, University College London, London, UK
- Ioanna Manolopoulou: Department of Statistical Science, University College London, London, UK
- Gianluca Baio: Department of Statistical Science, University College London, London, UK
22
Soeteman DI, Menzies NA, Pandya A. Would a Large tPA Trial for Those 4.5 to 6.0 Hours from Stroke Onset Be Good Value for Information? Value Health 2017; 20:894-901. [PMID: 28712618] [DOI: 10.1016/j.jval.2017.03.004]
Abstract
OBJECTIVES To quantify the potential value of new research in patients treated with thrombolytic treatment (tissue-type plasminogen activator [tPA]) in the 4.5- to 6.0-hour time window after stroke onset and to determine the optimal size of a future trial using value of information analysis. METHODS Expected value of partial perfect and sample information (EVPPI and EVSI) analyses were conducted using a probabilistic Markov model. Data for modified Rankin Scale (mRS) distributions in patients 4.5 to 6.0 hours since stroke onset for tPA (n = 576) and placebo (n = 543) were obtained from pooled randomized controlled trials. EVSI was quantified with net monetary benefit (assuming a willingness to pay for health of $100,000/QALY). We calculated discounted population-level EVSI by multiplying per-person EVSI by the annual number of eligible patients with stroke in the United States and assuming a 10-year time frame of treatment use. Study costs were based on administrative costs and the costs of tPA. RESULTS The base-case lifetime cost-effectiveness analysis showed that tPA was dominated by placebo in this patient group. EVPPI for mRS distributions was $1003 per person. On the basis of EVSI, the optimal sample size of a new trial collecting data on tPA efficacy in these patients would be 5600 across study arms with expected population-level societal returns (EVSI minus study costs) of $68.7 million. CONCLUSIONS Expanding research attention to the 4.5- to 6.0-hour time window for tPA treatment of patients with acute ischemic stroke is justified because the expected returns are substantial. Even a relatively large trial in which more information on treatment efficacy on the basis of mRS scores is collected would represent good value for information.
Affiliation(s)
- Djøra I Soeteman: Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Nicolas A Menzies: Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Ankur Pandya: Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, MA, USA
23
Tuffaha HW, Gordon LG, Scuffham PA. Value of Information Analysis Informing Adoption and Research Decisions in a Portfolio of Health Care Interventions. MDM Policy Pract 2016; 1:2381468316642238. [PMID: 30288400] [PMCID: PMC6125050] [DOI: 10.1177/2381468316642238]
Abstract
Background: Value of information (VOI) analysis quantifies the value of additional research in reducing decision uncertainty. It addresses adoption and research decisions simultaneously by comparing the expected benefits and costs of research studies. Nevertheless, the application of this approach in practice remains limited. Objectives: To apply VOI analysis in health care interventions to guide adoption decisions, optimize trial design, and prioritize research. Methods: The analysis was from the perspective of Queensland Health, Australia. It included four interventions: clinically indicated catheter replacement, tissue adhesive for securing catheters, negative pressure wound therapy (NPWT) in caesarean sections, and nutritional support for preventing pressure ulcers. For each intervention, cost-effectiveness analysis was performed, decision uncertainty characterized, and VOI calculated using Monte Carlo simulations. The benefits and costs of additional research were considered together with the costs and consequences of acting now versus waiting for more information. All values are reported in 2014 Australian dollars (AU$). Results: All interventions were cost-effective, but with various levels of decision uncertainty. The current evidence is sufficient to support the adoption of clinically indicated catheter replacement. For the tissue adhesive, an additional study before adoption is worthwhile with a four-arm trial of 220 patients per arm. Additional research on NPWT before adoption is worthwhile with a two-arm trial of 200 patients per arm. Nutritional support should be adopted with a two-arm trial of 1200 patients per arm. Based on the expected net monetary benefits, the studies were ranked as follows: 1) NPWT (AU$1.2 million), 2) tissue adhesive (AU$0.3 million), and 3) nutritional support (AU$0.1 million). Conclusions: VOI analysis is a useful and practical approach to inform adoption and research decisions. Efforts should be focused on facilitating its integration into decision making frameworks.
Affiliation(s)
- Haitham W. Tuffaha: Centre for Applied Health Economics, School of Medicine, Griffith University, Meadowbrook, Queensland 4131, Australia