1. Welton NJ, Ades AE. Estimation of Markov Chain Transition Probabilities and Rates from Fully and Partially Observed Data: Uncertainty Propagation, Evidence Synthesis, and Model Calibration. Med Decis Making 2005;25:633-45. PMID: 16282214. DOI: 10.1177/0272989x05282637.
Abstract
Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov’s forward equations can be exploited to map between transition rates and probabilities from probability data in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and exact time at risk in each state (fully observed data) and 2) observations are on initial state and final state after a fixed interval of time but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times and also how to calibrate a Markov model to be consistent with data from one specific study.
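The mapping between rates and probabilities that the abstract describes follows from the solution of Kolmogorov's forward equations, P(t) = exp(Qt). A minimal sketch of that mapping, assuming a hypothetical 3-state rate matrix (the values are ours, not the paper's data):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator (rate) matrix Q for a 3-state model
# (well -> ill -> dead, with direct well -> dead transitions).
# Off-diagonal entries are rates per year; rows sum to zero.
Q = np.array([
    [-0.15,  0.10, 0.05],
    [ 0.00, -0.20, 0.20],
    [ 0.00,  0.00, 0.00],   # dead is absorbing
])

# Kolmogorov's forward equations dP/dt = P(t) Q have solution P(t) = exp(Qt):
# the matrix of transition probabilities over an interval of length t.
P1 = expm(Q * 1.0)          # 1-year transition probability matrix
print(P1.round(4))

# Each row of P(t) is a probability distribution over destination states.
assert np.allclose(P1.sum(axis=1), 1.0)
```

Going the other direction (recovering Q from an observed P over a known interval, as with partially observed data) is the harder estimation problem the paper addresses with MCMC.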
Affiliation(s)
- Nicky J Welton
- MRC Health Services Research Collaboration, Bristol, United Kingdom.
2. Hazen GB, Huang M. Large-Sample Bayesian Posterior Distributions for Probabilistic Sensitivity Analysis. Med Decis Making 2006;26:512-34. PMID: 16997928. DOI: 10.1177/0272989x06290487.
Abstract
In probabilistic sensitivity analyses, analysts assign probability distributions to uncertain model parameters and use Monte Carlo simulation to estimate the sensitivity of model results to parameter uncertainty. The authors present Bayesian methods for constructing large-sample approximate posterior distributions for probabilities, rates, and relative effect parameters, for both controlled and uncontrolled studies, and discuss how to use these posterior distributions in a probabilistic sensitivity analysis. These results draw on and extend procedures from the literature on large-sample Bayesian posterior distributions and Bayesian random effects meta-analysis. They improve on standard approaches to probabilistic sensitivity analysis by allowing a proper accounting for heterogeneity across studies as well as dependence between control and treatment parameters, while still being simple enough to be carried out on a spreadsheet. The authors apply these methods to conduct a probabilistic sensitivity analysis for a recently published analysis of zidovudine prophylaxis following rapid HIV testing in labor to prevent vertical HIV transmission in pregnant women.
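The flavor of such posteriors can be sketched for a single probability parameter: an exact conjugate posterior next to a large-sample normal approximation on the log-odds scale. The study counts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data for illustration: 12 events observed among 150 patients.
events, n = 12, 150

# Exact conjugate posterior under a Beta(1, 1) prior; draws like these are
# the inputs to a Monte Carlo probabilistic sensitivity analysis.
p_exact = rng.beta(1 + events, 1 + n - events, size=100_000)

# Large-sample normal approximation on the log-odds scale, back-transformed,
# which stays inside (0, 1) and is simple enough for a spreadsheet.
logit_hat = np.log(events / (n - events))
se = np.sqrt(1 / events + 1 / (n - events))
p_approx = 1 / (1 + np.exp(-rng.normal(logit_hat, se, size=100_000)))

print(p_exact.mean().round(3), p_approx.mean().round(3))  # both near 12/150
```

The paper's contribution is extending this idea to rates, relative effects, and random-effects settings while preserving dependence between control and treatment parameters.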
Affiliation(s)
- Gordon B Hazen
- IEMS Department, Northwestern University, Evanston, IL 60208-3119, USA.
3. Ades AE, Lu G, Claxton K. Expected Value of Sample Information Calculations in Medical Decision Modeling. Med Decis Making 2004;24:207-27. PMID: 15090106. DOI: 10.1177/0272989x04263162.
Abstract
There has been increasing interest in using expected value of information (EVI) theory in medical decision making, both to identify the need for further research to reduce decision uncertainty and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determining optimal sample size and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available on measures of relative efficacy such as risk differences or odds ratios, but not on model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous-variable parameters in multiparameter decision models, and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random-effects meta-analysis, the authors describe two EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of 2 treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models.
Illustrative worked examples of EVSI calculations are given in an appendix.
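The core Monte Carlo idea can be sketched for the simplest conjugate case, where the inner expectation is analytic. All numbers below are assumptions for illustration, not the authors' worked example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of an EVSI calculation: theta is the incremental net benefit of
# treatment B over A, with a normal prior, and a proposed trial of n
# patients would report a normal sample mean (all inputs assumed).
mu0, sd0 = 500.0, 1000.0        # prior mean and sd of incremental net benefit
sd_ind, n = 4000.0, 100         # individual-level sd and proposed sample size
se = sd_ind / np.sqrt(n)        # standard error of the trial's sample mean

v_now = max(0.0, mu0)           # value of deciding now on current evidence

# Outer loop over simulated trial results (the preposterior distribution);
# the inner expectation is closed-form because the normal-normal posterior
# mean is a precision-weighted average.
theta = rng.normal(mu0, sd0, size=100_000)   # plausible "true" values
xbar = rng.normal(theta, se)                 # simulated trial outcomes
w = sd0**2 / (sd0**2 + se**2)                # posterior weight on the data
post_mean = w * xbar + (1 - w) * mu0
v_sample = np.maximum(0.0, post_mean).mean() # expected value after the trial

evsi = v_sample - v_now
print(round(evsi, 1))
```

When the posterior mean has no closed form, the inner expectation itself becomes a Monte Carlo loop, giving the nested scheme the abstract refers to.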
Affiliation(s)
- A E Ades
- Medical Research Council Health Services Research Collaboration, Bristol, United Kingdom.
4. van Rosmalen J, Toy M, O'Mahony JF. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation. Med Decis Making 2013;33:767-79. PMID: 23715464. DOI: 10.1177/0272989x13487947.
Abstract
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates, particularly for Markov models with long cycle periods and when no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model, requires little computation time, and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
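The central quantity, expected discounted time in each state, has a closed form via the matrix exponential. A sketch under assumed inputs (the rate matrix, discount rate, and horizon are ours, chosen for illustration):

```python
import numpy as np
from scipy.linalg import expm, solve

# Hypothetical 3-state rate matrix (per year), 3% continuous discount rate,
# and a 40-year horizon (all assumed for illustration).
Q = np.array([
    [-0.15,  0.10, 0.05],
    [ 0.00, -0.20, 0.20],
    [ 0.00,  0.00, 0.00],   # absorbing death state
])
r, T = 0.03, 40.0
A = Q - r * np.eye(3)       # discounting shifts the generator by -r

# integral_0^T exp(A t) dt = A^{-1} (exp(A T) - I); row i holds the
# expected discounted years spent in each state given a start in state i.
occupancy = solve(A, expm(A * T) - np.eye(3))
print(occupancy[0].round(3))   # starting healthy: years in each state
```

Multiplying these state occupancies by per-state costs and utilities gives discounted expected costs and QALYs with no cycle-length approximation at all.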
Affiliation(s)
- Joost van Rosmalen
- Department of Public Health, Erasmus MC, University Medical Center, Rotterdam, the Netherlands (JVR, MT, JFO); Department of Biostatistics, Erasmus MC, University Medical Center, Rotterdam, the Netherlands (JVR)
- Mehlika Toy
- Department of Public Health, Erasmus MC, University Medical Center, Rotterdam, the Netherlands (JVR, MT, JFO); Department of Global Health and Population, Harvard School of Public Health, Boston, Massachusetts (MT)
- James F O'Mahony
- Department of Public Health, Erasmus MC, University Medical Center, Rotterdam, the Netherlands (JVR, MT, JFO); Department of Health Policy and Management, Trinity College Dublin, Dublin, Ireland (JFO)
5. Soares MO, Canto E Castro L. Continuous time simulation and discretized models for cost-effectiveness analysis. Pharmacoeconomics 2012;30:1101-17. PMID: 23116289. DOI: 10.2165/11599380-000000000-00000.
Abstract
The design of decision-analytic models for cost-effectiveness analysis has been the subject of discussion. The current work addresses this issue by noting that, when time is to be explicitly modelled, we need to represent phenomena occurring in continuous time. Models evaluated in continuous time may not have closed-form solutions, and in this case, two approximations can be used: simulation models in continuous time and discretized models at the aggregate level. Stylized examples were set up where both approximations could be implemented. These aimed to illustrate determinants of the use of the two approximations: cycle length and precision, the use of continuity corrections in discretized models and the discretization of rates into probabilities. The examples were also used to explore the impact of the approximations not only in terms of absolute survival but also cost effectiveness and incremental comparisons. Discretized models better approximate continuous time results if lower cycle lengths are used. Continuous time simulation models are inherently stochastic, and the precision of the results is determined by the simulation sample size. The use of continuity corrections in discretized models allows the use of greater cycle lengths, producing no significant bias from the discretization. How the process is discretized (the conversion of rates into probabilities) is key. Results show that appropriate discretization coupled with the use of a continuity correction produces results unbiased for higher cycle lengths. Alternative methods of discretization are less efficient, i.e. lower cycle lengths are needed to obtain unbiased results. The developed work showed the importance of acknowledging bias in estimating cost effectiveness. When the alternative approximations can be applied, we argue that it is preferable to implement a cohort discretized model rather than a simulation model in continuous time. 
In practice, however, it may not be possible to represent the decision problem by any conventionally defined discretized model, in which case other model designs need to be applied, e.g. a simulation model.
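The discretization step the abstract emphasizes, converting a rate into a per-cycle probability, can be shown directly. The 0.30/year event rate below is an assumed example:

```python
import math

# An assumed constant event rate of 0.30 per year, discretized at several
# cycle lengths; the naive conversion treats the rate itself as a probability.
rate = 0.30

for dt in (1.0, 0.5, 0.25, 1 / 12):
    p_exact = 1 - math.exp(-rate * dt)  # correct rate-to-probability mapping
    p_naive = rate * dt                 # biased: a rate is not a probability
    print(f"cycle {dt:6.3f} yr: exact {p_exact:.4f}  naive {p_naive:.4f}")

# The naive conversion always overstates the per-cycle probability, and the
# overstatement shrinks as the cycle length approaches zero.
```

This is the sense in which how the process is discretized is key: with the exponential conversion (and a continuity correction), longer cycles remain usable without significant bias.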
6.
7.
Abstract
Cohort analysis is a widespread tool for computing expected costs and quality-adjusted life years (QALYs) in Markov models for medical cost-effectiveness analyses. Although not always explicitly identified, such models commonly have multiple simple factors, or components. In these, a health state consists of a multiple component vector, one component for each factor, and arbitrary combinations of components are possible. The authors show here that when the model does not assume any probabilistic dependence among these factors, then a standard cohort analysis may be decomposed into several independent cohort analyses, one for each factor, and the results may be combined to produce desired expected costs and QALYs. These single-factor cohort analyses are not only simpler but also computationally more efficient. The authors derive the appropriate formulas for this cohort decomposition in discrete time and give several examples of their use based on published cost-effectiveness analyses. Explicitly identifying the simple factors of which a model is composed allows these factors to be portrayed graphically. Graphical depiction of the simple factors that comprise a model reduces model complexity, makes model formulation easier and more transparent, and thereby facilitates peer inspection and critique.
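The decomposition result can be checked numerically with two independent binary factors (transition matrices invented for the example): a joint cohort analysis over the 4-state product space agrees with the Kronecker product of two single-factor cohort analyses.

```python
import numpy as np

# Two independent binary factors with invented per-cycle transition matrices:
# factor 1 tracks disease status, factor 2 tracks a treatment side effect.
A = np.array([[0.9, 0.1],
              [0.0, 1.0]])
B = np.array([[0.8, 0.2],
              [0.0, 1.0]])
a0 = np.array([1.0, 0.0])   # everyone starts disease-free...
b0 = np.array([1.0, 0.0])   # ...and side-effect-free
n = 10                      # cycles

# Standard cohort analysis on the 4-state product space (Kronecker products
# build the joint initial distribution and transition matrix)...
joint = np.kron(a0, b0) @ np.linalg.matrix_power(np.kron(A, B), n)

# ...decomposes into two independent single-factor cohort analyses.
separate = np.kron(a0 @ np.linalg.matrix_power(A, n),
                   b0 @ np.linalg.matrix_power(B, n))

print(np.allclose(joint, separate))  # True
```

The computational gain is the point: two 2-state analyses replace one 4-state analysis, and the advantage grows quickly with the number of factors.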
Affiliation(s)
- Gordon Hazen
- Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois
- Zhe Li
- Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois
8. Hazen GB, Schwartz A. Incorporating extrinsic goals into decision and cost-effectiveness analyses. Med Decis Making 2009;29:580-9. PMID: 19329774. DOI: 10.1177/0272989x09333121.
Abstract
It has not been widely recognized that medical patients as individuals may have goals that are not easily expressed in terms of quality-adjusted life years (QALYs). The QALY model deals with ongoing goals such as reducing pain or maintaining mobility, but goals such as completing an important project or seeing a child graduate from college occur at unique points in time and do not lend themselves to easy expression in terms of QALYs. Such extrinsic goals have been posited as explanations for preferences inconsistent with the QALY model, such as unwillingness to trade away time or accept gambles. In this article, the authors examine methods for including extrinsic goals in medical decision and cost-effectiveness analyses. As illustrations, they revisit 2 previously published analyses, the management of unruptured intracranial arteriovenous malformations (AVMs) and the evaluation of preventive strategies for BRCA+ women.
Affiliation(s)
- Gordon B Hazen
- Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois 60208, USA.
9.
10. Hazen GB, Huang M. Parametric Sensitivity Analysis Using Large-Sample Approximate Bayesian Posterior Distributions. Decision Analysis 2006. DOI: 10.1287/deca.1060.0078.
11. Bravata DM, McDonald KM, Szeto H, Smith WM, Rydzak C, Owens DK. A conceptual framework for evaluating information technologies and decision support systems for bioterrorism preparedness and response. Med Decis Making 2004;24:192-206. PMID: 15090105. DOI: 10.1177/0272989x04263254.
Abstract
OBJECTIVES: The authors sought to develop a conceptual framework for evaluating whether existing information technologies and decision support systems (IT/DSSs) would assist the key decisions faced by clinicians and public health officials preparing for and responding to bioterrorism.
METHODS: They reviewed reports of natural and bioterrorism-related infectious outbreaks, bioterrorism preparedness exercises, and advice from experts to identify the key decisions, tasks, and information needs of clinicians and public health officials during a bioterrorism response. The authors used task decomposition to identify the subtasks and data requirements of IT/DSSs designed to facilitate a bioterrorism response. They used the results of the task decomposition to develop evaluation criteria for IT/DSSs for bioterrorism preparedness. They then applied these evaluation criteria to 341 reports of 217 existing IT/DSSs that could be used to support a bioterrorism response.
MAIN RESULTS: In response to bioterrorism, clinicians must make decisions in 4 critical domains (diagnosis, management, prevention, and reporting to public health), and public health officials must make decisions in 4 other domains (interpretation of bioterrorism surveillance data, outbreak investigation, outbreak control, and communication). The time horizons and utility functions for these decisions differ. From the task decomposition, the authors identified critical subtasks for each of the 8 decisions. For example, interpretation of diagnostic tests is an important subtask of diagnostic decision making that requires an understanding of the tests' sensitivity and specificity. Therefore, an evaluation criterion applied to reports of diagnostic IT/DSSs for bioterrorism asked whether the reports described the systems' sensitivity and specificity. Of the 217 existing IT/DSSs that could be used to respond to bioterrorism, 79 studies evaluated 58 systems for at least 1 performance metric.
CONCLUSIONS: The authors identified 8 key decisions that clinicians and public health officials must make in response to bioterrorism. When applying the evaluation system to 217 currently available IT/DSSs that could potentially support the decisions of clinicians and public health officials, the authors found that the literature provides little information about the accuracy of these systems.
Affiliation(s)
- Dena M Bravata
- Center for Primary Care and Outcomes Research, Stanford University, Stanford, California 94305-6019, USA.
12. Hazen GB. Stochastic trees and the StoTree modeling environment: models and software for medical decision analysis. J Med Syst 2002;26:399-413. PMID: 12182205. DOI: 10.1023/a:1016401115823.
Abstract
In this paper we present a review of stochastic trees, a convenient modeling approach for medical treatment decision analyses. Stochastic trees are a generalization of decision trees that incorporate useful features from continuous-time Markov chains. We also discuss StoTree, a freely available software tool for the formulation and solution of stochastic trees, implemented in the Excel spreadsheet environment.
Affiliation(s)
- Gordon B Hazen
- IE/MS Department, Northwestern University, Evanston, Illinois 60208, USA
13. Patten SB, Lee RC. Modeling methods for facilitating decisions in pharmaceutical policy and population therapeutics. Pharmacoepidemiol Drug Saf 2002;11:165-8. PMID: 11998542. DOI: 10.1002/pds.706.
Affiliation(s)
- Scott B Patten
- Departments of Community Health Sciences and Psychiatry, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, Canada T2N 4N1.
14.
Abstract
Clinical decision models often rely upon survival models predicated on disease-specific hazard functions combined with baseline hazard functions obtained from standard life tables. Two biases may arise in such a modeling process. First, life expectancy estimates may be biased even if estimates of survival probabilities are unbiased (misestimation bias). In simulation studies, the authors discovered that the magnitude of misestimation bias is larger as life expectancy increases, sample size decreases, and censoring percentage increases. In the context of a simple decision analysis, they found that imbalances in the sample sizes for the data used to estimate the parameters among different strategies resulted in non-optimal decisions in the long run. The second bias stems from misspecification of the survival model itself (misspecification bias). Using a simple cost-effectiveness model, the authors found that life expectancies and incremental cost-effectiveness ratios differed depending on whether an excess-mortality or a proportional-hazards model was specified. In addition, a predictable pattern was observed for these two survival models when extrapolated to other age and gender groups.
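The first bias can be reproduced in a few lines: with a constant hazard, an unbiased estimate of a survival probability still yields an upward-biased plug-in life expectancy, because the mapping from probability to life expectancy is convex. All numbers below are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 0.1                      # true constant hazard: life expectancy 1/lam = 10
p = np.exp(-lam)               # true 1-year survival probability
n, reps = 200, 100_000         # patients per study, simulated studies

# Unbiased estimate of the survival probability from each simulated study.
p_hat = rng.binomial(n, p, size=reps) / n

# Plug-in life expectancy: S(1) = exp(-lam) implies LE = -1 / ln(S(1)).
le_hat = -1 / np.log(p_hat)
le_hat = le_hat[np.isfinite(le_hat)]   # guard against a rare p_hat == 1 draw

print(p_hat.mean())   # close to p: the survival estimate is unbiased
print(le_hat.mean())  # above 10: the life expectancy estimate is biased up
```

Consistent with the abstract, the bias grows as the sample size n shrinks (the variance of p_hat, and hence the Jensen gap, increases).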
Affiliation(s)
- K M Kuntz
- Department of Medicine, Brigham and Women's Hospital, Boston, MA 02115, USA