1.
Using Relative Survival to Estimate the Burden of Kidney Failure. Am J Kidney Dis 2024; 83:28-36.e1. PMID: 37678740; PMCID: PMC10841440; DOI: 10.1053/j.ajkd.2023.05.015. Received 11/07/2022; revised 05/17/2023; accepted 05/23/2023.
Abstract
RATIONALE & OBJECTIVE Estimates of mortality from kidney failure are misleading because the mortality from kidney failure is inseparable from the mortality attributed to comorbid conditions. We sought to develop an alternative method to reduce the bias in estimating mortality due to kidney failure using life table methods. STUDY DESIGN Longitudinal cohort study. SETTING & PARTICIPANTS Using data from the US Renal Data System and the Medicare 5% sample, we identified an incident cohort of patients, age 66+, who first had kidney failure in 2009 and a similar general population cohort without kidney failure. EXPOSURE Kidney failure. OUTCOME Death. ANALYTICAL APPROACH We created comorbidity, age, sex, race, and year-specific life tables to estimate relative survival of patients with incident kidney failure and to attain an estimate of excess kidney failure-related deaths. Estimates were compared with those based on standard life tables (not adjusted for comorbidity). RESULTS The analysis included 31,944 adults with kidney failure with a mean age of 77±7 years. The 5-year relative survival was 31% using standard life tables (adjusted for age, sex, race, and year) versus 36% using life tables also adjusted for comorbidities. Compared with other chronic diseases, patients with kidney failure have among the lowest relative survival. Patients with incident kidney failure ages 66-70 and 76-80 have a survival comparable to adults without kidney failure roughly 86-90 and 91-95 years old, respectively. LIMITATIONS Relative survival estimates can be improved by narrowing the specificity of the covariates collected (eg, disease severity and ethnicity). CONCLUSIONS Estimates of survival relative to a matched general population partition the mortality due to kidney failure from other causes of death. Results highlight the immense burden of kidney failure on mortality and the importance of disease prevention efforts among older adults. 
PLAIN-LANGUAGE SUMMARY Estimates of death due to kidney failure can be misleading because death information from kidney failure is intertwined with death due to aging and other chronic diseases. Life tables are an old method, commonly used by actuaries and demographers to describe the life expectancy of a population. We developed life tables specific to a patient's age, sex, year, race, and comorbidity. Survival is derived from the life tables as the percentage of patients who are still alive in a specified period. By comparing survival of patients with kidney failure to the survival of people from the general population, we estimate that patients with kidney failure have one-third the chance of survival in 5 years compared with people with similar demographics and comorbidity but without kidney failure. The importance of this measure is that it provides a quantifiable estimate of the immense mortality burden of kidney failure.
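The life-table mechanics behind relative survival can be sketched briefly. This is an illustrative reimplementation, not the study's code, and every number below is hypothetical: expected survival is accumulated from the annual mortality rates of a matched (age/sex/race/year/comorbidity) stratum, and relative survival is observed survival divided by that expectation.

```python
# Sketch of relative survival from a life table. Hypothetical numbers only.

def expected_survival(annual_mortality):
    """Cumulative expected survival implied by a stratum's annual mortality rates."""
    s, out = 1.0, []
    for q in annual_mortality:
        s *= 1.0 - q
        out.append(s)
    return out

def relative_survival(observed, annual_mortality):
    """Observed cumulative survival divided by life-table expected survival."""
    return [o / e for o, e in zip(observed, expected_survival(annual_mortality))]

# Hypothetical matched-population annual mortality rates for years 1..5,
# and hypothetical observed survival in an incident kidney-failure cohort.
matched_mortality = [0.06, 0.07, 0.08, 0.09, 0.10]
observed = [0.75, 0.58, 0.45, 0.34, 0.24]
rs_5y = relative_survival(observed, matched_mortality)[-1]
```

Excess deaths attributable to the disease can then be read off as the gap between expected and observed survivors in each interval.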
2.
Endovascular versus Surgical Lower Extremity Revascularization among Patients with Chronic Kidney Disease. Int J Nephrol 2023; 2023:5586060. PMID: 38144229; PMCID: PMC10748729; DOI: 10.1155/2023/5586060. Received 08/08/2023; revised 10/31/2023; accepted 11/28/2023.
Abstract
Introduction Patients with chronic kidney disease (CKD) have a high prevalence of peripheral artery disease. How best to manage lower extremity peripheral artery disease remains unclear in this patient population. We therefore sought to compare outcomes after endovascular versus surgical lower extremity revascularization among patients with CKD. Methods We used data from Optum's de-identified Clinformatics® Data Mart Database, a nationwide database of commercially insured persons in the United States, to study patients with CKD who underwent lower extremity endovascular or surgical revascularization. We used inverse probability of treatment weighting to balance covariates. We employed proportional hazards regression to study the primary outcome of major adverse limb events (MALE), defined as repeat revascularization or amputation. We also studied each of these events separately and death from any cause. Results In our cohort, 60,057 patients underwent endovascular revascularization and 9,338 patients underwent surgical revascularization. Endovascular revascularization compared with surgical revascularization was associated with a higher adjusted hazard of MALE (hazard ratio [HR] 1.52; 95% confidence interval [CI] 1.46-1.59). Endovascular revascularization was also associated with a higher adjusted hazard of repeat revascularization (HR 1.65; 95% CI 1.57-1.72) but a lower adjusted risk of amputation (HR 0.71; CI 0.73-0.89). Patients undergoing endovascular revascularization also had a lower adjusted hazard of death from any cause (HR 0.85; 95% CI 0.82-0.88). Conclusions In this analysis of patients with CKD undergoing lower extremity revascularization, an endovascular approach was associated with a higher rate of repeat revascularization but a lower risk of subsequent amputation and death compared with surgical revascularization. Multiple factors must be considered when counseling patients with CKD, who have a high burden of comorbid conditions.
Clinical trials should include more patients with kidney disease, who are often otherwise excluded from participation, to better understand the most effective treatment strategies for this vulnerable patient population.
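As a rough illustration of the weighting step described in the Methods (a generic IPTW sketch with fabricated propensity scores, not the study's model):

```python
# Inverse probability of treatment weights (ATE form), optionally stabilized.
# Toy data for illustration; real propensity scores come from a covariate model.

def iptw_weights(treated, propensity, stabilized=True):
    """1/ps for treated, 1/(1-ps) for controls; stabilized by marginal prevalence."""
    p_treat = sum(treated) / len(treated)
    out = []
    for t, ps in zip(treated, propensity):
        w = 1.0 / ps if t else 1.0 / (1.0 - ps)
        if stabilized:
            w *= p_treat if t else 1.0 - p_treat
        out.append(w)
    return out

treated = [1, 1, 0, 0, 1, 0]                 # hypothetical treatment indicator
propensity = [0.8, 0.6, 0.3, 0.2, 0.5, 0.4]  # hypothetical estimated scores
weights = iptw_weights(treated, propensity)
```

A proportional hazards model fit to the reweighted sample then targets the average treatment effect, assuming the propensity model captures confounding.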
3.
Association of Pretransplant Coronary Heart Disease Testing With Early Kidney Transplant Outcomes. JAMA Intern Med 2023; 183:134-141. PMID: 36595271; PMCID: PMC9857067; DOI: 10.1001/jamainternmed.2022.6069. Received 07/20/2022; accepted 11/07/2022.
Abstract
Importance Testing for coronary heart disease (CHD) in asymptomatic kidney transplant candidates before transplant is widespread and endorsed by various professional societies, but its association with perioperative outcomes is unclear. Objective To estimate the association of pretransplant CHD testing with rates of death and myocardial infarction (MI). Design, Setting, and Participants This retrospective cohort study included all adult, first-time kidney transplant recipients from January 2000 through December 2014 in the US Renal Data System with at least 1 year of Medicare enrollment before and after transplant. An instrumental variable (IV) analysis was used, with the program-level CHD testing rate in the year of the transplant as the IV. Analyses were stratified by study period, as the rate of CHD testing varied over time. A combination of US Renal Data System variables and Medicare claims was used to ascertain exposure, IV, covariates, and outcomes. Exposures Receipt of nonurgent invasive or noninvasive CHD testing during the 12 months preceding kidney transplant. Main Outcomes and Measures The primary outcome was a composite of death or acute MI within 30 days after kidney transplant. Results The cohort comprised 79 334 adult, first-time kidney transplant recipients (30 147 women [38%]; 25 387 [32%] Black and 48 394 [61%] White individuals; mean [SD] age of 56 [14] years during 2012 to 2014). Among the 4604 patients who received a transplant during 2012 to 2014, the primary outcome occurred in 244 (5.3%): death in 120 (2.6%) and acute MI in 134 (2.9%). During the most recent study period (2012-2014), the CHD testing rate was 56% in patients in the most test-intensive transplant programs (fifth IV quintile) and 24% in patients at the least test-intensive transplant programs (first IV quintile, P < .001); this pattern was similar across other study periods.
In the main IV analysis, compared with no testing, CHD testing was not associated with a change in the rate of primary outcome (rate difference, 1.9%; 95% CI, 0%-3.5%). The results were similar across study periods, except for 2000 to 2003, during which CHD testing was associated with a higher event rate (rate difference, 6.8%; 95% CI, 1.8%-12.0%). Conclusions and Relevance The results of this cohort study suggest that pretransplant CHD testing was not associated with a reduction in early posttransplant death or acute MI. The study findings potentially challenge the ubiquity of CHD testing before kidney transplant and should be confirmed in interventional studies.
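With a single instrument, the IV logic sketched above reduces to a Wald-type ratio: the instrument's association with the outcome divided by its association with the exposure. A toy illustration with fabricated data (not the study's estimator, which used quintiles of program-level testing rates):

```python
# Wald/2SLS estimator with one instrument and one endogenous regressor.
# All data below are fabricated for illustration.

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def iv_estimate(z, x, y):
    """IV effect of x on y using instrument z: cov(z, y) / cov(z, x)."""
    return cov(z, y) / cov(z, x)

z = [0.24, 0.30, 0.40, 0.48, 0.56, 0.60]  # instrument: program testing rate
x = [0, 0, 1, 0, 1, 1]                    # exposure: received CHD testing
y = [0, 0, 1, 0, 0, 1]                    # outcome: death or MI
effect = iv_estimate(z, x, y)
```

The instrument must move the exposure but affect the outcome only through it; in practice a first-stage strength check precedes any such estimate.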
4.
National Imaging Trends for Suspected Urinary Stone Disease in the Emergency Department. JAMA Intern Med 2022; 182:1323-1325. PMID: 36315134; PMCID: PMC9623481; DOI: 10.1001/jamainternmed.2022.4939. Received 04/29/2022; accepted 07/15/2022.
Abstract
This cohort study examines the use of an ultrasonography-first strategy for urinary stone disease.
5.
Testosterone concentrations and outcomes in hemodialysis patients of the EVOLVE trial. Nephrol Dial Transplant 2022; 38:1519-1527. PMID: 36175142; DOI: 10.1093/ndt/gfac278. Received 02/07/2022.
Abstract
BACKGROUND Hypogonadism is common in end-stage kidney disease and may contribute to morbidity and mortality. METHODS Using data from the randomized controlled EVOLVE trial of cinacalcet, we analyzed the associations of total testosterone, free testosterone, and sex hormone-binding globulin (SHBG) serum concentrations with mortality and major cardiovascular events in 1692 men and 1059 women receiving hemodialysis. We also describe the effect of cinacalcet treatment on serum concentrations of testosterone. RESULTS Among men, lower serum free testosterone (OR 0.18, 95% CI 0.04-0.82, p = 0.026) and higher SHBG (OR 1.05 per 10 nmol/L, 95% CI 1.01-1.10, p = 0.012), but not total testosterone, were associated with a higher risk of death or cardiovascular events. Only SHBG was associated with all-cause mortality (OR 1.07 per 10 nmol/L, 95% CI 1.02-1.12, p = 0.0073). Among women, neither total nor free testosterone, nor SHBG, was associated with outcomes. We found no statistically significant effect of cinacalcet treatment on SHBG or on free or total testosterone. CONCLUSIONS Lower free testosterone and higher SHBG in serum are associated with a higher risk of death or cardiovascular events in men undergoing chronic hemodialysis.
6.
Advice for Isolated Statisticians Collaborating in Academic Healthcare Center Settings. Stat (Int Stat Inst) 2022. DOI: 10.1002/sta4.492.
7.
Trends in Coronary Artery Disease Screening before Kidney Transplantation. Kidney360 2021; 3:516-523. PMID: 35582172; PMCID: PMC9034804; DOI: 10.34067/kid.0005282021. Received 08/11/2021; accepted 12/09/2021.
Abstract
Background Coronary artery disease (CAD) screening in asymptomatic kidney transplant candidates is widespread but not well supported by the contemporary cardiology literature. In this study, we describe temporal trends in CAD screening before kidney transplant in the United States. Methods Using the United States Renal Data System, we examined Medicare-insured adults who received a first kidney transplant from 2000 through 2015. We stratified the analysis on the basis of whether the patient's comorbidity burden met guideline definitions of high risk for CAD. We examined temporal trends in nonurgent CAD tests within the year before transplant and in the composite of death and nonfatal myocardial infarction in the 30 days after transplant. Results Of 94,832 kidney transplant recipients, 37,139 (39%) underwent at least one nonurgent CAD test in the year before transplant. From 2000 to 2015, transplant program waitlist volume increased while transplant volume stayed constant, and patients in later eras had a slightly higher comorbidity burden (older age, longer dialysis vintage, and a higher prevalence of diabetes mellitus and CAD). The likelihood of a CAD test in the year before transplant increased from 2000 through 2003 and remained relatively stable thereafter. When stratified by CAD risk status, test rates after 2008 decreased modestly in patients who were high risk but remained constant in patients who were low risk. Death or nonfatal myocardial infarction within 30 days after transplant decreased from 3% in 2000 to 2% in 2015. Nuclear perfusion scan was the most frequent testing modality throughout the examined time periods. Conclusions CAD testing rates before kidney transplantation remained constant from 2000 through 2015, despite widespread changes in cardiology guidelines and practice.
8.
Documentation of Reproductive Health Counseling Among Women With CKD: A Retrospective Chart Review. Am J Kidney Dis 2021; 79:765-767. PMID: 34571063; DOI: 10.1053/j.ajkd.2021.08.012. Received 02/10/2021; accepted 08/11/2021.
9.
Performance versus Risk Factor-Based Approaches to Coronary Artery Disease Screening in Waitlisted Kidney Transplant Candidates. Cardiorenal Med 2021; 11:140-150. PMID: 34034263; DOI: 10.1159/000516158. Received 02/15/2021; accepted 03/22/2021.
Abstract
INTRODUCTION Current screening algorithms for coronary artery disease (CAD) before kidney transplantation result in many tests but few interventions. OBJECTIVE The aim of this study was to evaluate the utility of the 6-minute walk test (6MWT), an office-based test of cardiorespiratory fitness, for risk stratification in this setting. METHODS We enrolled 360 patients near the top of the kidney transplant waitlist at our institution. All patients underwent CAD evaluation irrespective of 6MWT results. We examined the association between 6MWT and time to CAD-related events (defined as cardiac death, revascularization, nonfatal myocardial infarction, and removal from the waitlist for CAD), treating noncardiac death and waitlist removal for non-CAD reasons as competing events. RESULTS The 6MWT-based approach designated approximately 45% of patients as "low risk," whereas risk factor-based and symptom-based approaches designated 14% and 81% of patients as "low risk," respectively. The 6MWT-based approach was not significantly associated with CAD-related events within 1 year (subdistribution hazard ratio [sHR] 1.00 [0.90-1.11] per 50 m) but was significantly associated with competing events (sHR 0.70 [0.66-0.75] per 50 m). In a companion analysis removing waitlist status from consideration, the 6MWT result was associated with the development of CAD-related events (sHR 0.92 [0.84-1.00] per 50 m). CONCLUSIONS The 6MWT designates fewer patients as high risk and in need of further testing (compared to risk factor-based approaches), but its utility as a pure CAD risk stratification tool is modulated by the background waitlist removal rate. CAD screening before kidney transplant should be tailored according to a patient's actual chance of receiving a transplant.
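The competing-events analysis above rests on cumulative incidence rather than a naive Kaplan-Meier complement. A minimal sketch of the estimator with fabricated, uncensored, tie-free data (real analyses must also handle censoring and ties):

```python
# Cumulative incidence function (CIF) for the event of interest in the
# presence of a competing event. Assumes complete follow-up and distinct
# event times; toy data only.

def cumulative_incidence(times, causes, cause_of_interest, horizon):
    """times: event times; causes: 1 = event of interest, 2 = competing.
    Returns CIF(horizon)."""
    at_risk = len(times)
    surv = 1.0   # overall event-free survival just before each event time
    cif = 0.0
    for t, c in sorted(zip(times, causes)):
        if t > horizon:
            break
        if c == cause_of_interest:
            cif += surv * (1.0 / at_risk)
        surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return cif

times = [1, 2, 3, 4, 5, 6]
causes = [1, 2, 1, 1, 2, 1]   # CAD-related event vs competing event
cif_4y = cumulative_incidence(times, causes, 1, horizon=4)
```

With no censoring, the CIF at the horizon equals the simple proportion of cause-of-interest events by that time, which makes the toy output easy to check by hand.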
10.
Toward telemedicine-compatible physical functioning assessments in kidney transplant candidates. Clin Transplant 2020; 35:e14173. PMID: 33247983; PMCID: PMC7906942; DOI: 10.1111/ctr.14173. Received 08/04/2020; revised 11/15/2020; accepted 11/18/2020.
Abstract
Frailty is associated with adverse kidney transplant outcomes and can be assessed by subjective and objective metrics. There is increasing recognition of the value of metrics obtainable remotely. We compared the self‐reported SF‐36 physical functioning subscale score (SF‐36 PF) with in‐person physical performance tests (6‐min walk and sit‐to‐stand) in a prospective cohort of kidney transplant candidates. We assessed each metric's ability to predict time to the composite outcome of waitlist removal or death, censoring at transplant. We built time‐dependent receiver operating characteristic curves and calculated the area under the curve [AUC(t)] at 1 year, using bootstrapping for internal validation. In 199 patients followed for a median of 346 days, 41 reached the composite endpoint. Lower SF‐36 PF scores were associated with higher risk of waitlist removal/death, with every 10‐point decrease corresponding to a 16% increase in risk. All models showed an AUC(t) of 0.83–0.84 that did not contract substantially after internal validation. Among kidney transplant candidates, SF‐36 PF, obtainable remotely, can help to stratify the risk of waitlist removal or death, and may be used as a screening tool for poor physical functioning in ongoing candidate evaluation, particularly where travel, increasing patient volume, or other restrictions challenge in‐person assessment.
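The AUC(t) idea can be illustrated with a toy calculation. This sketch (fabricated scores and times) ignores censoring, which time-dependent ROC methods used in practice account for; it simply asks how often a patient who reaches the endpoint by the horizon has a worse (lower) SF-36 PF score than one who does not:

```python
# Discrimination at a fixed horizon: "cases" reach the endpoint by the
# horizon, "controls" do not; AUC is the chance a random case scores
# lower (worse) than a random control. Fabricated data, no censoring.

def auc_at_horizon(scores, event_times, horizon):
    cases = [s for s, t in zip(scores, event_times) if t <= horizon]
    controls = [s for s, t in zip(scores, event_times) if t > horizon]
    pairs = concordant = 0
    for c in cases:
        for k in controls:
            pairs += 1
            if c < k:          # lower score => higher risk => concordant
                concordant += 1
            elif c == k:
                concordant += 0.5
    return concordant / pairs

scores = [30, 45, 50, 70, 80, 90]              # SF-36 PF, hypothetical
event_times = [0.4, 2.0, 0.8, 1.5, 3.0, 2.5]   # years to removal/death
auc = auc_at_horizon(scores, event_times, horizon=1.0)
```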
11.
Factors Associated With Failure to Achieve the Intensive Blood Pressure Target in the Systolic Blood Pressure Intervention Trial (SPRINT). Hypertension 2020; 76:1725-1733. PMID: 33131314; DOI: 10.1161/hypertensionaha.120.16155.
Abstract
SPRINT (Systolic Blood Pressure Intervention Trial) found that randomization of nondiabetic participants at high cardiovascular risk to an intensive (systolic blood pressure [SBP] <120 mm Hg) versus standard (SBP <140 mm Hg) target resulted in a 25% risk reduction in the first cardiovascular composite event (ie, cardiovascular death or nonfatal myocardial infarction, stroke, or hospitalization for heart failure) and a 27% risk reduction in all-cause mortality. In this post hoc analysis, we sought to determine the factors associated with failure to achieve the SBP target in 4678 SPRINT participants randomized to the intensive treatment group. Using a generalized estimating equation model, we assessed variables associated with failure to achieve the intensive SBP target as a repeated outcome collected during serial follow-up visits, including the occurrence of serious adverse events. In the multivariable model adjusted for baseline demographic, clinical, and laboratory variables, older age, higher SBP, underlying chronic kidney disease, a higher number of antihypertensives, and moderate cognitive impairment at screening were associated with failure to achieve the intensive SBP target. Occurrence of a serious adverse event during the trial was associated with 20% higher odds of failure to achieve the SBP target. Participants of Hispanic ethnicity had 47% lower odds of failure to achieve the intensive SBP target relative to non-Hispanic Whites. Understanding barriers to achieving intensive SBP targets should allow clinicians to optimize management of hypertension in patients at high risk for cardiovascular disease.
12.
Physical Performance Testing in Kidney Transplant Candidates at the Top of the Waitlist. Am J Kidney Dis 2020; 76:815-825. PMID: 32512039; DOI: 10.1053/j.ajkd.2020.04.009. Received 10/24/2019; accepted 04/08/2020.
Abstract
RATIONALE & OBJECTIVE Frailty and poor physical function are associated with adverse kidney transplant outcomes, but how to incorporate this knowledge into clinical practice is uncertain. We studied the association between measured physical performance and clinical outcomes among patients on kidney transplant waitlists. STUDY DESIGN Prospective observational cohort study. SETTING & PARTICIPANTS We studied consecutive patients evaluated in our Transplant Readiness Assessment Clinic, a top-of-the-waitlist management program, from May 2017 through December 2018 (N=305). We incorporated physical performance testing, including the 6-minute walk test (6MWT) and the sit-to-stand (STS) test, into routine clinical assessments. EXPOSURES 6MWT and STS test results. OUTCOMES The primary outcome was time to adverse waitlist outcomes (removal from waitlist or death); secondary outcomes were time to transplantation and time to death. ANALYTICAL APPROACH We used linear regression to examine the relationship between clinical characteristics and physical performance test results. We used subdistribution hazards models to examine the association between physical performance test results and outcomes. RESULTS Median 6MWT and STS results were 393 (IQR, 305-455) m and 17 (IQR, 12-21) repetitions, respectively. Clinical characteristics and Estimated Post-Transplant Survival scores accounted for only 14% to 21% of the variance in 6MWT/STS results. Physical performance test results were associated with adverse waitlist outcomes (adjusted subdistribution hazard ratio [sHR] of 1.42 [95% CI, 1.30-1.56] per 50-m lower 6MWT test result and 1.53 [95% CI, 1.33-1.75] per 5-repetition lower STS test result) and with transplantation (adjusted sHR of 0.80 [95% CI, 0.72-0.88] per 50-m lower 6MWT test result and 0.80 [95% CI, 0.71-0.89] per 5-repetition lower STS test result). 
Addition of either STS or 6MWT to survival models containing clinical characteristics enhanced fit (likelihood ratio test P<0.001). LIMITATIONS Single-center observational study. Other measures of global health status (eg, Fried Frailty Index or Short Physical Performance Battery) were not examined. CONCLUSIONS Among waitlisted kidney transplant candidates with high kidney allocation scores, standardized and easily performed physical performance test results are associated with waitlist outcomes and contain information beyond what is currently routinely collected in clinical practice.
13.
Impact of Pretransplant Donor BK Viruria in Kidney Transplant Recipients. J Infect Dis 2020; 220:370-376. PMID: 30869132; DOI: 10.1093/infdis/jiz114. Received 02/08/2019; accepted 03/12/2019.
Abstract
BACKGROUND BK virus (BKV) is a significant cause of nephropathy in kidney transplantation. The goal of this study was to characterize the course and source of BKV in kidney transplant recipients. METHODS We prospectively collected pretransplant plasma and urine samples from living and deceased kidney donors and performed BKV polymerase chain reaction (PCR) and immunoglobulin G (IgG) testing on pretransplant and serially collected posttransplant samples in kidney transplant recipients. RESULTS Among deceased donors, 8.1% (17/208) had detectable BKV DNA in urine prior to organ procurement. BK viruria was observed in 15.4% (6/39) of living donors and 8.5% (4/47) of deceased donors of recipients at our institution (P = .50). BKV VP1 sequencing revealed identical virus in donor-recipient pairs, suggesting donor transmission of the virus. Recipients of BK viruric donors were more likely to develop BK viruria (66.6% vs 7.8%; P < .001) and viremia (66.6% vs 8.9%; P < .001), with a shorter time to onset (log-rank test, P < .001). Though donor BKV IgG titers were higher in recipients who developed BK viremia, pretransplant donor, recipient, and combined donor/recipient serology status was not associated with BK viremia (P = .31, P = .75, and P = .51, respectively). CONCLUSIONS Donor BK viruria is associated with early BK viruria and viremia in kidney transplant recipients. BKV PCR testing of donor urine may be useful in identifying recipients at risk for BKV complications.
14.
Trimethylamine N-Oxide and Cardiovascular Outcomes in Patients with ESKD Receiving Maintenance Hemodialysis. Clin J Am Soc Nephrol 2019; 14:261-267. PMID: 30665924; PMCID: PMC6390920; DOI: 10.2215/cjn.06190518. Received 05/17/2018; accepted 11/09/2018.
Abstract
BACKGROUND AND OBJECTIVES Trimethylamine N-oxide (TMAO), a compound derived from byproducts of intestinal bacteria, has been shown to accelerate atherosclerosis in rodents. To date, there are conflicting data regarding the association of serum TMAO with cardiovascular outcomes in patients with ESKD, a population exhibiting both high serum TMAO and excessive atherosclerosis. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS We measured baseline serum TMAO concentrations in a subset of participants (n=1243) from the Evaluation of Cinacalcet Hydrochloride Therapy to Lower Cardiovascular Events (EVOLVE) trial and conducted post hoc analyses evaluating the association between baseline serum TMAO and cardiovascular outcomes. RESULTS We observed a wide distribution of serum TMAO in our cohort, with approximately 80% of participants exhibiting TMAO concentrations ≥56 µM and a maximum TMAO concentration of 1103.1 µM. We found no association between TMAO and our primary outcome, a composite of cardiovascular mortality, myocardial infarction, peripheral vascular event, stroke, and hospitalization for unstable angina. Moreover, in unadjusted and adjusted analyses, we observed no relation between TMAO and all-cause mortality, the independent components of our composite outcome, or the original EVOLVE primary outcome. Although we did observe higher TMAO concentrations in white participants, further subgroup analyses did not confirm the previously identified interaction between TMAO and race observed in a prior study in patients receiving dialysis. CONCLUSIONS We found no evidence linking TMAO to adverse clinical outcomes in patients receiving maintenance hemodialysis with moderate to severe secondary hyperparathyroidism.
15.
Longitudinal changes in kidney function following heart transplantation: Stanford experience. Clin Transplant 2018; 32:e13414. PMID: 30240515; PMCID: PMC6265058; DOI: 10.1111/ctr.13414. Received 03/29/2018; revised 08/08/2018; accepted 09/13/2018.
Abstract
Many heart transplant recipients experience declining kidney function following transplantation. We aimed to quantify change in kidney function in heart transplant recipients stratified by pre-transplant kidney function. A total of 230 adult heart transplant recipients between May 1, 2008, and December 31, 2014, were evaluated for up to 5 years post-transplant (median 1 year). Using 19,398 total estimated glomerular filtration rate (eGFR) assessments, we evaluated trends in eGFR in recipients with normal/near-normal (eGFR ≥45 mL/min/1.73 m2) vs impaired (eGFR <45 mL/min/1.73 m2) kidney function and the likelihood of reaching an eGFR of 20 mL/min/1.73 m2 after heart transplant. Baseline characteristics were similar. Immediately following heart transplant, the impaired pre-transplant kidney function group showed a mean eGFR gain of 9.5 mL/min/1.73 m2 (n = 193) vs a mean decline of 4.9 mL/min/1.73 m2 (n = 37) in the normal/near-normal group. Subsequent rates of eGFR decline were 2.2 mL/min/1.73 m2/y vs 2.9 mL/min/1.73 m2/y, respectively. The probability of reaching an eGFR of 20 mL/min/1.73 m2 or less at 1, 5, and 10 years following heart transplant was 1%, 4%, and 30% in the impaired group, and <1%, <1%, and 10% in the normal/near-normal group. Estimates of expected recovery in kidney function and its decline over time will help inform decision making about kidney care after heart transplantation.
16.
Current estimates of the cure fraction: a feasibility study of statistical cure for breast and colorectal cancer. J Natl Cancer Inst Monogr 2015; 2014:244-54. PMID: 25417238; DOI: 10.1093/jncimonographs/lgu015.
Abstract
BACKGROUND The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. METHODS Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. RESULTS We show that flexible models often provide higher estimates of the cure fraction than parametric mixture models. Log-normal models generate lower estimates than Weibull parametric models. In general, 12 years of follow-up is enough to estimate the cure fraction for regional- and distant-stage colorectal cancer but not for breast cancer. Projections for 2008 colorectal cancer diagnoses show a 15% increase in the cure fraction since 1985. DISCUSSION Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve, which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data.
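For intuition, the parametric mixture model evaluated above writes overall survival as a cured fraction plus Weibull survival for the uncured; as follow-up lengthens, the curve flattens toward the cure fraction, which is why short follow-up forces extrapolation. A sketch with hypothetical parameters (not estimates from the paper):

```python
import math

def mixture_cure_survival(t, pi, shape, scale):
    """Mixture cure model: S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape)."""
    return pi + (1.0 - pi) * math.exp(-((t / scale) ** shape))

# Hypothetical fit: 40% cured, Weibull latency distribution for the rest.
pi, shape, scale = 0.40, 1.2, 3.0
s_1y = mixture_cure_survival(1.0, pi, shape, scale)
s_12y = mixture_cure_survival(12.0, pi, shape, scale)   # approaching pi
```

Because S(t) tends to pi as t grows, the cure fraction is identified only if the data reach the flat part of the curve, consistent with the follow-up-length findings above.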
17
Abstract
Adolescents and young adults (AYAs) face challenges in having their cancers recognized, diagnosed, treated, and monitored. Monitoring AYA cancer survival is of interest because of the previously documented lack of improvement in outcomes for these patients compared with younger and older patients. AYA patients aged 15-39 years, diagnosed with malignant cancers during 2000-2008, were selected from the SEER 17 registries data. Selected cancers were analyzed for incidence and five-year relative survival by histology, stage, and receptor subtypes. Hazard ratios for the risk of cancer death were estimated for younger and older age groups relative to the AYA group. AYA survival was worse for female breast cancer (regardless of estrogen receptor status), acute lymphoid leukemia (ALL), and acute myeloid leukemia (AML). AYA survival for AML was lowest for a subtype associated with a mutation of the nucleophosmin 1 gene (NPM1). AYA survival for breast cancer and leukemia remains poor compared with that of younger and older survivors. Research is needed to address disparities and improve survival in this age group.
18
A SAS macro for a clustered logrank test. Comput Methods Programs Biomed 2011; 104:266-70. [PMID: 21496938 PMCID: PMC3140566 DOI: 10.1016/j.cmpb.2011.02.001] [Received: 12/26/2009] [Revised: 06/30/2010] [Accepted: 02/03/2011]
Abstract
The clustered logrank test is a nonparametric method of significance testing for correlated survival data. Examples of its application include cluster randomized trials where groups of patients rather than individuals are randomized to either a treatment or a control intervention. We describe a SAS macro that implements the 2-sample clustered logrank test for data where the entire cluster is randomized to the same treatment group. We discuss the theory and applications behind this test as well as details of the SAS code.
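As background for the macro described above, the ordinary 2-sample logrank statistic (without the cluster correction) can be written in a few lines; the clustered version replaces the variance below with an empirical, cluster-based estimate. A minimal Python sketch with made-up data, not the macro's SAS implementation:

```python
def logrank_statistic(times1, events1, times2, events2):
    """Ordinary 2-sample logrank chi-square: (sum O - E)^2 / sum V.
    times*: follow-up times; events*: 1 = event, 0 = censored."""
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    o_minus_e = 0.0
    var = 0.0
    for tj in event_times:
        at_risk = [(t, e, g) for t, e, g in data if t >= tj]
        n = len(at_risk)
        n1 = sum(1 for _, _, g in at_risk if g == 0)
        d = sum(1 for t, e, _ in at_risk if t == tj and e == 1)
        d1 = sum(1 for t, e, g in at_risk if t == tj and e == 1 and g == 0)
        o_minus_e += d1 - d * n1 / n          # observed minus expected in group 1
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var  # ~ chi-square with 1 df under H0

# Hypothetical data: group 1 fails early, group 2 late
chi2 = logrank_statistic([1, 2, 3, 4], [1, 1, 1, 1],
                         [8, 9, 10, 11], [1, 1, 1, 1])
```

When whole clusters share a treatment assignment, this variance understates the true sampling variability, which is what motivates the clustered test in the article.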
19
A comparison of statistical approaches for physician-randomized trials with survival outcomes. Contemp Clin Trials 2011; 33:104-15. [PMID: 21924382 DOI: 10.1016/j.cct.2011.08.008] [Received: 04/01/2011] [Revised: 08/17/2011] [Accepted: 08/31/2011]
Abstract
This study compares methods for analyzing correlated survival data from physician-randomized trials of health care quality improvement interventions. Several proposed methods adjust for correlated survival data; however, the most suitable method is unknown. Using the characteristics of our study example, we performed three simulation studies to compare conditional, marginal, and non-parametric methods for analyzing clustered survival data. We simulated 1000 datasets using a shared frailty model with (1) fixed cluster size, (2) variable cluster size, and (3) non-lognormal random effects. Methods of analysis included the nonlinear mixed model (conditional), the marginal proportional hazards model with robust standard errors, the clustered logrank test, and the clustered permutation test (non-parametric). For each method considered we estimated the Type I error, power, mean squared error, and coverage probability of the treatment effect estimator. We observed an underestimated Type I error for the clustered logrank test. The marginal proportional hazards method performed well even when model assumptions were violated. Nonlinear mixed models were advantageous only when the distribution was correctly specified.
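A shared frailty simulation of the kind described can be sketched in a few lines of stdlib Python. Here (all parameters hypothetical) each cluster draws a gamma frailty with mean 1 that multiplies a baseline exponential hazard, and whole clusters are assigned to treatment, mimicking a physician-randomized trial:

```python
import random
from statistics import mean

random.seed(0)

def simulate_clustered_survival(n_clusters=50, cluster_size=10,
                                baseline_hazard=0.1, treatment_hr=0.5,
                                frailty_var=0.25):
    """Simulate times from a shared gamma-frailty model: within cluster i,
    hazard = frailty_i * baseline * HR^treat_i, with whole clusters
    randomized to treatment (as in a physician-randomized trial)."""
    rows = []
    for i in range(n_clusters):
        # gamma frailty with mean 1 and variance frailty_var, shared by the cluster
        frailty = random.gammavariate(1 / frailty_var, frailty_var)
        treat = i % 2  # alternate clusters between arms
        rate = frailty * baseline_hazard * (treatment_hr if treat else 1.0)
        rows += [(i, treat, random.expovariate(rate)) for _ in range(cluster_size)]
    return rows

data = simulate_clustered_survival()
mean_treated = mean(t for _, tr, t in data if tr == 1)
mean_control = mean(t for _, tr, t in data if tr == 0)
```

Variable cluster sizes or a non-lognormal (here, gamma) frailty distribution, as in the article's scenarios, are one-line changes to this generator.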
20
Healthcare information technology interventions to improve cardiovascular and diabetes medication adherence. Am J Manag Care 2010; 16:SP82-SP92. [PMID: 21314228]
Abstract
OBJECTIVE To determine the efficacy of healthcare information technology (HIT) interventions in improving medication adherence. STUDY DESIGN Systematic search of randomized controlled trials of HIT interventions to improve medication adherence in cardiovascular disease or diabetes. METHODS Interventions were classified as 1-way patient reminder systems, 2-way interactive systems, and systems to enhance patient-provider interaction. Studies were subclassified into those with and without real-time provider feedback. Cohen's d effect sizes (ES) were calculated to assess each intervention's magnitude of effectiveness. RESULTS We identified 7190 articles, only 13 of which met inclusion criteria. The majority of included studies (54%, 7 studies) showed a very small ES. The ES was small in 15%, large in 8%, and not amenable to calculation in the remainder. Reminder systems were consistently effective, showing the largest effect sizes in this review. Education/counseling HIT systems were less successful, as was the addition of real-time adherence feedback to healthcare providers. Interactive systems were rudimentary, were not integrated into electronic health records, and exhibited very small effect sizes. Studies aiming to improve patient-provider communication also had very small effect sizes. CONCLUSIONS There is a paucity of data about HIT's efficacy in improving adherence to medications for cardiovascular disease and diabetes, although simple patient reminder systems appear effective. Future studies should focus on more sophisticated interactive interventions that expand the functionality and capabilities of HIT and better engage patients in care.
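Cohen's d, used above to grade intervention effectiveness, is the difference in group means divided by the pooled standard deviation; by the usual convention d around 0.2 is small, 0.5 medium, and 0.8 large. A quick stdlib sketch with made-up adherence scores:

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: mean difference over the pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical adherence scores: intervention vs. control
d = cohens_d([2, 4, 6], [1, 3, 5])  # -> 0.5, a "medium" effect
```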
21
Guideline-conformity of initiation with oral hypoglycemic treatment for patients with newly therapy-dependent type 2 diabetes mellitus in Austria. Pharmacoepidemiol Drug Saf 2010; 20:57-65. [PMID: 21182153 DOI: 10.1002/pds.2059] [Received: 04/08/2010] [Revised: 08/20/2010] [Accepted: 08/30/2010]
Abstract
PURPOSE To determine guideline conformity of initiation of oral hypoglycemic (OH) treatment for type 2 diabetes in Austria and to study patient and prescriber correlates of the recommended initiation with metformin monotherapy. METHODS We used claims from 11 sickness funds that covered 7.5 million individuals, representing >90% of the Austrian population. First-time OH use was defined as a first filled prescription after one year without any OH drug or insulin. Among these incident users, we described the OH drug class used and identified correlates of initiation with metformin monotherapy. RESULTS From 1/2007 to 6/2008, we identified 42,882 incident users of an OH drug: 70.8% used metformin, 24.7% used a sulfonylurea, and 4.5% initiated treatment with another class. We estimated the incidence of OH-dependent type 2 diabetes at 3.8-4.4 per 1000 patient-years. We conducted multivariate analyses among the 39,077 patients with available prescriber information. Independent correlates of initiation with metformin were younger age, female gender, waived co-payment, more recent initiation, fewer hospital days, and more therapeutic classes received in the year prior to first OH therapy (all p < 0.001). Prescriber specialty and age (p < 0.001), but not gender, were also associated with metformin initiation. Approximately 20% of metformin initiators had a second OH drug added within 18 months. While we were unable to ascertain specific contraindications to metformin (renal insufficiency, hepatic failure), <10% of the general population would be expected to have these conditions. CONCLUSIONS Seventy percent of new initiators of OH treatment in Austria received metformin as recommended by international guidelines. Taking possible contraindications into account, at least 20% did not, which presents an opportunity for intervention.
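The incidence figure above is simply events divided by person-time at risk. A one-line arithmetic sketch (the person-years denominator below is a hypothetical round number chosen for illustration, not the study's actual denominator):

```python
def incidence_per_1000_py(new_cases, person_years):
    """Incidence rate per 1,000 patient-years of observation."""
    return 1000 * new_cases / person_years

# 42,882 incident users (from the abstract) over an assumed ~10.5 million patient-years
rate = incidence_per_1000_py(42882, 10_500_000)  # ~4.1 per 1,000 patient-years
```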
22
Seizure outcomes following the use of generic versus brand-name antiepileptic drugs: a systematic review and meta-analysis. Drugs 2010; 70:605-21. [PMID: 20329806 DOI: 10.2165/10898530-000000000-00000]
Abstract
The automatic substitution of bioequivalent generics for brand-name antiepileptic drugs (AEDs) has been linked by anecdotal reports to loss of seizure control. Our objective was to evaluate studies comparing brand-name and generic AEDs and to determine whether evidence exists of superiority of the brand-name version in maintaining seizure control. We searched for English-language human studies in MEDLINE, EMBASE, and International Pharmaceutical Abstracts (1984 to 2009), selecting randomized controlled trials (RCTs) and observational studies that compared seizure events or seizure-related outcomes between one brand-name AED and at least one alternative version produced by a distinct manufacturer. We identified 16 articles (9 RCTs, 1 prospective nonrandomized trial, 6 observational studies). We assessed characteristics of the studies and, for the RCTs, extracted counts of patients whose seizures were characterized as 'controlled' and 'uncontrolled'. Seven RCTs were included in the meta-analysis. The aggregate odds ratio (n = 204) was 1.1 (95% CI 0.9, 1.2), indicating no difference in the odds of uncontrolled seizure for patients on generic medications compared with patients on brand-name medications. In contrast, the observational studies identified trends in drug or health services utilization that the authors attributed to changes in seizure control. Although most RCTs were short-term evaluations, the available evidence does not suggest an association between loss of seizure control and generic substitution of at least three types of AEDs. The observational study data may be explained by factors such as undue concern from patients or physicians about the effectiveness of generic AEDs after a recent switch. In the absence of better data, physicians may want to consider more intensive monitoring of high-risk patients taking AEDs when any switch occurs.
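The aggregate odds ratio reported above comes from pooling study-level odds ratios. A standard fixed-effect, inverse-variance pool on the log-OR scale can be sketched as follows (the per-study ORs and standard errors here are made up for illustration, not the trials in the review):

```python
import math

def pooled_odds_ratio(odds_ratios, std_errors):
    """Fixed-effect inverse-variance meta-analysis on the log-OR scale.
    Returns the pooled OR and its 95% confidence interval."""
    log_ors = [math.log(or_) for or_ in odds_ratios]
    weights = [1 / se**2 for se in std_errors]  # inverse-variance weights
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    lo = math.exp(pooled_log - 1.96 * pooled_se)
    hi = math.exp(pooled_log + 1.96 * pooled_se)
    return math.exp(pooled_log), (lo, hi)

# Hypothetical per-study ORs for uncontrolled seizures (generic vs. brand)
or_hat, ci = pooled_odds_ratio([1.2, 0.9, 1.1, 1.0], [0.3, 0.25, 0.4, 0.2])
```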
23
Primary medication non-adherence: analysis of 195,930 electronic prescriptions. J Gen Intern Med 2010; 25:284-90. [PMID: 20131023 PMCID: PMC2842539 DOI: 10.1007/s11606-010-1253-9] [Received: 07/13/2009] [Revised: 12/23/2009] [Accepted: 01/04/2010]
Abstract
BACKGROUND Non-adherence to essential medications represents an important public health problem. Little is known about the frequency with which patients fail to fill prescriptions when new medications are started ("primary non-adherence") or predictors of failure to fill. OBJECTIVE To evaluate primary non-adherence in community-based practices and to identify predictors of non-adherence. PARTICIPANTS 75,589 patients treated by 1,217 prescribers in the first year of a community-based e-prescribing initiative. DESIGN We compiled all e-prescriptions written over a 12-month period and used filled claims to identify filled prescriptions. We calculated primary adherence and non-adherence rates for all e-prescriptions and for new medication starts and compared the rates across patient and medication characteristics. Using multivariable regression analyses, we examined which characteristics were associated with non-adherence. MAIN MEASURES Primary medication non-adherence. KEY RESULTS Of 195,930 e-prescriptions, 151,837 (78%) were filled. Of 82,245 e-prescriptions for new medications, 58,984 (72%) were filled. Primary adherence rates were higher for prescriptions written by primary care specialists, especially pediatricians (84%). Patients aged 18 and younger filled prescriptions at the highest rate (87%). In multivariate analyses, medication class was the strongest predictor of adherence, and non-adherence was common for newly prescribed medications treating chronic conditions such as hypertension (28.4%), hyperlipidemia (28.2%), and diabetes (31.4%). CONCLUSIONS Many e-prescriptions were not filled. Previous studies of medication non-adherence failed to capture these prescriptions. Efforts to increase primary adherence could dramatically improve the effectiveness of medication therapy. Interventions that target specific medication classes may be most effective.
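The fill rates above are simple proportions of prescriptions with a matching pharmacy claim; reproducing the headline numbers from the abstract:

```python
def fill_rate(filled, prescribed):
    """Primary adherence: share of e-prescriptions with a matching fill."""
    return filled / prescribed

overall = fill_rate(151_837, 195_930)   # ~0.78, as reported for all e-prescriptions
new_starts = fill_rate(58_984, 82_245)  # ~0.72 for new medication starts
```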
24

25
Abstract
SUMMARY Weekly bisphosphonates are the primary agents used to treat osteoporosis. Although these agents are generally well tolerated, serious gastrointestinal adverse events, including hospitalization for gastrointestinal bleed, may arise. We compared the gastrointestinal safety between weekly alendronate and weekly risedronate and found no important difference between new users of these agents. INTRODUCTION Weekly bisphosphonates are the primary agents prescribed for osteoporosis. We examined the comparative gastrointestinal safety between weekly bisphosphonates. METHODS We studied new users of weekly alendronate and weekly risedronate from June 2002 to August 2005 among enrollees in a state-wide pharmaceutical benefit program for seniors. Our primary outcome was hospitalization for upper gastrointestinal bleed. Secondary outcomes included outpatient diagnoses for upper gastrointestinal disease, symptoms, endoscopic procedures, use of gastroprotective agents, and switching between therapies. We used Cox proportional hazards models to compare outcomes between agents within 120 days of treatment initiation, adjusting for propensity score quintiles. We also examined composite safety outcomes and stratified results by age and prior gastrointestinal history. RESULTS A total of 10,420 new users were studied; the mean age was 79 years (SD, 6.9), and 95% were women. We observed 31 hospitalizations for upper gastrointestinal bleed (0.91 per 100 person-years) within 120 days of treatment initiation. Adjusting for covariates, there was no difference in hospitalization for upper gastrointestinal bleed among those treated with risedronate compared with alendronate (HR, 1.12; 95%CI, 0.55 to 2.28). Risedronate switching rates were lower; otherwise, no differences were observed for secondary or composite outcomes. CONCLUSIONS We found no important difference in gastrointestinal safety between weekly oral bisphosphonates.
26
Adherence to osteoporosis medications after patient and physician brief education: post hoc analysis of a randomized controlled trial. Am J Manag Care 2009; 15:417-424. [PMID: 19589009 PMCID: PMC2885859]
Abstract
OBJECTIVE To examine whether adherence to osteoporosis medications can be improved by educational interventions targeted at primary care physicians (PCPs) and patients. STUDY DESIGN Post hoc analysis of data collected as part of a prospective randomized controlled trial to improve initiation of osteoporosis management such as bone mineral density testing or osteoporosis drug initiation. METHODS The trial was conducted among patients at risk for osteoporosis enrolled in Horizon Blue Cross Blue Shield of New Jersey. For a 3-month period, randomly selected PCPs and their patients received education about osteoporosis diagnosis and treatment. The PCPs received face-to-face education by trained pharmacists, while patients received letters and automated telephone calls. The control group received no education. We assessed medication adherence during the 10 months following the start of the intervention using the medication possession ratio (MPR), the ratio of days of medication available to the total number of days observed. RESULTS These analyses included 1867 patients (972 randomized to the intervention group and 875 to the control group) and their 436 PCPs. During the 10 months following the intervention, the median MPRs were 74% (interquartile range [IQR], 19%-93%) for the intervention group and 73% (IQR, 0%-93%) for the control group (P = .18). The median times until medication discontinuation after the intervention were 85 days (IQR, 58-174 days) for the intervention group and 79 days (IQR, 31-158 days) for the control group. CONCLUSION The educational intervention did not significantly improve medication compliance or persistence with osteoporosis drugs.
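The medication possession ratio used above is the days of drug supplied divided by the days observed, capped at 1 when refills overlap. A minimal sketch (the fill quantities and window below are hypothetical):

```python
def medication_possession_ratio(days_supplied, observation_days):
    """MPR: total days of medication on hand / days in the study window,
    capped at 1.0 when refills overlap."""
    return min(sum(days_supplied), observation_days) / observation_days

# Hypothetical: two 90-day fills and one 45-day fill over a ~10-month (304-day) window
mpr = medication_possession_ratio([90, 90, 45], 304)  # -> ~0.74
```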
27
A SAS macro for a clustered permutation test. Comput Methods Programs Biomed 2009; 95:89-94. [PMID: 19321221 PMCID: PMC2674116 DOI: 10.1016/j.cmpb.2009.02.005] [Received: 08/30/2008] [Revised: 02/09/2009] [Accepted: 02/12/2009]
Abstract
The clustered permutation test is a nonparametric method of significance testing for correlated data. It is often used in cluster randomized trials where groups of patients rather than individuals are randomized to either a treatment or control intervention. We describe a flexible and efficient SAS macro that implements the 2-sample clustered permutation test. We discuss the theory and applications behind this test as well as details of the SAS code.
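The clustered permutation test referenced above can be sketched in pure Python: treatment labels are permuted across whole clusters (never across individual patients), and the observed difference in cluster-level summaries is compared with its permutation distribution. The data and statistic below are illustrative, not the macro's exact SAS implementation:

```python
import random
from statistics import mean

def clustered_permutation_test(cluster_rates, treat_labels, n_perm=2000, seed=1):
    """Two-sample permutation test on cluster-level summaries.
    cluster_rates: one summary value (e.g., event rate) per cluster.
    treat_labels: 1 = treatment cluster, 0 = control cluster."""
    rng = random.Random(seed)

    def stat(labels):
        t = [r for r, l in zip(cluster_rates, labels) if l == 1]
        c = [r for r, l in zip(cluster_rates, labels) if l == 0]
        return mean(t) - mean(c)

    observed = stat(treat_labels)
    extreme = 0
    for _ in range(n_perm):
        shuffled = treat_labels[:]
        rng.shuffle(shuffled)  # permute labels across clusters, not patients
        if abs(stat(shuffled)) >= abs(observed):
            extreme += 1
    return observed, (extreme + 1) / (n_perm + 1)  # two-sided Monte Carlo p-value

# Hypothetical cluster event rates: 5 treatment vs. 5 control clusters
obs, p = clustered_permutation_test(
    [0.10, 0.12, 0.08, 0.11, 0.09, 0.20, 0.22, 0.18, 0.21, 0.19],
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
```

Because randomization happened at the cluster level, permuting at that level is what keeps the test valid under within-cluster correlation.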
28
Clinical equivalence of generic and brand-name drugs used in cardiovascular disease: a systematic review and meta-analysis. JAMA 2008; 300:2514-26. [PMID: 19050195 PMCID: PMC2713758 DOI: 10.1001/jama.2008.758]
Abstract
CONTEXT Use of generic drugs, which are bioequivalent to brand-name drugs, can help contain prescription drug spending. However, there is concern among patients and physicians that brand-name drugs may be clinically superior to generic drugs. OBJECTIVES To summarize clinical evidence comparing generic and brand-name drugs used in cardiovascular disease and to assess the perspectives of editorialists on this issue. DATA SOURCES Systematic searches of peer-reviewed publications in MEDLINE, EMBASE, and International Pharmaceutical Abstracts from January 1984 to August 2008. STUDY SELECTION Studies compared generic and brand-name cardiovascular drugs using clinical efficacy and safety end points. We separately identified editorials addressing generic substitution. DATA EXTRACTION We extracted variables related to the study design, setting, participants, clinical end points, and funding. Methodological quality of the trials was assessed by Jadad and Newcastle-Ottawa scores, and a meta-analysis was performed to determine an aggregate effect size. For editorials, we categorized authors' positions on generic substitution as negative, positive, or neutral. RESULTS We identified 47 articles covering 9 subclasses of cardiovascular medications, of which 38 (81%) were randomized controlled trials (RCTs). Clinical equivalence was noted in 7 of 7 RCTs (100%) of beta-blockers, 10 of 11 RCTs (91%) of diuretics, 5 of 7 RCTs (71%) of calcium channel blockers, 3 of 3 RCTs (100%) of antiplatelet agents, 2 of 2 RCTs (100%) of statins, 1 of 1 RCT (100%) of angiotensin-converting enzyme inhibitors, and 1 of 1 RCT (100%) of alpha-blockers. Among narrow therapeutic index drugs, clinical equivalence was reported in 1 of 1 RCT (100%) of class 1 antiarrhythmic agents and 5 of 5 RCTs (100%) of warfarin. Aggregate effect size (n = 837) was -0.03 (95% confidence interval, -0.15 to 0.08), indicating no evidence of superiority of brand-name to generic drugs. Among 43 editorials, 23 (53%) expressed a negative view of generic drug substitution. CONCLUSIONS Whereas evidence does not support the notion that brand-name drugs used in cardiovascular disease are superior to generic drugs, a substantial number of editorials counsel against the interchangeability of generic drugs.
29
An evaluation of statistical approaches for analyzing physician-randomized quality improvement interventions. Contemp Clin Trials 2008; 29:687-95. [DOI: 10.1016/j.cct.2008.04.003] [Received: 11/16/2007] [Revised: 04/04/2008] [Accepted: 04/15/2008]
30
Abstract
BACKGROUND Little information is available on the comparative effectiveness of osteoporosis pharmacotherapies. OBJECTIVE To compare the relative effectiveness of osteoporosis treatments to reduce nonvertebral fracture risk among older adults. DESIGN Cohort study. SETTING Enrollees in 2 statewide pharmaceutical benefit programs for persons age 65 years or older. PATIENTS 43,135 new recipients of oral bisphosphonates, nasal calcitonin, and raloxifene who began treatment from 2000 to 2005. The mean age was 79 years (SD, 6.9), and 96% were women. MEASUREMENTS The primary outcome was nonvertebral fracture (hip, humerus, or radius or ulna) within 12 months of treatment initiation. Cox proportional hazards models stratified by state and adjusted for risk factors for fracture were used to compare fracture rates. Alendronate was the reference category in all analyses. RESULTS A total of 1051 nonvertebral fractures were observed within 12 months (2.62 fractures per 100 person-years). No large differences in fracture risk were found between risedronate (hazard ratio [HR], 1.01 [95% CI, 0.85 to 1.21]) or raloxifene (HR, 1.18 [CI, 0.96 to 1.46]) and alendronate. However, among those with a fracture history, raloxifene recipients experienced more nonvertebral fractures within 12 months (HR, 1.78 [CI, 1.20 to 2.63]) compared with alendronate recipients. Patients who received calcitonin experienced more nonvertebral fractures than those who received alendronate (HR, 1.40 [CI, 1.20 to 1.63]). Results were similar in sensitivity analyses that examined different lengths of follow-up (6 months and 24 months), were restricted to hip fracture as the outcome, and were completed in various subgroups. LIMITATION Confounder adjustment was limited to health care utilization data, and the confidence bounds of some comparisons were too wide to rule out potential clinically important differences between agents. CONCLUSION Differences in fracture risk between risedronate or raloxifene and alendronate were small. Nasal calcitonin recipients may have a higher risk for nonvertebral fractures compared with alendronate recipients. Future studies that can better adjust for possible confounding may further clarify these relationships.
31
Trends in drug prescribing for osteoporosis after hip fracture, 1995-2004. J Rheumatol 2008; 35:319-326. [PMID: 18061977 PMCID: PMC3256248]
Abstract
OBJECTIVE To examine trends in osteoporosis drug prescribing after hip fracture from 1995 to 2004. METHODS We conducted a population-based study of enrollees in the Pennsylvania Pharmaceutical Assistance Contract for the Elderly. Hip fractures were identified using Medicare hospital claims between January 1, 1995, and June 30, 2004. Osteoporosis treatment comprised oral bisphosphonates, calcitonin, hormone therapy, raloxifene, and/or teriparatide. Kaplan-Meier methods were used to estimate the probability of treatment within 6 months of fracture, censoring patients on their date of death or 6 months postfracture. RESULTS Treatment within 6 months after hip fracture improved from 7% in 1995 to 31% in 2002, and then remained stable through 2004. Similar patterns were observed among new users, with treatment increasing from 4% in 1995 to 17% in 2002, with no subsequent increase through 2004. Bisphosphonates led other treatments in the frequency of prescribing, except during 1997-1999, when calcitonin was the most common. Among women, hormone therapy prescribing decreased from 22% of those treated in 1995 to 4% in 2004, and raloxifene prescribing remained relatively constant (4%-10%) after its introduction (p for trend = 0.15). Of patients treated before and after hip fracture, 18% changed therapy postfracture. Significantly more patients changed therapy following fracture if a different physician prescribed treatment (26%) compared to those treated by the same physician pre- and postfracture (13%; p < 0.0001). CONCLUSION Prescribing practices changed substantially over the 10 years of study. The proportion of hip fracture patients treated with osteoporosis drugs has increased, but remains low, with fewer than one-third receiving pharmacotherapy.
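The Kaplan-Meier method used above estimates a probability curve while censoring patients at death or at 6 months. A bare-bones product-limit estimator in stdlib Python (times in days; the data below are hypothetical, with the "event" being treatment initiation):

```python
def kaplan_meier(times, events):
    """Product-limit estimator. times: follow-up; events: 1 = event
    (here, treatment initiated), 0 = censored (death or 6 months reached).
    Returns [(time, S(t))] at each event time."""
    order = sorted(zip(times, events))
    n_at_risk = len(order)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = order[i][0]
        # d: events at time t; c: all subjects (events + censored) leaving at t
        d = sum(1 for tt, ee in order[i:] if tt == t and ee == 1)
        c = sum(1 for tt, ee in order[i:] if tt == t)
        if d:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve

# Hypothetical: days from fracture to treatment start (0 = censored)
km = kaplan_meier([30, 45, 45, 60, 90, 120, 150, 180],
                  [1, 1, 0, 1, 0, 1, 0, 0])
# 1 - S(t) at 180 days is the estimated cumulative probability of treatment
```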
32
Abstract
To characterize long-term prescriptions for commonly prescribed anxiolytic benzodiazepines to veteran patients and to identify factors associated with high daily doses, we analyzed the linked pharmacy and administrative databases from the New England Veterans Healthcare System (VISN 1). We analyzed treatment episodes of 3 months or longer with the 4 most commonly prescribed agents: alprazolam, clonazepam, diazepam, and lorazepam. Descriptive statistics and univariate and multivariate analyses described the sample and tested associations of pharmacological and clinical variables for patients prescribed the top 5% of average daily doses ("high-dose" prescriptions). Among 16,630 full or partial treatment episodes for all 4 agents analyzed within a 42-month window, average daily doses were predominantly moderate, age-sensitive, and stable, and refill lag intervals were short. In adjusted analyses, patients on "high-dose" prescriptions for the 4 agents combined, compared with those on "middle quartile" doses, were younger and more likely to have posttraumatic stress disorder (odds ratio [OR], 2.6; 95% confidence interval [CI], 2.17-3.13), substance abuse (OR, 1.50; 95% CI, 1.25-1.80), and anxiety (OR, 1.33; 95% CI, 1.11-1.60), and were more likely to be receiving concurrent oxycodone/acetaminophen (OR, 2.05; 95% CI, 1.64-2.56), anxiolytic benzodiazepine (OR, 1.51; 95% CI, 1.12-2.03), antidepressant (OR, 2.15; 95% CI, 1.80-2.58), and neuroleptic (OR, 2.03; 95% CI, 1.69-2.44) prescriptions. These results indicate that veteran patients prescribed anxiolytic benzodiazepines typically receive modest, nonincreasing doses over long-term treatment episodes. However, those on the highest average daily doses, typically above guideline-recommended doses, are more likely to have clinical diagnoses and concurrent prescriptions for psychoactive medications indicative of more complex and, perhaps, problematic management.