51. Han JH, Bryce SN, Ely EW, Kripalani S, Morandi A, Shintani A, Jackson JC, Storrow AB, Dittus RS, Schnelle J. The effect of cognitive impairment on the accuracy of the presenting complaint and discharge instruction comprehension in older emergency department patients. Ann Emerg Med 2011; 57:662-671.e2. [PMID: 21272958] [DOI: 10.1016/j.annemergmed.2010.12.002] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.6]
Abstract
STUDY OBJECTIVE We seek to determine how delirium and dementia affect the accuracy of the presenting illness and discharge instruction comprehension in older emergency department (ED) patients. METHODS This cross-sectional study was conducted at an academic ED from May 2008 to July 2008 and included non-nursing home patients aged 65 years and older. Two open-ended interviews were performed to assess patients' ability to accurately provide their presenting illness and comprehension of their ED discharge instructions. The surrogates' version of the presenting illness and printed discharge instructions were the reference standards. Concordance between the patient and the reference standards was determined by 2 reviewers using a 5-point scale ranging from 1 (no concordance) to 5 (complete concordance). Proportional odds logistic regression was performed to determine whether cognitive impairment was associated with presenting complaint accuracy and discharge instruction comprehension. All models were adjusted for age, health literacy, education, nonwhite race, and hearing impairment. RESULTS For the presenting illness analysis, 202 patients participated. Compared with patients without cognitive impairment, those with delirium superimposed on dementia (DSD) had lower odds of agreeing with their surrogates with regard to why they were in the ED (adjusted proportional odds ratio=0.20; 95% confidence interval [CI] 0.09 to 0.43). For the discharge instruction comprehension analysis, 115 patients participated. Patients with DSD had significantly lower odds of comprehending their discharge diagnosis (adjusted proportional odds ratio=0.13; 95% CI 0.04 to 0.47), return to the ED instructions (adjusted proportional odds ratio=0.18; 95% CI 0.04 to 0.82), and follow-up instructions (adjusted proportional odds ratio=0.09; 95% CI 0.02 to 0.35) compared with patients without cognitive impairment. CONCLUSION DSD is associated with decreased accuracy of the older patient's presenting illness and decreased comprehension of ED discharge instructions.
Research Support, U.S. Gov't, Non-P.H.S.
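The study above reports adjusted proportional odds ratios from a proportional odds (ordinal) logistic regression of the 5-point concordance score on cognitive status and covariates. A rough sketch of that kind of model on simulated data is shown below; the variable names and effect sizes are hypothetical, not the study's data or code.

```python
# Illustrative proportional odds (ordinal logistic) regression on simulated
# data.  Variable names and effect sizes are hypothetical, not the study's
# data or code; only the modeling approach mirrors the abstract.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 202
df = pd.DataFrame({
    "dsd": rng.integers(0, 2, n),                 # delirium superimposed on dementia
    "age": rng.normal(78, 7, n),
    "low_health_literacy": rng.integers(0, 2, n),
})
# Latent tendency toward concordance, lower when DSD is present.
latent = -1.5 * df["dsd"] + 0.02 * (80 - df["age"]) + rng.logistic(size=n)
# Map the latent score onto the 5-point concordance scale (1 = none ... 5 = complete).
score = np.digitize(latent, bins=[-2, -1, 0, 1]) + 1
df["concordance"] = pd.Categorical(score, ordered=True)

model = OrderedModel(df["concordance"],
                     df[["dsd", "age", "low_health_literacy"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)

# Adjusted proportional odds ratios are the exponentiated coefficients.
print(np.exp(res.params[["dsd", "age", "low_health_literacy"]]))
```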
52. Grogan EL, Morris JA, Dittus RS, Moore DE, Poulose BK, Diaz JJ, Speroff T. Cervical spine evaluation in urban trauma centers: lowering institutional costs and complications through helical CT scan. J Am Coll Surg 2005; 200:160-5. [PMID: 15664088] [DOI: 10.1016/j.jamcollsurg.2004.10.019] [Citation(s) in RCA: 64] [Impact Index Per Article: 3.2]
Abstract
BACKGROUND In the evaluation of the cervical spine (c-spine), helical CT scan has higher sensitivity and specificity than plain radiographs in the moderate- and high-risk trauma population, but is more costly. We hypothesize that institutional costs associated with missed injuries make helical CT scan the least costly approach. STUDY DESIGN A cost-minimization study was performed using decision analysis examining helical CT scan versus radiographic evaluation of the c-spine. Parameter estimates were obtained from the literature for probability of c-spine injury, probability of paralysis after missed injury, plain film sensitivity and specificity, CT scan sensitivity and specificity, and settlement cost of missed injuries resulting in paralysis. Institutional costs of CT scan and plain radiography were used. Sensitivity analyses tested robustness of strategy preference, accounted for parameter variability, and determined threshold values for individual parameters on strategy preference. RESULTS C-spine evaluation with helical CT scan has an expected cost of US $554 per patient compared with US $2,142 for plain films. CT scan is the least costly alternative if threshold values exceed US $58,180 for institutional settlement costs, 0.9% for probability of c-spine fracture, and 1.7% for probability of paralysis. Plain films are least costly if CT scan costs surpass US $1,918 or plain film sensitivity exceeds 90%. CONCLUSIONS Helical CT scan is the preferred initial screening test for detection of cervical spine fractures among moderate- to high-risk patients seen in urban trauma centers, reducing the incidence of paralysis resulting from false-negative imaging studies and institutional costs, when settlement costs are taken into account.
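The analysis above is a cost-minimization decision analysis: each strategy's expected cost per patient combines test cost, the probability of a missed injury, the probability that a missed injury causes paralysis, and the settlement cost, followed by threshold (sensitivity) analyses. A toy version of that arithmetic, with placeholder values rather than the published inputs, might look like:

```python
# Toy cost-minimization comparison of two c-spine screening strategies.
# Every number below is an illustrative placeholder, not the study's input.

def expected_cost(test_cost, sensitivity, p_injury, p_paralysis_if_missed, settlement):
    """Expected cost per patient: test cost plus the expected settlement cost
    of a missed injury (false negative) that goes on to cause paralysis."""
    p_missed = p_injury * (1.0 - sensitivity)
    return test_cost + p_missed * p_paralysis_if_missed * settlement

p_injury, p_paralysis, settlement = 0.02, 0.02, 2_000_000
ct = expected_cost(250, 0.98, p_injury, p_paralysis, settlement)
films = expected_cost(50, 0.60, p_injury, p_paralysis, settlement)
print(f"helical CT:  ${ct:,.0f} per patient")
print(f"plain films: ${films:,.0f} per patient")

# One-way threshold analysis: the settlement cost at which the two strategies
# break even, holding the other parameters fixed.
delta_test_cost = 250 - 50
delta_miss_rate = p_injury * ((1 - 0.60) - (1 - 0.98)) * p_paralysis
breakeven_settlement = delta_test_cost / delta_miss_rate
print(f"CT is the cheaper strategy once settlements exceed ~${breakeven_settlement:,.0f}")
```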
53. Han JH, Vasilevskis EE, Schnelle JF, Shintani A, Dittus RS, Wilson A, Ely EW. The diagnostic performance of the Richmond Agitation Sedation Scale for detecting delirium in older emergency department patients. Acad Emerg Med 2015; 22:878-82. [PMID: 26113020] [DOI: 10.1111/acem.12706] [Citation(s) in RCA: 63] [Impact Index Per Article: 6.3]
Abstract
OBJECTIVES Delirium is frequently missed in older emergency department (ED) patients. Brief (<2 minutes) delirium assessments have been validated for the ED, but some ED health care providers may consider them to be cumbersome. The Richmond Agitation Sedation Scale (RASS) is an observational scale that quantifies level of consciousness and takes less than 10 seconds to perform. The authors sought to explore the diagnostic accuracy of the RASS for delirium in older ED patients. METHODS This was a preplanned analysis of a prospective observational study designed to validate brief delirium assessments for the ED. The study was conducted at an academic ED and enrolled patients who were 65 years or older. Patients who were non-English-speaking, deaf, blind, comatose, or had end-stage dementia were excluded. A research assistant (RA) and a physician performed the RASS at the time of enrollment. Within 3 hours, a consultation-liaison psychiatrist performed his or her comprehensive reference standard assessment for delirium using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) criteria. Sensitivities, specificities, and likelihood ratios with their 95% confidence intervals (CIs) were calculated. RESULTS Of 406 enrolled patients, 50 (12.3%) had delirium diagnosed by the consultation-liaison psychiatrist reference rater. When performed by the RA, a RASS other than 0 (RASS > 0 or < 0) was 84.0% sensitive (95% CI = 73.8% to 94.2%) and 87.6% specific (95% CI = 84.2% to 91.1%) for delirium. When performed by the physician, a RASS other than 0 was 82.0% sensitive (95% CI = 71.4% to 92.6%) and 85.1% specific (95% CI = 81.4% to 88.8%) for delirium. Using a RASS > +1 or < -1 as the cutoff, the specificity improved to approximately 99% for both raters at the expense of sensitivity; the sensitivities were 22.0% (95% CI = 10.5% to 33.5%) and 16.0% (95% CI = 5.8% to 25.2%) for the RA and physician raters, respectively. The positive likelihood ratio was 19.6 (95% CI = 6.5 to 59.1) when performed by the RA and 57.0 (95% CI = 7.3 to 445.9) when performed by the physician, indicating that a RASS > +1 or < -1 strongly increased the likelihood of delirium. The weighted kappa was 0.63, indicating moderate interobserver reliability. CONCLUSIONS In older ED patients, a RASS other than 0 has very good sensitivity and specificity for delirium as diagnosed by a psychiatrist. A RASS > +1 or < -1 is nearly diagnostic for delirium, given the very high positive likelihood ratio.
Observational Study
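The accuracy figures above come from comparing the RASS against the psychiatrist reference standard in a 2x2 table. A small sketch of the sensitivity, specificity, and positive likelihood ratio calculations with 95% CIs follows; the counts are invented to roughly match the reported proportions, not the study's exact data.

```python
# Sensitivity, specificity, and positive likelihood ratio with 95% CIs from a
# 2x2 table.  Counts are illustrative (roughly matching the abstract's
# proportions: 50 delirious, 356 not), not the study's exact table.
import math

tp, fn = 42, 8       # delirious patients: RASS other than 0 vs RASS = 0
fp, tn = 44, 312     # non-delirious patients

def proportion_with_ci(k, n, z=1.96):
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)   # Wald interval
    return p, p - half, p + half

sens, sens_lo, sens_hi = proportion_with_ci(tp, tp + fn)
spec, spec_lo, spec_hi = proportion_with_ci(tn, tn + fp)

# Positive likelihood ratio with a log-scale 95% CI.
lr_pos = (tp / (tp + fn)) / (fp / (fp + tn))
se_log = math.sqrt(1 / tp - 1 / (tp + fn) + 1 / fp - 1 / (fp + tn))
lr_lo = math.exp(math.log(lr_pos) - 1.96 * se_log)
lr_hi = math.exp(math.log(lr_pos) + 1.96 * se_log)

print(f"sensitivity {sens:.1%} ({sens_lo:.1%} to {sens_hi:.1%})")
print(f"specificity {spec:.1%} ({spec_lo:.1%} to {spec_hi:.1%})")
print(f"LR+ {lr_pos:.1f} ({lr_lo:.1f} to {lr_hi:.1f})")
```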
54. Wright JG, Hawker GA, Bombardier C, Croxford R, Dittus RS, Freund DA, Coyte PC. Physician enthusiasm as an explanation for area variation in the utilization of knee replacement surgery. Med Care 1999; 37:946-56. [PMID: 10493472] [DOI: 10.1097/00005650-199909000-00010] [Citation(s) in RCA: 63] [Impact Index Per Article: 2.4]
Abstract
BACKGROUND Explanations for regional variation in the use of many medical and surgical treatments are controversial. OBJECTIVES To identify factors that might be amenable to intervention, we investigated the determinants of regional variation in the use of knee replacement surgery. RESEARCH DESIGN We examined the effect of the following factors: characteristics and opinions of surgeons, family physicians, and rheumatologists; patients' severity of disease before knee replacement; access to knee-replacement surgery; surgeons' use of other surgical treatments; and county population characteristics. OUTCOME MEASURE County utilization rates of knee replacement in Ontario, Canada. RESULTS Counties with higher rates of knee replacement had older patients (P = 0.0001), a higher percentage of medical school-affiliated hospital beds (P = 0.04), more male (P = 0.02) and non-North American-trained (P = 0.002) referring physicians, and orthopedic surgeons with higher propensities to operate and better perceptions of outcome (P = 0.0001). CONCLUSIONS After controlling for population characteristics and access to care (including the number of hospital beds and the density of orthopaedic and referring physicians), orthopaedic surgeons' opinions or enthusiasm for the procedure was the dominant modifiable determinant of area variation. Thus, research needs to focus on the opinions of surgeons, which may be important in reducing regional variation for knee replacement.
55. Coyte PC, Wright JG, Hawker GA, Bombardier C, Dittus RS, Paul JE, Freund DA, Ho E. Waiting times for knee-replacement surgery in the United States and Ontario. N Engl J Med 1994; 331:1068-71. [PMID: 8090168] [DOI: 10.1056/nejm199410203311607] [Citation(s) in RCA: 62] [Impact Index Per Article: 2.0]
Abstract
BACKGROUND Canada, which has universal single-payer health insurance, is often criticized for waiting times for surgery that are longer than those in the United States. We compared waiting times for orthopedic consultations and knee-replacement surgery and patients' acceptance of them in the United States and in the province of Ontario, Canada. METHODS A stratified random sample of 1486 Medicare recipients (629 from the U.S. national sample, 428 from Indiana, and 429 from western Pennsylvania) and 516 people from Ontario who had been hospitalized for knee replacement between 1985 and 1989 were surveyed by mail in 1992. Patients were asked how long they had waited to see an orthopedic surgeon and to have surgery, the acceptability of these waiting times, and their overall satisfaction with surgery. RESULTS About 80 percent of the questionnaires were returned, but not all the respondents answered all the questions. The rate of response to specific questions was about 60 to 65 percent in both countries. The median waiting time for an initial orthopedic consultation was two weeks in the United States and four weeks in Ontario. The median waiting time for knee replacement after the operation had been planned was three weeks in the United States and eight weeks in Canada. In the United States, 95 percent of patients in the national sample considered their waiting time for surgery acceptable, as compared with 85.1 percent in Ontario. Overall satisfaction with surgery ("very or somewhat satisfied") was 85.3 percent for all U.S. respondents and 83.5 percent for Canadian respondents. CONCLUSIONS Waiting times for initial orthopedic consultation and for knee-replacement surgery were longer in Ontario than in the United States, but overall satisfaction with surgery was similar.
Comparative Study
56. Callahan CM, Dittus RS, Tierney WM. Primary care physicians' medical decision making for late-life depression. J Gen Intern Med 1996; 11:218-25. [PMID: 8744879] [DOI: 10.1007/bf02642478] [Citation(s) in RCA: 61] [Impact Index Per Article: 2.1]
Abstract
OBJECTIVE To describe primary care physicians' clinical decision making regarding late-life depression. DESIGN Longitudinal collection of data regarding physicians' clinical assessments and the volume and content of patients' ambulatory visits as part of a randomized clinical trial of a physician-targeted intervention to improve the treatment of late-life depression. SETTING Academic primary care group practice. PATIENTS/PARTICIPANTS One hundred eleven primary care physicians who completed a structured questionnaire to describe their clinical assessments immediately following their evaluations of 222 elderly patients who had reported symptoms of depression on screening questionnaires. INTERVENTIONS Intervention physicians were provided with their patient's score on the Hamilton Depression Rating Scale (HAM-D) and patient-specific treatment recommendations prior to completing the questionnaire regarding their clinical assessment. MAIN RESULTS Those physicians not provided HAM-D scores were just as likely to rate their patients as depressed, as determined by specific query of these physicians regarding their clinical assessments. A physician's clinical rating of likely depression did not consistently result in the formulation of treatment intentions or actions. Treatment intentions and actions were facilitated by provision of treatment algorithms, but treatment was received by fewer than half of the patients whom physicians intended to treat. Barriers to treatment appear to include both physician and patient doubts about treatment benefits. CONCLUSIONS Lack of recognition of depressive symptoms did not appear to be the primary barrier to treatment. Recognition of symptoms and access to treatment algorithms did not consistently result in progression to subsequent stages in treatment decision making. More research is needed to determine how patients and physicians weigh the potential risks and benefits of treatment and how accurately they make these judgments.
Clinical Trial
57. Poehling KA, Speroff T, Dittus RS, Griffin MR, Hickson GB, Edwards KM. Predictors of influenza virus vaccination status in hospitalized children. Pediatrics 2001; 108:E99. [PMID: 11731626] [DOI: 10.1542/peds.108.6.e99] [Citation(s) in RCA: 60] [Impact Index Per Article: 2.5]
Abstract
OBJECTIVE To determine predictors of influenza virus vaccination status in children who are hospitalized during the influenza season. METHODS A cross-sectional study was conducted among children who were hospitalized with fever between 6 months and 3 years of age or with respiratory symptoms between 6 months and 18 years of age. The 1999 to 2000 influenza vaccination status of hospitalized children and potential factors that influence decisions to vaccinate were obtained from a questionnaire administered to parents/guardians. RESULTS Influenza vaccination rates for hospitalized children with and without high-risk medical conditions were 31% and 14%, respectively. For both groups of children, the vaccination status was strongly influenced by recommendations from physicians. More than 70% of children were vaccinated if a physician had recommended the influenza vaccine, whereas only 3% were vaccinated if a physician had not. Lack of awareness that children can receive the influenza vaccine was a commonly cited reason for nonvaccination. CONCLUSIONS A minority of hospitalized children with high-risk conditions had received the influenza vaccine. However, parents' recalling that a clinician had recommended the vaccine had a positive impact on the vaccination status of children.
58. Poehling KA, Griffin MR, Dittus RS, Tang YW, Holland K, Li H, Edwards KM. Bedside diagnosis of influenzavirus infections in hospitalized children. Pediatrics 2002; 110:83-8. [PMID: 12093950] [DOI: 10.1542/peds.110.1.83] [Citation(s) in RCA: 57] [Impact Index Per Article: 2.5]
Abstract
OBJECTIVE To prevent nosocomial influenza infections and to facilitate prompt antiviral therapy, an accessible, rapid diagnostic method for influenzavirus is needed. We evaluated the performance of a lateral-flow immunoassay (QuickVue Influenza Test) completed at the bedside of hospitalized children during the influenza season. METHODS All children who were evaluated at a large teaching hospital during the 1999 to 2000 influenza season were eligible if they were 1) younger than 19 years and hospitalized with respiratory symptoms or 2) younger than 3 years and hospitalized with fever. Each study child had 2 nasal swabs obtained: 1 for influenzavirus culture and polymerase chain reaction (PCR) and the other for the QuickVue Influenza Test. The performance of the rapid diagnostic test was compared with the results of culture or PCR for influenza A or B. RESULTS Of 303 eligible children, 233 (77%) were enrolled. In this population, 19 children had culture- and/or PCR-confirmed influenza A infection, a prevalence of 8%. The QuickVue Influenza Test had a sensitivity of 74%, specificity of 98%, positive predictive value of 74%, and negative predictive value of 98%. CONCLUSIONS Among children hospitalized with fever or respiratory symptoms during the influenza season, negative bedside QuickVue Influenza Tests indicated a very low likelihood of influenza infection, whereas positive tests greatly increased the probability of influenza-associated illness.
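The predictive values above follow from sensitivity, specificity, and the 8% prevalence by Bayes' rule; the short calculation below reproduces them approximately from the rounded figures quoted in the abstract.

```python
# Positive and negative predictive value from sensitivity, specificity, and
# prevalence (Bayes' rule), using the rounded figures quoted in the abstract.
sens, spec, prev = 0.74, 0.98, 0.08

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# With these rounded inputs PPV comes out near 76% and NPV near 98%; the paper
# reports 74% and 98% from the raw counts, the small gap being rounding.
print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")
```

Because predictive values depend on prevalence, the same test would show a much lower PPV in a lower-prevalence setting even with unchanged sensitivity and specificity.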
59. Klein RW, Dittus RS, Roberts SD, Wilson JR. Simulation modeling and health-care decision making. Med Decis Making 1993; 13:347-54. [PMID: 8246707] [DOI: 10.1177/0272989x9301300411] [Citation(s) in RCA: 55] [Impact Index Per Article: 1.7]
Bibliography
60. Kallianpur AR, Hall LD, Yadav M, Christman BW, Dittus RS, Haines JL, Parl FF, Summar ML. Increased prevalence of the HFE C282Y hemochromatosis allele in women with breast cancer. Cancer Epidemiol Biomarkers Prev 2004; 13:205-12. [PMID: 14973098] [DOI: 10.1158/1055-9965.epi-03-0188] [Citation(s) in RCA: 54] [Impact Index Per Article: 2.6]
Abstract
Individuals with the major hemochromatosis (HFE) allele C282Y and iron overload develop hepatocellular and some extrahepatic malignancies at increased rates. No association has been previously reported between the C282Y allele and breast cancer. We hypothesized that due to the pro-oxidant properties of iron, altered iron metabolism in C282Y carriers may promote breast carcinogenesis. Because 1 in 10 Caucasians of Northern European ancestry carries this allele, any impact it may have on breast cancer burden is potentially great. We determined C282Y genotypes in 168 patients who underwent high-dose chemotherapy and blood cell transplantation for cancer: 41 with breast cancer and 127 with predominantly hematological cancers (transplant cohort). Demographic, clinical, and tumor characteristics were reviewed in breast cancer patients. The frequency of C282Y genotypes in breast cancers was compared with the frequency in nonbreast cancers, an outpatient sample from Tennessee (n = 169), and a published United States national sample. The frequency of at least one C282Y allele in breast cancers was higher (36.6%, 5 homozygotes/10 heterozygotes) than frequencies in Tennessee (12.7%, P < 0.001), the general population (12.4%, P < 0.001), and similarly selected nonbreast cancers (17.0%, P = 0.008). The likelihood of breast cancer in the transplant cohort increased with C282Y allele dose (P(trend) = 0.010). These results were supported by the finding in a nontransplant cohort of a higher frequency of C282Y mutations in Caucasian (18.4%, P = 0.039) and African-American (8.5%, P = 0.005) women with breast cancer than race-specific national frequency estimates. A high prevalence of C282Y alleles in women with breast cancer with and without poor risk features suggests that altered iron metabolism in C282Y carriers may promote the development of breast cancer and/or more aggressive forms of the disease.
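The dose-response finding above (the likelihood of breast cancer increasing with C282Y allele dose, P(trend) = 0.010) is the kind of result a trend test across ordered genotype groups produces; the abstract does not name the exact test, so the Cochran-Armitage sketch below is only one standard way to do it, on counts invented to match the abstract's percentages rather than the published table.

```python
# Cochran-Armitage style test for trend in breast cancer proportion across
# C282Y allele dose (0, 1, or 2 copies).  Counts are illustrative, chosen only
# to be consistent with the abstract's percentages, not the published table.
import numpy as np
from scipy.stats import norm

doses = np.array([0, 1, 2])          # copies of the C282Y allele
cases = np.array([26, 10, 5])        # breast cancer patients
totals = np.array([131, 27, 10])     # all transplant-cohort patients per group

R, N = cases.sum(), totals.sum()
pbar = R / N
T = np.sum(doses * (cases - totals * pbar))
var_T = pbar * (1 - pbar) * (np.sum(totals * doses ** 2) - np.sum(totals * doses) ** 2 / N)
z = T / np.sqrt(var_T)
print(f"z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.3f}")
```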
61. Maiga AW, Deppen S, Scaffidi BK, Baddley J, Aldrich MC, Dittus RS, Grogan EL. Mapping Histoplasma capsulatum exposure, United States. Emerg Infect Dis 2019; 24:1835-1839. [PMID: 30226187] [PMCID: PMC6154167] [DOI: 10.3201/eid2410.180032] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0]
Abstract
Maps of Histoplasma capsulatum infection prevalence were created 50 years ago; since then, the environment, climate, and anthropogenic land use have changed drastically. Recent outbreaks of acute disease in Montana and Nebraska, USA, suggest shifts in geographic distribution, necessitating updated prevalence maps. To create a weighted overlay geographic suitability model for Histoplasma, we used a geographic information system to combine satellite imagery integrating land cover use (70%), distance to water (20%), and soil pH (10%). We used logistic regression modeling to compare our map with state-level histoplasmosis incidence data from a 5% sample from the Centers for Medicare and Medicaid Services. When compared with these state-level data, the suitability score predicted states with high and mid-to-high histoplasmosis incidence with moderate accuracy. Preferred soil environments for Histoplasma have migrated into the upper Missouri River basin. Suitability score mapping may be applicable to other geographically specific infectious vectors.
Research Support, U.S. Gov't, Non-P.H.S.
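The suitability model above is a weighted overlay: each input layer is rescaled to a common suitability score and the layers are combined with weights of 70% (land cover), 20% (distance to water), and 10% (soil pH). A minimal numpy sketch on synthetic rasters follows; the rescaling rules and values are invented for illustration, and only the layer weights come from the abstract.

```python
# Minimal weighted-overlay suitability model on synthetic raster layers.
# Rescaling rules and example values are invented; only the 0.7 / 0.2 / 0.1
# layer weights come from the abstract.
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)                               # small synthetic study area

land_cover_score = rng.uniform(0, 1, shape)      # 1 = most favorable cover class
dist_to_water_km = rng.uniform(0, 20, shape)
soil_ph = rng.uniform(4.5, 8.5, shape)

# Rescale each layer to a 0-1 suitability score.
water_score = np.clip(1 - dist_to_water_km / 20.0, 0, 1)   # closer to water = higher
ph_score = np.clip(1 - np.abs(soil_ph - 6.0) / 2.5, 0, 1)  # peak suitability near pH 6

suitability = 0.7 * land_cover_score + 0.2 * water_score + 0.1 * ph_score

# Aggregate to region-level summaries, e.g., mean suitability for the area.
print(f"mean suitability: {suitability.mean():.2f}")
print(f"share of highly suitable cells (>0.75): {(suitability > 0.75).mean():.1%}")
```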
62. Nettleman MD, Jones RB, Roberts SD, Katz BP, Washington AE, Dittus RS, Quinn TS. Cost-effectiveness of culturing for Chlamydia trachomatis. A study in a clinic for sexually transmitted diseases. Ann Intern Med 1986; 105:189-96. [PMID: 3089086] [DOI: 10.7326/0003-4819-105-2-189] [Citation(s) in RCA: 53] [Impact Index Per Article: 1.4]
Abstract
We have evaluated the cost-effectiveness of using cell culture to test for chlamydial infections in 9979 patients at a clinic for sexually transmitted diseases. From results of cultures, we have established prevalence data and, using decision-theory analysis, have calculated costs and probabilities of various outcomes. According to their histories and presenting signs and symptoms, patients were classified as at high or low risk for chlamydial infections. Empiric treatment of all patients attending the clinic was the most cost-effective strategy, followed by empiric treatment of high-risk women and culture-based treatment of low-risk women. Obtaining cultures for men at high and low risk was not cost-effective. If universal treatment is not provided, the most cost-effective strategy appears to be empiric therapy in patients at high risk for chlamydial infections and therapy based on diagnostic test results in women at low risk.
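The decision-theory comparison above (empiric treatment of everyone versus culture-guided treatment) hinges on prevalence, test characteristics, and the downstream cost of missed infections. The toy expected-cost comparison below is a simplified version of that structure with invented numbers, not the study's inputs or its full model.

```python
# Toy expected-cost comparison: treat everyone empirically vs. culture and
# treat only positives.  All costs, probabilities, and test characteristics
# are invented placeholders, not the study's inputs.
def cost_treat_all(c_treat):
    return c_treat

def cost_culture_based(p, c_treat, c_culture, sens, spec, c_missed):
    # Culture for everyone, treatment for true and false positives, and a
    # downstream cost (sequelae, transmission) for missed true infections.
    return (c_culture
            + p * sens * c_treat
            + (1 - p) * (1 - spec) * c_treat
            + p * (1 - sens) * c_missed)

c_treat, c_culture, sens, spec, c_missed = 30.0, 12.0, 0.80, 0.99, 400.0

for p in (0.05, 0.10, 0.20, 0.30):
    a = cost_treat_all(c_treat)
    b = cost_culture_based(p, c_treat, c_culture, sens, spec, c_missed)
    better = "treat all empirically" if a < b else "culture first"
    print(f"prevalence {p:.0%}: treat-all ${a:.2f}, culture-based ${b:.2f} -> {better}")
```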
63. Asher AL, Devin CJ, Archer KR, Chotai S, Parker SL, Bydon M, Nian H, Harrell FE, Speroff T, Dittus RS, Philips SE, Shaffrey CI, Foley KT, McGirt MJ. An analysis from the Quality Outcomes Database, Part 2. Predictive model for return to work after elective surgery for lumbar degenerative disease. J Neurosurg Spine 2017; 27:370-381. [DOI: 10.3171/2016.8.spine16527] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6]
Abstract
OBJECTIVE Current costs associated with spine care are unsustainable. Productivity loss and time away from work for patients who were once gainfully employed contribute greatly to the financial burden experienced by individuals and, more broadly, society. Therefore, it is vital to identify the factors associated with return to work (RTW) after lumbar spine surgery. In this analysis, the authors used data from a national prospective outcomes registry to create a predictive model of patients' ability to RTW after undergoing lumbar spine surgery for degenerative spine disease. METHODS Data from 4694 patients who underwent elective spine surgery for degenerative lumbar disease, who had been employed preoperatively, and who had completed a 3-month follow-up evaluation, were entered into a prospective, multicenter registry. Patient-reported outcomes (Oswestry Disability Index [ODI], numeric rating scale [NRS] for back pain [BP] and leg pain [LP], and EQ-5D scores) were recorded at baseline and at 3 months postoperatively. The time to RTW was defined as the period between operation and date of returning to work. A multivariable Cox proportional hazards regression model, including an array of preoperative factors, was fitted for RTW. The model performance was measured using the concordance index (c-index). RESULTS Eighty-two percent of patients (n = 3855) returned to work within 3 months postoperatively. The risk-adjusted predictors of a lower likelihood of RTW were being preoperatively employed but not working at the time of presentation, manual labor as an occupation, workers' compensation, liability insurance for disability, higher preoperative ODI score, higher preoperative NRS-BP score, and demographic factors such as female sex, African American race, history of diabetes, and higher American Society of Anesthesiologists score. The likelihood of RTW within 3 months was higher in patients with a higher education level than in those with less than high school-level education. The c-index of the model's performance was 0.71. CONCLUSIONS This study presents a novel predictive model for the probability of returning to work after lumbar spine surgery. Spine care providers can use this model to educate patients and encourage them in shared decision-making regarding the RTW outcome. This evidence-based decision support will result in better communication between patients and clinicians and improve postoperative recovery expectations, which will ultimately increase the likelihood of a positive RTW trajectory.
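The RTW model above is a multivariable Cox proportional hazards regression whose discrimination is summarized by a concordance index (c-index of 0.71). A minimal sketch of that workflow on simulated data with hypothetical column names, using the lifelines package (not the registry's actual variables or code):

```python
# Sketch of a Cox proportional hazards model for time to return to work (RTW),
# evaluated with the concordance index.  Data, column names, and effect sizes
# are simulated and hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "workers_comp": rng.integers(0, 2, n),
    "manual_labor": rng.integers(0, 2, n),
    "baseline_odi": rng.uniform(10, 80, n),
})
# Simulate days to RTW; workers' compensation, manual labor, and higher ODI delay RTW.
rate = 0.03 * np.exp(-0.6 * df["workers_comp"] - 0.4 * df["manual_labor"] - 0.01 * df["baseline_odi"])
df["days_to_rtw"] = rng.exponential(1 / rate)
df["returned"] = (df["days_to_rtw"] <= 90).astype(int)   # event observed within 3 months
df["days_to_rtw"] = df["days_to_rtw"].clip(upper=90)     # administrative censoring at 90 days

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_rtw", event_col="returned")
cph.print_summary()                                      # hazard ratios per predictor
print("c-index:", round(cph.concordance_index_, 2))
```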
64. Heck DA, Melfi CA, Mamlin LA, Katz BP, Arthur DS, Dittus RS, Freund DA. Revision rates after knee replacement in the United States. Med Care 1998; 36:661-9. [PMID: 9596057] [DOI: 10.1097/00005650-199805000-00006] [Citation(s) in RCA: 52] [Impact Index Per Article: 1.9]
Abstract
OBJECTIVES Each year approximately 100,000 Medicare patients undergo knee replacement surgery. Patients, referring physicians, and surgeons must consider a variety of factors when deciding if knee replacement is indicated. One factor in this decision process is the likelihood of revision knee replacement after the initial surgery. This study determined the chance that a revision knee replacement will occur and which factors were associated with revision. METHODS Data on all primary and revision knee replacements that were performed on Medicare patients during the years 1985 through 1990 were obtained. The probability that a revision knee replacement occurred was modeled from data for all patients for whom 2 full years of follow-up data were available. Two strategies for linking revisions to a particular primary knee replacement for each patient were developed. Predictive models were developed for each linking strategy. ICD-9-CM codes were used to determine hospitalizations for primary knee replacement and revision knee replacement. RESULTS More than 200,000 hospitalizations for primary knee replacements were performed, with fewer than 3% of them requiring revision within 2 years. The following factors increase the chance of revision within 2 years of primary knee replacement: (1) male gender, (2) younger age, (3) longer length of hospital stay for the primary knee replacement, (4) more diagnoses at the primary knee replacement hospitalization, (5) unspecified arthritis type, (6) surgical complications during the primary knee replacement hospitalization, and (7) primary knee replacement performed at an urban hospital. CONCLUSIONS Revision knee replacement is uncommon. Demographic, clinical, and process factors were related to the probability of revision knee replacement.
65. Han JH, Morandi A, Ely EW, Callison C, Zhou C, Storrow AB, Dittus RS, Habermann R, Schnelle J. Delirium in the nursing home patients seen in the emergency department. J Am Geriatr Soc 2009; 57:889-94. [PMID: 19484845] [DOI: 10.1111/j.1532-5415.2009.02219.x] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.2]
Abstract
OBJECTIVES To determine whether nursing home patients are more likely than non-nursing home patients to present to the emergency department (ED) with delirium and to explore how variations in their delirium risk factor profiles contribute to this relationship. DESIGN Prospective cross-sectional study. SETTING Tertiary care academic ED. PARTICIPANTS Three hundred forty-one English-speaking patients aged 65 and older. MEASUREMENTS Delirium status was determined using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) administered by trained research assistants. Multivariable logistic regression was used to determine whether nursing home residence was independently associated with delirium. Adjusted odds ratios (ORs) with their 95% confidence intervals (95% CIs) were reported. RESULTS Of the 341 patients enrolled, 58 (17.0%) resided in a nursing home and 38 (11.1%) were considered to have delirium in the ED. Twenty-two of the 58 nursing home patients (37.9%) and 16 of 283 (5.7%) non-nursing home patients had delirium (unadjusted OR=10.2, 95% CI=4.9-21.2). After adjusting for dementia, a Katz activity of daily living score less than or equal to 4, hearing impairment, and the presence of systemic inflammatory response syndrome, nursing home residence was independently associated with delirium in the ED (adjusted OR=4.2, 95% CI=1.8-9.7). CONCLUSION In the ED setting, nursing home patients were more likely to present with delirium, and this relationship persisted after adjusting for delirium risk factors.
Research Support, Non-U.S. Gov't
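The adjusted odds ratio above comes from a multivariable logistic regression with delirium as the outcome; the OR and its 95% CI are the exponentiated coefficient and confidence limits for nursing home residence. A minimal statsmodels sketch on simulated data with hypothetical variable names and effect sizes:

```python
# Sketch: adjusted odds ratio for nursing home residence from a multivariable
# logistic regression.  Data and effect sizes are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 341
df = pd.DataFrame({
    "nursing_home": rng.integers(0, 2, n),
    "dementia": rng.integers(0, 2, n),
    "hearing_impaired": rng.integers(0, 2, n),
})
logit_p = -3.0 + 1.4 * df["nursing_home"] + 1.0 * df["dementia"] + 0.5 * df["hearing_impaired"]
df["delirium"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["nursing_home", "dementia", "hearing_impaired"]])
res = sm.Logit(df["delirium"], X).fit(disp=False)

or_ci = np.exp(res.conf_int())             # 95% CI on the odds-ratio scale
print("adjusted OR (nursing_home):", round(np.exp(res.params["nursing_home"]), 2))
print("95% CI:", or_ci.loc["nursing_home"].round(2).tolist())
```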
66. Roumie CL, Halasa NB, Edwards KM, Zhu Y, Dittus RS, Griffin MR. Differences in antibiotic prescribing among physicians, residents, and nonphysician clinicians. Am J Med 2005; 118:641-8. [PMID: 15922696] [DOI: 10.1016/j.amjmed.2005.02.013] [Citation(s) in RCA: 51] [Impact Index Per Article: 2.6]
Abstract
PURPOSE State legislatures have increased the prescribing capabilities of nurse practitioners and physician assistants and broadened the scope of their practice roles. To determine the impact of these changes, we compared outpatient antibiotic prescribing by practicing physicians, nonphysician clinicians, and resident physicians. METHODS Using the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), we conducted a cross-sectional study of patients ≥18 years of age receiving care in 3 outpatient settings (office practices, hospital practices, and emergency departments) from 1995 to 2000. We measured, by practitioner type, the proportion of all visits, and of visits for respiratory diagnoses where antibiotics are rarely indicated, in which an antibiotic was prescribed. RESULTS For all patient visits, nonphysician clinicians were more likely to prescribe antibiotics than practicing physicians for visits in office practices (26.3% vs 16.2%), emergency departments (23.8% vs 18.2%), and hospital clinics (25.2% vs 14.6%). Similarly, for the subset of visits for respiratory conditions where antibiotics are rarely indicated, nonphysician clinicians prescribed antibiotics more often than practicing physicians in office practices (odds ratio [OR] 1.86, 95% confidence interval [CI]: 1.05 to 3.29) and in hospital practices (OR 1.55, 95% CI: 1.12 to 2.15). In hospital practices, resident physicians had lower prescribing rates than practicing physicians for all visits as well as visits for respiratory conditions where antibiotics are rarely indicated (OR 0.56, 95% CI: 0.36 to 0.86). CONCLUSION Nonphysician clinicians were more likely to prescribe antibiotics than practicing physicians in outpatient settings, and resident physicians were less likely to prescribe antibiotics. These differences suggest that general educational campaigns to reduce antibiotic prescribing have not reached all providers.
Comparative Study
67.
Abstract
Economic analyses have become increasingly important in healthcare in general and with respect to pharmaceuticals in particular. If economic analyses are to play an important and useful role in the allocation of scarce healthcare resources, then such analyses must be performed properly and with care. This article outlines some of the basic principles of pharmacoeconomic analysis. Every analysis should have an explicitly stated perspective, which, unless otherwise justified, should be a societal perspective. Cost minimisation, cost-effectiveness, cost-utility and cost-benefit analyses are a family of techniques used in economic analyses. Cost minimisation analysis is appropriate when alternative therapies have identical outcomes, but differ in costs. Cost-effectiveness analysis is appropriate when alternative therapies differ in clinical effectiveness but can be examined from the same dimension of health outcome. Cost-utility analysis can be used when alternative therapies may be examined using multiple dimensions of health outcome, such as morbidity and mortality. Cost-benefit analysis requires the benefits of therapy to be described in monetary units and is not usually the technique of choice. The technique used in an analysis should be described and explicitly defended according to the problem being examined. For each technique, the method of determining costs is the same; direct, indirect, and intangible costs can be considered. The specific costs to be used depend on the analytical perspective; a societal perspective implies the use of both direct and indirect economic costs. A modelling framework such as a decision tree, influence diagram, Markov chain, or network simulation must be used to structure the analysis explicitly. Regardless of the choice of framework, all modelling assumptions should be described. The mechanism of data collection for model inputs must be detailed and defended. Models must undergo careful verification and validation procedures. Following baseline analysis of the model, further analyses should examine the role of uncertainty in model assumptions and data.
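Of the modelling frameworks listed above, a Markov chain cohort model is a common way to structure a cost-utility analysis: the cohort moves between health states each cycle, accumulating discounted costs and QALYs, and two therapies are compared with an incremental cost-effectiveness ratio. A deliberately small sketch with invented health states, transition probabilities, costs, and utilities:

```python
# Tiny 3-state Markov cohort model (well -> sick -> dead) comparing two
# therapies on discounted costs and QALYs, then computing an ICER.  All
# transition probabilities, costs, and utilities are invented for illustration.
import numpy as np

utility = np.array([1.0, 0.6, 0.0])                      # QALY weight per state per year
annual_cost = {"standard": np.array([200.0, 3000.0, 0.0]),
               "new_drug": np.array([1200.0, 3000.0, 0.0])}
transition = {"standard": np.array([[0.85, 0.10, 0.05],
                                    [0.00, 0.80, 0.20],
                                    [0.00, 0.00, 1.00]]),
              "new_drug": np.array([[0.90, 0.07, 0.03],
                                    [0.00, 0.85, 0.15],
                                    [0.00, 0.00, 1.00]])}

def run_cohort(arm, years=20, discount=0.03):
    dist = np.array([1.0, 0.0, 0.0])                     # everyone starts "well"
    cost = qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t                  # discount factor for year t
        cost += d * dist @ annual_cost[arm]
        qaly += d * dist @ utility
        dist = dist @ transition[arm]                    # advance the cohort one cycle
    return cost, qaly

c_std, q_std = run_cohort("standard")
c_new, q_new = run_cohort("new_drug")
icer = (c_new - c_std) / (q_new - q_std)
print(f"incremental cost ${c_new - c_std:,.0f}, incremental QALYs {q_new - q_std:.2f}")
print(f"ICER ~ ${icer:,.0f} per QALY gained")
```

The same structure extends to more states, shorter cycles, and the uncertainty analyses the abstract calls for (varying transition probabilities, costs, and utilities and re-running the model).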
68. Hughes CG, Patel MB, Jackson JC, Girard TD, Geevarghese SK, Norman BC, Thompson JL, Chandrasekhar R, Brummel NE, May AK, Elstad MR, Wasserstein ML, Goodman RB, Moons KG, Dittus RS, Ely EW, Pandharipande PP. Surgery and anesthesia exposure is not a risk factor for cognitive impairment after major noncardiac surgery and critical illness. Ann Surg 2017; 265:1126-1133. [PMID: 27433893] [PMCID: PMC5856253] [DOI: 10.1097/sla.0000000000001885] [Citation(s) in RCA: 49] [Impact Index Per Article: 6.1]
Abstract
OBJECTIVE The aim of this study was to determine whether surgery and anesthesia exposure is an independent risk factor for cognitive impairment after major noncardiac surgery associated with critical illness. SUMMARY OF BACKGROUND DATA Postoperative cognitive impairment is a prevalent individual and public health problem. Data are inconclusive as to whether this impairment is attributable to surgery and anesthesia exposure versus patients' baseline factors and hospital course. METHODS In a multicenter prospective cohort study, we enrolled ICU patients with major noncardiac surgery during hospital admission and with nonsurgical medical illness. At 3 and 12 months, we assessed survivors' global cognitive function with the Repeatable Battery for the Assessment of Neuropsychological Status and executive function with the Trail Making Test, Part B. We performed multivariable linear regression to study the independent association of surgery/anesthesia exposure with cognitive outcomes, adjusting initially for baseline covariates and subsequently for in-hospital covariates. RESULTS We enrolled 1040 patients, 402 (39%) with surgery/anesthesia exposure. Median global cognition scores were similar in patients with surgery/anesthesia exposure compared with those without exposure at 3 months (79 vs 80) and 12 months (82 vs 82). Median executive function scores were also similar at 3 months (41 vs 40) and 12 months (43 vs 42). Surgery/anesthesia exposure was not associated with worse global cognition or executive function at 3 or 12 months in models incorporating baseline or in-hospital covariates (P > 0.2). Higher baseline education level was associated with better global cognition at 3 and 12 months (P < 0.001), and longer in-hospital delirium duration was associated with worse global cognition (P < 0.02) and executive function (P < 0.01) at 3 and 12 months. CONCLUSIONS Cognitive impairment after major noncardiac surgery and critical illness is not associated with the surgery and anesthesia exposure but is predicted by baseline education level and in-hospital delirium.
Multicenter Study
69. Mazzuca SA, Brandt KD, Katz BP, Dittus RS, Freund DA, Lubitz R, Hawker G, Eckert G. Comparison of general internists, family physicians, and rheumatologists managing patients with symptoms of osteoarthritis of the knee. Arthritis Care Res 1997; 10:289-99. [PMID: 9362595] [DOI: 10.1002/art.1790100503] [Citation(s) in RCA: 48] [Impact Index Per Article: 1.7]
Abstract
OBJECTIVE To evaluate the nature, risks, and benefits of osteoarthritis (OA) management by primary care physicians and rheumatologists. METHODS Subjects were 419 patients followed for symptoms of knee OA by either a specialist in family medicine (FM) or general internal medicine (GIM) or by a rheumatologist (RH). Management practices were characterized by in-home documentation by a visiting nurse of drugs taken to relieve OA pain or to prevent gastrointestinal side effects of nonsteroidal anti-inflammatory drugs (NSAIDs) and by patient report (self-administered survey) of nonpharmacologic treatments. Changes in outcomes (knee pain and physical function) over 6 months were measured with the Western Ontario and McMaster Universities Osteoarthritis Index. RESULTS Patients of RHs were 2-3 years older (P = 0.035) and tended to exhibit greater radiographic severity of OA (P = 0.064) and poorer physical function (P = 0.076) at baseline than the other 2 groups. In all 3 groups, knee pain and physical function improved slightly over 6 months; however, between-group differences were not significant. Compared to drug management of knee pain by FMs or RHs, that by the GIMs was distinguished by greater utilization of acetaminophen and nonacetylated salicylates (P = 0.008), lower prescribed doses of NSAIDs (P = 0.007), and, therefore, lower risk of iatrogenic gastroenteropathy (P < 0.001). In contrast, patients of RHs were more likely than those of FMs and GIMs to report that they had been instructed in use of isometric quadriceps and range-of-motion exercises (P < or = 0.001), application of heat (P = 0.051) and cold (P < 0.001) packs, and in the principles of joint protection (P = 0.016). Neither physician specialty nor specific management practices accounted for variations in patient outcomes. CONCLUSION This observational study identified specialty-related variability in key aspects of the management of knee OA in the community (i.e., frequency and dosing of NSAIDs, use of nonpharmacologic modalities) that bear strong implications for long-term safety and cost. However, changes in knee pain and function over 6 months were unrelated to variations in management practices.
Comparative Study
70.
Abstract
Advances in organization and patient management in the intensive care unit (ICU) have led to reductions in the morbidity and mortality suffered by critically ill patients. Two such advances include multidisciplinary teams (MDTs) and the development of clinical protocols. The use of protocols and MDTs does not necessarily guarantee instant improvement in the quality of care, but it does offer useful tools for the pursuit of such objectives. As ICU physicians increasingly assume leadership roles in the pursuit of higher quality ICU care, their knowledge and skills in the discipline of quality improvement will become essential.
research-article
71. Peterson NB, Murff HJ, Ness RM, Dittus RS. Colorectal cancer screening among men and women in the United States. J Womens Health (Larchmt) 2007; 16:57-65. [PMID: 17324097] [DOI: 10.1089/jwh.2006.0131] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.6]
Abstract
BACKGROUND A few previous studies have shown that men were more likely than women to be screened for colorectal cancer (CRC). METHODS The 2000 National Health Interview Survey (NHIS) was administered to 32,374 adults ≥18 years of age. Participants were asked if they ever had a sigmoidoscopy or colonoscopy and if they ever had a home fecal occult blood test (FOBT). Men and women ≥50 years were eligible for analysis. Participants were considered to be current in testing if they reported sigmoidoscopy in the last 5 years, colonoscopy in the last 10 years, or home FOBT in the last 1 year. RESULTS Overall, 62.9% of adults had ever had CRC testing, and 37.1% were current for testing. Compared to older men, a greater proportion of older women were not current for testing (62.6% for women vs. 56.7% for men >75 years). In multivariate analysis, women were not less likely than men to be current in CRC testing (OR 0.98, 95% CI 0.88-1.08). When compared with white women, black women were less likely to be current for CRC screening (OR 0.79, 95% CI 0.65-0.95). CONCLUSIONS CRC screening is underused. Targeting interventions to improve CRC screening for all appropriate patients will be important.
Journal Article
72. McNaughton CD, Collins SP, Kripalani S, Rothman R, Self WH, Jenkins C, Miller K, Arbogast P, Naftilan A, Dittus RS, Storrow AB. Low numeracy is associated with increased odds of 30-day emergency department or hospital recidivism for patients with acute heart failure. Circ Heart Fail 2012; 6:40-6. [PMID: 23230305] [DOI: 10.1161/circheartfailure.112.969477] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6]
Abstract
BACKGROUND More than 25% of Medicare patients hospitalized for heart failure are readmitted within 30 days. The contributions of numeracy and health literacy to recidivism for patients with acute heart failure (AHF) are not known. METHODS AND RESULTS We studied a cohort of patients with AHF who presented to 4 emergency departments between January 2008 and September 2011. Research assistants administered subjective measures of numeracy and health literacy; 30-day follow-up was performed by phone interview. Recidivism was defined as any unplanned return to the emergency department or hospital within 30 days of the index emergency department visit for AHF. Multivariable logistic regression adjusting for patient age, sex, race, insurance status, hospital site, days eligible for recidivism, chronic kidney disease, abnormal hemoglobin, and low ejection fraction evaluated the relation of numeracy and health literacy with 30-day recidivism. Of the 709 patients included in the analysis, 390 (55%) had low numeracy skills and 258 (37%) had low literacy skills. Low numeracy was associated with increased odds of recidivism within 30 days (adjusted odds ratio, 1.41; 95% confidence interval, 1.00-1.98; P=0.048). For low health literacy, the adjusted odds ratio of recidivism was 1.17 (95% confidence interval, 0.83-1.65; P=0.37). CONCLUSIONS Low numeracy was associated with greater odds of 30-day recidivism. Further investigation is warranted to determine whether addressing numeracy and health literacy may reduce 30-day recidivism for patients with acute heart failure.
Research Support, U.S. Gov't, Non-P.H.S.
73. Han JH, Vasilevskis EE, Chandrasekhar R, Liu X, Schnelle JF, Dittus RS, Ely EW. Delirium in the emergency department and its extension into hospitalization (DELINEATE) study: effect on 6-month function and cognition. J Am Geriatr Soc 2017; 65:1333-1338. [PMID: 28263444] [DOI: 10.1111/jgs.14824] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6]
Abstract
BACKGROUND The natural course and clinical significance of delirium in the emergency department (ED) are unclear. OBJECTIVES We sought to (1) describe the extent to which delirium in the ED persists into hospitalization (ED delirium duration) and (2) determine how ED delirium duration is associated with 6-month functional status and cognition. DESIGN Prospective cohort study. SETTING Tertiary care, academic medical center. PARTICIPANTS ED patients ≥65 years old who were admitted to the hospital. MEASUREMENTS The modified Brief Confusion Assessment Method was used to ascertain delirium in the ED and hospital. Premorbid and 6-month function were determined using the Older Americans Resources and Services Activities of Daily Living (OARS ADL) questionnaire, which ranged from 0 (completely dependent) to 28 (completely independent). Premorbid and 6-month cognition were determined using the short form Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), which ranged from 1 to 5 (severe dementia). Multiple linear regression was performed to determine if ED delirium duration was associated with 6-month function and cognition, adjusted for baseline OARS ADL and IQCODE and other confounders. RESULTS A total of 228 older ED patients were enrolled. Of the 105 patients who were delirious in the ED, 81 (77.1%) patients' delirium persisted into hospitalization. For every ED delirium duration day, the 6-month OARS ADL decreased by 0.63 points (95% CI: -1.01 to -0.24), indicating poorer function. For every ED delirium duration day, the 6-month IQCODE increased 0.06 points (95% CI: 0.01-0.10), indicating poorer cognition. CONCLUSIONS Delirium in the ED is not a transient event and frequently persists into hospitalization. Longer ED delirium duration is associated with an incremental worsening of 6-month functional and cognitive outcomes.
Journal Article
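The key estimate above (a 0.63-point drop in 6-month OARS ADL per additional day of ED delirium) is the coefficient on delirium duration in a multivariable linear regression adjusted for baseline status. A minimal OLS sketch on simulated data with hypothetical variable names and effect sizes:

```python
# Sketch: multivariable linear regression of 6-month function on delirium
# duration, adjusted for baseline function.  Simulated data, hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 228
df = pd.DataFrame({
    "delirium_days": rng.poisson(1.5, n),
    "baseline_adl": rng.uniform(14, 28, n),
    "age": rng.normal(77, 7, n),
})
df["adl_6mo"] = (0.8 * df["baseline_adl"] - 0.6 * df["delirium_days"]
                 - 0.05 * (df["age"] - 77) + rng.normal(scale=2.0, size=n))

X = sm.add_constant(df[["delirium_days", "baseline_adl", "age"]])
res = sm.OLS(df["adl_6mo"], X).fit()
coef = res.params["delirium_days"]
lo, hi = res.conf_int().loc["delirium_days"]
print(f"change in 6-month ADL per delirium day: {coef:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```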
74. Speroff T, Nwosu S, Greevy R, Weinger MB, Talbot TR, Wall RJ, Deshpande JK, France DJ, Ely EW, Burgess H, Englebright J, Williams MV, Dittus RS. Organisational culture: variation across hospitals and connection to patient safety climate. Qual Saf Health Care 2011; 19:592-6. [PMID: 21127115] [DOI: 10.1136/qshc.2009.039511] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.2]
Abstract
CONTEXT Bureaucratic organisational culture is less favourable to quality improvement, whereas organisations with group (teamwork) culture are better aligned for quality improvement. OBJECTIVE To determine if an organisational group culture shows better alignment with patient safety climate. DESIGN Cross-sectional administration of questionnaires. SETTING 40 Hospital Corporation of America hospitals. PARTICIPANTS 1406 nurses, ancillary staff, allied staff and physicians. MAIN OUTCOME MEASURES Competing Values Measure of Organisational Culture, Safety Attitudes Questionnaire (SAQ), Safety Climate Survey (SCSc) and Information and Analysis (IA). RESULTS The Cronbach alpha was 0.81 for the group culture scale and 0.72 for the hierarchical culture scale. Group culture was positively correlated with the SAQ and its subscales (correlation coefficient r = 0.44 to 0.55, except situational recognition), the SCSc (r = 0.47) and IA (r = 0.33). Hierarchical culture was negatively correlated with the SAQ scales, SCSc and IA. Among the 40 hospitals, 37.5% had a dominant hierarchical culture, 37.5% a dominant group culture and 25% a balanced culture. Group culture hospitals had significantly higher safety climate scores than hierarchical culture hospitals. The magnitude of these relationships was not affected after adjusting for provider job type and hospital characteristics. CONCLUSIONS Hospitals vary in organisational culture, and the type of culture relates to the safety climate within the hospital. In combination with prior studies, these results suggest that a healthcare organisation's culture is a critical factor in the development of its patient safety climate and in the successful implementation of quality improvement initiatives.
Research Support, U.S. Gov't, P.H.S.
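The scale reliabilities above (Cronbach's alpha of 0.81 and 0.72) summarize the internal consistency of the item-level survey responses. A small numpy sketch of that calculation on simulated Likert items (not the survey data):

```python
# Cronbach's alpha for a multi-item scale, computed on simulated Likert data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()          # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)           # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                      # one underlying trait per respondent
items = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(300, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```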
75. Wall RJ, Ely EW, Elasy TA, Dittus RS, Foss J, Wilkerson KS, Speroff T. Using real time process measurements to reduce catheter related bloodstream infections in the intensive care unit. Qual Saf Health Care 2006; 14:295-302. [PMID: 16076796] [PMCID: PMC1744064] [DOI: 10.1136/qshc.2004.013516] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.4]
Abstract
PROBLEM Measuring a process of care in real time is essential for continuous quality improvement (CQI). Our inability to measure the process of central venous catheter (CVC) care in real time prevented CQI efforts aimed at reducing catheter related bloodstream infections (CR-BSIs) from these devices. DESIGN A system was developed for measuring the process of CVC care in real time. We used these new process measurements to continuously monitor the system, guide CQI activities, and deliver performance feedback to providers. SETTING Adult medical intensive care unit (MICU). KEY MEASURES FOR IMPROVEMENT Measured process of CVC care in real time; CR-BSI rate and time between CR-BSI events; and performance feedback to staff. STRATEGIES FOR CHANGE An interdisciplinary team developed a standardized, user friendly nursing checklist for CVC insertion. Infection control practitioners scanned the completed checklists into a computerized database, thereby generating real time measurements for the process of CVC insertion. Armed with these new process measurements, the team optimized the impact of a multifaceted intervention aimed at reducing CR-BSIs. EFFECTS OF CHANGE The new checklist immediately provided real time measurements for the process of CVC insertion. These process measures allowed the team to directly monitor adherence to evidence-based guidelines. Through continuous process measurement, the team successfully overcame barriers to change, reduced the CR-BSI rate, and improved patient safety. Two years after the introduction of the checklist the CR-BSI rate remained at a historic low. LESSONS LEARNT Measuring the process of CVC care in real time is feasible in the ICU. When trying to improve care, real time process measurements are an excellent tool for overcoming barriers to change and enhancing the sustainability of efforts. To continually improve patient safety, healthcare organizations should continually measure their key clinical processes in real time.
Research Support, U.S. Gov't, P.H.S.