1. Levy C, Kononowech J, Ersek M, Phibbs CS, Scott W, Sales A. Evaluating feedback reports to support documentation of veterans' care preferences in home based primary care. BMC Geriatr 2024; 24:389. [PMID: 38693502] [PMCID: PMC11064362] [DOI: 10.1186/s12877-024-04999-y]
Abstract
BACKGROUND To evaluate the effectiveness of delivering feedback reports to increase completion of life-sustaining treatment (LST) notes among VA Home Based Primary Care (HBPC) teams. The Life Sustaining Treatment Decisions Initiative (LSTDI) was implemented throughout the Veterans Health Administration (VHA) in the United States in 2017 to ensure that seriously ill Veterans have care goals and LST decisions elicited and documented. METHODS We distributed monthly feedback reports summarizing LST template completion rates to 13 HBPC intervention sites between October 2018 and February 2020 as the sole implementation strategy. We used principal component analyses to match intervention sites to 26 comparison sites and used interrupted time series/segmented regression analyses to evaluate differences in LST template completion rates between intervention and comparison sites. Data were extracted from national VA HBPC databases and supplemented with interviews and surveys in a mixed-methods process evaluation. RESULTS LST template completion rose from 6.3% to 41.9% across both intervention and comparison HBPC teams between March 1, 2018, and February 26, 2020. There were no statistically significant differences for intervention sites that received feedback reports. CONCLUSIONS Feedback reports did not increase documentation of LST preferences for Veterans at intervention sites compared with comparison sites. Observed increases in completion rates across intervention and comparison sites can likely be attributed to implementation strategies used as part of the national roll-out of the LSTDI. Our results suggest that feedback reports alone were not an effective implementation strategy to augment national implementation strategies in HBPC teams.
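The study's core analysis is an interrupted time series with segmented regression on matched intervention and comparison sites. The sketch below is a minimal illustration of that analytic pattern, not the authors' actual model or data: the synthetic data, variable names, and interruption point are assumptions made for the example.

```python
# Minimal sketch: segmented regression for an interrupted time series comparing
# intervention vs. comparison sites. Synthetic data; names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)        # study months
interruption = 12             # month feedback reports began (assumed for the example)

def simulate(group, n_sites=13):
    rows = []
    for site in range(n_sites):
        base = rng.normal(6, 1)                     # baseline completion rate (%)
        for t in months:
            post = int(t >= interruption)
            rate = (base + 1.5 * t                  # shared secular trend (national roll-out)
                    + 0.2 * post * (t - interruption) * (group == "intervention")
                    + rng.normal(0, 2))
            rows.append({"site": site, "group": group, "month": t,
                         "post": post, "completion": rate})
    return pd.DataFrame(rows)

df = pd.concat([simulate("intervention"), simulate("comparison")])
df["months_after"] = np.maximum(0, df["month"] - interruption)

# Level change (post), trend change (months_after), and group differences in both.
model = smf.ols("completion ~ C(group) * (month + post + months_after)", data=df).fit()
print(model.summary())
```

The group-by-trend interaction is the quantity of interest; a null estimate corresponds to the paper's finding of no added effect of the feedback reports.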
Affiliation(s)
- Cari Levy: Denver-Seattle VA Center of Innovation for Value Driven & Veteran-Centric Care, Rocky Mountain Regional VA Medical Center at VA Eastern Colorado Health Care System, Aurora, CO, USA; Division of Geriatric Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Jennifer Kononowech: Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Mary Ersek: Center for Health Equity and Promotion, Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA; Schools of Nursing and Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ciaran S Phibbs: Geriatrics and Extended Care Data and Analysis Center, VA Palo Alto Health Care System, Palo Alto, CA, USA; Departments of Pediatrics and Health Policy, Stanford University School of Medicine, Stanford, CA, USA
- Winifred Scott: Geriatrics and Extended Care Data and Analysis Center, VA Palo Alto Health Care System, Palo Alto, CA, USA
- Anne Sales: Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA; Sinclair School of Nursing, Department of Family and Community Medicine, University of Missouri, Columbia, MO, USA
2. Carpenter JG, Scott WJ, Kononowech J, Foglia MB, Haverhals LM, Hogikyan R, Kolanowski A, Landis-Lewis Z, Levy C, Miller SC, Periyakoil VJ, Phibbs CS, Potter L, Sales A, Ersek M. Evaluating implementation strategies to support documentation of veterans' care preferences. Health Serv Res 2022; 57:734-743. [PMID: 35261022] [PMCID: PMC9264454] [DOI: 10.1111/1475-6773.13958]
Abstract
OBJECTIVE To evaluate the effectiveness of feedback reports and feedback reports + external facilitation on completion of the life-sustaining treatment (LST) note template and durable medical orders. This quality improvement program supported the national roll-out of the Veterans Health Administration (VA) LST Decisions Initiative (LSTDI), which aims to ensure that seriously ill veterans have care goals and LST decisions elicited and documented. DATA SOURCES Primary data from national databases for VA nursing homes (called Community Living Centers [CLCs]) from 2018 to 2020. STUDY DESIGN In one project, we distributed monthly feedback reports summarizing LST template completion rates to 12 sites as the sole implementation strategy. In the second project, involving five sites, we distributed similar feedback reports and provided robust external facilitation, which included coaching, education, and learning collaboratives. For each project, principal component analyses matched intervention sites to comparison sites, and interrupted time series/segmented regression analyses evaluated the differences in LSTDI template completion rates between intervention and comparison sites. DATA COLLECTION METHODS Data were extracted from national databases in addition to interviews and surveys in a mixed-methods process evaluation. PRINCIPAL FINDINGS LSTDI template completion rose from 0% to about 80% throughout the study period in both projects' intervention and comparison CLCs. There were small but statistically significant differences for feedback reports alone (comparison sites performed better, coefficient estimate 3.48, standard error 0.99 for the difference between groups in change in trend) and feedback reports + external facilitation (intervention sites performed better, coefficient estimate -2.38, standard error 0.72). CONCLUSIONS Feedback reports + external facilitation was associated with a small but statistically significant improvement in outcomes compared with comparison sites. The large increases in completion rates are likely due to the well-planned national roll-out of the LSTDI. This finding suggests that when dissemination and support for widespread implementation are present and system-mandated, significant enhancements in the adoption of evidence-based practices may require more intensive support.
Affiliation(s)
- Joan G. Carpenter: Organizational Systems and Adult Health, University of Maryland School of Nursing, Baltimore, Maryland, USA; Corporal Michael J. Crescenz VAMC, Philadelphia, Pennsylvania, USA; Department of Biobehavioral Health Sciences, University of Pennsylvania School of Nursing, Philadelphia, Pennsylvania, USA
- Jennifer Kononowech: Center for Clinical Management Research, VA Ann Arbor Health Care System, Ann Arbor, Michigan, USA
- Mary Beth Foglia: Veterans Health Administration, National Center for Ethics in Health Care, Washington, District of Columbia, USA; School of Medicine, Department of Bioethics and Humanities, University of Washington, Seattle, Washington, USA
- Leah M. Haverhals: Denver-Seattle Center of Innovation, Rocky Mountain Regional VA Medical Center, VA Eastern Colorado Health Care System, Aurora, Colorado, USA; Division of Health Care Policy and Research, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Robert Hogikyan: Department of Internal Medicine, Division of Geriatric and Palliative Medicine, University of Michigan, Ann Arbor, Michigan, USA; GRECC, VA Ann Arbor Healthcare System, Ann Arbor, Michigan, USA
- Ann Kolanowski: Penn State Ross & Carol Nese College of Nursing, University Park, Pennsylvania, USA
- Cari Levy: Denver-Seattle Center of Innovation, Rocky Mountain Regional VA Medical Center, VA Eastern Colorado Health Care System, Aurora, Colorado, USA; Division of Health Care Policy and Research, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Susan C. Miller: Brown University School of Public Health, Warwick, Rhode Island, USA
- V. J. Periyakoil: Health Economics Resource Center (HERC), VA Palo Alto Health Care System, Menlo Park, California, USA; Stanford University School of Medicine, Stanford, California, USA
- Ciaran S. Phibbs: Health Economics Resource Center (HERC), VA Palo Alto Health Care System, Menlo Park, California, USA; Stanford University School of Medicine, Stanford, California, USA
- Lucinda Potter: Veterans Health Administration, National Center for Ethics in Health Care, Washington, District of Columbia, USA
- Anne Sales: Center for Clinical Management Research, VA Ann Arbor Health Care System, Ann Arbor, Michigan, USA; Sinclair School of Nursing, University of Missouri, Columbia, Missouri, USA
- Mary Ersek: Corporal Michael J. Crescenz VAMC, Philadelphia, Pennsylvania, USA; Department of Biobehavioral Health Sciences, University of Pennsylvania School of Nursing, Philadelphia, Pennsylvania, USA; Leonard Davis Institute, Philadelphia, Pennsylvania, USA
3. Ersek M, Sales A, Keddem S, Ayele R, Haverhals LM, Magid KH, Kononowech J, Murray A, Carpenter JG, Foglia MB, Potter L, McKenzie J, Davis D, Levy C. Preferences Elicited and Respected for Seriously Ill Veterans through Enhanced Decision-Making (PERSIVED): a protocol for an implementation study in the Veterans Health Administration. Implement Sci Commun 2022; 3:78. [PMID: 35859140] [PMCID: PMC9296899] [DOI: 10.1186/s43058-022-00321-2]
Abstract
BACKGROUND Empirical evidence supports the use of structured goals of care conversations and documentation of life-sustaining treatment (LST) preferences in durable, accessible, and actionable orders to improve the care for people living with serious illness. As the largest integrated healthcare system in the USA, the Veterans Health Administration (VA) provides an excellent environment to test implementation strategies that promote this evidence-based practice. The Preferences Elicited and Respected for Seriously Ill Veterans through Enhanced Decision-Making (PERSIVED) program seeks to improve care outcomes for seriously ill Veterans by supporting efforts to conduct goals of care conversations, systematically document LST preferences, and ensure timely and accurate communication about preferences across VA and non-VA settings. METHODS PERSIVED encompasses two separate but related implementation projects that support the same evidence-based practice. Project 1 will enroll 12 VA Home Based Primary Care (HBPC) programs and Project 2 will enroll six VA Community Nursing Home (CNH) programs. Both projects begin with a pre-implementation phase during which data from diverse stakeholders are gathered to identify barriers and facilitators to adoption of the LST evidence-based practice. This baseline assessment is used to tailor quality improvement activities using audit with feedback and implementation facilitation during the implementation phase. Site champions serve as the lynchpin between the PERSIVED project team and site personnel. PERSIVED teams support site champions through monthly coaching sessions. At the end of implementation, baseline site process maps are updated to reflect new steps and procedures to ensure timely conversations and documentation of treatment preferences. During the sustainability phase, intense engagement with champions ends, at which point champions work independently to maintain and improve processes and outcomes. Ongoing process evaluation, guided by the RE-AIM framework, is used to monitor Reach, Adoption, Implementation, and Maintenance outcomes. Effectiveness will be assessed using several endorsed clinical metrics for seriously ill populations. DISCUSSION The PERSIVED program aims to prevent potentially burdensome LSTs by consistently eliciting and documenting values, goals, and treatment preferences of seriously ill Veterans. Working with clinical operational partners, we will apply our findings to HBPC and CNH programs throughout the national VA healthcare system during a future scale-out period.
Affiliation(s)
- Mary Ersek: Center for Health Equity and Promotion, Corporal Michael J. Crescenz VA Medical Center, 3900 Woodland Avenue, Annex Suite 203, Philadelphia, PA, 19104, USA; University of Pennsylvania Schools of Nursing and Medicine, Philadelphia, PA, USA
- Anne Sales: Sinclair School of Nursing and Department of Family and Community Medicine, University of Missouri, Columbia, MO, USA; Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Shimrit Keddem: Center for Health Equity and Promotion, Corporal Michael J. Crescenz VA Medical Center, 3900 Woodland Avenue, Annex Suite 203, Philadelphia, PA, 19104, USA; Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, USA
- Roman Ayele: Rocky Mountain Regional VA Medical Center, Aurora, CO, USA; University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Leah M Haverhals: Rocky Mountain Regional VA Medical Center, Aurora, CO, USA; University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Kate H Magid: Rocky Mountain Regional VA Medical Center, Aurora, CO, USA
- Jennifer Kononowech: Center for Clinical Management Research, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Andrew Murray: Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Joan G Carpenter: Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA; University of Maryland School of Nursing, Baltimore, MD, USA
- Mary Beth Foglia: VA National Center for Ethics in Health Care, Washington, D.C., USA; Department of Bioethics and Humanities, School of Medicine, University of Washington, Seattle, WA, USA
- Lucinda Potter: VA National Center for Ethics in Health Care, Washington, D.C., USA
- Jennifer McKenzie: VA Purchased Long-Term Services and Supports, Geriatrics and Extended Care, Washington, D.C., USA
- Darlene Davis: Home-Based Primary Care Program, Office of Geriatrics and Extended Care, Washington, D.C., USA
- Cari Levy: Rocky Mountain Regional VA Medical Center, Aurora, CO, USA; University of Colorado Anschutz Medical Campus, Aurora, CO, USA
4. Hysong SJ, SoRelle R, Hughes AM. Prevalence of Effective Audit-and-Feedback Practices in Primary Care Settings: A Qualitative Examination Within Veterans Health Administration. Hum Factors 2022; 64:99-108. [PMID: 33830786] [DOI: 10.1177/00187208211005620]
Abstract
OBJECTIVE The purpose of this study is to uncover and catalog the various practices for delivering and disseminating clinical performance feedback in various Veterans Affairs (VA) locations and to evaluate their quality against evidence-based models of effective feedback as reported in the literature. BACKGROUND Feedback can enhance clinical performance in subsequent performance episodes. However, evidence is clear that the way in which feedback is delivered determines whether performance is harmed or improved. METHOD We purposively sampled 16 geographically dispersed VA hospitals based on high, low, consistently moderate, and highly variable performance on a set of 17 outpatient clinical performance measures. We excluded four sites due to insufficient interview data. We interviewed four key personnel from each location (n = 48) to uncover effective and ineffective audit and feedback strategies. Interviews were transcribed and analyzed qualitatively using a framework-based content analysis approach to identify emergent themes. RESULTS We identified 102 unique strategies used to deliver feedback. Of these strategies, 64 (62.74%) have been found to be ineffective according to the audit-and-feedback research literature. When features common to effective strategies (e.g., individually tailored, computerized feedback reports) were compared with those common to ineffective strategies (e.g., large staff meetings), most ineffective strategies delivered feedback in meetings, whereas the strategies receiving the highest effectiveness scores delivered feedback via easily understood visual reports rather than in a group setting. CONCLUSIONS Findings show that current practices rely largely on ineffective feedback strategies. Future research should seek to identify the longitudinal impact of current audit and feedback practices on clinical performance. APPLICATION Feedback in primary care has little standardization and does not follow available evidence for effective feedback design. Future research in this area is warranted.
Affiliation(s)
- Sylvia J Hysong: Michael E. DeBakey VA Medical Center, Texas, USA; Baylor College of Medicine, Texas, USA
- Ashley M Hughes: University of Illinois at Chicago, Chicago, Illinois, USA; Edward Hines Jr. VA Medical Center, Illinois, USA
5. O'Mahen P, Mehta P, Knox MK, Yang C, Kuebeler M, Rajan SS, Hysong SJ, Petersen LA. An Alternative Method of Public Reporting of Comparative Hospital Quality and Performance Data for Transparency Initiatives. Med Care 2021; 59:816-823. [PMID: 33999572] [DOI: 10.1097/mlr.0000000000001567]
Abstract
BACKGROUND Hospital performance comparisons for transparency initiatives may be inadequate if peer comparison groups are poorly defined. OBJECTIVE The objective of this study was to evaluate a new approach to identifying hospital peers for comparison. DESIGN/SETTING We used Mahalanobis distance as a new method of developing peer-specific groupings for hospitals that incorporates both external and internal complexity. We compared the overlap in groups with an existing method used by the Veterans Health Administration's Office for Productivity, Efficiency, and Staffing (OPES). PARTICIPANTS One hundred twenty-two acute-care Veterans Health Administration medical facilities as defined in the OPES fiscal year 2014 report. MEASURES Using 15 variables in 9 categories developed from expert input, including both hospital internal measures and community-based external measures, we used principal components analysis and calculated the Mahalanobis distance between each hospital pair. This method accounts for correlation between variables and allows variables to have different variances. We identified the 50 closest hospitals, then eliminated any potential peer whose score on the first component was >1 SD from the reference hospital. We compared overlap with OPES measures. RESULTS Of 15 variables, 12 have SDs exceeding 25% of their means. The first 2 components of our analysis explain 24.8% and 18.5% of variation among hospitals. Eight of 9 variables scaling positively on the first component measure internal complexity, aligning with OPES groups. Four of 5 variables scaling positively on the second component but not the first are factors from the policy environment; this component reflects a dimension not considered in OPES groups. CONCLUSION Individualized peers that incorporate external complexity generate more nuanced comparators for evaluating quality.
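The grouping method in this abstract is concrete enough to sketch: reduce correlated facility characteristics with principal components, compute the Mahalanobis distance between every facility pair, take each facility's closest candidates, and drop candidates more than 1 SD away on the first component. The code below is an illustrative reconstruction on synthetic data; the facility count, cutoff values, and number of variables follow the abstract, but everything else is assumed.

```python
# Illustrative sketch: Mahalanobis-distance peer groups for facilities.
# Synthetic data; the variable set and thresholds are assumptions based on the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_facilities, n_vars = 122, 15
X = rng.normal(size=(n_facilities, n_vars)) * rng.uniform(0.5, 3.0, n_vars)

# Mahalanobis distance uses the inverse covariance matrix, which accounts for
# correlated variables with unequal variances.
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(u, v):
    d = u - v
    return float(np.sqrt(d @ cov_inv @ d))

# First principal component score, used to screen out dissimilar candidates.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ vt[0]

def peers(i, n_closest=50, pc1_sd_cut=1.0):
    """Peer group for facility i: the closest facilities by Mahalanobis distance,
    dropping any whose first-component score is >1 SD from facility i's score."""
    dists = np.array([mahalanobis(X[i], X[j]) if j != i else np.inf
                      for j in range(n_facilities)])
    closest = np.argsort(dists)[:n_closest]
    keep = np.abs(pc1[closest] - pc1[i]) <= pc1_sd_cut * pc1.std()
    return closest[keep]

print(peers(0)[:10])   # peer group for facility 0 (illustrative output)
```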
Affiliation(s)
- Patrick O'Mahen: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
- Paras Mehta: Department of Psychology, College of Liberal Arts and Social Sciences, University of Houston
- Melissa K Knox: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
- Christine Yang: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
- Mark Kuebeler: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
- Suja S Rajan: School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX
- Sylvia J Hysong: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
- Laura A Petersen: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center; Section of Health Services Research, Department of Medicine, Baylor College of Medicine
6. Improving team coordination in primary-care settings via multifaceted team-based feedback: a non-randomised controlled trial study. BJGP Open 2021; 5:BJGPO.2020.0185. [PMID: 33563700] [PMCID: PMC8170607] [DOI: 10.3399/bjgpo.2020.0185]
Abstract
Background Coordination is critical to successful team-based health care. Most clinicians, however, are not trained in effective coordination or teamwork. Audit and feedback (A&F) could improve team coordination, if designed with teams in mind. Aim The effectiveness of a multifaceted, A&F-plus-debrief intervention was tested to establish whether it improved coordination in primary care teams compared with controls. Design & setting Case-control trial within US Veterans Health Administration medical centres. Method Thirty-four primary care teams selected from four geographically distinct hospitals were compared with 34 administratively matched control teams. Intervention-arm teams received monthly A&F reports about key coordination behaviours and structured debriefings over 7 months. Control teams were followed exclusively via their clinical records. Outcome measures included a coordination composite and its component indicators (appointments starting on time, timely recall scheduling, emergency department utilisation, and electronic patient portal enrolment). Predictors included intervention arm, extent of exposure to intervention, and degree of multiple team membership (MTM). Results Intervention teams did not significantly improve over control teams, even after adjusting for MTM. Follow-up analyses indicated cross-team variability in intervention fidelity; although all intervention teams received feedback reports, not all teams attended all debriefings. Compared with their respective baselines, teams with high debriefing exposure improved significantly. Teams with high debriefing exposure improved significantly more than teams with low exposure. Low exposure teams significantly increased patient portal enrolment. Conclusion Team-based A&F, including adequate reflection time, can improve coordination; however, the effect is dose dependent. Consistency of debriefing appears more critical than proportion of team members attending a debriefing for ensuring implementation fidelity and effectiveness.
7. Hysong SJ, Smitham K, SoRelle R, Amspoker A, Hughes AM, Haidet P. Mental models of audit and feedback in primary care settings. Implement Sci 2018; 13:73. [PMID: 29848372] [PMCID: PMC5975441] [DOI: 10.1186/s13012-018-0764-3]
Abstract
BACKGROUND Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities. METHODS We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility's receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes. RESULTS We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models reflected a perceived utility of audit and feedback practices in improving performance, whereas negative mental models did not. CONCLUSIONS Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research in that variability in feedback mental models is associated with lower performance. Future research should seek to empirically link mental models revealed in this paper to high and low levels of clinical performance.
Affiliation(s)
- Sylvia J Hysong: Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA; Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX, 77030, USA; Center for Innovations in Quality, Effectiveness and Safety, Houston, Texas, USA
- Kristen Smitham: Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA; Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX, 77030, USA
- Richard SoRelle: VISN 4 Center for Evaluation of PACT (CEPACT), Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Amber Amspoker: Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA; Baylor College of Medicine, 2002 Holcombe Blvd, Houston, TX, 77030, USA
- Ashley M Hughes: Department of Biomedical and Health Information Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Paul Haidet: Penn State University College of Medicine, Hershey, PA, USA
8. Payne VL, Hysong SJ. Model depicting aspects of audit and feedback that impact physicians' acceptance of clinical performance feedback. BMC Health Serv Res 2016; 16:260. [PMID: 27412170] [PMCID: PMC4944319] [DOI: 10.1186/s12913-016-1486-3]
Abstract
Background Audit and feedback (A&F) is a strategy that has been used in various disciplines for performance and quality improvement. There is limited research regarding medical professionals' acceptance of clinical-performance feedback and whether feedback impacts clinical practice. The objectives of our research were to (1) investigate aspects of A&F that impact physicians' acceptance of performance feedback; (2) determine actions physicians take when receiving feedback; and (3) determine if feedback impacts physicians' patient-management behavior. Methods In this qualitative study, we employed grounded theory methods to perform a secondary analysis of semi-structured interviews with 12 VA primary care physicians. We analyzed a subset of interview questions from the primary study, which aimed to determine how providers at high-, low-, and moderately performing VA medical centers use performance feedback to maintain and improve quality of care, and to determine the perceived utility of performance feedback. Results Based on the themes emergent from our analysis and their observed relationships, we developed a model depicting aspects of the A&F process that impact feedback acceptance and physicians' patient-management behavior. The model comprises three core components – Reaction, Action and Impact – and depicts elements associated with feedback recipients' reaction to feedback, action taken when feedback is received, and physicians modifying their patient-management behavior. Feedback characteristics, the environment, external locus-of-control components, core values, emotion and the assessment process induce or deter reaction, action and impact. Feedback characteristics (content and timeliness) and the procedural justice of the assessment process (unjust penalties) impact feedback acceptance. External locus-of-control elements (financial incentives, competition), the environment (patient volume, time constraints) and emotion impact patient-management behavior. Receiving feedback generated intense emotion within physicians. The underlying source of the emotion was the assessment process, not the feedback. The emotional response impacted acceptance, impelled action or inaction, and impacted patient-management behavior. Emotion intensity was associated with the type of action taken (defensive, proactive, retroactive). Conclusions Feedback acceptance and impact have as much to do with the performance assessment process as they do with the feedback itself. In order to enhance feedback acceptance and the impact of feedback, developers of clinical performance systems and feedback interventions should consider multiple design elements.
Affiliation(s)
- Velma L Payne: Houston Center for Innovations in Quality, Effectiveness & Safety, Michael E. DeBakey Veterans Affairs Medical Center, 2002 Holcombe Blvd (MEDVAMC 152), Houston, TX, 77030, USA; Baylor College of Medicine, Houston, TX, USA
- Sylvia J Hysong: Houston Center for Innovations in Quality, Effectiveness & Safety, Michael E. DeBakey Veterans Affairs Medical Center, 2002 Holcombe Blvd (MEDVAMC 152), Houston, TX, 77030, USA; Baylor College of Medicine, Houston, TX, USA
9. Menon S, Smith MW, Sittig DF, Petersen NJ, Hysong SJ, Espadas D, Modi V, Singh H. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open 2014; 4:e005985. [PMID: 25387758] [PMCID: PMC4244393] [DOI: 10.1136/bmjopen-2014-005985]
Abstract
OBJECTIVES Electronic health record (EHR)-based alerts can facilitate transmission of test results to healthcare providers, helping ensure timely and appropriate follow-up. However, failure to follow up on abnormal test results (missed test results) persists in EHR-enabled healthcare settings. We aimed to identify contextual factors associated with facility-level variation in missed test results within the Veterans Affairs (VA) health system. DESIGN, SETTING AND PARTICIPANTS Based on a previous survey, we categorised VA facilities according to primary care providers' (PCPs') perceptions of low (n=20) versus high (n=20) risk of missed test results. We interviewed facility representatives to collect data on several contextual factors derived from a sociotechnical conceptual model of safe and effective EHR use. We compared these factors between facilities categorised as low and high perceived risk, adjusting for structural characteristics. RESULTS Facilities with low perceived risk were significantly more likely to use specific strategies to prevent alerts from being lost to follow-up (p=0.0114). Qualitative analysis identified three high-risk scenarios for missed test results: alerts on tests ordered by trainees, alerts 'handed off' to another covering clinician (surrogate clinician), and alerts on patients not assigned in the EHR to a PCP. Test result management policies and procedures to address these high-risk situations varied considerably across facilities. CONCLUSIONS Our study identified several scenarios that pose a higher risk for missed test results in EHR-based healthcare systems. In addition to implementing provider-level strategies to prevent missed test results, healthcare organisations should consider implementing monitoring systems to track missed test results.
Affiliation(s)
- Shailaja Menon, Michael W Smith, Dean F Sittig, Sylvia J Hysong, Donna Espadas, Varsha Modi, Hardeep Singh: Department of Medicine, Baylor College of Medicine, Center for Innovations in Quality, Effectiveness and Safety, the Michael E. DeBakey Veterans Affairs Medical Center and the Section of Health Services Research, Houston, Texas, USA
- Nancy J Petersen: University of Texas School of Biomedical Informatics and the UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas, USA
10. Silber JH, Rosenbaum PR, Ross RN, Ludwig JM, Wang W, Niknam BA, Saynisch PA, Even-Shoshan O, Kelz RR, Fleisher LA. A hospital-specific template for benchmarking its cost and quality. Health Serv Res 2014; 49:1475-97. [PMID: 25201167] [DOI: 10.1111/1475-6773.12226]
Abstract
OBJECTIVE Develop an improved method for auditing hospital cost and quality tailored to a specific hospital's patient population. DATA SOURCES/SETTING Medicare claims in general, gynecologic and urologic surgery, and orthopedics from Illinois, New York, and Texas between 2004 and 2006. STUDY DESIGN A template of 300 representative patients from a single index hospital was constructed and used to match 300 patients at 43 hospitals that had a minimum of 500 patients over a 3-year study period. DATA COLLECTION/EXTRACTION METHODS From each of 43 hospitals we chose 300 patients most resembling the template using multivariate matching. PRINCIPAL FINDINGS We found close matches on procedures and patient characteristics, far more balanced than would be expected in a randomized trial. There were little to no differences between the index hospital's template and the 43 hospitals on most patient characteristics yet large and significant differences in mortality, failure-to-rescue, and cost. CONCLUSION Matching can produce fair, directly standardized audits. From the perspective of the index hospital, "hospital-specific" template matching provides the fairness of direct standardization with the specific institutional relevance of indirect standardization. Using this approach, hospitals will be better able to examine their performance, and better determine why they are achieving the results they observe.
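Template matching, as described here, fixes a set of representative patients from the index hospital and then finds the most similar patients at each comparison hospital, so that cost and quality comparisons are made on like-for-like case mixes. The sketch below illustrates the idea with a simple greedy nearest-match on synthetic covariates; the actual study used formal multivariate matching, so the distance metric, covariates, and matching routine here are assumptions for illustration only.

```python
# Illustrative sketch: "hospital-specific template" matching.
# A fixed template of index-hospital patients is matched to the most similar
# patients at one comparison hospital. Greedy nearest matching on synthetic data;
# the study itself used formal multivariate matching.
import numpy as np

rng = np.random.default_rng(2)
n_template, n_hospital_patients, n_covariates = 300, 500, 6

template = rng.normal(size=(n_template, n_covariates))             # index-hospital patients
candidates = rng.normal(size=(n_hospital_patients, n_covariates))  # comparison-hospital patients

def greedy_template_match(template, candidates):
    """Match each template patient to a distinct candidate patient,
    taking the closest unused candidate by Euclidean distance."""
    used = np.zeros(len(candidates), dtype=bool)
    matches = []
    for t in template:
        d = np.linalg.norm(candidates - t, axis=1)
        d[used] = np.inf
        j = int(np.argmin(d))
        used[j] = True
        matches.append(j)
    return np.array(matches)

matched = greedy_template_match(template, candidates)
# Balance check: covariate means of the template and the matched sample should be close.
print(template.mean(axis=0).round(2))
print(candidates[matched].mean(axis=0).round(2))
```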
Affiliation(s)
- Jeffrey H Silber: The Department of Pediatrics, The University of Pennsylvania School of Medicine, Philadelphia, PA; Department of Anesthesiology and Critical Care, The University of Pennsylvania School of Medicine, Philadelphia, PA; Department of Health Care Management, The Wharton School, The University of Pennsylvania, Philadelphia, PA; The Leonard Davis Institute of Health Economics, The University of Pennsylvania, Philadelphia, PA
11. Sohn M, Jeong S, Lee HJ. Case-based context ontology construction using fuzzy set theory for personalized service in a smart home environment. Soft Comput 2014. [DOI: 10.1007/s00500-014-1288-7]
12. Byrne MM, Daw C, Pietz K, Reis B, Petersen LA. Creating peer groups for assessing and comparing nursing home performance. Am J Manag Care 2013; 19:933-9. [PMID: 24511989] [PMCID: PMC9980409]
Abstract
BACKGROUND Publicly reported performance data for hospitals and nursing homes are becoming ubiquitous. For such comparisons to be fair, facilities must be compared with their peers. OBJECTIVES To adapt a previously published methodology for developing hospital peer groupings so that it is applicable to nursing homes and to explore the characteristics of "nearest-neighbor" peer groupings. STUDY DESIGN Analysis of Department of Veterans Affairs administrative databases and nursing home facility characteristics. METHODS The nearest-neighbor methodology for developing peer groupings involves calculating the Euclidean distance between facilities based on facility characteristics. We describe our steps in selection of facility characteristics, describe the characteristics of nearest-neighbor peer groups, and compare them with peer groups derived through classical cluster analysis. RESULTS The facility characteristics most pertinent to nursing home groupings were found to be different from those that were most relevant for hospitals. Unlike classical cluster groups, nearest neighbor groups are not mutually exclusive, and the nearest-neighbor methodology resulted in nursing home peer groupings that were substantially less diffuse than nursing home peer groups created using traditional cluster analysis. CONCLUSION It is essential that healthcare policy makers and administrators have a means of fairly grouping facilities for the purposes of quality, cost, or efficiency comparisons. In this research, we show that a previously published methodology can be successfully applied to a nursing home setting. The same approach could be applied in other clinical settings such as primary care.
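The key distinction drawn in this abstract, mutually exclusive cluster-analysis groups versus overlapping nearest-neighbor groups built around each facility, is easy to see in code. The example below uses synthetic facility characteristics; the variable count, group size, and clustering settings are assumptions rather than the study's actual choices.

```python
# Illustrative sketch: facility-centered nearest-neighbor peer groups vs.
# mutually exclusive cluster-analysis groups. Synthetic data only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 8))     # 120 nursing homes, 8 standardized characteristics

# Classical clustering: every facility belongs to exactly one peer group.
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)

# Nearest-neighbor peers: each facility gets its own group of the k most similar
# facilities (Euclidean distance), so groups can overlap across facilities.
k = 20
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: a facility is its own nearest neighbor
_, idx = nn.kneighbors(X)
peer_groups = {i: set(idx[i, 1:]) for i in range(len(X))}

# Overlap demonstrates that nearest-neighbor groups are not mutually exclusive.
print("facilities 0 and 1 share", len(peer_groups[0] & peer_groups[1]), "peers")
print("cluster sizes:", np.bincount(clusters))
```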
Affiliation(s)
- Margaret M Byrne: Department of Epidemiology and Public Health, University of Miami, 1120 NW 14th St, Miami, FL 33136
- Christina Daw: Department of Health and Mental Hygiene, State of Maryland, Baltimore, MD
- Ken Pietz: Health Policy and Quality Program, Houston VA Health Services Research and Development Center of Excellence, Michael E. DeBakey VA Medical Center and Section for Health Services Research, Baylor College of Medicine, Houston, TX
- Brian Reis: Health Policy and Quality Program, Houston VA Health Services Research and Development Center of Excellence, Michael E. DeBakey VA Medical Center and Section for Health Services Research, Baylor College of Medicine, Houston, TX
- Laura A Petersen: Health Policy and Quality Program, Houston VA Health Services Research and Development Center of Excellence, Michael E. DeBakey VA Medical Center and Section for Health Services Research, Baylor College of Medicine, Houston, TX
13. Hysong SJ, Smitham KB, Knox M, Johnson KE, SoRelle R, Haidet P. Recruiting clinical personnel as research participants: a framework for assessing feasibility. Implement Sci 2013; 8:125. [PMID: 24153049] [PMCID: PMC4015152] [DOI: 10.1186/1748-5908-8-125]
Abstract
Increasing numbers of research studies test interventions for clinicians in addition to or instead of interventions for patients. Although previous studies have enumerated barriers to patient enrolment in clinical trials, corresponding barriers have not been identified for enrolling clinicians as subjects. We propose a framework of metrics for evidence-based estimation of time and resources required for recruiting clinicians as research participants, and present an example from a federally funded study. Our framework proposes metrics for tracking five steps in the recruitment process: gaining entry into facilities, obtaining accurate eligibility and contact information, reaching busy clinicians, assessing willingness to participate, and scheduling participants for data collection. We analyzed recruitment records from a qualitative study exploring performance feedback at US Department of Veterans Affairs Medical Centers (VAMCs); five recruiters sought to reach two clinicians at 16 facilities for a one-hour interview. Objective metrics were calculable for all five steps; metric values varied considerably across facilities. Obtaining accurate contact information slowed down recruiting the most. We conclude that successfully recruiting even small numbers of employees requires considerable resourcefulness and more calendar time than anticipated. Our proposed framework provides an empirical basis for estimating research-recruitment timelines, planning subject-recruitment strategies, and assessing the research accessibility of clinical sites.
Affiliation(s)
- Sylvia J Hysong: Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center, 2002 Holcombe Blvd (152), Houston, TX 77030, USA
14. Davila JA, Kramer JR, Duan Z, Richardson PA, Tyson GL, Sada YH, Kanwal F, El-Serag HB. Referral and receipt of treatment for hepatocellular carcinoma in United States veterans: effect of patient and nonpatient factors. Hepatology 2013; 57:1858-68. [PMID: 23359313] [PMCID: PMC4046942] [DOI: 10.1002/hep.26287]
Abstract
The delivery of treatment for hepatocellular carcinoma (HCC) could be influenced by the place of HCC diagnosis (hospitalization versus outpatient), subspecialty referral following diagnosis, as well as physician and facility factors. We conducted a study to examine the effect of patient and nonpatient factors on the place of HCC diagnosis, referral, and treatment in Veterans Administration (VA) hospitals in the United States. Using the VA Hepatitis C Clinical Case Registry, we identified hepatitis C virus (HCV)-infected patients who developed HCC during 1998-2006. All cases were verified and staged according to Barcelona Clinic Liver Cancer (BCLC) criteria. The main outcomes were place of HCC diagnosis, being seen by a surgeon or oncologist, and treatment. We examined factors related to these outcomes using hierarchical logistic regression. These factors included HCC stage, HCC surveillance, physician specialty, and facility factors, in addition to risk factors, comorbidity, and liver disease indicators. Approximately 37.2% of the 1,296 patients with HCC were diagnosed during hospitalization, 31.0% were seen by a surgeon or oncologist, and 34.3% received treatment. Being seen by a surgeon or oncologist was associated with surveillance (adjusted odds ratio [aOR] = 1.47; 95% CI: 1.20-1.80) and varied by geography (1.74; 1.09-2.77). Seeing a surgeon or oncologist was predictive of treatment (aOR = 1.43; 95% CI: 1.24-1.66). There was a significant increase in treatment among patients who received surveillance (aOR = 1.37; 95% CI: 1.02-1.71), were seen by gastroenterology (1.65; 1.21-2.24), or were diagnosed at a transplant facility (1.48; 1.15-1.90). CONCLUSION Approximately 40% of patients were diagnosed during hospitalization. Most patients were not seen by a surgeon or oncologist for treatment evaluation and only 34% received treatment. Only receipt of HCC surveillance was associated with increased likelihood of outpatient diagnosis, being seen by a surgeon or oncologist, and treatment.
Affiliation(s)
- Jessica A Davila: Houston VA Health Services Research Center of Excellence, Section of Health Services Research, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
15. Hysong SJ, Teal CR, Khan MJ, Haidet P. Improving quality of care through improved audit and feedback. Implement Sci 2012; 7:45. [PMID: 22607640] [PMCID: PMC3462705] [DOI: 10.1186/1748-5908-7-45]
Abstract
Background The Department of Veterans Affairs (VA) has led the industry in measuring facility performance as a critical element in improving quality of care, investing substantial resources to develop and maintain valid and cost-effective measures. The External Peer Review Program (EPRP) of the VA is the official data source for monitoring facility performance, used to prioritize the quality areas needing most attention. Facility performance measurement has significantly improved preventive and chronic care, as well as overall quality; however, much variability still exists in levels of performance across measures and facilities. Audit and feedback (A&F), an important component of effective performance measurement, can help reduce this variability and improve overall performance. Previous research suggests that VA Medical Centers (VAMCs) with high EPRP performance scores tend to use EPRP data as a feedback source. However, the manner in which EPRP data are used as a feedback source by individual providers as well as by service line, facility, and network leadership is not well understood. An in-depth understanding of the mental models, strategies, and specific feedback process characteristics adopted by high-performing facilities is thus urgently needed. This research compares how leaders of high-, low-, and moderately performing VAMCs use clinical performance data from the EPRP as a feedback tool to maintain and improve quality of care. Methods We will conduct a qualitative, grounded theory analysis of up to 64 interviews using a novel method of sampling primary care, facility, and Veterans Integrated Service Network (VISN) leadership at high-, moderate-, and low-performing facilities. We will analyze interviews for evidence of cross-facility differences in perceptions of performance data usefulness and in strategies for disseminating performance data and evaluating performance, with particular attention to the timeliness, individualization, and punitiveness of feedback delivery. Discussion Most research examining feedback to improve provider and facility performance lacks a detailed understanding of the elements of effective feedback. This research will highlight the elements most commonly used at high-performing facilities and identify additional features of their successful feedback strategies not previously identified. Armed with this information, practices can implement more effective A&F interventions to improve quality of care.
Affiliation(s)
- Sylvia J Hysong: Houston VA Health Services Research & Development Center of Excellence, Michael E. DeBakey VA Medical Center, Houston, TX, USA
16. Petersen LA, Simpson K, Sorelle R, Urech T, Chitwood SS. How variability in the institutional review board review process affects minimal-risk multisite health services research. Ann Intern Med 2012; 156:728-35. [PMID: 22586010] [PMCID: PMC4174365] [DOI: 10.7326/0003-4819-156-10-201205150-00011]
Abstract
BACKGROUND The Department of Health and Human Services recently called for public comment on human subjects research protections. OBJECTIVE To assess variability in reviews across institutional review boards (IRBs) for a multisite, minimal-risk trial of financial incentives for evidence-based hypertension care and to quantify the effect of review determinations on site participation, budget, and timeline. DESIGN A natural experiment occurring from multiple IRBs reviewing the same protocol for a multicenter trial (May 2005 to October 2007). PARTICIPANTS 25 Veterans Affairs (VA) medical centers. MEASUREMENTS Number of submissions, time to approval, and costs were evaluated; patient complexity, academic affiliation, size, and location (urban or rural) between participating and nonparticipating VA medical centers were compared. RESULTS Of 25 eligible VA medical centers, 6 did not meet requirements for IRB review and 2 declined to participate. Of 17 applications, 14 were approved. The process required 115 submissions, lasted 27 months, and cost close to $170 000 in staff salaries. One IRB's concern about incentivizing a particular medication recommended by national guidelines prompted a change in our design to broaden our inclusion criteria beyond uncomplicated hypertension. The change required amending the protocol at 14 sites to preserve internal validity. The IRBs that approved the protocol classified it as minimal risk. The 12 sites that ultimately participated in the trial were more likely to be urban and academically affiliated and to care for more complex patients, which limits the external validity of the trial's findings. LIMITATION Because data came from a single multisite trial in the VA system that uses a 2-stage review process, generalizability is limited. CONCLUSION Complying with IRB requirements for a minimal-risk study required substantial resources and threatened the study's internal and external validity. The current review of regulatory requirements may address some of these problems.
Affiliation(s)
- Laura A Petersen: Health Services Research and Development (152), Michael E. DeBakey Veterans Affairs Medical Center, 2002 Holcombe Boulevard, Houston, TX 77030, USA
17. Non-high-density lipoprotein cholesterol reporting and goal attainment in primary care. J Clin Lipidol 2012; 6:545-52. [PMID: 23312050] [DOI: 10.1016/j.jacl.2012.04.080]
Abstract
BACKGROUND The Adult Treatment Panel III guidelines established non-high-density lipoprotein cholesterol (non-HDL-C) as a secondary treatment target. However, non-HDL-C levels are not reported on standard lipid panels by many hospital-based and/or commercial biochemical laboratories. OBJECTIVE We determined whether reporting non-HDL-C was associated with improved non-HDL-C goal attainment. METHODS We identified patients with cardiovascular disease (CVD) and/or diabetes receiving care within the Veterans Health Administration. We matched a facility that reported non-HDL-C levels on lipid panels (3994 CVD and 5108 diabetes patients) to a facility with similar size, patient complexity, and academic mission that did not report non-HDL-C (4269 CVD and 6591 diabetes patients). We performed patient-level analysis to assess differences in non-HDL-C from baseline to the most recent lipid panel at these facilities. RESULTS Baseline non-HDL-C levels for CVD patients were 114 mg/dL and 107 mg/dL at the reporting and nonreporting facilities, respectively. At 2.3-year follow-up, non-HDL-C levels decreased at both facilities but by a greater amount at the reporting facility (-11 mg/dL vs -3 mg/dL at the nonreporting facility, P < .001). Results remained significant (P < .001) after we adjusted for patient's age, race, gender, illness burden, history of diabetes, hypertension, medication adherence, statin use, number of lipid panels, and number of primary care visits between baseline and follow-up. Reductions were greater among CVD patients with triglycerides ≥200 mg/dL (-25 mg/dL vs -16 mg/dL at the respective facilities, P = .004). Results were similar in diabetes patients. Reporting was also associated with greater proportions of patients meeting non-HDL-C treatment goal of <130 mg/dL. CONCLUSION Non-HDL-C reporting could improve non-HDL-C goal attainment.
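Non-HDL-C is conventionally derived by subtracting HDL cholesterol from total cholesterol, both of which already appear on a standard lipid panel, so reporting it is essentially a formatting decision by the laboratory. A trivial sketch follows; the values are made up, and the <130 mg/dL goal is the target cited in the abstract for this population.

```python
# Non-HDL-C is derived from values already on a standard lipid panel:
# non-HDL-C = total cholesterol - HDL cholesterol (both in mg/dL).
def non_hdl_c(total_cholesterol_mg_dl: float, hdl_c_mg_dl: float) -> float:
    return total_cholesterol_mg_dl - hdl_c_mg_dl

# Example with made-up values, checked against the <130 mg/dL goal from the abstract.
value = non_hdl_c(total_cholesterol_mg_dl=190, hdl_c_mg_dl=45)
print(value, "mg/dL:", "at goal" if value < 130 else "above goal")
```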
18. Kang HC, Hong JS, Park HJ. Development of peer-group-classification criteria for the comparison of cost efficiency among general hospitals under the Korean NHI program. Health Serv Res 2012; 47:1719-38. [PMID: 22356558] [DOI: 10.1111/j.1475-6773.2012.01379.x]
Abstract
OBJECTIVES To classify general hospitals into homogeneous systematic-risk groups in order to compare cost efficiency and propose peer-group-classification criteria. DATA SOURCES Health care institution registration data and inpatient-episode-based claims data submitted by the Korea National Health Insurance system to the Health Insurance Review and Assessment Service from July 2007 to December 2009. STUDY DESIGN Cluster analysis was performed to classify general hospitals into peer groups based on similarities in hospital characteristics, case mix complexity, and service-distribution characteristics. Classification criteria reflecting clustering were developed. To test whether the new peer groups better adjusted for differences in systematic risks among peer groups, we compared the R² statistics of the current and proposed peer groups according to total variations in medical costs per episode and case mix indices influencing the cost efficiency. DATA COLLECTION A total of 1,236,471 inpatient episodes were constructed for 222 general hospitals in 2008. PRINCIPAL FINDINGS New criteria were developed to classify general hospitals into three peer groups (large general hospitals, small and medium general hospitals treating severe cases, and small and medium general hospitals) according to size and case mix index. CONCLUSIONS This study provides information about using peer grouping to enhance fairness in the performance assessment of health care providers.
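The test described here, whether a proposed peer grouping adjusts for systematic risk better than the current one, amounts to comparing how much of the variation in cost per episode each grouping explains (an R² from a grouping-only model). A minimal sketch with synthetic data follows; the grouping variables and cost model are assumptions for illustration, not the study's data.

```python
# Illustrative sketch: compare how much of the variation in cost per episode two
# alternative peer groupings explain (R^2 of a grouping-only model). Synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n_hospitals = 222
size = rng.choice([0, 1, 2], size=n_hospitals)            # e.g., small/medium/large
severity = rng.normal(size=n_hospitals)                    # case mix complexity
cost = 100 + 30 * size + 15 * (severity > 0.5) + rng.normal(0, 10, n_hospitals)

current_groups = size                                      # current criteria: size only
proposed_groups = size * 2 + (severity > 0.5)              # proposed: size x severity tier

def r_squared(cost, groups):
    """Share of total cost variation explained by group membership
    (between-group sum of squares / total sum of squares)."""
    grand_mean = cost.mean()
    ss_total = ((cost - grand_mean) ** 2).sum()
    ss_between = sum(len(cost[groups == g]) * (cost[groups == g].mean() - grand_mean) ** 2
                     for g in np.unique(groups))
    return ss_between / ss_total

print("current grouping R^2: ", round(r_squared(cost, current_groups), 3))
print("proposed grouping R^2:", round(r_squared(cost, proposed_groups), 3))
```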
Affiliation(s)
- Hee-Chung Kang: Review & Assessment Research Division, Health Insurance Review and Assessment Service, Seoul, Korea
19. Passive monitoring versus active assessment of clinical performance: impact on measured quality of care. Med Care 2011; 49:883-90. [PMID: 21918399] [DOI: 10.1097/mlr.0b013e318222a36c]
Abstract
CONTEXT Measurement of hospitals' clinical performance is becoming increasingly common in an effort to inform patient choices, payer reimbursement decisions, and quality improvement initiatives such as pay-for-performance. As more measures are developed, the intensity with which measures are monitored changes. Performance measures are often retired after a period of sustained performance and are not monitored as actively as other measures where performance is more variable. The effect of actively versus passively monitoring performance on measured quality of care is not known. OBJECTIVE We compared the nature and rate of change in hospital outpatient clinical performance as a function of a measure's status (active vs. passive), and examined the mean time to stability of performance after a change in status. We hypothesized that performance would be higher when measures are actively monitored than when they are passively monitored. DESIGN Longitudinal, hierarchical retrospective analyses of outpatient clinical performance measure data from the Veterans Health Administration's External Peer Review Program from 2000 to 2008. SETTING One hundred thirty-three Veterans Health Administration Medical Centers throughout the United States and its associated territories. MAIN OUTCOME MEASURES Clinical performance on 17 measures covering 5 clinical areas common to ambulatory care: screening, immunization, chronic care after acute myocardial infarction, diabetes mellitus, and hypertension. RESULTS Contrary to expectations, we found that measure status (whether active or passive) did not significantly affect performance over time; time to stability of performance varied considerably by measure and did not appear to covary with performance at the stability point (i.e., performance scores for measures with short stability times were no higher or lower than scores for measures with longer stability times). CONCLUSIONS We found no significant "extinction" of performance after measures were retired, suggesting that other features of the health care system, such as organizational policies and procedures or other structural features, may be creating a "strong situation" that sustains performance. Future research should aim to better understand the effects of monitoring performance with process-of-care measures and how to create sustained high performance.
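A minimal sketch of the kind of longitudinal, hierarchical model such an analysis might use is given below: performance is regressed on time and an active-versus-passive indicator, with a random intercept per facility. The variable names, the simulated data, and the choice of a linear mixed model are assumptions made for illustration and do not reproduce the study's actual model.

    # Hedged sketch: facility-level random-intercept model of performance over time,
    # with an indicator for whether the measure is actively monitored.
    # All names and data are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for facility in range(50):
        base = rng.normal(0.80, 0.05)          # facility-specific baseline performance
        for year in range(9):                  # e.g., 2000-2008
            active = 1 if year < 5 else 0      # measure "retired" after year 5 (assumed)
            perf = base + 0.01 * year + rng.normal(0, 0.02)
            rows.append({"facility": facility, "year": year,
                         "active": active, "perf": perf})
    df = pd.DataFrame(rows)

    # Random intercept for facility; fixed effects for time trend and monitoring status.
    model = smf.mixedlm("perf ~ year + active", df, groups=df["facility"]).fit()
    print(model.summary())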
|
20
|
Kramer JR, Kanwal F, Richardson P, Giordano TP, Petersen LA, El-Serag HB. Importance of patient, provider, and facility predictors of hepatitis C virus treatment in veterans: a national study. Am J Gastroenterol 2011; 106:483-91. [PMID: 21063393 DOI: 10.1038/ajg.2010.430] [Citation(s) in RCA: 67] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
OBJECTIVES Several patient characteristics are known to affect hepatitis C virus (HCV) antiviral treatment rates. However, it is unclear whether, and to what extent, health-care provider or facility characteristics affect HCV treatment rates. METHODS Using national data obtained from the Department of Veterans Affairs (VA) HCV Clinical Case Registry, we conducted a retrospective cohort study of patients with active HCV viremia who were diagnosed between 2003 and 2004. We evaluated patient-, provider-, and facility-level predictors of receipt of HCV treatment with hierarchical logistic regression. RESULTS The overall HCV treatment rate in 29,695 patients was 14.2%. The strongest independent predictor of receipt of treatment was consultation with an HCV specialist (odds ratio = 9.34; 95% confidence interval 8.03-10.87). Patients were less likely to receive HCV treatment if they were Black, older, male, current users of alcohol or drugs, had HCV genotype 1 or 4, had higher creatinine levels, or had severe anxiety/post-traumatic stress disorder or depression. Patients with high hemoglobin levels, cirrhosis, and persistently high liver enzyme levels were more likely to receive treatment. Patient, provider, and facility factors explained 15%, 4%, and 4%, respectively, of the variation in treatment rates. CONCLUSIONS Treatment rates for HCV are low in the VA. In addition to several important patient-level characteristics, consultation with a specialist plays a vital role in determining whether a patient receives HCV treatment. These findings support the development of patient-level interventions targeted at identifying and managing comorbidities and contraindications and at fostering greater involvement of specialists in the care of HCV.
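The attribution of 15%, 4%, and 4% of variation to patient, provider, and facility levels reflects variance partitioning in a multilevel logistic model. One common way to express such shares is the latent-variable method, in which the patient-level residual variance is fixed at π²/3; the sketch below uses made-up variance components solely to show the arithmetic, not the study's estimates.

    # Illustrative only: expressing level-specific shares of variation in a multilevel
    # logistic model using the standard latent-variable method (patient-level residual
    # variance fixed at pi^2 / 3). The variance components below are invented numbers.
    import math

    resid_patient = math.pi ** 2 / 3   # latent-scale patient-level residual variance
    var_provider = 0.15                # assumed provider random-intercept variance
    var_facility = 0.15                # assumed facility random-intercept variance
    total = resid_patient + var_provider + var_facility

    for name, v in [("patient", resid_patient),
                    ("provider", var_provider),
                    ("facility", var_facility)]:
        print(f"{name}: {100 * v / total:.1f}% of latent variance")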
Affiliation(s)
- Jennifer R Kramer
- Houston VA Health Services Research & Development Center of Excellence, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas 77030, USA.
|
21
|
Kramer JR, Hachem CY, Kanwal F, Mei M, El-Serag HB. Meeting vaccination quality measures for hepatitis A and B virus in patients with chronic hepatitis C infection. Hepatology 2011; 53:42-52. [PMID: 21254161 DOI: 10.1002/hep.24024] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/26/2010] [Accepted: 09/23/2010] [Indexed: 12/27/2022]
Abstract
UNLABELLED Coinfection with hepatitis A virus (HAV) or hepatitis B virus (HBV) in patients with chronic hepatitis C virus (HCV) infection is associated with increased morbidity and mortality. The Centers for Medicare and Medicaid Services has identified HAV and HBV vaccination as a priority area for quality measurement in HCV. It is unclear to what extent patients with HCV meet these recommendations. We used national data from the Department of Veterans Affairs HCV Clinical Case Registry to evaluate the prevalence and predictors of meeting the quality measure (QM) of receiving vaccination or having documented immunity to HAV and HBV in patients with chronic HCV. We identified 88,456 patients who had overall vaccination rates of 21.9% and 20.7% for HBV and HAV, respectively. The QM rates were 57.0% and 45.5% for HBV and HAV, respectively. Patients who were nonwhite or who had elevated alanine aminotransferase levels, cirrhosis, or human immunodeficiency virus were more likely to meet the HBV QM. Factors related to HCV care were also determinants of meeting the HBV QM. These factors included receiving a specialist consult, genotype testing, or HCV treatment. Patients who were older, had psychosis, and had a higher comorbidity score were less likely to meet the HBV QM. With a few exceptions, similar variables were related to meeting the HAV QM. The incidence of superinfection with acute HBV and HAV was low, but it was significantly lower in patients who received vaccination than in those who did not. CONCLUSION Quality measure rates for HAV and HBV are suboptimal for patients with chronic HCV. In addition, several patient-related factors and receipt of HCV-related care are associated with a higher likelihood of meeting QMs.
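The QM rates above are computed over a numerator of patients who were either vaccinated or had documented immunity. A minimal sketch of that tabulation is shown below; the column names and toy data are hypothetical, not the registry's actual schema.

    # Hypothetical sketch: a patient meets the vaccination quality measure if they
    # were vaccinated OR have documented immunity. Columns and values are toy data.
    import pandas as pd

    patients = pd.DataFrame({
        "hbv_vaccinated": [1, 0, 0, 1],
        "hbv_immune":     [0, 1, 0, 0],
        "hav_vaccinated": [0, 0, 1, 1],
        "hav_immune":     [1, 0, 0, 0],
    })

    hbv_qm = (patients["hbv_vaccinated"] | patients["hbv_immune"]).mean()
    hav_qm = (patients["hav_vaccinated"] | patients["hav_immune"]).mean()
    print(f"HBV QM rate: {hbv_qm:.1%}, HAV QM rate: {hav_qm:.1%}")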
Affiliation(s)
- Jennifer R Kramer
- Houston VA Health Services Research & Development Center of Excellence, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA.
|