1. Tran G, Kelly B, Hammersley M, Norman J, Okely A. The utility of website-based quality improvement tools for health professionals: a systematic review. Int J Qual Health Care 2024; 36:mzae068. PMID: 38985665; PMCID: PMC11277856; DOI: 10.1093/intqhc/mzae068.
Abstract
As technology continues to advance, it is important to understand how website-based tools can support quality improvement. Website-based tools are resources, such as toolkits, that users can access and use autonomously through a dedicated website. This review examined how website-based tools can support healthcare professionals with quality improvement, including the optimal processes for developing tools and the elements of an effective tool. A systematic search of seven databases was conducted for articles published between January 2012 and January 2024. Articles were included if they were peer reviewed, written in English, based in health settings, and reported the development or evaluation of a quality improvement website-based tool for professionals. A narrative synthesis was conducted using NVivo, and risk of bias was assessed using the Mixed Methods Appraisal Tool. All papers were independently screened and coded by two authors using Braun and Clarke's six-phase conceptual framework. Eighteen studies met the inclusion criteria. The themes identified were tool development processes, quality improvement mechanisms, and barriers and facilitators to tool usage. Digitalizing existing quality improvement processes (n = 7), identifying gaps in practice (n = 6), and contributing to professional development (n = 3) were common quality improvement aims. Tools were associated with reported improvements in the accuracy and efficiency of clinical tasks, adherence to guidelines, reflective practice, and the provision of tailored feedback for continuous quality improvement. Common features were educational resources (n = 7) and helping the user assess current practices against standards or recommendations (n = 6); these supported professionals in achieving better clinical outcomes, increased professional satisfaction, and streamlined workflows in various settings.

Reported facilitators of tool usage included relevance to practice, accessibility, and support for multidisciplinary action, which made the tools practical and time-efficient for healthcare. Reported barriers included tools being time-consuming, irrelevant to practice, or difficult to use, and a lack of organizational engagement. Almost all tools were co-developed with stakeholders, although the co-design approaches varied, reflecting different levels of stakeholder engagement and adoption of co-design methodologies. The quality of the included studies was low. These findings offer valuable insights for the future development of website-based quality improvement tools in healthcare. Recommendations include co-developing tools with healthcare professionals, focusing on practical usability, and addressing common barriers to enhance engagement and effectiveness in improving healthcare quality. Randomized controlled trials are warranted to provide objective evidence of tool efficacy.
Affiliation(s)
- Georgie Tran: Early Start, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, NSW 2522, Australia
- Bridget Kelly: Early Start, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, NSW 2522, Australia
- Megan Hammersley: Early Start, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, NSW 2522, Australia
- Jennifer Norman: Health Promotion Service, Illawarra Shoalhaven Local Health District, Warrawong, NSW 2502, Australia
- Anthony Okely: Early Start, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, NSW 2522, Australia
2. MacEachern L, Ginsburg LR, Hoben M, Doupe M, Wagg A, Knopp-Sihota JA, Cranley L, Song Y, Estabrooks CA, Berta W. Developing a tool to measure enactment of complex quality improvement interventions in healthcare. BMJ Open Qual 2023; 12:bmjoq-2022-002027. PMID: 36754540; PMCID: PMC9923287; DOI: 10.1136/bmjoq-2022-002027.
Abstract
Quality improvement (QI) projects are common in healthcare settings and often involve interdisciplinary teams working together towards a common goal. Many interventions and programmes have been introduced through research to convey QI skills and knowledge to healthcare workers; however, few studies have attempted to differentiate between what individuals 'learn' or 'know' and their capacity to apply their learning in complex healthcare settings. Understanding and differentiating between the delivery, receipt, and enactment of QI skills and knowledge is important because, while enactment alone does not guarantee desired QI outcomes, it is reasonable to assume that better enactment is likely to lead to better outcomes. This paper describes the development, application, and validation of a tool to measure enactment of the core QI skills and knowledge of a complex QI intervention in a healthcare setting. Based on the Institute for Healthcare Improvement's Model for Improvement, existing QI assessment tools, the literature on enactment fidelity, and our research protocols, 10 indicators related to core QI skills and knowledge were determined. Definitions and assessment criteria were tested and refined in five iterative cycles. Qualitative data from four QI teams in long-term care homes were used to test and validate the tool. The final measurement tool contains 10 QI indicators and a five-point scale. Inter-rater reliability ranged from good to excellent, and usability and acceptability among raters were high. The tool assists in identifying the strengths and weaknesses of a QI team and allows targeted feedback on core QI components. The indicators developed in our tool and the approach to tool development may be useful in other health-related contexts where similar data are collected.
Affiliation(s)
- Lauren MacEachern: Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Liane R Ginsburg: Health Policy & Management, York University, Toronto, Ontario, Canada
- Matthias Hoben: School of Health Policy and Management, York University, Toronto, Ontario, Canada; Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
- Malcolm Doupe: Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Manitoba, Canada; Centre for Care Research, Western Norway University of Applied Sciences, Bergen, Norway
- Adrian Wagg: Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Lisa Cranley: Lawrence S Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
- Yuting Song: Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada; School of Nursing, Qingdao University, Qingdao, Shandong, China
- Whitney Berta: Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
3. Abraham C, Johnson-Martinez K, Tomolo A. A Scoring Rubric for the Knowledge Section of the Systems Quality Improvement Training and Assessment Tool. MedEdPORTAL 2022; 18:11290. PMID: 36605542; PMCID: PMC9744987; DOI: 10.15766/mep_2374-8265.11290.
Abstract
INTRODUCTION: Quality improvement (QI) competencies for health professions trainees were developed to address health care quality. Strategies to integrate QI into curricula exist, but methods for assessing interdisciplinary learners' competency are less developed. We refined the Knowledge section scoring rubric of the Systems Quality Improvement Training and Assessment Tool (SQI TAT) and examined its validity evidence.

METHODS: In 2017, the SQI TAT Knowledge section was expanded to cover seven core QI concepts, and the scoring rubric was refined. Three coders independently scored 35 SQI TAT Knowledge sections (18 pretests, 17 posttests). Interrater reliability was assessed by percent agreement and Cohen's kappa for individual variables and by Lin's concordance correlation for total knowledge and application scores. Concurrent validity was assessed by comparing responses from two groups with different QI exposure and evaluating whether differences in exposure were detected.

RESULTS: Total-score interrater reliability average measures of concordance were .89 for all coders and >.70 for six of seven concept scores. The total score discriminated between the two groups (p < .05), and five of seven concept scores were higher for the group with more QI experience. Total scores were significantly higher at posttest than at pretest (p < .001).

DISCUSSION: The SQI TAT Knowledge section provides a comprehensive assessment of QI knowledge, and the scoring rubric was able to discriminate QI knowledge along a continuum. Because the Knowledge section is not linked to a clinical context, it is useful for assessing interprofessional learners across varying education levels.
Affiliation(s)
- Corrine Abraham: Associate Professor, Nell Hodgson Woodruff School of Nursing at Emory University; Coordinator, Evidence-Based Practice and Innovation, and Co-Director, VA Quality Scholars Fellowship Program, Atlanta VA Health Care System
- Krysta Johnson-Martinez: Specialty Care Lead and Chief Medical Informatics Officer, VISN 8 VA Sunshine Healthcare Network
- Anne Tomolo: Physician, National Center for Patient Safety; Associate Professor, Emory University School of Medicine
4. Ahuja V, Gorecka J, Yoo P, Emerson BL. A longitudinal course pilot to improve surgical resident acquisition of quality improvement skills. PLoS One 2021; 16:e0254922. PMID: 34280243; PMCID: PMC8289028; DOI: 10.1371/journal.pone.0254922.
Abstract
PROBLEM: Despite mounting evidence that incorporating QI curricula into surgical trainee education improves morbidity and outcomes, surgery training programs lack standardized QI curricula and tools to measure QI knowledge. In the current study, we developed, implemented, and evaluated a quality improvement curriculum for surgical residents.

INTERVENTION: Surgical trainees participated in a longitudinal, year-long (2019-2020) curriculum based on the Institute for Healthcare Improvement's online program, supplemented with in-person didactics and small-group projects. Acquisition of skills was assessed pre- and post-course via self-report on a Likert scale and the Quality Improvement Knowledge Application Tool (QIKAT). Self-efficacy was assessed using the General Self-Efficacy Scale. Nine of the 18 course participants completed the post-course survey; this first cohort was analyzed as a pilot for future work.

CONTEXT: The project was developed and deployed among surgical residents during their research/lab year. Teams of surgical residents were partnered with a faculty project mentor and non-physician teammates for project work.

IMPACT: Participation in the QI course significantly increased self-reported skills related to studying the process (p = 0.0463), making changes in a system (p = 0.0167), identifying whether a change leads to an improvement (p = 0.0039), using small cycles of change (p < 0.0001), identifying best practices and comparing them to local practices (p = 0.0020), using the PDSA model as a systematic framework for trial and learning (p = 0.0004), identifying how data are linked to specific processes (p = 0.0488), and building the next improvement cycle upon success or failure (p = 0.0316). There was also significant improvement in aim (p = 0.037) and change (p = 0.029) responses to one QIKAT vignette.

LESSONS LEARNED: A pilot longitudinal, multi-component QI course based on the IHI online curriculum was effective in improving surgical trainees' knowledge and use of key QI skills.
Affiliation(s)
- Vanita Ahuja: Department of Surgery, Yale School of Medicine, New Haven, Connecticut, United States of America
- Jolanta Gorecka: Department of Surgery, Yale School of Medicine, New Haven, Connecticut, United States of America
- Peter Yoo: Department of Surgery, Yale School of Medicine, New Haven, Connecticut, United States of America
- Beth L. Emerson: Department of Pediatrics, Section of Pediatric Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
5. Connor DM, Durning SJ, Rencic JJ. Clinical Reasoning as a Core Competency. Acad Med 2020; 95:1166-1171. PMID: 31577583; DOI: 10.1097/acm.0000000000003027.
Abstract
Diagnostic error is a challenging problem; addressing it effectively will require innovation across multiple domains of health care, including medical education. Diagnostic errors often relate to problems with clinical reasoning, which involves the cognitive and relational steps up to and including establishing a diagnostic and therapeutic plan with a patient. However, despite a call from the National Academies of Sciences for medical educators to improve the teaching and assessment of clinical reasoning, the creation of explicit, theory-informed clinical reasoning curricula, faculty development resources, and assessment tools has proceeded slowly in both undergraduate and graduate medical education. To accelerate the development of this critical element of health professions education and to promote needed research and innovation in clinical reasoning education, the Accreditation Council for Graduate Medical Education (ACGME) should revise its core competencies to include clinical reasoning. The core competencies have proven to be an effective means of expanding educational innovation across the United States and ensuring buy-in across a diverse array of institutions and disciplines. Reformulating the ACGME core competencies to include clinical reasoning would spark much-needed educational innovation and scholarship in graduate medical education, as well as collaboration across institutions in this vital aspect of physicianship, and ultimately, could contribute to a reduction of patient suffering by better preparing trainees to build individual, team-based, and system-based tools to monitor for and avoid diagnostic error.
Affiliation(s)
- Denise M Connor: associate professor of clinical medicine, Department of Medicine, and director of the Diagnostic Reasoning Block, School of Medicine, University of California, San Francisco; associate program director of PRIME, an area of distinction for internal medicine residents based at the San Francisco Veterans Affairs Medical Center, San Francisco, California
- Steven J Durning: professor, Departments of Medicine and Pathology, and director, Graduate Programs in Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, Maryland
- Joseph J Rencic: professor, Department of Internal Medicine, Tufts University School of Medicine, and associate program director, Internal Medicine Residency Program, Tufts Medical Center, Boston, Massachusetts
6. Corser W, Church B, Rohrer J, Hortos K. The Statewide Campus System Scholarly Activity Developmental Planning Framework for Community-Based GME Leaders. Spartan Med Res J 2018; 3:6521. PMID: 33655133; PMCID: PMC7746042.
Abstract
CONTEXT: In recent years, Graduate Medical Education (GME) leaders in the United States have witnessed many substantive changes, including the movement to a single accreditation system under the Accreditation Council for Graduate Medical Education. Both MD- and DO-trained residents and faculty must now meet an increasingly stringent set of accreditation standards outlined in the Next Accreditation System. Specifically, updated scholarly activity standards emphasize a consistent volume of quality improvement/research projects and dissemination products. The GME literature to date has frequently offered general commentaries on individual project strategies or been oriented to settings with greater project-related resources. Few articles have offered scholarly activity planning strategies for community-based GME officials striving to increase scholarly activity levels.

PROPOSED PLANNING FRAMEWORK: The authors propose a customizable assessment-planning framework, largely derived from their combined decades of consultation experience with hundreds of community-based resident and faculty projects. They first describe the primary elements of their proposed scholarly activity planning approach for GME leaders, who are so often subject to worsening resource constraints, and then describe six ongoing developmental strategies, illustrated with several exemplars. Such a framework will likely require ongoing reassessment and modification.

CONCLUSIONS: The authors hope that this planning framework offers GME administrators, faculty, and residents a pragmatic set of strategies for developing scholarly activity projects and supports. Ideally, GME leaders can use this approach to inform the design of a sustainable, system-customized infrastructure of scholarly activity supports.
Affiliation(s)
- William Corser: Michigan State University Statewide Campus System, College of Osteopathic Medicine, East Lansing, MI 48824
- Brandy Church: Michigan State University Statewide Campus System, College of Osteopathic Medicine, East Lansing, MI 48824
- Jonathan Rohrer: Michigan State University Statewide Campus System, College of Osteopathic Medicine, East Lansing, MI 48824
- Kari Hortos: Michigan State University Statewide Campus System, College of Osteopathic Medicine, East Lansing, MI 48824
7. Rosenbluth G. Development of a Multi-Domain Assessment Tool for Quality Improvement Projects. J Grad Med Educ 2017; 9:473-478. PMID: 28824761; PMCID: PMC5559243; DOI: 10.4300/jgme-d-17-00041.1.
Abstract
BACKGROUND: Improving the quality of health care and education has become a mandate at all levels within the medical profession. While several published quality improvement (QI) assessment tools exist, all have limitations in addressing the range of QI projects undertaken by learners in undergraduate medical education, graduate medical education, and continuing medical education.

OBJECTIVE: We developed and validated a tool to assess QI projects with learner engagement across the educational continuum.

METHODS: After reviewing existing tools, we interviewed local faculty who taught QI to understand how learners were engaged and what these faculty wanted in an ideal assessment tool. We then developed a list of competencies associated with QI, established items linked to these competencies, revised the items using an iterative process, and collected validity evidence for the tool.

RESULTS: The resulting Multi-Domain Assessment of Quality Improvement Projects (MAQIP) rating tool contains 9 items, with criteria that may be completely fulfilled, partially fulfilled, or not fulfilled. Interrater reliability was 0.77. Untrained local faculty were able to use the tool with minimal guidance.

CONCLUSIONS: The MAQIP is a 9-item, user-friendly tool that can be used to assess QI projects at various stages and to provide formative and summative feedback to learners at all levels.
8.

Abstract
PURPOSE: Leading health systems have invested in substantial quality improvement (QI) capacity building, but little is known about the aggregate effect of these investments at the health system level. We conducted a systematic review to identify key steps and elements that should be considered for system-level evaluations of investment in QI capacity building.

METHODS: We searched for evaluations of QI capacity building and evaluations of QI training programmes. We included the most relevant indexed databases in the field and a strategic search of the grey literature, including direct electronic scanning of 85 relevant government and institutional websites internationally. Data were extracted regarding evaluation design and common assessment themes and components.

RESULTS: Forty-eight articles met the inclusion criteria. Of these, 46 described initiative-level non-economic evaluations of QI capacity building/training, while 2 included economic evaluations, also at the initiative level. No system-level evaluations of QI capacity building/training were found. We identified 17 evaluation components that fit within 5 overarching dimensions (characteristics of QI training; characteristics of QI activity; individual capacity; organisational capacity; and impact) that should be considered in evaluations of QI capacity building. Eight key steps in return-on-investment (ROI) assessments of QI capacity building were identified: (1) planning: stakeholder perspective; (2) planning: temporal perspective; (3) identifying costs; (4) identifying benefits; (5) identifying intangible benefits that will not be included in the ROI estimation; (6) discerning attribution; (7) ROI calculations; and (8) sensitivity analysis.

CONCLUSIONS: The literature on evaluating QI capacity building is limited in the number and scope of studies. Our findings, summarised in a Framework to Guide Evaluations of QI Capacity Building, can be used to start closing this knowledge gap.
Affiliation(s)
- Gustavo Mery: Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Mark J Dobrow: Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- G Ross Baker: Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Jennifer Im: Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Adalsteinn Brown: Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
9. Doupnik SK, Ziniel SI, Glissmeyer EW, Moses JM. Validity and Reliability of a Tool to Assess Quality Improvement Knowledge and Skills in Pediatrics Residents. J Grad Med Educ 2017; 9:79-84. PMID: 28261399; PMCID: PMC5319634; DOI: 10.4300/jgme-d-15-00799.1.
Abstract
BACKGROUND: Residency programs are expected to educate residents in quality improvement (QI). Effective assessments are needed to ensure residents gain QI knowledge and skills. Limitations of current tools include poor interrater reliability and the requirement for scorer training.

OBJECTIVE: To provide evidence for the validity of the Assessment of Quality Improvement Knowledge and Skills (AQIKS), a new tool that provides a summative assessment of pediatrics residents' ability to recall QI concepts and apply them to a clinical scenario.

METHODS: We conducted a quasi-experimental study to measure AQIKS performance in 2 groups of pediatrics residents: postgraduate year (PGY) 2 residents who participated in a 1-year longitudinal QI curriculum, and a concurrent control group of PGY-1 residents who received no formal QI training. The curriculum included 20 hours of didactics and participation in a resident-led QI project. Three faculty members with clinical QI experience, who were not involved in the curriculum and received no additional training, scored the AQIKS.

RESULTS: Complete data were obtained for 30 of 37 residents (81%) in the intervention group and 36 of 40 residents (90%) in the control group. After completing the QI curriculum, the intervention group's mean score was 40% higher than at baseline (P < .001), while the control group showed no improvement (P = .29). Interrater reliability was substantial (κ = 0.74).

CONCLUSIONS: The AQIKS detects an increase in QI knowledge and skills among pediatrics residents who participated in a QI curriculum, with better interrater reliability than currently available assessment tools.
Affiliation(s)
- Stephanie K. Doupnik (corresponding author): The Children's Hospital of Philadelphia, Center for Pediatric Clinical Effectiveness, 34th and Civic Center Boulevard, Philadelphia, PA 19104
10. Tentler A, Feurdean M, Keller S, Kothari N. Integrating a Resident-Driven Longitudinal Quality Improvement Curriculum Within an Ambulatory Block Schedule. J Grad Med Educ 2016; 8:405-9. PMID: 27413445; PMCID: PMC4936860; DOI: 10.4300/jgme-d-15-00371.1.
Abstract
BACKGROUND: Quality improvement (QI) is essential in clinical practice, requiring effective teaching in residency. Barriers include lack of structure, mentorship, and time.

OBJECTIVE: To develop a longitudinal QI curriculum for an internal medicine residency program with limited faculty resources and to evaluate its effectiveness.

METHODS: All medicine residents were provided with dedicated research time every 8 weeks during their ambulatory blocks. Groups of 3 to 5 residents across all postgraduate year levels were formed. Two faculty members and 1 chief resident advised all groups, meeting with each group every 8 weeks, with concrete expectations for each meeting. Residents were required to complete didactic modules from the Institute for Healthcare Improvement. Current residents and alumni were surveyed for feedback.

RESULTS: Over 3 years, all eligible residents (92 residents per year in 2012-2014, 102 in 2014-2015) participated in the curriculum. Residents worked on 54 quality assessment and 18 QI projects, with 6 QI projects showing statistically significant indicator improvements. About 50 mentoring hours per year were contributed by the 2 faculty advisors and the chief resident; no other staff or IT support was needed. A total of 69 posters/abstracts were produced, with 13 projects presented at national or regional conferences. Survey respondents found the program useful; most (75% of residents, 63% of alumni) reported that it changed their practice, and 71% of alumni found it useful after residency.

CONCLUSIONS: Our longitudinal QI curriculum required minimal faculty time and resulted in increased QI-related publications and measurable improvements in quality indicators. Alumni reported a positive effect on their practice after graduation.
Affiliation(s)
- Aleksey Tentler (corresponding author): Rutgers New Jersey Medical School, MSB C620, 185 South Orange Avenue, Newark, NJ 07103; tel 973.927.1687, fax 888.768.5044
11. Chu D, Vaporciyan AA, Iannettoni MD, Ikonomidis JS, Odell DD, Shemin RJ, Starnes SL, Stein W, Badhwar V. Are There Gaps in Current Thoracic Surgery Residency Training Programs? Ann Thorac Surg 2016; 101:2350-5. DOI: 10.1016/j.athoracsur.2016.01.038.
12. Evaluation of a temporal bone prototype by experts in otology. J Laryngol Otol 2014; 128:586-90. PMID: 24932528; DOI: 10.1017/s0022215114001297.
Abstract
BACKGROUND: Inexperienced otologists require training in the temporal bone drilling process prior to any surgical activity. The shortage of cadaveric temporal bones creates pressure to develop realistic physical prototypes. This paper describes the evaluation, by otology experts, of a specially developed temporal bone resin model.

METHODS: Computed tomography images were transformed into digital files, and anatomically identical right temporal bone models were created using stereolithography. These hand-painted resin prototypes were sent to 25 otologists, accompanied by a 20-item questionnaire.

RESULTS: The satisfaction rate was 92 per cent. The overall prototype score was 48.87 out of 60. Average scores were 12.63 out of 15 for anatomy-morphology, 6.98 out of 9 for quality of drilling, 16.74 out of 21 for identification of anatomical elements, and 7.41 out of 9 for stages of drilling. Limitations of the model included an excessively vivid facial nerve colour and difficulty identifying the posterior semicircular canal. Disadvantages related to the thickness of the resin and its residues were also identified.

CONCLUSION: The prototype appears to offer an attractive solution to the shortage of cadaveric temporal bones. However, the model's value for drilling technique training for inexperienced otologists has not yet been assessed.
13. Glissmeyer EW, Ziniel SI, Moses J. Use of the Quality Improvement (QI) Knowledge Application Tool in Assessing Pediatric Resident QI Education. J Grad Med Educ 2014; 6:284-91. PMID: 24949133; PMCID: PMC4054728; DOI: 10.4300/jgme-d-13-00221.1.
Abstract
BACKGROUND Assessing the effectiveness of quality improvement curricula is important to improving this area of resident education. OBJECTIVE To assess the ability of the Quality Improvement Knowledge Application Tool (QIKAT) to differentiate between residents who were provided instruction in QI and those who were not, when scored by individuals not involved in designing the QIKAT, its scoring rubric, or QI curriculum instruction. METHODS The QIKAT and a 9-item self-assessment of QI proficiency were administered to an intervention and a control group. The intervention was a longitudinal curriculum consisting of 8 hours of didactic QI training and 6 workshops providing just-in-time training for resident QI projects. Two uninvolved faculty scored the QIKAT. RESULTS A total of 33 residents in the intervention group and 27 in the control group completed the baseline and postcurriculum QIKAT and self-assessment. Mean QIKAT scores in the intervention group were significantly higher than mean control group scores postcurriculum (P < .001). Absolute QIKAT differences were small (out of a possible 15 points, the intervention group improved from a mean score of 12.8 to 13.2). Interrater agreement as measured by the kappa statistic was low (0.09). Baseline self-assessment showed no differences, and after instruction, the intervention group felt more proficient in QI knowledge than controls in 4 of 9 domains tested. CONCLUSIONS The QIKAT detected a statistically significant improvement postintervention, but the absolute differences were small. Self-reported gains in QI knowledge and proficiency agreed with the results of the QIKAT. However, QIKAT limitations include poor interrater agreement and a scoring rubric that lacks specificity. Programs considering using the QIKAT to assess curricula should understand these limitations.
Collapse
|
14
|
Philibert I, Gonzalez Del Rey JA, Lannon C, Lieh-Lai M, Weiss KB. Quality improvement skills for pediatric residents: from lecture to implementation and sustainability. Acad Pediatr 2014; 14:40-6. [PMID: 24369868 DOI: 10.1016/j.acap.2013.03.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/08/2012] [Revised: 03/13/2013] [Accepted: 03/27/2013] [Indexed: 12/30/2022]
Abstract
Quality improvement (QI) skills are relevant to efforts to improve the health care system. The Accreditation Council for Graduate Medical Education (ACGME) program requirements call for resident participation in local and institutional QI efforts, and the move to outcomes-based accreditation is placing greater focus on the associated learning and clinical outcomes. Many programs have enhanced practice-based learning and improvement (PBLI) and systems-based practice (SBP) curricula, although efforts to actively involve residents in QI activities appear to be lagging. Drawing on the extensive experience of Cincinnati Children's Hospital Medical Center, we offer recommendations for how to create meaningful QI experiences for residents that meet ACGME requirements and the expectations of the Clinical Learning Environment Review (CLER) process. Resident involvement in QI requires a multipronged approach that overcomes the barriers and limitations that have frustrated earlier efforts to move this education from lectures to immersion experiences at the bedside and in the clinic. We present 5 dimensions of effective programs that facilitate active resident participation in improvement work and enhance their QI skills: 1) providing curricula and education models that ground residents in QI principles; 2) ensuring faculty development to prepare physicians for their role in teaching QI and demonstrating it in day-to-day practice; 3) ensuring all residents receive meaningful QI education and practical exposure to improvement projects; 4) overcoming time and other constraints to allow residents to apply their newly developed QI skills; and 5) assessing the effect of exposure to QI on resident competence and project outcomes.
Collapse
Affiliation(s)
- Ingrid Philibert
- Accreditation Council for Graduate Medical Education, Chicago, Ill.
- Carole Lannon
- Department of Pediatrics, James M. Anderson Center for Health Systems Excellence at Cincinnati Children's Hospital Medical Center, University of Cincinnati, Cincinnati, Ohio
- Mary Lieh-Lai
- Accreditation Council for Graduate Medical Education, Chicago, Ill; Wayne State University, Detroit, Michigan
- Kevin B Weiss
- Accreditation Council for Graduate Medical Education, Chicago, Ill
Collapse
|
15
|
Development of an Instrument to Evaluate Residents’ Confidence in Quality Improvement. Jt Comm J Qual Patient Saf 2013; 39:502-10. [DOI: 10.1016/s1553-7250(13)39066-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
16
|
Tomolo AM, Lawrence RH, Watts B, Augustine S, Aron DC, Singh MK. Pilot study evaluating a practice-based learning and improvement curriculum focusing on the development of system-level quality improvement skills. J Grad Med Educ 2011; 3:49-58. [PMID: 22379523 PMCID: PMC3186260 DOI: 10.4300/jgme-d-10-00104.1] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2010] [Revised: 08/02/2010] [Accepted: 10/07/2010] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND We developed a practice-based learning and improvement (PBLI) curriculum to address important gaps in components of content and experiential learning activities through didactics and participation in systems-level quality improvement projects that focus on making changes in health care processes. METHODS We evaluated the impact of our curriculum on resident PBLI knowledge, self-efficacy, and application skills. A quasi-experimental design assessed the impact of a curriculum (PBLI quality improvement systems compared with non-PBLI) on internal medicine residents' learning during a 4-week ambulatory block. We measured application skills, self-efficacy, and knowledge by using the Systems Quality Improvement Training and Assessment Tool. Exit evaluations assessed time invested and experiences related to the team projects and suggestions for improving the curriculum. RESULTS The 2 groups showed differences in change scores. Relative to the comparison group, residents in the PBLI curriculum demonstrated a significant increase in the belief about their ability to implement a continuous quality improvement project (P = .020), comfort level in developing data collection plans (P = .010), and total knowledge scores (P < .001), after adjusting for prior PBLI experience. Participants in the PBLI curriculum also demonstrated significant improvement in providing a more complete aim statement for a proposed project after adjusting for prior PBLI experience (P = .001). Exit evaluations were completed by 96% of PBLI curriculum participants, who reported high satisfaction with team performance. CONCLUSION Residents in our curriculum showed gains in areas fundamental for PBLI competency. The observed improvements were related to fundamental quality improvement knowledge, with limited gain in application skills. This suggests that, while heading in the right direction, we need to conceptualize and structure PBLI training in a way that integrates it throughout the residency program and fosters the application of this knowledge and these skills.
Collapse
Affiliation(s)
- Anne M Tomolo
- Corresponding author: Anne M. Tomolo, MD, MPH, 1670 Clairmont Road, Atlanta, GA 30033, 404.321.6111, extension 4602,
Collapse
|