1. Debets M, Jansen I, Lombarts K, Kuijer-Siebelink W, Kruijthof K, Steinert Y, Daams J, Silkens M. Linking leadership development programs for physicians with organization-level outcomes: a realist review. BMC Health Serv Res 2023;23:783. PMID: 37480101. PMCID: PMC10362722. DOI: 10.1186/s12913-023-09811-y.
Abstract
BACKGROUND Hospitals invest in Leadership Development Programs (LDPs) for physicians, assuming they benefit the organization's performance. Researchers have listed the advantages of LDPs, but knowledge of how and why organization-level outcomes are achieved is missing. OBJECTIVE To investigate how, why and under which circumstances LDPs for physicians can impact organization-level outcomes. METHODS We conducted a realist review, following the RAMESES guidelines. Scientific articles and grey literature published between January 2010 and March 2021 evaluating a leadership intervention for physicians in the hospital setting were considered for inclusion. The following databases were searched: Medline, PsycInfo, ERIC, Web of Science, and Academic Search Premier. Based on the included documents, we developed an LDP middle-range program theory (MRPT) consisting of Context-Mechanism-Outcome configurations (CMOs) describing how specific contexts (C) trigger certain mechanisms (M) to generate organization-level outcomes (O). RESULTS In total, 3904 titles and abstracts and, subsequently, 100 full-text documents were inspected; 38 documents with LDPs from multiple countries informed our MRPT. The MRPT includes five CMOs that describe how LDPs can impact the organization-level outcome categories 'culture', 'quality improvement', and 'the leadership pipeline': 'Acquiring self-insight and people skills (CMO1)', 'Intentionally building professional networks (CMO2)', 'Supporting quality improvement projects (CMO3)', 'Tailored LDP content prepares physicians (CMO4)', and 'Valuing physician leaders and organizational commitment (CMO5)'. Culture was the outcome of CMO1 and CMO2, quality improvement of CMO2 and CMO3, and the leadership pipeline of CMO2, CMO4, and CMO5. These CMOs operated within an overarching context, the leadership ecosystem, which determined whether organization-level outcomes were realized and sustained. CONCLUSIONS LDPs benefit organization-level outcomes through multiple mechanisms. Creating the contexts that trigger these mechanisms depends on the resources invested in LDPs and on adequately supporting physicians. LDP providers can use the presented MRPT to guide the development of LDPs when aiming for specific organization-level outcomes.
Affiliation(s)
- Maarten Debets: Amsterdam UMC, Medical Psychology, University of Amsterdam, Amsterdam Public Health, AMC, Meibergdreef 9, 1105 AZ Amsterdam, Netherlands
- Iris Jansen: Amsterdam UMC, Medical Psychology, University of Amsterdam, Amsterdam Public Health, AMC, Meibergdreef 9, 1105 AZ Amsterdam, Netherlands
- Kiki Lombarts: Amsterdam UMC, Medical Psychology, University of Amsterdam, Amsterdam Public Health, AMC, Meibergdreef 9, 1105 AZ Amsterdam, Netherlands
- Wietske Kuijer-Siebelink: School of Education, Research on Responsive Vocational and Professional Education, HAN University of Applied Sciences, Nijmegen, Netherlands; Research on Learning and Education, Radboudumc Health Academy, Radboud University Medical Centre, Nijmegen, Netherlands
- Karen Kruijthof: Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Public Health, De Boelelaan 1117, Amsterdam, Netherlands
- Yvonne Steinert: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Joost Daams: Medical Library, Amsterdam University Medical Centers, Amsterdam, Noord-Holland, Netherlands
- Milou Silkens: Department of Health Services Research & Management, City University of London, London, UK; Erasmus School of Health Policy & Management, Erasmus University, Rotterdam, Netherlands
2. Hadler RA, Dexter F, Hindman BJ. Effect of Insufficient Interaction on the Evaluation of Anesthesiologists' Quality of Clinical Supervision by Anesthesiology Residents and Fellows. Cureus 2022;14:e23500. PMID: 35494980. PMCID: PMC9036497. DOI: 10.7759/cureus.23500.
Abstract
Introduction In this study, we tested whether raters' (residents' and fellows') decisions to evaluate (or not) critical care anesthesiologists were significantly associated with clinical interactions documented in electronic health record progress notes, and whether that influenced the reliability of supervision scores. We used the de Oliveira Filho clinical supervision scale for the evaluation of faculty anesthesiologists. For operating room evaluations, email requests were sent to raters who had worked one hour or longer with the anesthesiologist the preceding day. In contrast, for intensive care units, potential raters were asked to evaluate all critical care anesthesiologists scheduled there during the preceding week. Methods Over 7.6 years, raters (N=172) received a total of 7764 requests to evaluate 21 critical care anesthesiologists. Each rater received a median (and mode) of three evaluation requests, one per anesthesiologist on service that week. In this retrospective cohort study, we related responses (2970 selections of "insufficient interaction" to evaluate the faculty, and 3127 completed supervision scores) to progress notes (N=25,469) electronically co-signed by the rater-anesthesiologist combination during that week. Results Raters with few jointly signed notes were more likely to select "insufficient interaction" (P < 0.0001): 62% with no joint notes versus 1% with at least 20 joint notes during the week. Still, rater-anesthesiologist combinations with no co-authored notes accounted not only for most (78%) of the evaluation requests but also for most (56%) of the completed evaluations (both P < 0.0001). Among combinations in which each anesthesiologist received evaluations from multiple (at least nine) raters and each rater evaluated multiple anesthesiologists, most (72%) rater-anesthesiologist combinations involved raters who had no co-authored notes with the anesthesiologist (P < 0.0001). Conclusions For routine use of the supervision scale, raters should be selected not only by their scheduled clinical site but also using electronic health record data that verify joint clinical work with the anesthesiologist.
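The selection rule this conclusion argues for is easy to sketch in code: screen rater/ratee pairs by counts of co-signed progress notes before sending evaluation requests. The following is a minimal Python illustration; the identifiers, counts, and threshold are hypothetical and this is not the authors' software.

```python
# Sketch: only send evaluation requests to rater/ratee pairs with documented
# joint clinical work in the electronic health record. All names and numbers
# below are invented for illustration.

# (rater_id, anesthesiologist_id) -> progress notes co-signed that week
joint_notes = {
    ("resident_01", "attending_A"): 0,
    ("resident_01", "attending_B"): 7,
    ("resident_02", "attending_A"): 23,
    ("resident_03", "attending_B"): 2,
}

# Hypothetical cutoff: with 0 joint notes, 62% of replies in the study
# were "insufficient interaction", so such pairs are poor request targets.
MIN_JOINT_NOTES = 1

def eligible_pairs(notes: dict, threshold: int = MIN_JOINT_NOTES) -> list:
    """Return rater/ratee pairs with enough co-signed notes to warrant a request."""
    return [pair for pair, n in notes.items() if n >= threshold]

print(eligible_pairs(joint_notes))
# [('resident_01', 'attending_B'), ('resident_02', 'attending_A'), ('resident_03', 'attending_B')]
```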
3. Daniel SJ, Bouchard MJ, Tremblay M. Rethinking Our Annual Congress: Meeting the Needs of Specialist Physicians by Partnering With Provincial Simulation Centers. J Contin Educ Health Prof 2022;42:e83-e87. PMID: 34609357. PMCID: PMC8876424. DOI: 10.1097/ceh.0000000000000381.
Abstract
Canada's maintenance of certification programs for physicians have evolved to emphasize assessment activities. Our organization recognized the importance of offering more practice assessment opportunities to our members to enhance their practice and help them comply with a regulation from our provincial professional body related to ongoing continuing education. This led us to rethink our annual congress and enrich the program with a curriculum of interdisciplinary simulation sessions tailored to meet the needs of a broad audience of specialists. Our challenges are similar to those of many national specialty societies that have limited access to the simulation facilities, instructors, and simulation teams needed to cover the breadth and scope of perceived and unperceived simulation needs for their specialty. Our innovative solution was to partner with local experts to develop 22 simulation sessions over the past three years. The response was very positive, drawing 867 participants. Over 95% of participants agreed or strongly agreed that their simulation session (1) met their learning objectives, (2) was relevant for their practice, and (3) encouraged them to modify their practice. Narrative comments from a survey sent to the 2018 participants four months after their activity indicated several self-reported changes in practice or patient outcomes. We were able to centralize offers from organizations that had previously worked in silos to develop simulation sessions meeting the needs of our members. Proposing simulation sessions allowed our organization to establish long-term partnerships and to expand our "educational toolbox" to address skill gaps not usually addressed during annual meetings.
4. Debets MPM, Scheepers RA, Boerebach BCM, Arah OA, Lombarts KMJMH. Variability of residents' ratings of faculty's teaching performance measured by five- and seven-point response scales. BMC Med Educ 2020;20:325. PMID: 32962692. PMCID: PMC7510269. DOI: 10.1186/s12909-020-02244-9.
Abstract
BACKGROUND Medical faculty's teaching performance is often measured using residents' feedback, collected by questionnaires. Researchers have extensively studied the psychometric qualities of the resulting ratings. However, these studies rarely consider how the number of response categories affects residents' ratings of faculty's teaching performance. We compared the variability of residents' ratings measured by five- and seven-point response scales. METHODS This retrospective study used teaching performance data from Dutch anaesthesiology residency training programs. Ratings were collected with five- and seven-point versions of the questionnaires of the extensively studied System for Evaluation of Teaching Qualities (SETQ). We inspected the variability of ratings by comparing standard deviations, interquartile ranges, and frequency (percentage) distributions, and used appropriate statistical tests to assess differences in frequency distributions and teaching performance scores. RESULTS We examined 3379 residents' ratings and 480 aggregated faculty scores. Residents used the additional response categories provided by the seven-point scale, especially those differentiating between positive performances. Residents' ratings and aggregated faculty scores were more evenly distributed on the seven-point scale than on the five-point scale, and the seven-point scale showed a smaller ceiling effect. After rescaling, the mean scores and (most) standard deviations of ratings from both scales were comparable. CONCLUSIONS Ratings from the seven-point scale were more evenly distributed and could potentially yield more nuanced, specific and user-friendly feedback. Still, both scales measured largely similar teaching performance outcomes. In teaching performance practice, residents and faculty members should discuss whether response scales fit their preferences and goals.
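To make the rescaling comparison concrete: a seven-point rating can be mapped linearly onto the five-point range with x5 = 1 + (x7 - 1) * 4/6, after which means and standard deviations are directly comparable. The Python sketch below simulates skewed ratings on both scales and compares spread and ceiling effects; the simulated distributions are illustrative stand-ins, not the SETQ data analyzed in the study.

```python
# Sketch: compare five- and seven-point rating scales on simulated data,
# then rescale the seven-point ratings onto the five-point range.
import numpy as np

rng = np.random.default_rng(0)
ratings5 = rng.choice([3, 4, 5], size=1000, p=[0.15, 0.35, 0.50])
ratings7 = rng.choice([4, 5, 6, 7], size=1000, p=[0.10, 0.25, 0.35, 0.30])

# Linear map from [1, 7] onto [1, 5]: x5 = 1 + (x7 - 1) * 4/6
rescaled7 = 1 + (ratings7 - 1) * (5 - 1) / (7 - 1)

for name, x, top in [("5-point", ratings5, 5), ("7-point", ratings7, 7)]:
    ceiling = np.mean(x == top)  # share of ratings at the scale maximum
    print(f"{name}: mean={x.mean():.2f}, sd={x.std(ddof=1):.2f}, ceiling={ceiling:.0%}")

print(f"7-point rescaled: mean={rescaled7.mean():.2f}, sd={rescaled7.std(ddof=1):.2f}")
```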
Affiliation(s)
- Maarten P M Debets: Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE Amsterdam, The Netherlands
- Renée A Scheepers: Research Group Socio-Medical Sciences, Erasmus School of Health Policy and Management, Erasmus University of Rotterdam, Rotterdam, The Netherlands
- Benjamin C M Boerebach: Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE Amsterdam, The Netherlands
- Onyebuchi A Arah: Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, California, USA; UCLA Center for Health Policy Research; Center for Social Statistics, UCLA; Department of Statistics, UCLA, Los Angeles, California, USA
- Kiki M J M H Lombarts: Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE Amsterdam, The Netherlands
5. Dexter F, Hadlandsmyth K, Pearson ACS, Hindman BJ. Reliability and Validity of Performance Evaluations of Pain Medicine Clinical Faculty by Residents and Fellows Using a Supervision Scale. Anesth Analg 2020;131:909-916. PMID: 32332292. DOI: 10.1213/ane.0000000000004779.
Abstract
BACKGROUND Annual and/or semiannual evaluations of pain medicine clinical faculty are mandated by multiple organizations in the United States. We evaluated the validity and psychometric reliability of a modified version of the de Oliveira Filho et al clinical supervision scale for this purpose. METHODS Six years of weekly evaluations of pain medicine clinical faculty by resident physicians and pain medicine fellows were studied. A 1-4 rating (4 = "Always") was assigned to each of 9 items (eg, "The faculty discussed with me the management of patients before starting a procedure or new therapy and accepted my suggestions, when appropriate"). RESULTS Cronbach α of the 9 items equaled .975 (95% confidence interval [CI], 0.974-0.976). A G coefficient of 0.90 would be expected with 18 raters; the N = 12 six-month periods had a mean of 18.8 ± 5.9 (standard deviation [SD]) unique raters per period (median = 20). Concurrent validity was shown by Kendall τb = 0.45 (P < .0001), pairwise by combination of ratee and rater, between the average supervision score and the average score on a 21-item evaluation completed by fellows in pain medicine. Concurrent validity also was shown by τb = 0.36 (P = .0002), pairwise by combination of ratee and rater, between the average pain medicine supervision score and the average operating room supervision score completed by anesthesiology residents. Average supervision scores differed markedly among the 113 raters (η = 0.485; CI, 0.447-0.490). Pairings of ratee and rater were nonrandom (Cramér V = 0.349; CI, 0.252-0.446). Mixed effects logistic regression was performed with rater leniency as covariates and the dependent variable being an average score equaling the maximum 4 vs <4. There were 3 of 13 ratees with significantly more averages <4 than the other ratees, based on a P < .01 criterion; that is, their supervision was reliably rated as below average. Three of 13 different ratees provided supervision reliably rated as above average. Raters did not report higher supervision scores when they had the opportunity to perform more interventional pain procedures. CONCLUSIONS Evaluations of pain medicine clinical faculty are required. As when used for evaluating operating room anesthesiologists, the supervision scale has excellent internal consistency, achievable reliability using 1-year periods of data, concurrent validity with other ratings, and the ability to differentiate among ratees. However, to be reliable, routinely collected supervision scores must be adjusted for rater leniency.
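For reference, the internal-consistency statistic reported here (Cronbach α = .975 for 9 items) follows directly from item and total-score variances: α = k/(k-1) * (1 - Σ item variances / variance of the total). A minimal Python sketch, using simulated 1-4 ratings rather than the study's data:

```python
# Sketch: Cronbach's alpha for a 9-item supervision scale from the standard
# variance formula. The (raters x items) matrix is simulated, not the study's.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(3.4, 0.4, size=200)        # one latent score per rater
noise = rng.normal(0.0, 0.3, size=(200, 9))    # item-level noise
scores = np.clip(np.round(latent[:, None] + noise), 1, 4)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```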
Affiliation(s)
- Franklin Dexter: Division of Management Consulting, Department of Anesthesia, University of Iowa, Iowa City, Iowa
- Amy C S Pearson: Department of Anesthesia, University of Iowa, Iowa City, Iowa
6. Scheepers RA, Emke H, Epstein RM, Lombarts KMJMH. The impact of mindfulness-based interventions on doctors' well-being and performance: A systematic review. Med Educ 2020;54:138-149. PMID: 31868262. PMCID: PMC7003865. DOI: 10.1111/medu.14020.
Abstract
OBJECTIVES The well-being of doctors is at risk, as evidenced by high burnout rates amongst doctors around the world. Alarmingly, burned-out doctors are more likely to exhibit low levels of professionalism and provide suboptimal patient care. Research suggests that burnout and the well-being of doctors can be improved by mindfulness-based interventions (MBIs). Furthermore, MBIs may improve doctors' performance (eg in empathy). However, there are no published systematic reviews that clarify the effects of MBIs on doctor well-being or performance to inform future research and professional development programmes. We therefore systematically reviewed and narratively synthesised findings on the impacts of MBIs on doctors' well-being and performance. METHODS We searched PubMed and PsycINFO from inception to 9 May 2018 and independently reviewed studies investigating the effects of MBIs on doctor well-being or performance. We systematically extracted data and assessed study quality according to the Medical Education Research Study Quality Instrument (MERSQI), and narratively reported study findings. RESULTS We retrieved a total of 934 articles, of which 24 studies met our criteria; these included randomised, (un)controlled or qualitative studies of average quality. Effects varied across MBIs with different training contents or formats: MBIs including essential mindfulness training elements, or employing group-based training, mostly showed positive effects on the well-being or performance of doctors across different educational and hospital settings. Doctors perceived both benefits (enhanced self- and other-understanding) and challenges (time limitations and feasibility) associated with MBIs. Findings were subject to the methodological limitations of studies (eg the use of self-selected participants, lack of placebo interventions, use of self-reported outcomes). CONCLUSIONS This review indicates that doctors can perceive positive impacts of MBIs on their well-being and performance. However, the evidence was subject to methodological limitations and does not yet support the standardisation of MBIs in professional development programmes. Rather, health care organisations could consider including group-based MBIs as voluntary modules for doctors with specific well-being needs or ambitions regarding professional development.
Affiliation(s)
- Renée A. Scheepers: Research Group in Socio-Medical Sciences, Erasmus School of Health Policy and Management, Erasmus University of Rotterdam, Rotterdam, the Netherlands; Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, the Netherlands
- Helga Emke: Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, the Netherlands; Department of Health Sciences, Faculty of Science, Free University of Amsterdam, Amsterdam, the Netherlands
- Ronald M. Epstein: Department of Family Medicine, Psychiatry and Oncology, University of Rochester Medical Center, Rochester, New York, USA
- Kiki M. J. M. H. Lombarts: Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, the Netherlands
7. Lockyer J, DiMillo S, Campbell C. An Examination of Self-Reported Assessment Activities Documented by Specialist Physicians for Maintenance of Certification. J Contin Educ Health Prof 2020;40:19-26. PMID: 32149945. DOI: 10.1097/ceh.0000000000000283.
Abstract
INTRODUCTION Specialists in a Maintenance of Certification program are required to participate in assessment activities, such as chart audit, simulation, knowledge assessment, and multisource feedback. This study examined data from five specialties to identify variation in participation in assessment activities, examine differences in the learning stimulated by assessment, assess the frequency and type of planned changes, and assess the association between learning, discussion, and planned changes. METHODS E-portfolio data were categorized and analyzed descriptively; chi-squared tests examined associations. RESULTS A total of 2854 anatomical pathologists, cardiologists, gastroenterologists, ophthalmologists, and orthopedic surgeons provided data about 6063 assessment activities. Although the role that learning played differed by discipline and assessment type, the most common activities documented across all specialties were self-assessment programs (n = 2122), feedback on teaching (n = 1078), personal practice assessments that physicians conducted themselves (n = 751), annual reviews (n = 682), and reviews by third parties (n = 661). Learning occurred for 93% of the activities and was associated with change. Planned changes were reported for 2126 activities. Activities that included a discussion with a peer or supervisor were more likely to result in a change. CONCLUSIONS AND DISCUSSION Although specialists engaged in many types of assessment activities to meet the Maintenance of Certification program requirements, there was variability in how assessment stimulated learning and planned changes. Peer discussion may be an important component in fostering practice change and forming plans for improvement, which bears further study.
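The reported association between peer discussion and planned change is a standard 2x2 chi-squared setup. A Python sketch follows; the cell counts are illustrative only (chosen so the margins match the reported 6063 activities and 2126 planned changes), and the paper's actual cross-tabulation is not reproduced here.

```python
# Sketch: 2x2 chi-squared test of discussion vs. planned change on
# illustrative counts; not the paper's data.
import numpy as np
from scipy.stats import chi2_contingency

#                    planned change, no planned change
table = np.array([[900,  600],     # discussed with a peer or supervisor
                  [1226, 3337]])   # not discussed

chi2, p, dof, _expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi-squared = {chi2:.1f} (dof={dof}), p = {p:.3g}, odds ratio = {odds_ratio:.2f}")
```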
Affiliation(s)
- Jocelyn Lockyer: Professor, Department of Community Health Sciences, Cumming School of Medicine, Calgary, Canada
- S. DiMillo: Senior Data and Research Analyst, Health Policy and Advocacy, Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- C. Campbell: Principal Senior Advisor, Competency-based CPD and Interim Director, Continuing Professional Development, Office of Specialty Education, Royal College of Physicians and Surgeons of Canada, and Associate Professor, Department of Medicine, University of Ottawa, Ottawa, Canada