1
Grant AL, Torti J, Goldszmidt M. "Influential" Intraoperative Educators and Variability of Teaching Styles. Journal of Surgical Education 2023; 80:276-287. [PMID: 36333173 DOI: 10.1016/j.jsurg.2022.10.002] [Received: 05/23/2022] [Revised: 09/04/2022] [Accepted: 10/05/2022]
Abstract
OBJECTIVES Academic surgeons manage their role as intraoperative educators in a variety of ways. Such variability is neither idiosyncratic, nor is there a single best approach. This study sought to explore the practices of surgeons deemed influential by their residents, allowing insight into a variety of potentially effective practices.
DESIGN Constructivist grounded theory guided data collection and analysis. Data sources included surveys from senior surgical residents (PGY3-6) and recent graduates of an academic hospital in Canada (36% response rate), intraoperative observations of teaching interactions, and semi-structured interviews with the observed surgeons. Rigour was supported by data triangulation, constant comparison, and collection to theoretical sufficiency.
RESULTS We developed a framework grouping effective teaching into three overlapping approaches: exacting, empowering, and fostering. The approaches differ in the level of independence granted and the degree of expectation placed on individual residents. Each demonstrates different strategies for balancing the multiple supervisory roles and patient care obligations faced by academic surgeons. We also identified strategies that could be used across approaches to enhance learning.
CONCLUSIONS For surgical educators seeking to improve the quality of the intraoperative supervision they provide, frameworks such as this may serve as models of effective supervision. Enhancing surgeons' knowledge of proven strategies, combined with reflection on how they teach and how they balance responsibilities to patients and trainees, may allow them to broaden their educational practice.
Affiliation(s)
- Aaron L Grant
- Department of Surgery, Western University, London, Ontario, Canada
- Jacqueline Torti
- Centre for Education Research and Innovation, Western University, London, Ontario, Canada
- Mark Goldszmidt
- Centre for Education Research and Innovation and Division of General Internal Medicine, Department of Medicine, Western University, London, Ontario, Canada
2
|
Olsen AA, Morbitzer KA, Zambrano S, Zeeman JM, Persky AM, Bush A, McLaughlin JE. Development and implementation of a formative instructional coaching program using the Teaching Practices Inventory within a health professions program. BMC Medical Education 2022; 22:554. [PMID: 35842691 PMCID: PMC9288684 DOI: 10.1186/s12909-022-03616-z] [Received: 09/07/2021] [Accepted: 07/11/2022]
Abstract
BACKGROUND A growing body of literature describes teaching practices that are positively associated with student achievement. Observing, characterizing, and providing feedback on these practices is a necessary yet significant challenge in improving teaching quality. This study describes the design, implementation, and evaluation of an instructional coaching program created to provide formative feedback to instructors based on their use of evidence-based teaching practices.
METHODS The program was designed for formative purposes, using an instrument adapted from the Teaching Practices Inventory. All faculty were invited to participate on a voluntary basis when the program launched in Fall 2019. Program coaches included any School personnel who completed the required training. Two rounds of instrument development were conducted with multiple observers and assessed using Krippendorff's alpha. The program was evaluated using an anonymous post-session survey.
RESULTS Interrater reliability of the form improved over two rounds of piloting, and no differences in scoring were found between trainees and education professionals. Seventeen observations were completed by nine coaches. Instructors indicated that feedback was practical, timely, specific, and collegial, and suggested that including student perspectives (e.g., focus groups, student course evaluations) in the coaching program might be helpful.
CONCLUSIONS Creating programs that emphasize and foster the use of evidence-based teaching is critical for health professions education. Additional research is needed to further develop coaching programs that ensure teaching practices in the health professions are optimizing student learning.
Affiliation(s)
- Amanda A. Olsen
- School of Education, University of Texas at Arlington, Arlington, TX, USA
- Kathryn A. Morbitzer
- UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Skye Zambrano
- UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Jacqueline M. Zeeman
- UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Adam M. Persky
- UNC Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, USA
3
|
Jenq CC, Ou LS, Tseng HM, Chao YP, Lin JR, Monrouxe LV. Evaluating Clinical Educators' Competence in an East Asian Context: Who Values What? Front Med (Lausanne) 2022; 9:896822. [PMID: 35836950 PMCID: PMC9273768 DOI: 10.3389/fmed.2022.896822] [Received: 03/15/2022] [Accepted: 06/07/2022]
Abstract
Background How to evaluate clinical educators is an important question in faculty development, as is the question of who is best placed to evaluate their performance. However, both the who and the how of clinical educator evaluation may differ culturally. This study aims to understand what comprises suitable evaluation criteria, alongside who is best placed to undertake the evaluation of clinical educators in medicine, within an East Asian culture: specifically, Taiwan.
Methods An 84-item web-based questionnaire was created, based on a literature review and medical education experts' opinions, focusing on potential raters (i.e., who) and domains (i.e., what) for evaluating clinical educators. Using purposive sampling, we sent 500 questionnaires to clinical educators, residents, Post-Graduate Year Trainees (PGYs), Year-4~6/Year-7 medical students (M4~6/M7) and nurses.
Results We received 258 responses (52% response rate). All groups except nurses chose "teaching ability" as the most important domain. This contrasts with research from Western contexts, which highlights role modeling, leadership and enthusiasm. Clinical educators and nurses made the same choices for the top five items in the "personal qualities" domain, but different choices in the "assessment ability" and "curriculum planning" domains. The best-fitting rater groups for evaluating clinical educators were the educators themselves and PGYs.
Conclusions There may well be specific suitable domains and populations for evaluating clinical educators' competence in East Asian cultural contexts. Further research in these contexts is required to examine the reach of these findings.
Affiliation(s)
- Chang-Chyi Jenq
- Department of Nephrology, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Liang-Shiou Ou
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Division of Allergy, Asthma and Rheumatology, Department of Pediatrics, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Hsu-Min Tseng
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Department of Health Care Management, College of Management, Chang Gung University, Taoyuan, Taiwan
- Ya-Ping Chao
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Jiun-Ren Lin
- Chang Gung Medical Education Research Center, Taoyuan, Taiwan
- Lynn V. Monrouxe
- Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- *Correspondence: Lynn V. Monrouxe
4
|
Trainor A, Richards JB. Training medical educators to teach: bridging the gap between perception and reality. Isr J Health Policy Res 2021; 10:75. [PMID: 34915929 PMCID: PMC8675462 DOI: 10.1186/s13584-021-00509-2] [Received: 11/17/2021] [Accepted: 12/06/2021]
Abstract
Teaching is a core expectation of physicians in academic hospitals and academic medical centers, but best practices for training physicians to teach have not been established. There is significant variability in how physicians are trained to teach medical students and residents across the world, and between Israeli hospitals. In an article published earlier this year in the Israel Journal of Health Policy Research, Nothman and colleagues describe a survey of 245 Israeli physicians in departments of internal medicine, pediatrics, and obstetrics and gynecology at four different faculties of medicine across Israel. The majority of Israeli physicians responding to this survey reported receiving minimal training to teach: only 35% received any training focused on medical education skills, and most (55%) received training of only 1–2 days' duration. In addition, the physicians surveyed perceived their training as inadequate and not aligned with their self-perceived educational needs. Furthermore, the respondents felt strongly that "compensation and appreciation" for medical education was less than for those involved in research. Despite the general lack of training in teaching skills and the perception that teaching physicians are less valued than researchers, survey respondents rated themselves as highly confident in most domains of medical education. In this context, this commentary reviews the disconnect between the general perception that all physicians can and should engage in teaching in the clinical setting and the well-described observation that competence in medical education requires dedicated and longitudinal training. Leveraging best practices in curriculum design by aligning educational interventions for teaching physicians with their self-perceived needs is discussed, and models for dedicated faculty development strategies for teaching medical education skills to physicians are reviewed. Finally, the importance of, and potential strategies for, assessing teaching physicians' effectiveness in Israel and elsewhere are considered as a means to address these physicians' perception that they are not as valued as researchers. Understanding teaching physicians' perspectives on and motivations for training medical students and residents is critical for supporting the frontline teaching faculty who educate future healthcare providers at the bedside in medical schools, hospitals, and academic medical centers in Israel and beyond.
Affiliation(s)
- Alison Trainor
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-B23, Boston, MA, 02215, USA
- Jeremy B Richards
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS-B23, Boston, MA, 02215, USA
- Harvard Medical School, Boston, MA, USA
5
|
Cai MT, Lai QL, Zheng Y, Fang GL, Qiao S, Shen CH, Zhang YX, Ding MP. Validation of the Clinical Assessment Scale for Autoimmune Encephalitis: A Multicenter Study. Neurol Ther 2021; 10:985-1000. [PMID: 34476753 PMCID: PMC8412851 DOI: 10.1007/s40120-021-00278-9] [Received: 07/21/2021] [Accepted: 08/20/2021]
Abstract
Introduction A new scale, the Clinical Assessment Scale for Autoimmune Encephalitis (CASE), has recently been developed for rating the severity of autoimmune encephalitis (AE), with good clinimetric properties. In this study, our primary objective was to validate the performance of CASE in a multicenter study in China.
Methods Between July 2014 and December 2019, 143 consecutive patients with definite neuronal surface antibody-associated AE from three tertiary hospitals were enrolled in the study. We validated the reliability, internal consistency, and validity of CASE. We further compared CASE with the modified Rankin Scale (mRS) among different subtypes of AE in terms of its sensitivity to disease dynamics. Statistical analyses were performed using GraphPad Prism and R software.
Results Our analyses showed that CASE had good inter- and intraobserver reliability (intra-class correlation coefficient 0.96/0.98) and internal consistency (Cronbach α = 0.847) at disease onset. The scores of CASE and mRS were well correlated both at admission and at discharge (both r = 0.80, p < 0.001). From admission to discharge, CASE scores changed in 81 (56.6%) patients, compared with changes in mRS in 48 (33.6%) patients (p = 0.007 and p < 0.001, respectively). The largest changes in scores occurred for non-motor symptoms, including psychiatric, memory, and language dysfunction (40.6%, 26.6%, and 23.1% of patients, respectively); in contrast, scores for motor symptoms, such as dyskinesia, weakness and ataxia, changed the least (7.0%, 15.4%, and 16.1% of patients, respectively).
Conclusion Based on these results, CASE performed well in assessing the severity of neuronal surface antibody-associated AE. In comparison to mRS, it performed better for non-motor symptoms and was more sensitive to changes in severity.
Supplementary Information The online version contains supplementary material available at 10.1007/s40120-021-00278-9.
Affiliation(s)
- Meng-Ting Cai
- Department of Neurology, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Hangzhou, 310009, China
- Qi-Lun Lai
- Department of Neurology, Zhejiang Hospital, Hangzhou, China
- Yang Zheng
- Department of Neurology, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Hangzhou, 310009, China
- Gao-Li Fang
- Department of Neurology, Zhejiang Chinese Medicine and Western Medicine Integrated Hospital, Hangzhou, China
- Song Qiao
- Department of Neurology, Zhejiang Hospital, Hangzhou, China
- Chun-Hong Shen
- Department of Neurology, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Hangzhou, 310009, China
- Yin-Xi Zhang
- Department of Neurology, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Hangzhou, 310009, China
- Mei-Ping Ding
- Department of Neurology, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Hangzhou, 310009, China
6
|
Kao DS, Appelbaum NP, Kates SL, Domson GF. Surgery Resident Perceptions of the Clicker Evaluation System: A Novel Approach to Collecting and Utilizing Clinical Faculty Performance Data. Journal of Surgical Education 2021; 78:113-118. [PMID: 32653499 DOI: 10.1016/j.jsurg.2020.06.021] [Received: 03/31/2020] [Revised: 05/17/2020] [Accepted: 06/20/2020]
Abstract
OBJECTIVE Medical trainees often have a process in place for receiving feedback from clinical faculty regarding overall performance. While there is guidance on effective methodologies for faculty to provide feedback to learners, there is a dearth of literature analyzing trainees' evaluation of faculty performance. We sought to identify an effective and anonymous method for surgery residents to evaluate clinical faculty.
DESIGN The Department of Orthopaedic Surgery at Virginia Commonwealth University (VCU) Health implemented a novel process, starting in 2012, to gather annual clinical faculty performance data from residents for the purpose of program improvement. Specifically, residents used a web-based audience response system, also known as a "clicker" system, to evaluate faculty performance over the academic year. During the June 2018 evaluation session, residents also completed an anonymous, 9-question survey assessing their perceptions of this clicker evaluation process.
SETTING VCU Health System, a tertiary care hospital in Richmond, Virginia.
PARTICIPANTS All 24 orthopaedic surgery residents at VCU Health participated in the evaluation process and completed the perception survey in 2018.
RESULTS Ninety-six percent (n = 23) of the residents agreed that they could accurately rate their attendings' performance, that their responses remained anonymous, and that their departmental chair valued the opinions they expressed through the clicker process. Qualitative responses identified anonymity as a strength of the clicker process, while opportunities for improvement included refinement of the questions.
CONCLUSIONS The clicker evaluation system is an effective and anonymous method for resident evaluation of clinical faculty performance in academic settings. Future steps include refinement of questions based on departmental goals for education, adoption of the clicker evaluation system by other specialties, and research into ways to optimize the clicker evaluation process. Additional research should examine whether and how clicker evaluation feedback translates into changes in clinical faculty behavior.
Affiliation(s)
- David S Kao
- Department of Orthopaedic Surgery, Virginia Commonwealth University Health, Richmond, Virginia
- Nital P Appelbaum
- Department of Orthopaedic Surgery, Virginia Commonwealth University Health, Richmond, Virginia; Department of Surgery, Baylor College of Medicine, Houston, Texas
- Stephen L Kates
- Department of Orthopaedic Surgery, Virginia Commonwealth University Health, Richmond, Virginia
- Gregory F Domson
- Department of Orthopaedic Surgery, Virginia Commonwealth University Health, Richmond, Virginia
7
|
Han HC, Wan HD, Wang X. Quantifying Engineering Faculty Performance Based on Expectations on Key Activities and Integration Using Flexible Weighting Factors. J Biomech Eng 2020; 142:114701. [PMID: 32529240 DOI: 10.1115/1.4047478] [Received: 08/31/2019]
Abstract
Faculty performance evaluation is an important element of assessment for departments and universities. A quantitative score is often needed for faculty annual evaluation, but its determination is often subjective, and it is hard to incorporate the varied contributions of individual faculty members. Here, we propose a quantitative and objective faculty performance evaluation method. We established a well-structured quantitative evaluation system that scores faculty performance in key activities using expectation-based formulas on key measures, and then incorporates personalized flexible weights to integrate them into three area scores, in teaching, research, and service, as well as an overall score. The system was implemented in a programmed Excel form, making it convenient for both faculty and evaluators, and has generated very positive outcomes, such as higher faculty satisfaction and improved productivity, as indicated by associated increases in publications and new research grants. In conclusion, the quantitative faculty evaluation system provides a more objective and transparent annual evaluation and a basis for making merit raise and award decisions. In addition, it can be readily adapted to the evolving goals and needs of a department, as well as the different needs and cultures of different departments.
Affiliation(s)
- Hai-Chao Han
- Department of Mechanical Engineering, University of Texas at San Antonio, San Antonio, TX 78249
- Hung-da Wan
- Department of Mechanical Engineering, University of Texas at San Antonio, San Antonio, TX 78249
- Xiaodu Wang
- Department of Mechanical Engineering, University of Texas at San Antonio, San Antonio, TX 78249
8
|
Majbar MA, Majbar Y, Benkabbou A, Amrani L, Bougtab A, Mohsine R, Souadka A. Validation of the French translation of the Dutch residency educational climate test. BMC Medical Education 2020; 20:338. [PMID: 33008369 PMCID: PMC7531085 DOI: 10.1186/s12909-020-02249-4] [Received: 08/09/2019] [Accepted: 09/23/2020]
Abstract
BACKGROUND The learning environment is one of the most influential factors in the training of medical residents. The Dutch Residency Educational Climate Test (D-RECT) is one of the strongest instruments for measuring the learning environment. However, it had not been translated into French. The objective of this study is the psychometric validation of the French version of the D-RECT.
MATERIAL AND METHODS After translation of the D-RECT questionnaire into French, residents of five Moroccan hospitals were invited to complete the questionnaire between July and September 2018. Confirmatory factor analysis was used to evaluate construct validity using the standardized root mean square residual (SRMR), the root mean square error of approximation (RMSEA), the Comparative Fit Index (CFI) and the Tucker-Lewis Index (TLI). Reliability was assessed using internal consistency and test-retest analysis.
RESULTS During the study period, 211 residents completed the questionnaire. Confirmatory factor analysis showed an adequate model fit, with the following indicators: SRMR = 0.058, RMSEA = 0.07, CFI = 0.88, TLI = 0.87. The French translation had good internal consistency (Cronbach's alpha > 0.7 for all subscales) and good temporal stability (correlation between the two measurements = 0.89).
CONCLUSION This French version has acceptable construct validity, good internal consistency and good temporal reliability, and may be used to evaluate the learning climate. Additional research in other French-speaking contexts is necessary to confirm these results.
Affiliation(s)
- Mohamed Anass Majbar
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
- Yassin Majbar
- Faculty of Medicine, Sidi Mohamed Ben Abdellah University, Fes, Morocco
- Amine Benkabbou
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
- Laila Amrani
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
- Abdeslam Bougtab
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
- Raouf Mohsine
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
- Amine Souadka
- Digestive Surgical Oncology Department, National Institute of Oncology, Ibn Sina University Hospital Centre, Rabat, Morocco
- Surgery Department, Faculty of Medicine, Mohammed V University in Rabat, Rabat, Morocco
9
|
Debets MPM, Scheepers RA, Boerebach BCM, Arah OA, Lombarts KMJMH. Variability of residents' ratings of faculty's teaching performance measured by five- and seven-point response scales. BMC Medical Education 2020; 20:325. [PMID: 32962692 PMCID: PMC7510269 DOI: 10.1186/s12909-020-02244-9] [Received: 07/24/2019] [Accepted: 09/14/2020]
Abstract
BACKGROUND Medical faculty's teaching performance is often measured using residents' feedback, collected by questionnaires. Researchers have extensively studied the psychometric qualities of the resulting ratings. However, these studies rarely consider the number of response categories and its consequences for residents' ratings of faculty's teaching performance. We compared the variability of residents' ratings measured by five- and seven-point response scales.
METHODS This retrospective study used teaching performance data from Dutch anaesthesiology residency training programs. Ratings were collected with questionnaires from the extensively studied System for Evaluation of Teaching Qualities (SETQ), using five- and seven-point response scales. We inspected the ratings' variability by comparing standard deviations, interquartile ranges, and frequency (percentage) distributions. Relevant statistical tests were used to test differences in frequency distributions and teaching performance scores.
RESULTS We examined 3379 residents' ratings and 480 aggregated faculty scores. Residents used the additional response categories provided by the seven-point scale, especially those differentiating between positive performances. Residents' ratings and aggregated faculty scores were more evenly distributed on the seven-point scale than on the five-point scale. The seven-point scale also showed a smaller ceiling effect. After rescaling, the mean scores and (most) standard deviations of ratings from both scales were comparable.
CONCLUSIONS Ratings from the seven-point scale were more evenly distributed and could potentially yield more nuanced, specific and user-friendly feedback. Still, both scales measured (almost) similar teaching performance outcomes. In teaching performance practice, residents and faculty members should discuss whether response scales fit their preferences and goals.
Affiliation(s)
- Maarten P M Debets
- Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE, Amsterdam, The Netherlands
- Renée A Scheepers
- Research group Socio-Medical Sciences, Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Benjamin C M Boerebach
- Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE, Amsterdam, The Netherlands
- Onyebuchi A Arah
- Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, California, USA
- UCLA Center for Health Policy Research, Los Angeles, California, USA
- Center for Social Statistics, UCLA, Los Angeles, California, USA
- Department of Statistics, UCLA, Los Angeles, California, USA
- Kiki M J M H Lombarts
- Amsterdam Center for Professional Performance and Compassionate Care, Department of Medical Psychology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE, Amsterdam, The Netherlands
10
|
Romeo GR, Hirsch IB, Lash RW, Gabbay RA. Response to Letter to the Editor: "Trends in Endocrinology Fellowship Recruitment: Reasons for Concern and Possible Interventions". J Clin Endocrinol Metab 2020; 105:5850550. [PMID: 32485740 PMCID: PMC7324051 DOI: 10.1210/clinem/dgaa352] [Received: 05/28/2020] [Accepted: 05/29/2020]
Affiliation(s)
- Giulio R Romeo
- Joslin Diabetes Center, Harvard Medical School, Boston, Massachusetts
- Correspondence and Reprint Requests: Giulio R. Romeo, MD, Joslin Diabetes Center, Harvard Medical School, One Joslin Place, Boston, Massachusetts 02215
- Irl B Hirsch
- Division of Endocrinology, Diabetes and Nutrition, University of Washington, Seattle, Washington
- Robert A Gabbay
- Joslin Diabetes Center, Harvard Medical School, Boston, Massachusetts
11
|
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Academic Medicine 2019; 94:1384-1397. [PMID: 31460937 DOI: 10.1097/acm.0000000000002767]
Abstract
PURPOSE To collect and examine, using an argument-based validity approach, validity evidence for questionnaire-based tools used to assess physicians' clinical, teaching, and research performance.
METHOD In October 2016, the authors conducted a systematic search of the literature, seeking articles about questionnaire-based tools for assessing physicians' professional performance published from inception to October 2016. They included studies reporting validity evidence for tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they conducted data extraction based on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications.
RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies of research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing.
CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools for formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469. A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007. S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075. M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598. C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
12
Dominguez LC, Silkens M, Sanabria A. The Dutch residency educational climate test: construct and concurrent validation in Spanish language. International Journal of Medical Education 2019; 10:138-148. [PMID: 31371693 PMCID: PMC6773368 DOI: 10.5116/ijme.5d0c.bff7]
Abstract
OBJECTIVES To translate the 35-item version of the Dutch Residency Educational Climate Test (D-RECT) and assess its reliability, construct validity, and concurrent validity in the Spanish language. METHODS For this validation study, the D-RECT was translated following international recommendations. A total of 220 paper-based resident evaluations covering two Colombian universities were cross-sectionally collected in 2015. Confirmatory factor analysis (CFA) was used to assess the internal validity of the instrument using the comparative fit index (CFI), Tucker-Lewis index (TLI), standardized root mean square residual (SRMR), and root mean square error of approximation (RMSEA). Cronbach's α was used to assess reliability. Concurrent validity was investigated through Pearson correlations with the Spanish version of the Postgraduate Hospital Educational Environment Measure (PHEEM). RESULTS The original 9-factor structure showed an appropriate fit for the Spanish version of the instrument (CFI = 0.84, TLI = 0.82, SRMR = 0.06, and RMSEA = 0.06). The reliability coefficients were satisfactory (>0.70). The mean total scores of the D-RECT and the PHEEM showed a significant correlation (r = 0.7, p < 0.01). CONCLUSIONS This study confirms the validity and reliability of the Spanish version of the Dutch Residency Educational Climate Test, indicating that the instrument is suitable for evaluating departments' learning climate in the Spanish context. Future research is needed to confirm these findings in other Spanish-speaking countries.
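Cronbach's α, the reliability coefficient reported in this and several other studies above, can be computed directly from the item variances and the variance of the total score. The sketch below is purely illustrative (it is not the authors' code, and the ratings are made up), but it shows the calculation these papers rely on:

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # transpose: one tuple per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point Likert ratings: 4 respondents x 3 items
ratings = [
    [3, 4, 3],
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 5],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.98
```

Values above 0.70, as reported for the D-RECT subscales, are conventionally read as satisfactory internal consistency.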
Affiliation(s)
- Milou Silkens
- Professional Performance and Compassionate Care research group, Department of Medical Psychology, Amsterdam UMC, Amsterdam, the Netherlands
- Alvaro Sanabria
- Department of Surgery, Universidad de Antioquia, Medellín, Colombia
13
Bakr RH, Jarrar MK, Abumadini MS, Al Sultan AI, Larbi EB. Effect of Leadership Support, Work Conditions and Job Security on Job Satisfaction in a Medical College. Saudi Journal of Medicine & Medical Sciences 2019; 7:100-105. [PMID: 31080390 PMCID: PMC6503694 DOI: 10.4103/sjmms.sjmms_105_17]
Abstract
Background: Faculty members are crucial elements of an educational institution, and their job satisfaction is likely essential for success of the educational process. Leadership support, work conditions and perceived job security could be factors affecting academic job satisfaction. Objective: The aim of the study was to investigate the effect of leadership support, work conditions and perceived job security on the overall academic job satisfaction of faculty. Materials and Methods: A cross-sectional survey, using a structured questionnaire, was conducted to determine the effect of leadership support, work conditions and perceived job security on academic job satisfaction among faculty and teaching staff at the College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia. Multiple regression analysis was performed to examine the significance of these relationships at 95% confidence interval and P < 0.05 level of significance. Results: Leadership support (β = 0.187, t = 2.714, P = 0.007), work conditions (β = 0.199, t = 2.628, P = 0.009) and perceived job security (β = 0.264, t = 3.369, P = 0.001) were found to be significantly associated with overall academic job satisfaction. Conclusion: The results of this study support the hypothesis that faculty and teaching staff working with supportive leaders and favorable work conditions as well as having an optimized sense of perceived job security demonstrate significantly higher levels of overall academic job satisfaction. These findings provide input for policymakers, and their implementation could enhance an institution's vitality and performance, and thus enable it to fulfill its goals.
Affiliation(s)
- Radwa Hamdi Bakr
- Vice Deanship for Quality and Development, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Mu'taman Khalil Jarrar
- Vice Deanship for Quality and Development, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Mahdi Saeed Abumadini
- Vice Deanship for Quality and Development, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia; Department of Psychiatry, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Ali Ibrahim Al Sultan
- Department of Internal Medicine, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Emmanuel Bekoe Larbi
- Vice Deanship for Quality and Development, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia; Department of Internal Medicine, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
14
Chaudhry Z, Campagna-Vaillancourt M, Husein M, Varshney R, Roth K, Gooi A, Nguyen L. Perioperative Teaching and Feedback: How are we doing in Canadian OTL-HNS programs? J Otolaryngol Head Neck Surg 2019; 48:6. [PMID: 30654839 PMCID: PMC6337761 DOI: 10.1186/s40463-019-0330-2]
Abstract
Background Discrepancies between resident and faculty perceptions of optimal teaching and feedback during surgery are well known, but these differences have not yet been described in Otolaryngology - Head and Neck Surgery (OTL-HNS). The objective was thus to compare faculty and resident perceptions of perioperative teaching and feedback in OTL-HNS residency programs across Canada, with the aim of highlighting potential areas for improvement. Methods An anonymous electronic questionnaire was distributed to residents and teaching faculty in OTL-HNS across Canada, with additional paper copies distributed at four institutions. Surveys consisted of ratings on a 5-point Likert scale and open-ended questions. Responses among groups were analysed with the Wilcoxon-Mann-Whitney test, while thematic analysis was used for the open-ended questions. Results A total of 143 teaching faculty and residents responded, with statistically significant differences on 11 out of 25 variables. Namely, faculty reported higher rates of pre- and intraoperative teaching than residents reported. Faculty also felt they gave adequate feedback on residents' strengths and technical skills, contrary to what the residents thought. Both groups did agree, however, that pre-operative discussion is not consistently done, nor is feedback consistently given or sought. Conclusion Faculty and residents in OTL-HNS residency programs disagree on the frequency and optimal timing of perioperative teaching and feedback. This difference in perception emphasizes the need for a more structured approach to feedback delivery, including explicitly stating when feedback is being given, and the overall need for better communication between residents and staff.
Affiliation(s)
- Z Chaudhry
- Department of Medicine, McGill University, Montreal, Canada
- M Campagna-Vaillancourt
- Department of Otolaryngology - Head and Neck Surgery, McGill University, 1001 Decarie Boulevard, Room A02-3015, Montreal, Quebec, H4A 3J1, Canada
- M Husein
- Department of Otolaryngology - Head and Neck Surgery, Western University, London, Canada
- R Varshney
- Department of Otolaryngology - Head and Neck Surgery, McGill University, 1001 Decarie Boulevard, Room A02-3015, Montreal, Quebec, H4A 3J1, Canada
- K Roth
- Department of Otolaryngology - Head and Neck Surgery, Western University, London, Canada
- A Gooi
- Department of Otolaryngology - Head and Neck Surgery, University of Manitoba, Winnipeg, Canada
- Lhp Nguyen
- Department of Otolaryngology - Head and Neck Surgery, McGill University, 1001 Decarie Boulevard, Room A02-3015, Montreal, Quebec, H4A 3J1, Canada; Centre for Medical Education, McGill University, Montreal, Canada
15
Bindels E, Boerebach B, van der Meulen M, Donkers J, van den Goor M, Scherpbier A, Lombarts K, Heeneman S. A New Multisource Feedback Tool for Evaluating the Performance of Specialty-Specific Physician Groups: Validity of the Group Monitor Instrument. Journal of Continuing Education in the Health Professions 2019; 39:168-177. [PMID: 31306280 DOI: 10.1097/ceh.0000000000000262]
Abstract
INTRODUCTION Since clinical practice is a group-oriented process, it is crucial to evaluate performance on the group level. The Group Monitor (GM) is a multisource feedback tool that evaluates the performance of specialty-specific physician groups in hospital settings, as perceived by four different rater classes. In this study, we explored the validity of this tool. METHODS We explored three sources of validity evidence: (1) content, (2) response process, and (3) internal structure. Participants were 254 physicians, 407 staff, 621 peers, and 282 managers of 57 physician groups (in total 479 physicians) from 11 hospitals. RESULTS Content was supported by the fact that the items were based on a review of an existing instrument. Pilot rounds resulted in reformulation and reduction of items. Four subscales were identified for all rater classes: Medical practice, Organizational involvement, Professionalism, and Coordination. Physicians and staff had an extra subscale, Communication. However, the results of the generalizability analyses showed that variance in GM scores could mainly be explained by the specific hospital context and the physician group specialty. Optimization studies showed that for reliable GM scores, 3 to 15 evaluations were needed, depending on rater class, hospital context, and specialty. DISCUSSION The GM provides valid and reliable feedback on the performance of specialty-specific physician groups. When interpreting feedback, physician groups should be aware that rater classes' perceptions of their group performance are colored by the hospitals' professional culture and/or the specialty.
Affiliation(s)
- Elisa Bindels
- Ms. Bindels: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Boerebach: Staff Advisor, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands. Ms. van der Meulen: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Donkers: Assistant Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. van den Goor: PhD Candidate, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands, and Q3 Consult, Zeist, the Netherlands. Dr. Scherpbier: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Lombarts: Professor, Department of Medical Psychology, Amsterdam Center for Professional Performance and Compassionate Care, Amsterdam UMC, University of Amsterdam, Amsterdam, the Netherlands. Dr. Heeneman: Professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
16
Scheepers RA, van den Goor M, Arah OA, Heineman MJ, Lombarts KMJMH. Physicians' Perceptions of Psychological Safety and Peer Performance Feedback. Journal of Continuing Education in the Health Professions 2018; 38:250-254. [PMID: 30346339 DOI: 10.1097/ceh.0000000000000225]
Abstract
INTRODUCTION For continuous professional development, it is imperative that physicians regularly receive performance feedback from their peers. Research shows that professionals are more proactive in learning and knowledge sharing with peers in teams with more psychological safety. Psychological safety has, however, not been studied in relation to peers' performance feedback. This study investigated the association between physicians' perceptions of psychological safety and performance feedback received from their peers. METHODS We invited physicians of cardiology, gastroenterology, obstetrics and gynecology, otorhinolaryngology, pulmonology, neurology, and neurosurgery departments of an academic medical center to participate. Physicians evaluated psychological safety using Edmondson's seven-item validated scale and performance feedback using the adapted four-item feedback subscale of the validated System for Evaluation of Teaching Qualities, including corrective and positive feedback, explanations of feedback, and suggestions for improvement from peers. We analyzed the data using multilevel linear regression analyses adjusted for physicians' sex, years since being certified a medical specialist, and months working in the clinic under study. RESULTS This study included 105 physicians (86.8% participated). Psychological safety was positively associated with physicians' perceptions of performance feedback from peers (B = 0.54, 95% confidence interval = 0.34-0.73, P < .001). CONCLUSIONS Physicians who experienced more psychological safety were more likely to receive corrective and positive performance feedback from peers, explanations of feedback, and suggestions for improvement. Medical teams should consider investing in psychological safety to encourage performance feedback from peers, and thus support physicians' continuous professional development and their efforts to provide high-quality patient care.
Affiliation(s)
- Renée A Scheepers
- Dr. Scheepers: Assistant Professor, Research Group Socio-Medical Sciences, Erasmus School of Health Policy and Management, Erasmus University of Rotterdam, Rotterdam, the Netherlands. Van den Goor: PhD Candidate, Professional Performance Research Group, Department of Medical Psychology, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Arah: Professor, Professional Performance Research Group, Department of Medical Psychology, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands; Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA; and UCLA Center for Health Policy Research, Los Angeles, CA. Dr. Heineman: Professor, Professional Performance Research Group, Department of Medical Psychology, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Lombarts: Professor, Professional Performance Research Group, Department of Medical Psychology, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, the Netherlands
17
Rowan S, Newness EJ, Tetradis S, Prasad JL, Ko CC, Sanchez A. Should Student Evaluation of Teaching Play a Significant Role in the Formal Assessment of Dental Faculty? Two Viewpoints: Viewpoint 1: Formal Faculty Assessment Should Include Student Evaluation of Teaching and Viewpoint 2: Student Evaluation of Teaching Should Not Be Part of Formal Faculty Assessment. J Dent Educ 2017; 81:1362-1372. [PMID: 29093150 DOI: 10.21815/jde.017.093]
Abstract
Student evaluation of teaching (SET) is often used in the assessment of faculty members' job performance and promotion and tenure decisions, but debate over this use of student evaluations has centered on the validity, reliability, and application of the data in assessing teaching performance. Additionally, the fear of student criticism has the potential of influencing course content delivery and testing measures. This Point/Counterpoint article reviews the potential utility of and controversy surrounding the use of SETs in the formal assessment of dental school faculty. Viewpoint 1 supports the view that SETs are reliable and should be included in those formal assessments. Proponents of this opinion contend that SETs serve to measure a school's effectiveness in support of its core mission, are valid measures based on feedback from the recipients of educational delivery, and provide formative feedback to improve faculty accountability to the institution. Viewpoint 2 argues that SETs should not be used for promotion and tenure decisions, asserting that higher SET ratings do not correlate with improved student learning. The advocates of this viewpoint contend that faculty members may be influenced to focus on student satisfaction rather than pedagogy, resulting in grade inflation. They also argue that SETs are prone to gender and racial biases and that SET results are frequently misinterpreted by administrators. Low response rates and monotonic response patterns are other factors that compromise the reliability of SETs.
Affiliation(s)
- Susan Rowan
- Dr. Rowan is Clinical Associate Professor and Clinical Dean, Department of Restorative Dentistry, University of Illinois at Chicago College of Dentistry; Dr. Newness is Clinical Assistant Professor, Department of Oral Health and Integrated Care, University of Detroit Mercy School of Dentistry; Dr. Tetradis is Professor and Chair, Section of Oral and Maxillofacial Radiology, University of California, Los Angeles School of Dentistry; Dr. Prasad is Assistant Professor, Department of Oral Biology, University of Pittsburgh School of Dental Medicine; Dr. Ko is Distinguished Professor and Vice Chair, Department of Orthodontics, University of North Carolina at Chapel Hill School of Dentistry; and Dr. Sanchez is Professor and Assistant Dean for Academic Affairs, Department of Restorative Dentistry, University of Puerto Rico School of Dental Medicine.
- Elmer J Newness
- Sotirios Tetradis
- Joanne L Prasad
- Ching-Chang Ko
- Arlene Sanchez
18
Redesign of the System for Evaluation of Teaching Qualities in Anesthesiology Residency Training (SETQ Smart). Anesthesiology 2017; 125:1056-1065. [PMID: 27606931 DOI: 10.1097/aln.0000000000001341]
Abstract
BACKGROUND Given the increasing international recognition of clinical teaching as a competency and regulation of residency training, evaluation of anesthesiology faculty teaching is needed. The System for Evaluating Teaching Qualities (SETQ) Smart questionnaires were developed for assessing teaching performance of faculty in residency training programs in different countries. This study investigated (1) the structure, (2) the psychometric qualities of the new tools, and (3) the number of residents' evaluations needed per anesthesiology faculty to use the instruments reliably. METHODS Two SETQ Smart questionnaires-for faculty self-evaluation and for resident evaluation of faculty-were developed. A multicenter survey was conducted among 399 anesthesiology faculty and 430 residents in six countries. Statistical analyses included exploratory factor analysis, reliability analysis using Cronbach α, item-total scale correlations, interscale correlations, comparison of composite scales to global ratings, and generalizability analysis to assess residents' evaluations needed per faculty. RESULTS In total, 240 residents completed 1,622 evaluations of 247 faculty. The SETQ Smart questionnaires revealed six teaching qualities consisting of 25 items. Cronbach α's were very high (greater than 0.95) for the overall SETQ Smart questionnaires and high (greater than 0.80) for the separate teaching qualities. Interscale correlations were all within the acceptable range of moderate correlation. Overall, questionnaire and scale scores correlated moderately to highly with the global ratings. For reliable feedback to individual faculty, three to five resident evaluations are needed. CONCLUSIONS The first internationally piloted questionnaires for evaluating individual anesthesiology faculty teaching performance can be reliably, validly, and feasibly used for formative purposes in residency training.
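The conclusion that "three to five resident evaluations are needed" comes from generalizability analysis. The same intuition can be sketched with the Spearman-Brown prophecy formula, which gives the reliability of the mean of n independent ratings; the single-rater reliabilities below are hypothetical values chosen for illustration, not figures from the study:

```python
import math

def mean_rating_reliability(rho, n):
    """Spearman-Brown: reliability of the mean of n independent ratings
    given single-rater reliability rho."""
    return n * rho / (1 + (n - 1) * rho)

def raters_needed(rho, target=0.70):
    """Smallest n whose mean-rating reliability reaches the target."""
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

# Hypothetical single-rater reliabilities spanning a plausible range
for rho in (0.35, 0.45):
    print(rho, raters_needed(rho))  # yields 5 and 3 evaluations, respectively
```

With a 0.70 reliability target, modest single-rater reliabilities in this range already land in the three-to-five-evaluation band the paper reports, which is why such tools remain feasible for routine formative feedback.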
19
Cubas Rolim E, Martins de Oliveira J, Dalvi LT, Moreira DC, Garcia Caldas N, Fernandes Lobo F, André Polli D, Campos ÉG, Hermes-Lima M. Blog construction as an effective tool in biochemistry active learning. Biochemistry and Molecular Biology Education 2017; 45:205-215. [PMID: 27862849 DOI: 10.1002/bmb.21028]
Abstract
To boost active learning, undergraduate students were given the task of preparing blogs on topics in clinical biochemistry. This "experiment" lasted for 12 teaching semesters (from 2008 to 2013) and included a survey on the blogs' usefulness at the end of each semester. The survey (applied in the 2008-2010 period) used a Likert-like questionnaire with eight questions and a 1-to-6 scale, from "totally disagree" to "fully agree." Answers of 428 students were analyzed and indicated overall approval of the blog activity: 86% and 35% of the responses scored 4-to-6 and 6, respectively. Considering the survey results, the high grades obtained by students on their blogs (averaging 8.3 in 2008-2010), and the significant increase in average grades of the clinical biochemistry exam after the beginning of the blog system (from 5.5 in 2007 to 6.4 in 2008-2010), we concluded that blogging activity on biochemistry is a promising tool for boosting active learning.
Affiliation(s)
- Estêvão Cubas Rolim
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Hospital Universitário de Brasília (HUB), Universidade de Brasília, Brasília, Brazil
- Julia Martins de Oliveira
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Hospital Universitário de Brasília (HUB), Universidade de Brasília, Brasília, Brazil
- Luana T Dalvi
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Daniel C Moreira
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Natasha Garcia Caldas
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Hospital Universitário de Brasília (HUB), Universidade de Brasília, Brasília, Brazil
- Felipe Fernandes Lobo
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Hospital Universitário de Brasília (HUB), Universidade de Brasília, Brasília, Brazil
- Démerson André Polli
- Departamento de Estatística, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Élida G Campos
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
- Marcelo Hermes-Lima
- Departamento de Biologia Celular, Universidade de Brasília, Brasília, Distrito Federal, Brazil
20
van der Meulen MW, Boerebach BCM, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Arah OA, Lombarts KMJMH. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance. Journal of Continuing Education in the Health Professions 2017; 37:9-18. [PMID: 28212117 DOI: 10.1097/ceh.0000000000000143]
Abstract
INTRODUCTION Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. We investigated the psychometric qualities and feasibility of the INCEPT using exploratory and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses. RESULTS For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self-)management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence came from the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident, and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents, and three to four coworkers were sufficient. DISCUSSION The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
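The internal-consistency statistics this abstract reports (Cronbach's α and corrected item-total correlations) can be computed directly from an item-score matrix. A minimal illustrative sketch, not the authors' code; the data shape and function names are assumptions:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])
```

Thresholds like α ≥ 0.84 and item-total correlations ≥ 0.52 in the abstract correspond to applying such functions per subscale and respondent group.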
Affiliation(s)
- Mirja W van der Meulen
- Ms. van der Meulen: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Boerebach: Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Smirnova: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Heeneman: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. oude Egbrink: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. Arah: Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA, and UCLA Center for Health Policy Research, Los Angeles, CA. Dr. Lombarts: Professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
21
Scheepers RA, Arah OA, Heineman MJ, Lombarts KMJMH. How personality traits affect clinician-supervisors' work engagement and subsequently their teaching performance in residency training. MEDICAL TEACHER 2016; 38:1105-1111. [PMID: 27089424] [DOI: 10.3109/0142159x.2016.1170774]
Abstract
PURPOSE Clinician-supervisors often work simultaneously as doctors and teachers. Supervisors who are more engaged in their teaching work are evaluated as better supervisors. Work engagement is affected by the work environment, yet the role of supervisors' personality traits is unclear. This study examined (i) the impact of supervisors' personality traits on work engagement in their doctor and teacher roles and (ii) how work engagement in both roles affects their teaching performance. METHODS Residents evaluated supervisors' teaching performance using the validated System for Evaluation of Teaching Qualities. Supervisors reported their work engagement in the doctor and teacher roles separately, using the validated Utrecht Work Engagement Scale. Supervisors' personality traits were measured using the Big Five Inventory's five-factor model, covering conscientiousness, agreeableness, extraversion, emotional stability, and openness. RESULTS Overall, 549 (68%) residents and 636 (78%) supervisors participated. Conscientiousness, extraversion, and agreeableness were positively associated with supervisors' engagement in their teaching work, which was subsequently positively associated with teaching performance. CONCLUSIONS Conscientious, extraverted, and agreeable supervisors showed more engagement with their teaching work, which made them more likely to deliver adequate residency training. In addition to optimizing the work environment, faculty development and career planning could be tailor-made to fit supervisors' personality traits.
Affiliation(s)
- Renée A Scheepers
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Onyebuchi A Arah
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Department of Epidemiology, The Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA, USA
- UCLA Center for Health Policy Research, Los Angeles, CA, USA
- Maas Jan Heineman
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Member of the Board of Directors, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M J M H Lombarts
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
22
Van Der Leeuw RM, Boerebach BCM, Lombarts KMJMH, Heineman MJ, Arah OA. Clinical teaching performance improvement of faculty in residency training: A prospective cohort study. MEDICAL TEACHER 2016; 38:464-470. [PMID: 26166690] [DOI: 10.3109/0142159x.2015.1060302]
Abstract
PURPOSE The purpose of this study was to investigate how aspects of a teaching performance evaluation system may affect faculty's teaching performance improvement, as perceived by residents over time. METHODS Prospective multicenter cohort study conducted in The Netherlands between 1 September 2008 and 1 February 2013. Nine hundred and one residents and 1068 faculty of 65 teaching programs in 16 hospitals were invited to annually (self-)evaluate teaching performance using the validated, specialty-specific System for Evaluation of Teaching Qualities (SETQ). We used multivariable-adjusted generalized estimating equations to analyze the effects of (i) residents' numerical feedback, (ii) narrative feedback, and (iii) faculty's participation in self-evaluation on residents' perception of faculty's teaching performance improvement. RESULTS The average response rate over three years was 69% for faculty and 81% for residents. Higher numerical feedback scores were associated with residents rating faculty as having improved their teaching performance one year following the first measurement (regression coefficient, b: 0.077; 95% CI: 0.002-0.151; p = 0.045), but not after the second wave of receiving feedback and evaluating improvement. Receiving more suggestions for improvement was associated with improved teaching performance in subsequent years. CONCLUSIONS Evaluation systems on clinical teaching performance appear helpful in enhancing teaching performance in residency training programs. High-performing teachers also appear to improve in the perception of the residents.
23
Silkens MEWM, Smirnova A, Stalmeijer RE, Arah OA, Scherpbier AJJA, Van Der Vleuten CPM, Lombarts KMJMH. Revisiting the D-RECT tool: Validation of an instrument measuring residents' learning climate perceptions. MEDICAL TEACHER 2016; 38:476-481. [PMID: 26172348] [DOI: 10.3109/0142159x.2015.1060300]
Abstract
INTRODUCTION Credible evaluation of the learning climate requires valid and reliable instruments to inform quality improvement activities. Since its initial validation, the Dutch Residency Educational Climate Test (D-RECT) has been increasingly used to evaluate the learning climate, yet it has not been tested in its final form or at its actual level of use: the department. AIM Our aim was to re-investigate the internal validity and reliability of the D-RECT at the resident and department levels. METHODS D-RECT evaluations collected during 2012-2013 were included. Internal validity was assessed using exploratory and confirmatory factor analyses. Reliability was assessed using generalizability theory. RESULTS In total, 2306 evaluations and 291 departments were included. Exploratory factor analysis showed a 9-factor structure containing 35 items: teamwork, role of specialty tutor, coaching and assessment, formal education, resident peer collaboration, work adapted to residents' competence, patient sign-out, educational atmosphere, and accessibility of supervisors. Confirmatory factor analysis indicated acceptable to good fit. Three resident evaluations were needed to assess the overall learning climate reliably, and eight to assess the subscales. CONCLUSION This study reaffirms the reliability and internal validity of the D-RECT in measuring the learning climate of residency training. Ongoing evaluation of the instrument remains important.
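Rater counts like "three evaluations for the overall climate, eight for subscales" come from projecting reliability across numbers of raters in a generalizability analysis. A minimal sketch of that projection using the Spearman-Brown prophecy formula (illustrative only; the authors' generalizability-theory analysis is richer than this, and the input reliabilities below are assumed values):

```python
import math

def projected_reliability(single_rater_rel: float, k: int) -> float:
    # Spearman-Brown prophecy: reliability of the mean of k parallel ratings.
    r = single_rater_rel
    return k * r / (1 + (k - 1) * r)

def raters_needed(single_rater_rel: float, target: float = 0.70) -> int:
    # Smallest number of raters whose pooled reliability reaches the target.
    r = single_rater_rel
    return math.ceil(target * (1 - r) / (r * (1 - target)))
```

For example, a single-rater reliability of 0.45 requires three raters to push the pooled reliability just past 0.70.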
Affiliation(s)
- Alina Smirnova
- University of Amsterdam, The Netherlands
- Maastricht University, The Netherlands
- Onyebuchi A Arah
- University of Amsterdam, The Netherlands
- University of California Los Angeles, USA
- UCLA Center for Health Policy Research, USA
24
Sommer J, Lanier C, Perron NJ, Nendaz M, Clavet D, Audétat MC. A teaching skills assessment tool inspired by the Calgary-Cambridge model and the patient-centered approach. PATIENT EDUCATION AND COUNSELING 2016; 99:600-609. [PMID: 26680755] [DOI: 10.1016/j.pec.2015.11.024]
Abstract
OBJECTIVE The aim of this study was to develop a descriptive tool for peer review of clinical teaching skills. Two analogies framed our research: (1) between the patient-centered and the learner-centered approach; (2) between the structures of clinical encounters (Calgary-Cambridge communication model) and teaching sessions. METHOD Over the course of one year, each step of the action research was carried out in collaboration with twelve clinical teachers from an outpatient general internal medicine clinic and with three experts in medical education. Content validation consisted of a literature review, expert opinion, and the participatory research process. Interrater reliability was evaluated by three clinical teachers coding thirty audiotaped standardized learner-teacher interactions. RESULTS The tool contains sixteen items covering the process and content of clinical supervision. Descriptors define the expected teaching behaviors at three levels of competence. Interrater reliability was significant for eleven items (Kendall's coefficient, p<0.05). CONCLUSION This peer assessment tool has high reliability and can be used to facilitate the acquisition of teaching skills.
Affiliation(s)
- Johanna Sommer
- Primary care unit, University of Geneva, Geneva, Switzerland.
- Cédric Lanier
- Primary care unit, University of Geneva, Geneva, Switzerland; Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland.
- Noelle Junod Perron
- Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland; Unit of development and research in medical education, University of Geneva, Geneva, Switzerland.
- Mathieu Nendaz
- Unit of development and research in medical education, University of Geneva, Geneva, Switzerland; Service of General Internal Medicine, Geneva University Hospitals, Geneva, Switzerland.
- Diane Clavet
- Center for health sciences education, Université de Sherbrooke, Sherbrooke, Canada.
- Marie-Claude Audétat
- Primary care unit, University of Geneva, Geneva, Switzerland; Family medicine and Emergency Department, Université de Montréal, Montréal, Canada.
25
Medical Students' Perceptions of Clinical Teachers as Role Model. PLoS One 2016; 11:e0150478. [PMID: 26959364] [PMCID: PMC4784941] [DOI: 10.1371/journal.pone.0150478]
Abstract
Introduction Role models facilitate student learning and assist in the development of professional identity. However, social organization and cultural values influence the choice of role models. Considering that the social organization and cultural values in South East Asia differ from those of other regions, it is important to know whether this affects the characteristics medical students look for in their role models in these societies. Methods A 32-item questionnaire was developed and self-administered to undergraduate medical students. Participants rated the characteristics on a three-point scale (0 = not important, 1 = mildly important, 2 = very important). One-way ANOVA and Student's t-test were used to compare the groups. Results Of the distributed questionnaires, 349 (65.23%) were returned. The highest-ranked themes were teaching and facilitating learning, patient care, and continuing professional development, followed by communication and professionalism. Providing a safe environment and guiding personal and professional development were rated least important. Differences were also observed between the scores of males and females. Conclusion Globally, some attributes are perceived as essential for role models, while others are considered desirable. An understanding of which attributes are essential and which are desirable can help medical educators devise strategies to reinforce those attributes within their institutions.
26
Lee-Hsieh J, O'Brien A, Liu CY, Cheng SF, Lee YW, Kao YH. The development and validation of the Clinical Teaching Behavior Inventory (CTBI-23): Nurse preceptors' and new graduate nurses' perceptions of precepting. NURSE EDUCATION TODAY 2016; 38:107-114. [PMID: 26743525] [DOI: 10.1016/j.nedt.2015.12.005]
Abstract
BACKGROUND Few studies have examined the perceptions of clinical teaching behaviors among both nurse preceptors and preceptees. PURPOSES To develop a Clinical Teaching Behavior Inventory (CTBI) for nurse preceptors' self-evaluation and for new graduate nurse preceptees' evaluation of preceptor clinical teaching behaviors, and to test the validity and reliability of the CTBI. METHODS This study used mixed research techniques in five phases. Phase I: based on a literature review, the researchers developed an instrument to measure clinical teaching behaviors. Phase II: 17 focus group interviews were conducted with 63 preceptors and 24 new graduate nurses from five hospitals across Taiwan. Clinical teaching behavior themes were extracted from the focus group data and integrated into the domains and items of the CTBI. Phase III: two rounds of an expert Delphi study were conducted to determine the content validity of the instrument. Phase IV: a total of 290 nurse preceptors and 260 new graduate nurses were recruited voluntarily in the same five hospitals in Taiwan. Of these, 521 completed questionnaires to test the construct validity of the CTBI using confirmatory factor analysis. Phase V: the internal consistency and reliability of the instrument were tested. RESULTS The CTBI consists of 23 items in six domains: (1) 'Committing to Teaching'; (2) 'Building a Learning Atmosphere'; (3) 'Using Appropriate Teaching Strategies'; (4) 'Guiding Inter-professional Communication'; (5) 'Providing Feedback and Evaluation'; and (6) 'Showing Concern and Support'. The confirmatory factor analysis yielded a good fit and reliable scores for the CTBI-23 model. CONCLUSIONS The CTBI-23 is a valid and reliable instrument for identifying the clinical teaching behaviors of a preceptor as perceived by preceptors and new graduate preceptees. The CTBI-23 depicts the clinical teaching behaviors of nurse preceptors in Taiwan.
Affiliation(s)
- Jane Lee-Hsieh
- Graduate Institute of Allied Health Education, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Anthony O'Brien
- School of Nursing and Midwifery, Faculty of Health and Medicine, University of Newcastle, NSW, Australia
- Chieh-Yu Liu
- Graduate Institute of Nursing-Midwifery, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Su-Fen Cheng
- School of Nursing, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Yea-Wen Lee
- Nursing Department, Changhua Christian Hospital, Changhua, Taiwan
- Yu-Hsiu Kao
- Graduate Institute of Allied Health Education, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
27
Lases SSL, Arah OA, Pierik EGJMR, Heineman E, Lombarts MJMHK. Residents' engagement and empathy associated with their perception of faculty's teaching performance. World J Surg 2015; 38:2753-60. [PMID: 25008244] [DOI: 10.1007/s00268-014-2687-8]
Abstract
BACKGROUND Faculty members rely on residents' feedback about their teaching performance. The influence of residents' characteristics on evaluations of faculty is relatively unexplored. We aimed to evaluate the levels of work engagement and empathy among residents and the association of both characteristics with their evaluation of the faculty's teaching performance. METHODS A multicenter questionnaire study among 271 surgery and gynecology residents was performed from September 2012 to February 2013. Residents' ratings of the faculty's teaching performance were collected using the System for Evaluation of Teaching Quality (SETQ). Residents were also invited to fill out standardized measures of work engagement and empathy using the short Utrecht Work Engagement Scale and the Jefferson Scale of Physician Empathy, respectively. Linear regression analysis using generalized estimating equations was used to evaluate the association of residents' engagement and empathy with residents' evaluations of teaching performance. RESULTS Overall, 204 (75.3 %) residents completed 1814 SETQ evaluations of 302 faculty, and 143 (52.8 %) and 140 (51.7 %) residents, respectively, completed the engagement and empathy measurements. The median scores of residents' engagement and empathy were 4.56 (scale 0-6) and 5.55 (scale 1-7), respectively. Higher levels of residents' engagement (regression coefficient b = 0.128; 95 % confidence interval (CI) 0.072-0.184; p < 0.001) and empathy (b = 0.113; 95 % CI 0.063-0.164; p < 0.001) were associated with higher faculty teaching performance scores. CONCLUSIONS Residents' engagement and empathy appear to be positively associated with their evaluation of the faculty's performance. A possible explanation is that residents who are more engaged and can understand and share others' perspectives stimulate and experience faculty's teaching better than others.
Affiliation(s)
- S S Lenny Lases
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Meibergdreef 9, PO Box 22660, Amsterdam, 1100 DD, The Netherlands
28
Scheepers RA, Arah OA, Heineman MJ, Lombarts KMJMH. In the eyes of residents good supervisors need to be more than engaged physicians: the relevance of teacher work engagement in residency training. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2015; 20:441-55. [PMID: 25118859] [DOI: 10.1007/s10459-014-9538-0]
Abstract
During their development into competent medical specialists, residents benefit from their attending physicians' excellence in teaching and role modelling. Work engagement increases overall job performance, but it is unknown whether this also applies to attending physicians' teaching performance and role modelling. Attending physicians in clinical teaching practice take on roles as doctors and teachers. Therefore, this study (a) examined levels of attending physicians' work engagement in both roles, and (b) quantified the relationships of work engagement in both roles to their teaching performance and role model status. In this multicenter survey, residents evaluated attending physicians' teaching performance and role model status using the validated System for Evaluation of Teaching Qualities. Attending physicians self-reported their work engagement on a 7-point scale, separately for their roles as doctors and teachers, using the validated 9-item Utrecht Work Engagement Scale. In total, 549 (68 %) residents filled out 4,305 attending physician evaluations, and 627 (78 %) attending physicians participated. Attending physicians reported higher work engagement in their doctor than in their teacher roles (mean difference: 0.95; 95 % CI 0.86-1.04; p < 0.001). Teacher work engagement was positively related to teaching performance (regression coefficient, B: 0.11; 95 % CI 0.08-0.14; p < 0.001), which in turn was positively associated with role model status (B: 1.08; 95 % CI 0.10-1.18; p < 0.001). In the eyes of residents, good supervisors need to be more than engaged physicians, as attending physicians with high teacher work engagement were evaluated as better teachers.
Affiliation(s)
- Renée A Scheepers
- Professional Performance Research Group, Center for Evidence Based Education, Academic Medical Center, University of Amsterdam, Meibergdreef 9, PO Box 22700, 1100 DE, Amsterdam, The Netherlands
29
Wiegers SE, Houser SR, Pearson HE, Untalan A, Cheung JY, Fisher SG, Kaiser LR, Feldman AM. A Metric-Based System for Evaluating the Productivity of Preclinical Faculty at an Academic Medical Center in the Era of Clinical and Translational Science. Clin Transl Sci 2015; 8:357-61. [PMID: 25740181] [DOI: 10.1111/cts.12269]
Abstract
Academic medical centers are faced with increasing budgetary constraints due to a flat National Institutes of Health budget, lower reimbursements for clinical services, higher costs of technology including informatics, and a changing competitive landscape. As such, institutional stakeholders are increasingly asking whether resources are allocated appropriately and whether there are objective methods for measuring faculty contributions and engagement. The team-based nature of translational research can make assessing individual faculty contributions particularly challenging. For over a decade, we have used an objective scoring system called the Matrix to assess faculty productivity and engagement in four areas: research, education, scholarship, and administration or services. The Matrix was developed to be dynamic, quantitative, and able to ensure that a fully engaged educator would have a Matrix score comparable to that of a fully engaged investigator. In this report, we present the Matrix in its current form in order to provide a well-tested objective system of performance evaluation for nonclinical faculty to help academic leaders in decision making.
Affiliation(s)
- Susan E Wiegers
- Department of Medicine, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Steven R Houser
- Department of Physiology, Cardiovascular Research Center, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Helen E Pearson
- Department of Anatomy and Cell Biology, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Ann Untalan
- Department of Finance, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Joseph Y Cheung
- Department of Medicine, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Susan G Fisher
- Department of Clinical Sciences, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Larry R Kaiser
- Department of Surgery, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
- Arthur M Feldman
- Departments of Medicine and Physiology, Temple University School of Medicine, Philadelphia, Pennsylvania, USA
30
Masum AKM, Azad MAK, Beh LS. Determinants of academics' job satisfaction: empirical evidence from private universities in Bangladesh. PLoS One 2015; 10:e0117834. [PMID: 25699518] [PMCID: PMC4336319] [DOI: 10.1371/journal.pone.0117834]
Abstract
The job satisfaction of academics is a complex function of a number of variables, such as demographic characteristics, the work itself, pay, work responsibilities, variety of tasks, promotional opportunities, relationships with co-workers, and others. Academics may be simultaneously satisfied with some facets of the job and dissatisfied with others. This paper aims to determine the influential factors that contribute to the enhancement or reduction of academics' job satisfaction in private universities in Bangladesh, with special reference to Dhaka, the capital city. A total of 346 respondents were drawn from ten private universities using non-probability sampling. A pre-tested, closed-ended questionnaire using a seven-point Likert scale was used for data collection. In this study, descriptive statistics, Pearson product-moment correlation, multiple regression, and factor analysis were used as statistical tools. A conceptual model of job satisfaction was developed and applied to academics' job satisfaction. The results reveal that compensation package, supervisory support, job security, training and development opportunities, team cohesion, career growth, working conditions, and organizational culture and policies are positively associated with academics' job satisfaction. Among them, three factors stood out as significant contributors to the job satisfaction of academics: compensation package, job security, and working conditions. Therefore, the management of private universities should focus their efforts on these areas of human resource management to maintain academics' job satisfaction and employee retention. The study will be useful for university management in improving overall job satisfaction, as it suggests some strategies for employee satisfaction practices.
Affiliation(s)
- Abdul Kadar Muhammad Masum
- Department of Administrative Studies & Politics, Faculty of Economics & Administration, University of Malaya, Kuala Lumpur, Malaysia
- Department of Business Administration, International Islamic University Chittagong, Chittagong, Bangladesh
- Md. Abul Kalam Azad
- Department of Business Administration, International Islamic University Chittagong, Chittagong, Bangladesh
- Department of Applied Statistics, Faculty of Economics & Administration, University of Malaya, Kuala Lumpur, Malaysia
- Loo-See Beh
- Department of Administrative Studies & Politics, Faculty of Economics & Administration, University of Malaya, Kuala Lumpur, Malaysia
31
Da Dalt L, Anselmi P, Furlan S, Carraro S, Baraldi E, Robusto E, Perilongo G. Validating a set of tools designed to assess the perceived quality of training of pediatric residency programs. Ital J Pediatr 2015; 41:2. [PMID: 25599713] [PMCID: PMC4339004] [DOI: 10.1186/s13052-014-0106-2]
Abstract
Background The Paediatric Residency Program (PRP) of Padua, Italy, developed a set of questionnaires to assess the quality of the training provided by each faculty member, the quality of the professional experience the residents gained during the various rotations, and the functioning of the Resident Affair Committee (RAC), named respectively the "Tutor Assessment Questionnaire" (TAQ), the "Rotation Assessment Questionnaire" (RAQ), and the "RAC Assessment Questionnaire". The process that led to their validation is presented here. Method Between July 2012 and July 2013, 51 residents evaluated 26 tutors through the TAQ and 25 rotations through the RAQ. Forty-eight residents filled in the RAC Assessment Questionnaire. The three questionnaires were validated through a many-facet Rasch measurement analysis. Results In their final form, the questionnaires produced measures that were valid, reliable, unidimensional, and free from gender biases. The TAQ and RAQ distinguished tutors and rotations into 5-6 levels of differing quality and effectiveness. The three questionnaires allowed the identification of strengths and weaknesses of tutors, rotations, and the RAC. The agreement observed among judges was consistent with the predicted values, suggesting that no particular training is required to develop a shared interpretation of the items. Conclusions The work presented here enriches the armamentarium of tools that residency programs can use to monitor their functioning. Wider application of these tools will serve to consolidate and further refine the results presented. Electronic supplementary material The online version of this article (doi:10.1186/s13052-014-0106-2) contains supplementary material, which is available to authorized users.
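The many-facet Rasch analysis used here models the log-odds of a rating as an additive combination of facet parameters (e.g. resident severity, tutor quality). A minimal dichotomous sketch of the core Rasch probability, illustrative only: the facet names are assumptions, and the full many-facet model also includes rating-scale thresholds:

```python
import math

def rasch_probability(ability: float, difficulty: float, severity: float = 0.0) -> float:
    """P(positive response) when the log-odds equal ability - difficulty - rater severity."""
    logit = ability - difficulty - severity
    return 1.0 / (1.0 + math.exp(-logit))
```

When ability exactly matches the combined difficulty and rater severity, the predicted probability is 0.5; estimation then amounts to finding the facet values that best reproduce the observed ratings.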
Affiliation(s)
- Liviana Da Dalt - Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Sara Furlan - Department FISPPA, University of Padua, Padua, Italy.
- Silvia Carraro - Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Eugenio Baraldi - Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Giorgio Perilongo - Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
32
Slootweg IA, Lombarts KMJMH, Boerebach BCM, Heineman MJ, Scherpbier AJJA, van der Vleuten CPM. Development and validation of an instrument for measuring the quality of teamwork in teaching teams in postgraduate medical training (TeamQ). PLoS One 2014; 9:e112805. [PMID: 25393006 PMCID: PMC4231160 DOI: 10.1371/journal.pone.0112805] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2014] [Accepted: 10/15/2014] [Indexed: 11/19/2022] Open
Abstract
Background Teamwork between clinical teachers is a challenge in postgraduate medical training. Although there are several instruments available for measuring teamwork in health care, none of them are appropriate for teaching teams. The aim of this study is to develop an instrument (TeamQ) for measuring teamwork, to investigate its psychometric properties and to explore how clinical teachers assess their teamwork. Method To select the items to be included in the TeamQ questionnaire, we conducted a content validation in 2011, using a Delphi procedure in which 40 experts were invited. Next, for pilot testing the preliminary tool, 1446 clinical teachers from 116 teaching teams were requested to complete the TeamQ questionnaire. For data analyses we used statistical strategies: principal component analysis, internal consistency reliability coefficient, and the number of evaluations needed to obtain reliable estimates. Lastly, the median TeamQ scores were calculated for teams to explore the levels of teamwork. Results In total, 31 experts participated in the Delphi study. In total, 114 teams participated in the TeamQ pilot. The median team response was 7 evaluations per team. The principal component analysis revealed 11 factors; 8 were included. The reliability coefficients of the TeamQ scales ranged from 0.75 to 0.93. The generalizability analysis revealed that 5 to 7 evaluations were needed to obtain internal reliability coefficients of 0.70. In terms of teamwork, the clinical teachers scored residents' empowerment as the highest TeamQ scale and feedback culture as the area that would most benefit from improvement. Conclusions This study provides initial evidence of the validity of an instrument for measuring teamwork in teaching teams. The high response rates and the low number of evaluations needed for reliably measuring teamwork indicate that TeamQ is feasible for use by teaching teams. 
Future research could explore the effectiveness of feedback on teamwork in follow up measurements.
Affiliation(s)
- Irene A. Slootweg - Professional Performance Research group, Center of Expertise in Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands; Department of Educational Development and Research, University of Maastricht, Maastricht, the Netherlands
- Kiki M. J. M. H. Lombarts - Professional Performance Research group, Center of Expertise in Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Benjamin C. M. Boerebach - Professional Performance Research group, Center of Expertise in Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Maas Jan Heineman - Professional Performance Research group, Center of Expertise in Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Cees P. M. van der Vleuten - Department of Educational Development and Research, University of Maastricht, Maastricht, the Netherlands
33
Boerebach BCM, Lombarts KMJMH, Arah OA. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training. Eval Health Prof 2014; 39:21-32. [DOI: 10.1177/0163278714552520] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians’ teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians’ teaching performance.
Affiliation(s)
- Benjamin C. M. Boerebach - Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M. J. M. H. Lombarts - Professional Performance research group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Onyebuchi A. Arah - Department of Epidemiology, University of California, Los Angeles (UCLA), School of Public Health, Los Angeles, CA, USA; UCLA Center for Health Policy Research, Los Angeles, CA, USA
34
Lombarts KMJMH. A (good) look at the rating of teaching effectiveness: towards holistic and programmatic assessment. MEDICAL EDUCATION 2014; 48:744-747. [PMID: 25039729 DOI: 10.1111/medu.12491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
35
The Impact of Resident- and Self-Evaluations on Surgeon’s Subsequent Teaching Performance. World J Surg 2014; 38:2761-9. [DOI: 10.1007/s00268-014-2655-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
36
Lombarts KMJMH, Plochg T, Thompson CA, Arah OA. Measuring professionalism in medicine and nursing: results of a European survey. PLoS One 2014; 9:e97069. [PMID: 24849320 PMCID: PMC4029578 DOI: 10.1371/journal.pone.0097069] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2013] [Accepted: 04/14/2014] [Indexed: 11/18/2022] Open
Abstract
Background Leveraging professionalism has been put forward as a strategy to drive improvement of patient care. We investigate professionalism as a factor influencing the uptake of quality improvement activities by physicians and nurses working in European hospitals. Objective To (i) investigate the reliability and validity of data yielded by using the self-developed professionalism measurement tool for physicians and nurses, (ii) describe the levels of professionalism they displayed, and (iii) quantify the extent to which professional attitudes predict professional behaviors. Methods and Materials We designed and deployed survey instruments amongst 5920 physicians and nurses working in European hospitals. This was conducted under the cross-sectional multilevel study “Deepening Our Understanding of Quality Improvement in Europe” (DUQuE). We used psychometric and generalized linear mixed modelling techniques to address the aforementioned objectives. Results In all, 2067 physicians (response rate 69.8%) and 2805 nurses (94.8%) representing 74 hospitals in 7 European countries participated. The professionalism instrument revealed five subscales of professional attitude and one scale for professional behaviour with moderate to high internal consistency and reliability. Physicians and nurses display equally high professional attitude sum scores (11.8 and 11.9 respectively out of 16) but seem to have different perceptions towards separate professionalism aspects. Lastly, professionals displaying higher levels of professional attitudes were more involved in quality improvement actions (physicians: b = 0.019, P<0.0001; nurses: b = 0.016, P<0.0001) and more inclined to report colleagues’ underperformance (physicians – odds ratio (OR) 1.12, 95% CI 1.01–1.24; nurses – OR 1.11, 95% CI 1.01–1.23) or medical errors (physicians – OR 1.14, 95% CI 1.01–1.23; nurses – OR 1.43, 95% CI 1.22–1.67).
Involvement in QI actions was found to increase the odds of reporting incompetence or medical errors. Conclusion A tool that reliably and validly measures European physicians’ and nurses’ commitment to professionalism is now available. Collectively leveraging professionalism as a quality improvement strategy may be beneficial to patient care quality.
Affiliation(s)
- Kiki M. J. M. H. Lombarts - Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Thomas Plochg - Department of Public Health, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Caroline A. Thompson - Department of Epidemiology, Fielding School of Public Health, University of California Los Angeles, Los Angeles, California, United States of America; Palo Alto Medical Foundation Research Institute, Palo Alto, California, United States of America
- Onyebuchi A. Arah - Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; Department of Epidemiology, Fielding School of Public Health, University of California Los Angeles, Los Angeles, California, United States of America; UCLA Center for Health Policy Research, Los Angeles, California, United States of America
37
Personality traits affect teaching performance of attending physicians: results of a multi-center observational study. PLoS One 2014; 9:e98107. [PMID: 24844725 PMCID: PMC4028262 DOI: 10.1371/journal.pone.0098107] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2013] [Accepted: 04/28/2014] [Indexed: 11/19/2022] Open
Abstract
Background Worldwide, attending physicians train residents to become competent providers of patient care. To assess the adequacy of training, attending physicians are increasingly evaluated on their teaching performance. Research suggests that personality traits affect teaching performance, consistent with studied effects of personality traits on job performance and academic performance in medicine. However, to date, research in clinical teaching practice has not used quantitative methods and has not accounted for specialty differences. We empirically studied the relationship of attending physicians' personality traits with their teaching performance across surgical and non-surgical specialties. Method We conducted a survey across surgical and non-surgical specialties in eighteen medical centers in the Netherlands. Residents evaluated attending physicians' overall teaching performance, as well as the specific domains learning climate, professional attitude, communication, evaluation, and feedback, using the validated 21-item System for Evaluation of Teaching Qualities (SETQ). Attending physicians self-evaluated their personality traits on a 5-point scale using the validated 10-item Big Five Inventory (BFI), yielding the Five Factor model: extraversion, conscientiousness, neuroticism, agreeableness and openness. Results Overall, 622 (77%) attending physicians and 549 (68%) residents participated. Extraversion positively related to overall teaching performance (regression coefficient, B: 0.05, 95% CI: 0.01 to 0.10, P = 0.02). Openness was negatively associated with scores on feedback for surgical specialties only (B: −0.10, 95% CI: −0.15 to −0.05, P<0.001) and conscientiousness was positively related to evaluation of residents for non-surgical specialties only (B: 0.13, 95% CI: 0.03 to 0.22, p = 0.01). Conclusions Extraverted attending physicians were consistently evaluated as better supervisors. Surgical attending physicians who displayed high levels of openness were evaluated as less adequate feedback-givers, and conscientious non-surgical attending physicians seemed to be good at evaluating residents. These insights could contribute to future work on the development paths of attending physicians in medical education.
38
Wagner C, Groene O, Thompson CA, Klazinga NS, Dersarkissian M, Arah OA, Suñol R. Development and validation of an index to assess hospital quality management systems. Int J Qual Health Care 2014; 26 Suppl 1:16-26. [PMID: 24618212 PMCID: PMC4001698 DOI: 10.1093/intqhc/mzu021] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
Objective The aim of this study was to develop and validate an index to assess the implementation of quality management systems (QMSs) in European countries. Design Questionnaire development was facilitated through expert opinion, literature review and earlier empirical research. A cross-sectional online survey utilizing the questionnaire was undertaken between May 2011 and February 2012. We used psychometric methods to explore the factor structure, reliability and validity of the instrument. Setting and participants As part of the Deepening our Understanding of Quality improvement in Europe (DUQuE) project, we invited a random sample of 188 hospitals in 7 countries. The quality managers of these hospitals were the main respondents. Main Outcome Measure The extent of implementation of QMSs. Results Factor analysis yielded nine scales, which were combined to build the Quality Management Systems Index. Cronbach's reliability coefficients were satisfactory (ranging from 0.72 to 0.82) for eight scales and low for one scale (0.48). Corrected item-total correlations provided adequate evidence of factor homogeneity. Inter-scale correlations showed that every factor was related, but also distinct, and added to the index. Construct validity testing showed that the index was related to recent measures of quality. Participating hospitals attained a mean value of 19.7 (standard deviation of 4.7) on the index that theoretically ranged from 0 to 27. Conclusion Assessing QMSs across Europe has the potential to help policy-makers and other stakeholders to compare hospitals and focus on the most important areas for improvement.
Affiliation(s)
- C Wagner - NIVEL, Netherlands Institute for Health Services Research, PO Box 1568, 3500 BN Utrecht, the Netherlands.
39
Plochg T, Arah OA, Botje D, Thompson CA, Klazinga NS, Wagner C, Mannion R, Lombarts K. Measuring clinical management by physicians and nurses in European hospitals: development and validation of two scales. Int J Qual Health Care 2014; 26 Suppl 1:56-65. [PMID: 24615595 PMCID: PMC4001689 DOI: 10.1093/intqhc/mzu014] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Objective Clinical management is hypothesized to be critical for hospital management and hospital performance. The aims of this study were to develop and validate professional involvement scales for measuring the level of clinical management by physicians and nurses in European hospitals. Design Testing of the validity and reliability of scales derived from a 21-item questionnaire, which was developed on the basis of a previous study and expert opinion and administered in the cross-sectional seven-country research project ‘Deepening our Understanding of Quality improvement in Europe’ (DUQuE). Setting and Participants A sample of 3386 leading physicians and nurses working in 188 hospitals located in Czech Republic, France, Germany, Poland, Portugal, Spain and Turkey. Main Outcome Measures Validity and reliability of professional involvement scales and subscales. Results Psychometric analysis yielded four subscales for leading physicians: (i) Administration and budgeting, (ii) Managing medical practice, (iii) Strategic management and (iv) Managing nursing practice. Only the first three factors applied well to the nurses. Cronbach's alpha for internal consistency ranged from 0.74 to 0.86 for the physicians, and from 0.61 to 0.81 for the nurses. Except for the 0.74 correlation between ‘Administration and budgeting’ and ‘Managing medical practice’ among physicians, all inter-scale correlations were <0.70 (range 0.43–0.61). Under testing for construct validity, the subscales were positively correlated with ‘formal management roles’ of physicians and nurses. Conclusions The professional involvement scales appear to yield reliable and valid data in European hospital settings, but the scale ‘Managing medical practice’ for nurses needs further exploration. The measurement instrument can be used for international research on clinical management.
Affiliation(s)
- Thomas Plochg - Department of Public Health, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; Meibergdreef 9, 1100 DE Amsterdam J2-211, The Netherlands
40
Effect of the learning climate of residency programs on faculty's teaching performance as evaluated by residents. PLoS One 2014; 9:e86512. [PMID: 24489734 PMCID: PMC3904911 DOI: 10.1371/journal.pone.0086512] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2013] [Accepted: 12/14/2013] [Indexed: 11/19/2022] Open
Abstract
Background To understand teaching performance of individual faculty, the climate in which residents’ learning takes place, the learning climate, may be important. There is emerging evidence that specific climates do predict specific outcomes. Until now, the effect of learning climate on the performance of the individual faculty who actually do the teaching was unknown. Objectives This study: (i) tested the hypothesis that a positive learning climate was associated with better teaching performance of individual faculty as evaluated by residents, and (ii) explored which dimensions of learning climate were associated with faculty’s teaching performance. Methods and Materials We conducted two cross-sectional questionnaire surveys amongst residents from 45 residency training programs and multiple specialties in 17 hospitals in the Netherlands. Residents evaluated the teaching performance of individual faculty using the robust System for Evaluating Teaching Qualities (SETQ) and evaluated the learning climate of residency programs using the Dutch Residency Educational Climate Test (D-RECT). The validated D-RECT questionnaire consisted of 11 subscales of learning climate. Main outcome measure was faculty’s overall teaching (SETQ) score. We used multivariable adjusted linear mixed models to estimate the separate associations of overall learning climate and each of its subscales with faculty’s teaching performance. Results In total 451 residents completed 3569 SETQ evaluations of 502 faculty. Residents also evaluated the learning climate of 45 residency programs in 17 hospitals in the Netherlands. Overall learning climate was positively associated with faculty’s teaching performance (regression coefficient 0.54, 95% confidence interval: 0.37 to 0.71; P<0.001). Three out of 11 learning climate subscales were substantially associated with better teaching performance: ‘coaching and assessment’, ‘work is adapted to residents’ competence’, and ‘formal education’. 
Conclusions Individual faculty’s teaching performance evaluations are positively affected by better learning climate of residency programs.
41
Curran DS, Stalburg CM, Xu X, Dewald SR, Quint EH. Effect of resident evaluations of obstetrics and gynecology faculty on promotion. J Grad Med Educ 2013; 5:620-4. [PMID: 24455011 PMCID: PMC3886461 DOI: 10.4300/jgme-d-13-00002.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/31/2012] [Revised: 06/20/2013] [Accepted: 07/01/2013] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Promotion for academic faculty depends on a variety of factors, including their research, publications, national leadership, and quality of their teaching. OBJECTIVE We sought to determine the importance of resident evaluations of faculty for promotion in obstetrics-gynecology programs. METHODS A 28-item questionnaire was developed and distributed to 185 department chairs of US obstetrics-gynecology residency programs. RESULTS Fifty percent (93 of 185) responded, with 40% (37 of 93) stating that teaching has become more important for promotion in the past 10 years. When faculty are being considered for promotion, teaching evaluations were deemed "very important" 60% of the time for clinician track faculty but were rated as mainly "not important" or "not applicable" for research faculty. Sixteen respondents (17%) stated a faculty member had failed to achieve promotion in the past 5 years because of poor teaching evaluations. Positive teaching evaluations outweighed low publication numbers for clinical faculty 24% of the time, compared with 5% for research faculty and 8% for tenured faculty being considered for promotion. The most common reason for rejection for promotion in all tracks was the number of publications. Awards for excellence in teaching improved chances of promotion. CONCLUSIONS Teaching quality is becoming more important in academic obstetrics-gynecology departments, especially for clinical faculty. Although in most institutions promotion is not achieved without adequate research and publications, the importance of teaching excellence is obvious, with 1 of 6 (17%) departments reporting a promotion had been denied due to poor teaching evaluations.
42
van der Leeuw RM, Slootweg IA, Heineman MJ, Lombarts KMJMH. Explaining how faculty members act upon residents' feedback to improve their teaching performance. MEDICAL EDUCATION 2013; 47:1089-1098. [PMID: 24117555 DOI: 10.1111/medu.12257] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2012] [Revised: 01/21/2013] [Accepted: 04/11/2013] [Indexed: 06/02/2023]
Abstract
CONTEXT Responsiveness to feedback is a complex phenomenon that requires and receives attention. However, knowledge on the responsiveness of faculty members to residents' feedback on their teaching performance is lacking. Excellent teaching performance is essential to ensure patient safety and residents' learning in residency training. This study aims to increase our understanding of how faculty staff react to and act upon residents' feedback on their teaching performance. OBJECTIVES This multi-specialty, multi-institution interview study was conducted to gain insight into: (i) how teaching faculty proceed after they have received residents' feedback on their teaching performance, and (ii) the factors that influence their progression. METHODS Between August and December 2011, 24 faculty members who had received formative feedback on their teaching performance through valid and reliable feedback systems participated in this study. They reflected upon their (re)action(s) during individual semi-structured interviews. The interview protocol and analysis were guided by a comprehensive transtheoretical framework describing and explaining stages and processes of behavioural change. RESULTS Faculty staff involved in residency training used residents' feedback to different extents to adapt or improve their teaching performance. Important tipping points in the processes of change necessary for faculty staff to put feedback into practice were: experiencing negative emotions in themselves or recognising those in residents as a result of failure to act upon feedback; realising that something should be done with or without support from others, and making a strong commitment to change. In addition, having the confidence to act upon feedback and recognising the benefits of change were found to stimulate faculty members to change their teaching behaviour. CONCLUSIONS The responsiveness of faculty members to residents' feedback on their teaching performance varies. 
The adapted transtheoretical framework explains how and why faculty members do or do not proceed to action after receiving residents' feedback. Given this, organising residents' feedback for faculty staff in a systematic way is a first step and is necessary to effect potential improvements in teaching performance.
Affiliation(s)
- Renée M van der Leeuw - Professional Performance Research Group, Centre of Evidence-based Education, Academic Medical Centre, University of Amsterdam, Amsterdam, the Netherlands
43
van der Leeuw RM, Overeem K, Arah OA, Heineman MJ, Lombarts KMJMH. Frequency and determinants of residents' narrative feedback on the teaching performance of faculty: narratives in numbers. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2013; 88:1324-31. [PMID: 23886996 DOI: 10.1097/acm.0b013e31829e3af4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
PURPOSE Physicians involved in residency training often receive feedback from residents on their teaching. Research shows that learners value narrative feedback, but knowledge of the frequency and determinants of narrative feedback in teaching performance evaluation is lacking. This study aims to identify the frequency with which residents gave positive comments and suggestions for improvement to faculty, and the factors influencing that frequency. METHOD From September 2008 through May 2010, the authors collected data using a validated formative feedback system (System for Evaluation of Teaching Qualities). The authors used univariate and multivariable analysis to investigate the associations between participants' characteristics, including faculty members' teaching performance, and the frequency of the two types of narrative comments. RESULTS In total, 659 residents (79% of 839) completed 6,216 evaluations on 917 faculty (95% of 964), resulting in 11,574 positive comments and 4,870 suggestions for improvement. On average, faculty members received 13 positive comments and 5 suggestions for improvement. Multivariable analysis showed that higher teaching performance was associated with higher numbers of positive comments (regression coefficient 0.538; 95% confidence interval: 0.464 to 0.613) and with lower numbers of suggestions for improvement (-0.802; -0.911 to -0.692), both P < .0001. Nonacademic hospitals, participation in teacher training, and evaluations by female residents were statistically significant determinants of receiving more narrative feedback. CONCLUSIONS Residents provided narrative feedback that paralleled and elaborated on the quantitative evaluations they provided; therefore, faculty would be wise to attend to narrative feedback. Analysis of the quality of narrative feedback is needed to understand its effectiveness.
Affiliation(s)
- Renée M van der Leeuw - Center for Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
44
The teacher, the physician and the person: exploring causal connections between teaching performance and role model types using directed acyclic graphs. PLoS One 2013; 8:e69449. [PMID: 23936020 PMCID: PMC3720648 DOI: 10.1371/journal.pone.0069449] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2013] [Accepted: 06/07/2013] [Indexed: 11/19/2022] Open
Abstract
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. 
Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study.
45
Abstract
BACKGROUND Feedback is generally regarded as crucial for learning. We focus on feedback provided through instruments developed to inform self-assessment and support learners to improve performance. These instruments are being used commonly in medical education, but they are ineffective if the feedback is not well received and put into practice. METHODS The authors formulated twelve tips to make the best use of feedback based on widely cited publications on feedback. To include recent developments and hands-on experiences in the field of medical education, the authors discussed the tips with their research team consisting of experts in the field of medical education and professional performance, to reach agreement on the most practical strategies. RESULTS When utilizing feedback for performance improvement, medical students, interns, residents, clinical teachers and practicing physicians could make use of the twelve tips to put feedback into practice. The twelve tips provide strategies to reflect, interact and respond to feedback one receives through (validated) feedback instruments. CONCLUSIONS Since the goal of those involved in medical education and patient care is to perform at the highest possible level, we offer twelve practical tips for making the best use of feedback in order to support learners of all levels.
|
46
|
Boerebach BCM, Arah OA, Busch ORC, Lombarts KMJMH. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation. JOURNAL OF SURGICAL EDUCATION 2012; 69:511-520. [PMID: 22677591 DOI: 10.1016/j.jsurg.2012.04.003] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2012] [Revised: 03/09/2012] [Accepted: 04/05/2012] [Indexed: 06/01/2023]
Abstract
BACKGROUND In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluations correlated with the residents' evaluations of those surgeons. MATERIALS AND METHODS We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. RESULTS The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. CONCLUSIONS The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggests the need for using external feedback sources in informed self evaluation of surgeons.
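As a hypothetical illustration of the comparison reported in the Results, one can pair each surgeon's mean resident rating with that surgeon's self evaluation and compute a Pearson correlation per teaching domain. All names and numbers below are invented; the actual SETQ analysis used validated multi-item scales across 302 self evaluations, not five toy data points.

```python
# Toy sketch: correlate mean residents' ratings with surgeons' self
# evaluations on a single (hypothetical) teaching domain, 1-5 scale.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# surgeon -> (mean residents' rating, self rating); values are invented
ratings = {
    "surgeon_A": (4.2, 3.9),
    "surgeon_B": (3.1, 4.4),
    "surgeon_C": (4.8, 4.1),
    "surgeon_D": (2.9, 4.0),
    "surgeon_E": (3.6, 3.5),
}
residents = [r for r, _ in ratings.values()]
selfs = [s for _, s in ratings.values()]
print(f"r = {pearson(residents, selfs):.2f}")
```

A coefficient near zero, as in this toy data, mirrors the paper's finding: self-perception and external perception of teaching performance barely track each other, which is the argument for feeding external ratings back to surgeons.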
Affiliation(s)
- Benjamin C M Boerebach
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands.
|
47
|
Boerebach BCM, Lombarts KMJMH, Keijzer C, Heineman MJ, Arah OA. The teacher, the physician and the person: how faculty's teaching performance influences their role modelling. PLoS One 2012; 7:e32089. [PMID: 22427818 PMCID: PMC3299651 DOI: 10.1371/journal.pone.0032089] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Accepted: 01/22/2012] [Indexed: 12/02/2022] Open
Abstract
OBJECTIVE Previous studies identified different typologies of role models (as teacher/supervisor, physician, and person) and explored which characteristics of faculty distinguish good role models. The aim of this study was to explore how, and to what extent, clinical faculty's teaching performance influences residents' evaluations of faculty's different role modelling statuses, and how this varies across specialties. METHODS In a prospective multicenter multispecialty study of faculty's teaching performance, we used web-based questionnaires to gather empirical data from residents. The main outcome measures were the different typologies of role modelling. The predictors were faculty's overall teaching performance and their performance in specific domains of teaching. The data were analyzed using multilevel regression equations. RESULTS In total, 219 residents (69% response rate) filled out 2111 questionnaires about 423 faculty (96% response rate). Faculty's overall teaching performance influenced all role model typologies (OR: 8.0 to 166.2). Among the specific domains of teaching, all three role model typologies were strongly associated with “professional attitude towards residents” (OR: 3.28 for the teacher/supervisor role, 2.72 for the physician role, and 7.20 for the person role). The teacher/supervisor role was also strongly associated with “feedback” and “learning climate” (OR: 3.23 and 2.70). However, the associations between the specific domains of teaching and faculty's role modelling varied widely across specialties. CONCLUSION This study suggests that faculty can substantially enhance their role modelling by improving their teaching performance. The influence that specific domains of teaching have on role modelling differs across specialties.
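The odds ratios reported above come from logistic (multilevel) regression, where an OR is the exponentiated model coefficient. A minimal sketch of that conversion, with a hypothetical coefficient chosen only to land near OR = 3.28, and a hypothetical intercept to show the corresponding shift in predicted probability:

```python
# Interpreting a logistic-regression coefficient as an odds ratio.
# beta and the baseline log-odds below are invented for illustration.
import math

beta = 1.188                 # hypothetical log-odds increase per unit of the predictor
odds_ratio = math.exp(beta)  # OR is exp(coefficient), here about 3.28

def probability(log_odds):
    """Logistic function: convert log-odds to a probability."""
    return 1 / (1 + math.exp(-log_odds))

baseline_log_odds = -1.0                       # hypothetical intercept
p0 = probability(baseline_log_odds)            # baseline probability
p1 = probability(baseline_log_odds + beta)     # after a one-unit increase
print(f"OR = {odds_ratio:.2f}, p: {p0:.2f} -> {p1:.2f}")
```

Note that an OR is multiplicative on the odds scale, not the probability scale, which is why very large ORs (such as 166.2 above) do not translate into proportionally large probability changes.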
Affiliation(s)
- Benjamin C M Boerebach
- Department of Quality Management and Process Innovation, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands.
|