51
Vaižgėlienė E, Padaiga Ž, Rastenytė D, Tamelis A, Petrikonis K, Kregždytė R, Fluit C. Validation of the EFFECT questionnaire for competence-based clinical teaching in residency training in Lithuania. Medicina (Kaunas) 2017; 53:173-178. [PMID: 28596069] [DOI: 10.1016/j.medici.2017.05.001] [Citation(s) in RCA: 2]
Abstract
BACKGROUND AND AIM In 2013, all residency programs at the Lithuanian University of Health Sciences were restructured into a competency-based medical education curriculum. To assess the quality of clinical teaching in residency training, we chose the EFFECT (evaluation and feedback for effective clinical teaching) questionnaire, designed and validated at the Radboud University Medical Centre in the Netherlands. The aim of this study was to validate the EFFECT questionnaire for quality assessment of clinical teaching in residency training. MATERIALS AND METHODS The research was conducted as an online survey using the questionnaire containing 58 items in 7 domains. The questionnaire was double-translated into Lithuanian. It was sent to 182 residents of 7 residency programs (anesthesiology-reanimatology, cardiology, dermatovenerology, emergency medicine, neurology, obstetrics and gynecology, physical medicine and rehabilitation). Overall, 333 questionnaires about 146 clinical teachers were filled in. To determine the item characteristics and internal consistency (Cronbach's α), item and reliability analyses were performed. Furthermore, confirmatory factor analysis (CFA) was performed using maximum-likelihood estimation. RESULTS Cronbach's α within the different domains ranged between 0.91 and 0.97 and was comparable with the original version of the questionnaire. Confirmatory factor analysis demonstrated satisfactory model fit, with a comparative fit index (CFI) of 0.841 and an absolute model-fit RMSEA of 0.098. CONCLUSIONS The results suggest that the Lithuanian version of the EFFECT maintains its original validity and may serve as a valid instrument for quality assessment of clinical teaching in competency-based residency training in Lithuania.
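For readers unfamiliar with the statistic, the Cronbach's α reported above can be computed directly from an item-score matrix as k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with made-up Likert responses (not data from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, aligned by respondent."""
    k = len(items)
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# three 5-point Likert items answered by five respondents (illustrative only)
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.88
```

Values of 0.91-0.97, as reported per domain above, indicate very high internal consistency among a domain's items.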
Affiliation(s)
- Eglė Vaižgėlienė
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania.
- Žilvinas Padaiga
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Daiva Rastenytė
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Algimantas Tamelis
- Department of Surgery, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Kęstutis Petrikonis
- Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Rima Kregždytė
- Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
52
Al Ansari A, Strachan K, Hashim S, Otoom S. Analysis of psychometric properties of the modified SETQ tool in undergraduate medical education. BMC Medical Education 2017; 17:56. [PMID: 28302151] [PMCID: PMC5356325] [DOI: 10.1186/s12909-017-0893-4] [Citation(s) in RCA: 2]
Abstract
BACKGROUND Effective clinical teaching is crucially important for the future of patient care, and robust clinical training is essential to produce physicians capable of delivering high-quality health care. Tools used to evaluate medical faculty teaching qualities should therefore be reliable and valid. This study investigates the psychometric properties of a modification of the System for Evaluation of Teaching Qualities (SETQ) instrument in the clinical years of undergraduate medical education. METHODS This cross-sectional multicenter study was conducted in four teaching hospitals in the Kingdom of Bahrain. Two hundred ninety-eight medical students were invited to evaluate 105 clinical teachers using the SETQ instrument between January 2015 and March 2015. Questionnaire feasibility was analyzed using the average time required to complete the form and the number of raters required to produce reliable results. Instrument reliability was assessed by calculating Cronbach's alpha for the total scale and for each sub-scale (factor). To provide evidence of construct validity, an exploratory factor analysis was conducted to identify which items on the survey belonged together; these were then grouped as factors. RESULTS One hundred twenty-five medical students completed 1161 evaluations of 105 clinical teachers. The response rates were 42% for student evaluations and 57% for clinical teacher self-evaluations. The factor analysis showed that the questionnaire was composed of six factors, explaining 76.7% of the total variance. Cronbach's alpha was 0.94 or higher for the six factors in the student survey; for the clinical teacher survey, Cronbach's alpha was 0.88. In both instruments, the item-total correlation was above 0.40 for all items within their respective scales.
CONCLUSION Our modified SETQ questionnaire was found to be both reliable and valid, and was implemented successfully across various departments and specialties in different hospitals in the Kingdom of Bahrain.
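The item-total correlation threshold of 0.40 cited in the results is usually computed in its "corrected" form: each item is correlated with the sum of the remaining items, so an item is not correlated with itself. An illustrative sketch on invented data:

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

def corrected_item_total(items, i):
    """Correlate item i with the total of the *other* items."""
    rest_totals = [sum(resp) - resp[i] for resp in zip(*items)]
    return pearson(items[i], rest_totals)

# three Likert items, five respondents (invented scores)
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [3, 5, 2, 4, 4],
]
for i in range(len(items)):
    print(i, round(corrected_item_total(items, i), 2))  # all exceed 0.40 here
```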
Affiliation(s)
- Ahmed Al Ansari
- Bahrain Defense Force Hospital, Off Waly Alahed Avenue, P. O. Box-28743, Riffa, Kingdom of Bahrain
- RCSI Bahrain, Arabian Gulf University, P.O. Box 15503, Adliya, Kingdom of Bahrain
- General Surgery Arabian Gulf University (AGU), Manama, Bahrain
- Kathryn Strachan
- RCSI Bahrain, Arabian Gulf University, P.O. Box 15503, Adliya, Kingdom of Bahrain
- Sumaya Hashim
- RCSI Bahrain, Arabian Gulf University, P.O. Box 15503, Adliya, Kingdom of Bahrain
- Sameer Otoom
- RCSI Bahrain, Arabian Gulf University, P.O. Box 15503, Adliya, Kingdom of Bahrain
53
Hexom B, Trueger NS, Levene R, Ioannides KLH, Cherkas D. The educational value of emergency department teaching: it is about time. Intern Emerg Med 2017; 12:207-212. [PMID: 27059721] [DOI: 10.1007/s11739-016-1447-1] [Citation(s) in RCA: 6]
Abstract
There is a paucity of research on the quality and quantity of clinical teaching in the emergency department (ED) setting. While many factors affect residents' perceptions of attending physicians' educational skill, the authors hypothesized that the amount of time residents spend with attendings in direct teaching is a determinant of residents' perception of their shift's educational value. Researchers shadowed emergency medicine (EM) attendings during ED shifts and recorded teaching time with each resident. Residents were surveyed on their assessment of the educational value (EV) of the shift and potential confounders, as well as on the attending physician's teaching quality using the ER Scale. The study was performed in the EDs of two urban teaching hospitals affiliated with an EM residency program. Subjects were EM residents and rotators from other specialties. The main outcome measure was the effect of teaching time on EV, estimated by regression. Researchers observed 20 attendings supervising 47 residents (mean 2.35 residents per attending, range 2-3). The correlation between teaching time in minutes (mean 60.8, SD 25.6, range 7.6-128.1) and EV (mean 3.45 out of 5, SD 0.75, range 2-5) was significant (r = 0.302, r² = 0.091, p < 0.05). No confounders had a significant effect. The study shows a moderate correlation between the total time attendings spend directly teaching residents and the residents' perception of educational value over a single ED shift. The authors suggest that mechanisms to increase the time attending physicians spend teaching during clinical shifts may result in improved resident education.
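The reported r and r² relate as variance explained: 0.302² ≈ 0.091, so teaching time accounted for about 9% of the variance in perceived educational value. A sketch of the computation on invented shift data (not the study's):

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# invented teaching minutes and 1-5 educational-value ratings for seven shifts
minutes = [10, 25, 40, 60, 75, 90, 120]
ev      = [2.0, 3.0, 2.5, 3.5, 4.0, 3.5, 4.5]
r = pearson(minutes, ev)
print(round(r, 3), round(r * r, 3))  # correlation and share of variance explained
```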
Affiliation(s)
- Braden Hexom
- Department of Emergency Medicine, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Place, Box 1149, New York, NY, 10029, USA.
- N Seth Trueger
- Section of Emergency Medicine, University of Chicago, Chicago, IL, USA
- Rachel Levene
- Department of Pediatrics, SUNY Downstate Medical Center, New York, NY, USA
- David Cherkas
- Department of Emergency Medicine, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Place, Box 1149, New York, NY, 10029, USA
54
van der Meulen MW, Boerebach BCM, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Arah OA, Lombarts KMJMH. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance. The Journal of Continuing Education in the Health Professions 2017; 37:9-18. [PMID: 28212117] [DOI: 10.1097/ceh.0000000000000143] [Citation(s) in RCA: 10]
Abstract
INTRODUCTION Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. The psychometric qualities and feasibility of the INCEPT were investigated using exploratory and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses. RESULTS For all respondent groups, three factors were identified, although they were constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self-)management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was provided by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations by three peers, three residents and three to four coworkers were sufficient. DISCUSSION The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
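The rater counts in the results come from generalizability analyses; the classical Spearman-Brown prophecy formula captures the same logic in simpler form, predicting the reliability of an average of n ratings from the single-rater reliability. A sketch assuming a hypothetical single-rater reliability of 0.45 (not a figure reported by the study):

```python
from math import ceil

def reliability_of_mean(r1, n):
    """Spearman-Brown: reliability of the mean of n parallel ratings."""
    return n * r1 / (1 + (n - 1) * r1)

def raters_needed(r1, target=0.70):
    """Smallest n whose averaged score reaches the target reliability."""
    return ceil(target * (1 - r1) / (r1 * (1 - target)))

n = raters_needed(0.45)
print(n, round(reliability_of_mean(0.45, n), 2))  # → 3 0.71
```

With these assumed numbers, three raters suffice for a conventional 0.70 reliability target, mirroring the order of magnitude of the evaluation counts reported above.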
Affiliation(s)
- Mirja W van der Meulen
- PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Benjamin C M Boerebach
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- A Smirnova
- PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- S Heeneman
- Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- M G A Oude Egbrink
- Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- C P M van der Vleuten
- Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- O A Arah
- Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA; UCLA Center for Health Policy Research, Los Angeles, CA
- K M J M H Lombarts
- Professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
55
McAndrew M, Mucciolo TW, Jahangiri L. Characteristics of Effective Simulation (Preclinical) Teachers as Identified by Dental Students: A Qualitative Study. J Dent Educ 2016. [DOI: 10.1002/j.0022-0337.2016.80.11.tb06213.x] [Citation(s) in RCA: 7]
Affiliation(s)
- Leila Jahangiri
- Department of Prosthodontics, New York University College of Dentistry
56
Ojano Sheehan O, Brannan G, Dogbey G. Osteopathic medical students' perception of teaching effectiveness of their primary care clinical preceptors. Int J Osteopath Med 2016. [DOI: 10.1016/j.ijosm.2015.12.002] [Citation(s) in RCA: 0]
57
Sommer J, Lanier C, Perron NJ, Nendaz M, Clavet D, Audétat MC. A teaching skills assessment tool inspired by the Calgary-Cambridge model and the patient-centered approach. Patient Education and Counseling 2016; 99:600-609. [PMID: 26680755] [DOI: 10.1016/j.pec.2015.11.024] [Citation(s) in RCA: 11]
Abstract
OBJECTIVE The aim of this study was to develop a descriptive tool for peer review of clinical teaching skills. Two analogies framed our research: (1) between the patient-centered and the learner-centered approach; and (2) between the structures of clinical encounters (the Calgary-Cambridge communication model) and teaching sessions. METHOD During the course of one year, each step of the action research was carried out in collaboration with twelve clinical teachers from an outpatient general internal medicine clinic and with three experts in medical education. The content validation consisted of a literature review, expert opinion and the participatory research process. Interrater reliability was evaluated by three clinical teachers coding thirty audiotaped standardized learner-teacher interactions. RESULTS This tool contains sixteen items covering the process and content of clinical supervision sessions. Descriptors define the expected teaching behaviors for three levels of competence. Interrater reliability was significant for eleven items (Kendall's coefficient, p<0.05). CONCLUSION This peer assessment tool has high reliability and can be used to facilitate the acquisition of teaching skills.
Affiliation(s)
- Johanna Sommer
- Primary care unit, University of Geneva, Geneva, Switzerland.
- Cédric Lanier
- Primary care unit, University of Geneva, Geneva, Switzerland; Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland.
- Noelle Junod Perron
- Department of community medicine, primary care and emergencies, Geneva University Hospitals, Geneva, Switzerland; Unit of development and research in medical education, University of Geneva, Geneva, Switzerland.
- Mathieu Nendaz
- Unit of development and research in medical education, University of Geneva, Geneva, Switzerland; Service of General Internal Medicine, Geneva University Hospitals, Geneva, Switzerland.
- Diane Clavet
- Center for health sciences education, Université de Sherbrooke, Sherbrooke, Canada.
- Marie-Claude Audétat
- Primary care unit, University of Geneva, Geneva, Switzerland; Family medicine and Emergency Department, Université de Montréal, Montréal, Canada.
58
Hydes C, Ajjawi R. Selecting, training and assessing new general practice community teachers in UK medical schools. Education for Primary Care 2016; 26:297-304. [PMID: 26808791] [DOI: 10.1080/14739879.2015.1079017] [Citation(s) in RCA: 1]
Abstract
BACKGROUND Standards for undergraduate medical education in the UK, published in Tomorrow's Doctors, include the criterion 'everyone involved in educating medical students will be appropriately selected, trained, supported and appraised'. AIMS To establish how new general practice (GP) community teachers of medical students are selected, initially trained and assessed by UK medical schools, and to establish the extent to which Tomorrow's Doctors standards are being met. METHOD A mixed-methods study with questionnaire data collected from 24 lead GPs at UK medical schools and 23 new GP teachers from two medical schools, plus semi-structured telephone interviews with two GP leads. Quantitative data were analysed descriptively and qualitative data were analysed using an approach informed by framework analysis. RESULTS Selection of GP teachers is non-standardised. All GP leads provide initial training courses for new GP teachers, but only 50% of these courses are mandatory, and their content and length vary. All GP leads use student feedback to assess teaching, but other required methods (peer review and patient feedback) are not universally used. CONCLUSIONS To meet General Medical Council standards, medical schools need to include equality and diversity in initial training and use more than one method to assess new GP teachers. Wider debate about the selection, training and assessment of new GP teachers is needed to agree minimum standards.
Affiliation(s)
- Rola Ajjawi
- Centre for Medical Education, University of Dundee, UK
59
Abstract
Evaluations of clinicians' teaching performance are usually a preliminary, though essential, activity in quality management and improvement. This PhD project focused on testing the validity, reliability and impact of a performance evaluation system, the System of Evaluation of Teaching Qualities (SETQ), across specialties and centres in the Netherlands. The results of this project show that the SETQ questionnaires can provide clinicians with valid and reliable performance feedback that can enhance their teaching performance. We also investigated the predictive validity of the SETQ. In conclusion, the SETQ appears to be a helpful tool for improving clinicians' teaching performance.
Affiliation(s)
- Benjamin C M Boerebach
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, Amsterdam, The Netherlands.
- Department of Strategy & Information, University of Amsterdam, Spui 21, 1012WX, Amsterdam, The Netherlands.
60
Schönrock-Adema J, Visscher M, Raat ANJ, Brand PLP. Development and Validation of the Scan of Postgraduate Educational Environment Domains (SPEED): A Brief Instrument to Assess the Educational Environment in Postgraduate Medical Education. PLoS One 2015; 10:e0137872. [PMID: 26413836] [PMCID: PMC4587553] [DOI: 10.1371/journal.pone.0137872] [Citation(s) in RCA: 21]
Abstract
Introduction Current instruments to evaluate the postgraduate medical educational environment lack theoretical frameworks and are relatively long, which may reduce response rates. We aimed to develop and validate a brief instrument that, based on a solid theoretical framework for educational environments, solicits resident feedback to screen the postgraduate medical educational environment quality. Methods Stepwise, we developed a screening instrument, using existing instruments to assess educational environment quality and adopting a theoretical framework that defines three educational environment domains: content, atmosphere and organization. First, items from relevant existing instruments were collected and, after deleting duplicates and items not specifically addressing educational environment, grouped into the three domains. In a Delphi procedure, the item list was reduced to a set of items considered most important and comprehensively covering the three domains. These items were triangulated against the results of semi-structured interviews with 26 residents from three teaching hospitals to achieve face validity. This draft version of the Scan of Postgraduate Educational Environment Domains (SPEED) was administered to residents in a general and university hospital and further reduced and validated based on the data collected. Results Two hundred twenty-three residents completed the 43-item draft SPEED. We used half of the dataset for item reduction, and the other half for validating the resulting SPEED (15 items, 5 per domain). Internal consistencies were high. Correlations between domain scores in the draft and brief versions of SPEED were high (>0.85) and highly significant (p<0.001). Domain score variance of the draft instrument was explained for ≥80% by the items representing the domains in the final SPEED. Conclusions The SPEED comprehensively covers the three educational environment domains defined in the theoretical framework. 
Because of its validity and brevity, the SPEED is a promising, easily applicable tool for regularly screening educational environment quality in postgraduate medical education.
Affiliation(s)
- Johanna Schönrock-Adema
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Maartje Visscher
- UMCG Postgraduate School of Medicine, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- A. N. Janet Raat
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Paul L. P. Brand
- Center for Educational Development and Research in health sciences, Institute for Medical Education, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- UMCG Postgraduate School of Medicine, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands
- Princess Amalia Children's Centre, Isala Hospital, Zwolle, The Netherlands
61
Fluit CRMG, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Understanding resident ratings of teaching in the workplace: a multi-centre study. Advances in Health Sciences Education: Theory and Practice 2015; 20:691-707. [PMID: 25314933] [DOI: 10.1007/s10459-014-9559-8] [Citation(s) in RCA: 13]
Abstract
Providing clinical teachers with feedback about their teaching skills is a powerful tool to improve teaching. Evaluations are mostly based on questionnaires completed by residents. We investigated to what extent characteristics of residents, clinical teachers, and the clinical environment influenced these evaluations, and the relation between residents' scores and their teachers' self-scores. The evaluation and feedback for effective clinical teaching questionnaire (EFFECT) was used to (self-)assess clinical teachers from 12 disciplines (15 departments, four hospitals). Items were scored on a five-point Likert scale. Main outcome measures were residents' mean overall scores (MOSs), mean specific scale scores (MSSs), and clinical teachers' self-evaluation scores. Multilevel regression analysis was used to identify predictors, and residents' scores were compared with self-evaluations. Residents filled in 1,013 questionnaires, evaluating 230 clinical teachers; we received 160 self-evaluations. 'Planning Teaching' and 'Personal Support' (4.52, SD 0.61 and 4.53, SD 0.59) were rated highest; 'Feedback Content' (CanMEDS-related) was rated lowest (4.12, SD 0.71). Teachers in affiliated hospitals showed the highest MOS and MSS. Medical specialty did not influence MOS. Female clinical teachers were rated statistically significantly higher on most MSSs. Residents in years 1-2 were most positive about their teachers. Residents' gender did not affect the mean scores, except for role modeling. At group level, self-evaluations and residents' ratings correlated highly (Kendall's τ 0.859). Resident evaluations of clinical teachers are thus influenced by year of residency training, type of hospital, and, to a lesser extent, the teacher's gender. Clinical teachers and residents agree on the strong and weak points of clinical teaching.
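Kendall's τ, used here to compare self-evaluations with residents' ratings, counts concordant versus discordant pairs rather than assuming a linear relation. A minimal tau-a implementation (no tie correction) on invented teacher scores:

```python
from itertools import combinations

def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    s = sum(sign((x[i] - x[j]) * (y[i] - y[j])) for i, j in combinations(range(n), 2))
    return 2 * s / (n * (n - 1))

# invented mean self-scores vs. residents' mean ratings for five teachers
self_scores     = [3.8, 4.1, 4.4, 3.5, 4.6]
resident_scores = [3.9, 4.0, 4.5, 3.6, 4.4]
print(kendall_tau(self_scores, resident_scores))  # → 0.8
```

A τ near 0.86, as reported above, means the two score sets rank teachers in almost the same order.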
Affiliation(s)
- Cornelia R M G Fluit
- Academic Educational Institute, Radboud University Medical Center Nijmegen, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, The Netherlands
62
Warman SM. Challenges and Issues in the Evaluation of Teaching Quality: How Does it Affect Teachers' Professional Practice? A UK Perspective. Journal of Veterinary Medical Education 2015; 42:245-251. [PMID: 25631882] [DOI: 10.3138/jvme.0914-096r1] [Citation(s) in RCA: 1]
Abstract
Evaluation of the quality of higher education is undertaken for the purposes of ensuring accountability, accreditation, and improvement, all of which are highly relevant to veterinary teaching institutions in the current economic climate. If evaluation is to drive change, it needs to be able to influence teaching practice. This article reviews the literature relating to evaluation of teaching quality in higher education with a particular focus on teachers' professional practice. Student evaluation and peer observation of teaching are discussed as examples of widely used evaluation processes. These approaches clearly have the potential to influence teachers' practice. Institutions should strive to ensure the development of a supportive culture that prioritizes teaching quality while being aware of any potential consequences related to cost, faculty time, or negative emotional responses that might result from the use of different evaluation methods.
63
Vaughan B. Developing a clinical teaching quality questionnaire for use in a university osteopathic pre-registration teaching program. BMC Medical Education 2015; 15:70. [PMID: 25885108] [PMCID: PMC4404120] [DOI: 10.1186/s12909-015-0358-6] [Citation(s) in RCA: 4]
Abstract
BACKGROUND Clinical education is an important component of many health professional training programs. A range of questionnaires exists to assess the quality of clinical educators; however, none has been developed for student-led clinic environments. The present study developed a questionnaire to assess the quality of clinical educators in the osteopathy program at Victoria University. METHODS A systematic search of the literature was used to identify questionnaires that evaluated the quality of clinical teaching. Eighty-three items were extracted and reviewed by students, clinical educators and academics for their appropriateness for inclusion in a questionnaire. A fifty-six item questionnaire was then trialled with osteopathy students. A variety of statistics was used to determine the number of factors to extract, and exploratory factor analysis (EFA) was used to investigate the factor structure. RESULTS The number of factors to extract was calculated to be between 3 and 6. Review of the factor structures suggested that four- and five-factor solutions were the most appropriate fits. The EFA of the four-factor solution collapsed into three factors; the five-factor solution demonstrated the most stable structure, with internal consistency greater than 0.70. CONCLUSIONS The five factors were labelled Learning Environment (Factor 1), Reflective Practice (Factor 2), Feedback (Factor 3), Patient Management (Factor 4) and Modelling (Factor 5). Further research is now required to continue investigating the construct validity and reliability of the questionnaire.
Affiliation(s)
- Brett Vaughan
- Centre for Chronic Disease Prevention & Management, College of Health & Biomedicine, Victoria University, Melbourne, Australia.
- Institute of Sport, Exercise & Active Living, Victoria University, Melbourne, Australia.
- School of Health & Human Sciences, Southern Cross University, Lismore, Australia.
64
Haider SI, Johnson N, Thistlethwaite JE, Fagan G, Bari MF. WATCH: Warwick Assessment insTrument for Clinical teacHing: Development and testing. Medical Teacher 2015; 37:289-295. [PMID: 25155842] [DOI: 10.3109/0142159x.2014.947936] [Citation(s) in RCA: 4]
Abstract
OBJECTIVE Medical education and teaching skills are core competencies included in the generic curriculum for specialty training. To support the development of these skills, there is a need for a validated instrument. This study aims to develop and test an instrument to measure the attributes of specialty trainees as effective teachers. METHODS The study was conducted in two phases. In the first phase, the content of the instrument was generated from the literature and tested using the Delphi technique. In the second phase, the instrument was field-tested for validity and reliability using factor analysis and a generalizability study. Feasibility was assessed by the time taken to complete the instrument; acceptability and educational impact were determined by qualitative analysis of written feedback. Attributes of specialty trainees were assessed by clinical supervisors, peers, and students. RESULTS The Delphi study produced consensus on 15 statements, which formed the basis of the instrument. In the field study, a total of 415 instruments were completed. Factor analysis demonstrated a three-factor solution ('learning-teaching milieu', 'teaching skills', and 'learner-orientated'). The generalizability coefficient was 0.92. Mean time to complete the instrument was five minutes. Feedback indicated that it was an acceptable and useful method of assessment. CONCLUSION This new instrument provides valid, reliable, feasible, and acceptable assessment of clinical teaching.
65
Chen HC, O'Sullivan P, Teherani A, Fogh S, Kobashi B, ten Cate O. Sequencing learning experiences to engage different level learners in the workplace: An interview study with excellent clinical teachers. Medical Teacher 2015; 37:1090-1097. [PMID: 25693794] [DOI: 10.3109/0142159x.2015.1009431]
Abstract
PURPOSE Learning in the clinical workplace can appear to rely on opportunistic teaching. The cognitive apprenticeship model describes assigning tasks based on learner needs rather than just workplace needs. This study aimed to determine how excellent clinical teachers select clinical learning experiences to support the workplace participation and development of different level learners. METHODS Using a constructivist grounded theory approach, we conducted semi-structured interviews with medical school faculty identified as excellent clinical teachers who teach multiple levels of learners. We explored their approach to teaching different level learners and their perceived role in promoting learner development. We performed thematic analysis of the interview transcripts using open and axial coding. RESULTS We interviewed 19 clinical teachers and identified three themes related to their teaching approach: sequencing of learning experiences, selection of learning activities, and teacher responsibilities. All teachers used sequencing as a teaching strategy by varying content, complexity, and expectations by learner level. The teachers initially selected learning activities based on learner level and adjusted for individual competencies over time. They identified teacher responsibilities for learner education and patient safety, and used sequencing to promote both. CONCLUSIONS Excellent clinical teachers described strategies for matching available learning opportunities to learners' developmental levels to safely engage learners and improve learning in the clinical workplace.
Affiliation(s)
- Olle ten Cate
- University Medical Center Utrecht, The Netherlands
66
Da Dalt L, Anselmi P, Furlan S, Carraro S, Baraldi E, Robusto E, Perilongo G. Validating a set of tools designed to assess the perceived quality of training of pediatric residency programs. Ital J Pediatr 2015; 41:2. [PMID: 25599713] [PMCID: PMC4339004] [DOI: 10.1186/s13052-014-0106-2]
Abstract
Background The Paediatric Residency Program (PRP) of Padua, Italy, developed a set of questionnaires to assess the quality of the training provided by each faculty member, the quality of the professional experience residents gained during the various rotations, and the functioning of the Resident Affairs Committee (RAC), named respectively the "Tutor Assessment Questionnaire" (TAQ), the "Rotation Assessment Questionnaire" (RAQ), and the "RAC Assessment Questionnaire". The process that led to their validation is presented herein. Method Between July 2012 and July 2013, 51 residents evaluated 26 tutors through the TAQ and 25 rotations through the RAQ. Forty-eight residents filled in the RAC Assessment Questionnaire. The three questionnaires were validated through a many-facet Rasch measurement analysis. Results In their final form, the questionnaires produced measures that were valid, reliable, unidimensional, and free from gender bias. The TAQ and RAQ distinguished tutors and rotations into 5-6 levels of quality and effectiveness. The three questionnaires allowed the identification of strengths and weaknesses of tutors, rotations, and the RAC. The agreement observed among judges was consistent with the predicted values, suggesting that no particular training is required to develop a shared interpretation of the items. Conclusions The work presented herein enriches the armamentarium of tools that residency programs can use to monitor their functioning. Wider application of these tools will serve to consolidate and further refine the results presented. Electronic supplementary material The online version of this article (doi:10.1186/s13052-014-0106-2) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Liviana Da Dalt
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Sara Furlan
- Department FISPPA, University of Padua, Padua, Italy.
- Silvia Carraro
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Eugenio Baraldi
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
- Giorgio Perilongo
- Paediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3 - 35128, Padua, Italy.
67
Mintz M, Southern DA, Ghali WA, Ma IWY. Validation of the 25-Item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness. Teaching and Learning in Medicine 2015; 27:174-181. [PMID: 25893939] [DOI: 10.1080/10401334.2015.1011645]
Abstract
CONSTRUCT The 25-item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness assesses clinical teaching effectiveness. BACKGROUND Valid and reliable rating of teaching effectiveness is helpful for providing faculty with feedback. The 25-item tool was intended to evaluate seven dimensions of clinical teaching, but the structure of the tool had not previously been confirmed. APPROACH This study sought to validate the tool using confirmatory factor analysis, testing a 7-factor model and comparing its goodness of fit with that of a modified model. Acceptability of the tool was assessed using a 6-item survey completed by final-year medical students (N = 119 of 156 students; 76%). RESULTS Goodness-of-fit testing indicated that the 7-factor model performed poorly, χ²(254) = 457.4, p < .001 (root mean square error of approximation [RMSEA] = 0.08, comparative fit index [CFI] = 0.91, non-normed fit index [NNFI] = 0.89). Only the standardized root mean square residual (SRMR) indicated acceptable fit (0.06). Further exploratory analysis identified 10 items that cross-loaded on 2 factors; the remaining items loaded on factors as originally intended. After removing these 10 items, repeat confirmatory factor analysis of the modified 15-item, 5-factor model demonstrated a better fit than the original model: SRMR = 0.075, NNFI = 0.91, χ²(80) = 150.1, p < .001; RMSEA = 0.09; CFI = 0.93. Although 75% of the participants stated they were willing to complete the tool on their preceptors on a biweekly basis, only 25% were willing to do so weekly. CONCLUSIONS Our study failed to confirm the factor structure of the 25-item tool. A modified tool with fewer, more conceptually distinct items was best fit by a 5-factor model. Furthermore, the acceptability of the 25-item tool may be poor for rotations with a new preceptor each week; the abbreviated tool may be preferable in that setting.
Affiliation(s)
- Marcy Mintz
- Department of Medicine, University of Calgary, Calgary, Alberta, Canada
68
Huff NG, Roy B, Estrada CA, Centor RM, Castiglioni A, Willett LL, Shewchuk RM, Cohen S. Teaching behaviors that define highest rated attending physicians: a study of the resident perspective. Medical Teacher 2014; 36:991-996. [PMID: 25072844] [DOI: 10.3109/0142159x.2014.920952]
Abstract
BACKGROUND A better understanding of the teaching behaviors of highly rated clinical teachers could improve training for teaching. We examined the teaching behaviors demonstrated by higher rated attending physicians. METHODS Qualitative and quantitative group consensus was obtained using the nominal group technique (NGT) among internal medicine residents and students on hospital services (2004-2005); participants voted on the three most important teaching behaviors (weight of 3 = top rated, 1 = lowest rated). Teaching behaviors were organized into domains of successful rounding characteristics. We used teaching evaluations to sort attending physicians into tertiles of overall teaching effectiveness. RESULTS Participants evaluated 23 faculty in 17 NGT sessions. Participants identified 66 distinct teaching behaviors (total sum of weights [sw] = 502). Nineteen items had sw ≥ 10; these were categorized into the following domains: Teaching Process (n = 8; sw = 215, 42.8%), Learning Atmosphere (n = 5; sw = 145, 28.9%), Role Modeling (n = 3; sw = 74, 14.7%), and Team Management (n = 3; sw = 65, 12.9%). Attendings in the highest tertile received a larger number of votes for characteristics within the Teaching Process domain (56% compared with 39% in the lowest tertile). CONCLUSIONS The most effective teaching behaviors fell into two broad domains: Teaching Process and Learning Atmosphere. The highest rated attending physicians are most recognized for characteristics in the Teaching Process domain.
69
A Farahani M, Emamzadeh Ghasemi HS, Nikpaima N, Fereidooni Z, Rasoli M. Development and psychometric evaluation of the nursing instructors' clinical teaching performance inventory. Glob J Health Sci 2014; 7:30-36. [PMID: 25948430] [PMCID: PMC4802076] [DOI: 10.5539/gjhs.v7n3p30]
Abstract
Evaluation of nursing instructors' clinical teaching performance is a prerequisite to the quality assurance of nursing education. One of the most common procedures for this purpose is the use of student evaluations. This study aimed to develop and evaluate the psychometric properties of the Nursing Instructors' Clinical Teaching Performance Inventory (NICTPI). The primary items of the inventory were generated by reviewing the published literature and existing questionnaires, as well as by consulting the members of the Faculties Evaluation Committee of the study setting. Psychometric properties were assessed by calculating the content validity ratio and index and the test-retest correlation coefficient, as well as by conducting an exploratory factor analysis and an internal consistency assessment. The content validity ratios and indices of the items were higher than 0.85 and 0.79, respectively. The final version of the inventory consisted of 25 items, and in the exploratory factor analysis, items loaded on three factors that jointly accounted for 72.85% of the total variance. The test-retest correlation coefficient and the Cronbach's alpha of the inventory were 0.93 and 0.973, respectively. The results revealed that the developed inventory is an appropriate, valid, and reliable instrument for evaluating nursing instructors' clinical teaching performance.
70
Boerebach BCM, Lombarts KMJMH, Arah OA. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training. Eval Health Prof 2014; 39:21-32. [DOI: 10.1177/0163278714552520]
Abstract
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians’ teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians’ teaching performance.
Affiliation(s)
- Benjamin C. M. Boerebach
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Kiki M. J. M. H. Lombarts
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Onyebuchi A. Arah
- Department of Epidemiology, University of California, Los Angeles (UCLA), School of Public Health, Los Angeles, CA, USA
- UCLA Center for Health Policy Research, Los Angeles, CA, USA
71
Kikukawa M, Stalmeijer RE, Emura S, Roff S, Scherpbier AJJA. An instrument for evaluating clinical teaching in Japan: content validity and cultural sensitivity. BMC Medical Education 2014; 14:179. [PMID: 25164309] [PMCID: PMC4167259] [DOI: 10.1186/1472-6920-14-179]
Abstract
BACKGROUND Many instruments for evaluating clinical teaching have been developed, but almost all in Western countries. None of these instruments have been validated for Asian cultures, and a literature search yielded no instruments developed specifically for that context. A key element that influences content validity when developing instruments for evaluating the quality of teaching is culture. The aim of this study was to develop a culture-specific instrument with strong content validity for evaluating clinical teaching in initial postgraduate medical training in Japan. METHODS Based on data from a literature search and an earlier study, we prepared a draft evaluation instrument. To ensure a good cultural fit of the instrument with the Asian context, we conducted a modified Delphi procedure among three groups of stakeholders (five education experts, twelve clinical teachers, and ten residents) to establish content validity, as this factor is particularly susceptible to cultural influences. RESULTS Two Delphi rounds were conducted. Through this procedure, 52 prospective items were reworded, combined, or eliminated, resulting in a 25-item instrument validated for the Japanese setting. CONCLUSIONS This is the first study describing the development and content validation of an instrument for evaluating clinical teaching specifically tailored to an East Asian setting. The instrument has both similarities to and differences from instruments of Western origin. Our findings suggest that designers of evaluation instruments should consider that the content validity of instruments for evaluating clinical teachers can be influenced by cultural factors.
Affiliation(s)
- Makoto Kikukawa
- Department of Medical Education, Kyushu University, 3-1-1 Maidashi Higashi-ku Fukuoka, 81-8582 Kyushu, Japan
- Renee E Stalmeijer
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Sei Emura
- Centre for Graduate Medical Education Development and Research, Saga University Hospital, Saga, Japan
- Sue Roff
- The Centre for Medical Education, Dundee Medical School, Dundee, Scotland
- Albert JJA Scherpbier
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
72
Khan SB, Chikte UM, Omar R. From Classroom Teaching to Clinical Practice: Experiences of Senior Dental Students Regarding the Shortened Dental Arch Concept. J Dent Educ 2014. [DOI: 10.1002/j.0022-0337.2014.78.6.tb05744.x]
Affiliation(s)
- Saadika B. Khan
- Department of Restorative Dentistry, University of the Western Cape, South Africa
- Usuf M.E. Chikte
- Department of Interdisciplinary Health Sciences, Stellenbosch University, South Africa
- Ridwaan Omar
- Department of Prosthodontics, Kuwait University, Kuwait
73
Pelgrim EAM, Kramer AWM, Mokkink HGA, van der Vleuten CPM. Factors influencing trainers' feedback-giving behavior: a cross-sectional survey. BMC Medical Education 2014; 14:65. [PMID: 24690387] [PMCID: PMC4230419] [DOI: 10.1186/1472-6920-14-65]
Abstract
BACKGROUND The literature provides some insight into the role of feedback givers, but little information about within-trainer factors influencing 'feedback-giving behaviours'. We looked for relationships between characteristics of feedback givers (self-efficacy, task perception, neuroticism, extraversion, agreeableness, and conscientiousness) and elements of observation and feedback (frequency, quality of content, and consequential impact). METHODS We developed and tested several hypotheses regarding these characteristics and elements in a cross-sectional digital survey among GP trainers and their trainees in 2011 and 2012. We conducted bivariate analysis using Pearson correlations and performed multiple regression analysis. RESULTS Sixty-two trainer-trainee couples from three Dutch institutions for postgraduate GP training participated in the study. Trainer scores on 'task perception' and on a scale of the trait 'neuroticism' correlated positively with frequency of feedback and quality of feedback content. Multiple regression analysis supported positive correlations between task perception and frequency of feedback and between neuroticism and quality of feedback content. No other correlations were found. CONCLUSION This study contributes to the literature on feedback giving by revealing factors that influence feedback-giving behaviour, namely neuroticism and task perception. Trainers whose task perception included facilitating observation and feedback, and trainers who were concerned about the safety of their patients during consultations with trainees (neuroticism), engaged more frequently in observation and feedback and gave feedback of higher quality.
Affiliation(s)
- Elisabeth AM Pelgrim
- Department of Primary Care and Community Care, Radboud University Nijmegen Medical Centre, Postbus 9101, Huispostnummer 117, 6500 HB Nijmegen, the Netherlands
- Anneke WM Kramer
- Department of Primary Care and Community Care, Radboud University Nijmegen Medical Centre, Postbus 9101, Huispostnummer 117, 6500 HB Nijmegen, the Netherlands
- Henk GA Mokkink
- Department of Primary Care and Community Care, Radboud University Nijmegen Medical Centre, Postbus 9101, Huispostnummer 117, 6500 HB Nijmegen, the Netherlands
- Cees PM van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD Maastricht, the Netherlands
74
Young ME, Cruess SR, Cruess RL, Steinert Y. The Professionalism Assessment of Clinical Teachers (PACT): the reliability and validity of a novel tool to evaluate professional and clinical teaching behaviors. Advances in Health Sciences Education: Theory and Practice 2014; 19:99-113. [PMID: 23754583] [DOI: 10.1007/s10459-013-9466-4]
Abstract
Physicians function as clinicians, teachers, and role models within the clinical environment. Negative learning environments have been shown to arise from many factors, including the presence of unprofessional behaviors among clinical teachers. Reliable and valid assessments of clinical teacher performance, including professional behaviors, may provide a foundation for evidence-based feedback to clinical teachers, enable targeted remediation or recognition, and help to improve the learning environment. However, few tools for the evaluation of clinical teachers focus on both professional and clinical teaching behaviors. The Professionalism Assessment of Clinical Teachers (PACT) was developed and implemented at one Canadian institution and was assessed for evidence of reliability and validity. Following each clerkship rotation, students in the 2009-2010 third-year undergraduate clerkship cohort (n = 178) anonymously evaluated a minimum of two clinical teachers using the PACT; 4,715 forms on 567 faculty members were completed. Reliability, validity, and free-text comments (present in 45% of the forms) were examined. An average of 8.6 PACT forms were completed per faculty member (range 1-60), with a reliability of 0.31 for 2.9 forms (harmonic mean); 12 forms were necessary to reach a reliability of 0.65. Global evaluations of teachers aligned with ratings of free-text comments (r = 0.77, p < 0.001). Comment length correlated negatively with overall rating (r = -0.19, p < 0.001). Mean performance correlated negatively with variability of performance (r = -0.72, p < 0.001), although this may reflect a ceiling effect. Most faculty members were rated highly; however, 'provided constructive feedback' was the least well-rated item. Respectful interaction with students appeared to be the most influential item in the global rating of faculty performance. The PACT is a moderately reliable tool for the assessment of professional behaviors of clinical teachers, with evidence supporting its validity.
Affiliation(s)
- Meredith E Young
- Department of Medicine, Centre for Medical Education, Faculty of Medicine, McGill University, 1110 Pine Ave West, Montreal, QC, H3A 1A3, Canada
75
Steven K, Wenger E, Boshuizen H, Scherpbier A, Dornan T. How clerkship students learn from real patients in practice settings. Academic Medicine 2014; 89:469-476. [PMID: 24448040] [DOI: 10.1097/acm.0000000000000129]
Abstract
PURPOSE To explore how undergraduate medical students learn from real patients in practice settings, the factors that affect their learning, and how clerkship learning might be enhanced. METHOD In 2009, 22 medical students in the three clerkship years of an undergraduate medical program in the United Kingdom made 119 near-contemporaneous audio diary entries reflecting on how they learned from real patients. Nineteen attended focus groups; 18 were individually interviewed. The authors used a qualitative theory-building methodology with a conceptual orientation toward Communities of Practice theory. A learning theorist guided selective coding of a constant-comparative analysis. RESULTS Participants learned informally by participating in the communicative practices of workplaces. Two overlapping practices, patient care and education, were identified. Patient care created learning opportunities, which were enriched when practitioners intentionally supported participants' learning. Education, however, was not always coupled with patient care. Talk thus positioned the boundaries of the two practices in three configurations: education without patient care, education within patient care, and patient care without education. The nature and quality of participants' learning depended on how practitioners entered into dialogue with them and linked the dialogue to authentic patient care. CONCLUSIONS The findings strongly suggest that medical students learn from real patients by participating in patient care within an educational practice. Their learning is affected by clinicians' willingness to engage in supportive dialogue. Promoting an informal, inclusive discourse of workplace learning might enhance clerkship education. This approach should take its place alongside, and perhaps ahead of, the currently dominant discourse of "clinical teaching."
Affiliation(s)
- Kathryn Steven
- Dr. Steven is an academic fellow in general practice, the University of St. Andrews, St. Andrews, United Kingdom. Dr. Wenger is a social learning theorist and consultant, Grass Valley, California. Dr. Boshuizen is an education researcher, Open University, Heerlen, the Netherlands. Dr. Scherpbier is dean and education researcher, Maastricht University, Maastricht, the Netherlands. Dr. Dornan is an education researcher, Maastricht University, Maastricht, the Netherlands.
76
Breckwoldt J, Svensson J, Lingemann C, Gruber H. Does clinical teacher training always improve teaching effectiveness as opposed to no teacher training? A randomized controlled study. BMC Medical Education 2014; 14:6. [PMID: 24400838] [PMCID: PMC3893403] [DOI: 10.1186/1472-6920-14-6]
Abstract
BACKGROUND Teacher training may improve teaching effectiveness, but it might also have paradoxical effects. Research on expertise development suggests that the integration of new strategies may result in a temporary deterioration of performance until higher levels of competence are reached. In this study, the impact of a clinical teacher training on teaching effectiveness was assessed in an intensive course in emergency medicine. Students' practical skills at the end of the course were chosen as the primary study outcome. METHODS The authors matched 18 clinical teachers according to clinical experience and teaching experience and then randomly assigned them to a two-day teacher training or no training. After 14 days, both groups taught within a 12-hour intensive course in emergency medicine for undergraduate students. The course followed a clearly defined curriculum. After the course, students were assessed by a structured clinical examination (SCE) and an MCQ test. Teaching quality was rated by students using a questionnaire. RESULTS Data for 96 students with trained teachers and 97 students with untrained teachers were included. Students taught by untrained teachers performed better in the SCE domains 'alarm call' (p < 0.01) and 'ventilation' (p = 0.01), while the domains 'chest compressions' and 'use of automated defibrillator' did not differ. MCQ scores revealed no statistical difference. Overall, teaching quality was rated significantly better by students of untrained teachers (p = 0.05). CONCLUSIONS At the end of a structured intensive course in emergency medicine, students of trained clinical teachers performed worse in 2 of 4 practical SCE domains compared with students of untrained teachers. In addition, subjective evaluations of teaching quality were worse in the group of trained teachers. Difficulties in integrating new strategies into their teaching styles might be a possible explanation.
Affiliation(s)
- Jan Breckwoldt
- Medical Faculty of the University of Zurich, Pestalozzistr. 3-5, Zurich CH-8091, Switzerland
- Department of Anaesthesiology, Charité - Medical University of Berlin, Campus Benjamin Franklin, Berlin, Germany
- Dieter Scheffner Centre for Medical Education, Charité - Medical University of Berlin, Berlin, Germany
- Jörg Svensson
- Department of Anaesthesiology, Charité - Medical University of Berlin, Campus Benjamin Franklin, Berlin, Germany
- Christian Lingemann
- Department of Anaesthesiology, Charité - Medical University of Berlin, Campus Benjamin Franklin, Berlin, Germany
- Hans Gruber
- Department of Educational Science, University of Regensburg, Regensburg, Germany
77
Owolabi MO. Development and psychometric characteristics of a new domain of the Stanford Faculty Development Program instrument. The Journal of Continuing Education in the Health Professions 2014; 34:13-24. [PMID: 24648360] [DOI: 10.1002/chp.21213]
Abstract
INTRODUCTION The teacher's attitude domain, a pivotal aspect of clinical teaching, is missing from the Stanford Faculty Development Program Questionnaire (SFDPQ), the most widely used student-based method for assessing clinical teaching skills. This study was conducted to develop and validate the teacher's attitude domain and to evaluate the validity and internal consistency reliability of the augmented SFDPQ. METHODS Items generated for the new domain included the teacher's enthusiasm, sobriety, humility, thoroughness, empathy, and accessibility. The study involved 20 resident doctors assessed once by 64 medical students using the augmented SFDPQ. Construct validity was explored using correlations among the different domains and a global rating scale. Factor analysis was performed. RESULTS The response rate was 94%. The new domain had a Cronbach's alpha of 0.89, with a 1-factor solution explaining 57.1% of its variance. It showed the strongest correlation with the global rating scale (rho = 0.71). The augmented SFDPQ, which had a Cronbach's alpha of 0.93, correlated better (rho = 0.72, p < 0.00001) with the global rating scale than the original SFDPQ did (rho = 0.67, p < 0.00001). DISCUSSION The new teacher's attitude domain exhibited good internal consistency and construct and factorial validity, and it enhanced the content and construct validity of the SFDPQ. The validated construct of the augmented SFDPQ is recommended for the design and evaluation of basic and continuing clinical teaching programs.
78
Fluit CV, Bolhuis S, Klaassen T, de Visser M, Grol R, Laan R, Wensing M. Residents provide feedback to their clinical teachers: reflection through dialogue. Medical Teacher 2013; 35:e1485-e1492. [PMID: 23968325] [DOI: 10.3109/0142159x.2013.785631]
Abstract
BACKGROUND Physicians play a crucial role in teaching residents in clinical practice. Feedback on their teaching performance to support this role needs to be provided in a carefully designed and constructive way. AIMS We investigated a system for evaluating supervisors and providing formative feedback. METHOD In a design-based research approach, the 'Evaluation and Feedback For Effective Clinical Teaching System' (EFFECT-S) was examined by conducting semi-structured interviews with residents and supervisors of five departments in five different hospitals about feedback conditions, acceptance, and effects. Interviews were analysed by three researchers using qualitative research software (ATLAS.ti). RESULTS The principles and characteristics of the design were supported by the evaluation of EFFECT-S. All steps of EFFECT-S appeared necessary, and a new step, team evaluation, was added. Supervisors perceived the feedback as instructive; residents felt capable of providing feedback. Creating safety and honesty required different actions for residents and supervisors. Outcomes included awareness of clinical teaching, residents learning feedback skills, reduced hierarchy, and an improved learning climate. CONCLUSIONS EFFECT-S appeared useful for evaluating supervisors. The key mechanism was creating a safe environment in which residents could provide honest and constructive feedback. Residents learned to provide feedback, which is part of the CanMEDS and ACGME competencies of medical education programmes.
79
Fluit CRMG, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Repeated evaluations of the quality of clinical teaching by residents. Perspectives on Medical Education 2013; 2:87-94. [PMID: 23670697] [PMCID: PMC3656177] [DOI: 10.1007/s40037-013-0060-5]
Abstract
Many studies report on the validation of instruments for facilitating feedback to clinical supervisors. There is mixed evidence on whether evaluations lead to more effective teaching and higher ratings. We assessed changes in resident ratings after an evaluation and feedback session with their supervisors. Supervisors of three medical specialities were evaluated using a validated instrument (EFFECT). Mean overall scores (MOS) and mean scale scores were calculated and compared using paired t-tests. Twenty-four supervisors from three departments were evaluated in two subsequent years. The MOS increased from 4.36 to 4.49. The mean scores of two scales showed an increase >0.2: 'teaching methodology' (4.34-4.55) and 'assessment' (4.11-4.39). Supervisors with an MOS <4.0 in year 1 (n = 5) all demonstrated a strong increase in the MOS (mean overall increase 0.50, range 0.34-0.64). Four of the supervisors with an MOS between 4.0 and 4.5 (n = 6) demonstrated an increase >0.2 in their MOS (mean overall increase 0.21, range -0.15 to 0.53). Of the supervisors with an MOS >4.5 (n = 13), one demonstrated an increase >0.2 in the MOS and two demonstrated a decrease >0.2 (mean overall increase -0.06, range -0.42 to 0.42). EFFECT-S was associated with a positive change in residents' ratings of their supervisors, predominantly for supervisors with relatively low initial scores.
Affiliation(s)
- Cornelia R M G Fluit
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands.
- Remco Feskens
- Department of Methods and Statistics, Utrecht University, Utrecht, the Netherlands
- Sanneke Bolhuis
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Richard Grol
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Michel Wensing
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Roland Laan
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Department of Rheumatology, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
80
Jochemsen-van der Leeuw HGAR, van Dijk N, van Etten-Jamaludin FS, Wieringa-de Waard M. The attributes of the clinical trainer as a role model: a systematic review. Academic Medicine 2013; 88:26-34. [PMID: 23165277] [DOI: 10.1097/acm.0b013e318276d070]
Abstract
PURPOSE Medical trainees (interns and residents) and their clinical trainers need to be aware of the differences between positive and negative role modeling to ensure that trainees imitate and that trainers demonstrate the professional behavior required to provide high-quality patient care. The authors systematically reviewed the medical and medical education literature to identify the attributes characterizing clinical trainers as positive and negative role models for trainees. METHOD The authors searched the MEDLINE, EMBASE, ERIC, and PsycINFO databases from their earliest dates until May 2011. They included quantitative and qualitative original studies, published in any language, on role modeling by clinical trainers for trainees in graduate medical education. They assessed the methodological quality of and extracted data from the included studies, using predefined forms. RESULTS Seventeen articles met inclusion criteria. The authors divided attributes of role models into three categories: patient care qualities, teaching qualities, and personal qualities. Positive role models were frequently described as excellent clinicians who were invested in the doctor-patient relationship. They inspired and taught trainees while carrying out other tasks, were patient, and had integrity. These findings confirm the implicit nature of role modeling. Positive role models' appearance and scientific achievements were among their least important attributes. Negative role models were described as uncaring toward patients, unsupportive of trainees, cynical, and impatient. CONCLUSIONS The identified attributes may help trainees recognize which aspects of the clinical trainer's professional behavior to imitate, by adding the important step of apperception to the process of learning professional competencies through observation.
81
Schönrock-Adema J, Bouwkamp-Timmer T, van Hell EA, Cohen-Schotanus J. Key elements in assessing the educational environment: where is the theory? Advances in Health Sciences Education 2012; 17:727-42. [PMID: 22307806] [PMCID: PMC3490064] [DOI: 10.1007/s10459-011-9346-8]
Abstract
The educational environment has been increasingly acknowledged as vital for high-quality medical education. As a result, several instruments have been developed to measure medical educational environment quality. However, there appears to be no consensus about which concepts should be measured. The absence of a theoretical framework may explain this lack of consensus. Therefore, we aimed to (1) find a comprehensive theoretical framework defining the essential concepts, and (2) test its applicability. An initial review of the medical educational environment literature indicated that such frameworks are lacking. Therefore, we chose an alternative approach to lead us to relevant frameworks from outside the medical educational field; that is, we applied a snowballing technique to find educational environment instruments used to build the contents of the medical ones and investigated their theoretical underpinnings (Study 1). We found two frameworks, one of which was described as incomplete and one of which defines three domains as the key elements of human environments (personal development/goal direction, relationships, and system maintenance and system change) and has been validated in different contexts. To test its applicability, we investigated whether the items of nine medical educational environment instruments could be mapped onto the framework (Study 2). Of 374 items, 94% could be mapped: 256 (68%) pertained to a single domain and 94 (25%) to more than one domain. In our context, these domains were found to concern goal orientation, relationships and organization/regulation. We conclude that this framework is applicable and comprehensive, and recommend using it as the theoretical underpinning for medical educational environment measures.
Affiliation(s)
- Johanna Schönrock-Adema
- Center for Research and Innovation in Medical Education, University of Groningen, The Netherlands.
82
Dornan T, Muijtjens A, Graham J, Scherpbier A, Boshuizen H. Manchester Clinical Placement Index (MCPI). Conditions for medical students' learning in hospital and community placements. Advances in Health Sciences Education 2012; 17:703-16. [PMID: 22234383] [PMCID: PMC3490061] [DOI: 10.1007/s10459-011-9344-x]
Abstract
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an 11-item measurement instrument, whose theoretical and empirical origins were in an Experience Based Learning model of how medical students learn in communities of practice (COPs), and whose contextual origins were in a community-oriented, horizontally integrated, undergraduate medical programme. The objectives were to examine the psychometric properties of the scale in both hospital and community COPs and provide validity evidence to support using it to measure the quality of placements. The instrument was administered twice to students learning in both hospital and community placements and analysed using exploratory factor analysis and a generalizability analysis. 754 of a possible 902 questionnaires were returned (84% response rate), representing 168 placements. Eight items loaded onto two factors, which accounted for 78% of variance in the hospital data and 82% of variance in the community data. One factor was the placement learning environment, whose five constituent items were how learners were received at the start of the placement, people's supportiveness, and the quality of organisation, leadership, and facilities. The other factor represented the quality of training: instruction in skills, observing students performing skills, and providing students with feedback. Alpha coefficients ranged between 0.89 and 0.93 and there were no redundant or ambiguous items. Generalisability analysis showed that between 7 and 11 raters would be needed to achieve acceptable reliability.
There is validity evidence to support using the simple 8-item, mixed-methods Manchester Clinical Placement Index to measure key conditions for undergraduate medical students' experience based learning: the quality of the learning environment and the training provided within it. Its conceptual orientation is towards Communities of Practice, a dominant contemporary theory in undergraduate medical education.
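Several instruments in this list summarise internal consistency as Cronbach's alpha (here, 0.89-0.93). As a minimal illustration of how that statistic is computed, the following sketch uses made-up ratings, not the MCPI data:

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The ratings below are illustrative only (rows = respondents, cols = items).

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                                   # number of items
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

ratings = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.914
```

Values around 0.9, as reported for the MCPI factors, indicate that the items within a factor vary together closely.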
Affiliation(s)
- Tim Dornan
- Department of Educational Development and Research, Maastricht University, The Netherlands.
83
Schönrock-Adema J, Boendermaker PM, Remmelts P. Opportunities for the CTEI: disentangling frequency and quality in evaluating teaching behaviours. Perspectives on Medical Education 2012; 1:172-179. [PMID: 23205342] [PMCID: PMC3508268] [DOI: 10.1007/s40037-012-0023-2]
Abstract
Students' perceptions of teaching quality are vital for quality assurance purposes. An increasingly used, department-independent instrument is the (Cleveland) clinical teaching effectiveness instrument (CTEI). Although the CTEI was developed carefully and its validity and reliability confirmed, we noted an opportunity for improvement: its rating scales intermingle two constructs, with scale labels referring to both the frequency and the quality of teaching behaviours. Our aim was to investigate whether frequency and quality scores on the CTEI items differed. A sample of 112 residents anonymously completed the CTEI with separate 5-point rating scales for frequency and quality. Differences between frequency and quality scores were analysed using paired t tests. Quality was, on average, rated higher than frequency, with significant differences for 10 of the 15 items. The mean scores differed significantly in favour of quality, and the large effect size indicates the difference was substantial. Since quality was generally rated higher than frequency, the authors recommend distinguishing frequency from quality. This distinction helps to obtain unambiguous outcomes, which may be conducive to providing concrete and accurate feedback, improving faculty development and making fair decisions concerning promotion, tenure or salary.
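The paired t-test used above to compare frequency and quality scores has a compact form. A sketch with hypothetical ratings (not the study's data):

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for d = y - x: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [y - x for x, y in zip(xs, ys)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical scores from 8 residents rating one teaching behaviour
# on 5-point scales, once for frequency and once for quality.
freq    = [3, 4, 3, 2, 4, 3, 3, 4]
quality = [4, 4, 4, 3, 5, 4, 3, 5]
t = paired_t(freq, quality)   # ≈ 4.58 with n - 1 = 7 degrees of freedom
```

A positive t here means quality is rated systematically higher than frequency, mirroring the direction of the difference reported in the abstract.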
Affiliation(s)
- Johanna Schönrock-Adema
- Center for Research and Innovation in Medical Education, University of Groningen and University Medical Center Groningen, Antonius Deusinglaan 1, 9713 AV, Groningen, the Netherlands.
- Peter M Boendermaker
- Wenckebach Institute, University of Groningen and University Medical Center Groningen, Groningen, the Netherlands
- Pine Remmelts
- Wenckebach Institute, University of Groningen and University Medical Center Groningen, Groningen, the Netherlands
84
Guldal D, Windak A, Maagaard R, Allen J, Kjaer NK. Educational expectations of GP trainers. A EURACT needs analysis. Eur J Gen Pract 2012; 18:233-7. [DOI: 10.3109/13814788.2012.712958]
85
Fluit C, Bolhuis S, Grol R, Ham M, Feskens R, Laan R, Wensing M. Evaluation and feedback for effective clinical teaching in postgraduate medical education: validation of an assessment instrument incorporating the CanMEDS roles. Medical Teacher 2012; 34:893-901. [PMID: 22816979] [DOI: 10.3109/0142159x.2012.699114]
Abstract
BACKGROUND Providing clinical teachers in postgraduate medical education with feedback about their teaching skills is a powerful tool to improve clinical teaching. A systematic review showed that available instruments do not comprehensively cover all domains of clinical teaching. We developed and empirically tested a comprehensive instrument for assessing clinical teachers in the setting of workplace learning, linked to the CanMEDS roles. METHODS In a Delphi study, the content validity of a preliminary instrument with 88 items was studied, leading to the construction of the EFFECT (evaluation and feedback for effective clinical teaching) instrument. The response process was explored in a pilot test and focus group research with 18 residents of 6 different disciplines. Confirmatory factor analysis (CFA) and reliability analyses were performed on 407 evaluations of 117 supervisors, collected in 3 medical disciplines (paediatrics, pulmonary diseases and surgery) of 6 departments in 4 different hospitals. RESULTS CFA yielded an 11-factor model with a good to excellent fit; internal consistencies per domain ranged from 0.740 to 0.940, and 7 items could be deleted. CONCLUSION The model of workplace learning proved to be a useful framework for developing EFFECT, which incorporates the CanMEDS competencies and was shown to be valid and reliable.
Affiliation(s)
- Cornelia Fluit
- Radboud University Nijmegen Medical Centre, HB Nijmegen, the Netherlands.
86
Boerboom TBB, Mainhard T, Dolmans DHJM, Scherpbier AJJA, Van Beukelen P, Jaarsma ADC. Evaluating clinical teachers with the Maastricht clinical teaching questionnaire: how much 'teacher' is in student ratings? Medical Teacher 2012; 34:320-326. [PMID: 22455701] [DOI: 10.3109/0142159x.2012.660220]
Abstract
BACKGROUND Students are a popular source of data for evaluating the performance of clinical teachers. Instruments used to obtain student evaluations must have proven validity. One aspect of validity that often remains underexposed is the possible effect of between-student differences and of teacher and student characteristics not directly related to teaching performance. AIM The authors examined the occurrence of such effects, using multilevel analysis to analyse data from the Maastricht clinical teaching questionnaire (MCTQ), a validated evaluation instrument, in a veterinary curriculum. METHODS The 15-item MCTQ covers five domains. The authors used multilevel analysis to divide the variance in the domain scores into components related to teachers and students, respectively. They then estimated models to explore how the MCTQ scores depend on teacher and student characteristics. RESULTS Significant amounts of variance in student ratings were due to between-teacher differences, particularly for learning climate, modelling and coaching. The effects of teacher and student characteristics were mostly non-significant or small. CONCLUSION Large portions of variance in MCTQ scores were due to differences between teachers, while the contribution of student and teacher characteristics was negligible. The results support the validity of student ratings obtained with the MCTQ for evaluating teacher performance.
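The multilevel step described above partitions rating variance into teacher-level and student-level components. A toy, balanced-design sketch of that idea (hypothetical ratings, one list per teacher; the actual MCTQ analysis uses full multilevel models):

```python
def icc1(groups):
    """ICC(1) for a balanced one-way design: the share of total variance
    attributable to differences between groups (here, teachers)."""
    n = len(groups)            # number of teachers
    k = len(groups[0])         # ratings per teacher
    grand = sum(sum(g) for g in groups) / (n * k)
    # Between-teacher and within-teacher mean squares
    msb = k * sum((sum(g) / k - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - sum(g) / k) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three hypothetical teachers, four student ratings each.
groups = [[5, 5, 4, 5], [3, 3, 4, 3], [4, 4, 5, 4]]
print(round(icc1(groups), 3))  # → 0.676
```

A large share of between-teacher variance, as in this toy example, corresponds to the abstract's finding that MCTQ scores mostly reflect genuine differences between teachers.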
Affiliation(s)
- Tobias B B Boerboom
- Quality Improvement of Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, The Netherlands.
87
88
Abstract
The role of clinician educators (CEs) in institutions and medical centres remains, without doubt, vital. Although more than a century has passed since Sir William Osler established the role of the CE and the tradition of bedside teaching, there is still a lack of consensus on the attributes that define a 'clinician-educator'. The concept of a superior clinician who is also a dedicated teacher seems to fit the description of a CE, but most often it is insufficient to support the CE's academic advancement.
Affiliation(s)
- I Alexandraki
- Department of Medicine, University of Florida College of Medicine, Jacksonville, FL, USA.
89
van Roermund TCM, Tromp F, Scherpbier AJJA, Bottema BJAM, Bueving HJ. Teachers' ideas versus experts' descriptions of 'the good teacher' in postgraduate medical education: implications for implementation. A qualitative study. BMC Medical Education 2011; 11:42. [PMID: 21711507] [PMCID: PMC3163623] [DOI: 10.1186/1472-6920-11-42]
Abstract
BACKGROUND When innovations are introduced in medical education, teachers often have to adapt to a new concept of what being a good teacher includes. These new concepts do not necessarily match medical teachers' own, often strong, beliefs about what it means to be a good teacher. Recently, a new competency-based description of the good teacher was developed and introduced in all the Departments of Postgraduate Medical Education for Family Physicians in the Netherlands. We compared the views reflected in the new description with the views of the teachers who were required to adopt the new framework. METHODS Qualitative study. We interviewed teachers in two Departments of Postgraduate Medical Education for Family Physicians in the Netherlands. The transcripts of the interviews were analysed independently by two researchers, who coded and categorised relevant fragments until consensus was reached on six themes. We investigated to what extent these themes matched the new description. RESULTS Comparing the teachers' views with the concepts described in the new competency-based framework is like looking into two mirrors that reflect clearly dissimilar images. At least two of the themes we found are important in relation to the implementation of new educational methods: the teachers' identification and the organisational culture. The latter plays an important role in the development of teachers' ideas about good teaching. CONCLUSIONS The main finding of this study is the key role played by teachers' feelings regarding their professional identity and by the local teaching culture in shaping their views and expectations regarding their work. This suggests that, when implementing a new teaching framework and in faculty development programmes, careful attention should be paid to teachers' existing identification model and the culture that fostered it.
Affiliation(s)
- Thea CM van Roermund
- Department Primary and Community Care, Radboud University Nijmegen Medical Centre, Nijmegen, Postbus 9101, Route number 166, 6500 HB Nijmegen, the Netherlands
- Fred Tromp
- Department Primary and Community Care, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Albert JJA Scherpbier
- Institute for Medical Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Ben JAM Bottema
- Department Primary and Community Care, Radboud University Nijmegen Medical Centre, Nijmegen, the Netherlands
- Herman J Bueving
- Department of Post Graduate Medical Training for Family Medicine, ErasmusMC, University Medical Center Rotterdam, Rotterdam, the Netherlands
90
Boerboom TBB, Jaarsma D, Dolmans DHJM, Scherpbier AJJA, Mastenbroek NJJM, Van Beukelen P. Peer group reflection helps clinical teachers to critically reflect on their teaching. Medical Teacher 2011; 33:e615-e623. [PMID: 22022915] [DOI: 10.3109/0142159x.2011.610840]
Abstract
BACKGROUND Student evaluations can help clinical teachers to reflect on their teaching skills and find ways to improve their teaching. Studies have shown that the mere presentation of student evaluations is not a sufficient incentive for teachers to critically reflect on their teaching. AIM We evaluated and compared the effectiveness of two feedback facilitation strategies that were identical except for a peer reflection meeting. METHOD In this study, 54 clinical teachers were randomly assigned to two feedback strategies. In one strategy, a peer reflection meeting was added as an additional step. All teachers completed a questionnaire evaluating the strategy that they had experienced. We analysed the reflection reports and the evaluation questionnaire. RESULTS Both strategies stimulated teachers to reflect on feedback and formulate alternative actions for their teaching practice. The teachers who had participated in the peer reflection meeting showed deeper critical reflection and more concrete plans to change their teaching. Both feedback strategies were considered effective by the majority of the teachers. CONCLUSIONS Strategies combining student feedback and self-assessment stimulated reflection on teaching and helped clinical teachers to formulate plans for improvement. A peer reflection meeting seemed to enhance reflection quality. Further research should establish whether it can have lasting effects on teaching quality.
Affiliation(s)
- Tobias B B Boerboom
- Quality Improvement of Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, The Netherlands.
91