51. Holdsworth C, Skinner EH, Delany CM. Using simulation pedagogy to teach clinical education skills: a randomized trial. Physiother Theory Pract 2016;32:284-95. doi:10.3109/09593985.2016.1139645
Affiliation(s)
- Clare Holdsworth
- Department of Physiotherapy, Western Health, Footscray, Victoria, Australia
- Elizabeth H. Skinner
- Department of Physiotherapy, Western Health, Footscray, Victoria, Australia
- Department of Physiotherapy, School of Health Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Department of Physiotherapy, School of Primary Health Care, Faculty of Medicine, Nursing and Health Science, Monash University, Frankston, Victoria, Australia
| | - Clare M. Delany
- Department of Physiotherapy, School of Health Sciences, The University of Melbourne, Parkville, Victoria, Australia
52. Ojano Sheehan O, Brannan G, Dogbey G. Osteopathic medical students' perception of teaching effectiveness of their primary care clinical preceptors. Int J Osteopath Med 2016. doi:10.1016/j.ijosm.2015.12.002
53. Olmos-Vega F, Dolmans D, Donkers J, Stalmeijer RE. Understanding how residents' preferences for supervisory methods change throughout residency training: a mixed-methods study. BMC Med Educ 2015;15:177. PMID: 26475161. PMCID: PMC4609127. doi:10.1186/s12909-015-0462-7
Abstract
BACKGROUND A major challenge for clinical supervisors is to encourage their residents to be independent without jeopardising patient safety. Residents' preferences in this regard, according to level of training, have not been fully explored. This study investigated which teaching methods of the Cognitive Apprenticeship (CA) model junior, intermediate and senior residents preferred and why, and how these preferences differed between groups. METHODS We invited 301 residents from all residency programmes of Javeriana University, Bogotá, Colombia, to participate. Each resident was asked to complete the Maastricht Clinical Teaching Questionnaire (MCTQ), which, being based on the teaching methods of CA, asked residents to rate the importance of each teaching method to their learning and to indicate which of these they preferred most and why. RESULTS A total of 215 residents (71%) completed the questionnaire. All agreed that all CA teaching methods were important or very important to their learning, regardless of their level of training. However, the reasons for their preferences clearly differed between groups: junior and intermediate residents preferred teaching methods that were more supervisor-directed, such as modelling and coaching, whereas senior residents preferred teaching methods that were more resident-directed, such as exploration and articulation. CONCLUSIONS The results indicate that clinical supervision (CS) should accommodate residents' varying degrees of development by attuning the configuration of CA teaching methods to each level of residency training. This configuration should initially vest more power in the supervisor and gradually let the resident take charge, without ever discontinuing CS.
Affiliation(s)
- Francisco Olmos-Vega
- Pontificia Universidad Javeriana, Carrera 7 # 40-62, Bogotá, Colombia.
- Anaesthesiology Department, San Ignacio Hospital, Carrera 7 N42-00 Fourth floor, Bogotá, DC, Colombia.
- Diana Dolmans
- Maastricht University, 6200 MD, Maastricht, The Netherlands.
- Jeroen Donkers
- Maastricht University, 6200 MD, Maastricht, The Netherlands.
54. Schiekirka S, Feufel MA, Herrmann-Lingen C, Raupach T. Evaluation in medical education: a topical review of target parameters, data collection tools and confounding factors. GMS Ger Med Sci 2015;13:Doc15. PMID: 26421003. PMCID: PMC4576315. doi:10.3205/000219
Abstract
Background and objective: Evaluation is an integral part of education in German medical schools. According to the quality standards set by the German Society for Evaluation, evaluation tools must provide an accurate and fair appraisal of teaching quality. Thus, data collection tools must be highly reliable and valid. This review summarises the current literature on evaluation of medical education with regard to the possible dimensions of teaching quality, the psychometric properties of survey instruments and potential confounding factors. Methods: We searched PubMed, PsycINFO and PSYNDEX for literature on evaluation in medical education and included studies published up to June 30, 2011, as well as articles identified in the "grey literature". Results are presented as a narrative review. Results: We identified four dimensions of teaching quality: structure, process, teacher characteristics, and outcome. Student ratings are predominantly used to address the first three dimensions, and a number of reliable tools are available for this purpose. However, potential confounders of student ratings pose a threat to the validity of these instruments. Outcome is usually operationalised in terms of student performance on examinations, but methodological problems may limit the usability of these data for evaluation purposes. In addition, not all examinations at German medical schools meet current quality standards. Conclusion: The choice of tools for evaluating medical education should be guided by the dimension targeted by the evaluation. Likewise, evaluation results can only be interpreted within the context of the construct addressed by the data collection tool used, as well as its specific confounding factors.
Affiliation(s)
- Sarah Schiekirka
- Universitätsmedizin Göttingen, Studiendekanat, Göttingen, Germany
- Markus A Feufel
- Charité - Universitätsmedizin Berlin, Prodekanat für Studium und Lehre, Berlin, Germany; Max-Planck-Institut für Bildungsforschung, Forschungsbereich Adaptives Verhalten und Kognition und Harding Zentrum für Risikokommunikation, Berlin, Germany
- Christoph Herrmann-Lingen
- Universitätsmedizin Göttingen, Klinik für Psychosomatische Medizin und Psychotherapie, Göttingen, Germany; Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften, Düsseldorf, Germany
- Tobias Raupach
- Universitätsmedizin Göttingen, Klinik für Kardiologie und Pneumologie, Göttingen, Germany; University College London, Health Behaviour Research Centre, London, Great Britain
55. Uijtdehaage S, O'Neal C. A curious case of the phantom professor: mindless teaching evaluations by medical students. Med Educ 2015;49:928-32. PMID: 26296409. doi:10.1111/medu.12647
Abstract
CONTEXT Student evaluations of teaching (SETs) inform faculty promotion decisions and course improvement, a process predicated on the assumption that students complete the evaluations diligently. Anecdotal evidence suggests that this may not be so. OBJECTIVES We sought to determine the degree to which medical students complete SETs deliberately in a classroom-style, multi-instructor course. METHODS We inserted one fictitious lecturer into each of two pre-clinical courses. Students were required to submit their anonymous ratings of all lecturers, including the fictitious one, on a 5-point Likert scale within 2 weeks after the course, but could choose not to evaluate a lecturer. The following year we repeated this, but included a portrait of the fictitious lecturer. The number of actual lecturers in each course ranged from 23 to 52. RESULTS Response rates were 99% and 94%, respectively, in the 2 years of the study. Without a portrait, 66% (183 of 277) of students evaluated the fictitious lecturer; fewer students (49%, 140 of 285) did so with a portrait (chi-squared test, p < 0.0001). CONCLUSIONS These findings suggest that many medical students complete SETs mindlessly, even when a photograph is included, without careful consideration of whom they are evaluating, much less of how that faculty member performed. This hampers programme quality improvement and may harm the academic advancement of faculty members. We present a framework that suggests a fundamentally different approach to SETs, one that involves students prospectively and proactively.
Affiliation(s)
- Sebastian Uijtdehaage
- Center for Educational Development and Research, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Christopher O'Neal
- Center for Educational Development and Research, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
56. Lases SSL, Arah OA, Pierik EGJMR, Heineman E, Lombarts MJMHK. Residents' engagement and empathy associated with their perception of faculty's teaching performance. World J Surg 2015;38:2753-60. PMID: 25008244. doi:10.1007/s00268-014-2687-8
Abstract
BACKGROUND Faculty members rely on residents' feedback about their teaching performance. The influence of residents' characteristics on their evaluations of faculty is relatively unexplored. We aimed to evaluate the levels of work engagement and empathy among residents and the association of both characteristics with their evaluation of the faculty's teaching performance. METHODS A multicenter questionnaire study among 271 surgery and gynecology residents was performed from September 2012 to February 2013. Residents' ratings of the faculty's teaching performance were collected using the system for evaluation of teaching quality (SETQ). Residents were also invited to complete standardized measures of work engagement and empathy: the short Utrecht Work Engagement Scale and the Jefferson Scale of Physician Empathy, respectively. Linear regression analysis using generalized estimating equations was used to evaluate the association of residents' engagement and empathy with their evaluations of teaching performance. RESULTS Overall, 204 residents (75.3%) completed 1814 SETQ evaluations of 302 faculty, and 143 (52.8%) and 140 (51.7%) residents, respectively, completed the engagement and empathy measurements. The median scores for residents' engagement and empathy were 4.56 (scale 0-6) and 5.55 (scale 1-7), respectively. Higher levels of residents' engagement (regression coefficient b = 0.128; 95% confidence interval (CI) 0.072-0.184; p < 0.001) and empathy (b = 0.113; 95% CI 0.063-0.164; p < 0.001) were associated with higher faculty teaching performance scores. CONCLUSIONS Residents' engagement and empathy appear to be positively associated with their evaluation of the faculty's performance. A possible explanation is that residents who are more engaged and better able to understand and share others' perspectives both stimulate and experience faculty teaching more than others do.
Affiliation(s)
- S S Lenny Lases
- Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Meibergdreef 9, PO Box 22660, 1100 DD Amsterdam, The Netherlands
57. Tai J, Bearman M, Edouard V, Kent F, Nestel D, Molloy E. Clinical supervision training across contexts. Clin Teach 2015;13:262-6. doi:10.1111/tct.12432
Affiliation(s)
- Joanna Tai
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
- Margaret Bearman
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
- Vicki Edouard
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
- Fiona Kent
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
- Debra Nestel
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
- Elizabeth Molloy
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria, Australia
58. Vaughan B. Developing a clinical teaching quality questionnaire for use in a university osteopathic pre-registration teaching program. BMC Med Educ 2015;15:70. PMID: 25885108. PMCID: PMC4404120. doi:10.1186/s12909-015-0358-6
Abstract
BACKGROUND Clinical education is an important component of many health professional training programs. A range of questionnaires exists to assess the quality of the clinical educator; however, none has been developed for student-led clinic environments. The present study developed a questionnaire to assess the quality of the clinical educators in the osteopathy program at Victoria University. METHODS A systematic search of the literature was used to identify questionnaires that evaluated the quality of clinical teaching. Eighty-three items were extracted and reviewed by students, clinical educators and academics for their appropriateness for inclusion in a questionnaire. A fifty-six-item questionnaire was then trialled with osteopathy students. A variety of statistics was used to determine the number of factors to extract, and exploratory factor analysis (EFA) was used to investigate the factor structure. RESULTS The number of factors to extract was calculated to be between 3 and 6. Review of the factor structures suggested that the most appropriate fit was four or five factors. The EFA of the four-factor solution collapsed into three factors; the five-factor solution demonstrated the most stable structure. Internal consistency of the five-factor solution was greater than 0.70. CONCLUSIONS The five factors were labelled Learning Environment (Factor 1), Reflective Practice (Factor 2), Feedback (Factor 3), Patient Management (Factor 4) and Modelling (Factor 5). Further research is now required to continue investigating the construct validity and reliability of the questionnaire.
Affiliation(s)
- Brett Vaughan
- Centre for Chronic Disease Prevention & Management, College of Health & Biomedicine, Victoria University, Melbourne, Australia.
- Institute of Sport, Exercise & Active Living, Victoria University, Melbourne, Australia.
- School of Health & Human Sciences, Southern Cross University, Lismore, Australia.
59. Leppink J. Data analysis in medical education research: a multilevel perspective. Perspect Med Educ 2015;4:14-24. PMID: 25609172. PMCID: PMC4348225. doi:10.1007/s40037-015-0160-5
Abstract
A substantial part of medical education research focuses on learning in teams (e.g., departments, problem-based learning groups) or centres (e.g., clinics, institutions) that are followed over time. Individual students or employees sharing the same team or centre tend to be more similar in learning than students or employees from different teams or centres. In other words, when students or employees are nested within teams or centres, there is a within-team or within-centre correlation that should be taken into account in the analysis of data obtained from individuals in these teams or centres. Further, when individuals are measured several times on the same performance (or other) variable, these repeated measurements tend to be correlated; that is, there is an intra-individual correlation that should be taken into account when analyzing data obtained from these individuals. In such a study context, many researchers resort to methods that cannot account for intra-team and/or intra-individual correlation, and this may result in incorrect conclusions with regard to effects and relations of interest. This comparison paper presents the benefits of adopting a proper multilevel perspective on the conceptualization and estimation of effects and relations of interest.
Affiliation(s)
- Jimmie Leppink
- Department of Educational Development and Research, School of Health Professions Education, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
60. Kikukawa M, Stalmeijer RE, Emura S, Roff S, Scherpbier AJJA. An instrument for evaluating clinical teaching in Japan: content validity and cultural sensitivity. BMC Med Educ 2014;14:179. PMID: 25164309. PMCID: PMC4167259. doi:10.1186/1472-6920-14-179
Abstract
BACKGROUND Many instruments for evaluating clinical teaching have been developed, but almost all of them in Western countries. None of these instruments has been validated for the Asian culture, and a literature search yielded no instruments developed specifically for that culture. A key element that influences content validity when developing instruments for evaluating the quality of teaching is culture. The aim of this study was to develop a culture-specific instrument with strong content validity for evaluating clinical teaching in initial postgraduate medical training in Japan. METHODS Based on data from a literature search and an earlier study, we prepared a draft evaluation instrument. To ensure a good cultural fit of the instrument with the Asian context, we conducted a modified Delphi procedure among three groups of stakeholders (five education experts, twelve clinical teachers and ten residents) to establish content validity, as this factor is particularly susceptible to cultural factors. RESULTS Two Delphi rounds were conducted. Through this procedure, 52 prospective items were reworded, combined or eliminated, resulting in a 25-item instrument validated for the Japanese setting. CONCLUSIONS This is the first study describing the development and content validation of an instrument for evaluating clinical teaching specifically tailored to an East Asian setting. The instrument has similarities to and differences from instruments of Western origin. Our findings suggest that designers of evaluation instruments should consider the probability that the content validity of instruments for evaluating clinical teachers can be influenced by cultural aspects.
Affiliation(s)
- Makoto Kikukawa
- Department of Medical Education, Kyushu University, 3-1-1 Maidashi Higashi-ku Fukuoka, 81-8582 Kyushu, Japan
- Renee E Stalmeijer
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Sei Emura
- Centre for Graduate Medical Education Development and Research, Saga University Hospital, Saga, Japan
- Sue Roff
- The Centre for Medical Education, Dundee Medical School, Dundee, Scotland
- Albert JJA Scherpbier
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
61. Jochemsen-van der Leeuw HGAR, van Dijk N, Wieringa-de Waard M. Assessment of the clinical trainer as a role model: a Role Model Apperception Tool (RoMAT). Acad Med 2014;89:671-7. PMID: 24556764. PMCID: PMC4885572. doi:10.1097/acm.0000000000000169
Abstract
PURPOSE Positive role modeling by clinical trainers is important for helping trainees learn professional and competent behavior. The authors developed and validated an instrument to assess clinical trainers as role models: the Role Model Apperception Tool (RoMAT). METHOD On the basis of a 2011 systematic review of the literature and through consultation with medical education experts and with clinical trainers and trainees, the authors developed 17 attributes characterizing a role model, to be assessed using a Likert scale. In 2012, general practice (GP) trainees, in their first or third year of postgraduate training, who attended a curriculum day at four institutes in different parts of the Netherlands, completed the RoMAT. The authors performed a principal component analysis on the data that were generated, and they tested the instrument's validity and reliability. RESULTS Of 328 potential GP trainees, 279 (85%) participated. Of these, 202 (72%) were female, and 154 (55%) were first-year trainees. The RoMAT demonstrated both content and convergent validity. Two components were extracted: "Caring Attitude" and "Effectiveness." Both components had high reliability scores (0.92 and 0.84, respectively). Less experienced trainees scored their trainers significantly higher on the Caring Attitude component. CONCLUSIONS The RoMAT proved to be a valid, reliable instrument for assessing clinical trainers' role-modeling behavior. Both components include an equal number of items addressing personal (Heart), teaching (Head), and clinical (Hands-on) qualities, thus demonstrating that competence in the "3Hs" is a condition for positive role modeling. Educational managers (residency directors) and trainees alike can use the RoMAT.
Affiliation(s)
- H G A Ria Jochemsen-van der Leeuw
- Dr. Jochemsen-van der Leeuw is general practitioner and PhD student, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands. Dr. van Dijk is assistant professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands. Dr. Wieringa-de Waard is professor, Department of General Practice/Family Medicine, Academic Medical Center-University of Amsterdam, Amsterdam, the Netherlands
62. Owolabi MO. Development and psychometric characteristics of a new domain of the Stanford Faculty Development Program instrument. J Contin Educ Health Prof 2014;34:13-24. PMID: 24648360. doi:10.1002/chp.21213
Abstract
INTRODUCTION The teacher's attitude domain, a pivotal aspect of clinical teaching, is missing from the Stanford Faculty Development Program Questionnaire (SFDPQ), the most widely used student-based method of assessing clinical teaching skills. This study was conducted to develop and validate a teacher's attitude domain and to evaluate the validity and internal consistency reliability of the augmented SFDPQ. METHODS Items generated for the new domain included the teacher's enthusiasm, sobriety, humility, thoroughness, empathy, and accessibility. The study involved 20 resident doctors assessed once by 64 medical students using the augmented SFDPQ. Construct validity was explored using correlations among the different domains and a global rating scale, and factor analysis was performed. RESULTS The response rate was 94%. The new domain had a Cronbach's alpha of 0.89, with a 1-factor solution explaining 57.1% of its variance. It showed the strongest correlation with the global rating scale (rho = 0.71). The augmented SFDPQ, which had a Cronbach's alpha of 0.93, correlated better with the global rating scale (rho = 0.72, p < 0.00001) than the original SFDPQ did (rho = 0.67, p < 0.00001). DISCUSSION The new teacher's attitude domain exhibited good internal consistency as well as construct and factorial validity, and it enhanced the content and construct validity of the SFDPQ. The validated construct of the augmented SFDPQ is recommended for the design and evaluation of basic and continuing clinical teaching programs.
63. Strand P, Sjöborg K, Stalmeijer R, Wichmann-Hansen G, Jakobsson U, Edgren G. Development and psychometric evaluation of the Undergraduate Clinical Education Environment Measure (UCEEM). Med Teach 2013;35:1014-26. PMID: 24050817. doi:10.3109/0142159x.2013.835389
Abstract
BACKGROUND There is a paucity of instruments designed to evaluate the multiple dimensions of the workplace as an educational environment for undergraduate medical students. AIM The aim was to develop and psychometrically evaluate an instrument to measure how undergraduate medical students perceive the clinical workplace environment, based on workplace learning theories and empirical findings. METHOD Development of the instrument relied on established standards including theoretical and empirical grounding, systematic item development and expert review at various stages to ensure content validity. Qualitative and quantitative methods were employed using a series of steps from conceptualization through psychometric analysis of scores in a Swedish medical student population. RESULTS The final result was a 25-item instrument with two overarching dimensions, experiential learning and social participation, and four subscales that coincided well with theory and empirical findings: Opportunities to learn in and through work & quality of supervision; Preparedness for student entry; Workplace interaction patterns & student inclusion; and Equal treatment. Evidence from various sources supported content validity, construct validity and reliability of the instrument. CONCLUSION The Undergraduate Clinical Education Environment Measure represents a valid, reliable and feasible multidimensional instrument for evaluation of the clinical workplace as a learning environment for undergraduate medical students. Further validation in different populations using various psychometric methods is needed.
64. Jippes M, Driessen EW, Broers NJ, Majoor GD, Gijselaers WH, van der Vleuten CPM. A medical school's organizational readiness for curriculum change (MORC): development and validation of a questionnaire. Acad Med 2013;88:1346-56. PMID: 23887017. doi:10.1097/acm.0b013e31829f0869
Abstract
PURPOSE Because successful change implementation depends on organizational readiness for change, the authors developed and assessed the validity of a questionnaire, based on a theoretical model of organizational readiness for change, designed to measure, specifically, a medical school's organizational readiness for curriculum change (MORC). METHOD In 2012, a panel of medical education experts judged and adapted a preliminary MORC questionnaire through a modified Delphi procedure. The authors administered the resulting questionnaire to medical school faculty involved in curriculum change and tested the psychometric properties using exploratory and confirmatory factor analysis and generalizability analysis. RESULTS The mean relevance score of the Delphi panel (n = 19) reached 4.2 on a five-point Likert-type scale (1 = not relevant and 5 = highly relevant) in the second round, meeting predefined criteria for completing the Delphi procedure. Faculty (n = 991) from 131 medical schools in 56 countries completed MORC. Exploratory factor analysis yielded three underlying factors (motivation, capability, and external pressure) in 12 subscales with 53 items. The scale structure suggested by exploratory factor analysis was confirmed by confirmatory factor analysis. Cronbach alpha ranged from 0.67 to 0.92 for the subscales. Generalizability analysis showed that the MORC results of 5 to 16 faculty members can reliably evaluate a school's organizational readiness for change. CONCLUSIONS MORC is a valid, reliable questionnaire for measuring organizational readiness for curriculum change in medical schools. It can identify which elements in a change process require special attention so as to increase the chance of successful implementation.
Affiliation(s)
- Mariëlle Jippes
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands.
65. Stalmeijer RE, Dolmans DHJM, Snellen-Balendong HAM, van Santen-Hoeufft M, Wolfhagen IHAP, Scherpbier AJJA. Clinical teaching based on principles of cognitive apprenticeship: views of experienced clinical teachers. Acad Med 2013;88:861-5. PMID: 23619074. doi:10.1097/acm.0b013e31828fff12
Abstract
PURPOSE To explore (1) whether an instructional model based on principles of cognitive apprenticeship fits with the practice of experienced clinical teachers and (2) which factors influence clinical teaching during clerkships from an environmental, teacher, and student level as perceived by the clinical teachers themselves. The model was designed to apply directly to teaching behaviors of clinical teachers and consists of three phases, advocating teaching behaviors such as modeling, creating a safe learning environment, coaching, knowledge articulation, and exploration. METHOD A purposive sample of 17 experienced clinical teachers from five different disciplines and four different teaching hospitals took part in semistructured individual interviews. Two researchers independently performed a thematic analysis of the interview transcripts. Coding was discussed within the research team until consensus was reached. RESULTS All participants recognized the theoretical model as a structured picture of the practice of teaching activities during both regular and senior clerkships. According to participants, modeling and creating a safe learning environment were fundamental to the learning process of both regular and senior clerkship students. Division of teaching responsibilities, longer rotations, and proactive behavior of teachers and students ensured that teachers were able to apply all steps in the model. CONCLUSIONS The theoretical model can offer valuable guidance in structuring clinical teaching activities and offers suggestions for the design of effective clerkships.
Affiliation(s)
- Renée E Stalmeijer
- Department of Educational Development and Educational Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands.

66
Backeris ME, Patel RM, Metro DG, Sakai T. Impact of a productivity-based compensation system on faculty clinical teaching scores, as evaluated by anesthesiology residents. J Clin Anesth 2013; 25:209-13. [DOI: 10.1016/j.jclinane.2012.11.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2012] [Revised: 10/07/2012] [Accepted: 11/11/2012] [Indexed: 11/17/2022]
67
Fluit CRMG, Feskens R, Bolhuis S, Grol R, Wensing M, Laan R. Repeated evaluations of the quality of clinical teaching by residents. PERSPECTIVES ON MEDICAL EDUCATION 2013; 2:87-94. [PMID: 23670697 PMCID: PMC3656177 DOI: 10.1007/s40037-013-0060-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Many studies report on the validation of instruments for facilitating feedback to clinical supervisors. There is mixed evidence on whether evaluations lead to more effective teaching and higher ratings. We assessed changes in resident ratings after an evaluation and feedback session with their supervisors. Supervisors in three medical specialities were evaluated using a validated instrument (EFFECT). Mean overall scores (MOS) and mean scale scores were calculated and compared using paired t-tests. Twenty-four supervisors from three departments were evaluated in two subsequent years. The MOS increased from 4.36 to 4.49. Two scales showed an increase >0.2: 'teaching methodology' (4.34-4.55) and 'assessment' (4.11-4.39). Supervisors with an MOS <4.0 in year 1 (n = 5) all demonstrated a strong increase in the MOS (mean overall increase 0.50, range 0.34-0.64). Four supervisors with an MOS between 4.0 and 4.5 (n = 6) demonstrated an increase >0.2 in their MOS (mean overall increase 0.21, range -0.15 to 0.53). One supervisor with an MOS >4.5 (n = 13) demonstrated an increase >0.02 in the MOS, and two demonstrated a decrease >0.2 (mean overall increase -0.06, range -0.42 to 0.42). EFFECT-S was associated with a positive change in residents' ratings of their supervisors, predominantly in supervisors with relatively low initial scores.
Affiliation(s)
- Cornelia R M G Fluit
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands.
- Remco Feskens
- Department of Methods and Statistics, Utrecht University, Utrecht, the Netherlands
- Sanneke Bolhuis
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Richard Grol
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Michel Wensing
- Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Roland Laan
- Academic Educational Institute, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
- Department of Rheumatology, Radboud University Nijmegen Medical Centre, 306 IWOO, PO Box 9101, 6500 HB, Nijmegen, the Netherlands
68
Jochemsen-van der Leeuw HGAR, van Dijk N, van Etten-Jamaludin FS, Wieringa-de Waard M. The attributes of the clinical trainer as a role model: a systematic review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2013; 88:26-34. [PMID: 23165277 DOI: 10.1097/acm.0b013e318276d070] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE Medical trainees (interns and residents) and their clinical trainers need to be aware of the differences between positive and negative role modeling to ensure that trainees imitate and that trainers demonstrate the professional behavior required to provide high-quality patient care. The authors systematically reviewed the medical and medical education literature to identify the attributes characterizing clinical trainers as positive and negative role models for trainees. METHOD The authors searched the MEDLINE, EMBASE, ERIC, and PsycINFO databases from their earliest dates until May 2011. They included quantitative and qualitative original studies, published in any language, on role modeling by clinical trainers for trainees in graduate medical education. They assessed the methodological quality of and extracted data from the included studies, using predefined forms. RESULTS Seventeen articles met inclusion criteria. The authors divided attributes of role models into three categories: patient care qualities, teaching qualities, and personal qualities. Positive role models were frequently described as excellent clinicians who were invested in the doctor-patient relationship. They inspired and taught trainees while carrying out other tasks, were patient, and had integrity. These findings confirm the implicit nature of role modeling. Positive role models' appearance and scientific achievements were among their least important attributes. Negative role models were described as uncaring toward patients, unsupportive of trainees, cynical, and impatient. CONCLUSIONS The identified attributes may help trainees recognize which aspects of the clinical trainer's professional behavior to imitate, by adding the important step of apperception to the process of learning professional competencies through observation.
69
Dornan T, Muijtjens A, Graham J, Scherpbier A, Boshuizen H. Manchester Clinical Placement Index (MCPI). Conditions for medical students' learning in hospital and community placements. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2012; 17:703-16. [PMID: 22234383 PMCID: PMC3490061 DOI: 10.1007/s10459-011-9344-x] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2011] [Accepted: 12/20/2011] [Indexed: 05/16/2023]
Abstract
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an 11-item measurement instrument, whose theoretical and empirical origins were in an Experience Based Learning model of how medical students learn in communities of practice (COPs), and whose contextual origins were in a community-oriented, horizontally integrated, undergraduate medical programme. The objectives were to examine the psychometric properties of the scale in both hospital and community COPs and to provide validity evidence to support using it to measure the quality of placements. The instrument was administered twice to students learning in both hospital and community placements and analysed using exploratory factor analysis and a generalisability analysis. A total of 754 of a possible 902 questionnaires were returned (84% response rate), representing 168 placements. Eight items loaded onto two factors, which accounted for 78% of variance in the hospital data and 82% of variance in the community data. One factor was the placement learning environment, whose five constituent items were how learners were received at the start of the placement, people's supportiveness, and the quality of organisation, leadership, and facilities. The other factor represented the quality of training: instruction in skills, observation of students performing skills, and provision of feedback to students. Alpha coefficients ranged between 0.89 and 0.93 and there were no redundant or ambiguous items. Generalisability analysis showed that between 7 and 11 raters would be needed to achieve acceptable reliability.
There is validity evidence to support using the simple 8-item, mixed methods Manchester Clinical Placement Index to measure key conditions for undergraduate medical students' experience based learning: the quality of the learning environment and the training provided within it. Its conceptual orientation is towards Communities of Practice, which is a dominant contemporary theory in undergraduate medical education.
Affiliation(s)
- Tim Dornan
- Department of Educational Development and Research, Maastricht University, The Netherlands.

70
Kelly M, Bennett D, McDonald P. Evaluation of clinical teaching in general practice using the Maastricht Clinical Teaching Questionnaire. MEDICAL TEACHER 2012; 34:1089. [PMID: 22931143 DOI: 10.3109/0142159x.2012.716562] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
71
Fluit C, Bolhuis S, Grol R, Ham M, Feskens R, Laan R, Wensing M. Evaluation and feedback for effective clinical teaching in postgraduate medical education: validation of an assessment instrument incorporating the CanMEDS roles. MEDICAL TEACHER 2012; 34:893-901. [PMID: 22816979 DOI: 10.3109/0142159x.2012.699114] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND Providing clinical teachers in postgraduate medical education with feedback about their teaching skills is a powerful tool to improve clinical teaching. A systematic review showed that available instruments do not comprehensively cover all domains of clinical teaching. We developed and empirically tested a comprehensive instrument for assessing clinical teachers in the setting of workplace learning, linked to the CanMEDS roles. METHODS In a Delphi study, the content validity of a preliminary instrument with 88 items was examined, leading to the construction of the EFFECT (evaluation and feedback for effective clinical teaching) instrument. The response process was explored in a pilot test and focus group research with 18 residents from 6 different disciplines. Confirmatory factor analysis (CFA) and reliability analyses were performed on 407 evaluations of 117 supervisors, collected in 3 medical disciplines (paediatrics, pulmonary diseases and surgery) across 6 departments in 4 different hospitals. RESULTS CFA yielded an 11-factor model with a good to excellent fit; internal consistencies per domain ranged from 0.740 to 0.940, and 7 items could be deleted. CONCLUSION The model of workplace learning proved to be a useful framework for developing EFFECT, which incorporates the CanMEDS competencies and was shown to be valid and reliable.
Affiliation(s)
- Cornelia Fluit
- Radboud University Nijmegen Medical Centre, HB Nijmegen, the Netherlands.

72
Boerboom TBB, Mainhard T, Dolmans DHJM, Scherpbier AJJA, Van Beukelen P, Jaarsma ADC. Evaluating clinical teachers with the Maastricht clinical teaching questionnaire: how much 'teacher' is in student ratings? MEDICAL TEACHER 2012; 34:320-326. [PMID: 22455701 DOI: 10.3109/0142159x.2012.660220] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
BACKGROUND Students are a popular source of data to evaluate the performance of clinical teachers. Instruments to obtain student evaluations must have proven validity. One aspect of validity that often remains underexposed is the possibility of effects of between-student differences and of teacher and student characteristics not directly related to teaching performance. AIM The authors examined the occurrence of such effects, using multilevel analysis to analyse data from the Maastricht clinical teaching questionnaire (MCTQ), a validated evaluation instrument, in a veterinary curriculum. METHODS The 15-item MCTQ covers five domains. The authors used multilevel analysis to divide the variance in the domain scores into components related to teachers and to students, respectively. They estimated subsequent models to explore how MCTQ scores depend on teacher and student characteristics. RESULTS Significant amounts of variance in student ratings were due to between-teacher differences, particularly for learning climate, modelling and coaching. The effects of teacher and student characteristics were mostly non-significant or small. CONCLUSION Large portions of variance in MCTQ scores were due to differences between teachers, while the contribution of student and teacher characteristics was negligible. The results support the validity of student ratings obtained with the MCTQ for evaluating teacher performance.
Affiliation(s)
- Tobias B B Boerboom
- Quality Improvement of Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, The Netherlands.

73
Beran TN, Donnon T, Hecker K. A review of student evaluation of teaching: applications to veterinary medical education. JOURNAL OF VETERINARY MEDICAL EDUCATION 2012; 39:71-78. [PMID: 22433742 DOI: 10.3138/jvme.0311.037r] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Student evaluation of teaching is ubiquitous in colleges and universities around the world. Since the implementation of student evaluations in the US in the 1970s, considerable research has been devoted to their appropriate use as a means of judging the effectiveness of teaching. The present article aims to (1) examine the evidence for the reliability, validity, and utility of student ratings; (2) provide seven guidelines for ways to identify effective instruction, given that the purpose of student evaluation is to assess effective teaching; and (3) conclude with recommendations for integrating student ratings into the continuous evaluation of veterinary medical education.
Affiliation(s)
- Tanya N Beran
- Department of Community Health Sciences, University of Calgary, Calgary, Canada.

74
Abstract
This Guide provides an overview of educational theory relevant to learning from experience. It considers experience gained in clinical workplaces from early medical student days through qualification to continuing professional development. Three key assumptions underpin the Guide: learning is 'situated'; it can be viewed either as an individual or a collective process; and the learning relevant to this Guide is triggered by authentic practice-based experiences. We first provide an overview of the guiding principles of experiential learning and significant historical contributions to its development as a theoretical perspective. We then discuss socio-cultural perspectives on experiential learning, highlighting their key tenets and drawing together common threads between theories. The second part of the Guide provides examples of learning from experience in practice to show how theoretical stances apply to clinical workplaces. Early experience, student clerkships and residency training are discussed in turn. We end with a summary of the current state of understanding.
75
Dolmans DHJM, Tigelaar D. Building bridges between theory and practice in medical education using a design-based research approach: AMEE Guide No. 60. MEDICAL TEACHER 2012; 34:1-10. [PMID: 22250671 DOI: 10.3109/0142159x.2011.595437] [Citation(s) in RCA: 85] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Medical education research has grown enormously over the past 20 years but, according to influential leaders and researchers in the field, it does not make sufficient use of theory. In this AMEE Guide, it is argued that design-based research (DBR) studies should be conducted much more often in medical education research, because such studies both advance the testing and refinement of theories and advance educational practice. The Guide explains the essential characteristics of DBR, how DBR differs from other approaches such as formative evaluation, and the pitfalls and challenges of DBR. The main challenges concern how to ensure that DBR studies yield findings of broader relevance than the local situation, and how to ensure that DBR contributes to theory testing and refinement. An example of a series of DBR studies on the design of a teaching portfolio in higher education, aimed at stimulating teachers' professional development, is described to illustrate how DBR studies work in practice. Finally, it is argued that DBR studies could play an important role in advancing theory and practice in two broad domains: designing or redesigning work-based learning environments and assessment programs.
Affiliation(s)
- Diana H J M Dolmans
- Department of Educational Development & Research, Maastricht University, The Netherlands.

76
Boerboom TBB, Dolmans DHJM, Jaarsma ADC, Muijtjens AMM, Van Beukelen P, Scherpbier AJJA. Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers' supervisory skills during clinical rotations. MEDICAL TEACHER 2011; 33:e84-e91. [PMID: 21275538 DOI: 10.3109/0142159x.2011.536277] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
BACKGROUND Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. AIM We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education. METHODS We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes. RESULTS Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10-12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong. CONCLUSIONS The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers' performance during short rotations.
Affiliation(s)
- T B B Boerboom
- Quality Improvement of Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, PO BOX 80163, 3508 TD Utrecht, The Netherlands.