1. Walden D, Rawls M, Santen SA, Feldman M, Vinnikova A, Dow A. Rapid Feedback: Assessing Pre-clinical Teaching in the Era of Online Learning. Medical Science Educator 2022; 32:819-826. [PMID: 35729989] [PMCID: PMC9198414] [DOI: 10.1007/s40670-022-01573-2]
Abstract
INTRODUCTION: Medical schools vary in their approach to providing feedback to faculty. The purpose of this study was to test the effects of rapid student feedback in a course utilizing novel virtual learning methods.
METHODS: Second-year medical students were offered an optional short questionnaire at the end of each class session and asked to provide feedback within 48 hours. At the close of each survey, results were emailed to faculty. After the course, students and faculty were asked to rate the effectiveness of this method. This study did not affect administration of the usual end-of-course summative evaluations.
RESULTS: Ninety-one percent of participating students reported increased engagement in the feedback process, but only 18% of students on average chose to participate. Faculty rated rapid feedback as more actionable than summative feedback (67%), more specific (50%), and more helpful (42%). Some wrote that comments were too granular, and others noted a negative personal emotional response.
CONCLUSION: Rapid feedback engaged students, provided actionable feedback, and increased communication between students and instructors, suggesting that this approach added value. Care must be taken to reduce the student burden and to support the relational aspects of the process.
Affiliation(s)
- Daniel Walden: Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Meagan Rawls: Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Sally A. Santen: Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA; University of Cincinnati College of Medicine, Cincinnati, USA
- Moshe Feldman: Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Anna Vinnikova: Department of Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Alan Dow: Department of Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
2. Constantinou C, Wijnen-Meijer M. Student evaluations of teaching and the development of a comprehensive measure of teaching effectiveness for medical schools. BMC Medical Education 2022; 22:113. [PMID: 35183151] [PMCID: PMC8858452] [DOI: 10.1186/s12909-022-03148-6]
Abstract
The evaluation of courses and faculty is of vital importance in all higher education institutions, including medical schools. Student Evaluations of Teaching (SETs) commonly take the form of anonymous questionnaires, and although they were originally developed to evaluate courses and programmes, over the years they have also been used to measure teaching effectiveness and, subsequently, to guide important decisions about faculty career progression. Nevertheless, certain factors and biases may influence SET ratings, so SETs may not measure teaching effectiveness objectively. Although course and faculty evaluation is well researched in general higher education, there are concerns about applying the same tools to medical programmes: SETs designed for general higher education do not map directly onto the structure of courses and the delivery of the curriculum in medical schools. This review provides an overview of how SETs can be improved at the levels of instrumentation, administration, and interpretation. In addition, the paper argues that by collecting and triangulating data from multiple sources (students, peers, programme administrators, and teachers' own self-assessments) using methods such as peer review, focus groups, and self-evaluation, it is possible to develop a comprehensive evaluation system that provides an effective measure of teaching effectiveness, supports the professional development of medical teachers, and improves the quality of teaching in medical education.
Affiliation(s)
- Marjo Wijnen-Meijer: Technical University of Munich, School of Medicine, TUM Medical Education Center, Ismaninger Straße 22, 81675 Munich, Germany
3. Hwang JE, Kim NJ, Song M, Cui Y, Kim EJ, Park IA, Lee HI, Gong HJ, Kim SY. Individual class evaluation and effective teaching characteristics in integrated curricula. BMC Medical Education 2017; 17:252. [PMID: 29233131] [PMCID: PMC5728067] [DOI: 10.1186/s12909-017-1097-7]
Abstract
BACKGROUND: In an integrated curriculum, multiple instructors take part in a course in the form of team teaching. Accordingly, medical schools strive to manage each course run by numerous instructors. As part of curriculum management, course evaluation is conducted, but a single retrospective course evaluation does not comprehensively capture student perceptions of classes taught by different instructors. This study aimed to demonstrate the need for individual class evaluation and to identify teaching characteristics that instructors should keep in mind when preparing classes.
METHODS: From 2014 to 2015, students at one medical school left comments on evaluation forms after each class; each course was also evaluated at its end. Comments were categorized by connotation (positive or negative) and by subject. Within each subject category, test scores were compared between positively and negatively mentioned classes, using the Mann-Whitney U test to assess group differences. The same method was applied to the course evaluation data.
RESULTS: For course evaluation, a group difference appeared only in the practice/participation category. For individual class evaluation, however, group differences appeared in six categories: difficulty, main points, attitude, media/contents, interest, and materials. That is, test scores for classes mentioned positively in these six domains were significantly higher than those for negatively mentioned classes.
CONCLUSIONS: Individual class evaluation is needed to manage multi-instructor courses in the integrated curricula of medical schools. Based on the students' extensive feedback, we identified teaching characteristics statistically related to academic achievement. School authorities can use these findings to encourage instructors to develop effective teaching characteristics during class preparation.
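To make the comparison in the METHODS concrete, here is a minimal Python sketch of a two-sided Mann-Whitney U test between two hypothetical groups of class test scores; all numbers are invented for illustration and are not data from the study.

```python
# Minimal sketch of the group comparison named in the abstract: a two-sided
# Mann-Whitney U test between classes mentioned positively vs. negatively.
# All scores below are hypothetical.
from scipy.stats import mannwhitneyu

scores_positive = [78.2, 81.5, 74.9, 83.0, 79.4, 76.8]  # positively mentioned classes
scores_negative = [70.1, 68.4, 73.2, 66.9, 71.5]        # negatively mentioned classes

stat, p_value = mannwhitneyu(scores_positive, scores_negative, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```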
Affiliation(s)
- Jung Eun Hwang: Department of Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Na Jin Kim: Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- Meiying Song: Department of Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yinji Cui: Department of Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Eun Ju Kim: Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- In Ae Park: Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- Hye In Lee: Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- Hye Jin Gong: Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- Su Young Kim: Department of Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea; Master Center for Medical Education Support, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
4. Peer review of teaching at the Budapest University of Technology and Economics Faculty of Economic and Social Sciences. International Journal of Quality and Service Sciences 2017. [DOI: 10.1108/ijqss-02-2017-0014]
Abstract
Purpose: This paper describes an internal quality-enhancement system based on peer review and summarizes the first results of its application at the Budapest University of Technology and Economics Faculty of Economic and Social Sciences.
Design/methodology/approach: A peer review framework was developed to evaluate and further develop teaching programs and practices. The questionnaire-based peer review program covered 22 courses and involved almost 100 lecturers. Peer review outcomes are complemented by end-of-semester student course evaluations.
Findings: The results make it possible to map differences between lecturers and courses and to identify correlations between the assessment criteria applied in peer reviewing.
Practical implications: The implemented framework supports individual, faculty, and organizational development, fostering a deeper understanding of how to create quality in teaching programs and processes. The peer review program also contributes to the establishment of a learning community with a growing common understanding of what counts as good quality in business education.
Originality/value: The paper is valuable as a guide for faculty management wishing to implement a peer review framework in their own institution. The novelty of the approach is its focus on semester-long teaching performance, including classroom performance, course outlines, teaching materials, course requirements, and the processes and means of assessing student performance.
5. Müller T, Montano D, Poinstingl H, Dreiling K, Schiekirka-Schwake S, Anders S, Raupach T, von Steinbüchel N. Evaluation of large-group lectures in medicine - development of the SETMED-L (Student Evaluation of Teaching in MEDical Lectures) questionnaire. BMC Medical Education 2017; 17:137. [PMID: 28821257] [PMCID: PMC5563045] [DOI: 10.1186/s12909-017-0970-8]
Abstract
BACKGROUND: The seven categories of the Stanford Faculty Development Program (SFDP) represent a framework for planning and assessing medical teaching. Nevertheless, there is so far no specific evaluation tool for large-group lectures based on these categories. This paper reports the development and psychometric validation of a short German evaluation tool for large-group lectures in medical education, the SETMED-L ('Student Evaluation of Teaching in MEDical Lectures'), based on the SFDP categories.
METHODS: Data were collected at two German medical schools. In Study 1, a full-information factor analysis of the new 14-item questionnaire was performed. In Study 2, following cognitive debriefings and adjustments, a confirmatory factor analysis was performed. The model was tested for invariance across medical schools and student gender. Convergent validity was assessed by comparison with results of the FEVOR questionnaire.
RESULTS: Study 1 (n = 922) yielded a three-factor solution with one major factor (10 items) and two minor factors (2 items each). In Study 2 (n = 2740), this factor structure was confirmed. Scale reliability ranged between α = 0.71 and α = 0.88. Measurement invariance held across student gender but not across medical schools. Convergent validity in the subsample tested (n = 246) yielded acceptable results.
CONCLUSION: The SETMED-L showed satisfactory to very good psychometric characteristics. Its main advantages are its short yet comprehensive form, the integration of the SFDP categories, and its focus on medical education.
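The scale reliabilities above (α = 0.71 to 0.88) are Cronbach's alpha coefficients. The following is a minimal sketch of the standard computation, assuming an (n respondents × k items) rating matrix; the data below are invented, not SETMED-L responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents rating a 4-item subscale on a 1-5 scale.
ratings = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
], dtype=float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # ~0.93 for this toy matrix
```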
Affiliation(s)
- Tjark Müller: Department of Legal Medicine, University Medical Centre Hamburg-Eppendorf, Butenfeld 34, D-22529 Hamburg, Germany; Institute of Medical Psychology and Medical Sociology, Georg-August-University Göttingen, Waldweg 37, D-37075 Göttingen, Germany
- Diego Montano: Institute of Medical Psychology and Medical Sociology, Georg-August-University Göttingen, Waldweg 37, D-37075 Göttingen, Germany
- Herbert Poinstingl: Institute of Medical Psychology and Medical Sociology, Georg-August-University Göttingen, Waldweg 37, D-37075 Göttingen, Germany
- Katharina Dreiling: Department of Cardiology and Pneumology, University Medical Centre Göttingen, Robert-Koch-Straße 40, D-37075 Göttingen, Germany
- Sarah Schiekirka-Schwake: Division of Medical Education Research and Curriculum Development, Göttingen University Medical Centre, Robert-Koch-Straße 40, D-37075 Göttingen, Germany
- Sven Anders: Department of Legal Medicine, University Medical Centre Hamburg-Eppendorf, Butenfeld 34, D-22529 Hamburg, Germany
- Tobias Raupach: Department of Cardiology and Pneumology, University Medical Centre Göttingen, Robert-Koch-Straße 40, D-37075 Göttingen, Germany; Division of Medical Education Research and Curriculum Development, Göttingen University Medical Centre, Robert-Koch-Straße 40, D-37075 Göttingen, Germany
- Nicole von Steinbüchel: Institute of Medical Psychology and Medical Sociology, Georg-August-University Göttingen, Waldweg 37, D-37075 Göttingen, Germany
6. Chae SJ, Kim M, Chang KH, Chung YS. Potential bias factors that affect the course evaluation of students in preclinical courses. Korean Journal of Medical Education 2017; 29:73-80. [PMID: 28597870] [PMCID: PMC5465435] [DOI: 10.3946/kjme.2017.54]
Abstract
PURPOSE: We aimed to identify which potential bias factors affect students' overall course evaluations and which factors should be considered in the curriculum evaluation systems of medical schools.
METHODS: This study analyzed students' ratings of preclinical instruction at the Ajou University School of Medicine. Ratings were collected from 41 first-year and 45 second-year medical students.
RESULTS: Rating scores differed significantly by year of study. Second-year students rated learning difficulty, learning amount, student assessment, and teacher preparation significantly higher than first-year students did (p < 0.05). The analysis revealed that student assessment predicted the ratings of first-year students, whereas teacher preparation predicted the ratings of second-year students.
CONCLUSION: We found significant interactions between year of study and students' rating results, confirming that the instructional factors driving medical students' satisfaction differ with the characteristics of the courses. These results may be an important resource for evaluating preclinical curricula.
Affiliation(s)
- Su Jin Chae: Department of Medical Humanities & Social Medicine, Ajou University School of Medicine, Suwon, Korea; Office of Medical Education, Ajou University School of Medicine, Suwon, Korea
- Miran Kim: Office of Medical Education, Ajou University School of Medicine, Suwon, Korea; Department of Obstetrics & Gynecology, Ajou University School of Medicine, Suwon, Korea
- Ki Hong Chang: Office of Medical Education, Ajou University School of Medicine, Suwon, Korea
- Yoon-Sok Chung: Office of Medical Education, Ajou University School of Medicine, Suwon, Korea; Department of Endocrinology and Metabolism, Ajou University School of Medicine, Suwon, Korea
7. Gerbase MW, Germond M, Cerutti B, Vu NV, Baroffio A. How Many Responses Do We Need? Using Generalizability Analysis to Estimate Minimum Necessary Response Rates for Online Student Evaluations. Teaching and Learning in Medicine 2015; 27:395-403. [PMID: 26507997] [DOI: 10.1080/10401334.2015.1077126]
Abstract
CONSTRUCT: The study compares paper and online ratings of instructional units and uses a generalizability (G-study) analysis based on the symmetry principle to estimate the response rates needed to ensure acceptable precision of the measure when compliance is low.
BACKGROUND: Students' ratings of teaching contribute to the quality of medical training programs. Many schools have replaced pen-and-paper questionnaires with electronic forms, despite the lower response rates consistently reported with the latter. Few studies have examined the effects of low response rates on the reliability and precision of the evaluation measure, and the minimum number of raters to target when response rates are low remains unclear.
APPROACH: Descriptive data were derived from 799 students' paper and online ratings of 11 preclinical instructional units (PIUs). Reliability was assessed with Cronbach's alpha coefficients. The generalizability method applying the symmetry principle was used to analyze the precision of the measure, with a reference standard error of the mean (SEM) set at 0.10; optimization models were built to estimate minimum response rates.
RESULTS: Overall response rates were 74% for paper and 30% for online questionnaires (p < .001), and PIU ratings were 3.8 ± 0.5 and 3.6 ± 0.5, respectively (p = .02). Higher SEM levels and significantly wider 95% confidence intervals of PIU rating scores were observed with online evaluations. To keep the SEM within the preset limits of precision, a minimum response rate of 48% was estimated for online formats.
CONCLUSIONS: The proposed generalizability analysis allowed estimation of the minimum response rate needed to maintain acceptable precision in online evaluations. The effects of response rates on accuracy are discussed.
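To make the precision logic concrete: the classical standard error of a mean rating shrinks with the square root of the number of raters, so a target SEM implies a minimum number of respondents. The sketch below shows only this simplified relationship, with invented inputs; the paper's 48% estimate comes from a fuller G-study that partitions additional variance components, so this naive formula understates the real requirement.

```python
import math

def min_raters_for_sem(sd: float, target_sem: float) -> int:
    """Smallest n such that SEM = sd / sqrt(n) <= target_sem."""
    return math.ceil((sd / target_sem) ** 2)

# Hypothetical inputs: rating SD of 0.5 (the SD reported for PIU ratings) and
# the paper's precision threshold SEM = 0.10.
print(min_raters_for_sem(sd=0.5, target_sem=0.10))  # -> 25 raters minimum
```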
Affiliation(s)
- Margaret W Gerbase: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
- Michèle Germond: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
- Bernard Cerutti: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
- Nu V Vu: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
- Anne Baroffio: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
8. Nation JG, Carmichael E, Fidler H, Violato C. The development of an instrument to assess clinical teaching with linkage to CanMEDS roles: A psychometric analysis. Medical Teacher 2011; 33:e290-6. [PMID: 21609164] [DOI: 10.3109/0142159x.2011.565825]
Abstract
BACKGROUND: Assessment of clinical teaching by learners is of value to teachers, department heads, and program directors, and must be comprehensive and feasible.
AIMS: To review published evaluation instruments with psychometric evaluations, and to develop and psychometrically evaluate an instrument for assessing clinical teaching with linkages to the CanMEDS roles.
METHOD: We developed a 19-item questionnaire to reflect 10 domains relevant to teaching and the CanMEDS roles. A total of 317 medical learners assessed 170 instructors: 14 (4.4%) clinical clerks, 229 (72.3%) residents, and 53 (16.7%) fellows; 21 (6.6%) did not specify their position.
RESULTS: A mean of eight raters assessed each instructor. The internal consistency reliability of the 19-item instrument was Cronbach's α = 0.95, and the generalizability analysis indicated that the raters achieved a generalizability coefficient (Ep²) of 0.95. Factor analysis yielded three factors that together accounted for 67.97% of the total variance; with the variance each accounts for and its internal consistency reliability, they are: teaching skills (variance = 53.25%; Cronbach's α = 0.92), patient interaction (variance = 8.56%; Cronbach's α = 0.91), and professionalism (variance = 6.16%; Cronbach's α = 0.86). The three factors are intercorrelated (correlations = 0.48, 0.58, 0.46; p < 0.01).
CONCLUSION: It is feasible to assess clinical teaching with this 19-item instrument, which has demonstrated evidence of both validity and reliability.
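As a hedged illustration of the factor-analytic step reported above (three factors extracted from a 19-item instrument), the sketch below fits an exploratory factor model to simulated ratings and approximates each factor's share of total variance from the squared loadings. It is not the authors' analysis pipeline, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_raters, n_items, n_factors = 300, 19, 3

# Simulate item ratings driven by three latent factors plus noise.
latent = rng.normal(size=(n_raters, n_factors))
loadings = rng.normal(scale=0.8, size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_raters, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)

# Rough share of total variance per factor: sum of squared loadings divided by
# the total observed variance (an analogue of the 67.97% reported above).
ssq_loadings = (fa.components_ ** 2).sum(axis=1)
total_variance = responses.var(axis=0, ddof=1).sum()
print(np.round(ssq_loadings / total_variance, 3))
```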
Affiliation(s)
- Jill G Nation: Department of Obstetrics and Gynecology, Faculty of Medicine, University of Calgary, Calgary, AB T2N 4N2, Canada
9. Are Students Learning What Faculty Are Intending to Teach? Journal of Surgical Research 2008; 147:225-8. [DOI: 10.1016/j.jss.2008.03.022]
10. Silber C, Novielli K, Paskin D, Brigham T, Kairys J, Kane G, Veloski J. Use of critical incidents to develop a rating form for resident evaluation of faculty teaching. Medical Education 2006; 40:1201-8. [PMID: 17118114] [DOI: 10.1111/j.1365-2929.2006.02631.x]
Abstract
CONTEXT: Monitoring the teaching effectiveness of attending physicians is important to enhancing the quality of graduate medical education.
METHODS: We used a critical incident technique with 35 residents representing a cross-section of programmes in a teaching hospital to develop a 23-item rating form. We obtained ratings of 11 attending physicians in internal medicine and general surgery from 54 residents. We performed linear and logistic regression analyses to relate the items on the form to the residents' overall ratings of the attending physicians and to the programme directors' ratings of the attending physicians.
RESULTS: The residents rated the attending physicians highly in most areas, but lower on provision of feedback, clarity of written communication, and cost-effectiveness in making clinical decisions. When we used the residents' overall ratings as the criterion, the most important aspects of attending physicians' teaching were clarity of written communication, cost-effectiveness, commitment of time and energy, and whether the resident would refer a family member or friend to the physician. When we used the programme directors' ratings as the criterion, the additional important aspects of performance were concern for the residents' professional well-being, knowledge of the literature, and the delivery of clear verbal and written communication.
CONCLUSIONS: The critical incident technique can be used to develop an instrument that demonstrates content and construct validity. We found that residents consider commitment of time to teaching and clinical effectiveness to be the most important dimensions of faculty teaching. Other important dimensions include written and verbal communication, cost-effectiveness, and concern for residents' professional development.
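To sketch the kind of regression analysis described above, the following example fits a logistic regression of a dichotomized overall rating on item-level ratings and inspects the largest coefficients. The data are invented (only the sizes echo the study's 54 residents and 23 items); this is an illustration of the general technique, not the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_residents, n_items = 54, 23  # sizes echo the study's 54 raters and 23 items

# Hypothetical item-level ratings on a 1-5 scale.
item_ratings = rng.uniform(1, 5, size=(n_residents, n_items))

# Simulate a "high overall rating" outcome driven mainly by items 0 and 4.
logit = 1.2 * item_ratings[:, 0] + 0.8 * item_ratings[:, 4] - 6.0
overall_high = (rng.random(n_residents) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(item_ratings, overall_high)

# Items with the largest absolute coefficients are most strongly associated
# with a high overall rating of the attending physician.
top_items = np.argsort(-np.abs(model.coef_[0]))[:3]
print("most predictive items (by index):", top_items)
```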
Affiliation(s)
- Cynthia Silber: Jefferson Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania 19107, USA