1
Almakadma AS, Fawzy NA, Baqal OJ, Kamada S. Perceptions and attitudes of medical students towards student evaluation of teaching: a cross-sectional study. Med Educ Online 2023;28:2220175. [PMID: 37270796] [DOI: 10.1080/10872981.2023.2220175]
Abstract
BACKGROUND Faculty evaluation surveys, in the form of student evaluations of teaching (SETs), are a widely used tool for assessing faculty teaching. Although SETs are routinely used to gauge teaching effectiveness, relying on them alone for administrative decisions and as an indicator of teaching quality remains controversial. METHODS A 22-item survey assessing demographics, perceptions, and the factors students weigh when evaluating faculty was distributed to medical students at our institution. Statistical analyses were conducted in Microsoft Excel and R using regression analysis and ANOVA. RESULTS The survey received 374 responses, from 191 (51.1%) male and 183 (48.9%) female students. In all, 178 (47.5%) students considered the optimal time for providing faculty evaluations to be after the release of exam results, compared with 127 (33.9%) who preferred evaluating after the exam but before results were released. When asked what happens when a tutor has access to SET data, 273 (72.9%) and 254 (67.9%) students, respectively, believed it would influence the difficulty of the exam and the grading/curving of exam results. Better teaching skills (348, 93%), responsiveness and openness to student feedback and suggestions (317, 84.7%), commitment to class time and schedule (300, 80.1%), and an easier exam (257, 68.6%) were considered important factors for earning a positive evaluation by a considerable proportion of students. Fewer lectures (P < 0.05), fewer slides per lecture (P < 0.01), an easier exam (P < 0.05), and giving students clues about the exam (P < 0.05) were also rated as very important for obtaining a positive tutor evaluation. CONCLUSIONS Institutions ought to keep exploring ways to improve the faculty evaluation process while raising awareness among students about the importance and administrative implications of their feedback.
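Editor's note: as a minimal sketch of the kind of comparison this abstract reports (the authors used Excel and R; their raw responses are not public), the following Python snippet runs a one-way ANOVA on hypothetical Likert-scale importance ratings across three invented student groups. All data and group labels are assumptions for illustration only.

```python
# Hedged sketch: one-way ANOVA on Likert-type survey ratings, analogous to the
# subgroup comparisons in the abstract. All data below are fabricated.
from scipy import stats

# Hypothetical 1-5 importance ratings of "easier exam" by class year
year1 = [4, 5, 3, 4, 5, 4, 3, 5]
year2 = [3, 4, 4, 3, 5, 4, 4, 3]
year3 = [5, 5, 4, 5, 4, 5, 5, 4]

f_stat, p_value = stats.f_oneway(year1, year2, year3)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")  # groups differ if P < 0.05
```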
Affiliation(s)
- Nader A Fawzy, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Omar J Baqal, Department of Internal Medicine, Mayo Clinic Arizona, Phoenix, AZ, USA
- Sudha Kamada, Department of Medical Oncology, All India Institute of Medical Sciences, Bhubaneswar, India
2
Dexter F, Hindman BJ, Thenuwara K. Lack of benefit of adjusting adaptively daily invitations for the evaluation of the quality of anesthesiologists' supervision and nurse anesthetists' work habits. Cureus 2023;15:e49661. [PMID: 38161883] [PMCID: PMC10756328] [DOI: 10.7759/cureus.49661]
Abstract
Introduction Whenever a department implements the evaluation of professionals, a reasonable operational goal is to request as few evaluations as possible. In anesthesiology, evaluations of anesthesiologists (by trainees) and of nurse anesthetists (by anesthesiologists) with valid and psychometrically reliable scales have been made by requesting daily evaluations of the ratee's performance on the immediately preceding day. However, some trainees or nurse anesthetists are paired with the same anesthesiologist for multiple days of the same week, and multiple evaluations from the same rater during a given week may contribute little incremental information beyond one evaluation from that rater for the week. We address whether daily evaluation requests could be adjusted adaptively to be made once per week, potentially reducing the number of evaluation requests substantially. Methods Every day since 1 July 2013 at the studied department, anesthesia residents and fellows have been requested by email to evaluate the quality of supervision provided by anesthesiologists during the preceding day, using the De Oliveira Filho supervision scale. Every day since 29 March 2015, the anesthesiologists have been requested by email to evaluate the work habits of the nurse anesthetists during the preceding day. Both types of evaluations covered interactions throughout the workday together, not individual cases. The criterion for an electronic request to be sent was that the pair worked together for at least one hour that day. The current study used evaluations of anesthesiologists' supervision and nurse anesthetists' work habits through 30 June 2023. Results If every evaluation request had been completed by trainees on the same day it was requested, trainees would have received 13.5% fewer requests to evaluate anesthesiologists (9367/69,420), the maximum possible reduction. If anesthesiologists had done the same for their evaluations of nurse anesthetists, the maximum possible reduction would have been 7.1% fewer requests (4794/67,274). However, because most evaluations were completed after the day of the request (71%, 96,451/136,694), a request would be saved only if the evaluation were completed before or on the day of the next pairing. Consequently, in actual practice, there would have been only 2.4% fewer evaluation requests to trainees and 1.5% fewer to anesthesiologists, both decreases being significantly less than 5% (both adjusted P < 0.0001). Among the trainees' evaluations of faculty anesthesiologists, 1.4% had very low scores, specifically a mean score of less than three out of four (708/41,778). Applying the Bernoulli cumulative sum (CUSUM) method to successive evaluations, 72 flags were raised over the 10 years. Among those, 36% involved more than one rater giving an exceptionally low score during the same week (26/72), and 97% (70/72) had at least one rater contributing more than one score to the recent cumulative sum. Conclusion Conceptually, an evaluation request could be skipped if a rater had already evaluated the ratee during an earlier day working together that week. Our results show that the opportunity for reducing evaluation requests is significantly less than 5%, and weekly requests might also impair monitoring for the detection of sudden major decreases in ratee performance. Thus, the simpler strategy of requesting evaluations daily after working together is warranted.
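Editor's note: the Bernoulli CUSUM used here to flag clusters of very low scores can be sketched generically as below. This is a textbook implementation, not the authors' code; the baseline rate p0 = 0.014 echoes the reported 1.4% of very low scores, while the out-of-control rate p1 and decision limit h are illustrative assumptions.

```python
# Generic Bernoulli CUSUM sketch; p1 and h are assumed values, not the study's.
import math

def bernoulli_cusum(observations, p0=0.014, p1=0.05, h=3.0):
    """Flag runs of 0/1 outcomes (1 = very low score) whose frequency
    drifts from baseline p0 toward the unacceptable rate p1."""
    w1 = math.log(p1 / p0)              # weight added when a very low score occurs
    w0 = math.log((1 - p1) / (1 - p0))  # small negative weight otherwise
    s, flags = 0.0, []
    for i, x in enumerate(observations):
        s = max(0.0, s + (w1 if x else w0))
        if s >= h:                      # decision limit crossed: raise a flag
            flags.append(i)
            s = 0.0                     # reset after signalling
    return flags

# Example: a burst of very low scores near the end triggers one flag
scores = [0] * 60 + [1, 0, 1, 1, 0, 1]
print(bernoulli_cusum(scores))
```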
3
Vassie C, Mulki O, Chu A, Smith SF. A practical guide to fostering teaching excellence in clinical education: experience from the UK. Clin Teach 2020;18:8-13. [PMID: 32588520] [DOI: 10.1111/tct.13188]
Affiliation(s)
- Claire Vassie, Medical Education Research Unit, Faculty of Medicine, Imperial College London, London, UK
- Omar Mulki, Milton Keynes University Hospital NHS Foundation Trust, Milton Keynes, UK
- Ann Chu, Department of Renal Medicine, Hammersmith Hospital, Imperial College NHS Health Trust, London, UK; Faculty Education Office, Faculty of Medicine, Imperial College London, London, UK
- Sue F Smith, Medical Education Research Unit, Faculty of Medicine, Imperial College London, London, UK
4
Aoun Bahous S, Salameh P, Salloum A, Salameh W, Park YS, Tekian A. Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias. BMC Med Educ 2018;18:9. [PMID: 29304800] [PMCID: PMC5756350] [DOI: 10.1186/s12909-017-1116-8]
Abstract
BACKGROUND Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact on validity evidence of increasing response rates through a compulsory approach. METHODS Data included 192 responses obtained voluntarily from 49 third-year students in 2014-2015, and 171 responses obtained compulsorily from 49 students in the first six months of the following year, at one medical school in Lebanon. Evidence supporting internal structure and response process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration. RESULTS Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received a higher rating in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable between the two consecutive years. Testing for non-response bias in the voluntary group showed that females responded more frequently in two clerkships. Testing for authority-induced bias revealed that students might complete the evaluation randomly, without attention to content. CONCLUSIONS While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns about the meaningfulness of the evaluation. Administrators are urged to consider not only response rates but also the representativeness and quality of responses when administering evaluation surveys.
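Editor's note: the headline comparison (56% voluntary vs 100% compulsory response rate, P < 0.001) is a simple two-proportion test. A hand-rolled sketch follows; the invitation counts are reconstructed approximately from the reported rates and are not taken from the paper.

```python
# Hedged sketch of a two-sided two-proportion z-test with pooled standard error.
import math
from scipy.stats import norm

def two_prop_ztest(x1, n1, x2, n2):
    """z statistic and two-sided P value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Approximate, illustrative counts: 192 of ~343 invited surveys returned
# voluntarily (56%) vs 171 of 171 returned compulsorily (100%).
z, p = two_prop_ztest(192, 343, 171, 171)
print(f"z = {z:.2f}, P = {p:.2g}")  # consistent with the reported P < 0.001
```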
Affiliation(s)
- Sola Aoun Bahous, Lebanese American University School of Medicine, Byblos, Lebanon; Lebanese American University Medical Center – Rizk Hospital, May Zahhar Street, Ashrafieh, P.O. Box 11-3288, Beirut, Lebanon
- Pascale Salameh, Lebanese American University School of Pharmacy, Byblos, Lebanon
- Wael Salameh, Lebanese American University School of Medicine, Byblos, Lebanon
- Yoon Soo Park, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Ara Tekian, Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
5
Kost A, Combs H, Smith S, Klein E, Kritek P, Robins L, Cianciolo AT, Butani L, Gigante J, Ramani S. A proposed conceptual framework and investigation of upward feedback receptivity in medical education. Teach Learn Med 2015;27:359-361. [PMID: 26507991] [DOI: 10.1080/10401334.2015.1077134]
Abstract
WGEA 2015 CONFERENCE ABSTRACT (EDITED). Faculty Perceptions of Receiving Feedback From Third-Year Clerkship Students. Amanda Kost, Heidi Combs, Sherilyn Smith, Eileen Klein, Patricia Kritek, and Lynne Robins. PHENOMENON: In addition to giving feedback to third-year clerkship students, some clerkship instructors receive feedback, requested or spontaneous, from students before the clerkship ends. The concept of bidirectional feedback is appealing as a means of fostering a culture of respectful communication and improvement. However, little is known about how teachers perceive this feedback in practice or how it affects the learning environment. APPROACH We performed 24 semistructured 30-minute interviews, with three to seven attending physician faculty members from each of Pediatrics, Internal Medicine, Family Medicine, Surgery, Psychiatry, and Obstetrics and Gynecology, all of whom taught in third-year required clerkships during the 2012-2013 academic year. Questions probed teachers' experience with and attitudes toward receiving student feedback, and prompts were used to elicit stories and obtain participant demographics. Interviews were audio-recorded, transcribed, and entered into Dedoose for qualitative analysis. Researchers read transcripts holistically for meaning, designed a coding template, and then independently coded each transcript. A constant comparative approach and regular meetings ensured consistent coding between research team members. FINDINGS Participants ranged in age from 37 to 74, with 5 to 35 years of teaching experience; 71% were male, and 83% identified as White. In our preliminary analysis, informants reported a range of experience with receiving student feedback before the end of a clerkship, from no experience to having developed mechanisms to regularly request specific feedback about their programs. Most expressed openness to actively soliciting and receiving student feedback on their teaching during the clerkship, although many questioned whether this process was feasible. Actual responses to receiving student feedback were mixed: some reported having received feedback that motivated change, others rejected the feedback they received on the grounds that it lacked validity or was inappropriate, and still others were uncertain how they would react. Faculty expressed a preference for receiving feedback about behaviors and items within their control. INSIGHTS: These findings suggest an opportunity to pilot a structured student feedback mechanism, separate from teacher evaluations, in selected third-year clerkships. Materials should be developed to help faculty solicit, understand, and respond to student feedback, and to help students frame and provide the kinds of feedback that lead to improvements in teaching. Both endeavors have the potential to improve the clinical learning environment during third-year clerkships by cultivating respectful communication and encouraging improvement in teaching efforts.
Affiliation(s)
- Amanda Kost, Department of Family Medicine, University of Washington School of Medicine, Seattle, Washington, USA
- Heidi Combs, Psychiatry and Behavioral Science, University of Washington School of Medicine, Seattle, Washington, USA
- Sherilyn Smith, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, USA
- Eileen Klein, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, USA
- Patricia Kritek, Division of Pulmonary & Critical Care Medicine, University of Washington School of Medicine, Seattle, Washington, USA
- Lynne Robins, Department of Medical Education and Biomedical Informatics, University of Washington School of Medicine, Seattle, Washington, USA
- Anna T Cianciolo, Department of Medical Education, Southern Illinois University School of Medicine, Springfield, Illinois, USA
- Lavjay Butani, Department of Pediatrics, University of California Davis Medical Center, Sacramento, California, USA
- Joseph Gigante, Department of Pediatrics, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
- Subha Ramani, Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
6
Gerbase MW, Germond M, Cerutti B, Vu NV, Baroffio A. How many responses do we need? Using generalizability analysis to estimate minimum necessary response rates for online student evaluations. Teach Learn Med 2015;27:395-403. [PMID: 26507997] [DOI: 10.1080/10401334.2015.1077126]
Abstract
CONSTRUCT: The study compares paper and online ratings of instructional units and, using a generalizability (G) study based on the symmetry principle, analyzes the response rates needed to ensure acceptable precision of the measure when compliance is low. BACKGROUND Students' ratings of teaching contribute to the quality of medical training programs. To date, many schools have replaced pen-and-paper questionnaires with electronic forms, despite the lower response rates consistently reported with the latter. Few studies have examined the effects of low response rates on the reliability and precision of the evaluation measure, and the minimum number of raters to target when response rates are low remains unclear. APPROACH Descriptive data were derived from 799 students' paper and online ratings of 11 preclinical instructional units (PIUs). Reliability was assessed with Cronbach's alpha coefficients. The generalizability method applying the symmetry principle was used to analyze the precision of the measure, with a reference standard error of the mean (SEM) set at 0.10; optimization models were built to estimate minimum response rates. RESULTS Overall, response rates were 74% and 30% (p < .001) and PIU ratings were 3.8 ± 0.5 and 3.6 ± 0.5 (p = .02) for paper and online questionnaires, respectively. Higher SEM levels and significantly larger 95% confidence intervals of PIU rating scores were observed with online evaluations. To keep the SEM within the preset limits of precision, a minimum response rate of 48% was estimated for online formats. CONCLUSIONS The proposed generalizability analysis allowed estimation of the minimum response rate needed to maintain acceptable precision in online evaluations. The effects of response rates on accuracy are discussed.
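Editor's note: the full G-study cannot be reproduced from the abstract, but its core precision logic can be sketched with the classical relation SEM = SD / sqrt(n). Using the reported rating SD of about 0.5 and the preset SEM limit of 0.10, a lower bound on the number of raters follows directly; the class size below is an assumption. Note this simplification ignores the extra variance components a real generalizability analysis separates, which is why the paper's estimate (48%) is higher than this classical bound.

```python
# Hedged sketch: classical lower bound on raters, not the paper's G-study.
import math

def min_raters(sd, sem_target):
    """Smallest n such that SD / sqrt(n) <= sem_target."""
    return math.ceil((sd / sem_target) ** 2)

sd, sem_target = 0.5, 0.10   # rating SD and precision limit from the abstract
class_size = 150             # hypothetical cohort size, for illustration only

n = min_raters(sd, sem_target)
print(f"minimum raters: {n}")                          # 25
print(f"minimum response rate: {n / class_size:.0%}")  # classical lower bound
```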
Affiliation(s)
- Margaret W Gerbase, Michèle Germond, Bernard Cerutti, Nu V Vu, and Anne Baroffio: Unit of Development and Research in Medical Education, University of Geneva Faculty of Medicine, Geneva, Switzerland
7
Aburawi E, McLean M, Shaban S. Evaluation of faculty: are medical students and faculty on the same page? Sultan Qaboos Univ Med J 2014;14:e361-e368. [PMID: 25097772] [PMCID: PMC4117662]
Abstract
OBJECTIVES Student evaluation of individual teachers is important in the quality improvement cycle. The aim of this study was to explore medical student and faculty perceptions of teacher evaluation in the light of dwindling participation in online evaluations. METHODS This study was conducted at the United Arab Emirates University College of Medicine & Health Sciences between September 2010 and June 2011. A 21-item questionnaire was used to investigate learner and faculty perceptions of teacher evaluation in terms of purpose, etiquette, confidentiality and outcome on a five-point Likert scale. RESULTS The questionnaire was completed by 54% of faculty and 23% of students. Faculty and students generally concurred that teachers should be evaluated by students but believed that the purpose of the evaluation should be explained. Despite acknowledging the confidentiality of online evaluation, faculty members were less sure that they would not recognise individual comments. While students perceived that the culture allowed objective evaluation, faculty members were less convinced. Although teachers claimed to take evaluation seriously, with Medical Sciences faculty members in particular indicating that they changed their teaching as a result of feedback, students were unsure whether teachers responded to feedback. CONCLUSION Despite agreement on the value of evaluation, differences between faculty and student perceptions emerged in terms of confidentiality and whether evaluation led to improved practice. Educating both teachers and learners regarding the purpose of evaluation as a transparent process for quality improvement is imperative.
Affiliation(s)
- Elhadi Aburawi, Department of Pediatrics, College of Medicine & Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
- Michelle McLean, Faculty of Health Sciences & Medicine, Bond University, Gold Coast, Queensland, Australia
- Sami Shaban, Department of Medical Education, College of Medicine & Health Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
8
Schönrock-Adema J, Schaub-De Jong MA, Cohen-Schotanus J. The development and validation of a short form of the STERLinG: a practical, valid and reliable tool to evaluate teacher competencies to encourage reflective learning. Med Teach 2013;35:864-866. [PMID: 23862754] [DOI: 10.3109/0142159X.2013.809409]
Abstract
BACKGROUND To optimize response rates, it is important to have brief yet comprehensive instruments. AIMS We developed and validated a short form of an instrument for measuring students' perceptions of teachers' competencies to encourage students' reflective learning in small groups (the STERLinG). METHODS Based on statistical and content criteria, the original 36-item STERLinG was reduced to 15 items: three scales of five items each. This mini-STERLinG was then validated: confirmatory factor analysis was performed and internal consistencies were calculated. RESULTS The instrument was completed by 501 respondents (63%). The original instrument structure was confirmed, with 62.6% of variance explained. Reliabilities were high: 0.91 for the entire mini-STERLinG and 0.87, 0.85, and 0.81 for its subscales. CONCLUSIONS The mini-STERLinG is a feasible, valid, and reliable instrument.
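Editor's note: the reported internal consistencies are Cronbach's alpha coefficients, which can be computed directly from a respondents-by-items score matrix. A generic sketch on fabricated data follows; the mini-STERLinG item responses themselves are not public.

```python
# Hedged sketch of Cronbach's alpha; the data matrix below is fabricated.
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Fabricated 5-point responses from 6 respondents to a 5-item subscale
data = [[4, 5, 4, 4, 5],
        [3, 3, 4, 3, 3],
        [5, 5, 5, 4, 5],
        [2, 3, 2, 3, 2],
        [4, 4, 5, 4, 4],
        [3, 4, 3, 3, 4]]
print(f"alpha = {cronbach_alpha(data):.2f}")
```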
9
Schönrock-Adema J, Lubarsky S, Chalk C, Steinert Y, Cohen-Schotanus J. 'What would my classmates say?' An international study of the prediction-based method of course evaluation. Med Educ 2013;47:453-462. [PMID: 23574058] [DOI: 10.1111/medu.12126]
Abstract
OBJECTIVES Traditional student feedback questionnaires are imperfect course evaluation tools, largely because they generate low response rates and are susceptible to response bias. Preliminary research suggests that prediction-based methods of course evaluation, in which students estimate their peers' opinions rather than provide their own, require significantly fewer respondents to achieve comparable results and are less subject to biasing influences. This international study seeks further support for these findings by investigating: (i) the performance of the prediction-based method, and (ii) its potential for bias. METHODS Participants (210 Year 1 undergraduate medical students at McGill University, Montreal, Quebec, Canada, and 371 Year 1 and 385 Year 3 undergraduate medical students at the University Medical Center Groningen [UMCG], University of Groningen, Groningen, the Netherlands) were randomly assigned to complete course evaluations using either the prediction-based or the traditional opinion-based method. The numbers of respondents required to achieve stable outcomes were determined using an iterative process. Differences between the methods in the number of respondents required were analysed using t-tests. Differences in evaluation outcomes between the methods, and between groups of students stratified by four potentially biasing variables (gender, estimated general level of achievement, expected test result, and satisfaction after examination completion), were analysed using multivariate analysis of variance (MANOVA). RESULTS Overall response rates in the three student cohorts ranged from 70% to 94%. The prediction-based method required significantly fewer respondents than the opinion-based method (averages of 26-28 and 67-79 respondents, respectively) across all samples (p < 0.001), whereas the outcomes achieved were fairly similar. Bias was found in four of 12 opinion-based condition comparisons (three sites, four variables), but in only one comparison in the prediction-based condition. CONCLUSIONS Our study supports previous findings that prediction-based methods require significantly fewer respondents to achieve results comparable with those obtained through traditional course evaluation methods. Moreover, our findings support the hypothesis that prediction-based responses are less subject to bias than traditional opinion-based responses. These findings lend credence to the prediction-based approach as an accurate and efficient method of course evaluation.
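Editor's note: the abstract's "iterative process" for finding how many respondents yield stable outcomes can be approximated by resampling: shuffle respondents, track the running mean, and record the smallest sample size after which the estimate stays within a tolerance of the full-sample mean. This is one plausible reading of the procedure, not the authors' published algorithm; the tolerance and ratings data are assumptions.

```python
# Hedged sketch of a stability-based respondent-count estimate (assumed method).
import random

def respondents_for_stability(ratings, tol=0.1, n_shuffles=1000, seed=0):
    """Average, over random respondent orderings, of the smallest n after
    which every running mean stays within tol of the full-sample mean."""
    rng = random.Random(seed)
    full_mean = sum(ratings) / len(ratings)
    needed = []
    for _ in range(n_shuffles):
        order = ratings[:]
        rng.shuffle(order)
        stable_from, total = 1, 0.0
        for n, r in enumerate(order, start=1):
            total += r
            if abs(total / n - full_mean) > tol:
                stable_from = n + 1   # instability seen: stability starts later
        needed.append(stable_from)
    return sum(needed) / len(needed)

# Fabricated 1-5 course ratings from 80 respondents
gen = random.Random(1)
ratings = [gen.choice([2, 3, 3, 4, 4, 4, 5]) for _ in range(80)]
print(f"respondents needed on average: {respondents_for_stability(ratings):.1f}")
```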
Affiliation(s)
- Johanna Schönrock-Adema, Center for Research and Innovation in Medical Education, University of Groningen and University Medical Centre Groningen, Groningen, the Netherlands
10
McNulty JA, Gruener G, Chandrasekhar A, Espiritu B, Hoyt A, Ensminger D. Are online student evaluations of faculty influenced by the timing of evaluations? Adv Physiol Educ 2010;34:213-216. [PMID: 21098389] [DOI: 10.1152/advan.00079.2010]
Abstract
Student evaluations of faculty are important components of the medical curriculum and of faculty development. To improve the effectiveness and timeliness of student evaluations of faculty in the physiology course, we investigated whether evaluations submitted during the course differed from those submitted after its completion. A secure web-based system was developed to collect student evaluations, comprising numerical rankings (1-5) of faculty performance and a section for comments. The grades that students received in the course were added to the data, which were sorted by time of submission and analyzed using Pearson's correlation and Student's t-test. Only 26% of students elected to submit evaluations before completion of the course, and the average faculty ratings from these evaluations were highly correlated [r(14) = 0.91] with those submitted after completion of the course. Faculty evaluations were also significantly correlated with the previous year's [r(14) = 0.88]. Concurrent evaluators provided more comments, which were statistically longer and subjectively scored as more "substantive." Students who submitted their evaluations during the course and included comments had significantly higher final grades. In conclusion, the numeric ratings that faculty received were not influenced by the timing of student evaluations. However, students who submitted early evaluations tended to be more engaged, as evidenced by their more substantive comments and better performance on exams. The consistency of faculty evaluations from year to year, and between concurrent and end-of-course submissions, suggests that faculty tend not to make significant adjustments in response to student evaluations.
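Editor's note: the headline analysis correlates each faculty member's mean rating from evaluations submitted during the course with the mean from those submitted afterwards; r(14) = 0.91 implies 16 faculty, since df = n - 2. A sketch with fabricated per-faculty means:

```python
# Hedged sketch of the Pearson correlation; all mean ratings are fabricated.
from scipy.stats import pearsonr

# Fabricated mean ratings (1-5) for 16 faculty; the paper's values are not public.
during = [4.2, 3.8, 4.5, 3.1, 4.0, 4.7, 3.5, 4.3,
          3.9, 4.1, 2.9, 4.6, 3.7, 4.4, 3.3, 4.0]
after  = [4.1, 3.9, 4.4, 3.3, 4.1, 4.6, 3.6, 4.2,
          3.8, 4.0, 3.1, 4.5, 3.8, 4.3, 3.4, 4.1]

r, p = pearsonr(during, after)
print(f"r({len(during) - 2}) = {r:.2f}, P = {p:.2g}")  # df = n - 2, as in the abstract
```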
Affiliation(s)
- John A McNulty, Department of Cell and Molecular Physiology, Stritch School of Medicine, Loyola University, Maywood, IL 60153, USA
11
de Oliveira Filho GR, Dal Mago AJ, Garcia JHS, Goldschmidt R. An instrument designed for faculty supervision evaluation by anesthesia residents and its psychometric properties. Anesth Analg 2008;107:1316-1322. [DOI: 10.1213/ane.0b013e318182fbdd]
12
McOwen KS, Kogan JR, Shea JA. Elapsed time between teaching and evaluation: does it matter? Acad Med 2008;83:S29-S32. [PMID: 18820495] [DOI: 10.1097/ACM.0b013e318183e37c]
Abstract
BACKGROUND Web-based course evaluation systems offer the potential advantage of timely evaluations. The authors examined whether the elapsed time between teaching and student evaluation of that teaching affects preclinical courses' quality ratings. METHOD The overall relationship of elapsed time with evaluation rating was explored with regression and ANOVA, with the time between teaching event and evaluation categorized in weeks. Within-teaching-event means and variances in evaluations by elapsed weeks were compared using repeated-measures ANOVA. RESULTS With more elapsed weeks, mean quality ratings increased (P < .001) and variability decreased (P < .001); effect sizes were small (average effect size = 0.06). Trends were similar in the regression analysis and for data aggregated by event. CONCLUSIONS Summaries of event quality are negligibly affected by evaluation timing. Future studies should examine the impact of other features of Web-based evaluation systems.
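Editor's note: at its simplest, the elapsed-time analysis is a regression of evaluation score on weeks elapsed between teaching and submission. A sketch with fabricated data showing the small positive slope pattern the authors report; none of these values come from the study.

```python
# Hedged sketch: simple linear regression of rating on elapsed weeks.
from scipy.stats import linregress

# Fabricated (weeks elapsed, rating) pairs with a weak positive trend,
# mimicking the reported small effect; not the study's data.
weeks   = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
ratings = [3.4, 3.6, 3.5, 3.7, 3.6, 3.8, 3.7, 3.6, 3.8, 3.9, 3.8, 4.0, 3.9, 4.0]

fit = linregress(weeks, ratings)
print(f"slope = {fit.slope:.3f} per week, P = {fit.pvalue:.3f}")
```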
Affiliation(s)
- Katherine S McOwen, University of Pennsylvania School of Medicine, Philadelphia, PA 19104, USA
13
Eva KW. Whither the need for faculty development? Med Educ 2006;40:99-100. [PMID: 16514775] [DOI: 10.1111/j.1365-2929.2005.02386.x]
Affiliation(s)
- Kevin W Eva, Department of Clinical Epidemiology and Biostatistics, Program for Educational Research and Development, McMaster University, Hamilton, Ontario, Canada