1. Fuentes-Cimma J, Sluijsmans D, Riquelme A, Villagran I, Isbej L, Olivares-Labbe MT, Heeneman S. Designing feedback processes in the workplace-based learning of undergraduate health professions education: a scoping review. BMC Med Educ 2024;24:440. PMID: 38654360; PMCID: PMC11036781; DOI: 10.1186/s12909-024-05439-6.
Abstract
BACKGROUND Feedback processes are crucial for learning, guiding improvement, and enhancing performance. In workplace-based learning settings, diverse teaching and assessment activities should be designed and implemented to generate feedback that students, with proper guidance, use to close the gap between current and desired performance levels. Since productive feedback processes rely on observed information about a student's performance, it is imperative to establish structured feedback activities within undergraduate workplace-based learning settings. However, these settings are unpredictable by nature, which can either promote learning or make it challenging to offer structured learning opportunities for students. This scoping review maps the literature on how feedback processes are organized in undergraduate clinical workplace-based learning settings, providing insight into the design and use of feedback. METHODS A scoping review was conducted. Studies were identified from seven databases and ten relevant journals in medical education. The screening process was performed independently in duplicate with the support of the StArt program. Data were organized in a data chart and analyzed using thematic analysis. The feedback loop with a sociocultural perspective was used as a theoretical framework. RESULTS The search yielded 4,877 papers, of which 61 were included in the review. Two themes were identified in the qualitative analysis: (1) the organization of feedback processes in workplace-based learning settings, and (2) sociocultural factors influencing the organization of feedback processes. The literature describes multiple teaching and assessment activities that generate feedback information. Most papers described experiences and perceptions of diverse teaching and assessment feedback activities; few described how feedback processes improve performance. Sociocultural factors such as establishing a feedback culture, enabling stable and trustworthy relationships, and enhancing student feedback agency are crucial for productive feedback processes. CONCLUSIONS This review identified concrete ideas about how feedback could be organized within the clinical workplace to promote feedback processes. The feedback encounter should be organized to allow follow-up of the feedback, i.e., working on the required learning and performance goals on a subsequent occasion. Educational programs should design feedback processes by appropriately planning subsequent tasks and activities. More insight is needed into designing a full-loop feedback process, with specific attention to effective feedforward practices.
Affiliation(s)
- Javiera Fuentes-Cimma
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile.
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands.
- Arnoldo Riquelme
- Centre for Medical and Health Profession Education, Department of Gastroenterology, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Ignacio Villagran
- Department of Health Sciences, Faculty of Medicine, Pontificia Universidad Católica de Chile, Avenida Vicuña Mackenna 4860, Macul, Santiago, Chile
- Lorena Isbej
- School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- School of Dentistry, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sylvia Heeneman
- Department of Pathology, Faculty of Health, Medicine and Health Sciences, Maastricht University, Maastricht, Netherlands
2. D'Aoust R, Slone SE, Russell N, Budhathoki C, Ling C. PRIME-nurse practitioner competency model validation and criterion based OSCE rubric interrater reliability. BMC Med Educ 2024;24:124. PMID: 38326786; PMCID: PMC10851454; DOI: 10.1186/s12909-024-05056-3.
Abstract
The PRIME-NP (Professional-Reporter-Interpreter-Manager-Educator/Evaluation-Nurse Practitioner) Model is adapted from the RIME (Reporter-Interpreter-Manager-Educator) model used in medical education to guide medical student and resident education. The Delphi technique was used to validate the PRIME-NP Model. After two rounds of review by a group of experts in NP curriculum, the model was determined to be valid based on expert consensus; agreement percentages increased from the first round to the second round in all categories. Interrater reliability (IRR) was assessed using intraclass correlation after instrument validation was completed for each of the five levels of the PRIME-NP model. Overall, the IRR of the instrument was found to be acceptable, with some notable exceptions: no variance was noted in professional behaviors at any level, and variance increased in management and educator/evaluator behaviors in higher/later course levels. The PRIME-NP Model and PRIME-NP OSCE Rubric are a valid and reliable instrument to assess NP student progression in objective structured clinical examinations. The instrument has the potential to be adapted for use in other types of health sciences education and settings.
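As a rough illustration of the interrater-reliability statistic named in the abstract, the sketch below computes a one-way random-effects intraclass correlation, ICC(1,1), from scratch. This is not the authors' code, and the ratings are hypothetical; it only shows the shape of the calculation.

```python
# Illustrative sketch (hypothetical data, not the study's analysis code):
# one-way random-effects intraclass correlation, ICC(1,1), via one-way ANOVA.
def icc_oneway(ratings):
    """ratings: one row per subject, one column per rater; returns ICC(1,1)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subjects mean square
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-subjects mean square
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Four hypothetical students, each scored by two raters on a rubric level
scores = [[3, 3], [2, 3], [4, 4], [1, 2]]
print(round(icc_oneway(scores), 3))  # 0.793
```

Larger values indicate that variation comes mostly from real differences between students rather than disagreement between raters.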
Affiliation(s)
- Rita D'Aoust
- Johns Hopkins School of Nursing, Baltimore, MD, USA
- Johns Hopkins School of Medicine, Baltimore, MD, USA
- Sarah E Slone
- Johns Hopkins School of Nursing, Baltimore, MD, USA.
3. Klapheke M, Abrams MP, Cubero M, Zhu X. Aligning Medical Student Workplace-Based Assessments with Entrustable Professional Activities and the RIME Model in a Psychiatry Clerkship. Acad Psychiatry 2022;46:283-288. PMID: 35288865; DOI: 10.1007/s40596-022-01614-3.
Abstract
OBJECTIVE The authors piloted workplace-based assessments of students during the psychiatry clerkship utilizing both entrustable professional activities (EPAs) and the reporter, interpreter, manager, and educator (RIME) model. METHODS After supervising clinicians assessed medical students (N=109) during the psychiatry clerkship using a supervisory scale aligned with both the EPA and RIME models, each student received individualized formative feedback. Students were then surveyed on the usefulness of this feedback, and participating faculty and residents were surveyed on the ease of completion of the supervisory scale. RESULTS Students' mean skill profile suggested they no longer needed direct supervision on EPA1 and EPA6, and mean scores on the other studied EPAs suggested students were well on their way toward performing them without direct supervision. Students' mean RIME scores exceeded the suggested levels for a Reporter to start clerkships, for an Interpreter to start clerkships, and for a Manager to transition to the fourth year. Close to half of the students found the feedback helpful in their development as clinicians, but most felt their performance should not be shared with residency program directors, either before or after the Match. Almost all responding preceptors felt the supervisory ratings were easy to complete. CONCLUSIONS This pilot RIME/EPA framework was a successful step toward more competency-based medical education in the psychiatry clerkship, with relatively little additional faculty time commitment, because it used workplace-based assessments already in place and a supervisory scale based on EPAs and RIME.
Affiliation(s)
- Xiang Zhu
- University of Central Florida, Orlando, FL, USA
4. Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Acad Med 2021;96:S164-S174. PMID: 34406132; DOI: 10.1097/acm.0000000000004366.
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
5. Midclerkship feedback in the surgical clerkship: the "Professionalism, Reporting, Interpreting, Managing, Educating, and Procedural Skills" application utilizing learner self-assessment. Am J Surg 2016;213:212-216. PMID: 27756451; DOI: 10.1016/j.amjsurg.2016.08.001.
Abstract
BACKGROUND The Liaison Committee on Medical Education requires midclerkship formative (low-stakes) feedback to students regarding their clinical skills. Student self-assessment is not commonly incorporated into this evaluation. We sought to determine the feasibility of collecting and comparing student self-assessments with those of their preceptors using an iPad application, with the two sets of ratings jointly created and reviewed as part of a face-to-face midclerkship feedback session. METHODS Using our iPad application for Professionalism, Reporting, Interpreting, Managing, Educating, and Procedural Skills ("PRIMES"), students answer 6 questions based on their self-assessment of performance at midclerkship. Each skill is rated on a 3-point scale (beginning, competent, and strong) with specific behavioral anchors. The faculty preceptors then complete the same PRIMES form during the face-to-face meeting. The application displays a comparison of the 2 sets of ratings, facilitating a discussion to determine individualized learning objectives for the second half of the clerkship. RESULTS A total of 209 student-preceptor pairs completed PRIMES ratings. On average, student and preceptor ratings agreed 38% of the time; agreement was highest for Professionalism (70%) and lowest for Procedural Skills (22%). On average, 60% of student-preceptor ratings did not agree: students rated themselves lower than their preceptors 52% of the time and higher only 8% of the time (a significant difference, P < .05). CONCLUSIONS This study demonstrates the value of using the PRIMES framework to incorporate surgery clerkship students' self-assessment into formative face-to-face midclerkship feedback sessions with their preceptors, with the goal of improving performance during the second half of the clerkship.
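The exact-agreement percentage reported above is a simple statistic; the sketch below shows one way to compute it. The ratings are hypothetical, and this is not the PRIMES application's code.

```python
# Illustrative sketch (hypothetical data): exact percent agreement between
# paired student self-ratings and preceptor ratings on a 3-point scale.
def percent_agreement(student, preceptor):
    """Percentage of paired ratings that match exactly."""
    if len(student) != len(preceptor) or not student:
        raise ValueError("need two non-empty rating lists of equal length")
    matches = sum(1 for s, p in zip(student, preceptor) if s == p)
    return 100.0 * matches / len(student)

# Hypothetical ratings: 1 = beginning, 2 = competent, 3 = strong
students = [2, 2, 3, 1, 2]
preceptors = [2, 3, 3, 2, 2]
print(percent_agreement(students, preceptors))  # 60.0
```

Computing agreement per skill (e.g., Professionalism vs. Procedural Skills) just means calling the same function on each skill's column of ratings.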
6. Grayev A, Ziemlewicz T, Kim D, Romandine A, Robbins J. Residents' perception of a novel end-of-rotation evaluation method. Acad Radiol 2013;20:312-319. PMID: 23452476; DOI: 10.1016/j.acra.2012.11.004.
Abstract
RATIONALE AND OBJECTIVES With the advent of the new core and certifying examinations, a need has arisen to restructure learning and assessment to better reflect the emphasis on continuous learning throughout residency. We developed a multiparametric end-of-rotation (EOR) evaluation tool to assess medical knowledge, oral presentation, and written communication skills, administered to residents at the end of each core subspecialty rotation. The benefit of continual assessment is obvious from a program perspective; the purpose of this article is to evaluate the residents' perception of the process. MATERIALS AND METHODS All residents (n = 31: 28 in postgraduate years 2 through 5 and 3 in postgraduate year 1) participate in the mandatory EOR evaluation as a required component of the residency program. After Institutional Review Board approval, informed consent was obtained from residents wishing to participate in quarterly 16-question online surveys assessing their experience. Each survey consists of 15 questions with Likert-scale responses (1 through 5, strongly disagree to strongly agree) and one free-text answer. Data have been collected quarterly since September 2011. RESULTS Overall, the residents' response has been positive. The new evaluation method is felt to be more meaningful than the traditional competency-based evaluation (mean 3.9, SD 0.9) and is favored by the residents over it (mean 4.0, SD 1.0). However, residents remain neutral regarding preparation for boards and changes in study habits (means 3.6 and 3.6, SDs 0.9 and 1.1, respectively). CONCLUSION Residents rate the EOR evaluation experience positively, although they do not report changes in study habits or increased preparedness for the new certifying examination.
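The survey summaries above are means and standard deviations of 5-point Likert responses. A minimal sketch of that summary step, on hypothetical responses rather than the study's data:

```python
# Illustrative sketch (hypothetical data): summarizing 5-point Likert survey
# responses (1 = strongly disagree ... 5 = strongly agree) as mean and sample SD.
from statistics import mean, stdev

responses = [4, 5, 3, 4, 4, 5, 3]  # hypothetical ratings for one survey item
print(round(mean(responses), 1), round(stdev(responses), 1))  # 4.0 0.8
```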
Affiliation(s)
- Allison Grayev
- Department of Radiology, University of Wisconsin, School of Medicine and Public Health, E3/366 Clinical Science Center, Madison, WI 53792-3252, USA.
7. Ander DS, Wallenstein J, Abramson JL, Click L, Shayne P. Reporter-Interpreter-Manager-Educator (RIME) descriptive ratings as an evaluation tool in an emergency medicine clerkship. J Emerg Med 2011;43:720-727. PMID: 21945508; DOI: 10.1016/j.jemermed.2011.05.069.
Abstract
BACKGROUND Emergency Medicine (EM) clerkships traditionally assess students using numerical ratings of clinical performance. The descriptive ratings of the Reporter, Interpreter, Manager, and Educator (RIME) method have been shown to be valuable in other specialties. OBJECTIVES We hypothesized that RIME descriptive ratings would correlate with clinical performance and examination scores in an EM clerkship, indicating that RIME ratings are a valid measure of performance. METHODS This was a prospective cohort study of an evaluation instrument for 4th-year medical students completing an EM rotation. The study received exempt Institutional Review Board status. EM faculty and residents completed shift evaluation forms including both numerical and RIME ratings, and students completed a final examination. Mean scores for RIME and clinical evaluations were calculated, and linear regression models were used to determine whether RIME ratings predicted clinical evaluation scores or final examination scores. RESULTS Four hundred thirty-nine students who completed the EM clerkship were enrolled in the study. After excluding items with missing data, 2,086 evaluation forms (from 289 students) were available for analysis. There was a clear positive relationship between RIME category and clinical evaluation score (r² = 0.40, p < 0.01). RIME ratings correlated most strongly with patient management skills and least strongly with humanistic qualities; only a very weak correlation was seen between RIME and the final examination. CONCLUSION We found a positive association between RIME and clinical evaluation scores, suggesting that RIME is a valid clinical evaluation instrument. RIME descriptive ratings can be incorporated into EM evaluation instruments and provide useful data related to patient management skills.
Affiliation(s)
- Douglas S Ander
- Department of Emergency Medicine, Emory University School of Medicine, Atlanta, Georgia 30303, USA
8. White CB, Ross PT, Haftel HM. Assessing the assessment: are senior summative OSCEs measuring advanced knowledge, skills, and attitudes? Acad Med 2008;83:1191-1195. PMID: 19202499; DOI: 10.1097/acm.0b013e31818c6f6a.
Abstract
PURPOSE The authors investigated adapting Bloom's and Simpson's taxonomies for the medical student setting and using the adapted taxonomies to determine whether a summative objective structured clinical examination (OSCE) used at their medical school was assessing higher-order knowledge, skills, and attitudes. METHOD Two faculty members (including H.M.H.) adapted the taxonomies and used them to categorize (knowledge, skills, or attitudes) and rank (by level within the taxonomies) every item on every OSCE station checklist. Interrater reliability was moderate to high. RESULTS Although there was a range of domains and levels within and across stations, on average every OSCE station assessed learning behaviors at a lower level than the expectations articulated in the school's goals for medical students' education. CONCLUSIONS The adapted taxonomies were useful for assessing the domains and levels of behaviors measured on the summative OSCE, and they can also be used to modify existing checklists or to create new assessment instruments that meet the expectations articulated in a school's goals for medical students' education.
Affiliation(s)
- Casey B White
- University of Michigan Medical School, Ann Arbor, Michigan 48109-5726, USA.
9. Griffith CH, Wilson JF. The association of student examination performance with faculty and resident ratings using a modified RIME process. J Gen Intern Med 2008;23:1020-1023. PMID: 18612736; PMCID: PMC2517939; DOI: 10.1007/s11606-008-0611-3.
Abstract
BACKGROUND RIME is a descriptive framework in which students and their teachers can gauge progress throughout a clerkship from R (reporter) to I (interpreter) to M (manager) to E (educator). RIME, as described in the literature, is complemented by residents and attending physicians meeting with a clerkship director to discuss individual student progress, with group discussion resulting in assignment of a RIME stage. OBJECTIVE 1) To determine whether a student's RIME rating is associated with end-of-clerkship examination performance; and 2) to determine whose independent RIME rating is most predictive of a student's examination performance: attendings', residents', or interns'. DESIGN Prospective cohort study. PARTICIPANTS Third-year medical students from academic years 2004-2005 and early 2005-2006 at 1 medical school. MEASUREMENTS AND MAIN RESULTS Each attending, resident, and intern independently assessed the student's final RIME stage attained; for the purpose of analysis, R = 1, I = 2, M = 3, and E = 4. Regression analyses were performed with examination scores as dependent variables (the National Board of Medical Examiners [NBME] medicine subject examination and a clinical performance examination [CPE]) and with mean attending, resident, and intern RIME scores as independent variables. For the 122 students, significant predictors of the NBME subject examination score were the resident RIME rating (p = .008) and the intern RIME rating (p = .02); the significant predictor of CPE performance was the resident RIME rating (p = .01). CONCLUSION House staff RIME ratings of students are associated with student performance on written and clinical skills examinations.
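The regression described above fits an examination score against a mean RIME rating. A minimal sketch of one such simple linear regression, using ordinary least squares on hypothetical data (this is not the study's analysis code, and the numbers are invented for illustration):

```python
# Illustrative sketch (hypothetical data): ordinary least-squares fit of an
# exam score on a mean RIME rating (R = 1, I = 2, M = 3, E = 4).
def fit_line(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical mean resident RIME ratings vs. subject-examination scores
rime = [1.5, 2.0, 2.5, 3.0, 3.5]
scores = [62, 68, 74, 80, 86]
a, b = fit_line(rime, scores)
print(a, b)  # 12.0 44.0
```

A positive, significant slope is what the study reports for the resident and intern ratings: higher RIME stages go with higher examination scores.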
10. Hemmer PA, Papp KK, Mechaber AJ, Durning SJ. Evaluation, grading, and use of the RIME vocabulary on internal medicine clerkships: results of a national survey and comparison to other clinical clerkships. Teach Learn Med 2008;20:118-126. PMID: 18444197; DOI: 10.1080/10401330801991287.
Abstract
BACKGROUND Evaluation methods within and across clerkships are rapidly evolving, including greater emphasis on frameworks for descriptive evaluation and direct observation of competence. PURPOSE The purpose of this study is to describe current evaluation methods, use of the Reporter-Interpreter-Manager/Educator (RIME) framework, and grade assignment by internal medicine clerkship directors. METHODS In 2005, the Clerkship Directors in Internal Medicine surveyed its 109 institutional members. Topics included evaluation methods and grade contribution, use of evaluation sessions and/or RIME, and grade assignment (criterion referenced or normative). RESULTS The response rate was 81% (88/109). The evaluation methods were as follows: teachers' evaluations, 93% (64% of grade); National Board of Medical Examiners subject examination, 81% (25% of grade); faculty written exam, 34% (14% of grade); objective structured clinical examinations, 32% (12% of grade); direct observation, 22% (7% of grade). RIME is used by 42% of respondents, and many clerkship directors (43%) meet with teachers to discuss student performance. Criterion-referenced grading is used by 59% and normative grading by 27%. Unsatisfactory grades are given for examination failures (72%), unprofessional behavior (49%), poor clinical performance (42%), and failure to meet requirements (18%). CONCLUSIONS Internal medicine clerkship directors emphasize description and observation of students. RIME and discussions with teachers are becoming commonplace.
Affiliation(s)
- Paul A Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland 20814, USA.
11. Espey E, Nuthalapaty F, Cox S, Katz N, Ogburn T, Peskin T, Goepfert A, Hammoud M, Casey P, Emmons S, Neutens JJ. To the point: medical education review of the RIME method for the evaluation of medical student clinical performance. Am J Obstet Gynecol 2007;197:123-133. PMID: 17689622; DOI: 10.1016/j.ajog.2007.04.006.
Abstract
This article, the sixth in the ongoing To The Point Series produced by the Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee, reviews the Reporter-Interpreter-Manager-Educator (RIME) method for the evaluation of student clinical performance on the obstetrics and gynecology rotation. This article discusses the inherent challenges of descriptive narrative evaluation and the superiority of the RIME method in producing meaningful evaluation of and feedback for students. The use of the method to fulfill Liaison Committee on Medical Education standards and implementation of the method are described.
Affiliation(s)
- Eve Espey
- Department of Obstetrics and Gynecology, University of New Mexico, Albuquerque, NM, USA
12.
Abstract
Threats to the professionalism of medical practice in the United States have resulted in an intense focus by educational organizations on what professionalism is, on how to define it, and how to evaluate it. This essay discusses alternative educational frameworks in which professionalism can be located. As the traditional analytic framework (knowledge, skills, and attitudes) and developmental frameworks are more familiar, emphasis will be placed on a "synthetic" framework that expresses a student's progress as "reporter," "interpreter," and "manager/educator." This "RIME" framework attempts to capture the classic rhythm of observation-reflection-action that is familiar to all scientists and clinicians, and attempts to express in less generic, more behavioral terms how skills, knowledge, and attitudes must all be brought to bear at the same time by a successful student. It is argued that the complexity of professional development can be embraced with simplicity, without being simplistic.
Affiliation(s)
- Louis N Pangaro
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland 20814-4799, USA.
13. Ogburn JAT, Espey EL, Dorin MH, Ming C, Rayburn WF. Obstetrics and gynecology residents as teachers of medical students: predictors of excellence. Am J Obstet Gynecol 2005;193:1831-1834. PMID: 16260244; DOI: 10.1016/j.ajog.2005.07.074.
Abstract
OBJECTIVE The purpose of this study was to assess variables that might predict which intern candidates will become excellent teachers of medical students. STUDY DESIGN This retrospective cohort study compared demographic characteristics, previous work experience, United States Medical Licensing Examination (USMLE) scores, honors on core clerkships, membership in Alpha Omega Alpha, and match list ranking of 43 residents to identify predictors of excellent teaching evaluations during residency. RESULTS Fifteen residents (35%) were identified as excellent teachers. They were more likely to have had previous work experience, to be older, or to be male. They were not more likely to have higher USMLE scores, more honors grades, Alpha Omega Alpha membership, or a higher rank list position. CONCLUSION Work experience, age, and male gender are associated with an increased likelihood of being identified as an excellent teacher by medical students. Programs in which residents have a significant role as teachers of students may consider these factors in the residency selection process.
Affiliation(s)
- Joseph A Tony Ogburn
- Department of Obstetrics and Gynecology, University of New Mexico, Albuquerque, NM 87131-0001, USA.
14. Metheny WP, Espey EL, Bienstock J, Cox SM, Erickson SS, Goepfert AR, Hammoud MM, Hartmann DM, Krueger PM, Neutens JJ, Puscheck E. To the point: medical education reviews evaluation in context: assessing learners, teachers, and training programs. Am J Obstet Gynecol 2005;192:34-37. PMID: 15671999; DOI: 10.1016/j.ajog.2004.07.036.
Abstract
Learners, teachers, and programs all need to be evaluated; this article reviews the purpose of, and current methods for, evaluating all three. Clinical impressions of the learner are yielding increasingly to direct observation and skill assessment. The Reporter, Interpreter, Manager, and Educator (RIME) method offers a unique way of assessing and providing formative feedback to the learner, and learning portfolios help document achievements and provide a collection for self-assessment and growth. Teachers benefit from feedback, especially when it is followed up with consultation. Programs need both quantitative and qualitative data to document performance, and national data gathered locally from exit surveys now exist that facilitate comparison of programs (e.g., clerkships) within and across institutions. The emphasis on institutional accountability makes it critical to directly evaluate learners and their educational programs.