1
Lewis SK, Nolan NS, Zickuhr L. Frontline assessors' opinions about grading committees in a medicine clerkship. BMC Medical Education 2024; 24:620. [PMID: 38840190] [PMCID: PMC11151467] [DOI: 10.1186/s12909-024-05604-x]
Abstract
BACKGROUND Collective decision-making by grading committees has been proposed as a strategy to improve the fairness and consistency of grading and summative assessment compared to individual evaluations. In the 2020-2021 academic year, Washington University School of Medicine in St. Louis (WUSM) instituted grading committees in the assessment of third-year medical students on core clerkships, including the Internal Medicine clerkship. We explored how frontline assessors perceive the role of grading committees in the Internal Medicine core clerkship at WUSM and sought to identify challenges that could be addressed in assessor development initiatives. METHODS We conducted four semi-structured focus group interviews with resident (n = 6) and faculty (n = 17) volunteers from inpatient and outpatient Internal Medicine clerkship rotations. Transcripts were analyzed using thematic analysis. RESULTS Participants felt that the transition to a grading committee had benefits and drawbacks for both assessors and students. Grading committees were thought to improve grading fairness and reduce pressure on assessors. However, some participants perceived a loss of responsibility in students' grading. Furthermore, assessors recognized persistent challenges in communicating students' performance via assessment forms and misunderstandings about the new grading process. Interviewees identified a need for more training in formal assessment; however, there was no universally preferred training modality. CONCLUSIONS Frontline assessors view the switch from individual graders to a grading committee as beneficial due to a perceived reduction of bias and improvement in grading fairness; however, they report ongoing challenges in the utilization of assessment tools and incomplete understanding of the grading and assessment process.
Affiliation(s)
- Sophia K Lewis
- Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA.
- Nathanial S Nolan
- Division of Infectious Disease, VA St Louis Health Care System, St. Louis, MO, USA
- Division of Infectious Disease, Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
- Lisa Zickuhr
- Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
- Division of Rheumatology, Department of Medicine, Washington University School of Medicine, St. Louis, MO, USA
2
Ekpenyong A, Becker K. What resources do clinical competency committees (CCCs) require to do their work? A pilot study comparing CCCs across specialties. Medical Teacher 2021; 43:86-92. [PMID: 32976733] [DOI: 10.1080/0142159x.2020.1817878]
Abstract
PURPOSE Although a growing literature describes how clinical competency committees (CCCs) make decisions about trainees' clinical performance, little is known about the resources these committees need to perform their work. In this pilot study, we examined key characteristics of CCC processes across generalist and surgical specialties. This study intended to clarify topic areas for further investigation. METHODS A cross-sectional web-based survey of CCC chairpersons at two institutions was conducted in 2017. Survey items were designed to describe not only CCC work, including types of assessment data used and time spent discussing learners, but also resource needs such as faculty development, institutional support, and protected time for members. RESULTS The response rate was 59% (16/27). Only 44% offered faculty development to members. There was strong support for the institution to assist with faculty development for CCC members (81.25%), workshops for program coordinators (87.5%) and optimizing residency management software to organize assessment data (81.25%). Most respondents did not receive protected time for their committee work (93.75%). CONCLUSIONS Further studies are needed to elucidate whether CCC work varies across specialties and the associated committee resource needs. There may be opportunities for institutions to assist CCCs with resources across specialties.
Affiliation(s)
- Andem Ekpenyong
- Department of Medicine, Rush University Medical Center, Chicago, IL, USA
- Kimberly Becker
- Graduate Medical Education Department, University of North Dakota School of Medicine and Health Sciences, Grand Forks, ND, USA
3
Frank AK, O'Sullivan P, Mills LM, Muller-Juge V, Hauer KE. Clerkship Grading Committees: the Impact of Group Decision-Making for Clerkship Grading. J Gen Intern Med 2019; 34:669-676. [PMID: 30993615] [PMCID: PMC6502934] [DOI: 10.1007/s11606-019-04879-x]
Abstract
BACKGROUND Faculty and students debate the fairness and accuracy of medical student clerkship grades. Group decision-making is a potential strategy to improve grading. OBJECTIVE To explore how one school's grading committee members integrate assessment data to inform grade decisions and to identify the committees' benefits and challenges. DESIGN This qualitative study used semi-structured interviews with grading committee chairs and members conducted between November 2017 and March 2018. PARTICIPANTS Participants included the eight core clerkship directors, who chaired their grading committees. We randomly selected other committee members to invite, for a maximum of three interviews per clerkship. APPROACH Interviews were recorded, transcribed, and analyzed using inductive content analysis. KEY RESULTS We interviewed 17 committee members. Within and across specialties, committee members had distinct approaches to prioritizing and synthesizing assessment data. Participants expressed concerns about the quality of assessments, necessitating careful scrutiny of language, assessor identity, and other contextual factors. Committee members were concerned about how unconscious bias might impact assessors, but they felt minimally impacted at the committee level. When committee members knew students personally, they felt tension about how to use the information appropriately. Participants described high agreement within their committees; debate was more common when site directors reviewed students' files from other sites prior to meeting. Participants reported multiple committee benefits including faculty development and fulfillment, as well as improved grading consistency, fairness, and transparency. Groupthink and a passive approach to bias emerged as the two main threats to optimal group decision-making. 
CONCLUSIONS Grading committee members view their practices as advantageous over individual grading, but they feel limited in their ability to address grading fairness and accuracy. Recommendations and support may help committees broaden their scope to address these aspirations.
Affiliation(s)
- Annabel K Frank
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Patricia O'Sullivan
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Lynnea M Mills
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Virginie Muller-Juge
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Karen E Hauer
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
4
Black S, Capdeville M, Augoustides JGT, Nelson EW, Patel PA, Feinman JW, Gordon EK, Lockman JL, Yanofsky SD. The Clinical Competency Committee in Adult Cardiothoracic Anesthesiology-Perspectives From Program Directors Around the United States. J Cardiothorac Vasc Anesth 2019; 33:1819-1827. [PMID: 30679070] [DOI: 10.1053/j.jvca.2019.01.001]
Abstract
The clinical competency committee offers a fellowship program a structured approach to assessing the clinical performance of each trainee in a comprehensive fashion. This special article examines the structure and function of this important committee in detail and discusses strategies for its optimal functioning as a way to enhance the overall quality of the fellowship program.
Affiliation(s)
- Stephanie Black
- Anesthesiology and Critical Care, Children's Hospital of Philadelphia, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Michelle Capdeville
- Department of Cardiothoracic Anesthesiology, Cleveland Clinic Lerner College of Medicine, Cleveland, OH
- John G T Augoustides
- Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Eric W Nelson
- Department of Anesthesia and Perioperative Medicine, Medical University of South Carolina, Charleston, SC
- Prakash A Patel
- Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Jared W Feinman
- Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Emily K Gordon
- Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Justin L Lockman
- Anesthesiology and Critical Care, Children's Hospital of Philadelphia, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Samuel D Yanofsky
- Children's Hospital of Los Angeles, Keck School of Medicine, University of Southern California, Los Angeles, CA
5
Hemmer PA, Kelly WF. We need to talk: clinical competency committees in the key of c(onversation). Perspectives on Medical Education 2017; 6:141-143. [PMID: 28536965] [PMCID: PMC5466573] [DOI: 10.1007/s40037-017-0360-2]
Affiliation(s)
- Paul A Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA.
- William F Kelly
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
6
Hemann BA, Durning SJ, Kelly WF, Dong T, Pangaro LN, Hemmer PA. Referral for competency committee review for poor performance on the internal medicine clerkship is associated with poor performance in internship. Mil Med 2016; 180:71-6. [PMID: 25850130] [DOI: 10.7205/milmed-d-14-00575]
Abstract
PURPOSE To determine how students who are referred to a competency committee for concern over performance, and ultimately judged not to require remediation, perform during internship. METHODS Uniformed Services University of the Health Sciences' students who graduated between 2007 and 2011 were included in this study. We compared the performance during internship of three groups: students who were referred to the internal medicine competency committee for review who met passing criterion, students who were reviewed by the internal medicine competency committee who were determined not to have passed the clerkship and were prescribed remediation, and students who were never reviewed by this competency committee. Program Director survey results and United States Medical Licensing Examination (USMLE) Step 3 examination results were used as the outcomes of interest. RESULTS The overall survey response rate for this 5-year cohort was 81% (689/853). 102 students were referred to this competency committee for review. 63/102 students were reviewed by this competency committee, given passing grades in the internal medicine clerkship, and were not required to do additional remediation. 39/102 students were given less than passing grades by this competency committee and required to perform additional clinical work in the department of medicine to remediate their performance. 751 students were never presented to this competency committee. Compared to students who were never presented for review, the group of reviewed students who did not require remediation was 5.6 times more likely to receive low internship survey ratings in the realm of professionalism, 8.6 times more likely to receive low ratings in the domain of medical expertise, and had a higher rate of USMLE Step 3 failure (9.4% vs. 2.8%). 
When comparing the reviewed group to students who were reviewed and also required remediation, the only significant difference between the groups was in professionalism ratings, with 50% of the remediation group garnering low ratings compared to 18% of the reviewed group. CONCLUSIONS Students who are referred to a committee for review following completion of their internal medicine clerkship are more likely to receive poor ratings in internship and to fail USMLE Step 3 compared to students whose performance in the medicine clerkship does not trigger a committee review. These findings provide validity evidence for our competency committee review in that the students identified as requiring further clinical work had significantly higher rates of poor ratings in professionalism than students who were reviewed by the competency committee but not required to remediate. Additionally, students reviewed but not required to remediate were nonetheless at risk of low internship ratings, suggesting that these students might need some intervention prior to graduation.
Affiliation(s)
- Brian A Hemann
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Steven J Durning
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- William F Kelly
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Ting Dong
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Louis N Pangaro
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Paul A Hemmer
- F. Edward Hébert Uniformed Services University of Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
7
Durning SJ, Dong T, Hemmer PA, Gilliland WR, Cruess DF, Boulet JR, Pangaro LN. Are Commonly Used Premedical School or Medical School Measures Associated With Board Certification? Mil Med 2015; 180:18-23. [DOI: 10.7205/milmed-d-14-00569]
Abstract
Purpose: To determine if there is an association between several commonly obtained premedical school and medical school measures and board certification performance. We specifically included measures from our institution for which we have predictive validity evidence into the internship year. We hypothesized that board certification would be most likely to be associated with clinical measures of performance during medical school, and with scores on standardized tests, whether before or during medical school. Methods: Achieving board certification in an American Board of Medical Specialties specialty was used as our outcome measure for a 7-year cohort of graduates (1995–2002). Age at matriculation, Medical College Admissions Test (MCAT) score, undergraduate college grade point average (GPA), undergraduate college science GPA, Uniformed Services University (USU) cumulative GPA, USU preclerkship GPA, USU clerkship year GPA, departmental competency committee evaluation, Internal Medicine (IM) clerkship clinical performance rating (points), IM total clerkship points, history of Student Promotion Committee review, and United States Medical Licensing Examination (USMLE) Step 1 score and USMLE Step 2 clinical knowledge score were associated with this outcome. Results: Ninety-three of 1,155 graduates were not certified, resulting in an average rate of board certification of 91.9% for the study cohort. Significant small correlations were found between board certification and IM clerkship points (r = 0.117), IM clerkship grade (r = 0.108), clerkship year GPA (r = 0.078), undergraduate college science GPA (r = 0.072), preclerkship GPA and medical school GPA (r = 0.068 for both), USMLE Step 1 (r = 0.066), undergraduate college total GPA (r = 0.062), and age at matriculation (r = −0.061). 
In comparing the two groups (board certified and not board certified cohorts), significant differences were seen for all included variables with the exception of MCAT and USMLE Step 2 clinical knowledge scores. Taken together, the variables explained 4.1% of the variance in board certification by logistic regression. Conclusions: This investigation provides some additional validity evidence that the measures collected for student evaluation before and during medical school are warranted.
Affiliation(s)
- Steven J. Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Ting Dong
- Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Paul A. Hemmer
- Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- William R. Gilliland
- F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- David F. Cruess
- Department of Preventive Medicine and Biometrics, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- John R. Boulet
- Foundation for Advancement of International Medical Education and Research (FAIMER), 3634 Market Street, Philadelphia, PA 19104
- Louis N. Pangaro
- Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
8
Hemann BA, Durning SJ, Kelly WF, Dong T, Pangaro LN, Hemmer PA. The Association of Students Requiring Remediation in the Internal Medicine Clerkship With Poor Performance During Internship. Mil Med 2015; 180:47-53. [DOI: 10.7205/milmed-d-14-00567]
Abstract
Purpose: To determine whether the Uniformed Services University (USU) system of workplace performance assessment for students in the internal medicine clerkship at the USU continues to be a sensitive predictor of subsequent poor performance during internship, when compared with assessments in other USU third year clerkships. Method: Utilizing Program Director survey results from 2007 through 2011 and U.S. Medical Licensing Examination (USMLE) Step 3 examination results as the outcomes of interest, we compared performance during internship for students who had less than passing performance in the internal medicine clerkship and required remediation, against students whose performance in the internal medicine clerkship was successful. We further analyzed internship ratings for students who received less than passing grades during the same time period on other third year clerkships, such as general surgery, pediatrics, obstetrics and gynecology, family medicine, and psychiatry, to evaluate whether poor performance on other individual clerkships was associated with future poor performance at the internship level. Results for this recent cohort of graduates were compared with previously published findings. Results: The overall survey response rate for this 5-year cohort was 81% (689/853). Students who received a less than passing grade in the internal medicine clerkship and required further remediation were 4.5 times more likely to be given poor ratings in the domain of medical expertise and 18.7 times more likely to demonstrate poor professionalism during internship. Further, students requiring internal medicine remediation were 8.5 times more likely to fail USMLE Step 3. No other individual clerkship showed any statistically significant associations with performance at the intern level. On the other hand, 40% of students who successfully remediated and did graduate were not identified during internship as having poor performance. 
Conclusions: Unsuccessful clinical performance which requires remediation in the third year internal medicine clerkship at Uniformed Services University of the Health Sciences continues to be strongly associated with poor performance at the internship level. No significant associations existed between any of the other clerkships and poor performance during internship and Step 3 failure. The strength of this association with the internal medicine clerkship is most likely because of an increased level of sensitivity in detecting poor performance.
Affiliation(s)
- Brian A. Hemann
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Steven J. Durning
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- William F. Kelly
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Ting Dong
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Louis N. Pangaro
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
- Paul A. Hemmer
- Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814
10
Hanson JL, Rosenberg AA, Lane JL. Narrative descriptions should replace grades and numerical ratings for clinical performance in medical education in the United States. Front Psychol 2013; 4:668. [PMID: 24348433] [PMCID: PMC3836691] [DOI: 10.3389/fpsyg.2013.00668]
Abstract
Background: In medical education, evaluation of clinical performance is based almost universally on rating scales for defined aspects of performance and scores on examinations and checklists. Unfortunately, scores and grades do not capture progress and competence among learners in the complex tasks and roles required to practice medicine. While the literature suggests serious problems with the validity and reliability of ratings of clinical performance based on numerical scores, the critical issue is not that judgments about what is observed vary from rater to rater but that these judgments are lost when translated into numbers on a scale. As the Next Accreditation System of the Accreditation Council for Graduate Medical Education (ACGME) takes effect, medical educators have an opportunity to create new processes of evaluation to document and facilitate progress of medical learners in the required areas of competence. Proposal and initial experience: Narrative descriptions of learner performance in the clinical environment, gathered using a framework for observation that builds a shared understanding of competence among the faculty, promise to provide meaningful qualitative data closely linked to the work of physicians. With descriptions grouped in categories and matched to milestones, core faculty can place each learner along the milestones' continua of progress. This provides the foundation for meaningful feedback to facilitate the progress of each learner as well as documentation of progress toward competence. Implications: This narrative evaluation system addresses educational needs as well as the goals of the Next Accreditation System for explicitly documented progress. Educators at other levels of education and in other professions experience similar needs for authentic assessment and, with meaningful frameworks that describe roles and tasks, may also find useful a system built on descriptions of learner performance in actual work settings. 
Conclusions: We must place medical learning and assessment in the contexts and domains in which learners do clinical work. The approach proposed here for gathering qualitative performance data in different contexts and domains is one step along the road to moving learners toward competence and mastery.
Affiliation(s)
- Janice L Hanson
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Adam A Rosenberg
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- J Lindsey Lane
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
11
Wilkinson TJ, Tweed MJ, Egan TG, Ali AN, McKenzie JM, Moore M, Rudland JR. Joining the dots: conditional pass and programmatic assessment enhances recognition of problems with professionalism and factors hampering student progress. BMC Medical Education 2011; 11:29. [PMID: 21649925] [PMCID: PMC3121726] [DOI: 10.1186/1472-6920-11-29]
Abstract
BACKGROUND Programmatic assessment that looks across a whole year may contribute to better decisions compared with those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system to handle student assessment results that is aligned not only with learning and remediation, but also with defensibility. The key components are standards based assessments, use of "Conditional Pass", and regular progress meetings. METHODS The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes alongside any interventions and outcomes for the students concerned. The likelihoods of passing the year according to type of problem were compared before and after phasing in of the new assessment system. RESULTS The new system was phased in over four years. In the fourth year of implementation 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis for 1516 results available from 447 students revealed the odds ratio (95% confidence intervals) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2) p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3) p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2) NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20 or 4.5% compared with four or 1.1% under the previous system (p < 0.01)). CONCLUSIONS The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly it has helped detect and act on some of the more difficult areas to assess such as professionalism.
Affiliation(s)
- Tim J Wilkinson
- University of Otago, Christchurch, C/- The Princess Margaret Hospital, PO Box 800, Christchurch, New Zealand
- Mike J Tweed
- Medical Education Unit, University of Otago, Wellington, PO Box 7343, Wellington 6242, New Zealand
- Tony G Egan
- Faculty Education Unit, Faculty of Medicine, University of Otago, PO Box 56, Dunedin 9054, New Zealand
- Anthony N Ali
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- Jan M McKenzie
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- MaryLeigh Moore
- Medical Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch 8140, New Zealand
- Joy R Rudland
- Faculty Education Unit, Faculty of Medicine, University of Otago, PO Box 56, Dunedin 9054, New Zealand
12
Katz ED, Dahms R, Sadosty AT, Stahmer SA, Goyal D. Guiding principles for resident remediation: recommendations of the CORD remediation task force. Acad Emerg Med 2010; 17 Suppl 2:S95-103. [PMID: 21199091] [DOI: 10.1111/j.1553-2712.2010.00881.x]
Abstract
Remediation of residents is a common problem and requires organized, goal-directed efforts to solve. The Council of Emergency Medicine Residency Directors (CORD) has created a task force to identify best practices for remediation and to develop guidelines for resident remediation. Faculty members of CORD volunteered to participate in periodic meetings, organized discussions and literature reviews to develop overall guidelines for resident remediation and in a collaborative authorship of this article identifying best practices for remediation. The task force recommends that residency programs: 1. Make efforts to understand the challenges of remediation, and recognize that the goal is successful correction of deficits, but that some deficits are not remediable. 2. Make efforts aimed at early identification of residents requiring remediation. 3. Create objective, achievable goals for remediation and maintain strict adherence to the terms of those plans, including planning for resolution when setting goals for remediation. 4. Involve the institution's Graduate Medical Education Committee (GMEC) early in remediation to assist with planning, obtaining resources, and documentation. 5. Involve appropriate faculty and educate those faculty into the role and terms of the specific remediation plan. 6. Ensure appropriate documentation of all stages of remediation. Resident remediation is frequently necessary and specific steps may be taken to justify, document, facilitate, and objectify the remediation process. Best practices for each step are identified and reported by the task force.
Affiliation(s)
- Eric D Katz
- Department of Emergency Medicine, Maricopa Medical Center, Phoenix, AZ, USA.
13
Griffith CH, Wilson JF. The association of student examination performance with faculty and resident ratings using a modified RIME process. J Gen Intern Med 2008; 23:1020-3. [PMID: 18612736] [PMCID: PMC2517939] [DOI: 10.1007/s11606-008-0611-3]
Abstract
BACKGROUND RIME is a descriptive framework in which students and their teachers can gauge progress throughout a clerkship from R (reporter) to I (interpreter) to M (manager) to E (educator). RIME, as described in the literature, is complemented by residents and attending physicians meeting with a clerkship director to discuss individual student progress, with group discussion resulting in assignment of a RIME stage. OBJECTIVE 1) to determine whether a student's RIME rating is associated with end-of-clerkship examination performance; and 2) to determine whose independent RIME rating is most predictive of a student's examination performance: attendings, residents, or interns. DESIGN Prospective cohort study. PARTICIPANTS Third year medical students from academic years 2004-2005 and early 2005-2006 at 1 medical school. MEASUREMENTS AND MAIN RESULTS Each attending, resident, and intern independently assessed the student's final RIME stage attained. For the purpose of analysis, R stage=1, I=2, M=3, and E=4. Regression analyses were performed with examination scores as dependent variables (National Board of Medical Examiners [NBME] medicine subject examination and a clinical performance examination [CPE]), with independent variables of mean attending RIME score, mean resident score, and mean intern score. For the 122 students, significant predictors of NBME subject exam score were resident RIME rating (p = .008) and intern RIME rating (p = .02). The significant predictor of CPE performance was resident RIME rating (p = .01). CONCLUSION House staff RIME ratings of students are associated with student performance on written and clinical skills examinations.