1
Almohaimede AA. Comparison between students' self-evaluation and faculty members' evaluation in a clinical endodontic course at King Saud University. Eur J Dent Educ 2022;26:569-576. PMID: 34870874. DOI: 10.1111/eje.12733.
Abstract
INTRODUCTION: The objective of this study was to compare faculty member evaluations with student self-evaluations in a clinical endodontic course at the King Saud University dental school and to evaluate the reliability of the students' self-assessment scores after use of a rubric with well-defined criteria.
MATERIALS AND METHODS: Endodontic cases clinically treated, evaluated and self-evaluated by fourth-year undergraduate dental students at the College of Dentistry, Girls University Campus, King Saud University over 2 years (2017-2018) were included. Cases included anterior teeth, premolars and molars. The evaluation form was divided into six sections with well-defined criteria covering all aspects of nonsurgical root canal treatment, with a maximum grade of 10 points per student per case. The students evaluated themselves on each section and were then evaluated by two faculty members. Agreement between student and faculty assessments and the reliability of the students' self-assessment scores were measured. A p ≤ .05 was considered significant.
RESULTS: A total of 363 cases were included: 26.7% anterior teeth, 38.84% premolars and 34.43% molars. The students graded themselves higher than the evaluators in all steps and in the overall grading for all tooth types. The students' self-assessment scores showed moderate to good reliability in all steps and in the overall grading.
CONCLUSION: The students tended to overrate their performance, but their assessments showed moderate to good reliability, reflecting the usefulness of a well-defined rubric as a measurement tool that helps both the evaluator and the student assess performance objectively.
Affiliation(s)
- Amal A Almohaimede
- Endodontic division, Department of Restorative Dental Sciences, College of Dentistry, King Saud University, Riyadh, Kingdom of Saudi Arabia
2
Johnson NR, Pelletier A, Berkowitz LR. Mini-Clinical Evaluation Exercise in the Era of Milestones and Entrustable Professional Activities in Obstetrics and Gynaecology: Resume or Reform? J Obstet Gynaecol Can 2020;42:718-725. PMID: 31882285. DOI: 10.1016/j.jogc.2019.10.002.
Abstract
OBJECTIVE: The Accreditation Council for Graduate Medical Education (ACGME) milestones and the core Entrustable Professional Activities (EPAs) provide guiding frameworks and requirements for assessing residents' progress. The Mini-Clinical Evaluation Exercise (Mini-CEX) is a formative assessment tool used to provide direct observation feedback after an ambulatory or clinical encounter. This study aimed to investigate the feasibility and reliability of the Mini-CEX in the authors' obstetrics and gynaecology (OB/GYN) residency program and its ability to measure residents' progress and competencies within the frameworks of ACGME milestones and EPAs.
METHODS: OB/GYN residents' Mini-CEX performance over 5 academic years was analyzed retrospectively to measure reliability and feasibility. Additionally, realistic evaluation was conducted to assess the usefulness of the Mini-CEX within the frameworks of ACGME milestones and EPAs.
RESULTS: A total of 395 Mini-CEX evaluations for 49 OB/GYN residents were analyzed. Mini-CEX evaluation data significantly discriminated among residents' training levels (P < 0.003). Residents completed an average of 8.1 evaluations each; 10% of second-year residents and 28% of third-year residents were evaluated 10 or more times per year, whereas no postgraduate year 1 or postgraduate year 4 residents achieved this number. Mini-CEX data could contribute to all 6 primary measurement domains of OB/GYN milestones and 8 of 10 EPAs required for first-year residents.
CONCLUSION: The Mini-CEX demonstrated potential for measuring residents' clinical competencies in their ACGME milestones. Faculty time commitment was the main challenge. Reform is needed in the current Mini-CEX feedback structure, in faculty development, and in operational guidelines to help residency programs match residents' clinical competency ratings with ACGME milestones and EPAs.
Affiliation(s)
- Natasha R Johnson
- Department of Obstetrics and Gynecology, Brigham and Women's Hospital, Boston, MA; Department of Obstetrics, Gynecology and Reproductive Biology, Harvard Medical School, Boston, MA.
- Andrea Pelletier
- Department of Obstetrics and Gynecology, Brigham and Women's Hospital, Boston, MA
- Lori R Berkowitz
- Department of Obstetrics, Gynecology and Reproductive Biology, Harvard Medical School, Boston, MA; Department of Obstetrics and Gynecology, Massachusetts General Hospital, Boston, MA
3
Bansal M. Introduction of Directly Observed Procedural Skills (DOPS) as a Part of Competency-Based Medical Education in Otorhinolaryngology. Indian J Otolaryngol Head Neck Surg 2019;71:161-166. PMID: 31275823. DOI: 10.1007/s12070-019-01624-y.
Abstract
The Directly Observed Procedural/Practical Skill (DOPS) assessment is a relatively new but reliable tool for formative assessment. The lack of awareness of DOPS among the otorhinolaryngologists of India prompted us to conduct this study. The aim of the study was the introduction of DOPS in an Oto-rhino-laryngology Department. The objectives of the study were: (1) to prepare lists of Oto-rhino-laryngology procedures for DOPS, (2) to conduct an orientation program on DOPS for the participants, (3) to prepare a structured list of items for the rating scale, and (4) to facilitate and conduct DOPS encounters for different Oto-rhino-laryngology procedures. The study was conducted in a tertiary care medical college hospital from April 2018 to August 2018. Thirty-three trainees and 5 trainers participated. The 421 DOPS encounters involved 41 Oto-rhino-laryngology procedures. The nonparametric χ2 test was used to check the association between average time, clinical settings, Oto-rhino-laryngology procedures, and DOPS encounters. Male trainees (63.63%) outnumbered female trainees. Most trainees (91%) were aged 22-25 years. Approximately half (49%; 20/41) of the Oto-rhino-laryngology procedures and 86.22% (363/421) of DOPS encounters were conducted in the outpatient department (OPD). The average time taken to complete the procedures and DOPS encounters was 15 minutes or less for the majority of Oto-rhino-laryngology procedures (91%; 38/41) and DOPS encounters (98%; 414/421). DOPS was introduced as a learning tool in the Oto-rhino-laryngology Department of our medical college. For assessing the "competency level" of trainees in E.N.T. procedures, DOPS is a high-quality instrument, as it tests the candidate at the "does" level.
Affiliation(s)
- Mohan Bansal
- CU Shah Medical College, C-23 Doctors Quarters, Dudhrej Road, Surendranagar, Gujarat India
4
Abdelsattar JM, AlJamal YN, Ruparel RK, Rowse PG, Heller SF, Farley DR. Correlation of Objective Assessment Data With General Surgery Resident In-Training Evaluation Reports and Operative Volumes. J Surg Educ 2018;75:1430-1436. PMID: 29773409. DOI: 10.1016/j.jsurg.2018.04.016.
Abstract
OBJECTIVE: Faculty evaluations, ABSITE scores, and operative case volumes often tell little about true resident performance. We developed an objective structured clinical examination called the Surgical X-Games (5 rooms, 15 minutes each, 12-15 tests total, different for each postgraduate year [PGY] level). We hypothesized that performance in the X-Games would prove more useful in identifying areas of strength or weakness among general surgery (GS) residents than faculty evaluations, ABSITE scores, or operative case volumes.
DESIGN: PGY 2 to 5 GS residents (n = 35) were tested in a semiannual X-Games assessment using multiple simulation tasks (laparoscopic skills, bowel anastomosis, CT/CXR analysis, chest tube placement, etc.) over 1 academic year. Resident scores were compared with their ABSITE scores, in-training evaluation reports, and operating room case numbers.
SETTING: Academic medical center.
PARTICIPANTS: PGY-2, 3, 4, and 5 GS residents at Mayo Clinic in Rochester, MN.
RESULTS: Results varied greatly within each class except for staff evaluations: median in-training evaluation report scores were 5.3 for PGY-2s (range: 5.0-6.0), 5.9 for PGY-3s (5.5-6.3), 5.6 for PGY-4s (5.0-6.0), and 6.1 for PGY-5s (5.6-6.9). Although ABSITE scores and operating room case volumes fluctuated greatly within each PGY class, only X-Games scores (median: PGY-2 = 82, PGY-3 = 61, PGY-4 = 76, PGY-5 = 60) correlated positively (p < 0.05) with operative case volume and negatively (p < 0.05) with staff evaluations.
CONCLUSIONS: X-Games assessment generated wide differentiation of resident performance quickly, inexpensively, and objectively. Although "Minnesota-nice" surgical staff may feel all GS trainees are "above average," objective assessment tells us otherwise.
Affiliation(s)
- Jad M Abdelsattar
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
- Yazan N AlJamal
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
- Raaj K Ruparel
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
- Phillip G Rowse
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
- Stephanie F Heller
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
- David R Farley
- Department of Surgery, Mayo Clinic College of Medicine, Rochester, Minnesota
5
Vergis A, Steigerwald S. Skill Acquisition, Assessment, and Simulation in Minimal Access Surgery: An Evolution of Technical Training in Surgery. Cureus 2018;10:e2969. PMID: 30221097. PMCID: PMC6136887. DOI: 10.7759/cureus.2969.
Abstract
Diminishing resources and expanding technologies, such as minimal access surgery, have complicated the acquisition and assessment of technical skills in surgical training programs. However, these challenges have been met with both innovation and an evolution in our understanding of how learners develop technical competence and how to better measure it. As these skills continue to grow in breadth and complexity, so too must the ability of surgical education systems to teach and assess them. This literature review examines and describes the pressures placed on surgical education programs and the development of methods to ameliorate them, with a focus on surgical simulation.
6
Direct observation of procedural skills (DOPS) evaluation method: Systematic review of evidence. Med J Islam Repub Iran 2018;32:45. PMID: 30159296. PMCID: PMC6108252. DOI: 10.14196/mjiri.32.45.
Abstract
Background: Evaluation is one of the most important aspects of medical education. Thus, new methods of effective evaluation are required in this area, and direct observation of procedural skills (DOPS) is one of these methods. This study was conducted to systematically review the evidence involved in this type of assessment to allow the effective use of this method.
Methods: Data were collected by searching keywords such as evaluation, assessment, medical education, and direct observation of procedural skills (DOPS) in Google Scholar, PubMed, Science Direct, SID, Medlib and Google, and by searching unpublished sources (grey literature) and selected references (reference of reference).
Results: Of 236 papers, 28 were studied. Satisfaction with the DOPS method was found to be moderate. The major strengths of this evaluation method are as follows: providing feedback to participants and promoting independence and practical skills during assessment. However, stressful evaluation, time limitation for participants, and bias between assessors are its main drawbacks. A positive impact of the DOPS method on improving student performance has been noted in most studies. The results showed that the validity and reliability of DOPS are relatively acceptable, and the performance of participants using DOPS was relatively satisfactory. However, failure to provide the necessary training on how to take a DOPS test, failure to provide essential feedback to participants, and insufficient time for the test are the major drawbacks of DOPS tests in practice.
Conclusion: According to the results of this study, DOPS tests can be applied as a valuable and effective evaluation method in medical education. However, more attention should be paid to the quality of these tests.
7
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. Med Educ 2017;51:1260-1268. PMID: 28971502. DOI: 10.1111/medu.13415.
Abstract
CONTEXT: Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs.
OBJECTIVES: The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined.
METHODS: Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality.
RESULTS: Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the off-service group (p = 0.62, r2 = 0.05).
CONCLUSIONS: Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs. However, resident performance affected assessor behaviours in the on-service group, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for those residents demonstrating appropriate clinical competence progression.
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
8
Vergis A, Steigerwald S. A Preliminary Investigation of General and Technique-specific Assessments for the Evaluation of Laparoscopic Technical Skills. Cureus 2017;9:e1757. PMID: 29226047. PMCID: PMC5720594. DOI: 10.7759/cureus.1757.
Abstract
Background: Both general and technique-specific assessments of technical skill have been validated in surgical education. The purpose of this study was to assess the correlation between the objective structured assessment of technical skills (OSATS) and the global operative assessment of laparoscopic skills (GOALS) rating scales using a high-fidelity porcine laparoscopic cholecystectomy model.
Methods: Post-graduate year-one general surgery and urology residents (n = 14) performed a live laparoscopic porcine cholecystectomy. Trained surgeons rated their performance using the OSATS and GOALS assessment scales.
Results: Pearson's correlation coefficient between OSATS and GOALS was 0.96 for overall scores and ranged from 0.78 to 0.89 for domains that overlapped between the two scales.
Conclusion: There is a very high correlation between OSATS and GOALS. This implies that they likely measure similar constructs and that either may be used for summative-type assessments of trainee skill. However, further investigation is needed to determine whether technique-specific assessments provide more useful feedback in formative evaluation.
9
Hall AK, Damon Dagnone J, Moore S, Woolfrey KGH, Ross JA, McNeil G, Hagel C, Davison C, Sebok-Syer SS. Comparison of Simulation-based Resuscitation Performance Assessments With In-training Evaluation Reports in Emergency Medicine Residents: A Canadian Multicenter Study. AEM Educ Train 2017;1:293-300. PMID: 30051047. PMCID: PMC6001706. DOI: 10.1002/aet2.10055.
Abstract
OBJECTIVE: Simulation stands to serve an important role in modern competency-based programs of assessment in postgraduate medical education. Our objective was to compare the performance of individual emergency medicine (EM) residents in a simulation-based resuscitation objective structured clinical examination (OSCE), scored using the Queen's Simulation Assessment Tool (QSAT), with portfolio assessment of clinical encounters using a modified in-training evaluation report (ITER), to understand in greater detail the inferences that may be drawn from a simulation-based OSCE assessment.
METHODS: A prospective observational study explored the use of a multicenter simulation-based OSCE for the evaluation of resuscitation competence. EM residents from five Canadian academic sites participated in the OSCE. Video-recorded performances were scored by blinded raters using scenario-specific QSATs with domain-specific anchored scores (primary assessment, diagnostic actions, therapeutic actions, communication) and a global assessment score (GAS). Residents' portfolios were evaluated using a modified ITER subdivided by CanMEDS roles (medical expert, communicator, collaborator, leader, health advocate, scholar, and professional) and a GAS. Correlational and regression analyses were performed comparing components of each of the assessment methods.
RESULTS: Portfolio review and ITER scoring were performed for 79 residents participating in the simulation-based OSCE. There was a significant positive correlation between total OSCE and ITER scores (r = 0.341). The strongest correlations were found between the ITER medical expert score and each of the OSCE GAS (r = 0.420), communication (r = 0.443), and therapeutic action (r = 0.484) domains. The ITER medical expert score was a significant predictor of the OSCE total (p = 0.002), and the OSCE therapeutic action score was a significant predictor of the ITER total (p = 0.02).
CONCLUSIONS: Simulation-based resuscitation OSCEs and portfolio assessment captured by ITERs appear to measure differing aspects of competence, with weak to moderate correlation between measures of conceptually similar constructs. In a program of competency-based assessment of EM residents, a simulation-based OSCE using the QSAT shows promise as a tool for assessing the medical expert and communicator roles.
Affiliation(s)
- Andrew Koch Hall
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- J. Damon Dagnone
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Sean Moore
- Department of Emergency Medicine, Northern Ontario School of Medicine, Kenora, Ontario, Canada
- John A. Ross
- Department of Emergency Medicine, Dalhousie University, Halifax, Nova Scotia, Canada
- Gordon McNeil
- Department of Emergency Medicine, University of Calgary, Calgary, Alberta, Canada
- Carly Hagel
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Colleen Davison
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Department of Public Health Sciences, Queen's University, Kingston, Ontario, Canada
- Stefanie S. Sebok-Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
10
Sterling L, Mills K, Steele D, Shapiro H. End-of-Rotation Examinations in Canadian Obstetrics and Gynaecology Residency Programs: The Perspectives of Faculty Members and Residents. J Obstet Gynaecol Can 2017;39:465-470.e6. DOI: 10.1016/j.jogc.2016.10.008.
11
Bodenmann AD, Bühler JM, Amato M, Weiger R, Zitzmann NU. Evaluation of a New Grading System for Clinical Skills in Dental Student Clinics. J Dent Educ 2017;81:604-612. DOI: 10.21815/jde.016.024.
Affiliation(s)
- Aurel D. Bodenmann
- Department of Periodontology, Endodontology, and Cariology, University of Basel, Basel, Switzerland
- Julia M. Bühler
- Department of Periodontology, Endodontology, and Cariology, University of Basel, Basel, Switzerland
- Mauro Amato
- Department of Periodontology, Endodontology, and Cariology, University of Basel, Basel, Switzerland
- Ronald Weiger
- Department of Periodontology, Endodontology, and Cariology, University of Basel, Basel, Switzerland
- Nicola U. Zitzmann
- Department of Periodontology, Endodontology, and Cariology, University of Basel, Basel, Switzerland
12
Vergis A, Hardy K. Cognitive and Technical Skill Assessment in Surgical Education: A Changing Horizon. Indian J Surg 2017;79:153-157. PMID: 28442843. DOI: 10.1007/s12262-017-1603-5.
Abstract
Assessment is an integral component of training and credentialing surgeons for practice. Traditional methods of cognitive and technical appraisal are well established but have clear shortcomings. This review outlines the components of the surgical care assessment model, identifies the deficits of current evaluation techniques, and discusses novel and emerging technologies that attempt to ameliorate this educational void.
Affiliation(s)
- Ashley Vergis
- Section of General Surgery, University of Manitoba, Winnipeg, MB, Canada; St. Boniface General Hospital, Z3039-409 Tache Avenue, Winnipeg, MB R2H 2A6, Canada
- Krista Hardy
- Section of General Surgery, University of Manitoba, Winnipeg, MB, Canada
13
Establishing the concurrent validity of general and technique-specific skills assessments in surgical education. Am J Surg 2016;211:268-73. DOI: 10.1016/j.amjsurg.2015.04.024.
14
May SA, Silva-Fletcher A. Scaffolded Active Learning: Nine Pedagogical Principles for Building a Modern Veterinary Curriculum. J Vet Med Educ 2015;42:332-339. PMID: 26421513. DOI: 10.3138/jvme.0415-063r.
Abstract
Veterinary discipline experts unfamiliar with the broader educational literature can find the adoption of an evidence-based approach to curriculum development challenging. However, greater societal and professional demands for achieving and verifying Day One knowledge and skills, together with continued progress in information generation and technology, make it all the more important that the defined period for initial professional training be well used. This article presents and discusses nine pedagogical principles that have been used in modern curricular development in Australia, the United Kingdom, and the United States: (1) outcomes-based curriculum design; (2) valid and reliable assessments; (3) active learning; (4) integrated knowledge for action; (5) tightly controlled core curriculum; (6) "just-in-time" rather than "just-in-case" knowledge; (7) vertical integration, the spiral curriculum, and sequential skills development; (8) learning skills support; and (9) bridges from classroom to workplace. Crucial to effective educational progress is active learning that embraces the skills required by the modern professional, made possible by tight control of curricular content. In this information age, professionals' ability to source information on a "just-in-time" basis to support high quality reasoning and decision making is far more important than the memorization of large bodies of increasingly redundant information on a "just-in-case" basis. It is important that those with responsibility for veterinary curriculum design ensure that their programs fully equip the modern veterinary professional for confident entry into the variety of roles in which society needs their skills.
15
The Quality of Written Feedback by Attendings of Internal Medicine Residents. J Gen Intern Med 2015;30:973-8. PMID: 25691242. PMCID: PMC4471022. DOI: 10.1007/s11606-015-3237-2.
Abstract
BACKGROUND: Attending evaluations are commonly used to evaluate residents.
OBJECTIVES: To evaluate the quality of attendings' written feedback on internal medicine residents.
DESIGN: Retrospective.
PARTICIPANTS: Internal medicine residents and faculty at the Medical College of Wisconsin from 2004 to 2012.
MAIN MEASURES: From monthly evaluations of residents by attendings, a randomly selected sample of 500 written comments was qualitatively coded and rated as high-, moderate-, or low-quality feedback by two independent coders with good inter-rater reliability (kappa: 0.94). Small-group exercises with residents and attendings also coded the utterances as high, moderate, or low quality and developed criteria for this categorization. In-service examination scores were correlated with written feedback.
KEY RESULTS: There were 228 internal medicine residents with 6,603 evaluations by 334 attendings. Among the 500 randomly selected written comments, there were 2,056 unique utterances: 29% were coded as nonspecific statements, 20% were comments about resident personality, 16% about patient care, 14% about interpersonal communication, 7% about medical knowledge, 6% about professionalism, and 4% each about practice-based learning and systems-based practice. Based on criteria developed in the group exercises, the majority of written comments were rated as moderate quality (65%); 22% were rated as high quality and 13% as low quality. Attendings who provided high-quality feedback rated residents significantly lower in all six Accreditation Council for Graduate Medical Education (ACGME) competencies (p < 0.0005 for all) and used a greater range of scores. Negative comments on medical knowledge were associated with lower in-service examination scores.
CONCLUSIONS: Most attendings' written evaluations were of moderate or low quality. Attendings who provided high-quality feedback appeared to be more discriminating, providing significantly lower ratings of residents in all six ACGME core competencies and across a greater range. Attendings' negative written comments on medical knowledge correlated with lower in-service training scores.
16
Obeid AA, Al-Qahtani KH, Ashraf M, Alghamdi FR, Marglani O, Alherabi A. Development and testing for an operative competency assessment tool for nasal septoplasty surgery. Am J Rhinol Allergy 2015;28:e163-7. PMID: 25197910. DOI: 10.2500/ajra.2014.28.4051.
Abstract
BACKGROUND: Assessing surgical competency in otolaryngology is challenging, and residency programs are now responsible for ensuring the surgical competency of their graduates. Therefore, more objective assessment tools are being incorporated into the evaluation process. Objective structured assessment of technical skills (OSATS) tools have been developed for multiple otolaryngology procedures, including tonsillectomy, endoscopic sinus surgery, thyroidectomy, mastoidectomy, direct laryngoscopy, and rigid bronchoscopy. The purpose of this study was to develop and test a valid, reliable, and feasible assessment tool designed to measure the development of trainees' surgical skills in the operating room during septoplasty surgery.
METHODS: A new OSATS-based instrument for septoplasty was developed. During the 2-year study period, 21 otolaryngology-head and neck surgery residents (postgraduate years 2 to 5) were evaluated intraoperatively by one faculty member, yielding a total of 175 evaluations. Surgical performance was rated using a seven-item task-specific checklist (TSC), which assessed specific septoplasty technical skills, and a global rating scale (GRS), which assessed overall surgical performance.
RESULTS: The tool showed construct validity for both components of the assessment instrument, with mean scores increasing with advancing clinical level. Cronbach's α, a measure of internal consistency, was 0.911 for the TSC and 0.898 for the GRS. A strong correlation between the TSC and GRS was established (r = 0.955; p < 0.01).
CONCLUSION: This study showed our educational tool to be a valid, reliable, and feasible method for assessing competency in septoplasty surgery. It can be integrated into surgical training programs to facilitate direct formative feedback, and assessing trainees' learning curves provides insight into their progression, ensuring appropriate development.
Affiliation(s)
- Amani A Obeid
- Department of Otolaryngology and Head and Neck Surgery, College of Medicine, King Saud University, Riyadh, Saudi Arabia
|
17
|
Development and Validation of an Assessment of Regional Anesthesia Ultrasound Interpretation Skills. Reg Anesth Pain Med 2015; 40:306-14. [DOI: 10.1097/aap.0000000000000236] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
18
|
Dudek N, Dojeiji S. Twelve tips for completing quality in-training evaluation reports. MEDICAL TEACHER 2014; 36:1038-1042. [PMID: 24986650 DOI: 10.3109/0142159x.2014.932897] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Assessing learners in the clinical setting is vital to determining their level of professional competence. Clinical performance assessments can be documented using in-training evaluation reports (ITERs). Previous research has suggested a need for faculty development to improve the quality of these reports and has identified key features of high-quality completed ITERs, which primarily involve the narrative comments. This aligns well with recent discourse in the assessment literature on the value of qualitative assessment. Evidence demonstrates that faculty can be trained to complete higher-quality ITERs. We present 12 key strategies to assist clinical supervisors in improving the quality of their completed ITERs. Higher-quality completed ITERs will improve the documentation of a trainee's progress and be more defensible when questioned in an appeal or legal process.
|
19
|
Azarnoush H, Alzhrani G, Winkler-Schwartz A, Alotaibi F, Gelinas-Phaneuf N, Pazos V, Choudhury N, Fares J, DiRaddo R, Del Maestro RF. Neurosurgical virtual reality simulation metrics to assess psychomotor skills during brain tumor resection. Int J Comput Assist Radiol Surg 2014; 10:603-18. [DOI: 10.1007/s11548-014-1091-z] [Citation(s) in RCA: 63] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2014] [Accepted: 06/09/2014] [Indexed: 01/22/2023]
|
20
|
Bindal N, Goodyear H, Bindal T, Wall D. DOPS assessment: a study to evaluate the experience and opinions of trainees and assessors. MEDICAL TEACHER 2013; 35:e1230-e1234. [PMID: 23627359 DOI: 10.3109/0142159x.2012.746447] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
BACKGROUND Workplace-based assessments (WBAs) have been part of UK training for the last 3 years. Carrying out procedures efficiently and safely is of paramount importance in anaesthesia. AIMS To explore opinions and experiences of Direct Observation of Procedural Skills (DOPS) assessments in a regional anaesthetic training programme. METHODS 19- and 20-item questionnaires were distributed to trainees and consultants, respectively. RESULTS The questionnaire response rate was 76% (90/119) for trainees and 65% (129/199) for consultants. 43% of consultants and 33% of trainees had not been trained in the use of DOPS. Assessments were usually not planned: 50% were ad hoc and the remainder mainly retrospective. Time spent on assessment was short, with DOPS and feedback completed in ≤15 minutes in the majority of cases and few suggestions offered for further improvement. Both trainees and consultants felt that DOPS was neither a helpful learning tool (p = 0.001) nor a reflection of trainee competency. CONCLUSIONS DOPS assessments are currently not valued as an educational tool. Training in the use of this WBA tool is essential, and assessments need to be planned with sufficient time allocated, so as to address current negative attitudes.
Affiliation(s)
- Natish Bindal
- Department of Anaesthesia, Queen Elizabeth Hospital, Mindelsohn Way, Edgbaston, Birmingham, UK.
|
21
|
Taylor CL, Grey NJA, Satterthwaite JD. A comparison of grades awarded by peer assessment, faculty and a digital scanning device in a pre-clinical operative skills course. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2013; 17:e16-21. [PMID: 23279405 DOI: 10.1111/j.1600-0579.2012.00752.x] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/20/2012] [Indexed: 05/23/2023]
Abstract
OBJECTIVE The aim of this study was to compare the grades awarded by two experienced assessors with peer-assessment marks and measurements from a digital scanning device (Prepassistant; KaVo, Biberach, Germany), for full gold crown preparations completed in a pre-clinical operative skills course on typodont teeth. METHODS Seventy-eight preparations on typodont teeth were randomised and assessed by all three methods. Agreement was measured using weighted kappa statistics, and mean rank scores given by the Friedman test. RESULTS The highest agreement was seen between the experienced assessors (0.38), closely followed by peer assessment and experienced assessor agreement (0.36, 0.29). Despite this, the results indicate poor levels of agreement. No agreement was seen between any of the assessment methods when compared to the digital scanning device. CONCLUSIONS The findings of this study could be related to the difficulty of calculating a single grade from the output of the device, in addition to the inability of the machine to assess all the factors necessary for an acceptable preparation. From this study, it can be concluded that this device is not suitable for calculating grades when used in isolation. Further research could explore the role of the Prepassistant in providing student feedback, its potential to enhance the learning experience and the subsequent effect on performance.
Affiliation(s)
- C L Taylor
- University of Manchester Dental School, Manchester, UK.
|
22
|
Chan TM, Wallner C, Swoboda TK, Leone KA, Kessler C. Assessing interpersonal and communication skills in emergency medicine. Acad Emerg Med 2012; 19:1390-402. [PMID: 23279246 DOI: 10.1111/acem.12030] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2012] [Accepted: 07/03/2012] [Indexed: 01/30/2023]
Abstract
Interpersonal and communication skills (ICS) are a key component of several competency-based schemata and a key competency in the set of six Accreditation Council for Graduate Medical Education (ACGME) core competencies. With the shift toward a competency-based educational framework, the importance of robust learner assessment becomes paramount. The journal Academic Emergency Medicine (AEM) hosted a consensus conference to discuss education research in emergency medicine (EM). This article summarizes the preparatory research conducted to brief consensus conference attendees and reports the results of the consensus conference breakout session as it pertains to ICS assessment of learners. The goals of this session were twofold: 1) to determine the state of assessment of observable learner performance and 2) to determine a research agenda within the ICS field for medical educators. The working group identified six key recommendations for medical educators and researchers.
Affiliation(s)
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Clare Wallner
- Department of Emergency Medicine, Oregon Health Sciences University, Portland, OR
- Thomas K. Swoboda
- Department of Emergency Medicine, Louisiana State University Health Sciences Center, Shreveport, LA
- Katrina A. Leone
- Department of Emergency Medicine, Oregon Health Sciences University, Portland, OR
- Chad Kessler
- Department of Emergency Medicine, Jesse Brown VA Hospital, Chicago, IL
|
23
|
Laughlin T, Brennan A, Brailovsky C. Effect of field notes on confidence and perceived competence: survey of faculty and residents. CANADIAN FAMILY PHYSICIAN MEDECIN DE FAMILLE CANADIEN 2012; 58:e352-e356. [PMID: 22700743 PMCID: PMC3374708] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
OBJECTIVE To evaluate the effectiveness of field notes in assessing teachers' confidence and perceived competence, and the effect of field notes on residents' perceptions of their development of competence. DESIGN A faculty and resident survey completed 5 years after field notes were introduced into the program. SETTING Five Dalhousie University family medicine sites--Fredericton, Moncton, and Saint John in New Brunswick, and Halifax and Sydney in Nova Scotia. PARTICIPANTS First- and second-year family medicine residents (as of May 2009) and core family medicine faculty. MAIN OUTCOME MEASURES Residents' outcome measures included beliefs about the effects of field notes on performance, learning, reflection, clinical skills development, and feedback received. Faculty outcome measures included beliefs about the effect of field notes on guiding feedback, teaching, and reflection on clinical practice. RESULTS Forty of 88 residents (45.5%) participated. Fifteen of 50 faculty (30.0%) participated, which only permitted a discussion of trends for faculty. Residents believed field note-directed feedback reinforced their performance (81.1%), helped them learn (67.6%), helped them reflect on practice and learning (66.7%), and focused the feedback they received, making it more useful (62.2%) (P < .001 for all); 63.3% believed field note-directed feedback helped with clinical skills development (P < .01). Faculty believed field notes helped to provide more focused (86.7%) and effective feedback (78.6%), improved teaching (75.0%), and encouraged reflection on their own clinical practice (73.3%). CONCLUSION Most surveyed residents believed field note use improved the feedback they received and helped them to develop competence through improved performance, learning, reflection, and clinical skills development. The trends from faculty information suggested faculty believed field notes were an effective teaching, feedback, and reflection tool.
Affiliation(s)
- Tom Laughlin
- Department of Family Medicine at Dalhousie University in Halifax, NS.
|
24
|
Watling CJ, Lingard L. Toward meaningful evaluation of medical trainees: the influence of participants' perceptions of the process. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2012; 17:183-94. [PMID: 20143260 DOI: 10.1007/s10459-010-9223-x] [Citation(s) in RCA: 82] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2009] [Accepted: 01/28/2010] [Indexed: 05/11/2023]
Abstract
An essential goal of evaluation is to foster learning. Across the medical education spectrum, evaluation of clinical performance is dominated by subjective feedback to learners based on observation by expert supervisors. Research in non-medical settings has suggested that participants' perceptions of evaluation processes exert considerable influence over whether the feedback they receive actually facilitates learning, but similar research on perceptions of feedback in the medical setting has been limited. In this review, we examine the literature on recipient perceptions of feedback and how those perceptions influence the contribution that feedback makes to their learning. A focused exploration of relevant work on this subject in higher education and industrial psychology settings is followed by a detailed examination of available research on perceptions of evaluation processes in medical settings, encompassing both trainee and evaluator perspectives. We conclude that recipients' and evaluators' perceptions of an evaluation process profoundly affect the usefulness of the evaluation and the extent to which it achieves its goals. Attempts to improve evaluation processes cannot, therefore, be limited to assessment tool modification driven by reliability and validity concerns, but must also take account of the critical issue of feedback reception and the factors that influence it. Given the unique context of clinical performance evaluation in medicine, a research agenda is required that seeks to more fully understand the complexity of the processes of giving, receiving, interpreting, and using feedback as a basis for real progress toward meaningful evaluation.
Affiliation(s)
- Christopher J Watling
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada.
|
25
|
Laughlin T, Wetmore S, Allen T, Brailovsky C, Crichton T, Bethune C, Donoff M, Lawrence K. Defining competency-based evaluation objectives in family medicine: communication skills. CANADIAN FAMILY PHYSICIAN MEDECIN DE FAMILLE CANADIEN 2012; 58:e217-e224. [PMID: 22499824 PMCID: PMC3325474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
OBJECTIVE To provide a pragmatic approach to the evaluation of communication skills using observable behaviours, as part of a multiyear project to develop competency-based evaluation objectives for Certification in family medicine. DESIGN A nominal group technique was used to develop themes and subthemes and to identify positive and negative observable behaviours that demonstrate competence in communication in family medicine. SETTING The College of Family Physicians of Canada in Mississauga, Ont. PARTICIPANTS An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian context with respect to region, sex, language, community type, and experience. METHODS The group used the nominal group technique to derive a list of observable behaviours that would constitute a detailed operational definition of competence in communication skills; multiple iterations were used until saturation was achieved. The group met several times a year, and membership remained unchanged during the 4 years in which the work was conducted. The iterative process was undertaken twice--once for communication with patients and once for communication with colleagues. MAIN FINDINGS Five themes, 5 subthemes, and 106 positive and negative observable behaviours were generated. The subtheme of charting skills was defined using a key-features analysis. CONCLUSION Communication skills were defined in terms of themes and observable behaviours. These definitions were intended to help assess family physicians' competence at the start of independent practice.
|
26
|
Rootman DB, Lam K, Sit M, Liu E, Dubrowski A, Lam WC. Psychometric properties of a new tool to assess task-specific and global competency in cataract surgery. Ophthalmic Surg Lasers Imaging Retina 2012; 43:229-34. [PMID: 22432603 DOI: 10.3928/15428877-20120315-02] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2011] [Accepted: 02/15/2012] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVE To establish and validate an assessment tool of cataract surgery performed by residents suitable for a competency-based curriculum. PATIENTS AND METHODS A three-component evaluation tool was created based on review of the literature and was refined using a modified Delphi technique. Faculty surgeons viewed two videos of cataract surgery, performed by a novice and an expert, and completed the evaluation tool. Results were analyzed for the psychometric properties. RESULTS Evaluators concluded the scale had excellent face validity. Construct validity showed the scale to reliably distinguish (P < .001) between novice (30.3 ± 6.1) and experienced (48.3 ± 7.2) surgeons. Internal consistency of the scale was high, with Cronbach's alpha equal to 0.981. Inter-rater reliability was high with an intraclass correlation coefficient equal to 0.811 (F(df) = 53.2 (25), P < .001). CONCLUSION The tool has excellent face validity, content validity, and reliability. Its task-specific, global-index scale and quantitative data form make it a valuable tool to assess residents' surgical skills.
|
27
|
Abstract
Assessing a learner in the course of a hectic emergency department (ED) rotation is a daunting task for both experienced and new supervisors. This is particularly true if the learner is not doing well. In light of numerous impediments provided by the modern ED environment, sticking to basic principles can result in marked improvement in both the process and the outcome of in-training assessment. This article addresses these important principles for assessment as they apply in the clinical realm of the ED, with a focus on matching expectations to both the trainee and the available assessment strategies. It is critical that teachers strive for clarity, consistency, honesty, and adherence to due process in their learner assessments. This article provides an evidence-informed approach to succeeding with such an approach to clinical assessment.
Affiliation(s)
- Glen Bandiera
- Department of Emergency Services, St. Michael's Hospital, Toronto, ON, Canada.
|
28
|
Pernar LI, Peyre SE, Warren LE, Gu X, Lipsitz S, Alexander EK, Ashley SW, Breen EM. Mini-clinical evaluation exercise as a student assessment tool in a surgery clerkship: Lessons learned from a 5-year experience. Surgery 2011; 150:272-7. [DOI: 10.1016/j.surg.2011.06.012] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2010] [Accepted: 06/14/2011] [Indexed: 11/25/2022]
|
29
|
|
30
|
van Lohuizen MT, Kuks JBM, van Hell EA, Raat AN, Stewart RE, Cohen-Schotanus J. The reliability of in-training assessment when performance improvement is taken into account. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2010; 15:659-669. [PMID: 20349272 PMCID: PMC2995207 DOI: 10.1007/s10459-010-9226-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2009] [Accepted: 03/08/2010] [Indexed: 05/27/2023]
Abstract
During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When taking performance improvement into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying reliability of in-training assessment, performance improvement should be considered.
Affiliation(s)
- Mirjam T van Lohuizen
- Center for Research and Innovation in Medical Education, University of Groningen and University Medical Center Groningen, A. Deusinglaan 1, 9713 AV, Groningen, The Netherlands.
|
31
|
|
32
|
|
33
|
Affiliation(s)
- M D Bould
- The Hospital for Sick Children, University of Toronto, 555 University Avenue, Canada.
|
34
|
The aging physician with cognitive impairment: approaches to oversight, prevention, and remediation. Am J Geriatr Psychiatry 2009; 17:445-54. [PMID: 19461256 DOI: 10.1097/jgp.0b013e31819e2d7e] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
There are many important unanswered issues regarding the occurrence of cognitive impairment in physicians, such as detection of deficits, remediation efforts, policy implications for safe medical practice, and the need to safeguard quality patient care. The authors review existing literature on these complex issues and derive heuristic formulations regarding how to help manage the professional needs of the aging physician with dementia. To ensure safe standards of medical care while also protecting the needs of physicians and their families, state regulatory or licensing agencies in collaboration with state medical associations and academic medical centers should generate evaluation guidelines to assure continued high levels of functioning. The authors also raise the question of whether age should be considered as a risk factor that merits special screening for adequate functioning. Either age-related screening for cognitive impairment should be initiated or rigorous evaluation after lapses in standard of care should be the norm regardless of age. Ultimately, competence rather than mandatory retirement due to age per se should be the deciding factor regarding whether physicians should be able to continue their practice. Finally, the authors issue a call for an expert consensus panel to convene to make recommendations concerning aging physicians with cognitive impairment who are at risk for medical errors.
|
35
|
Chou S, Lockyer J, Cole G, McLaughlin K. Assessing postgraduate trainees in Canada: are we achieving diversity in methods? MEDICAL TEACHER 2009; 31:e58-63. [PMID: 19089723 DOI: 10.1080/01421590802512938] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND Resident evaluation is a complex and challenging task, and little is known about which assessment methods predominate within or across specialties. AIMS To determine the methods program directors in Canada use to assess residents and their perceptions of how evaluation could be improved. METHODS We conducted a web-based survey of program directors from Royal College of Physicians and Surgeons of Canada (RCPSC)-accredited training programs to examine the use of the In-Training Evaluation Report (ITER), the use of non-ITER tools, and program directors' perceived needs for improvement in evaluation methods. RESULTS One hundred forty-nine of the 280 eligible program directors participated in the survey. ITERs were used by all but one program. Of the non-ITER tools, multiple-choice questions (71.8%) and oral examinations (85.9%) were most utilized, whereas essays (11.4%) and simulations (28.2%) were least used across all specialties. Surgical specialties had significantly higher multiple-choice question and logbook utilization, whereas medical specialties were significantly more likely to include Objective Structured Clinical Examinations (OSCEs). Program directors expressed a strong need for national collaboration between programs within a specialty to improve resident evaluation processes. CONCLUSIONS Program directors use a variety of methods to assess trainees. They continue to rely heavily on the ITER but are using other tools.
Affiliation(s)
- Sophia Chou
- Faculty of Medicine, University of Calgary, Canada.
|
36
|
Norman G, Keane D, Oppenheimer L. Compliance of medical students with voluntary use of personal data assistants for clerkship assessments. TEACHING AND LEARNING IN MEDICINE 2008; 20:295-301. [PMID: 18855232 DOI: 10.1080/10401330802199542] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
BACKGROUND For several years, final-year students at McMaster University have been required to complete 10 mini-CEX-type assessments per rotation. A similar system was being introduced at Ottawa. PURPOSE To facilitate data capture, we decided to introduce a personal data assistant (PDA)-based system and evaluate its impact. METHOD A randomized trial was designed to compare the acceptability of PDA and printed evaluation forms. The trial failed because of clerks' unwillingness to use PDAs. A focus group was held and user surveys were administered, chiefly by e-mail, to explore students' preference for printed forms. RESULTS Thirty percent of invited clerks (52/176) agreed to use a PDA; 6% (11; 21% of those agreeing) recorded one or more encounters; 2% (4) recorded at least the minimum number of evaluations required by their program. Most survey respondents expressed concerns related primarily to the relative inconvenience of PDAs compared with paper, a judgment reflecting the time required to install the software, become familiar with it and the data-entry form, and record information via the form. A minority were also concerned about assessors' willingness or ability to use PDA forms. CONCLUSION Before asking students and clinical supervisors to use a PDA-based encounter-evaluation form in clerkship, planners should carefully assess the advantages and disadvantages for students of the system they hope to implement. The prima facie greater convenience and efficiency of the PDA may be offset by workplace disincentives and inefficiencies in data recording, relative to the incentives and efficiencies of a system based on printed (paper) forms.
Affiliation(s)
- Geoffrey Norman
- Program for Educational Research and Development, McMaster University, Hamilton, Ontario, Canada.
|
37
|
Watling CJ, Kenyon CF, Zibrowski EM, Schulz V, Goldszmidt MA, Singh I, Maddocks HL, Lingard L. Rules of engagement: residents' perceptions of the in-training evaluation process. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2008; 83:S97-100. [PMID: 18820513 DOI: 10.1097/acm.0b013e318183e78c] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
BACKGROUND In-training evaluation reports (ITERs) often fall short of their goals of promoting resident learning and development. Efforts to address this problem through faculty development and assessment-instrument modification have been disappointing. The authors explored residents' experiences and perceptions of the ITER process to gain insight into why the process succeeds or fails. METHOD Using a grounded theory approach, semistructured interviews were conducted with 20 residents. Constant comparative analysis for emergent themes was conducted. RESULTS All residents identified aspects of "engagement" in the ITER process as the dominant influence on the success of ITERs. Both external (evaluator-driven, such as evaluator credibility) and internal (resident-driven, such as self-assessment) influences on engagement were elaborated. When engagement was lacking, residents viewed the ITER process as inauthentic. CONCLUSIONS Engagement is a critical factor to consider when seeking to improve ITER use. Our articulation of external and internal influences on engagement provides a starting point for targeted interventions.
Affiliation(s)
- Christopher J Watling
- London Health Sciences Centre, Victoria Hospital, 800 Commissioners Road E., Rm. C3-302, London, ON, Canada N6A 5W9.
|
38
|
Malling B, Bested KM, Skjelsager K, Ostergaard HT, Ringsted C. Long-term effect of a course on in-training assessment in postgraduate specialist education. MEDICAL TEACHER 2007; 29:966-971. [PMID: 18158673 DOI: 10.1080/01421590701753534] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND In-training assessment has become an important part of clinical teachers' responsibilities. One way to ensure that clinical teachers are qualified for this role is to set up a course. A "Teach the teachers" course focusing on in-training assessment was designed for anaesthesiologists in Denmark. AIMS To evaluate the short- and longer-term effects of a course on in-training assessment for clinical teachers in anaesthesiology. METHOD Fifty-one anaesthesiologists attended a 2-day interactive course about in-training assessment. Effects of the course on knowledge were assessed using identical pre- and post-tests. Longer-term effects were measured six months after the course using the same test. Self-reported use of in-training assessment methods was evaluated using supplemental questions in the follow-up test. RESULTS There were significant increases in knowledge about in-training assessment immediately following the course (effect size, Cohen's d = 1.5). The knowledge was retained six months later. Knowledge about assessment by clinical structured observation and by written assignments showed further increases in the follow-up period. Participants used the various assessment methods in their daily practice during the six-month study period. CONCLUSION A focused "Teach the teachers" course during the implementation phase of a new assessment programme increased participants' knowledge about in-training assessment.
Affiliation(s)
- B Malling
- Department of Quality and Education, Regional Hospital, Viborg.
|
39
|
Hatala R, Ainslie M, Kassen BO, Mackie I, Roberts JM. Assessing the mini-Clinical Evaluation Exercise in comparison to a national specialty examination. MEDICAL EDUCATION 2006; 40:950-6. [PMID: 16987184 DOI: 10.1111/j.1365-2929.2006.02566.x] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
PURPOSE To evaluate the reliability and validity of the Mini-Clinical Evaluation Exercise (mini-CEX) for postgraduate year 4 (PGY-4) internal medicine trainees compared to a high-stakes assessment of clinical competence, the Royal College of Physicians and Surgeons of Canada Comprehensive Examination in Internal Medicine (RCPSC IM examination). METHODS Twenty-two PGY-4 residents at the University of British Columbia and the University of Calgary were evaluated, during the 6 months preceding their 2004 RCPSC IM examination, with a mean of 5.5 mini-CEX encounters (range 3-6). Experienced Royal College examiners from each site travelled to the alternate university to assess the encounters. RESULTS The mini-CEX encounters assessed a broad range of internal medicine patient problems. The inter-encounter reliability for the residents' mean mini-CEX overall clinical competence score was 0.74. The attenuated correlation between residents' mini-CEX overall clinical competence score and their 2004 RCPSC IM oral examination score was 0.59 (P = 0.01). CONCLUSION By examining multiple sources of validity evidence, this study suggests that the mini-CEX provides a reliable and valid assessment of clinical competence for PGY-4 trainees in internal medicine.
40. Holmboe ES, Rodak W, Mills G, McFarlane MJ, Schultz HJ. Outcomes-based evaluation in resident education: creating systems and structured portfolios. Am J Med 2006; 119:708-14. [PMID: 16887420] [DOI: 10.1016/j.amjmed.2006.05.031]
Affiliation(s)
- Eric S Holmboe
- American Board of Internal Medicine, Philadelphia, Pa 19106, USA.
41. Nair P, Siu SC, Sloggett CE, Biclar L, Sidhu RS, Yu EHC. The assessment of technical and interpretative proficiency in echocardiography. J Am Soc Echocardiogr 2006; 19:924-31. [PMID: 16825004] [DOI: 10.1016/j.echo.2006.01.015]
Abstract
OBJECTIVES We sought to assess the relationship between traditional measures of proficiency in echocardiography and an objective assessment of technical and interpretative skills. BACKGROUND Determination of competency in echocardiography is currently based on the number of months of training, echocardiograms scanned, and echocardiograms interpreted. It has not been established whether completion of these requirements is a surrogate for competency. METHODS In all, 22 cardiology fellows underwent an echocardiography objective structured clinical examination (OSCE). RESULTS There was a correlation between the number of echocardiograms scanned and the interpretation (r = 0.45, P = .038) and scanning (r = 0.42, P = .048) scores. There was a weak correlation between the number of echocardiograms interpreted and interpretation scores (r = 0.33); and number of months of training and the scanning (r = 0.39) and interpretation (r = 0.42) scores. CONCLUSIONS Technical and interpretative proficiency in echocardiography is not related to traditional measures. An objective assessment of acquisition and interpretation of echocardiographic data should be incorporated into the assessment of proficiency in echocardiography.
Affiliation(s)
- Parvathy Nair
- Gordon Yu Hoi Chiu and A. D. Barry McKelvey Echocardiographic Laboratories, The Toronto Western and Toronto General Hospitals, University Health Network, University of Toronto, Toronto, Ontario, Canada
42. Kimberlin CL. Communicating with patients: skills assessment in US colleges of pharmacy. Am J Pharm Educ 2006; 70:67. [PMID: 17136187] [PMCID: PMC1636937] [DOI: 10.5688/aj700367]
Abstract
OBJECTIVE To describe current practices in assessing patient communication skills in US colleges and schools of pharmacy. METHODS Syllabi and behavioral assessment forms were solicited and key faculty members were interviewed. Forms were analyzed to determine skills most commonly assessed in communication with simulated or role-playing patients. RESULTS Fifty schools submitted behavioral assessment forms for patient communication skills. Individuals from 47 schools were interviewed. Colleges were found to vary in the way communication skills were assessed. Assessment forms focused more on dispensing a new prescription than monitoring ongoing therapy. Providing information was emphasized more than promoting adherence. Common faculty concerns were lack of continuity and congruence of assessment across the curriculum. CONCLUSIONS A common understanding of the standards and procedures for determining competence is needed. Experience and assessment activities should be sequenced throughout a program to build competence.
43. Stimmel B, Cohen D, Fallar R, Smith L. The use of standardised patients to assess clinical competence: does practice make perfect? Med Educ 2006; 40:444-9. [PMID: 16635124] [DOI: 10.1111/j.1365-2929.2006.02446.x]
Abstract
CONTEXT The use of standardised patients (SPs) is now an integral component of the United States Medical Licensing Examination (USMLE). This new requirement has caused more schools to include SP examinations (SPEs) in their curricula. This study reviews the effect of prior experience with SPs in a medical school curriculum on SPE pass rates. METHODS This study reviewed the mean scores and pass rates on a 4-station SPE, comparing the performance of 121 US medical school graduates (USMGs) with that of 228 international medical graduates (IMGs). The analysis of USMGs' performance was based upon whether the resident had had previous exposure to an SPE during medical school, while the analysis of IMGs' performance was based upon whether the IMG had taken the Clinical Skills Assessment (CSA) for certification by the Educational Commission for Foreign Medical Graduates. A distinction was made between those who had received prior exposure at Mount Sinai School of Medicine's Morchand Center, where the cases utilised were identical to those of the SPE, and those who had gained exposure elsewhere. RESULTS Neither the mean scores nor the failure rates of the IMGs and the USMGs differed significantly with prior exposure to SPs. CONCLUSION Prior exposure to SPs does not appear to have a positive effect on subsequent performance on an SPE unless similar or identical cases are used. The review's conclusions were limited, however, by the type and site of prior exposure. In view of the increased use of SPEs in medical schools, the content of prior exposure needs to be more fully established.
Affiliation(s)
- Barry Stimmel
- Graduate Medical Education, Mount Sinai School of Medicine, New York, New York 10029, USA.
44. Gould JC. Building a laparoscopic surgical skills training laboratory: resources and support. JSLS 2006; 10:293-6. [PMID: 17212882] [PMCID: PMC3015702]
Abstract
BACKGROUND Technical skills have historically been developed and assessed in the operating room. Multiple pressures, including resident work hour limitations, the increasing cost of operating room time, and patient safety concerns, have led to increased interest in conducting these activities in a safe, reproducible environment. To address some of these issues, many residency programs have developed laparoscopic surgical skills training laboratories. We sought to determine the current status of laparoscopic skills laboratories across residency programs. METHODS In December 2004, a brief 2-page survey consisting of 9 questions about laparoscopic skills training laboratories was mailed to all 251 United States general surgery residency program directors. RESULTS Of the 251 mailed surveys, 111 completed surveys were returned (44%). Of the respondents, 81 had laparoscopic skills training laboratories in place (80%). Skills laboratories that used a defined curriculum and general surgery programs that shared their laboratories with other training programs had significantly more resources. A wide variety of funding sources has been used to develop and support these skills laboratories. CONCLUSIONS Significant variability in training practices and equipment currently exists between laboratories. A more efficient, standardized approach to skills training across residency programs is a desirable goal for the immediate future.
Affiliation(s)
- Jon C Gould
- University of Wisconsin Medical School, Department of Surgery, Madison, Wisconsin, USA.
45. Lake FR. Teaching on the run tips 9: in-training assessment. Med J Aust 2005; 183:33-4. [PMID: 15992337] [DOI: 10.5694/j.1326-5377.2005.tb06887.x]
Affiliation(s)
- Fiona R Lake
- Education Centre, Faculty of Medicine and Dentistry, University of Western Australia, First Floor N Block, QEII Medical Centre, Verdun Street, Nedlands, WA 6009, Australia.
46. Al-Jarallah KF, Moussa MAA, Shehab D, Abdella N. Use of interaction cards to evaluate clinical performance. Med Teach 2005; 27:369-74. [PMID: 16024423] [DOI: 10.1080/01421590500046429]
Abstract
In-training evaluation in Kuwait currently depends on a global rating scale applied at the end of clinical rotation clerkships. Such a scale is inconsistent and subjective, and suffers from deficiencies such as positive skewness of the distribution of ratings and poor reliability. The aim of the study was to assess the inter-rater variation and reliability of the recently introduced Interaction Card (IC) method for evaluating clinical performance, and to measure the agreement between trainees' overall performance evaluation by the currently used global rating scale and the IC summative evaluation. In the study, 370 evaluators encountered 50 trainees during their basic clinical training rotations (internal medicine, surgery, obstetrics and gynecology, and pediatrics) at six hospitals. A total of 9146 encounters were conducted, focusing on six clinical performance domains: clinical skills (history taking, case sheet and physical examination), professional behaviour, case presentation, diagnosis, therapy and handling of emergencies. The method demonstrated significant inter-rater variation in the overall IC ratings according to specialty, rank of evaluator and hospital (p < 0.001). The Interaction Card was found to be reliable, as shown by the internal consistency across the six domains (Cronbach's alpha = 0.914). There was low correlation (Spearman rank correlation coefficient, rs = 0.337) and low agreement (kappa = 0.131) between the global rating scale and Interaction Card summative evaluations. The IC method provided instantaneous formative feedback and summative evaluation of clinical performance for trainees. The method can be generalized to encompass training and examination programmes for all categories of trainees in most clinical specialties.
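The internal-consistency figure reported above (Cronbach's alpha = 0.914) follows the standard item-variance formula, α = k/(k−1) · (1 − Σ var(domain)/var(total)). A minimal sketch with hypothetical domain scores, not the study's data:

```python
def cronbach_alpha(domains):
    """Cronbach's alpha; `domains` holds one list of scores per domain,
    all rated over the same set of trainee encounters."""
    k = len(domains)
    n = len(domains[0])

    def pvar(xs):
        # Population variance of a score list.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Per-encounter totals across all domains.
    totals = [sum(col[i] for col in domains) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvar(col) for col in domains) / pvar(totals))


# Two domains whose scores rise and fall together (alpha near 1):
print(round(cronbach_alpha([[1, 2, 3], [2, 4, 6]]), 3))  # → 0.889
```

Values above roughly 0.9, as in the study, are conventionally read as excellent internal consistency across the rated domains.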
47. Gómez-Fleitas M. [The need for changes in surgical education and training: an unresolved problem in endoscopic surgery]. Cir Esp 2005; 77:3-5. [PMID: 16420875] [DOI: 10.1016/s0009-739x(05)70795-8]
Affiliation(s)
- Manuel Gómez-Fleitas
- Centro de Formación e Investigación en Cirugía Endoscópica y Procedimientos Mínimamente Invasivos Guiados por la Imagen, Instituto de Formación e Investigación Marqués de Valdecilla, Universidad de Cantabria, Santander, Cantabria, Spain.
48. Kendal WS, MacRae R, Dagg P. Problems with subjective in-training evaluations. South Med J 2004; 97:1024. [PMID: 15558941] [DOI: 10.1097/01.smj.0000140861.77140.31]
49. Ringsted C, Henriksen AH, Skaarup AM, Van der Vleuten CPM. Educational impact of in-training assessment (ITA) in postgraduate medical education: a qualitative study of an ITA programme in actual practice. Med Educ 2004; 38:767-77. [PMID: 15200401] [DOI: 10.1111/j.1365-2929.2004.01841.x]
Abstract
OBJECTIVES To investigate the experiences and opinions of programme directors, clinical supervisors and trainees on an in-training assessment (ITA) programme on a broad spectrum of competence for first year training in anaesthesiology. How does the programme work in practice and what are the benefits and barriers? What are the users' experiences and thoughts about its effect on training, teaching and learning? What are their attitudes towards this concept of assessment? METHODS Semistructured interviews were conducted with programme directors, supervisors and trainees from 3 departments. Interviews were audiotaped and transcribed. The content of the interviews was analysed in a consensus process among the authors. RESULTS The programme was of benefit in making goals and objectives clear, in structuring training, teaching and learning, and in monitoring progress and managing problem trainees. There was a generally positive attitude towards assessment. Trainees especially appreciated the coupling of theory with practice and, in general, the programme inspired an academic dialogue. Issues of uncertainty regarding standards of performance and conflict with service declined over time and experience with the programme, and departments tended to resolve practical problems through structured planning. DISCUSSION Three interrelated factors appeared to influence the perceived value of assessment in postgraduate education: (1) the link between patient safety and individual practice when assessment is used as a licence to practise without supervision rather than as an end-of-training examination; (2) its benefits to educators and learners as an educational process rather than as merely a method of documenting competence, and (3) the attitude and rigour of assessment practice.
Affiliation(s)
- C Ringsted
- Copenhagen Hospital Corporation, Postgraduate Medical Institute, Bispebjerg Hospital, Copenhagen, Denmark.
50. Feldman LS, Hagarty SE, Ghitulescu G, Stanbridge D, Fried GM. Relationship between objective assessment of technical skills and subjective in-training evaluations in surgical residents. J Am Coll Surg 2004; 198:105-10. [PMID: 14698317] [DOI: 10.1016/j.jamcollsurg.2003.08.020]
Abstract
BACKGROUND Technical skills of residents have traditionally been evaluated using subjective In-Training Evaluation Reports (ITERs). We have developed the McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS), an objective measure of laparoscopic technical ability. The purpose of the study was to assess the concurrent validity of the MISTELS by exploring the relationship between MISTELS score and ITER assessment. STUDY DESIGN Fifty surgery residents were assessed on the MISTELS system. Concurrent ITER assessments of technical skill were collected, and the proportion of superior ratings for the year was calculated. Statistical comparisons were performed by ANOVA and chi-square analysis. The Pearson correlation coefficient was used to compare the scores in the MISTELS with the ITER ratings. RESULTS The 50 residents received 277 ITERs for the year, of which 103 (37%) were "superior," 170 (61%) "satisfactory," 4 (1%) "borderline," and 0 "unsatisfactory." The MISTELS score correlated moderately well with the proportion of superior ITER scores (r = 0.51, p < 0.01). Residents who passed the MISTELS had a higher proportion of superior ITER assessments than those who failed the MISTELS (p = 0.02), but residents who performed below their expected level on the MISTELS still received mainly satisfactory ITERs (82 +/- 18%). CONCLUSIONS The ITER assessment is poor at identifying residents with below-average technical skills. Residents who perform well in the MISTELS laparoscopic simulator also have better ITER evaluations, providing evidence for the concurrent validity of the MISTELS. Multiple assessment instruments are recommended for assessment of technical competency.
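The concurrent-validity claim above rests on a Pearson correlation (r = 0.51) between simulator scores and the proportion of superior ITER ratings. A minimal sketch of that computation, using hypothetical paired scores rather than the study's data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical simulator scores vs. proportion of "superior" ITERs:
sim_scores = [310, 420, 390, 510, 450]
superior_prop = [0.20, 0.35, 0.40, 0.60, 0.45]
print(round(pearson_r(sim_scores, superior_prop), 2))
```

An r of about 0.5, as reported, indicates a moderate positive association: residents scoring higher on the simulator tend to receive more superior ITERs, but the two measures are far from interchangeable.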
Affiliation(s)
- Liane S Feldman
- Steinberg-Bernstein Centre for Minimally Invasive Surgery, McGill University, Montreal, Quebec, Canada