1
Pearce J, Chiavaroli N, Tavares W. On the use and abuse of metaphors in assessment. Advances in Health Sciences Education: Theory and Practice 2023; 28:1333-1345. [PMID: 36729196 DOI: 10.1007/s10459-022-10203-w]
Abstract
This paper is motivated by a desire to advance assessment in the health professions through encouraging the judicious and productive use of metaphors. Through five specific examples (pixels, driving lesson/test, jury deliberations, signal processing, and assessment as a toolbox), we interrogate how metaphors are being used in assessment to consider what value they add to understanding and implementation of assessment practices. By unpacking these metaphors in action, we probe each metaphor's rationale and function, the gains each metaphor makes, and explore the unintended meanings they may carry. In summarizing common uses of metaphors, we elucidate how there may be both advantages and disadvantages. Metaphors can play important roles in simplifying, complexifying, communicating, translating, encouraging reflection, and convincing. They may be powerfully rhetorical, leading to intended consequences, actions, and other pragmatic outcomes. Although metaphors can be extremely helpful, they do not constitute thorough critique, justified evidence or argumentation. We argue that although metaphors have utility, they must be carefully considered if they are to serve assessment needs in intended ways. We should pay attention to how metaphors may be misinterpreted, what they ignore or unintentionally signal, and perhaps mitigate this with anticipated corrections or nuanced qualifications. Failure to do so may lead to implementing practices that miss underlying and relevant complexities for assessment science and practice. Using metaphors requires careful attention with respect to their role, contributions, benefits and limitations. We highlight the value that comes from critiquing metaphors, and demonstrate the care required to ensure their continued utility.
Affiliation(s)
- Jacob Pearce
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia.
- Neville Chiavaroli
- Tertiary Education, Australian Council for Educational Research, Camberwell, Australia
- Walter Tavares
- Department of Health and Society and Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
2
Schneider A, Messerer DAC, Kühn V, Horneffer A, Bugaj TJ, Nikendei C, Kühl M, Kühl SJ. Randomised controlled monocentric trial to compare the impact of using professional actors or peers for communication training in a competency-based inverted biochemistry classroom in preclinical medical education. BMJ Open 2022; 12:e050823. [PMID: 35618331 PMCID: PMC9137345 DOI: 10.1136/bmjopen-2021-050823]
Abstract
OBJECTIVE In medical education, biochemistry topics are usually knowledge based, and students are often unaware of their clinical relevance. To improve students' awareness of this relevance, we integrated communication skills training into biochemistry education. No previous studies have examined the difference between peer and standardised patient (SP) role plays in which students explain the biochemical background of a disease in patient-centred language. Therefore, we evaluated whether students' self-perceived competency in Canadian Medical Education Directives for Specialists (CanMEDS) roles and their opinion of the quality of role play differ if the layperson is played by peers or SPs. METHODS We randomly assigned medical students in a preclinical semester to one of two groups. The groups used predefined scripts to role play a physician-parent consultation with either a peer (peer group) or an SP (SP group) in the parent role. Students then assessed the activity's effects on their competency in CanMEDS roles and motivation and the relevance of the role play. To determine whether students achieved biochemistry learning goals, we evaluated the results of a biochemistry exam. RESULTS Students' self-perceived competency improved in both groups. The SP group rated their competency in the roles 'Scholar' and 'Professional' significantly higher than the peer group did. The peer group rated their competency in the role of 'Medical Expert' significantly higher if they played the role of the parent rather than physician or observer. The SP group agreed more strongly that they were motivated by the role play and wanted to receive more role play-based teaching. The SP group perceived the role play as being realistic and rated the feedback discussion as more beneficial. The examination results were the same in both groups. CONCLUSION We showed that role plays in a biochemistry seminar improve students' self-perceived competency. The use of SPs has some advantages, such as being more realistic.
Affiliation(s)
- Achim Schneider
- Institute of Biochemistry and Molecular Biology, Ulm University, Ulm, Germany
- Office of the Dean of Studies, Medical Faculty, Ulm University, Ulm, Germany
- David Alexander Christian Messerer
- Institute of Clinical and Experimental Trauma Immunology, Ulm University, Ulm, Germany
- Department of Transfusion Medicine and Hemostaseology, Friedrich-Alexander University Erlangen-Nuremberg, University Hospital Erlangen, Erlangen, Germany
- Veronika Kühn
- Office of the Dean of Studies, Medical Faculty, Ulm University, Ulm, Germany
- Astrid Horneffer
- Office of the Dean of Studies, Medical Faculty, Ulm University, Ulm, Germany
- Till Johannes Bugaj
- Department of General Internal Medicine and Psychosomatics, University Hospital Heidelberg, Heidelberg, Germany
- Christoph Nikendei
- Department of General Internal Medicine and Psychosomatics, University Hospital Heidelberg, Heidelberg, Germany
- Michael Kühl
- Institute of Biochemistry and Molecular Biology, Ulm University, Ulm, Germany
- Susanne Julia Kühl
- Institute of Biochemistry and Molecular Biology, Ulm University, Ulm, Germany
3
Lacasse M, Renaud JS, Côté L, Lafleur A, Codsi MP, Dove M, Pélissier-Simard L, Pitre L, Rheault C. [Feedback Guide for direct observation of family medicine residents in Canada: a francophone tool]. Canadian Medical Education Journal 2022; 13:29-54. [PMID: 35321416 PMCID: PMC8909829 DOI: 10.36834/cmej.72587]
Abstract
BACKGROUND There is no CanMEDS-FM-based milestone tool to guide feedback during direct observation (DO). We have developed a guide to support documentation of feedback during DO in Canadian family medicine (FM) programs. METHODS The Guide was designed in three phases with the collaboration of five Canadian FM programs, each with at least one French-speaking teaching site: 1) literature review and needs assessment; 2) development of the DO Feedback Guide; 3) testing the Guide in a video simulation context with qualitative content analysis. RESULTS Phase 1 demonstrated the need for a narrative guide aimed at 1) specifying mutual expectations according to the resident's level of training and the clinical context, 2) providing the supervisor with tools and structure for their observations, and 3) facilitating documentation of feedback. In Phase 2, the Guide was developed in paper and electronic formats to meet the identified needs. In Phase 3, 15 supervisors used the Guide with residents at three levels of training. The Guide was adjusted following this testing to include reminders of the phases of the clinical encounter that were often omitted during feedback (before the consultation, diagnosis, and follow-up), and to suggest preferred feedback formulations (stimulating questions, clarifying questions, reflections). CONCLUSION Based on evidence and a collaborative approach, this Guide will equip French-speaking Canadian supervisors and residents performing DO in family medicine.
Affiliation(s)
- Luc Côté
- Université Laval, Québec, Canada
4
Bhat C, LaDonna KA, Dewhirst S, Halman S, Scowcroft K, Bhat S, Cheung WJ. Unobserved Observers: Nurses' Perspectives About Sharing Feedback on the Performance of Resident Physicians. Academic Medicine: Journal of the Association of American Medical Colleges 2022; 97:271-277. [PMID: 34647919 DOI: 10.1097/acm.0000000000004450]
Abstract
PURPOSE Postgraduate training programs are incorporating feedback from registered nurses (RNs) to facilitate holistic assessments of resident performance. RNs are a potentially rich source of feedback because they often observe trainees during clinical encounters when physician supervisors are not present. However, RN perspectives about sharing feedback have not been deeply explored. This study investigated RN perspectives about providing feedback and explored the facilitators and barriers influencing their engagement. METHOD Constructivist grounded theory methodology was used in interviewing 11 emergency medicine and 8 internal medicine RNs at 2 campuses of a tertiary care academic medical center in Ontario, Canada, between July 2019 and March 2020. Interviews explored RN experiences working with and observing residents in clinical practice. Data collection and analysis were conducted iteratively. Themes were identified using constant comparative analysis. RESULTS RNs felt they could observe authentic day-to-day behaviors of residents often unwitnessed by supervising physicians and offer unique feedback related to patient advocacy, communication, leadership, collaboration, and professionalism. Despite a strong desire to contribute to resident education, RNs were apprehensive about sharing feedback and reported barriers related to hierarchy, power differentials, and a fear of overstepping professional boundaries. Although infrequent, a key stimulus that enabled RNs to feel safe in sharing feedback was an invitation from the supervising physician to provide input. CONCLUSIONS Perceived hierarchy in academic medicine is a critical barrier to engaging RNs in feedback for residents. Accessing RN feedback on authentic resident behaviors requires dismantling the negative effects of hierarchy and fostering a collaborative interprofessional working environment. A critical step toward this goal may require supervising physicians to model feedback-seeking behavior by inviting RNs to share feedback. Until a workplace culture is established that validates nurses' input and creates safe opportunities for them to contribute to resident education, the voices of nurses will remain unheard.
Affiliation(s)
- Chirag Bhat
- C. Bhat is a resident physician, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-3198-6450
- Kori A LaDonna
- K.A. LaDonna is assistant professor, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Sebastian Dewhirst
- S. Dewhirst is a lecturer, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0003-1996-6692
- Samantha Halman
- S. Halman is assistant professor, Department of Medicine, University of Ottawa and the Ottawa Hospital, Ottawa, Ontario, Canada; ORCID: http://orcid.org/0000-0002-5474-9696
- Katherine Scowcroft
- K. Scowcroft is a research assistant, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Silke Bhat
- S. Bhat is a registered nurse, Department of Emergency Medicine, the Ottawa Hospital, Ottawa, Ontario, Canada
- Warren J Cheung
- W.J. Cheung is associate professor, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada; ORCID: https://orcid.org/0000-0002-2730-8190
5
Kazevman G, Ng JCY, Marshall JL, Slater M, Leung FH, Guiang CB. Challenges for Family Medicine Residents in Attaining the CanMEDS Professional Role: A Thematic Analysis of Preceptor Field Notes. Academic Medicine: Journal of the Association of American Medical Colleges 2021; 96:1598-1602. [PMID: 34039855 DOI: 10.1097/acm.0000000000004184]
Abstract
PURPOSE Among the roles of the competent physician is that of a professional, according to the Canadian Medical Education Directives for Specialists (CanMEDS) framework, which describes the abilities physicians require to effectively meet the health care needs of the people they serve. Through examination of preceptor field notes on resident performance, the authors identified aspects of this role with which family medicine residents struggle. METHOD The authors used a structured thematic analysis in this qualitative study to explore the written feedback postgraduate medical learners receive at the University of Toronto Department of Family and Community Medicine. Seventy field notes written between 2015 and 2017 by clinical educators for residents who scored "below expectation" in the CanMEDS professional role were analyzed. From free-text comments, the authors derived inductive codes, amalgamated the codes into themes, and measured the frequency of the occurrence of the codes. The authors then mapped the themes to the key competencies of the CanMEDS professional role. RESULTS From the field notes, 7 themes emerged that described reasons for poor performance. Lack of collegiality, failure to adhere to standards of practice or legal guidelines, and lack of reflection or self-learning were identified as major issues. Other themes were failure to maintain boundaries, taking actions that could have a negative impact on patient care, failure to maintain patient confidentiality, and failure to engage in self-care. When the themes were mapped to the key competencies in the CanMEDS professional role, most related to the competency "commitment to the profession." CONCLUSIONS This study highlights aspects of professional conduct with which residents struggle and suggests that the way professionalism is taught in residency programs, and at all medical training levels, should be reassessed. Educational interventions that emphasize learners' commitment to the profession could enhance the development of more practitioners who are consummate professionals.
Affiliation(s)
- Gill Kazevman
- G. Kazevman is a third-year medical student, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Jessica C Y Ng
- J.C.Y. Ng is a graduate of the University of Toronto, Scarborough, Ontario, Canada
- Jessica L Marshall
- J.L. Marshall is a graduate of the University of Toronto, Scarborough, Ontario, Canada
- Morgan Slater
- M. Slater is a postdoctoral fellow, Department of Family Medicine, Queen's University School of Medicine, Kingston, Ontario, Canada
- Fok-Han Leung
- F.-H. Leung is associate professor, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Charlie B Guiang
- C.B. Guiang is assistant professor and resident academic project coordinator, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, and physician co-lead, Wellesley-St. James Town Health Centre, Unity Health Toronto, Toronto, Ontario, Canada
6
Lafleur A, Côté L, Witteman HO. Analysis of Supervisors' Feedback to Residents on Communicator, Collaborator, and Professional Roles During Case Discussions. J Grad Med Educ 2021; 13:246-256. [PMID: 33897959 PMCID: PMC8054588 DOI: 10.4300/jgme-d-20-00842.1]
Abstract
BACKGROUND Literature examining the feedback supervisors give to residents during case discussions in the realms of communication, collaboration, and professional roles (intrinsic roles) focuses on analyses of written feedback and self-reporting. OBJECTIVES We quantified how much of the supervisors' verbal feedback time targeted residents' intrinsic roles and how well feedback time was aligned with the role targeted by each case. We analyzed the educational goals of this feedback. We assessed whether feedback content differed depending on whether the residents implied or explicitly expressed a need for particular feedback. METHODS This was a mixed-methods study conducted from 2017 to 2019. We created scripted cases for radiology and internal medicine residents to present to supervisors, then analyzed the feedback given both qualitatively and quantitatively. The cases were designed to highlight the CanMEDS intrinsic roles of communicator, collaborator, and professional. RESULTS Radiologists (n = 15) spent 22% of case discussions providing feedback on intrinsic roles (48% aligned): 28% when the case targeted the communicator role, 14% for collaborator, and 27% for professional. Internists (n = 15) spent 70% of discussions on intrinsic roles (56% aligned): 66% for communicator, 73% for collaborator, and 72% for professional. Radiologists' goals were to offer advice (66%), reflections (21%), and agreements (7%). Internists offered advice (41%), reflections (40%), and clarifying questions (10%). We saw no consistent effects when residents explicitly requested feedback on an intrinsic role. CONCLUSIONS Case discussions represent frequent opportunities for substantial feedback on intrinsic roles, largely aligned with the clinical case. Supervisors predominantly offered monologues of advice and agreements.
Affiliation(s)
- Alexandre Lafleur
- Alexandre Lafleur, MD, MHPE, is Associate Clinical Professor, Department of Medicine, Laval University Faculty of Medicine, Quebec City, Canada, and Co-Chairholder, CMA-MD Educational Leadership Chair in Health Professions Education
- Luc Côté
- Luc Côté, MSW, PhD, is Professor and Medical Education Researcher, Department of Family and Emergency Medicine, Office of Education and Continuing Professional Development, Laval University Faculty of Medicine, Quebec City, Canada
- Holly O. Witteman
- Holly O. Witteman, PhD, is Associate Professor, Department of Family and Emergency Medicine, Office of Education and Continuing Professional Development, Laval University Faculty of Medicine, Quebec City, Canada
7
Comparing the Ottawa Emergency Department Shift Observation Tool (O-EDShOT) to the traditional daily encounter card: measuring the quality of documented assessments. Can J Emerg Med 2021; 23:383-389. [PMID: 33512695 DOI: 10.1007/s43678-020-00070-y]
Abstract
OBJECTIVES The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) is a workplace-based assessment designed to assess a trainee's performance across an entire shift. It was developed in response to validity concerns with traditional end-of-shift workplace-based assessments, such as the daily encounter card (DEC). The O-EDShOT previously demonstrated strong psychometric characteristics; however, it remains unknown whether the O-EDShOT facilitates measurable improvements in the quality of documented assessments compared to daily encounter cards. METHODS Three randomly selected daily encounter cards and three O-EDShOTs completed by 24 faculty were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published 9-item quantitative measure of the quality of a completed workplace-based assessment. Automated-CCERR (A-CCERR) scores, which do not require raters, were also calculated. Paired sample t tests were conducted to compare the quality of assessments between O-EDShOTs and DECs as measured by the CCERR and A-CCERR. RESULTS CCERR scores were significantly higher for O-EDShOTs (mean(SD) = 25.6(2.6)) compared to daily encounter cards (21.5(3.9); t(23) = 5.2, p < 0.001, d = 1.1). A-CCERR scores were also significantly higher for O-EDShOTs (mean(SD) = 18.5(1.6)) than for daily encounter cards (15.5(1.2); t(24) = 8.4, p < 0.001). CCERR items 1, 4 and 9 were rated significantly higher for O-EDShOTs compared to daily encounter cards. CONCLUSIONS The O-EDShOT yields higher quality documented assessments when compared to the traditional end-of-shift daily encounter card. Our results provide additional validity evidence for the O-EDShOT as an assessment tool for capturing trainee on-shift performance that can be used as a stimulus for actionable feedback and as a source for high-quality workplace-based assessment data to inform decisions about emergency medicine trainee progress and promotion.
8
Cheung WJ, Wood TJ, Gofton W, Dewhirst S, Dudek N. The Ottawa Emergency Department Shift Observation Tool (O-EDShOT): A New Tool for Assessing Resident Competence in the Emergency Department. AEM Education and Training 2020; 4:359-368. [PMID: 33150278 PMCID: PMC7592826 DOI: 10.1002/aet2.10419]
Abstract
OBJECTIVES The outcome of emergency medicine (EM) training is to produce physicians who can competently run an emergency department (ED) shift. However, there are few tools with supporting validity evidence specifically designed to assess multiple key competencies across an entire shift. The investigators developed and gathered validity evidence for a novel entrustment-based tool to assess a resident's ability to safely run an ED shift. METHODS Through a nominal group technique, local and national stakeholders identified dimensions of performance that are reflective of a competent ED physician and are required to safely manage an ED shift. These were included as items in the Ottawa Emergency Department Shift Observation Tool (O-EDShOT), and each item was scored using an entrustment-based rating scale. The tool was implemented in 2018 at the University of Ottawa Department of Emergency Medicine, and quantitative data and qualitative feedback were collected over 6 months. RESULTS A total of 1,141 forms were completed by 78 physicians for 45 residents. An analysis of variance demonstrated an effect of training level with statistically significant increases in mean O-EDShOT scores with each subsequent postgraduate year (p < 0.001). Scores did not vary by ED treatment area. Residents rated as able to safely run the shift had significantly higher mean ± SD scores (4.8 ± 0.3) than those rated as not able (3.8 ± 0.6; p < 0.001). Faculty and residents reported that the tool was feasible to use and facilitated actionable feedback aimed at progression toward independent practice. CONCLUSIONS The O-EDShOT successfully discriminated between trainees of different levels regardless of ED treatment area. Multiple sources of validity evidence support the O-EDShOT as a tool to assess a resident's ability to safely run an ED shift. It can serve as a stimulus for daily observation and feedback making it practical to use within an EM residency program.
Affiliation(s)
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J. Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton
- Department of Surgery, Division of Orthopaedic Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Nancy Dudek
- Department of Medicine, Division of Physical Medicine and Rehabilitation, University of Ottawa, Ottawa, Ontario, Canada
9
Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. The Quality of Assessment of Learning (QuAL) Score: Validity Evidence for a Scoring System Aimed at Rating Short, Workplace-Based Comments on Trainee Performance. Teaching and Learning in Medicine 2020; 32:319-329. [PMID: 32013584 DOI: 10.1080/10401334.2019.1708365]
Abstract
Construct: This study seeks to determine validity evidence for the Quality of Assessment for Learning score (QuAL score), which was created to evaluate short qualitative comments that are related to specific scores entered into a workplace-based assessment, common within the competency-based medical education (CBME) context. Background: In the age of CBME, qualitative comments play an important role in clarifying the quantitative scores rendered by observers at the bedside. Currently there are few practical tools that evaluate mixed data (e.g. associated score-and-comment data), other than the comprehensive Completed Clinical Evaluation Report Rating tool (CCERR) that was originally derived to rate end-of-rotation reports. Approach: A multi-center, randomized cohort-based rating exercise was conducted to evaluate the rating properties of the QuAL score as compared to the CCERR. One group rated comments using the QuAL score, and the other group rated comments using the CCERR. A generalizability study (G-study) and a decision study (D-study) were conducted to determine the number of meta-raters for a reliable rating (phi-coefficient target of >0.80). Both scores were correlated against raters' gestalt perceptions of utility for both faculty and residents reading the scores. Results: Twenty-five meta-raters from 20 sites participated in this rating exercise. The G-study revealed that the CCERR group (n = 13) rated the comments with a very high reliability (Phi = 0.97). Meanwhile, the QuAL group (n = 12) rated the comments with a similarly high reliability (Phi = 0.97). The QuAL score required only two raters to reach an acceptable target reliability of >0.80, while the CCERR required three. The QuAL score correlated with perceptions of utility (meta-rater usefulness, Pearson's r = 0.69, p < 0.001; perceived usefulness for trainee, r = 0.74, p < 0.001). The CCERR performed similarly, correlating with perceived faculty (r = 0.67, p < 0.001) and resident utility (r = 0.79, p < 0.001). Conclusions: The QuAL score is a reliable rating score that correlates well with perceptions of utility. The QuAL score may be useful for rating shorter comments generated by workplace-based assessments.
Affiliation(s)
- Teresa M Chan
- Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Christopher Sampson
- Department of Emergency Medicine, University of Missouri, Columbia, Missouri, USA
- Sandra Monteiro
- Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
10
McDonald M, Lavelle C, Wen M, Sherbino J, Hulme J. The state of health advocacy training in postgraduate medical education: a scoping review. Medical Education 2019; 53:1209-1220. [PMID: 31430838 DOI: 10.1111/medu.13929]
Abstract
CONTEXT Health advocacy is an essential component of postgraduate medical education, and is part of many physician competency frameworks such as the Canadian Medical Education Directives for Specialists (CanMEDS) roles. There is little consensus about how advocacy should be taught and assessed in the postgraduate context. There are no consolidated guides to assist in the design and implementation of postgraduate health advocacy curricula. OBJECTIVES This scoping review aims to identify and analyse existing literature pertaining to health advocacy education and assessment in postgraduate medicine. We specifically sought to summarise themes from the literature that may be useful to medical educators to inform further health advocacy curriculum interventions. METHODS MEDLINE, Embase and ERIC were searched using MeSH (Medical Subject Headings) and non-MeSH search terms. Additional articles were found using forward snowballing. The grey literature search included Google and relevant stakeholder websites, regulatory bodies, physician associations, government agencies and academic institutions. We followed a stepwise scoping review methodology, followed by thematic analysis using an inductive approach. RESULTS Of the 123 documents reviewed in full, five major themes emerged: (i) conceptions of health advocacy have evolved towards advocating with rather than for patients, communities and populations; (ii) longitudinal curricula were less common but appeared the most promising, often linked to scholarly or policy objectives; (iii) hands-on, immersive opportunities build competence and confidence; (iv) community-identified needs and partnerships are increasingly considered in designing curriculum; and (v) resident-led and motivated programmes appear to engage residents and allow for achievement of stated outcomes. There remain significant challenges to assessment of health advocacy competencies, and assessment tools for macro-level health advocacy were notably absent. CONCLUSIONS There is considerable heterogeneity in the way health advocacy is taught, assessed and incorporated into postgraduate curricula across programmes and disciplines. We consolidated recommendations from the literature to inform further health advocacy curriculum design, implementation and assessment.
Affiliation(s)
- Madeline McDonald
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Conor Lavelle
- Department of Emergency Medicine, University of Toronto, Toronto, Ontario, Canada
- Mei Wen
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Jonathan Sherbino
- Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Jennifer Hulme
- Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada
11
Gawad N, Fowler A, Mimeault R, Raiche I. The Inter-Rater Reliability of Technical Skills Assessment and Retention of Rater Training. Journal of Surgical Education 2019; 76:1088-1093. [PMID: 30709756 DOI: 10.1016/j.jsurg.2019.01.001]
Abstract
BACKGROUND The inter-rater reliability (IRR) of laparoscopic skills assessment is usually determined in the context of motivated raters from a single subspecialty practice group with significant experience using similar tools. The purpose of this study was to determine the IRR among attending surgeons of different experience and practices, the extent of rater training that is necessary to achieve good IRR, and if rater training is retained over periods of nonuse. METHODS In Part 1, 5 surgeons of different practice backgrounds assessed 3 laparoscopic cholecystectomy videos using the Global Operative Assessment of Laparoscopic Skills instrument. In Part 2, 2 of the surgeons assessed a total of 33 videos over 5 scoring sessions distributed across 6 months. They participated in 2 different training sessions, and retention was tested in the other 3 sessions. IRR was calculated for Parts 1 and 2 with an intraclass correlation (ICC) in a 2-way random-effects model. RESULTS The ICC for Part 1 was poor (ICC = 0.26). In Part 2, the ICC was highest after each training session (scoring #1 ICC = 0.76, scoring #3 ICC = 0.74). The ICC was not retained 1.5 months after the brief video-based training session (scoring #2 ICC = -0.17). The ICC was retained 2.5 months after the in-depth discussion training session (scoring #4 ICC = 0.70), but not 4.5 months later (scoring #5 ICC = 0.04). CONCLUSIONS Good IRR is not implicit among surgeons with varying backgrounds and experience. Good IRR can be achieved with different types of rater training, but the impact of rater training is lost in periods of nonuse. This suggests the need for further study of the IRR of technical skills assessment when performed by the wide variety of surgeon raters as is commonly encountered in the environment of postgraduate resident assessment.
Affiliation(s)
- Nada Gawad
- Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; The Ottawa Hospital, Ottawa, Ontario, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
- Amanda Fowler
- Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; The Ottawa Hospital, Ottawa, Ontario, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
- Richard Mimeault
- Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; The Ottawa Hospital, Ottawa, Ontario, Canada
- Isabelle Raiche
- Division of General Surgery, Department of Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; The Ottawa Hospital, Ottawa, Ontario, Canada; Department of Innovation in Medical Education (DIME), University of Ottawa, Ottawa, Ontario, Canada
|
12
|
Faculty development in the age of competency-based medical education: A needs assessment of Canadian emergency medicine faculty and senior trainees. CAN J EMERG MED 2019; 21:527-534. [PMID: 31113499 DOI: 10.1017/cem.2019.343] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
OBJECTIVES The Royal College of Physicians and Surgeons of Canada (RCPSC) emergency medicine (EM) programs transitioned to the Competence by Design training framework in July 2018. Prior to this transition, a nation-wide survey was conducted to gain a better understanding of EM faculty and senior resident attitudes towards the implementation of this new program of assessment. METHODS A multi-site, cross-sectional needs assessment survey was conducted. We aimed to document perceptions about competency-based medical education, attitudes towards implementation, and perceived/prompted/unperceived faculty development needs. EM faculty and senior residents were nominated by program directors across RCPSC EM programs. Simple descriptive statistics were used to analyse the data. RESULTS Between February and April 2018, 47 participants completed the survey (58.8% response rate). Most respondents (89.4%) thought learners should receive feedback during every shift; 55.3% felt that they provided adequate feedback. Many respondents (78.7%) felt that the ED would allow for direct observation, and most (91.5%) participants were confident that they could incorporate workplace-based assessments (WBAs). Although a fair number of respondents (44.7%) felt that Competence by Design would not impact patient care, some (17.0%) were worried that it may negatively impact it. Perceived faculty development priorities included feedback delivery, completing WBAs, and resident promotion decisions. CONCLUSIONS RCPSC EM faculty have positive attitudes towards competency-based medical education-relevant concepts such as feedback and opportunities for direct observation via WBAs. Perceived threats to Competence by Design implementation included concerns that patient care and trainee education might be negatively impacted. Faculty development should concentrate on further developing supervisors' teaching skills, focusing on feedback using WBAs.
|
13
|
McConnell M, Gu A, Arshad A, Mokhtari A, Azzam K. An innovative approach to identifying learning needs for intrinsic CanMEDS roles in continuing professional development. MEDICAL EDUCATION ONLINE 2018; 23:1497374. [PMID: 30010510 PMCID: PMC6052411 DOI: 10.1080/10872981.2018.1497374] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2018] [Accepted: 07/01/2018] [Indexed: 05/23/2023]
Abstract
CONTEXT The CanMEDS framework promotes the development of competencies required to be an effective physician. However, it is still not well understood how to apply such frameworks to CPD contexts, particularly with respect to intrinsic competencies. OBJECTIVE This study explores whether physician narratives around challenging cases would provide information regarding learning needs that could help guide the development of CPD activities for intrinsic CanMEDS competencies. METHODS We surveyed medical and surgical specialists from Southern Ontario using an online survey. To assess perceived needs, participants were asked, 'Describe three CPD topics you would like to learn about in the next 12 months.' To identify learning needs that may have arisen from problems encountered in practice, participants were asked, 'Describe three challenging situations encountered in the past 12 months.' Responses to the two open-ended questions were analyzed using thematic content analysis. RESULTS Responses were received from 411 physicians, resulting in 226 intrinsic CanMEDS codes for perceived learning needs and 210 intrinsic codes for challenges encountered in practice. Discrepancies in the frequency of intrinsic roles were observed between the two questions. Specifically, Leader (28%), Scholar (43%), and Professional (16%) roles were frequently described as perceived learning needs, as opposed to challenges in practice (Leader: 3%; Scholar: 2%; Professional: 8%). Conversely, Communicator (39%), Health Advocate (39%), and to a lesser extent Collaborator (11%) roles were frequently described in narratives surrounding challenges in practice, but appeared in <10% of descriptions of perceived learning needs (Communicator: 4%; Health Advocate: 6%; Collaborator: 3%). CONCLUSION The present study provides insight into potential learning needs associated with intrinsic CanMEDS competencies.
Discrepancies in the frequency of intrinsic CanMEDS roles coded for perceived learning needs and challenges encountered in practice may provide insight into the selection and design of CPD activities.
Affiliation(s)
- Meghan McConnell
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Canada
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, Canada
- Ada Gu
- Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Aysha Arshad
- Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Arastoo Mokhtari
- Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Khalid Azzam
- Faculty of Health Sciences, McMaster University, Hamilton, Canada
|
14
|
Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in Internal Medicine. BMC MEDICAL EDUCATION 2018; 18:218. [PMID: 30236097 PMCID: PMC6148769 DOI: 10.1186/s12909-018-1327-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2018] [Accepted: 09/13/2018] [Indexed: 05/16/2023]
Abstract
BACKGROUND Workplace based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment but was developed in the surgical setting, and it is not known if it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that tools used have proper psychometric data to support them. METHODS Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1), seniors (PGY2s and PGY3s), and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. The range of ratings varied from 2 (I had to talk them through) to 5 (I did not need to be there) for most items. Mean scores differed significantly by training level (p < .001), with juniors having lower ratings (M = 3.80 (out of 5), SD = 0.49) than seniors (M = 4.22, SD = 0.47), who had lower ratings than fellows (M = 4.70, SD = 0.36).
Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient that corresponds to internal consistency is 0.92. CONCLUSIONS This study's psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
Affiliation(s)
- Samantha Halman
- Department of Medicine, the University of Ottawa, The Ottawa Hospital General Campus, 501 Smyth Road, Box 209, Ottawa, Ontario K1H 8L6 Canada
- Janelle Rekman
- Department of Surgical Education, the University of Ottawa, The Ottawa Hospital Civic Campus, Loeb Research Building - Main Floor WM150b, 725 Parkdale Avenue, C/O Isabel Menard, Ottawa, Ontario K1Y 4E9 Canada
- Timothy Wood
- Department of Innovation in Medical Education, Faculty of Medicine, the University of Ottawa, 850 Peter Morand Crescent (Room 102), Ottawa, Ontario K1G 5Z3 Canada
- Andrew Baird
- Department of Medicine, the University of Ottawa, The Ottawa Hospital Parkdale Campus, Room 162, 1053 Carling Avenue, C/O Odile Kaufmann, Ottawa, Ontario K1Y 4E9 Canada
- Wade Gofton
- Department of Surgical Education, the University of Ottawa, Ottawa Hospital - Civic Campus, Suite J15, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9 Canada
- Nancy Dudek
- Department of Medicine, the University of Ottawa, The Rehabilitation Centre, 505 Smyth Road, Ottawa, Ontario K1H 8M2 Canada
|
15
|
Chan T, Sebok‐Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning Analytics in Medical Education Assessment: The Past, the Present, and the Future. AEM EDUCATION AND TRAINING 2018; 2:178-187. [PMID: 30051086 PMCID: PMC6001721 DOI: 10.1002/aet2.10087] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 01/30/2018] [Indexed: 05/09/2023]
Abstract
With the implementation of competency-based medical education (CBME) in emergency medicine, residency programs will amass substantial amounts of qualitative and quantitative data about trainees' performances. This increased volume of data will challenge traditional processes for assessing trainees and remediating training deficiencies. At the intersection of trainee performance data and statistical modeling lies the field of medical learning analytics. At a local training program level, learning analytics has the potential to assist program directors and competency committees with interpreting assessment data to inform decision making. On a broader level, learning analytics can be used to explore system questions and identify problems that may impact our educational programs. Scholars outside of health professions education have been exploring the use of learning analytics for years and their theories and applications have the potential to inform our implementation of CBME. The purpose of this review is to characterize the methodologies of learning analytics and explore their potential to guide new forms of assessment within medical education.
Affiliation(s)
- Teresa Chan
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Stefanie Sebok-Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, London, Ontario, Canada
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Alyssa Wise
- Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Science, McMaster University, Hamilton, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Martin Pusic
- Department of Emergency Medicine, NYU School of Medicine, New York, NY
|
16
|
Bajwa NM, Yudkowsky R, Belli D, Vu NV, Park YS. Validity Evidence for a Residency Admissions Standardized Assessment Letter for Pediatrics. TEACHING AND LEARNING IN MEDICINE 2018; 30:173-183. [PMID: 29190140 DOI: 10.1080/10401334.2017.1367297] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Construct: This study aims to provide validity evidence for the standardized Assessment Letter for Pediatrics as a measure of competencies expected of a 1st-year pediatrics resident as part of a pediatric residency admissions process. BACKGROUND The Narrative Letter of Recommendation is a frequently used tool in the residency admissions process even though it has poor interrater reliability, lacks pertinent content, and does not correlate with residency performance. A newer tool, the Standardized Letter, has shown validity evidence for content and interrater reliability in other specialties. We sought to develop and provide validity evidence for the standardized Assessment Letter for Pediatrics. APPROACH All 2012 and 2013 applicants invited to interview at the University of Geneva Pediatrics Residency Program provided 2 standardized Assessment Letters. Content for the letter was based on CanMEDS roles and ratings of 6 desired competencies and an overall assessment. Validity evidence was gathered for internal structure (Cronbach's alpha and generalizability), response process (interrater reliability with intraclass correlation), relations to other variables (Pearson's correlation coefficient), and consequences (logistic regression to predict admission). RESULTS One hundred fourteen faculty completed 142 standardized Assessment Letters for 71 applicants. Average overall assessment was 3.0 of 4 (SD = 0.59). Cronbach's alpha was 0.93. The G-coefficient was 0.59. The decision study projected that four Assessment Letters are needed to attain a G-coefficient of 0.73. Applicant variance (28.5%) indicated high applicant differentiation. The Assessment Letter intraclass correlation coefficient was 0.51, 95% confidence interval (CI) [0.43, 0.59]. Assessment Letter scores were correlated with the structured interview (r = .28), 95% CI [0.05, 0.51]; global rating (r = .36), 95% CI [0.13, 0.58]; and admissions decision (r = .25), 95% CI [0.02, 0.46].
Assessment Letter scores did not predict the admissions decision (odds ratio = 1.67, p = .37) after controlling for the unique contribution of the structured interview and global rating scores. CONCLUSION Validity evidence supports use of the Assessment Letter for Pediatrics; future studies should refine items to improve predictive validity and explore how to best integrate the Assessment Letter into the residency admissions process.
Affiliation(s)
- Nadia M Bajwa
- Department of Child and Adolescent Medicine, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland
- Rachel Yudkowsky
- Department of Medical Education, University of Illinois at Chicago, College of Medicine, Chicago, Illinois, USA
- Dominique Belli
- Department of Child and Adolescent Medicine, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland
- Nu Viet Vu
- Unit of Development and Research in Medical Education, University of Geneva, Faculty of Medicine, Geneva, Switzerland
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago, College of Medicine, Chicago, Illinois, USA
|
17
|
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. MEDICAL EDUCATION 2017; 51:1260-1268. [PMID: 28971502 DOI: 10.1111/medu.13415] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Revised: 05/30/2017] [Accepted: 07/11/2017] [Indexed: 05/12/2023]
Abstract
CONTEXT Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the off-service group (p = 0.62, r2 = 0.05). CONCLUSIONS Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs.
However, resident performance was found to affect assessor behaviours in the on-service group, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for those residents demonstrating appropriate clinical competence progression.
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
|
18
|
19
|
Chan TM. Nuance and Noise: Lessons Learned From Longitudinal Aggregated Assessment Data. J Grad Med Educ 2017; 9:724-729. [PMID: 29270262 PMCID: PMC5734327 DOI: 10.4300/jgme-d-17-00086.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Revised: 07/04/2017] [Accepted: 08/22/2017] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Competency-based medical education requires frequent assessment to tailor learning experiences to the needs of trainees. In 2012, we implemented the McMaster Modular Assessment Program, which captures shift-based assessments of resident global performance. OBJECTIVE We described patterns (ie, trends and sources of variance) in aggregated workplace-based assessment data. METHODS Emergency medicine residents and faculty members from 3 Canadian university-affiliated, urban, tertiary care teaching hospitals participated in this study. During each shift, supervising physicians rated residents' performance using a behaviorally anchored scale that hinged on endorsements for progression. We used a multilevel regression model to examine the relationship between global rating scores and time, adjusting for data clustering by resident and rater. RESULTS We analyzed data from 23 second-year residents between July 2012 and June 2015, which yielded 1498 unique ratings (65 ± 18.5 per resident) from 82 raters. The model estimated an average score of 5.7 ± 0.6 at baseline, with an increase of 0.005 ± 0.01 for each additional assessment. There was significant variation among residents' starting score (y-intercept) and trajectory (slope). CONCLUSIONS Our model suggests that residents begin at different points and progress at different rates. Meta-raters such as program directors and Clinical Competency Committee members should bear in mind that progression may take time and learning trajectories will be nuanced. Individuals involved in ratings should be aware of sources of noise in the system, including the raters themselves.
|
20
|
Bugaj TJ, Schmid C, Koechel A, Stiepak J, Groener JB, Herzog W, Nikendei C. Shedding light into the black box: A prospective longitudinal study identifying the CanMEDS roles of final year medical students' on-ward activities. MEDICAL TEACHER 2017; 39:883-890. [PMID: 28413889 DOI: 10.1080/0142159x.2017.1309377] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
INTRODUCTION To the best of our knowledge, a rigorous prospective analysis of final year medical students' (FY medical students) activity profiles during workplace learning is lacking. The present study investigated the CanMEDS characteristics of all on-ward activities performed by internal medicine FY medical students. We tested the hypotheses that during FY medical student workplace training (I) routine activities are predominantly performed, while supervised, more complex activities are underrepresented, with (II) FY medical students performing an insufficient number of autonomous activities, and that (III) the CanMEDS roles of the Communicator and the Professional prevail. METHODS During the second and the sixth week of their final year trimester at the University of Heidelberg Medical Hospital, N = 34 FY medical students (73% female; mean age 26.4 ± 2.4) were asked to keep a detailed record of all their on-ward activities and to document the duration, mode of action (active versus passive; independent versus supervised), estimated relevance for later practice, and difficulty level in specially designed activity logbooks. CanMEDS roles were assigned to the documented activities via post-hoc expert consensus. RESULTS A total of 4308 activities lasting 2211.4 h were documented. Drawing blood (20.8%) was the most frequently documented medical activity, followed by full admission procedures (9.6%). Non-medical activities accounted for 14.9% of the time. Overall, 82.1% of all medical activities performed went unsupervised. The Communicator (42%), the Professional (38%), and the Collaborator (7%) were assigned as the top three CanMEDS roles. CONCLUSIONS The results call for increased efforts in creating more authentic learning experiences for FY medical students, shifting towards more complex, supervised tasks and improved team integration.
Affiliation(s)
- Till Johannes Bugaj
- Department of General Internal and Psychosomatic Medicine, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Carolin Schmid
- Department of General Internal and Psychosomatic Medicine, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Ansgar Koechel
- Department of Dermatology, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Jan Stiepak
- Department of Cardiology, Angiology, and Pneumology, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Jan B Groener
- Department of Endocrinology and Metabolism, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Wolfgang Herzog
- Department of General Internal and Psychosomatic Medicine, University of Heidelberg Medical Hospital, Heidelberg, Germany
- Christoph Nikendei
- Department of General Internal and Psychosomatic Medicine, University of Heidelberg Medical Hospital, Heidelberg, Germany
|
21
|
Warrington S, Beeson M, Bradford A. Inter-rater Agreement of End-of-shift Evaluations Based on a Single Encounter. West J Emerg Med 2017; 18:518-524. [PMID: 28435505 PMCID: PMC5391904 DOI: 10.5811/westjem.2016.12.32014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 12/14/2016] [Accepted: 12/30/2016] [Indexed: 11/11/2022] Open
Abstract
INTRODUCTION End-of-shift evaluation (ESE) forms, also known as daily encounter cards, represent a subset of encounter-based assessment forms. Encounter cards have become prevalent for formative evaluation, with some suggesting a potential for summative evaluation. Our objective was to evaluate the inter-rater agreement of ESE forms using a single scripted encounter at a conference of emergency medicine (EM) educators. METHODS Following institutional review board exemption, we created a scripted video simulating an encounter between an intern and a patient with an ankle injury. That video was shown during a lecture at the Council of EM Residency Directors' Academic Assembly, with attendees asked to evaluate the "resident" using one of eight possible ESE forms randomly distributed. Descriptive statistics were used to analyze the results, with Fleiss' kappa to evaluate inter-rater agreement. RESULTS Most of the 324 respondents were leadership in residency programs (66%), with a range of 29-47 responses per evaluation form. Few individuals (5%) felt they were experts in assessing residents based on EM milestones. Fleiss' kappa ranged from 0.157 to 0.308 and did not perform much better in two post-hoc subgroup analyses. CONCLUSION The kappa ranges found show only slight to fair inter-rater agreement and raise concerns about the use of ESE forms in assessment of EM residents. Despite limitations present in this study, these results and a lack of other studies on inter-rater agreement of encounter cards should prompt further studies of such methods of assessment. Additionally, EM educators should focus research on methods to improve inter-rater agreement of ESE forms or evaluate other methods of assessment of EM residents.
Affiliation(s)
- Steven Warrington
- Kaweah Delta Medical Center, Department of Emergency Medicine, Visalia, California
- Michael Beeson
- Akron General Medical Center, Department of Emergency Medicine, Akron, Ohio
- Amber Bradford
- Akron General Medical Center, Department of Emergency Medicine, Akron, Ohio
|
22
|
Abstract
In the medical profession, activities related to ensuring access to care, navigating the system, mobilizing resources, addressing health inequities, influencing health policy and creating system change are known as health advocacy. Foundational concepts in health advocacy include social determinants of health and health inequities. The social determinants of health (i.e. the conditions in which people live and work) account for a significant proportion of an individual's and a population's health outcomes. Health inequities are disparities in health between populations, perpetuated by economic, social, and political forces. Although it is clear that efforts to improve the health of an individual or population must consider "upstream" factors, how this is operationalized in medicine and medical education is controversial. There is a lack of clarity around how health advocacy is delineated, how physicians' scope of responsibility is defined, and how teaching and assessment are conceptualized and enacted. Numerous curricular interventions have been described in the literature; however, regardless of the success of isolated interventions, understanding health advocacy instruction, assessment and evaluation will require a broader examination of processes, practices and values throughout medicine and medical education. To support the instruction, assessment and evaluation of health advocacy, a novel framework for health advocacy is introduced. This framework was developed for several purposes: defining and delineating different types and approaches to advocacy, generating a "roadmap" of possible advocacy activities, establishing shared language and meaning to support communication and collaboration across disciplines, and providing a tool for the assessment of learners and for the evaluation of teaching and programs. Current approaches to teaching and assessment of health advocacy are outlined, as well as suggestions for future directions and considerations.
Affiliation(s)
- Maria Hubinette
- Department of Family Practice, University of British Columbia, Canada
- Sarah Dobson
- School of Nursing, University of British Columbia, Canada
- Ian Scott
- Centre for Health Education Scholarship, University of British Columbia
- Jonathan Sherbino
- Department of Medicine, McMaster University, Canada
|
23
|
Gottlieb M, Boysen-Osborn M, Chan TM, Krzyzaniak SM, Pineda N, Spector J, Sherbino J. Academic Primer Series: Eight Key Papers about Education Theory. West J Emerg Med 2017; 18:293-302. [PMID: 28210367 PMCID: PMC5305140 DOI: 10.5811/westjem.2016.11.32315] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2016] [Revised: 10/27/2016] [Accepted: 11/07/2016] [Indexed: 11/23/2022] Open
Abstract
Introduction Many teachers adopt instructional methods based on assumptions of best practices without attention to or knowledge of supporting education theory. Familiarity with a variety of theories informs education that is efficient, strategic, and evidence-based. As part of the Academic Life in Emergency Medicine Faculty Incubator Program, a list of key education theories for junior faculty was developed. Methods A list of key papers on theories relevant to medical education was generated using an expert panel, a virtual community of practice synthetic discussion, and a social media call for resources. A three-round, Delphi-informed voting methodology including novice and expert educators produced a rank order of the top papers. Results These educators identified 34 unique papers. Eleven papers described the general use of education theory, while 23 papers focused on a specific theory. The top three papers on general education theories and top five papers on specific education theory were selected and summarized. The relevance of each paper for junior faculty and faculty developers is also presented. Conclusion This paper presents a reading list of key papers for junior faculty in medical education roles. Three papers about general education theories and five papers about specific educational theories are identified and annotated. These papers may help provide foundational knowledge in education theory to inform junior faculty teaching practice.
Affiliation(s)
- Michael Gottlieb: Rush University Medical Center, Department of Emergency Medicine, Chicago, Illinois
- Megan Boysen-Osborn: University of California, Irvine, Department of Emergency Medicine, Irvine, California
- Teresa M Chan: McMaster University, Department of Medicine, Division of Emergency Medicine, Hamilton, Ontario, Canada
- Sara M Krzyzaniak: University of Illinois, Peoria, Department of Emergency Medicine, Peoria, Illinois
- Nicolas Pineda: Universidad San Sebastián, Medicina de Urgencia, Santiago, Chile
- Jordan Spector: Boston Medical Center, Department of Emergency Medicine, Boston, Massachusetts
- Jonathan Sherbino: McMaster University, Department of Medicine, Division of Emergency Medicine, Hamilton, Ontario, Canada

24
Abstract
BACKGROUND: The increasing use of workplace-based assessments (WBAs) in competency-based medical education has led to large data sets that assess resident performance longitudinally. With large data sets, problems that arise from missing data are increasingly likely.
OBJECTIVE: The purpose of this study is to examine (1) whether data are missing at random across various WBAs, and (2) the relationship between resident performance and the proportion of missing data.
METHODS: During 2012-2013, a total of 844 WBAs of CanMEDS Roles were completed for 9 second-year emergency medicine residents. To identify whether missing data were randomly distributed across various WBAs, the total number of missing data points was calculated for each Role. To examine whether the amount of missing data was related to resident performance, 5 faculty members rank-ordered the residents based on performance. A median rank score was calculated for each resident and was correlated with the proportion of missing data.
RESULTS: More data were missing for Health Advocate and Professional WBAs relative to other competencies (P < .001). Furthermore, resident rankings were not related to the proportion of missing data points (r = 0.29, P > .05).
CONCLUSIONS: The results of the present study illustrate that some CanMEDS Roles are less likely to be assessed than others. At the same time, the amount of missing data did not correlate with resident performance, suggesting lower-performing residents are no more likely to have missing data than their higher-performing peers. This article discusses several approaches to dealing with missing data.
Affiliation(s)
- Teresa M. Chan (corresponding author): BEd, MD, FRCPC, Hamilton General Hospital, McMaster Clinic, Room 254, 237 Barton Street East, Hamilton, ON L8L 2X2, Canada

25
Cheung WJ, Dudek N, Wood TJ, Frank JR. Daily Encounter Cards-Evaluating the Quality of Documented Assessments. J Grad Med Educ 2016; 8:601-604. [PMID: 27777675] [PMCID: PMC5058597] [DOI: 10.4300/jgme-d-15-00505.1]
Abstract
BACKGROUND: Concerns over the quality of work-based assessment (WBA) completion have resulted in faculty development and rater training initiatives. Daily encounter cards (DECs) are a common form of WBA used in ambulatory care and shift work settings. A tool is needed to evaluate initiatives aimed at improving the quality of completion of this widely used form of WBA.
OBJECTIVE: The completed clinical evaluation report rating (CCERR) was designed to provide a measure of the quality of documented assessments on in-training evaluation reports. The purpose of this study was to provide validity evidence to support using the CCERR to assess the quality of DEC completion.
METHODS: Six experts in resident assessment grouped 60 DECs into 3 quality categories (high, average, and poor) based on how informative each DEC was for reporting judgments of the resident's performance. Eight supervisors (blinded to the expert groupings) scored the 10 most representative DECs in each group using the CCERR. Mean scores were compared to determine if the CCERR could discriminate based on DEC quality.
RESULTS: Statistically significant differences in CCERR scores were observed between all quality groups (P < .001). A generalizability analysis demonstrated that the majority of score variation was due to differences in DECs. The reliability with a single rater was 0.95.
CONCLUSIONS: The CCERR is a reliable and valid tool to evaluate DEC quality. It can serve as an outcome measure for studying interventions targeted at improving the quality of assessments documented on DECs.
Affiliation(s)
- Warren J. Cheung (corresponding author): MD, MMEd, FRCPC, University of Ottawa, Department of Emergency Medicine, F-Main, Room EM-206, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9, Canada; 613.798.5555 ext 17196; fax 613.761.5488

26
Schoenherr JR, Hamstra SJ. Psychometrics and its discontents: an historical perspective on the discourse of the measurement tradition. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2016; 21:719-29. [PMID: 26303112] [DOI: 10.1007/s10459-015-9623-z]
Abstract
Psychometrics has recently undergone extensive criticism within the medical education literature. The use of quantitative measurement using psychometric instruments such as response scales is thought to emphasize a narrow range of relevant learner skills and competencies. Recent reviews and commentaries suggest that a paradigm shift might be presently underway. We argue for caution, in that the psychometrics approach and the quantitative account of competencies that it reflects is based on a rich discussion regarding measurement and scaling that led to the establishment of this paradigm. Rather than reflecting a homogeneous discipline focused on core competencies devoid of consideration of context, the psychometric community has a history of discourse and debate within the field, with an acknowledgement that the techniques and instruments developed within psychometrics are heuristics that must be used pragmatically.
Affiliation(s)
- Jordan Richard Schoenherr: Faculty of Medicine, University of Ottawa, Ottawa, Canada; Department of Psychology, Carleton University, Ottawa, Canada
- Stanley J Hamstra: Accreditation Council for Graduate Medical Education, 515 N. State Street, Suite 2000, Chicago, IL 60654, USA

27
Rekman J, Hamstra SJ, Dudek N, Wood T, Seabrook C, Gofton W. A New Instrument for Assessing Resident Competence in Surgical Clinic: The Ottawa Clinic Assessment Tool. JOURNAL OF SURGICAL EDUCATION 2016; 73:575-82. [PMID: 27052202] [DOI: 10.1016/j.jsurg.2016.02.003]
Abstract
BACKGROUND: The shift toward competency-based medical education has created a demand for feasible workplace-based assessment tools. Perhaps more important than competence in assessing an individual patient is the ability to successfully manage a surgical clinic. Trainee performance in clinic is a critical component of learning to manage a surgical practice, yet no assessment tool currently exists to assess surgery residents' daily performance in outpatient clinics. The development of a competency-based assessment tool, the Ottawa Clinic Assessment Tool (OCAT), is described here to address this gap.
STUDY DESIGN: A consensus group of experts was gathered to generate dimensions of performance reflective of a competent "generalist" surgeon in clinic. A 6-month pilot study of the OCAT was conducted in orthopedics, general surgery, and obstetrics and gynecology, with quantitative and qualitative evidence of validity collected. Two subsequent feedback sessions and a survey for staff and residents evaluated the OCAT for clarity and utility.
RESULTS: The OCAT is a 9-item tool with a global assessment item and 2 short-answer questions. Across the participating divisions, 44 staff surgeons completed 132 OCAT assessments of 79 residents. Psychometric data were collected as evidence of validity. Analysis of feedback indicated that the entrustability rating scale was useful for surgeons and residents and that the items could be correlated with individual competencies.
CONCLUSIONS: Multiple sources of validity evidence collected in this study demonstrate that the OCAT can measure resident clinic competency in a valid and feasible manner.
Affiliation(s)
- Janelle Rekman: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Stanley J Hamstra: Milestones Research and Evaluation at the Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Nancy Dudek: Department of Medicine, The Ottawa Hospital Rehabilitation Center, The University of Ottawa, Ottawa, Ontario, Canada
- Timothy Wood: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Christine Seabrook: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada

28
Capsule Commentary on Post et al., Rating the Quality of Entrustable Professional Activities: Content Validation and Associations with the Clinical Context. J Gen Intern Med 2016; 31:537. [PMID: 26956139] [PMCID: PMC4835370] [DOI: 10.1007/s11606-016-3662-x]
29
Renting N, Dornan T, Gans ROB, Borleffs JCC, Cohen-Schotanus J, Jaarsma ADC. What supervisors say in their feedback: construction of CanMEDS roles in workplace settings. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2016; 21:375-87. [PMID: 26342599] [PMCID: PMC4801985] [DOI: 10.1007/s10459-015-9634-9]
Abstract
The CanMEDS framework has been widely adopted in residency education, and feedback processes are guided by it. It is, however, only one of many influences on what is actually discussed in feedback. The sociohistorical culture of medicine and individual supervisors' contexts, experiences, and beliefs are also influential. Our aim was to find how CanMEDS roles are constructed in feedback in a postgraduate curriculum-in-action. We applied a set of discourse analytic tools to written feedback from 591 feedback forms from 7 hospitals, comprising 3150 feedback comments in which 126 supervisors provided feedback to 120 residents after observing their performance in authentic settings. The role of Collaborator was constructed in two different ways: a cooperative discourse of equality with other workers and patients, and a discourse that gave residents positions of power (delegating, asserting, and 'taking a firm stance'). Efficiency (being fast and to the point) emerged as an important attribute of physicians. Patients were seldom part of the discourses and, when they were, they were constructed as objects of communication and collaboration rather than partners. Although some of the discourses are in line with what might be expected, others were in striking contrast to the spirit of CanMEDS. This study's findings suggest that it takes more than a competency framework, evaluation instruments, and supervisor training to change the culture of workplaces. The impact on residents of training in such demanding, efficiency-focused clinical environments is an important topic for future research.
Affiliation(s)
- Nienke Renting: Center for Educational Development and Research in Health Professions, University Medical Center Groningen and University of Groningen, Groningen, The Netherlands
- Tim Dornan: Centre for Medical Education, Queen's University Belfast, Belfast, UK; Department of Education Development and Research, Maastricht University, Maastricht, The Netherlands
- Rijk O B Gans: Department of Internal Medicine, University Medical Center Groningen and University of Groningen, Groningen, The Netherlands
- Jan C C Borleffs: Center for Educational Development and Research in Health Professions, University Medical Center Groningen and University of Groningen, Groningen, The Netherlands
- Janke Cohen-Schotanus: Center for Educational Development and Research in Health Professions, University Medical Center Groningen and University of Groningen, Groningen, The Netherlands
- A Debbie C Jaarsma: Center for Educational Development and Research in Health Professions, University Medical Center Groningen and University of Groningen, Groningen, The Netherlands

30
Park YS, Zar FA, Norcini JJ, Tekian A. Competency Evaluations in the Next Accreditation System: Contributing to Guidelines and Implications. TEACHING AND LEARNING IN MEDICINE 2016; 28:135-145. [PMID: 26849397] [DOI: 10.1080/10401334.2016.1146607]
Abstract
CONSTRUCT: This study examines validity evidence of end-of-rotation evaluation scores used to measure competencies and milestones as part of the Next Accreditation System (NAS) of the Accreditation Council for Graduate Medical Education (ACGME).
BACKGROUND: Since the implementation of the milestones, end-of-rotation evaluations have surfaced as a potentially useful assessment method. However, validity evidence on the use of rotation evaluation scores as part of the NAS has not been studied. This article examines validity evidence for end-of-rotation evaluations that can contribute to developing guidelines that support the NAS.
APPROACH: Data from 2,701 end-of-rotation evaluations measuring 21 of 22 Internal Medicine milestones for 142 residents were analyzed (July 2013-June 2014). Descriptive statistics were used to measure the distribution of ratings by evaluators (faculty, n = 116; fellows, n = 59; peer residents, n = 131) and by postgraduate year. Generalizability analysis and higher-order confirmatory factor analysis were used to examine the internal structure of ratings. Psychometric implications for combining evaluation scores using composite score reliability were examined.
RESULTS: Milestone ratings were significantly higher for each subsequent year of training (15/21 milestones). Faculty evaluators had greater variability in ratings across milestones compared to fellows and residents; faculty ratings were generally correlated with milestone ratings from fellows (r = .45) and residents (r = .25), but lower correlations were found for Professionalism and Interpersonal and Communication Skills. The Φ-coefficient was .71, indicating good reliability. Internal structure supported a 6-factor solution, corresponding to the hierarchical relationship between the milestones and the 6 core competencies. Evaluation scores corresponding to Patient Care, Medical Knowledge, and Practice-Based Learning and Improvement had higher correlations with milestones reported to the ACGME. Mean evaluation ratings predicted problem residents (odds ratio = 5.82, p < .001).
CONCLUSIONS: Guidelines for rotation evaluations proposed in this study provide useful solutions that can help program directors make decisions on resident progress and contribute to assessment systems in graduate medical education.
Affiliation(s)
- Yoon Soo Park: Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Fred A Zar: Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- John J Norcini: Foundation for Advancement of International Medical Education and Research, Philadelphia, Pennsylvania, USA
- Ara Tekian: Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA

31
Farrell SE, Kuhn GJ, Coates WC, Shayne PH, Fisher J, Maggio LA, Lin M. Critical appraisal of emergency medicine education research: the best publications of 2013. Acad Emerg Med 2014; 21:1274-83. [PMID: 25377406] [DOI: 10.1111/acem.12507]
Abstract
OBJECTIVES: The objective was to critically appraise and highlight methodologically superior medical education research articles published in 2013 whose outcomes are pertinent to teaching and education in emergency medicine (EM).
METHODS: A search of the English-language literature in 2013 querying Education Resources Information Center (ERIC), PsychINFO, PubMed, and Scopus identified 251 EM-related studies using hypothesis-testing or observational investigations of educational interventions. Two reviewers independently screened all of the publications and removed articles using established exclusion criteria. Six reviewers then independently scored the remaining 43 publications using either a qualitative or a quantitative scoring system, based on the research methodology of each article. Each scoring system consisted of nine criteria. Selected criteria were based on accepted educational review literature and chosen a priori. Both scoring systems used parallel scoring metrics and have been used previously within this annual review.
RESULTS: Forty-three medical education research papers (37 quantitative and six qualitative studies) met the a priori criteria for inclusion and were reviewed. Six quantitative studies and one qualitative study were scored and ranked most highly by the reviewers as exemplary and are summarized in this article.
CONCLUSIONS: This annual critical appraisal aims to promote superior research in EM-related education by reviewing and highlighting seven of the 43 major education research studies that met a priori criteria and were published in 2013. Common methodologic pitfalls in the 2013 papers are noted, and current trends in medical education research in EM are discussed.
Affiliation(s)
- Susan E. Farrell: Partners Healthcare International, Harvard Medical School, Boston, MA
- Gloria J. Kuhn: Wayne State University School of Medicine, Detroit, MI
- Wendy C. Coates: Harbor-UCLA Medical Center, University of California at Los Angeles, Los Angeles, CA
- Jonathan Fisher: Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA

32