1
Braund H, Dalgarno N, O’Dell R, Taylor DR. Making assessment a team sport: a qualitative study of facilitated group feedback in internal medicine residency. Canadian Medical Education Journal 2024;15:14-26. [PMID: 38827914; PMCID: PMC11139793; DOI: 10.36834/cmej.75250]
Abstract
Purpose Competency-based medical education relies on feedback from workplace-based assessment (WBA) to direct learning. Unfortunately, WBAs often lack rich narrative feedback and show bias towards Medical Expert aspects of care. Building on research examining interactive assessment approaches, the Queen's University Internal Medicine residency program introduced a facilitated, team-based assessment initiative ("Feedback Fridays") in July 2017, aimed at improving holistic assessment of resident performance on the inpatient medicine teaching units. In this study, we aimed to explore how Feedback Fridays contributed to formative assessment of Internal Medicine residents within our current model of competency-based training. Method A total of 53 residents participated in facilitated, biweekly group assessment sessions during the 2017-2018 academic year. Each session was a 30-minute facilitated assessment discussion conducted with one inpatient team, which included medical students, residents, and their supervising attending. Feedback from the discussion was collected, summarized, and documented in narrative form in the residents' electronic WBA forms by the program's assessment officer. For research purposes, verbatim transcripts of the feedback sessions were analyzed thematically. Results The researchers identified four major themes in the feedback: communication, intra- and inter-personal awareness, leadership and teamwork, and learning opportunities. Although the feedback related to a broad range of activities, it showed a strong emphasis on competencies within the intrinsic CanMEDS roles. A clear formative focus in the feedback was another important finding. Conclusions The introduction of facilitated team-based assessment in the Queen's Internal Medicine program filled an important gap in WBA by providing learners with detailed feedback across all CanMEDS roles and constructive recommendations for identified areas for improvement.
Affiliation(s)
- Heather Braund
- Office of Professional Development and Educational Scholarship, Ontario, Canada
- Faculty of Education, Queen’s University, Ontario, Canada
- Nancy Dalgarno
- Office of Professional Development and Educational Scholarship, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- Rachel O’Dell
- Department of Internal Medicine, Faculty of Health Sciences, Queen’s University, Ontario, Canada
- David R Taylor
- Academy for Teachers and Educators, Department of Medicine, Queen’s University, Ontario, Canada
2
Yilmaz Y, Chan MK, Richardson D, Atkinson A, Bassilious E, Snell L, Chan TM. Defining new roles and competencies for administrative staff and faculty in the age of competency-based medical education. Medical Teacher 2023;45:395-403. [PMID: 36471921; DOI: 10.1080/0142159x.2022.2136517]
Abstract
PURPOSE The authors sought to define the new roles and competencies required of administrative staff and faculty in the age of CBME. METHOD A modified Delphi process was used to define the new CBME roles and competencies needed by faculty and administrative staff. We invited international experts in CBME (volunteers from the ICBME Collaborative email list), as well as faculty members and trainees identified via social media, to help determine the new competencies required of faculty and administrative staff in the CBME era. RESULTS Thirteen new roles were identified. The faculty-specific roles were: National Leader/Facilitator in CBME; Institutional/University Lead for CBME; Assessment Process & Systems Designer; Local CBME Lead; CBME-specific Faculty Developer or Trainer; Competence Committee Chair; Competence Committee Faculty Member; Faculty Academic Coach/Advisor or Support Person; Frontline Assessor; and Frontline Coach. The staff-specific roles were: Information Technology Lead; CBME Analytics/Data Support; and Competence Committee Administrative Assistant. CONCLUSIONS The authors present a new set of faculty and staff roles relevant to the CBME context. While some of these new roles may be incorporated into existing roles, it may be prudent to examine how best to ensure that all of them are supported in every CBME context.
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Department of Medical Education, Faculty of Medicine, Ege University, Izmir, Turkey
- Ming-Ka Chan
- Department of Pediatrics and Child Health, University of Manitoba, Winnipeg, Canada
- Denyse Richardson
- Department of Medicine, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Adelle Atkinson
- Department of Pediatrics, University of Toronto, Toronto, Canada
- Ereny Bassilious
- Department of Pediatrics, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Linda Snell
- Medicine and Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
3
Tomiak A, Linford G, McDonald M, Willms J, Hammad N. Implementation of Competency-Based Medical Education in a Canadian Medical Oncology Training Program: a First Year Retrospective Review. Journal of Cancer Education 2022;37:852-856. [PMID: 33108804; DOI: 10.1007/s13187-020-01895-y]
Abstract
As part of a university-wide initiative, competency-based medical education (CBME) was implemented in the Medical Oncology training program at Queen's University in July 2017. Stages, entrustable professional activities (EPAs), and required training experiences established by the Royal College of Physicians and Surgeons of Canada (RCPSC) national subspecialty committee were adopted. Entrada (Elentra), the electronic portfolio developed at Queen's University, was used for assessment collection. Between July 2017 and December 2018, participating faculty members completed 157 assessments. Eighty-nine percent were EPA assessments, with a median of 16 assessments per faculty member (range 1-40). Ninety-five percent of assessments included written "Comments" or "Next steps," and 56% of all assessments included specific or actionable feedback. Discussions between the program director, residents, program administrator, CBME education consultant, and CBME lead led to the identification of nine lessons learned during implementation. These centered on (1) faculty and resident development and engagement; (2) sharing the work of CBME; (3) collaboration and communication; (4) global assessment; (5) assessment plan challenges; (6) burden of CBME; (7) limitations of the e-portfolio; (8) importance of early tracking of resident progress; and (9) culture change. This article describes the experience of the authors and considers strategies that may be helpful to programs implementing CBME in their teaching and learning environment.
Affiliation(s)
- Anna Tomiak
- Department of Oncology, Queen's University, 25 King Street West, Kingston, K7L 5P9, Canada
- Geordie Linford
- Department of Oncology, Queen's University, 25 King Street West, Kingston, K7L 5P9, Canada
- Micheline McDonald
- Department of Oncology, Queen's University, 25 King Street West, Kingston, K7L 5P9, Canada
- Jane Willms
- Department of Oncology, Queen's University, 25 King Street West, Kingston, K7L 5P9, Canada
- Nazik Hammad
- Department of Oncology, Queen's University, 25 King Street West, Kingston, K7L 5P9, Canada
4
Sample S, Al Rimawi H, Bérczi B, Chorley A, Pardhan A, Chan TM. Seeing potential opportunities for teaching (SPOT): Evaluating a bundle of interventions to augment entrustable professional activity acquisition. AEM Education and Training 2021;5:e10631. [PMID: 34471797; PMCID: PMC8381386; DOI: 10.1002/aet2.10631]
Abstract
INTRODUCTION Within the Canadian competency-based medical education system, entrustable professional activities (EPAs) are used to assess residents on performed clinical duties. This study aimed to determine whether implementing a bundle of two interventions (a case-based discussion intervention and a rotation-based nudging system) could increase the number of EPA assessments completed for our trainees. METHODS The authors designed an intervention bundle with two components: 1) a case-based workshop in which trainees discussed which EPAs could be assessed with multiple cases and 2) a nudging system wherein each trainee was reminded of the EPAs that would be useful to them on each rotation in their first year. We conducted a retrospective program evaluation to compare the intervention cohort (2019) to two historical cohorts using similar EPAs (2017, 2018). RESULTS Data from 22 trainees (seven in 2017, eight in 2018, and seven in 2019) were analyzed. There was a marked increase in the total number of EPA assessments acquired in the 2019 cohort (average per resident = 285.7, 95% confidence interval [CI] = 256.1 to 312.3, range = 195-350) compared to the two other years (2018 [average = 132.4, 95% CI = 107.5 to 157.0, range = 107-167] and 2017 [average = 70.1, 95% CI = 45.3 to 91.0, range = 49-95]), yielding an effect size of Cohen's d = 4.02 for the intervention bundle. CONCLUSIONS Within the limitations of a small sample size, the two interventions (a case-based orientation and a nudging system) had a strong effect on the number of EPA assessments completed with PGY-1 residents. These strategies may be useful to others seeking to improve EPA assessment numbers in other specialties and clinical environments.
Affiliation(s)
- Spencer Sample
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Hussein Al Rimawi
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Beatrix Bérczi
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Alexander Chorley
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Alim Pardhan
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Teresa M. Chan
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
5
Elamin A, Obeidat M, Davis G. The ePortfolio in UK cardiology training: time for a new digital platform? The British Journal of Cardiology 2021;28:31. [PMID: 35747697; PMCID: PMC8988799; DOI: 10.5837/bjc.2021.031]
Abstract
The UK cardiology specialist training programme utilises the National Health Service (NHS) ePortfolio to ensure that adequate progression is being made during a trainee's career. The NHS ePortfolio has been used for 15 years, but many questions remain regarding its perceived learning value and usefulness for trainees and trainers. This qualitative study, conducted in the recent pre-COVID era, explored the perceived benefits of the NHS ePortfolio with cardiology trainees and trainers in two UK training deaneries. Questionnaires were sent to 66 trainees and 50 trainers. Half of the trainees felt that their development had benefited from use of the ePortfolio, 61% found it an effective educational tool, and 25% of trainees and 39% of trainers found the ePortfolio useful for highlighting their strengths and weaknesses. Three-quarters of trainees viewed workplace-based assessments as a means to passing the ARCP. The results show that the NHS ePortfolio and workplace-based assessments were perceived negatively by some trainees and trainers alike, with many feeling that significant improvements need to be made. In light of the progress and acceptance of digital technology and communication in the current COVID-19 era, the time may be right to develop a new, optimal digital training platform for cardiology trainees and trainers. The specialist societies could help develop a more speciality-specific learning and development tool.
Affiliation(s)
- Ahmed Elamin
- Cardiology Specialist Registrar, Liverpool Heart and Chest Hospital, Thomas Drive, Liverpool, L14 3PE
- Mohammed Obeidat
- Cardiology Specialist Registrar, Liverpool Heart and Chest Hospital, Thomas Drive, Liverpool, L14 3PE
6
Chan T, Oswald A, Hauer KE, Caretta-Weyer HA, Nousiainen MT, Cheung WJ. Diagnosing conflict: Conflicting data, interpersonal conflict, and conflicts of interest in clinical competency committees. Medical Teacher 2021;43:765-773. [PMID: 34182879; DOI: 10.1080/0142159x.2021.1925101]
Abstract
Clinical competency committees (CCCs) are increasingly used within health professions education because their decisions are thought to be more defensible and fairer than those generated by previous training promotion processes. However, as with most group-based processes, it is inevitable that conflict will arise. In this paper, the authors explore three ways conflict may arise within a CCC: (1) conflicting data submissions presented to the committee, (2) conflicts between members of the committee, and (3) conflicts of interest between a specific committee member and a trainee. The authors describe each of these conflict situations, dissect the underlying problems, and explore possible solutions based on the current literature.
Affiliation(s)
- Teresa Chan
- Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Canada
- Anna Oswald
- Competency Based Medical Education, Office of Postgraduate Medical Education, University of Alberta, Edmonton, Canada
- CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Edmonton, Canada
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Karen E Hauer
- Competency Assessment and Professional Standards, San Francisco, CA, USA
- Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, CA, USA
- Holly A Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Senior Clinician Investigator, Ottawa Hospital Research Institute, Ottawa, Canada
- CanMEDS Clinician Educator, Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
7
Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM Education and Training 2021;5:e10519. [PMID: 34041428; PMCID: PMC8138102; DOI: 10.1002/aet2.10519]
Abstract
OBJECTIVES Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the extent of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. METHODS We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual-extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. RESULTS We identified 38 publications comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools were the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), the Queen's Simulation Assessment Tool (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed at a single institution, and limited validity or reliability evidence was reported. CONCLUSIONS The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan
- Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney
- Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
8
Chan TM, Sebok-Syer SS, Cheung WJ, Pusic M, Stehman C, Gottlieb M. Workplace-based Assessment Data in Emergency Medicine: A Scoping Review of the Literature. AEM Education and Training 2021;5:e10544. [PMID: 34099992; PMCID: PMC8166307; DOI: 10.1002/aet2.10544]
Abstract
OBJECTIVE In the era of competency-based medical education (CBME), accrediting bodies such as the Accreditation Council for Graduate Medical Education and the Royal College of Physicians and Surgeons of Canada are mandating the collection of ever more trainee data. However, few efforts have been made to synthesize the literature on the current issues surrounding workplace-based assessment (WBA) data. This scoping review seeks to synthesize the landscape of literature on data collection and utilization for trainees' WBAs in emergency medicine (EM). METHODS The authors conducted a scoping review in the style of Arksey and O'Malley, seeking to synthesize and map the literature on collecting, aggregating, and reporting WBA data. The authors extracted, mapped, and synthesized literature that describes, supports, and substantiates effective data collection and utilization in the context of the CBME movement within EM. RESULTS Our literature search retrieved 189 potentially relevant references (after removing duplicates), which were screened down to 29 abstracts and papers relevant to collecting, aggregating, and reporting WBAs. Our analysis shows an increasing temporal trend in contributions on these topics, with the majority of the papers (16/29) published in the past 3 years alone. CONCLUSION There is increasing interest in data collection and utilization in the age of CBME. The field, however, is only beginning to emerge, leaving more work that can and should be done in this area.
Affiliation(s)
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine and the Division of Education & Innovation, McMaster University, Hamilton, Ontario, Canada
- Program for Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory, McMaster University, Hamilton, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Martin Pusic
- Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
9
Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: A hermeneutic review. Medical Education 2020;54:981-992. [PMID: 32403200; DOI: 10.1111/medu.14221]
Abstract
OBJECTIVES Since their introduction, workplace-based assessments (WBAs) have proliferated throughout postgraduate medical education. Previous reviews have identified mixed findings regarding WBAs' effectiveness but have not considered the importance of user-tool-context interactions. The present review was conducted to address this gap by generating a thematic overview of factors important to the acceptability, effectiveness, and utility of WBAs in postgraduate medical education. METHOD This review utilised a hermeneutic cycle for analysis of the literature. Four databases were searched to identify articles pertaining to WBAs in postgraduate medical education from the United Kingdom, Canada, Australia, New Zealand, the Netherlands, and Scandinavian countries. Over the course of three rounds, 30 published articles were thematically analysed in an iterative fashion to engage deeply with the literature and answer three scoping questions concerning acceptability, effectiveness, and assessment training. As each round was coded, themes were refined and questions added until saturation was reached. RESULTS Stakeholders value WBAs for permitting assessment of trainees' performance in an authentic context. Negative perceptions of WBAs stem from misuse due to low assessment literacy, disagreement with definitions and frameworks, and inadequate summative use of WBAs. Effectiveness is influenced by user attributes (eg, engagement and assessment literacy) and tool attributes (eg, definitions and scales), but most fundamentally by user-tool-context interactions, particularly trainee-assessor relationships. Assessors' assessment literacy must be combined with cultural and administrative factors in organisations and the broader medical discipline. CONCLUSIONS The pivotal determinants of WBAs' effectiveness and utility are the user-tool-context interactions. From the identified themes, we present 12 lessons learned regarding users, tools, and contexts to maximise WBA utility, including separating formative and summative WBA assessors, using maximally useful scales, and instituting measures to reduce competing demands.
Affiliation(s)
- Shaun Prentice
- GPEx Ltd., Adelaide, South Australia, Australia
- School of Psychology, University of Adelaide, Adelaide, South Australia, Australia
- Jill Benson
- GPEx Ltd., Adelaide, South Australia, Australia
- Health in Human Diversity Unit, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Prideaux Centre, Flinders University, Adelaide, South Australia, Australia
- Emily Kirkpatrick
- GPEx Ltd., Adelaide, South Australia, Australia
- School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Lambert Schuwirth
- Prideaux Centre, Flinders University, Adelaide, South Australia, Australia
- Maastricht University, Maastricht, the Netherlands
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
10
Ten Cate O, Dahdal S, Lambert T, Neubauer F, Pless A, Pohlmann PF, van Rijen H, Gurtner C. Ten caveats of learning analytics in health professions education: A consumer's perspective. Medical Teacher 2020;42:673-678. [PMID: 32150499; DOI: 10.1080/0142159x.2020.1733505]
Abstract
A group of 22 medical educators from different European countries, gathered at a meeting in Utrecht in July 2019, discussed the topic of learning analytics (LA) in an open conversation and addressed its definition, its purposes, and its potential risks for learners and teachers. LA was seen as a significant advance with important potential to improve education, but the group felt that the potential drawbacks of using LA may yet be under-exposed in the literature. After transcription and interpretation of the discussion's conclusions, a document was drafted and fed back to the group in two rounds to arrive at a series of 10 caveats educators should be aware of when developing and using LA. These include too much standardized learning, with undue consequences of over-efficiency and pressure on learners and teachers, and a decrease in the variety of 'valid' learning resources. Learning analytics may misalign with eventual clinical performance and can run the risk of privacy breaches and the inescapability of documented failures. These consequences may not happen, but the authors, on behalf of the full group of educators, felt it worthwhile to signal these caveats from a consumer's perspective.
Affiliation(s)
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Thomas Lambert
- Kepler University Hospital Linz, Johannes Kepler University Linz, Linz, Austria
- Florian Neubauer
- Institute for Medical Education, University of Bern, Bern, Switzerland
- Anina Pless
- Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland
- Harold van Rijen
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Corinne Gurtner
- Institute of Animal Pathology, Vetsuisse Faculty Bern, University of Bern, Bern, Switzerland
11
Chan TM, Sebok-Syer SS, Sampson C, Monteiro S. The Quality of Assessment of Learning (Qual) Score: Validity Evidence for a Scoring System Aimed at Rating Short, Workplace-Based Comments on Trainee Performance. Teaching and Learning in Medicine 2020;32:319-329. [PMID: 32013584; DOI: 10.1080/10401334.2019.1708365]
Abstract
Construct: This study seeks to establish validity evidence for the Quality of Assessment for Learning (QuAL) score, which was created to evaluate short qualitative comments linked to specific scores entered into a workplace-based assessment, common within the competency-based medical education (CBME) context. Background: In the age of CBME, qualitative comments play an important role in clarifying the quantitative scores rendered by observers at the bedside. Currently there are few practical tools that evaluate mixed data (e.g. associated score-and-comment data) other than the comprehensive Completed Clinical Evaluation Report Rating tool (CCERR), which was originally derived to rate end-of-rotation reports. Approach: A multi-center, randomized cohort-based rating exercise was conducted to compare the rating properties of the QuAL score with those of the CCERR. One group rated comments using the QuAL score, and the other group rated comments using the CCERR. A generalizability study (G-study) and a decision study (D-study) were conducted to determine the number of meta-raters needed for a reliable rating (phi-coefficient target of >0.80). Both scores were correlated against raters' gestalt perceptions of utility for both faculty and residents reading the scores. Results: Twenty-five meta-raters from 20 sites participated in the rating exercise. The G-study revealed that the CCERR group (n = 13) rated the comments with very high reliability (phi = 0.97), and the QuAL group (n = 12) rated the comments with similarly high reliability (phi = 0.97). The QuAL score required only two raters to reach an acceptable target reliability of >0.80, while the CCERR required three. The QuAL score correlated with perceptions of utility (meta-rater usefulness, Pearson's r = 0.69, p < 0.001; perceived usefulness for the trainee, r = 0.74, p < 0.001). The CCERR performed similarly, correlating with perceived faculty utility (r = 0.67, p < 0.001) and resident utility (r = 0.79, p < 0.001). Conclusions: The QuAL score is a reliable rating score that correlates well with perceptions of utility. The QuAL score may be useful for rating shorter comments generated by workplace-based assessments.
Affiliation(s)
- Teresa M Chan
- Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Christopher Sampson
- Department of Emergency Medicine, University of Missouri, Columbia, Missouri, USA
- Sandra Monteiro
- Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
12
Acai A, Li SA, Sherbino J, Chan TM. Attending Emergency Physicians' Perceptions of a Programmatic Workplace-Based Assessment System: The McMaster Modular Assessment Program (McMAP). Teaching and Learning in Medicine 2019;31:434-444. [PMID: 30835560; DOI: 10.1080/10401334.2019.1574581]
Abstract
Construct: The McMaster Modular Assessment Program (McMAP) is a programmatic workplace-based assessment (WBA) system that provides emergency medicine trainees with competency judgments through frequent task-specific and global daily assessments. Background: The longevity of McMAP relative to other programmatic WBA systems affords a unique view that precedes large-scale transitions to competency-based medical education (CBME), particularly in North America. Although prior work has described the perspective of residents using this system, the in-depth experiences of assessors using the system have yet to be explored. This perspective is important for understanding the validity of the competency judgments the system produces. Approach: We conducted a qualitative study that used semi-structured interviews analyzed using interpretive description (Thorne) to explore 16 attending physicians' experiences using McMAP. Data analysis was completed independently by 2 researchers, who met regularly to discuss codes and resolve any disagreements. Results: Having a structured assessment framework for a range of clinical tasks has helped encourage what attendings perceived to be more frequent and better-quality assessments, with the added advantages of being holistic, flexible, and learner-driven. However, attendings also perceived a number of challenges of McMAP and programmatic WBA more broadly. These included a reluctance to give and to document negative feedback, "gaming" of the system by both attendings and residents, and a variety of logistic and technology-related concerns. Conclusions: Based on our findings, we offer several key recommendations that can help programs maximize the benefits of programmatic WBA as they transition to CBME.
Affiliation(s)
- Anita Acai
- Department of Psychology, Neuroscience & Behaviour and Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Shelly-Anne Li
- Lawrence S. Bloomberg Faculty of Nursing, University of Toronto and The Hospital for Sick Children, Toronto, Ontario, Canada
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, and McMaster Education Research, Innovation and Theory Program, McMaster University, Hamilton, Ontario, Canada
- Teresa M Chan
- Division of Emergency Medicine, Department of Medicine, and McMaster Education Research, Innovation and Theory Program, McMaster University, Hamilton, Ontario, Canada
13
Monteiro S, Xenodemetropoulos T. Resident Practice Audit in Gastroenterology (RPAGE): an innovative approach to trainee evaluation and professional development in medicine. CANADIAN MEDICAL EDUCATION JOURNAL 2019; 10:e72-e77. [PMID: 31388379 PMCID: PMC6681922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
BACKGROUND The Resident Practice Audit in Gastroenterology (RPAGE) captures assessments of knowledge, professionalism, and technical skills in real time. This brief report describes this innovative instrument and aspects of its utility. METHODS Assessment data on colonoscopy, endoscopy, and sigmoidoscopy procedures in 2016 were submitted to a repeated-measures ANOVA with six within-subjects assessments and one between-subjects factor (year of specialization) to evaluate construct validity. The validity hypothesis tested was that more experienced residents would be rated higher than less experienced residents. Reliability was assessed using Cronbach's alpha. RESULTS The proportion of completed assessments was relatively low (9 to 22%). Overall reliability was high (α > 0.8). There was evidence of validity, as global ratings indicated higher competence for senior residents at colonoscopy (1.6) and upper endoscopy (1.4) than for more junior residents (1.9 and 2.1, respectively). These differences were significant for both colonoscopy (F(1, 282) = 14.8, p < 0.001) and endoscopy (F(1, 136) = 56.9, p < 0.001). CONCLUSION These findings suggest RPAGE is an acceptable electronic log of practice data, but may not be acceptable for workplace-based assessment. A key next step will be to evaluate how information collected through RPAGE can help inform resident competency committees.
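Editor's note: the abstract reports internal consistency as Cronbach's alpha (α > 0.8). For readers unfamiliar with the computation, a minimal stdlib-Python sketch follows; the item structure and any data used with it are invented, not RPAGE's.

```python
# Hypothetical sketch of Cronbach's alpha: the ratio of summed per-item
# variance to total-score variance, scaled by k/(k-1) for k items.
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items; `items` is a list of k lists,
    each holding one item's scores across the same examinees."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-examinee total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```

Highly consistent items (examinees rank the same way on each) drive alpha toward 1; uncorrelated items drive it toward 0.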
14
Faculty development in the age of competency-based medical education: A needs assessment of Canadian emergency medicine faculty and senior trainees. CAN J EMERG MED 2019; 21:527-534. [PMID: 31113499 DOI: 10.1017/cem.2019.343] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
OBJECTIVES The Royal College of Physicians and Surgeons of Canada (RCPSC) emergency medicine (EM) programs transitioned to the Competence by Design training framework in July 2018. Prior to this transition, a nation-wide survey was conducted to gain a better understanding of EM faculty and senior resident attitudes towards the implementation of this new program of assessment. METHODS A multi-site, cross-sectional needs assessment survey was conducted. We aimed to document perceptions about competency-based medical education, attitudes towards implementation, and perceived, prompted, and unperceived faculty development needs. EM faculty and senior residents were nominated by program directors across RCPSC EM programs. Simple descriptive statistics were used to analyse the data. RESULTS Between February and April 2018, 47 participants completed the survey (58.8% response rate). Most respondents (89.4%) thought learners should receive feedback during every shift; 55.3% felt that they provided adequate feedback. Many respondents (78.7%) felt that the ED would allow for direct observation, and most (91.5%) participants were confident that they could incorporate workplace-based assessments (WBAs). Although a fair number of respondents (44.7%) felt that Competence by Design would not impact patient care, some (17.0%) were worried that it may negatively impact it. Perceived faculty development priorities included feedback delivery, completing WBAs, and resident promotion decisions. CONCLUSIONS RCPSC EM faculty have positive attitudes towards competency-based medical education-relevant concepts such as feedback and opportunities for direct observation via WBAs. Perceived threats to Competence by Design implementation included concerns that patient care and trainee education might be negatively impacted. Faculty development should concentrate on further developing supervisors' teaching skills, focusing on feedback using WBAs.
15
Chan T, Sebok‐Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning Analytics in Medical Education Assessment: The Past, the Present, and the Future. AEM EDUCATION AND TRAINING 2018; 2:178-187. [PMID: 30051086 PMCID: PMC6001721 DOI: 10.1002/aet2.10087] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 01/30/2018] [Indexed: 05/09/2023]
Abstract
With the implementation of competency-based medical education (CBME) in emergency medicine, residency programs will amass substantial amounts of qualitative and quantitative data about trainees' performances. This increased volume of data will challenge traditional processes for assessing trainees and remediating training deficiencies. At the intersection of trainee performance data and statistical modeling lies the field of medical learning analytics. At a local training program level, learning analytics has the potential to assist program directors and competency committees with interpreting assessment data to inform decision making. On a broader level, learning analytics can be used to explore system questions and identify problems that may impact our educational programs. Scholars outside of health professions education have been exploring the use of learning analytics for years and their theories and applications have the potential to inform our implementation of CBME. The purpose of this review is to characterize the methodologies of learning analytics and explore their potential to guide new forms of assessment within medical education.
Affiliation(s)
- Teresa Chan
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Stefanie Sebok-Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, London, Ontario, Canada
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Alyssa Wise
- Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY, USA
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Science, McMaster University, Hamilton, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Martin Pusic
- Department of Emergency Medicine, NYU School of Medicine, New York, NY, USA
16
Education scholarship in Canadian emergency medicine: The past, present, and future. CAN J EMERG MED 2018. [DOI: 10.1017/cem.2018.19] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
17
Ellaway RH, Chou CL, Kalet AL. Situating Remediation: Accommodating Success and Failure in Medical Education Systems. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:391-398. [PMID: 28767496 DOI: 10.1097/acm.0000000000001855] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
There has been a widespread shift to competency-based medical education (CBME) in the United States and Canada. Much of the CBME discourse has focused on the successful learner, with relatively little attention paid to what happens in CBME systems when learners stumble or fail. Emerging issues, such as the well-documented problem of "failure to fail" and concerns about litigious learners, have highlighted a need for well-defined and integrated frameworks to support and guide strategic approaches to the remediation of struggling medical learners. This Perspective sets out a conceptual review of current practices and an argument for a holistic approach to remediation in the context of their parent medical education systems. The authors propose parameters for integrating remediation into CBME and describe a model based on five zones of practice along with the rules of engagement associated with each zone. The zones are "normal" curriculum, corrective action, remediation, probation, and exclusion. The authors argue that, by linking and integrating theory and practice in remediation with CBME, a more integrated systems-level response to differing degrees of learner difficulty and failure can be developed. The proposed model demonstrates how educational practice in different zones is based on different rules, roles, responsibilities, and thresholds for moving between zones. A model such as this can help medical educators and medical education leaders take a more integrated approach to learners' failures as well as their successes by being more explicit about the rules of engagement that apply in different circumstances across the competency continuum.
Affiliation(s)
- Rachel H Ellaway
- R.H. Ellaway is professor, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada; ORCID: http://orcid.org/0000-0002-3759-6624. C.L. Chou is professor, Department of Clinical Medicine, University of California, San Francisco, and staff physician, San Francisco VA Health Care System, San Francisco, California; ORCID: http://orcid.org/0000-0002-2391-4337. A.L. Kalet is professor, Division of General Internal Medicine and Clinical Innovation, Departments of Medicine and Surgery, New York University, New York, New York; ORCID: http://orcid.org/0000-0003-4855-0223
18
Chan TM. Nuance and Noise: Lessons Learned From Longitudinal Aggregated Assessment Data. J Grad Med Educ 2017; 9:724-729. [PMID: 29270262 PMCID: PMC5734327 DOI: 10.4300/jgme-d-17-00086.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Revised: 07/04/2017] [Accepted: 08/22/2017] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Competency-based medical education requires frequent assessment to tailor learning experiences to the needs of trainees. In 2012, we implemented the McMaster Modular Assessment Program, which captures shift-based assessments of resident global performance. OBJECTIVE We described patterns (ie, trends and sources of variance) in aggregated workplace-based assessment data. METHODS Emergency medicine residents and faculty members from 3 Canadian university-affiliated, urban, tertiary care teaching hospitals participated in this study. During each shift, supervising physicians rated residents' performance using a behaviorally anchored scale that hinged on endorsements for progression. We used a multilevel regression model to examine the relationship between global rating scores and time, adjusting for data clustering by resident and rater. RESULTS We analyzed data from 23 second-year residents between July 2012 and June 2015, which yielded 1498 unique ratings (65 ± 18.5 per resident) from 82 raters. The model estimated an average score of 5.7 ± 0.6 at baseline, with an increase of 0.005 ± 0.01 for each additional assessment. There was significant variation among residents' starting score (y-intercept) and trajectory (slope). CONCLUSIONS Our model suggests that residents begin at different points and progress at different rates. Meta-raters such as program directors and Clinical Competency Committee members should bear in mind that progression may take time and learning trajectories will be nuanced. Individuals involved in ratings should be aware of sources of noise in the system, including the raters themselves.
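Editor's note: the study itself used a multilevel regression adjusting for clustering by resident and rater. As a much-simplified illustration of the "different starting points, different trajectories" finding, the sketch below fits an ordinary least-squares line to each resident separately; all data are invented.

```python
# Simplified sketch (not the study's multilevel model): fit a separate
# least-squares line per resident to show differing intercepts (starting
# points) and slopes (trajectories). Ratings below are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Invented shift-by-shift global ratings for two hypothetical residents:
resident_a = [5.0, 5.2, 5.4, 5.6]   # lower start, steady improvement
resident_b = [6.0, 6.0, 6.1, 5.9]   # higher start, flatter trajectory
```

Fitting each series against shift index 0..3 yields a different intercept and slope per resident, the pattern the abstract's random-intercept/random-slope model captures (with proper adjustment for rater effects that this sketch omits).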