1. Frank JR, Karpinski J, Sherbino J, Snell LS, Atkinson A, Oswald A, Hall AK, Cooke L, Dojeiji S, Richardson D, Cheung WJ, Cavalcanti RB, Dalseg TR, Thoma B, Flynn L, Gofton W, Dudek N, Bhanji F, Wong BMF, Razack S, Anderson R, Dubois D, Boucher A, Gomes MM, Taber S, Gorman LJ, Fulford J, Naik V, Harris KA, St. Croix R, van Melle E. Competence By Design: a transformational national model of time-variable competency-based postgraduate medical education. Perspect Med Educ 2024;13:201-223. PMID: 38525203; PMCID: PMC10959143; DOI: 10.5334/pme.1096.
Abstract
Postgraduate medical education (PGME) is an essential societal enterprise that prepares highly skilled physicians for the health workforce. In recent years, PGME systems have been criticized worldwide for variable graduate abilities, concerns about patient safety, and problems with teaching and assessment methods. In response, competency-based medical education (CBME) approaches, with an emphasis on graduate outcomes, have been proposed as the direction for 21st-century health professions education. However, there are few published models of large-scale implementation of these approaches. We describe the rationale and design of a national, time-variable, competency-based, multi-specialty system of postgraduate medical education called Competence by Design. Fourteen innovations were bundled to create this new system, using the Van Melle core components of CBME as the basis for the transformation. The successful execution of this transformational training system shows that CBME can be implemented at scale. The lessons learned in the early implementation of Competence by Design can inform CBME innovation efforts across professions worldwide.
Affiliation(s)
- Jason R. Frank
- Centre for Innovation in Medical Education and Professor, Department of Emergency Medicine, Faculty of Medicine, University of Ottawa, ON, Canada
- Jolanta Karpinski
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Competency Based Medical Education, University of Ottawa, Ottawa, ON, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Linda S. Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Medicine and Health Sciences Education, McGill University, Montreal, QC, Canada
- Adelle Atkinson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Paediatrics, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Anna Oswald
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Competency Based Medical Education, University of Alberta, Edmonton, AB, Canada
- Andrew K. Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Lara Cooke
- Division of Neurology, Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Susan Dojeiji
- Physical Medicine and Rehabilitation, University of Ottawa, Ottawa, ON, Canada
- Denyse Richardson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Physical Medicine and Rehabilitation, Queen’s University, Kingston, ON, Canada
- Warren J. Cheung
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- Rodrigo B. Cavalcanti
- Department of Medicine, University of Toronto, Toronto, ON, Canada
- HoPingKong Centre, University Health Network, Toronto, ON, Canada
- Timothy R. Dalseg
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Division of Emergency Medicine, University of Toronto, Toronto, ON, Canada
- Brent Thoma
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Emergency Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Leslie Flynn
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Departments of Psychiatry and Family Medicine, and Co-Director, Master of Health Sciences Education, Queen’s University, Kingston, ON, Canada
- Wade Gofton
- Department of Surgery (Division of Orthopedic Surgery), The Ottawa Hospital and University of Ottawa, Ottawa, ON, Canada
- Nancy Dudek
- Department of Medicine (Division of Physical Medicine & Rehabilitation) and The Ottawa Hospital, University of Ottawa, Ottawa, ON, Canada
- Farhan Bhanji
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Brian M.-F. Wong
- Centre for Quality Improvement and Patient Safety, University of Toronto, Toronto, ON, Canada
- Saleem Razack
- Centre for Health Education Scholarship, University of British Columbia and BC Children’s Hospital, Vancouver, BC, Canada
- Robert Anderson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Northern Ontario School of Medicine University, Sudbury, ON, Canada
- Daniel Dubois
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, ON, Canada
- Andrée Boucher
- Department of Medicine (Division of Endocrinology), Université de Montréal, Montréal, QC, Canada
- Marcio M. Gomes
- Department of Pathology and Laboratory Medicine, University of Ottawa, Ottawa, ON, Canada
- Sarah Taber
- Office of Standards and Assessment, Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Lisa J. Gorman
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Jane Fulford
- Canadian Internet Registration Authority, Canada
- Viren Naik
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, ON, Canada
- Medical Council of Canada, Ottawa, ON, Canada
- Kenneth A. Harris
- Royal College of Physicians and Surgeons of Canada, Canada
- Emeritus, Western University, Canada
- Rhonda St. Croix
- Learning and Connecting, Royal College of Physicians and Surgeons of Canada, Canada
- Elaine van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Family Medicine, Queen’s University, Kingston, ON, Canada
2. Talmi L, Nabecker S, Piquette D, Mema B. Pediatric Critical Care Fellow Perception of Learning through Virtual Reality Bronchoscopy. ATS Sch 2024;5:174-183. PMID: 38585579; PMCID: PMC10995860; DOI: 10.34197/ats-scholar.2023-0097in.
Abstract
Background Virtual reality (VR) simulators have revolutionized bronchoscopy training, offering unrestricted availability in a low-stakes learning environment and frequent assessment in the form of automatic scoring. These VR assessments can be used to monitor and support learners' progression, but how trainees perceive them has yet to be clarified.
Objective The objective of this study was to examine which assessments learners select to document and receive feedback on, and what influences their decisions.
Methods We used a sequential explanatory mixed methods strategy. All participants were pediatric critical care medicine trainees requiring competency in bronchoscopy skills. During independent simulation practice, we recorded the number of learning-focused practice attempts (scores not recorded), the number of assessment-focused attempts (scores recorded and reviewed by the instructor for feedback), and the duration of each attempt. After simulation training, we conducted interviews to explore learners' perceptions of assessment.
Results There was no significant difference in the number of attempts for each practice type. The average time per learning-focused attempt was almost three times longer than per assessment-focused attempt (mean ± standard deviation, 16 ± 1 min vs. 6 ± 3 min; P < 0.05). Learners perceived documentation of their scores as high stakes and recorded only their better scores. Learners felt safer experimenting when their assessments were not recorded.
Conclusion During independent practice, learners took advantage of the automatic assessments generated by the VR simulator to monitor their progression. However, recording scores from the simulation program to document learners' trajectory toward a set goal was perceived as high stakes, discouraging learners from seeking supervisor feedback.
Affiliation(s)
- Liron Talmi
- Department of Critical Care Medicine, Hospital for Sick Children, Toronto, Ontario, Canada
- Sabine Nabecker
- Department of Anesthesiology and Pain Medicine, Sinai Health System, Toronto, Ontario, Canada
- Dominique Piquette
- Interdepartmental Division of Critical Care Medicine, University of Toronto, Toronto, Ontario, Canada
- Department of Critical Care Medicine, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Briseida Mema
- Department of Critical Care Medicine, Hospital for Sick Children, Toronto, Ontario, Canada
- Interdepartmental Division of Critical Care Medicine, University of Toronto, Toronto, Ontario, Canada
3. Choo EK, Woods R, Walker ME, O’Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Can Med Educ J 2023;14:78-85. PMID: 38226296; PMCID: PMC10787859; DOI: 10.36834/cmej.75876.
Abstract
Background Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments; it has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable for assessing narrative feedback in other postgraduate programs.
Methods Fifty sets of EPA narratives from a single academic year of our competency-based postgraduate anesthesia program were selected by stratified sampling within defined parameters (e.g., resident gender and stage of training, assessor gender, Competence by Design training level, and word count [≥17 or <17 words]). Two competence committee members and two medical students rated the quality of narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed by the QuAL score, and used generalizability and decision studies to estimate the reliability and generalizability coefficients.
Results Both the faculty's utility and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. The generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88).
Conclusions The QuAL score correlates with faculty- and trainee-rated utility of anesthesia EPA feedback, and both faculty and trainees can reliably apply the QuAL score to anesthesia EPA narrative feedback. This tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O’Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University, and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
4. McGuire N, Acai A, Sonnadara RR. The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence. Teach Learn Med 2023:1-13. PMID: 37964518; DOI: 10.1080/10401334.2023.2276799.
Abstract
CONSTRUCT The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness.
BACKGROUND Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. Developing solutions to this problem requires robust measures of narrative comment quality. While some tools exist, most were created in specialty-specific contexts, focus on only one or two features of feedback, or focus on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that draws on features of high-quality assessment and feedback and can be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality.
APPROACH In Phase 1, we used the literature to identify five critical features of feedback, developed rating scales for each feature, and collected 670 competency-based assessments completed by first-year surgical residents in the first six weeks of training. Residents came from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset; two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters (two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community) used the tool to independently and blindly rate written comments from another 50 randomly selected assessments. We compared scores between and across rater pairs to assess reliability.
FINDINGS Single- and average-measures intraclass correlation (ICC) scores ranged from moderate to excellent (ICCs = .51-.83 and .91-.98) across all categories and rater pairs. All tool domains were significantly correlated (p < .05), apart from valence, which was significantly correlated only with degree of correction versus reinforcement.
CONCLUSION Our findings suggest that the McMaster Narrative Comment Rating Tool can be used reliably by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used to conduct research on different assessment strategies, including automated analysis of narrative comments.
Affiliation(s)
- Natalie McGuire
- Office of Professional Development and Educational Scholarship, Queen's University, Kingston, Ontario, Canada
- Anita Acai
- Department of Psychiatry and Behavioural Neurosciences and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, and St. Joseph's Education Research Centre (SERC), St. Joseph's Healthcare Hamilton, Hamilton, Canada
- Ranil R Sonnadara
- Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
5. Blanchette P, Poitras ME, St-Onge C. Assessing trainee's performance using reported observations: Perceptions of nurse meta-assessors. Nurse Educ Today 2023;126:105836. PMID: 37167832; DOI: 10.1016/j.nedt.2023.105836.
Abstract
BACKGROUND Educational and health care organizations that prepare meta-assessors to assess trainees' performance based on reported observations have little literature to rely on. While this form of assessment has been operationalized, we have yet to fully understand the elements that can affect its quality. Closing this gap in the literature will provide valuable insight to inform the implementation and quality monitoring of assessment based on reported observations.
OBJECTIVES The purpose of this study was to explore the elements to consider in the assessment of trainees' performance based on reported observations, from the perspectives of meta-assessors.
METHODS The authors adopted Sandelowski's qualitative descriptive approach to interview nurse meta-assessors from two nursing programs. A semi-structured interview guide was used to document the elements to consider in the assessment of nursing trainees' performance based on reported observations, and a survey collected sociodemographic data. The authors conducted a thematic analysis of the interview transcripts.
RESULTS Thirteen meta-assessors participated in the study. Three core themes were identified: (1) meta-assessors' appropriation of their perceived assessment roles and activities, (2) team climate of information sharing, and (3) challenges associated with the assessment of trainees' performance based on reported observations. Each theme comprises several subthemes.
CONCLUSIONS To optimize the quality of assessment based on reported observations and ratings, health professions education programs might consider how to better clarify the meta-assessor's roles and activities, and how interventions could promote a climate of information sharing and address the challenges identified. This work will guide educational and health care organizations in better preparing and supporting meta-assessors and preceptors.
Affiliation(s)
- Marie-Eve Poitras
- Department of Family Medicine and Emergency Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada
- Christina St-Onge
- Department of Medicine, University of Sherbrooke, Sherbrooke, Quebec, Canada
6. Chin M, Pack R, Cristancho S. "A whole other competence story": exploring faculty perspectives on the process of workplace-based assessment of entrustable professional activities. Adv Health Sci Educ Theory Pract 2023;28:369-385. PMID: 35997910; DOI: 10.1007/s10459-022-10156-0.
Abstract
The centrality of entrustable professional activities (EPAs) in competency-based medical education (CBME) is predicated on the assumption that low-stakes, high-frequency workplace-based assessments used in a programmatic approach will result in accurate and defensible judgments of competence. While there have been conversations in the literature regarding the potential of this approach, only recently has the conversation begun to explore the actual experiences of clinical faculty in this process. The purpose of this qualitative study was to explore the process of EPA assessment for faculty in everyday practice. We conducted 18 semi-structured interviews with Anesthesia faculty at a Canadian academic center. Participants were asked to describe how they engage in EPA assessment in daily practice and the factors they considered. Interviews were audio-recorded, transcribed, and analysed using the constant comparative method of grounded theory. Participants in this study perceived two sources of tension in the EPA assessment process that influenced their scoring on official forms: the potential constraints of the assessment forms and the potential consequences of their assessment outcome. This was particularly salient in circumstances of uncertainty regarding the learner's level of competence. Ultimately, EPA assessment in CBME may be experienced as higher-stakes by faculty than officially recognized due to these tensions, suggesting a layer of discomfort and burden in the process that may potentially interfere with the goal of assessment for learning. Acknowledging and understanding the nature of this burden and identifying strategies to mitigate it are critical to achieving the assessment goals of CBME.
Affiliation(s)
- Melissa Chin
- Department of Anesthesia and Perioperative Medicine, London Health Sciences Centre, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada.
- Rachael Pack
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
- Sayra Cristancho
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
7. Paterson QS, Alrimawi H, Sample S, Bouwsema M, Anjum O, Vincent M, Cheung WJ, Hall A, Woods R, Martin LJ, Chan T. Examining enablers and barriers to entrustable professional activity acquisition using the theoretical domains framework: A qualitative framework analysis study. AEM Educ Train 2023;7:e10849. PMID: 36994315; PMCID: PMC10041073; DOI: 10.1002/aet2.10849.
Abstract
Background Without a clear understanding of the factors contributing to the effective acquisition of high-quality entrustable professional activity (EPA) assessments, trainees, supervising faculty, and training programs may lack appropriate strategies for successful EPA implementation and utilization. The purpose of this study was to identify barriers and facilitators to acquiring high-quality EPA assessments in Canadian emergency medicine (EM) training programs.
Methods We conducted a qualitative framework analysis study using the Theoretical Domains Framework (TDF). Semistructured interviews with EM resident and faculty participants were audio recorded, deidentified, and coded line by line by two authors to extract themes and subthemes across the domains of the TDF.
Results From 14 interviews (eight faculty and six residents), we identified major themes and subthemes for barriers and facilitators to EPA acquisition within the 14 TDF domains, for both faculty and residents. The two most frequently cited domains (and their frequencies) among residents and faculty were environmental context and resources (56) and behavioral regulation (48). Example strategies for improving EPA acquisition include orienting residents to the competency-based medical education (CBME) paradigm, recalibrating expectations around "low ratings" on EPAs, engaging in continuous faculty development to ensure familiarity and fluency with EPAs, and implementing longitudinal coaching programs between residents and faculty to encourage repeated longitudinal interactions and high-quality, specific feedback.
Conclusions We identified key strategies to support residents, faculty, programs, and institutions in overcoming barriers and improving EPA assessment processes. This is an important step toward ensuring the successful implementation of CBME and the effective operationalization of EPAs within EM training programs.
Affiliation(s)
- Quinten S. Paterson
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Hussein Alrimawi
- Emergency Medicine Division, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Spencer Sample
- Emergency Medicine Division, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Melissa Bouwsema
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Omar Anjum
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Maggie Vincent
- Emergency Medicine Division, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Andrew Hall
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Rob Woods
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Lynsey J. Martin
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Teresa Chan
- Emergency Medicine Division, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
8. Yilmaz Y, Chan MK, Richardson D, Atkinson A, Bassilious E, Snell L, Chan TM. Defining new roles and competencies for administrative staff and faculty in the age of competency-based medical education. Med Teach 2023;45:395-403. PMID: 36471921; DOI: 10.1080/0142159x.2022.2136517.
Abstract
PURPOSE The authors sought to define the new roles and competencies required of administrative staff and faculty in the age of competency-based medical education (CBME).
METHOD A modified Delphi process was used to define the new CBME roles and competencies needed by faculty and administrative staff. We invited international experts in CBME (volunteers from the ICBME Collaborative email list), as well as faculty members and trainees identified via social media, to help determine the new competencies required of faculty and administrative staff in the CBME era.
RESULTS Thirteen new roles were identified. The faculty-specific roles were: National Leader/Facilitator in CBME; Institutional/University Lead for CBME; Assessment Process & Systems Designer; Local CBME Leads; CBME-specific Faculty Developers or Trainers; Competence Committee Chair; Competence Committee Faculty Member; Faculty Academic Coach/Advisor or Support Person; Frontline Assessor; and Frontline Coach. The staff-specific roles were: Information Technology Lead; CBME Analytics/Data Support; and Competence Committee Administrative Assistant.
CONCLUSIONS The authors present a new set of faculty and staff roles relevant to the CBME context. While some of these new roles may be incorporated into existing ones, it may be prudent to examine how best to ensure that all of them are supported in some manner within all CBME contexts.
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Department of Medical Education, Faculty of Medicine, Ege University, Izmir, Turkey
- Ming-Ka Chan
- Department of Pediatrics and Child Health, University of Manitoba, Winnipeg, Canada
- Denyse Richardson
- Department of Medicine, Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
- Adelle Atkinson
- Department of Pediatrics, University of Toronto, Toronto, Canada
- Ereny Bassilious
- Department of Pediatrics, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Linda Snell
- Medicine and Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory (MERIT), and Office of Continuing Professional Development, Faculty of Health Sciences, McMaster University, Hamilton, Canada
- Divisions of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
9. Woods R, Singh S, Thoma B, Patocka C, Cheung W, Monteiro S, Chan TM. Validity evidence for the Quality of Assessment for Learning score: a quality metric for supervisor comments in Competency Based Medical Education. Can Med Educ J 2022;13:19-35. PMID: 36440075; PMCID: PMC9684040; DOI: 10.36834/cmej.74860.
Abstract
BACKGROUND Competency-based medical education (CBME) relies on supervisor narrative comments contained within entrustable professional activity (EPA) assessments for programmatic assessment, but the quality of these supervisor comments is unassessed. There is validity evidence supporting the QuAL (Quality of Assessment for Learning) score for rating the usefulness of short narrative comments in direct observation.
OBJECTIVE We sought to establish validity evidence for the QuAL score as a rating of the quality of supervisor narrative comments contained within an EPA by surveying the key end-users of EPA narrative comments: residents, academic advisors, and competence committee members.
METHODS In 2020, the authors selected 52 de-identified narrative comments from two emergency medicine EPA databases using purposeful sampling. Six collaborators (two residents, two academic advisors, and two competence committee members) were recruited from each of four EM residency programs (Saskatchewan, McMaster, Ottawa, and Calgary) to rate these comments with a utility score and the QuAL score. Correlation between the utility and QuAL scores was calculated using Pearson's correlation coefficient. Sources of variance and reliability were estimated using a generalizability study.
RESULTS All collaborators (n = 24) completed the full study. The QuAL score had a high positive correlation with the utility score among residents (r = 0.80) and academic advisors (r = 0.75), and a moderately high correlation among competence committee members (r = 0.68). The generalizability study found that the major source of variance was the comment itself, indicating that the tool performs well across raters.
CONCLUSION The QuAL score may serve as an outcome measure for program evaluation of supervisors and as a resource for faculty development.
Affiliation(s)
- Rob Woods
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Sim Singh
- College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Catherine Patocka
- Department of Emergency Medicine, University of Calgary, Alberta, Canada
- Warren Cheung
- Department of Emergency Medicine, University of Ottawa, Ontario, Canada
- Sandra Monteiro
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Ontario, Canada
- Teresa M Chan
- Division of Emergency Medicine and Education & Innovation, Department of Medicine, McMaster University, Ontario, Canada
10
Yilmaz Y, Jurado Nunez A, Ariaeinejad A, Lee M, Sherbino J, Chan TM. Harnessing Natural Language Processing to Support Decisions Around Workplace-Based Assessment: Machine Learning Study of Competency-Based Medical Education. JMIR MEDICAL EDUCATION 2022; 8:e30537. [PMID: 35622398 PMCID: PMC9187970 DOI: 10.2196/30537] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 12/05/2021] [Accepted: 04/30/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Residents receive a numeric performance rating (eg, on a 1-7 scoring scale) along with narrative (ie, qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBAs can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches with qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study explores natural language processing (NLP) and machine learning (ML) applications for identifying trainees at risk using a large WBA narrative comment data set associated with numerical ratings. METHODS NLP was performed retrospectively on a complete data set of narrative comments (ie, text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized to quantitative ratings using the bag-of-n-grams technique with 3 input types: unigrams, bigrams, and trigrams. Supervised ML models using linear regression were trained with the quantitative ratings, performed binary classification, and output a prediction of whether a resident fell into the at-risk or not-at-risk category. Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database comprised 7199 unique direct observation assessments, containing both narrative comments and a rating between 3 and 7 in an imbalanced distribution (scores 3-5: 726 ratings; scores 6-7: 4871 ratings). A total of 141 unique raters from 5 different hospitals and 45 unique residents participated over the course of 5 academic years. When comparing the 3 input types for diagnosing whether a trainee would be rated low (ie, 1-5) or high (ie, 6 or 7), accuracy was 87% for trigrams, 86% for bigrams, and 82% for unigrams. All 3 input types had better prediction accuracy with a bimodal cut (eg, lower or higher) than when predicting performance along the full 7-point rating scale (50%-52%). CONCLUSIONS ML models can accurately identify underperforming residents via narrative comments provided for WBAs. The words generated in WBAs can be a worthy data set to augment human decisions for educators tasked with processing large volumes of narrative assessments.
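The bag-of-n-grams vectorization step described above can be sketched in a few lines; this illustrates the general technique only (the comment text is invented, and the study's actual pipeline and features are not reproduced here):

```python
from collections import Counter

def ngram_counts(text, n):
    """Bag-of-n-grams: count each contiguous sequence of n lowercase words."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Hypothetical WBA comment; n=1 gives unigrams, n=2 bigrams, n=3 trigrams
comment = "Communicated the plan clearly and communicated the risks"
bigrams = ngram_counts(comment, 2)
print(bigrams[("communicated", "the")])  # → 2
```

In a pipeline like the one the abstract describes, such counts become the feature vector for each comment, and a supervised linear model is then fit against the numeric ratings to flag at-risk trainees.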
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Department of Medical Education, Ege University, Izmir, Turkey
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Alma Jurado Nunez
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Ali Ariaeinejad
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Mark Lee
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Jonathan Sherbino
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
11
Chan TM, Sebok-Syer SS, Yilmaz Y, Monteiro S. The Impact of Electronic Data to Capture Qualitative Comments in a Competency-Based Assessment System. Cureus 2022; 14:e23480. [PMID: 35494923 PMCID: PMC9038604 DOI: 10.7759/cureus.23480] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/24/2022] [Indexed: 11/23/2022] Open
Abstract
Introduction Digitalizing workplace-based assessments (WBA) holds the potential for facilitating feedback and performance review, wherein we can easily record, store, and analyze data in real time. When digitizing assessment systems, however, it is unclear what is gained and lost in the message as a result of the change in medium. This study evaluates the quality of comments generated in paper vs. electronic media and the influence of an assessor’s seniority. Methods Using a realist evaluation framework, a retrospective database review was conducted with paper-based and electronic medium comments. A sample of assessments was examined to determine any influence of the medium on the word count and the Quality of Assessment for Learning (QuAL) score. A correlation analysis evaluated the relationship between word count and QuAL score. Separate univariate analyses of variance (ANOVAs) were used to examine the influence of the assessor's seniority and medium on word count, QuAL score, and WBA scores. Results The analysis included a total of 1,825 records. The average word count for the electronic comments (M=16) was significantly higher than the paper version (M=12; p=0.01). Longer comments positively correlated with QuAL score (r=0.2). Paper-based comments received lower QuAL scores (0.41) compared to electronic (0.51; p<0.01). Years in practice was negatively correlated with QuAL score (r=-0.08; p<0.001) as was word count (r=-0.2; p<0.001). Conclusion Digitization of WBAs increased the length of comments and did not appear to jeopardize the quality of WBAs; these results indicate higher-quality assessment data. True digital transformation may be possible by harnessing trainee data repositories and repurposing them to analyze for faculty-relevant metrics.
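The univariate ANOVAs reported above reduce to an F statistic comparing between-group and within-group variance; a minimal one-way sketch with invented word counts for paper vs. electronic comments (not the study's data):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group vs. within-group variance."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(v for g in groups for v in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical word counts per comment, by medium
paper = [10, 12, 14]
electronic = [15, 17, 19]
print(one_way_anova_f(paper, electronic))  # → 9.375
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) supports a difference between media, consistent with the higher mean word count reported for electronic comments.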
12
Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2021; 96:S164-S174. [PMID: 34406132 DOI: 10.1097/acm.0000000000004366] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
13
Yilmaz Y, Carey R, Chan TM, Bandi V, Wang S, Woods RA, Mondal D, Thoma B. Developing a dashboard for faculty development in competency-based training programs: a design-based research project. CANADIAN MEDICAL EDUCATION JOURNAL 2021; 12:48-64. [PMID: 34567305 PMCID: PMC8463237 DOI: 10.36834/cmej.72067] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
BACKGROUND Canadian specialist residency training programs are implementing a form of competency-based medical education (CBME) that requires frequent assessments of entrustable professional activities (EPAs). Faculty struggle to provide helpful feedback and assign appropriate entrustment scores. CBME faculty development initiatives rarely incorporate teaching metrics. Dashboards could be used to visualize faculty assessment data to support faculty development. METHODS Using a design-based research process, we identified faculty development needs related to CBME assessments and designed a dashboard containing elements (data, analytics, and visualizations) meeting these needs. Data were collected within the emergency medicine residency program at the University of Saskatchewan through interviews with program leaders, faculty development experts, and faculty participating in development sessions. Two investigators thematically analyzed the interview transcripts to identify faculty needs, and the analysis was audited by a third investigator. The needs were described using representative quotes and the dashboard elements designed to address them. RESULTS Between July 1, 2019 and December 11, 2020, we conducted 15 interviews with nine participants (two program leaders, three faculty development experts, and four faculty members). Three needs emerged as themes from the analysis: analysis of assessments, contextualization of assessments, and accessible reporting. We addressed these needs by designing an accessible dashboard to present contextualized quantitative and narrative assessment data for each faculty member. CONCLUSIONS We identified faculty development needs related to EPA assessments and designed dashboard elements to meet them. The resulting dashboard was used for faculty development sessions. This work will inform the development of CBME assessment dashboards for faculty.
Affiliation(s)
- Yusuf Yilmaz
- Continuing Professional Development Office and McMaster Education Research, Innovation, and Theory (MERIT) Program, McMaster University, Ontario, Canada
- Department of Medical Education, Ege University, Izmir, Turkey
- Robert Carey
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Continuing Professional Development Office and McMaster Education Research, Innovation, and Theory (MERIT) Program, McMaster University, Ontario, Canada
- Emergency Medicine, Department of Medicine, McMaster University, Ontario, Canada
- Venkat Bandi
- Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Shisong Wang
- Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Robert A Woods
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Debajyoti Mondal
- Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
14
Weller JM, Coomber T, Chen Y, Castanelli DJ. Key dimensions of innovations in workplace-based assessment for postgraduate medical education: a scoping review. Br J Anaesth 2021; 127:689-703. [PMID: 34364651 DOI: 10.1016/j.bja.2021.06.038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 05/31/2021] [Accepted: 06/20/2021] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Specialist training bodies continue to devise innovative methods of gathering information on trainee workplace performance to meet the requirements of competency-based medical education. We reviewed recent innovations in workplace-based assessment (WBA) tools to identify strengths, weaknesses, and trade-offs inherent in their design and use. METHODS In this scoping review, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we systematically searched databases between 2009 and 2019 for WBA tools with novel characteristics not typically seen in traditional WBAs. These included innovations in rating scales, ways of collecting information, technological innovations, ways of triggering WBAs, and approaches to compiling and using information. RESULTS We identified 30 innovative WBA tools whose characteristics could be categorised into seven dimensions: frequency of assessment, granularity (unit of performance assessed), coverage of the curriculum, rating method, initiation of the WBA, information use, and incentives. These dimensions had multiple interdependencies and trade-offs, often balancing generating assessment data with available resources. Philosophical stance on assessment also influenced WBA choice, for example prioritising trainee-centred learning (i.e. initiation of WBA and transparency of assessment data), perceptions of assessment and feedback as burdensome or beneficial, and holistic vs reductionist views on assessment of performance. CONCLUSIONS Our synthesis of the literature on innovative WBAs provides a framework for categorising tool characteristics across seven dimensions, systematically teasing apart the considerations in design and use of workplace assessments. It also draws attention to the trade-offs inherent in tool design and selection, and enables a more deliberate consideration of the tool characteristics most appropriate to the local context.
Affiliation(s)
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand.
- Ties Coomber
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Yan Chen
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia
15
Sample S, Al Rimawi H, Bérczi B, Chorley A, Pardhan A, Chan TM. Seeing potential opportunities for teaching (SPOT): Evaluating a bundle of interventions to augment entrustable professional activity acquisition. AEM EDUCATION AND TRAINING 2021; 5:e10631. [PMID: 34471797 PMCID: PMC8381386 DOI: 10.1002/aet2.10631] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Revised: 05/10/2021] [Accepted: 06/02/2021] [Indexed: 06/13/2023]
Abstract
INTRODUCTION Within the Canadian competency-based medical education system, entrustable professional activities (EPAs) are used to assess residents on performed clinical duties. This study aimed to determine whether implementing a bundle of two interventions (a case-based discussion intervention and a rotation-based nudging system) could increase the number of EPA assessments completed for our trainees. METHODS The authors designed an intervention bundle with two components: 1) a case-based workshop where trainees discussed which EPAs could be assessed with multiple cases and 2) a nudging system wherein each trainee was reminded of EPAs that would be useful to them on each rotation in their first year. We conducted a retrospective program evaluation to compare the intervention cohort (2019) to two historical cohorts using similar EPAs (2017, 2018). RESULTS Data from 22 trainees (seven in 2017, eight in 2018, and seven in 2019) were analyzed. There was a marked increase in the total number of EPA assessments acquired in the 2019 cohort (average per resident = 285.7, 95% confidence interval [CI] = 256.1 to 312.3, range = 195-350) compared to the two other years (2018 [average = 132.4, 95% CI = 107.5 to 157.02, range = 107-167] and 2017 [average = 70.1, 95% CI = 45.3 to 91.0, range = 49-95]), yielding an effect size of Cohen's d = 4.02 for our intervention bundle. CONCLUSIONS Within the limitations of a small sample size, there was a strong effect of introducing two interventions (a case-based orientation and a nudging system) on EPA assessments for PGY-1 residents. These strategies may be useful to others seeking to improve EPA assessment numbers in other specialties and clinical environments.
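The reported effect size is Cohen's d: the difference in cohort means scaled by the pooled standard deviation. A minimal sketch with invented per-resident EPA totals (not the study's data):

```python
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: difference in group means divided by the pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical EPA assessment totals per resident: intervention vs. historical cohort
print(cohens_d([280, 290, 300], [120, 130, 140]))  # → 16.0
```

Values of d above 0.8 are conventionally labeled large, so the study's d = 4.02 reflects almost non-overlapping cohort distributions.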
Affiliation(s)
- Spencer Sample
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Hussein Al Rimawi
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Beatrix Bérczi
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Alexander Chorley
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Alim Pardhan
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Teresa M. Chan
- Emergency Medicine Postgraduate Training Program, McMaster Royal College of Physicians and Surgeons of Canada, Hamilton, Ontario, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
16
The competency-based medical education evolution of Canadian emergency medicine specialist training. CAN J EMERG MED 2021; 22:95-102. [PMID: 31965965 DOI: 10.1017/cem.2019.417] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Canadian specialist emergency medicine (EM) residency training is undergoing the most significant transformation in its history. This article describes the rationale, process, and redesign of EM competency-based medical education. The rationale for this evolution in residency education includes 1) improved public trust through increased transparency of the quality and rigour of residency education, 2) improved fiscal accountability to government and institutions regarding specialist EM training, 3) improved assessment systems to replace poorly functioning end-of-rotation assessment reports and an overemphasis on high-stakes, end-of-training examinations, and 4) tailored learning for residents to address individualized needs. A working group with geographic and stakeholder representation convened over a 2-year period. A consensus process for decision-making was used. Four key design features of the new residency education design include 1) specialty EM-specific outcomes to be achieved in residency; 2) designation of four progressive stages of training, linked to required learning experiences and entrustable professional activities to be achieved at each stage; 3) tailored learning that provides residency programs and learners flexibility to adapt to local resources and learner needs; and 4) programmatic assessment that emphasizes systematic, longitudinal assessments from multiple sources, sampling sentinel abilities. Required future study includes a program evaluation of this complex education intervention to ensure that intended outcomes are achieved and unintended outcomes are identified.
17
Caretta-Weyer HA, Chan T, Bigham BL, Kinnear B, Huwendiek S, Schumacher DJ. If we could turn back time: Imagining time-variable, competency-based medical education in the context of COVID-19. MEDICAL TEACHER 2021; 43:774-779. [PMID: 34027813 DOI: 10.1080/0142159x.2021.1925641] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The COVID-19 pandemic has exposed a paradox in historical models of medical education: organizations responsible for applying consistent standards for progression have needed to adapt to training environments marked by inconsistency and change. Although some institutions have maintained their traditional requirements, others have accelerated their programs to rush nearly graduated trainees to the front lines. One interpretation of the unplanned shortening of the duration of training programs during a crisis is that standards have been lowered. But it is also possible that these trainees were examined according to the same standards as usual and were judged to have already met them. This paper discusses the impacts of the COVID-19 pandemic on the current workforce, provides an analysis of how competency-based medical education (CBME) in the context of the pandemic might have mitigated wide-scale disruption, and identifies structural barriers to achieving an ideal state. The paper further calls upon universities, health centres, governments, certifying bodies, regulatory authorities, and health care professionals to work collectively on a truly time-variable model of CBME. The pandemic has made clear that time variability in medical education already exists and should be adopted widely and formally. If our systems today had used a framework of outcome competencies, sequenced progression, tailored learning, focused instruction, and programmatic assessment, we may have been even more nimble in changing our systems to care for our patients with COVID-19.
Affiliation(s)
- Teresa Chan
- Department of Medicine, McMaster University, Hamilton, Canada
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Hamilton, Canada
- Blair L Bigham
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
- Benjamin Kinnear
- Division of Hospital Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Sören Huwendiek
- Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
- Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Division of Emergency Medicine, Cincinnati Children's Hospital, Cincinnati, OH, USA
18
Richardson D, Kinnear B, Hauer KE, Turner TL, Warm EJ, Hall AK, Ross S, Thoma B, Van Melle E. Growth mindset in competency-based medical education. MEDICAL TEACHER 2021; 43:751-757. [PMID: 34410891 DOI: 10.1080/0142159x.2021.1928036] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The ongoing adoption of competency-based medical education (CBME) across health professions training draws focus to learner-centred educational design and the importance of fostering a growth mindset in learners, teachers, and educational programs. An emerging body of literature addresses the instructional practices and features of learning environments that foster the skills and strategies necessary for trainees to be partners in their own learning and progression to competence and to develop skills for lifelong learning. Aligned with this emerging area is an interest in Dweck's self-theories and the concept of the growth mindset. The growth mindset is an implicit belief held by an individual that intelligence and abilities are changeable, rather than fixed and immutable. In this paper, we present an overview of the growth mindset and how it aligns with the goals of CBME. We describe the challenges associated with shifting away from the fixed mindset of most traditional medical education assumptions and practices and discuss potential solutions and strategies at the individual, relational, and systems levels. Finally, we present future directions for research to better understand the growth mindset in the context of CBME.
Affiliation(s)
- Denyse Richardson
- Department of Medicine, Division of Physiatry, University of Toronto, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Benjamin Kinnear
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Karen E Hauer
- University of California, San Francisco, San Francisco, CA, USA
- Teri L Turner
- Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Eric J Warm
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Andrew K Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, Queen's University, Kingston, Canada
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Brent Thoma
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Canada
- Elaine Van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
19
Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM EDUCATION AND TRAINING 2021; 5:e10519. [PMID: 34041428 PMCID: PMC8138102 DOI: 10.1002/aet2.10519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 07/31/2020] [Accepted: 08/09/2020] [Indexed: 05/07/2023]
Abstract
OBJECTIVES Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the degree of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. METHODS We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. RESULTS We identified 38 publications, comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed in a single institution, and there were limited validity or reliability assessments reported. CONCLUSIONS The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan
- Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney
- Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
20
Chan TM, Sebok‐Syer SS, Cheung WJ, Pusic M, Stehman C, Gottlieb M. Workplace-based Assessment Data in Emergency Medicine: A Scoping Review of the Literature. AEM EDUCATION AND TRAINING 2021; 5:e10544. [PMID: 34099992 PMCID: PMC8166307 DOI: 10.1002/aet2.10544] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Revised: 10/02/2020] [Accepted: 10/05/2020] [Indexed: 06/01/2023]
Abstract
OBJECTIVE In the era of competency-based medical education (CBME), the collection of increasing amounts of trainee data is being mandated by accrediting bodies such as the Accreditation Council for Graduate Medical Education and the Royal College of Physicians and Surgeons of Canada. However, few efforts have been made to synthesize the literature on current issues surrounding workplace-based assessment (WBA) data. This scoping review seeks to synthesize the landscape of literature on the topic of data collection and utilization for trainees' WBAs in emergency medicine (EM). METHODS The authors conducted a scoping review in the style of Arksey and O'Malley, seeking to synthesize and map literature on collecting, aggregating, and reporting WBA data. The authors extracted, mapped, and synthesized literature that describes, supports, and substantiates effective data collection and utilization in the context of the CBME movement within EM. RESULTS Our literature search retrieved 189 potentially relevant references (after removing duplicates) that were screened to 29 abstracts and papers relevant to collecting, aggregating, and reporting WBAs. Our analysis shows that there is an increasing temporal trend toward contributions in these topics, with the majority of the papers (16/29) being published in the past 3 years alone. CONCLUSION There is increasing interest in the areas around data collection and utilization in the age of CBME. The field, however, is only beginning to emerge, leaving more work that can and should be done in this area.
Affiliation(s)
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine and Division of Education & Innovation, McMaster University, Hamilton, Ontario, Canada
- Program for Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory, McMaster University, Hamilton, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Martin Pusic
- Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
21
Jamieson J, Hay M, Gibson S, Palermo C. Implementing programmatic assessment transforms supervisor attitudes: An explanatory sequential mixed methods study. MEDICAL TEACHER 2021; 43:709-717. [PMID: 33705668 DOI: 10.1080/0142159x.2021.1893678] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
INTRODUCTION Programmatic assessment (PA) is an increasingly popular approach to competency-based assessment (CBA), yet evaluation evidence is limited. This study aimed to identify and explore supervisor attitudes before and after implementing a novel PA using a sequential explanatory mixed methods design. In phase one, a survey was used to identify supervisor perspectives on work-based placements, PA and CBA. Survey results were then applied to develop focus group questions to further explore supervisor attitudes. RESULTS PA was found to improve supervisor-student relationships by removing high-stakes assessment decisions and creating greater capacity for feedback and teaching, leading to a productive learning environment. Assessment was perceived as an important role and supervisors wanted to feel valued and heard within PA. Trust was conceptualised as a triad between supervisor, student and university, and enabled supervisors to engage with PA which was important for success. Supervisor learning of PA was experiential and often supported by students, highlighting the need for hands-on training. CONCLUSION Participants reported a high level of agreement with PA and CBA principles which may have made them amenable to educational change. Further research is needed to explore the experience of all stakeholders and to understand how worldviews and culture influence assessment initiatives.
Affiliation(s)
- Janica Jamieson
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- School of Medical and Health Sciences, Edith Cowan University, Perth, Australia
- Margaret Hay
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- Simone Gibson
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
- Claire Palermo
- Department of Nutrition and Dietetics, Monash University, Melbourne, Australia
22
Schut S, Maggio LA, Heeneman S, van Tartwijk J, van der Vleuten C, Driessen E. Where the rubber meets the road - An integrative review of programmatic assessment in health care professions education. PERSPECTIVES ON MEDICAL EDUCATION 2021; 10:6-13. [PMID: 33085060 PMCID: PMC7809087 DOI: 10.1007/s40037-020-00625-w] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Revised: 09/21/2020] [Accepted: 09/29/2020] [Indexed: 05/12/2023]
Abstract
INTRODUCTION Programmatic assessment was introduced as an approach to design assessment programmes with the aim to simultaneously optimize the decision-making and learning function of assessment. An integrative review was conducted to review and synthesize results from studies investigating programmatic assessment in health care professions education in practice. METHODS The authors systematically searched PubMed, Web of Science, and ERIC to identify studies published since 2005 that reported empirical data on programmatic assessment. Characteristics of the included studies were extracted and synthesized, using descriptive statistics and thematic analysis. RESULTS Twenty-seven studies were included, which used quantitative methods (n = 10), qualitative methods (n = 12) or mixed methods (n = 5). Most studies were conducted in clinical settings (77.8%). Programmatic assessment was found to enable meaningful triangulation for robust decision-making and used as a catalyst for learning. However, several problems were identified, including overload in assessment information and the associated workload, counterproductive impact of using strict requirements and summative signals, lack of a shared understanding of the nature and purpose of programmatic assessment, and lack of supportive interpersonal relationships. Thematic analysis revealed that the success and challenges of programmatic assessment were best understood by the interplay between quantity and quality of assessment information, and the influence of social and personal aspects on assessment perceptions. CONCLUSION Although some of the evidence may seem compelling to support the effectiveness of programmatic assessment in practice, tensions will emerge when simultaneously stimulating the development of competencies and assessing their results. The identified factors and inferred strategies provide guidance for navigating these tensions.
Affiliation(s)
- Suzanne Schut
- School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Lauren A Maggio
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Sylvia Heeneman
- School of Health Professions Education, Department of Pathology, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands
- Jan van Tartwijk
- Department of Education, Utrecht University, Utrecht, The Netherlands
- Cees van der Vleuten
- School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Erik Driessen
- School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
23
Li S, Acai A, Sherbino J, Chan TM. The Teacher, the Assessor, and the Patient Protector: A Conceptual Model Describing How Context Interfaces With the Supervisory Roles of Academic Emergency Physicians. AEM EDUCATION AND TRAINING 2021; 5:52-62. [PMID: 33521491 PMCID: PMC7821073 DOI: 10.1002/aet2.10431] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Revised: 12/05/2019] [Accepted: 12/12/2019] [Indexed: 05/10/2023]
Abstract
OBJECTIVES Emergency medicine is a fast-paced specialty that requires emergency physicians to respond to rapidly evolving patient presentations while engaging in clinical supervision. Most research on supervisory roles has focused on the behaviors of attending physicians, including their individual preferences for supervision and level of entrustment of clinical tasks to trainees. However, less research has investigated how the clinical context (patient case complexity, workflow) influences clinical supervision. In this study, we examined how the context of the emergency department (ED) shapes the ways in which emergency physicians reconcile their competing roles in patient care and clinical supervision to optimize learning and ensure patient safety. METHODS Emergency physicians who regularly participated in clinical supervision in several academic teaching hospitals were individually interviewed using a semi-structured format. The interviews were transcribed and analyzed using a constructivist grounded theory approach. RESULTS Sixteen emergency physicians were asked to reflect on their clinical supervisory roles in the ED. We conceptualized a model that describes three prominent roles: teacher, assessor, and patient protector. Contextual features such as trainee competence, pace of the ED, patient complexity, and the culture of academic medicine influenced the extent to which certain roles were considered salient at any given time. CONCLUSIONS This conceptual model can inform researchers and medical educators about the role of context in accentuating or minimizing various roles of emergency physicians. Identifying how context interfaces with these roles may help design faculty development initiatives aimed to navigate the tension between urgent patient care and medical education for emergency physicians.
Affiliation(s)
- Shelly‐Anne Li
- Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
- Anita Acai
- Department of Psychology, Neuroscience & Behaviour, and Office of Education Science, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, Hamilton, Ontario, Canada
- Teresa M. Chan
- Division of Emergency Medicine, Department of Medicine, and McMaster Education Research, Innovation and Theory (MERIT) Program, McMaster University, Hamilton, Ontario, Canada
- Program for Faculty Development, McMaster University, Hamilton, Ontario, Canada
24
Lee CW, Chen GL, Lee YK. User Experience Evaluation of the EPAs-Based e-Portfolio System and an Analysis of Its Impact. J Acute Med 2020; 10:115-125. [PMID: 33209570 DOI: 10.6705/j.jacme.202009_10(3).0003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Background To carry out competency-based medical education, this study established five Entrustable Professional Activities (EPAs) for emergency medicine residents. The EPAs involve substantial data collection, which requires integration and analysis for the final interpretation. Therefore, the "EPAs-Based e-Portfolio System" was developed to assist users in performing ad-hoc assessment, recording of a discussion, teaching, and feedback. The purpose of this study is to examine, from the perspective of the Technology Acceptance Model, residents' and clinicians' experience of the EPAs-Based e-Portfolio System, including the use of functions such as recording, feedback, and assessment, as well as the impact thereof. Methods This study uses in-depth interviews as a means of data collection. The interviewees are from emergency medicine training hospitals in north, central, and south Taiwan: 11 resident doctors and nine medical teachers. Results The interviewees agree that (1) the EPAs-Based e-Portfolio System provides users with a complete learning trajectory record through cloud storage and ease of use; (2) it can assist users to gain feedback, case review, and reflection; (3) information on user status can reflect their learning progress, competencies, and performance; (4) other potential functions that can be added include shortcut keys, initiation of assessment sheets by a learner, feedback to teacher's comment, and voice/picture input. Conclusions The results of this study indicate that the easier a system is for users to use, the more helpful they will consider it and the more positive they will be, which will then translate into greater willingness to use the system and higher frequency of actual use. The system can authentically reflect trainees' professional capabilities if the ad-hoc teaching and feedback in the clinical setting connect strongly with the online assessment and recording.
Affiliation(s)
- Chen-Wei Lee
- Department of Emergency Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan
- Guan-Liang Chen
- Center for Innovative Research on Aging Society, National Chung Cheng University, Chiayi, Taiwan
- Yi-Kung Lee
- Department of Emergency Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan
- School of Medicine, Tzu Chi University, Hualien, Taiwan
25
Thoma B, Hall AK, Clark K, Meshkat N, Cheung WJ, Desaulniers P, Ffrench C, Meiwald A, Meyers C, Patocka C, Beatty L, Chan TM. Evaluation of a National Competency-Based Assessment System in Emergency Medicine: A CanDREAM Study. J Grad Med Educ 2020; 12:425-434. [PMID: 32879682 PMCID: PMC7450748 DOI: 10.4300/jgme-d-19-00803.1] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 02/11/2020] [Accepted: 05/20/2020] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND In 2018, Canadian postgraduate emergency medicine (EM) programs began implementing a competency-based medical education (CBME) assessment program. Studies evaluating these programs have focused on broad outcomes using data from national bodies and lack data to support program-specific improvement. OBJECTIVE We evaluated the implementation of a CBME assessment program within and across programs to identify successes and opportunities for improvement at the local and national levels. METHODS Program-level data from the 2018 resident cohort were amalgamated and analyzed. The number of entrustable professional activity (EPA) assessments (overall and for each EPA) and the timing of resident promotion through program stages were compared between programs and to the guidelines provided by the national EM specialty committee. Total EPA observations from each program were correlated with the number of EM and pediatric EM rotations. RESULTS Data from 15 of 17 (88%) programs containing 9842 EPA observations from 68 of 77 (88%) EM residents in the 2018 cohort were analyzed. Average numbers of EPAs observed per resident in each program varied from 92.5 to 229.6, correlating with the number of blocks spent on EM and pediatric EM (r = 0.83, P < .001). Relative to the specialty committee's guidelines, residents were promoted later than expected (eg, one-third of residents had a 2-month delay to promotion from the first to second stage) and with fewer EPA observations than suggested. CONCLUSIONS There was demonstrable variation in EPA-based assessment numbers and promotion timelines between programs and with national guidelines.
26
Schut S, Heeneman S, Bierer B, Driessen E, van Tartwijk J, van der Vleuten C. Between trust and control: Teachers' assessment conceptualisations within programmatic assessment. MEDICAL EDUCATION 2020; 54:528-537. [PMID: 31998987 PMCID: PMC7318263 DOI: 10.1111/medu.14075] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/09/2019] [Revised: 01/11/2020] [Accepted: 01/20/2020] [Indexed: 05/14/2023]
Abstract
OBJECTIVES Programmatic assessment attempts to facilitate learning through individual assessments designed to be low-stakes and used only in aggregate for high-stakes decisions. In practice, low-stakes assessments have yet to reach their potential as catalysts for learning. We explored how teachers conceptualise assessments within programmatic assessment and how they engage with learners in assessment relationships. METHODS We used a constructivist grounded theory approach to explore teachers' assessment conceptualisations and assessment relationships in the context of programmatic assessment. We conducted 23 semi-structured interviews at two different graduate-entry medical training programmes following a purposeful sampling approach. Data collection and analysis were conducted iteratively until we reached theoretical sufficiency. We identified themes using a process of constant comparison. RESULTS Results showed that teachers conceptualise low-stakes assessments in three different ways: to stimulate and facilitate learning; to prepare learners for the next step; and to use as feedback to gauge the teacher's own effectiveness. Teachers intended to engage in and preserve safe, yet professional and productive, working relationships with learners to enable assessment for learning while securing high-quality performance and achievement of standards. When teachers' assessment conceptualisations were more focused on accounting conceptions, this risked creating tension in the teacher-learner assessment relationship. Teachers struggled between taking control and allowing learners' independence. CONCLUSIONS Teachers believe programmatic assessment can have a positive impact on both teaching and student learning. However, teachers' conceptualisations of low-stakes assessments are not focused solely on learning and also involve stakes for teachers. Sampling across different assessments and the introduction of progress committees were identified as important design features to support teachers and preserve the benefits of prolonged engagement in assessment relationships. These insights contribute to the design of effective implementations of programmatic assessment within the medical education context.
Affiliation(s)
- Suzanne Schut
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Sylvia Heeneman
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Pathology, Cardiovascular Research Institute Maastricht, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Beth Bierer
- Education Institute, Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Erik Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Cees van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
27
Thoma B, Bandi V, Carey R, Mondal D, Woods R, Martin L, Chan T. Developing a dashboard to meet Competence Committee needs: a design-based research project. CANADIAN MEDICAL EDUCATION JOURNAL 2020; 11:e16-e34. [PMID: 32215140 PMCID: PMC7082472 DOI: 10.36834/cmej.68903] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
BACKGROUND Competency-based programs are being adopted in medical education around the world. Competence Committees must visualize learner assessment data effectively to support their decision-making. Dashboards play an integral role in decision support systems in other fields. Design-based research allows the simultaneous development and study of educational environments. METHODS We utilized a design-based research process within the emergency medicine residency program at the University of Saskatchewan to identify the data, analytics, and visualizations needed by its Competence Committee, and developed a dashboard incorporating these elements. Narrative data were collected from two focus groups, five interviews, and the observation of two Competence Committee meetings. Data were qualitatively analyzed to develop a thematic framework outlining the needs of the Competence Committee and to inform the development of the dashboard. RESULTS The qualitative analysis identified four Competence Committee needs (Explore Workplace-Based Assessment Data, Explore Other Assessment Data, Understand the Data in Context, and Ensure the Security of the Data). These needs were described with narratives and represented through visualizations of the dashboard elements. CONCLUSIONS This work addresses the practical challenges of supporting data-driven decision making by Competence Committees and will inform the development of dashboards for programs, institutions, and learner management systems.
Affiliation(s)
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Correspondence: Dr. Brent Thoma, Room 2646, Box 16, 103 Hospital Drive, Saskatoon, SK S7N 0W8; phone: 1-306-881-0112; Twitter: @Brent_Thoma
- Venkat Bandi
- Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Robert Carey
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Debajyoti Mondal
- Department of Computer Science, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Lynsey Martin
- Department of Emergency Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa Chan
- Division of Emergency Medicine, Department of Medicine, McMaster University, Ontario, Canada
- McMaster Program for Education Research, Innovation, and Theory (MERIT), Ontario, Canada