1. Jarrett JB, Elmes AT, Keller E, Stowe CD, Daugherty KK. Evaluating the Strengths and Barriers of Competency-Based Education in the Health Professions. American Journal of Pharmaceutical Education 2024;88:100709. PMID: 38729616. DOI: 10.1016/j.ajpe.2024.100709.
Abstract
OBJECTIVE This study aimed to define competency-based education (CBE) for pharmacy education and to describe how the strengths and barriers of CBE can support or hinder its implementation. FINDINGS Sixty-five studies from a variety of health professions were included to define competency-based pharmacy education (CBPE) and to identify barriers and benefits from the learner, faculty, institution, and society perspectives. From the 7 identified thematic categories, a CBPE definition was developed: "Competency-based pharmacy education is an outcomes-based curricular model of an organized framework of competencies (knowledge, skills, attitudes) for pharmacists to meet health care and societal needs. This learner-centered curricular model aligns authentic teaching and learning strategies and assessment (emphasizing workplace assessment and quality feedback) while deemphasizing time." SUMMARY This article provides a definition of CBE for application within pharmacy education. The strengths and barriers of CBE were elucidated from the education literature of other health professions. The identified implementation strengths and barriers inform discussion of what will support or hinder the implementation of CBE in pharmacy education.
Affiliation(s)
- Jennie B Jarrett: University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Abigail T Elmes: University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Eden Keller: University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Cindy D Stowe: University of Arkansas for Medical Sciences College of Pharmacy, Little Rock, AR, USA

2. Ho JY, Tuang V, Teo DB, Ponnamperuma G. Development and validation of a new self-assessment tool to measure professionalism among medical students. Annals of the Academy of Medicine, Singapore 2023;52:457-466. PMID: 38920192. DOI: 10.47102/annals-acadmedsg.2022457.
Abstract
Introduction Professionalism is a key quality that medical students should possess, but it is difficult to define and assess, and current assessment tools have room for improvement. This study aimed to design and validate a self-assessment tool to assess professionalism among medical students. Method A questionnaire was created based on 10 tenets of professionalism from the Charter on Medical Professionalism jointly published by the American Board of Internal Medicine Foundation, the American College of Physicians Foundation and the European Federation of Internal Medicine, along with input from Singapore guides. The self-administered questionnaire was administered to Year 2 to 5 students from the Yong Loo Lin School of Medicine, National University of Singapore in a voluntary, anonymised manner in the 2019/2020 academic year. Construct validity and internal reliability were evaluated using Principal Component Analysis (PCA) and Cronbach's alpha, respectively. Results There were 541 respondents in total; after removing incomplete responses, 504 responses were included. Following PCA, a 17-item questionnaire titled "Medical Professionalism: A Self-assessment Tool" (MPAST) with a 5-component solution was obtained. The 5 components were commitment to: (1) the patient's best interest, (2) honesty and integrity, (3) professional competency, (4) patient safety and care, and (5) educational responsibilities. Their Cronbach's alpha values ranged from 0.540 to 0.714, with an overall Cronbach's alpha value of 0.777. Conclusion MPAST is valid, reliable, and practical, and is, to our knowledge, the first validated self-assessment tool to assess professional attributes and behaviours among medical students.
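The reliability and component analyses described above are standard and easy to reproduce. The sketch below is a minimal illustration in Python, assuming a hypothetical respondent-by-item matrix of 5-point Likert scores; the respondent and item counts follow the abstract, but the data are simulated and the variable names are illustrative placeholders.

```python
# Minimal sketch: Cronbach's alpha and a 5-component PCA on simulated
# Likert-scale responses (504 respondents x 17 items, as in the abstract).
# Real questionnaire data would show correlated items and a higher alpha.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(504, 17)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"overall alpha: {cronbach_alpha(X):.3f}")

pca = PCA(n_components=5)           # the paper retained a 5-component solution
pca.fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```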
Affiliation(s)
- Jin Yang Ho: Internal Medicine, National University Health System, Singapore
- Desmond B Teo: Fast and Chronic Programmes, Alexandra Hospital, Singapore
- Gominda Ponnamperuma: Centre for Medical Education, Yong Loo Lin School of Medicine, National University of Singapore

3. Dory V, Wagner M, Cruess R, Cruess S, Young M. If we assess, will they learn? Students' perspectives on the complexities of assessment-for-learning. Canadian Medical Education Journal 2023;14:94-104. PMID: 37719398. PMCID: PMC10500400. DOI: 10.36834/cmej.73875.
Abstract
Introduction Assessment can positively influence learning; however, designing effective assessment-for-learning interventions has proved challenging. We implemented a mandatory assessment-for-learning system, comprising a workplace-based assessment of non-medical-expert competencies and a progress test, in undergraduate medical education and evaluated its impact. Methods We conducted semi-structured interviews with year-3 and year-4 medical students at McGill University to explore how the assessment system had influenced their learning in year 3. We conducted theory-informed thematic analysis of the data. Results Eleven students participated, revealing that the assessment influenced learning through several mechanisms. Some required little student engagement (i.e., feed-up, test-enhanced learning, looking things up after an exam). Others required substantial engagement (e.g., studying for tests, selecting raters for quality feedback, using feedback). Student engagement was moderated by the perceived credibility of the system and of the costs and benefits of engagement. Credibility was shaped by students' goals-in-context: becoming a good doctor, contributing to the healthcare team, and succeeding in assessments. Discussion Our assessment system failed to engage students enough to leverage its full potential. We discuss the inherent flaws and external factors that hindered student engagement. Assessment designers should leverage easy-to-control mechanisms to support assessment-for-learning and anticipate significant collaborative work to modify learning cultures.
Affiliation(s)
- Valérie Dory: Department of General Practice, Faculty of Medicine, Université de Liège, Liège, Belgium; Department of Medicine and Centre for Medical Education, Faculty of Medicine, McGill University, Quebec, Canada; Institute of Health Sciences Education and Academic Centre of General Practice, Université catholique de Louvain, Brussels, Belgium
- Maryam Wagner: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Richard Cruess: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Sylvia Cruess: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada
- Meredith Young: Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Quebec, Canada

4. Westein MPD, Koster AS, Daelmans HEM, Bouvy ML, Kusurkar RA. How progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships: a mixed method study. Advances in Health Sciences Education: Theory and Practice 2023;28:205-222. PMID: 36094680. PMCID: PMC9992254. DOI: 10.1007/s10459-022-10153-3.
Abstract
The combination of measuring performance and giving feedback creates tension between the formative and summative purposes of progress evaluations and can be challenging for supervisors. There are conflicting perspectives and evidence on the effects supervisor-trainee relationships have on assessing performance. The aim of this study was to learn how progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships. Progress evaluations in a two-year community-pharmacy specialization program were studied with a mixed-method approach. An adapted version of the Canadian Medical Education Directives for Specialists (CanMEDS) framework was used. The validity of the performance evaluation scores of 342 trainees was analyzed using repeated-measures ANOVA. Semi-structured interviews were held with fifteen supervisors to investigate their response processes, the utility of the progress evaluations, and the influence of supervisor-trainee relationships. Time and CanMEDS roles affected the three-monthly progress evaluation scores. Interviews revealed that supervisors varied in their response processes. They were more committed to stimulating development than to scoring actual performance. Progress evaluations were used to discuss and give feedback on trainee development and to add structure to the learning process. A positive supervisor-trainee relationship was seen as the foundation for feedback, and supervisors preferred the roles of educator, mentor, and coach over the role of assessor. We found that progress evaluations are a good method for directing feedback in longitudinal supervisor-trainee relationships, but the reliability of scoring performance was low. We recommend that progress evaluations be kept independent of formal assessments in order to minimize supervisors' role conflicts.
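A repeated-measures ANOVA of the kind reported above (testing effects of time and CanMEDS role on three-monthly scores) can be sketched as follows; the long-format table, counts, and column names are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: two-way repeated-measures ANOVA on simulated progress
# evaluation scores, with evaluation moment and CanMEDS role as within-
# subject factors. One observation per (trainee, time, role) cell.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
trainees, times, roles = 30, 8, 7   # e.g. 8 three-monthly evaluations, 7 roles
df = pd.DataFrame(
    [(t, m, r, rng.normal(5 + 0.2 * m, 1))   # scores drift upward over time
     for t in range(trainees) for m in range(times) for r in range(roles)],
    columns=["trainee", "time", "role", "score"],
)

result = AnovaRM(df, depvar="score", subject="trainee",
                 within=["time", "role"]).fit()
print(result)   # F-tests for time, role, and their interaction
```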
Affiliation(s)
- Marnix P D Westein: Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands; Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands; The Royal Dutch Pharmacists Association (KNMP), The Hague, The Netherlands
- A S Koster: Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands
- H E M Daelmans: Programme Director, Master of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands
- M L Bouvy: Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands
- R A Kusurkar: Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands

5. Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. Currents in Pharmacy Teaching & Learning 2022;14:701-711. PMID: 35809899. DOI: 10.1016/j.cptl.2022.06.014.
Abstract
INTRODUCTION Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directives for Specialists (CanMEDS) framework. METHODS The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process. RESULTS A good model fit was found for the adapted CanMEDS-based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies, and some were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director. CONCLUSIONS The adapted CanMEDS competency framework provided a valid structure for measuring competence. The reliability/precision of the SPE measurements could not be established, and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
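The abstract's point that reliability could not be isolated is a generalizability-theory problem: with one supervisor per trainee, rater variance is aliased with trainee variance. The sketch below shows a one-facet variance-component decomposition under that constraint, on simulated data with hypothetical sizes and names.

```python
# Minimal sketch: one-facet G-study (trainees x occasions) via expected mean
# squares. Because each trainee has a single supervisor, the "trainee"
# component below also absorbs supervisor leniency/severity; a separate rater
# facet cannot be estimated, mirroring the problem noted in the abstract.
import numpy as np

rng = np.random.default_rng(2)
n_p, n_o = 342, 2                       # trainees x occasions (year 1, year 2)
scores = rng.normal(6, 1, size=(n_p, n_o))

grand = scores.mean()
ms_p = n_o * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_o = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_o - 1)
ss_res = ((scores - grand) ** 2).sum() - ms_p * (n_p - 1) - ms_o * (n_o - 1)
ms_res = ss_res / ((n_p - 1) * (n_o - 1))

var_p = max((ms_p - ms_res) / n_o, 0)   # trainee (+ confounded supervisor)
var_o = max((ms_o - ms_res) / n_p, 0)   # occasion
print(f"trainee+supervisor: {var_p:.3f}, occasion: {var_o:.3f}, residual: {ms_res:.3f}")
```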
Affiliation(s)
- Marnix P D Westein: Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands; Royal Dutch Pharmacists Association (KNMP), The Hague, the Netherlands; Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Andries S Koster: Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Hester E M Daelmans: Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Carlos F Collares: Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Marcel L Bouvy: Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Rashmi A Kusurkar: Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands

6. Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. Advances in Health Sciences Education: Theory and Practice 2021;26:1597-1623. PMID: 34370126. DOI: 10.1007/s10459-021-10063-w.
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found that (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; and (c) we experienced uncertainty in determining the (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As a result, assessing the quality and defensibility of claims may come to depend more on who is interpreting than on what is being interpreted.
Affiliation(s)
- Walter Tavares: The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, and Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Kathryn Hodwitz: Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland: The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng: The Wilson Centre and Department of Speech-Language Pathology, Temerty Faculty of Medicine, University of Toronto; Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper: The Wilson Centre, University Health Network/University of Toronto; Division of General Internal Medicine, Sunnybrook Health Sciences Centre; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen: Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz: Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges: The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada

7. Bajwa NM, Nendaz MR, Posfay-Barbe KM, Yudkowsky R, Park YS. A Meaningful and Actionable Professionalism Assessment: Validity Evidence for the Professionalism Mini-Evaluation Exercise (P-MEX) Across 8 Years. Academic Medicine: Journal of the Association of American Medical Colleges 2021;96:S151-S157. PMID: 34348372. DOI: 10.1097/acm.0000000000004286.
Abstract
PURPOSE With the growing importance of professionalism in medical education, it is imperative to develop professionalism assessments that demonstrate robust validity evidence. The Professionalism Mini-Evaluation Exercise (P-MEX) is an assessment that has demonstrated validity evidence in the authentic clinical setting. Identifying the factorial structure of professionalism assessments determines professionalism constructs that can be used to provide diagnostic and actionable feedback. This study examines validity evidence for the P-MEX, a focused and standardized assessment of professionalism, in a simulated patient setting. METHOD The P-MEX was administered to 275 pediatric residency applicants as part of a 3-station standardized patient encounter, pooling data over an 8-year period (2012 to 2019 residency admission years). Reliability and construct validity for the P-MEX were evaluated using Cronbach's alpha, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA). RESULTS Cronbach's alpha for the P-MEX was 0.91. The EFA yielded 4 factors: doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. The CFA demonstrated good model fit with a root-mean-square error of approximation of 0.058 and a comparative fit index of 0.92, confirming the reproducibility of the 4-factor structure of professionalism. CONCLUSIONS The P-MEX demonstrates construct validity as an assessment of professionalism, with 4 underlying subdomains in doctor-patient relationship skills, interprofessional skills, professional demeanor, and reflective skills. These results yield new confidence in providing diagnostic and actionable subscores within the P-MEX assessment. Educators may wish to integrate the P-MEX assessment into their professionalism curricula.
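To make the analysis pipeline above concrete, the sketch below runs the same sequence (internal consistency, then a 4-factor exploratory solution) on simulated scores. The applicant count follows the abstract; the item count, data, and names are assumptions, and a confirmatory model reporting RMSEA/CFI would additionally need an SEM package such as semopy or lavaan.

```python
# Minimal sketch: Cronbach's alpha plus a 4-factor EFA on simulated
# applicant-by-item scores built from 4 latent professionalism factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_applicants, n_items = 275, 24                     # item count is an assumption
latent = rng.normal(size=(n_applicants, 4))
loadings = rng.uniform(0.4, 0.8, size=(4, n_items))
X = latent @ loadings + rng.normal(scale=0.5, size=(n_applicants, n_items))

k = X.shape[1]
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

efa = FactorAnalysis(n_components=4, rotation="varimax")
efa.fit(X)
print("item-by-factor loadings:", efa.components_.T.shape)
```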
Affiliation(s)
- Nadia M Bajwa: residency program director, Department of General Pediatrics, Children's Hospital, Geneva University Hospitals, and faculty member, Unit of Development and Research in Medical Education (UDREM), Faculty of Medicine, University of Geneva, Geneva, Switzerland; ORCID: http://orcid.org/0000-0002-1445-4594
- Mathieu R Nendaz: professor and director, Unit of Development and Research in Medical Education (UDREM), Faculty of Medicine, University of Geneva, and attending physician, Division of General Internal Medicine, University Hospitals of Geneva, Geneva, Switzerland; ORCID: http://orcid.org/0000-0003-3795-3254
- Klara M Posfay-Barbe: professor and chairperson, Department of General Pediatrics, Children's Hospital, Geneva University Hospitals, Geneva, Switzerland; ORCID: https://orcid.org/0000-0001-9464-5704
- Rachel Yudkowsky: professor, Department of Medical Education, College of Medicine at the University of Illinois at Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-2145-7582
- Yoon Soo Park: associate professor, Harvard Medical School, and director of health professions education research, Massachusetts General Hospital, Boston, Massachusetts; ORCID: http://orcid.org/0000-0001-8583-4335

8. Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Academic Medicine: Journal of the Association of American Medical Colleges 2021;96:S164-S174. PMID: 34406132. DOI: 10.1097/acm.0000000000004366.
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson: research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz: first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West: professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213

9. Tanaka P, Park YS, Roby J, Ahn K, Kakazu C, Udani A, Macario A. Milestone Learning Trajectories of Residents at Five Anesthesiology Residency Programs. Teaching and Learning in Medicine 2021;33:304-313. PMID: 33327788. DOI: 10.1080/10401334.2020.1842210.
Abstract
Construct: Every six months, residency programs report their trainees' Milestone Level achievement to the Accreditation Council for Graduate Medical Education (ACGME). Milestones should enable the learner and training program to know an individual's competency development trajectory. Background: Milestone Level ratings for residents grouped by specialty (e.g., Internal Medicine and Emergency Medicine) show that, in aggregate, senior residents receive higher ratings than junior residents. Anesthesiology Milestones, as assessed by both residents and faculty, also have a positive linear relationship with postgraduate year. However, these studies have been cross-sectional rather than longitudinal cohort studies, and studies of how individual residents progress during the course of training are needed. Longitudinal analysis of performance assessment trajectories addresses a relevant validity question for the Next Accreditation System. We explored the application of learning analytics to longitudinal Milestones data to: 1) measure the frequency of "straight-lining"; 2) assess the proportion of residents that reach "Level 4" (ready for unsupervised practice) by graduation for each subcompetency; 3) identify variability among programs and individual residents in their baseline Milestone Level and rates of improvement; and 4) determine how hypothetically constructed growth curve models fit the Milestones data reported to the ACGME. Approach: De-identified Milestone Level ratings in each of the 25 subcompetencies submitted semiannually to the ACGME from July 1, 2014 to June 30, 2017 were retrospectively analyzed for graduating residents (n = 67) from a convenience sample of five anesthesia residency programs. The data reflected longitudinal resident Milestone progression from the beginning of the first year to the end of the third and final year of clinical anesthesiology training. The frequency of straight-lining, defined as a resident receiving the exact same Milestone Level rating for all 25 subcompetencies on a given 6-month report, was calculated for each program. Every resident was evaluated six times during training, giving the possibility of six straight-lined ratings. Findings: The number of residents per program ranged from 5 to 21 (median 13, range 16). Mean Milestone Level ratings for subcompetencies were significantly different at each six-month assessment (p < 0.001). The frequency of straight-lining varied significantly by program, from 9% to 57% (median 22%). Depending on the program, 53% to 100% (median 86%) of residents reached the graduation target of Level 4 or higher in all 25 anesthesiology subcompetencies. Nine to 18% of residents did not achieve a Level 4 rating for at least one subcompetency at any time during their residency. Across programs, significant variability was found in first-year clinical anesthesia training Milestone Levels, as well as in the rate of improvement, for five of the six core competencies. Conclusions: Anesthesia residents' Milestone Level growth trajectories as reported to the ACGME vary significantly across individual residents as well as by program. The present study offers a case example that raises concerns regarding the validity of the Next Accreditation System as it is currently used by some residency programs.
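The straight-lining metric is fully specified in the abstract (all 25 subcompetency ratings identical on a 6-month report), so it can be computed directly; the sketch below does so on a simulated ratings array with the abstract's dimensions, with all values and names illustrative.

```python
# Minimal sketch: flag straight-lined reports in a residents x reports x
# subcompetencies array (67 x 6 x 25, as in the abstract) and summarize rates.
import numpy as np

rng = np.random.default_rng(4)
ratings = rng.choice([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5],
                     size=(67, 6, 25))
ratings[0, 0, :] = 3.0                  # force two straight-lined reports
ratings[1, 2, :] = 2.5                  # for illustration

# A report is straight-lined when every rating equals the first one.
straight = (ratings == ratings[:, :, :1]).all(axis=2)   # residents x reports
print(f"straight-lined reports overall: {straight.mean():.1%}")
print(f"residents with at least one: {straight.any(axis=1).mean():.1%}")
```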
Affiliation(s)
- Pedro Tanaka: Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA
- Yoon Soo Park: Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Jay Roby: Department of Anesthesiology, University of Southern California, Los Angeles, California, USA
- Kyle Ahn: Department of Anesthesiology, University of California Irvine, Irvine, California, USA
- Clinton Kakazu: Harbor-UCLA Medical Center, Los Angeles, California, USA
- Ankeet Udani: Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina, USA
- Alex Macario: Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA

10. Pinilla S, Kyrou A, Klöppel S, Strik W, Nissen C, Huwendiek S. Workplace-based assessments of entrustable professional activities in a psychiatry core clerkship: an observational study. BMC Medical Education 2021;21:223. PMID: 33882926. PMCID: PMC8059233. DOI: 10.1186/s12909-021-02637-4.
Abstract
BACKGROUND Entrustable professional activities (EPAs) in competency-based, undergraduate medical education (UME) have led to new formative workplace-based assessments (WBA) using entrustment-supervision scales in clerkships. We conducted an observational, prospective cohort study to explore the usefulness of a WBA designed to assess core EPAs in a psychiatry clerkship. METHODS We analyzed changes in self-entrustment ratings of students and the supervisors' ratings per EPA. Timing and frequencies of learner-initiated WBAs based on a prospective entrustment-supervision scale and resultant narrative feedback were analyzed quantitatively and qualitatively. Predictors for indirect supervision levels were explored via regression analysis, and narrative feedback was coded using thematic content analysis. Students evaluated the WBA after each clerkship rotation. RESULTS EPA 1 ("Take a patient's history"), EPA 2 ("Assess physical & mental status") and EPA 8 ("Document & present a clinical encounter") were most frequently used for learner-initiated WBAs throughout the clerkship rotations in a sample of 83 students. Clinical residents signed off on the majority of the WBAs (71%). EPAs 1, 2, and 8 showed the largest increases in self-entrustment and received most of the indirect supervision level ratings. We found a moderate, positive correlation between self-entrusted supervision levels at the end of the clerkship and the number of documented entrustment-supervision ratings per EPA (p < 0.0001). The number of entrustment ratings explained 6.5% of the variance in the supervisors' ratings for EPA 1. Narrative feedback was documented for 79% (n = 214) of the WBAs. Most narratives addressed the Medical Expert role (77%, n = 208) and used reinforcement (59%, n = 161) as a feedback strategy. Students perceived the feedback as beneficial. CONCLUSIONS Using formative WBAs with an entrustment-supervision scale and prompts for written feedback facilitated targeted, high-quality feedback and effectively supported students' development toward self-entrusted, indirect supervision levels.
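The correlation reported above (number of documented ratings per EPA versus end-of-clerkship self-entrusted supervision level) is a simple rank correlation; a minimal sketch on simulated placeholder data follows.

```python
# Minimal sketch: Spearman correlation between WBA counts and final
# self-entrusted supervision levels for 83 students (size from the abstract;
# data simulated with a built-in positive association for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_ratings = rng.poisson(4, size=83)                   # documented WBAs per EPA
final_level = np.clip(1 + 0.3 * n_ratings
                      + rng.normal(0, 0.8, size=83), 1, 5)

rho, p = stats.spearmanr(n_ratings, final_level)      # ordinal scale -> ranks
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```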
Affiliation(s)
- Severin Pinilla: University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland; Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland
- Alexandra Kyrou: University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Stefan Klöppel: University Hospital of Old Age Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Werner Strik: University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Christoph Nissen: University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Sören Huwendiek: Department for Assessment and Evaluation, Institute for Medical Education, University of Bern, Bern, Switzerland

11. Gumuchian ST, Pal NE, Young M, Danoff D, Plotnick LH, Cummings BA, Gomez-Garibello C, Dory V. Learner handover: Perspectives and recommendations from the front-line. Perspectives on Medical Education 2020;9:294-301. PMID: 32809189. PMCID: PMC7550510. DOI: 10.1007/s40037-020-00601-4.
Abstract
INTRODUCTION Current medical education models increasingly rely on longitudinal assessments to document learner progress over time. This longitudinal focus has rekindled discussion regarding learner handover, in which assessments are shared across supervisors, rotations, and educational phases to support learner growth and ease transitions. The authors explored clinical supervisors' opinions of, experiences with, and recommendations for successful implementation of learner handover. METHODS Clinical supervisors from five postgraduate medical education programs at one institution completed an online questionnaire exploring their views regarding learner handover, specifically its potential benefits, risks, and suggestions for implementation. Survey items included open-ended and numerical responses. The authors used an inductive content analysis approach to analyze the open-ended questionnaire responses, and descriptive and correlational analyses for the numerical data. RESULTS Seventy-two participants completed the questionnaire, and their perspectives varied widely. Suggested benefits of learner handover included tailored learning, improved assessments, and enhanced patient safety. The main reported risk was the potential for learner handover to bias supervisors' perceptions of learners, thereby affecting the validity of future assessments and influencing the learner's educational opportunities and well-being. Participants' suggestions for implementation focused on who should be involved, when and for whom it should occur, and the content that should be shared. DISCUSSION The diverse opinions of, and recommendations for, learner handover highlight the need for handover to maximize learning potential while minimizing potential harms. Supervisors' suggestions for handover implementation reveal tensions between assessment-of-learning and assessment-for-learning.
Affiliation(s)
- Nicole E Pal: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada
- Meredith Young: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada; Department of Medicine, McGill University, Montréal, Québec, Canada
- Deborah Danoff: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada
- Laurie H Plotnick: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada; Department of Pediatrics, McGill University, and Montreal Children's Hospital, McGill University Health Centre, Montréal, Québec, Canada
- Beth-Ann Cummings: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada; Department of Medicine, McGill University, Montréal, Québec, Canada
- Carlos Gomez-Garibello: Institute of Health Sciences Education, McGill University, Montréal, Québec, Canada; Department of Medicine, McGill University, Montréal, Québec, Canada
- Valérie Dory: Department of General Practice, Université de Liège, Liège, Belgium

12. Fernandez N, Cyr J, Perreault I, Brault I. Revealing tacit knowledge used by experienced health professionals for interprofessional collaboration. Journal of Interprofessional Care 2020;34:537-544. PMID: 32067527. DOI: 10.1080/13561820.2020.1724901.
Abstract
With the current interest in interprofessional collaboration in health care as a response to the ever-increasing complexity of health issues and scarcity of resources, many higher education institutions are developing interprofessional education (IPE) programs. However, there has been little empirical work on what knowledge and skills are required to work collaboratively across health professions. We have undertaken to describe interprofessional collaboration as a practice largely underpinned by tacit knowledge acquired by experienced clinicians. Clinicians from all health professions in a large francophone university in Eastern Canada were invited to participate in explicitation interviews. Explicitation interviews require participants to freely recall an interprofessional collaboration event (e.g., a team meeting or joint care delivery) and describe specific actions they personally enacted. An experienced health professional encounters many interprofessional situations over time, and the actions they describe reflect their personal theories about the practice; it is therefore highly probable that they use them frequently when working with colleagues in clinical settings. The tacit knowledge unveiled was divided into four themes: the importance of a sense of belonging to a team, the imperative to meet face-to-face, the practice of soliciting the working hypotheses of colleagues, and the art of summarizing meeting discussions.
Affiliation(s)
- Nicolas Fernandez: Department of Family Medicine and Emergency Medicine, Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Jessica Cyr: Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Isabelle Perreault: Faculty of Educational Sciences, Université de Montréal, Montréal, Québec, Canada
- Isabelle Brault: Faculty of Nursing, Université de Montréal, Montréal, Québec, Canada

13. Pal NE, Young M, Danoff D, Plotnick LH, Cummings BA, Gomez-Garibello C, Dory V. Teachers' mindsets in medical education: A pilot survey of clinical supervisors. Medical Teacher 2020;42:291-298. PMID: 31633998. DOI: 10.1080/0142159x.2019.1679359.
Abstract
Purpose: Current medical education models maintain that competencies such as professionalism and communication can be taught; however, some argue that certain attributes that make up these competencies, such as empathy, are fixed. Teachers' implicit theories, or mindsets (beliefs about the fixed versus learnable nature of human attributes), have been shown to affect their teaching and assessment practices, but little work has explored mindsets in medical education. We examined clinical supervisors' mindsets regarding two cognitive attributes (intelligence and clinical reasoning) and two affective attributes (moral character and empathy). Methods: Clinical supervisors (n = 40) from three specialities completed a survey designed to measure mindsets, using two existing instruments for intelligence and moral character and 18 new items for clinical reasoning and empathy. Participants completed the survey twice for test-retest reliability (n = 25). Results: The new items had satisfactory psychometric properties. Clinical supervisors' mindsets were mixed: only 8% of participants saw clinical reasoning as fixed, while more saw empathy (45%), intelligence (53%), and moral character (53%) as fixed, running counter to current educational models that characterize these attributes as learnable. Conclusion: This study provides evidence supporting the use of these new tools to measure mindsets, which may help to better understand the impact of mindsets on medical education.
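The test-retest step mentioned above reduces to correlating two administrations of the same scale; the sketch below illustrates it with the abstract's retest sample size and otherwise simulated values.

```python
# Minimal sketch: test-retest reliability as the Pearson correlation between
# two administrations of a mindset scale (n = 25 retest participants).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
t1 = rng.normal(3.5, 0.7, size=25)        # mean scale score, administration 1
t2 = t1 + rng.normal(0, 0.3, size=25)     # administration 2, correlated with t1

r, p = stats.pearsonr(t1, t2)
print(f"test-retest r = {r:.2f}, p = {p:.4f}")
```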
Affiliation(s)
- N E Pal: Institute of Health Sciences Education, McGill University, Montreal, Canada
- M Young: Institute of Health Sciences Education, McGill University, Montreal, Canada; Department of Medicine, McGill University, Montreal, Canada
- D Danoff: Institute of Health Sciences Education, McGill University, Montreal, Canada
- L H Plotnick: Department of Pediatrics, McGill University, Montreal, Canada; Division of Pediatric Emergency Medicine, Montreal Children's Hospital, McGill University Health Centre, Montreal, Canada
- B-A Cummings: Institute of Health Sciences Education, McGill University, Montreal, Canada; Department of Medicine, McGill University, Montreal, Canada
- C Gomez-Garibello: Institute of Health Sciences Education, McGill University, Montreal, Canada; Department of Medicine, McGill University, Montreal, Canada
- V Dory: Institute of Health Sciences Education, McGill University, Montreal, Canada; Department of Medicine, McGill University, Montreal, Canada; Institute of Health and Society and Academic Centre for General Practice, Faculty of Medicine, Université catholique de Louvain, Ottignies-Louvain-la-Neuve, Belgium

14. Bartlett M, Couper I, Poncelet A, Worley P. The do's, don'ts and don't knows of establishing a sustainable longitudinal integrated clerkship. Perspectives on Medical Education 2020;9:5-19. PMID: 31953655. PMCID: PMC7012799. DOI: 10.1007/s40037-019-00558-z.
Abstract
INTRODUCTION The longitudinal integrated clerkship is a model of clinical medical education that is increasingly employed by medical schools around the world. These guidelines result from a narrative review of the literature that considered the question of how to maximize the sustainability of a new longitudinal integrated clerkship program. METHOD All four authors have practical experience of establishing longitudinal integrated clerkship programs. Each author individually constructed their Do's, Don'ts and Don't Knows and the literature that underpinned them. The lists were compiled and revised in discussion, and a final set of guidelines was agreed. A statement of the strength of the evidence is included for each guideline. RESULTS The final set of 18 Do's, Don'ts and Don't Knows is presented with an appraisal of the evidence for each one. CONCLUSION Implementing a longitudinal integrated clerkship is a complex process requiring the involvement of a wide group of stakeholders in both hospitals and communities. The complexity of the change management process requires careful and sustained attention, with a particular focus on the outcomes of the programs for students and the communities in which they learn. Effective and consistent leadership and adequate resourcing are important. Teaching sites need to be selected carefully, students and faculty should be involved in allocating students to sites, and both should be supported through the implementation phase and beyond. Work is needed to address the Don't Knows, in particular the question of how cost-effectiveness is best measured.
Affiliation(s)
- Maggie Bartlett: Education in General Practice, Dundee University School of Medicine, Dundee, UK
- Ian Couper: Faculty of Medicine and Health Sciences, Ukwanda Centre for Rural Health, Stellenbosch University, Stellenbosch, South Africa
- Ann Poncelet: Department of Neurology, University of California, San Francisco, CA, USA
- Paul Worley: Department of Health, GPO Box 9848, Canberra, Australian Capital Territory 2601, Australia