51
Warm EJ, Kinnear B. What Can the Giant Do? Defining the Path to Unsupervised Primary Care Practice by Competence, Not Time. Acad Med 2019;94:937-939. [PMID: 30998573] [DOI: 10.1097/acm.0000000000002753]
Abstract
In this issue of Academic Medicine, Dewan and Norcini examine the significant variability in time-in-training among the patient care "giants" (the physicians, nurse practitioners, and physician assistants who practice primary care) and call for further studies to determine optimal training duration and eventual scope of practice. They ask, what is the minimum education and training required to practice primary care, or "how tall is the shortest giant?" In this Invited Commentary, the authors reframe the question from identifying the minimum length of training required to identifying desired patient care outcomes. Primary care is not a uniform entity. It ranges from complex, chronically ill elderly patients, to twentysomething millennials with acute problems, to pregnant women, to families, and everything in between. The authors argue that training should be fit for purpose and produce high-quality outcomes for patients; competence should be defined by these outcomes. Drawing parallels with Major League Baseball, the authors note that the time needed to develop competence will vary across training programs depending on purpose, and will also vary among people within those programs, even with a shared purpose. While time is a tool for competence attainment, it should not be the metric by which readiness for practice is measured.
Affiliation(s)
- Eric J Warm
- E.J. Warm is professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434. B. Kinnear is assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
52
Sebok-Syer SS, Goldszmidt M, Watling CJ, Chahine S, Venance SL, Lingard L. Using Electronic Health Record Data to Assess Residents' Clinical Performance in the Workplace: The Good, the Bad, and the Unthinkable. Acad Med 2019;94:853-860. [PMID: 30844936] [DOI: 10.1097/acm.0000000000002672]
Abstract
PURPOSE Novel approaches are required to meet assessment demands and cultivate authentic feedback in competency-based medical education. One potential source of data to help meet these demands is the electronic health record (EHR). However, the literature offers limited guidance regarding how EHR data could be used to support workplace teaching and learning. Furthermore, given their sheer volume and availability, there is a risk of exploiting the educational potential of EHR data. This qualitative study examined how EHR data might be effectively integrated and used to support meaningful assessments of residents' clinical performance. METHOD Following constructivist grounded theory, using both purposive and theoretical sampling, in 2016-2017 the authors conducted individual interviews with 11 clinical teaching faculty and 10 senior residents across 12 postgraduate specialties within the Schulich School of Medicine and Dentistry at Western University. Constant comparative inductive analysis was conducted. RESULTS Analysis identified key issues related to the affordances and challenges of using EHRs to assess resident performance: the nature of EHR data; the potential of using EHR data for assessment; and the dangers of using EHR data for assessment. The findings offer considerations for using EHR data to assess resident performance in appropriate and meaningful ways. CONCLUSIONS EHR data have the potential to support formative assessment practices and guide feedback discussions with residents, but evaluators must take context into account. The EHR was not designed for the purpose of assessing resident performance; therefore, adopting and using these data for educational purposes require careful thought, consideration, and care.
Affiliation(s)
- Stefanie S Sebok-Syer
- S.S. Sebok-Syer is instructor, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: http://orcid.org/0000-0002-3572-5971. M. Goldszmidt is professor, Department of Medicine, and associate director and scientist, Centre for Education, Research, and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0002-5861-5222. C.J. Watling is professor, Departments of Clinical Neurological Sciences and Oncology, associate dean, Postgraduate Medical Education, and scientist, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0001-9686-795X. S. Chahine is assistant professor, Department of Medicine and Faculty of Education, and scientist, Centre for Education, Research, and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0003-0488-773X. S.L. Venance is associate professor, Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0003-4146-2263. L. Lingard is professor, Department of Medicine and Faculty of Education, and founding director and senior scientist, Centre for Education, Research, and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; ORCID: https://orcid.org/0000-0002-1524-0723
53
Hatala R, Ginsburg S, Hauer KE, Gingerich A. Entrustment Ratings in Internal Medicine Training: Capturing Meaningful Supervision Decisions or Just Another Rating? J Gen Intern Med 2019;34:740-743. [PMID: 30993616] [PMCID: PMC6502893] [DOI: 10.1007/s11606-019-04878-y]
Abstract
The implementation of Entrustable Professional Activities has led to the simultaneous development of assessment based on a supervisor's entrustment of a learner to perform these activities without supervision. While entrustment may be intuitive when we consider the direct observation of a procedural task, the current implementation of entrustability-based rating scales for internal medicine's non-procedural tasks may not translate into meaningful learner assessment. In this Perspective, we outline a number of potential concerns with ad hoc entrustability assessments in internal medicine postgraduate training: differences in the scope of procedural vs. non-procedural tasks, acknowledgement of the type of clinical oversight common within internal medicine, and the limitations of entrustment language. We point toward potential directions for inquiry that would require us to clarify the purpose of the entrustability assessment, reconsider each of the fundamental concepts of entrustment in internal medicine supervision, and explore the use of descriptive rather than numeric assessment approaches.
Affiliation(s)
- Rose Hatala
- Department of Medicine, University of British Columbia, Vancouver, Canada; St. Paul's Hospital, Suite 5907 Burrard Bldg, 1081 Burrard St., Vancouver, BC V6Z 1Y6, Canada
- Shiphra Ginsburg
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Canada
- Karen E Hauer
- Department of Medicine, University of California at San Francisco, San Francisco, CA, USA
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, Prince George, Canada
54
Schumacher DJ, Bartlett KW, Elliott SP, Michelson C, Sharma T, Garfunkel LC, King B, Schwartz A. Milestone Ratings and Supervisory Role Categorizations Swim Together, but Is the Water Muddy? Acad Pediatr 2019;19:144-151. [PMID: 29925038] [DOI: 10.1016/j.acap.2018.06.002]
Abstract
OBJECTIVE This single-specialty, multi-institutional study aimed to determine 1) the association between milestone ratings for individual competencies and average milestone ratings (AMRs) and 2) the association between AMRs and recommended supervisory role categorizations made by individual clinical competency committee (CCC) members. METHODS During the 2015-16 academic year, CCC members at 14 pediatric residencies reported milestone ratings for 21 competencies and recommended supervisory role categories (may not supervise, may supervise in some settings, may supervise in all settings) for residents they reviewed. An exploratory factor analysis of competencies was conducted. The associations among individual competencies, the AMR, and supervisory role categorizations were determined by computing bivariate correlations. The relationship between AMRs and recommended supervisory role categorizations was examined using an ordinal mixed logistic regression model. RESULTS Of the 155 CCC members, 68 completed both milestone assignments and supervision categorizations for 451 residents. Factor analysis of individual competencies controlling for clustering of residents in raters and sites resulted in a single-factor solution (cumulative variance: 0.75). All individual competencies had large positive correlations with the AMR (correlation coefficient: 0.84-0.93), except for two professionalism competencies (Prof1: 0.63 and Prof4: 0.65). When combined across training year and time points, the AMR and supervisory role categorization had a moderately positive correlation (0.56). CONCLUSIONS This exploratory study identified a modest correlation between average milestone ratings and supervisory role categorization. Convergence of competencies on a single factor deserves further exploration, with possible rater effects warranting attention.
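To make the analysis concrete, the two central quantities here are an average milestone rating (AMR) per resident and its correlation with an ordinal supervisory role category. The sketch below illustrates that computation; the data, column names, and three-level supervision coding are invented, and the study itself used an ordinal mixed logistic regression that additionally accounts for clustering of residents in raters and sites.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: one row per resident review by a CCC member.
# Milestone columns are rated 1-5; "supervision" is the ordinal category
# (0 = may not supervise, 1 = may supervise in some settings,
#  2 = may supervise in all settings). All names and values are illustrative.
df = pd.DataFrame({
    "milestone_01": [3.0, 3.5, 4.0, 2.5],
    "milestone_02": [3.5, 3.0, 4.5, 2.0],
    "supervision":  [1, 1, 2, 0],
})

milestone_cols = [c for c in df.columns if c.startswith("milestone_")]
df["amr"] = df[milestone_cols].mean(axis=1)  # average milestone rating

# Spearman correlation is a reasonable simple choice for an ordinal outcome.
rho, p = spearmanr(df["amr"], df["supervision"])
print(f"AMR vs. supervisory role category: rho={rho:.2f}, p={p:.3f}")
```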
Affiliation(s)
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati (DJ Schumacher), Cincinnati, Ohio.
- Sean P Elliott
- Department of Pediatrics, University of Arizona (SP Elliott), Tucson
- Catherine Michelson
- Department of Pediatrics, Boston University School of Medicine (C Michelson)
- Tanvi Sharma
- Department of Medicine, Boston Children's Hospital, and Harvard Medical School (T Sharma), Boston, Mass
- Lynn C Garfunkel
- Department of Pediatrics, University of Rochester (LC Garfunkel), Rochester, NY
- Beth King
- Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network (B King), McLean, Va
- Alan Schwartz
- Department of Medical Education and Department of Pediatrics, University of Illinois at Chicago, and Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network (A Schwartz), McLean, Va
55
Heath JK. ACGME Milestones Within Subspecialty Training Programs: One Institution's Experience. J Grad Med Educ 2019;11:53-59. [PMID: 30805098] [PMCID: PMC6375317] [DOI: 10.4300/jgme-d-18-00308.1]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education Milestones were created as a criterion-based framework to promote competency-based education during graduate medical education. Despite widespread implementation across subspecialty programs, extensive validity evidence supporting the use of milestones within fellowship training is lacking. OBJECTIVE We assessed the construct and response process validity of milestones in subspecialty fellowship programs in an academic medical center. METHODS From 2014 to 2016, we performed a single-center retrospective cohort analysis of milestone data from fellows across 5 programs. We analyzed summary statistics and performed multivariable linear regression to assess change in milestone ratings by training year and variability in ratings across fellowship programs. Finally, we examined a subset of Professionalism and Interpersonal and Communication Skills subcompetencies from the first 6 months of training to identify the proportion of fellows deemed "ready for independent practice" in these domains. RESULTS Milestone data were available for 68 fellows, with 75,933 unique subcompetency ratings. Multivariable linear regression, adjusted for subcompetency and subspecialty, revealed an increase of 0.17 (0.16-0.19) in ratings with each postgraduate year level increase (P < .005), as well as significant variation in milestone ratings across subspecialties. For the Professionalism and Interpersonal and Communication Skills domains, mean ratings within the first 6 months of training were 3.78 and 3.95, respectively. CONCLUSIONS We noted a minimal upward trend of milestone ratings in subspecialty training programs and significant variability in implementing milestones across subspecialties. This may suggest difficulties with the construct validity and response process of the milestone system in certain medical subspecialties.
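The regression reported above can be sketched as follows; this is a minimal illustration under invented data and variable names, not the study's code. The coefficient on postgraduate year plays the role of the 0.17-per-year increase the authors report.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative long-format milestone data: one row per subcompetency rating.
# All values and names are invented for the sketch.
data = pd.DataFrame({
    "rating":        [3.0, 3.5, 3.5, 4.0, 4.0, 4.5, 3.0, 3.5],
    "pgy":           [4, 5, 4, 5, 6, 6, 4, 5],  # postgraduate year
    "subcompetency": ["PC1", "PC1", "MK1", "MK1", "PC1", "MK1", "PC1", "MK1"],
    "subspecialty":  ["cards", "cards", "pulm", "pulm", "cards", "pulm", "pulm", "cards"],
})

# Linear model adjusted for subcompetency and subspecialty; the "pgy"
# coefficient is the per-year change in milestone ratings.
model = smf.ols("rating ~ pgy + C(subcompetency) + C(subspecialty)", data=data).fit()
print(model.params["pgy"], model.pvalues["pgy"])
```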
56
Warm EJ, Kinnear B, Kelleher M, Sall D, Holmboe E. Transforming Resident Assessment: An Analysis Using Deming's System of Profound Knowledge. Acad Med 2019;94:195-201. [PMID: 30334842] [DOI: 10.1097/acm.0000000000002499]
Abstract
W. Edwards Deming, in his System of Profound Knowledge, asserts that leaders who wish to transform a system should understand four essential elements: appreciation for a system, theory of knowledge, knowledge about variation, and psychology. The Accreditation Council for Graduate Medical Education (ACGME) introduced the milestones program as a part of the Next Accreditation System to create developmental language for the six core competencies and facilitate programmatic assessment within graduate medical education systems. Viewed through Deming's lens, the ACGME can be seen as the steward of a large system, with everyone who provides assessment data as workers in that system. The authors use Deming's framework to illustrate the working components of the assessment system of the University of Cincinnati College of Medicine's internal medicine residency program and draw parallels to the macrocosm of graduate medical education. Successes and failures in transforming resident assessment can be understood and predicted by identifying the system and its aims, turning information into knowledge, developing an understanding of variation, and appreciating the psychology of motivation of participants. The authors offer insights from their experience for educational leaders who wish to apply Deming's elements to their own assessment systems, with questions to explore, pitfalls to avoid, and practical approaches in doing this type of work.
Affiliation(s)
- Eric J Warm
- E.J. Warm is professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434. B. Kinnear is assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. M. Kelleher is assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. D. Sall is assistant professor of medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. E. Holmboe is senior vice president, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; adjunct professor of medicine, Yale University, New Haven, Connecticut; and adjunct professor, Feinberg School of Medicine at Northwestern University, Chicago, Illinois
57
Hicks PJ, Margolis MJ, Carraccio CL, Clauser BE, Donnelly K, Fromme HB, Gifford KA, Poynter SE, Schumacher DJ, Schwartz A. A novel workplace-based assessment for competency-based decisions and learner feedback. Med Teach 2018;40:1143-1150. [PMID: 29688108] [DOI: 10.1080/0142159x.2018.1461204]
Abstract
BACKGROUND Increased recognition of the importance of competency-based education and assessment has led to the need for practical and reliable methods to assess relevant skills in the workplace. METHODS A novel milestone-based workplace assessment system was implemented in 15 pediatrics residency programs. The system provided: (1) web-based multisource feedback (MSF) and structured clinical observation (SCO) instruments that could be completed on any computer or mobile device; and (2) monthly feedback reports that included competency-level scores and recommendations for improvement. RESULTS For the final instruments, an average of five MSF and 3.7 SCO assessment instruments were completed for each of 292 interns; instruments required an average of 4-8 minutes to complete. Generalizability coefficients >0.80 were attainable with six MSF observations. Users indicated that the new system added value to their existing assessment program; the need to complete the local assessments in addition to the new assessments was identified as a burden of the overall process. CONCLUSIONS Outcomes, including high participation rates and high reliability relative to what has traditionally been found with workplace-based assessment, provide evidence for the validity of scores resulting from this novel competency-based assessment system. The development of this assessment model is generalizable to other specialties.
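The claim that a generalizability coefficient above 0.80 is attainable with six MSF observations follows the standard decision-study projection Ep2(n) = var_person / (var_person + var_residual / n). A minimal sketch, with variance components invented for illustration (a real G study would estimate them from the rating data):

```python
# Decision-study (D-study) projection of a generalizability coefficient.
# The variance components are invented; they are chosen so the coefficient
# crosses 0.80 at six observations, mirroring the pattern reported above.

def g_coefficient(var_person: float, var_residual: float, n_obs: int) -> float:
    """Projected reliability of a mean score over n_obs observations."""
    return var_person / (var_person + var_residual / n_obs)

var_person, var_residual = 0.30, 0.40
for n in range(1, 9):
    print(n, round(g_coefficient(var_person, var_residual, n), 2))
```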
Affiliation(s)
- Patricia J Hicks
- Children's Hospital of Philadelphia and University of Pennsylvania, Philadelphia, PA, USA
- Brian E Clauser
- National Board of Medical Examiners, Philadelphia, PA, USA
- H Barrett Fromme
- Pritzker School of Medicine, University of Chicago, Chicago, IL, USA
- Sue E Poynter
- Pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Alan Schwartz
- Medical Education and Pediatrics, University of Illinois at Chicago and Association of Pediatric Program Directors, Chicago, IL, USA
58
Gingerich A, Daniels V, Farrell L, Olsen SR, Kennedy T, Hatala R. Beyond hands-on and hands-off: supervisory approaches and entrustment on the inpatient ward. Med Educ 2018;52:1028-1040. [PMID: 29938831] [DOI: 10.1111/medu.13621]
Abstract
CONTEXT The concept of entrustment has garnered significant attention across medical specialties, despite variability in supervision styles and entrustment decisions. There is a need to further study the enactment of supervision on inpatient wards to inform competency-based assessment design. METHODS Attending physicians, while supervising on clinical teaching inpatient wards, were invited to describe a recent moment of enacting supervision with an internal medicine resident. Constructivist grounded theory guided data collection and analysis. Interview transcripts were analysed in iterative cycles to inform data collection, and constant comparison was used to build a theory of supervision from the identified themes. RESULTS In 2016-2017, 23 supervisors from two Canadian universities, with supervision reputations ranging from very involved to less involved, participated in one or two interviews (28 interviews in total). Supervisors were not easily dichotomised into styles based on behaviour because all used similar oversight strategies. Supervisors described adjusting between 'hands-on' (e.g. detail oriented) and 'hands-off' (e.g. less visible on the ward) styles depending on the context. All also contended with the competing roles of clinical teacher and care provider. Supervisors made a distinction between the terms 'entrust' and 'trust', and did not grant complete entrustment to senior residents. CONCLUSIONS We propose that a supervisor's perceived responsibility for the ward underlies adjustments between 'hands-on' (i.e. personal ward responsibility) and 'hands-off' (i.e. shared ward responsibility) styles. Our 'approaches to clinical supervision' model combines this responsibility tension with the tension between patient care and teaching to illustrate four supervisory approaches, each with unique priorities influencing entrustment. Given the fluidity of supervision, documenting changes in oversight strategies, rather than absolute levels of entrustment, may be more informative for assessment purposes. Research is needed to determine whether there is sufficient association between the supervision provided, the entrustment decision made and the supervisor's trust in a trainee to use these as proxies in assessing a trainee's competence.
Affiliation(s)
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
- Vijay Daniels
- Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Laura Farrell
- Island Medical Program, University of British Columbia, Victoria, British Columbia, Canada
- Sharla-Rae Olsen
- Department of Medicine, Northern Medical Program, University of British Columbia, Prince George, British Columbia, Canada
- Tara Kennedy
- Department of Paediatrics, Dalhousie University, Fredericton, New Brunswick, Canada
- Rose Hatala
- Department of Medicine, Vancouver Fraser Medical Program, University of British Columbia, Vancouver, British Columbia, Canada
59
Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in Internal Medicine. BMC Med Educ 2018;18:218. [PMID: 30236097] [PMCID: PMC6148769] [DOI: 10.1186/s12909-018-1327-7]
Abstract
BACKGROUND Workplace-based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment, but it was developed in the surgical setting, and it is not known whether it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that the tools used have proper psychometric data to support them. METHODS Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1s), seniors (PGY2s and PGY3s), and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. Ratings ranged from 2 ("I had to talk them through") to 5 ("I did not need to be there") for most items. Mean scores differed significantly by training level (p < .001), with juniors receiving lower ratings (M = 3.80 out of 5, SD = 0.49) than seniors (M = 4.22, SD = 0.47), who in turn received lower ratings than fellows (M = 4.70, SD = 0.36). Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient that corresponds to internal consistency was 0.92. CONCLUSIONS This study's psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
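A minimal sketch of the level-of-training comparison described in the METHODS, using a one-way ANOVA; the scores and group sizes are invented.

```python
from scipy import stats

# Illustrative mean OCAT scores (1-5 entrustment scale), one per trainee,
# grouped by level of training; all values are invented.
juniors = [3.4, 3.8, 3.9, 4.1, 3.6]
seniors = [4.0, 4.3, 4.1, 4.5, 4.2]
fellows = [4.6, 4.8, 4.7, 4.9, 4.5]

f_stat, p_value = stats.f_oneway(juniors, seniors, fellows)
print(f"F={f_stat:.2f}, p={p_value:.4f}")
# The study found the same ordering (juniors < seniors < fellows, p < .001),
# offering validity evidence that the tool tracks growing competence.
```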
Affiliation(s)
- Samantha Halman
- Department of Medicine, the University of Ottawa, The Ottawa Hospital General Campus, 501 Smyth Road, Box 209, Ottawa, Ontario K1H 8L6 Canada
- Janelle Rekman
- Department of Surgical Education, the University of Ottawa, The Ottawa Hospital Civic Campus, Loeb Research Building - Main Floor WM150b, 725 Parkdale Avenue, C/O Isabel Menard, Ottawa, Ontario K1Y 4E9 Canada
- Timothy Wood
- Department of Innovation in Medical Education, Faculty of Medicine, the University of Ottawa, 850 Peter Morand Crescent (Room 102), Ottawa, Ontario K1G 5Z3 Canada
- Andrew Baird
- Department of Medicine, the University of Ottawa, The Ottawa Hospital Parkdale Campus, Room 162, 1053 Carling Avenue, C/O Odile Kaufmann, Ottawa, Ontario K1Y 4E9 Canada
- Wade Gofton
- Department of Surgical Education, the University of Ottawa, Ottawa Hospital - Civic Campus, Suite J15, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9 Canada
- Nancy Dudek
- Department of Medicine, the University of Ottawa, The Rehabilitation Centre, 505 Smyth Road, Ottawa, Ontario K1H 8M2 Canada
60
Young JQ, Hasser C, Hung EK, Kusz M, O'Sullivan PS, Stewart C, Weiss A, Williams N. Developing End-of-Training Entrustable Professional Activities for Psychiatry: Results and Methodological Lessons. Acad Med 2018;93:1048-1054. [PMID: 29166349] [DOI: 10.1097/acm.0000000000002058]
Abstract
PURPOSE To develop entrustable professional activities (EPAs) for psychiatry and to demonstrate an innovative, validity-enhancing methodology that may be relevant to other specialties. METHOD A national task force employed a three-stage process from May 2014 to February 2017 to develop EPAs for psychiatry. In stage 1, the task force used an iterative consensus-driven process to construct proposed EPAs. Each included a title, full description, and relevant competencies. In stage 2, the task force interviewed four nonpsychiatric experts in EPAs and further revised the EPAs. In stage 3, the task force performed a Delphi study of national experts in psychiatric education and assessment. All survey participants completed a brief training program on EPAs. Quantitative and qualitative analysis led to further modifications. Essentialness was measured on a five-point scale. EPAs were included if the content validity index was at least 0.8 and the lower end of the asymmetric confidence interval was not lower than 4.0. RESULTS Stages 1 and 2 yielded 24 and 14 EPAs, respectively. In stage 3, 31 of the 39 invited experts participated in both rounds of the Delphi study. Round 1 reduced the proposed EPAs to 13. Ten EPAs met the inclusion criteria in Round 2. CONCLUSIONS The final EPAs provide a strong foundation for competency-based assessment in psychiatry. Methodological features such as critique by nonpsychiatry experts, a national Delphi study with frame-of-reference training, and stringent inclusion criteria strengthen the content validity of the findings and may serve as a model for future efforts in other specialties.
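The stage 3 inclusion rule (content validity index of at least 0.8, with the lower end of the confidence interval for essentialness not below 4.0) can be sketched as follows. The ratings are invented, and a plain symmetric 95% interval stands in for the asymmetric interval the task force actually used.

```python
import statistics

# Invented Delphi ratings for one proposed EPA on the five-point
# "essentialness" scale (the study had 31 panelists per round).
ratings = [5, 4, 5, 5, 4, 3, 5, 4, 5, 5, 4, 5, 4, 5, 5]

# Content validity index: proportion of panelists rating the item 4 or 5.
cvi = sum(r >= 4 for r in ratings) / len(ratings)

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)
half_width = 1.96 * sd / len(ratings) ** 0.5  # symmetric CI as a stand-in
lower = mean - half_width

include = cvi >= 0.8 and lower >= 4.0
print(f"CVI={cvi:.2f}, CI lower bound={lower:.2f}, include={include}")
```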
Affiliation(s)
- John Q Young
- J.Q. Young is professor, Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, New York, New York. C. Hasser is assistant professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. E.K. Hung is associate professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. M. Kusz is research assistant, Department of Psychiatry, Hofstra Northwell School of Medicine, New York, New York. P.S. O'Sullivan is professor, Department of Medicine and Surgery, UCSF School of Medicine, San Francisco, California. C. Stewart is assistant professor, Department of Psychiatry, Georgetown School of Medicine, Washington, DC. A. Weiss is associate professor, Department of Psychiatry and Behavioral Sciences, Albert Einstein School of Medicine, New York, New York. N. Williams is professor, Department of Psychiatry, University of Iowa Carver College of Medicine, Iowa City, Iowa
61
Chan T, Sebok-Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning Analytics in Medical Education Assessment: The Past, the Present, and the Future. AEM Educ Train 2018;2:178-187. [PMID: 30051086] [PMCID: PMC6001721] [DOI: 10.1002/aet2.10087]
Abstract
With the implementation of competency-based medical education (CBME) in emergency medicine, residency programs will amass substantial amounts of qualitative and quantitative data about trainees' performances. This increased volume of data will challenge traditional processes for assessing trainees and remediating training deficiencies. At the intersection of trainee performance data and statistical modeling lies the field of medical learning analytics. At a local training program level, learning analytics has the potential to assist program directors and competency committees with interpreting assessment data to inform decision making. On a broader level, learning analytics can be used to explore system questions and identify problems that may impact our educational programs. Scholars outside of health professions education have been exploring the use of learning analytics for years and their theories and applications have the potential to inform our implementation of CBME. The purpose of this review is to characterize the methodologies of learning analytics and explore their potential to guide new forms of assessment within medical education.
Affiliation(s)
- Teresa Chan
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Stefanie Sebok-Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Saskatoon, Saskatchewan, Canada
- Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Alyssa Wise
- Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY
- Jonathan Sherbino
- Faculty of Health Science, Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
- Martin Pusic
- Department of Emergency Medicine, NYU School of Medicine, New York, NY
62
Gruppen LD, Ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System. Acad Med 2018;93:S17-S21. [PMID: 29485482] [DOI: 10.1097/acm.0000000000002066]
Abstract
Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment from both researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.
Affiliation(s)
- Larry D Gruppen
- L.D. Gruppen is professor, Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan. O. ten Cate is professor of medical education, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands. L.A. Lingard is professor, Department of Medicine, and director, Centre for Education Research & Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada. P.W. Teunissen is professor, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and maternal fetal medicine specialist, VU University Medical Center, Amsterdam, the Netherlands. J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
63
Mink RB, Schwartz A, Herman BE, Turner DA, Curran ML, Myers A, Hsu DC, Kesselheim JC, Carraccio CL. Validity of Level of Supervision Scales for Assessing Pediatric Fellows on the Common Pediatric Subspecialty Entrustable Professional Activities. Acad Med 2018;93:283-291. [PMID: 28700462] [DOI: 10.1097/acm.0000000000001820]
Abstract
PURPOSE Entrustable professional activities (EPAs) represent the routine and essential activities that physicians perform in practice. Although some level-of-supervision scales have been proposed, they have not been validated. In this study, the investigators created level-of-supervision scales for EPAs common to the pediatric subspecialties and then examined their validity in a study conducted by the Subspecialty Pediatrics Investigator Network (SPIN). METHOD SPIN Steering Committee members used a modified Delphi process to develop unique scales for six of the seven common EPAs. The investigators sought validity evidence in a multisubspecialty study in which pediatric fellowship program directors and Clinical Competency Committees used the scales to evaluate fellows in fall 2014 and spring 2015. RESULTS Separate scales for the six EPAs, each with five levels of progressive entrustment, were created. In both fall and spring, more than 300 fellows in each year of training from over 200 programs were assessed. In both periods and for each EPA, there was a progressive increase in entrustment levels, with second-year fellows rated higher than first-year fellows (P < .001) and third-year fellows rated higher than second-year fellows (P < .001). For each EPA, spring ratings were higher than fall ratings (P < .001). Interrater reliability was high (Janson and Olsson's iota = 0.73). CONCLUSIONS The supervision scales developed for these six common pediatric subspecialty EPAs demonstrated strong validity evidence for use in EPA-based assessment of pediatric fellows. They may also inform the development of scales in other specialties.
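A hedged sketch of the progression check that underpins the validity argument: entrustment levels should rise with fellowship year. The data are invented, and a Kruskal-Wallis test stands in for the study's analysis; Janson and Olsson's iota, used there for interrater reliability, is not implemented here.

```python
from scipy import stats

# Invented entrustment levels (five-level scales) for one EPA, by year.
year1 = [2, 2, 3, 2, 3, 3]
year2 = [3, 3, 4, 3, 4, 4]
year3 = [4, 4, 5, 4, 4, 5]

# Nonparametric test for a difference in ordinal ratings across years.
h, p = stats.kruskal(year1, year2, year3)
print(f"H={h:.2f}, p={p:.4f}")  # a progressive increase supports validity
```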
Affiliation(s)
- Richard B Mink
- R.B. Mink is professor of pediatrics, David Geffen School of Medicine at UCLA, Los Angeles, California, and chief, Division of Pediatric Critical Care Medicine, and director, Pediatric Critical Care Medicine Fellowship, Harbor-UCLA Medical Center, Torrance, California. A. Schwartz is Michael Reese Endowed Professor of Medical Education, associate head, Department of Medical Education, and research professor, Department of Pediatrics, University of Illinois at Chicago College of Medicine, Chicago, Illinois. B.E. Herman is professor of pediatrics, vice chair for education, and residency program director, University of Utah School of Medicine, Salt Lake City, Utah. D.A. Turner is associate professor of pediatrics, Duke University School of Medicine, and associate director of graduate medical education, Duke University Medical Center, Durham, North Carolina. M.L. Curran is assistant professor of pediatrics and director, Pediatric Rheumatology Fellowship Program, Ann & Robert H. Lurie Children's Hospital of Chicago and Northwestern University Feinberg School of Medicine, Chicago, Illinois. A. Myers is associate professor and director, Infectious Diseases Fellowship Program, Children's Mercy Hospital and University of Missouri-Kansas City School of Medicine, Kansas City, Missouri. D.C. Hsu is associate professor of pediatrics, associate program director, Pediatric Residency Program, and clinical and education chief, Pediatric Emergency Medicine Section, Baylor College of Medicine/Texas Children's Hospital, Houston, Texas. J.C. Kesselheim is assistant professor of pediatrics, Harvard Medical School, and associate fellowship program director for education, Dana-Farber/Boston Children's Cancer and Blood Disorders Center, Boston, Massachusetts. C.L. Carraccio is vice president, Competency-Based Assessment, American Board of Pediatrics, Chapel Hill, North Carolina
64
Sobel HG. Resident and Preceptor Perceptions of Preceptor Integration Into Resident Clinic Scheduling Templates. J Grad Med Educ 2017;9:497-502. [PMID: 28824765] [PMCID: PMC5559247] [DOI: 10.4300/jgme-d-16-00609.1]
Abstract
BACKGROUND Some internal medicine residency programs on X+Y schedules have modified clinic preceptor schedules to mimic those of the resident cohort (resident-matched). This is in contrast to a traditional model, in which preceptors supervise on the same half-day each week. OBJECTIVE We assessed preceptor and resident perceptions of the 2 precepting models. METHODS We surveyed 44 preceptors and 97 residents at 3 clinic sites in 2 academic medical centers. Two clinics used the resident-matched model, and 1 used a traditional model. Surveys were completed at 6 months and 1 year. We assessed resident and preceptor perceptions in 5 domains: relationships between residents and preceptors; preceptor familiarity with complex patients; preceptor ability to assess milestone achievements; ability to follow up on results; and quality of care. RESULTS There was no difference in perceptions of interpersonal relationships or satisfaction with patient care. Preceptors in the resident-matched schedule reported they were more familiar with complex patients at both 6 months and 1 year, and felt more comfortable evaluating residents' milestone achievements at 6 months, but not at 1 year. At 1 year, residents in the resident-matched model perceived preceptors were more familiar with complex patients than residents in the traditional model did. The ability to discuss patient results between clinic weeks was low in both models. CONCLUSIONS The resident-matched model increased resident and preceptor perceptions of familiarity with complex patients and early preceptor perceptions of comfort in assessment of milestone achievements.
65
Warm EJ, Englander R, Pereira A, Barach P. Improving Learner Handovers in Medical Education. Acad Med 2017;92:927-931. [PMID: 27805952] [DOI: 10.1097/acm.0000000000001457]
Abstract
Multiple studies have demonstrated that the information included in the Medical Student Performance Evaluation fails to reliably predict medical students' future performance. This faulty transfer of information can lead to harm when poorly prepared students fail out of residency or, worse, are shuttled through the medical education system without an honest accounting of their performance. Such poor learner handovers likely arise from two root causes: (1) the absence of agreed-on outcomes of training and/or accepted assessments of those outcomes, and (2) the lack of standardized ways to communicate the results of those assessments. To improve the current learner handover situation, an authentic, shared mental model of competency is needed; high-quality tools to assess that competency must be developed and tested; and transparent, reliable, and safe ways to communicate this information must be created. To achieve these goals, the authors propose using a learner handover process modeled after a patient handover process. The CLASS model includes a description of the learner's Competency attainment, a summary of the Learner's performance, an Action list and statement of Situational awareness, and Synthesis by the receiving program. This model also includes coaching oriented toward improvement along the continuum of education and care. Just as studies have evaluated patient handover models using metrics that matter most to patients, studies must evaluate this learner handover model using metrics that matter most to providers, patients, and learners.
Affiliation(s)
- Eric J Warm
- E.J. Warm is the Sue P. and Richard W. Vilter Professor of Medicine and categorical medicine residency program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. R. Englander is associate dean for undergraduate medical education, University of Minnesota Medical School, Minneapolis, Minnesota. A. Pereira is associate professor and assistant dean for clinical education, University of Minnesota Medical School, Minneapolis, Minnesota. P. Barach is clinical professor, Department of Pediatrics, Wayne State University School of Medicine, Detroit, Michigan
66
Affiliation(s)
- Su-Ting T. Li
- Corresponding author: Su-Ting T. Li, MD, MPH, University of California, Davis, Room 220, 2516 Stockton Boulevard, Sacramento, CA 95817, 916.734.2428, fax 916.734.0342,