1
Ohlin S, King S, Takashima M, Ossenberg C, Henderson A. Learning in the workplace: Development of a simple language statement assessment tool that supports second-level nurse practice. Nurse Educ Pract 2024;77:103983. PMID: 38701684. DOI: 10.1016/j.nepr.2024.103983.
Abstract
AIM: To focus learning by clarifying the enrolled nurse (EN) role (a second-tier nurse position) through the development of a user-friendly workplace performance assessment tool commensurate with EN standards for practice.
BACKGROUND: Internationally, the nursing workforce comprises regulated and unregulated staff. In Australia, as in other Western countries, there are two tiers of regulated workforce: Registered Nurses (RNs) and Enrolled Nurses (ENs). Differences between RN and EN standards, which rest on educational preparation, are not always clearly differentiated in workplace practice, and the roles are often seen as interchangeable. Improved clarity of both regulated and unregulated roles, at a time when numbers of healthcare workers are burgeoning, assists performance assessment that guides further learning and safe practice.
DESIGN: Two-phase sequential, non-experimental design.
METHODS: Phase one used focus groups (n=48), an expert reference panel (n=8) and end-users (n=16) to develop simple language statements. Phase two involved field testing of the statements.
FINDINGS: A 30-item, criterion-based workplace performance tool was developed. Principal component analysis of completed tools indicated work could be organised around three key areas of practice: higher-order thinking and problem solving; routine daily activities of care; and personal and social attributes.
DISCUSSION: Participants reported that the statement items assisted in determining suitable activities and accompanying cues when discussing learning needs. Analysis assisted with discriminating broader elements of EN workplace performance.
CONCLUSIONS: Workplace learning is important for nurses to continue building their capacity to deliver optimum care. Assessment tools that describe professional capability in plain-language statements and provide examples of supportive behavioural cues help guide ongoing learning by improving the validity, and thereby the consistency, of assessment processes. Furthermore, comprehensible and meaningful statements and cues can readily be adopted by students and educators to target learning and feedback, enhancing clarity of the EN role and distinguishing it from other nursing roles.
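As an illustrative aside (not the cited study's data or code), the principal component analysis reported in the findings above, where a 30-item tool resolves into a small number of areas of practice, can be sketched with synthetic completed forms; the form counts and item structure here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for completed 30-item assessment forms
# (rows = completed tools, columns = the tool's 30 items).
n_forms, n_items = 200, 30
scores = rng.normal(size=(n_forms, n_items))

# Standardise each item, then take the SVD of the centred matrix;
# the right singular vectors are the principal components.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)

# Proportion of total variance explained by each component.
explained = s ** 2 / np.sum(s ** 2)

# Item loadings on the first three components, analogous to grouping
# items into three key areas of practice.
loadings3 = vt[:3].T  # shape (n_items, 3)
```

In practice one would inspect which items load heavily on each retained component to name the areas of practice, as the study's authors did.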
Affiliation(s)
- Simone Ohlin: Central Queensland University, Queensland, Australia
- Sue King: Central Queensland University, Queensland, Australia
- Mari Takashima: School of Nursing, Midwifery and Social Work, University of Queensland, Australia
- Christine Ossenberg: Central Queensland University, Queensland, Australia; Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
- Amanda Henderson: Central Queensland University, Queensland, Australia; Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
2
Shumba TW, Tekian A. Competencies of undergraduate physiotherapy education: A scoping review. South African Journal of Physiotherapy 2024;80:1879. PMID: 38322654. PMCID: PMC10839158. DOI: 10.4102/sajp.v80i1.1879.
Abstract
Background: In recent years, the need for competency-based medical education has been emphasised. Each country needs a defined set of physiotherapy competencies from its associations and governing bodies.
Objectives: Our review aimed to map the competencies of undergraduate physiotherapy education and propose a context-specific competency framework for Namibia.
Method: This scoping review was conducted following the Joanna Briggs Institute framework and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Qualitative direct content analysis was used, adapting the five main competency domains from the WHO Rehabilitation Competency Framework.
Results: Five main competency domains were proposed: practice, professional growth and involvement, learning and development, management and leadership, and research. Nineteen potential competencies were identified, each with a set of knowledge and skills activities expected of every student.
Conclusion: The proposed competencies still need to undergo expert consensus and content validation before they can be adopted and implemented in Namibia. Future studies can explore the perspectives and experiences of faculty, students and clinicians on the current status of competency-based education in the undergraduate physiotherapy programme in Namibia. Similarly, future studies can focus on possible assessment strategies for each competency and on an evaluation framework for assessing milestones in student competencies from entry into clinical education to graduation.
Clinical implications: The review proposed a context-specific competency framework for Namibia with a set of knowledge and skills activities expected of each student. Faculty can adopt these competencies and improve their competency-based physiotherapy education.
Affiliation(s)
- Tonderai W Shumba: Department of Occupational Therapy and Physiotherapy, Faculty of Health Sciences and Veterinary Medicine, University of Namibia, Windhoek, Namibia
- Ara Tekian: Department of Medical Education, University of Illinois College of Medicine at Chicago, Chicago, United States
3
Edwards C, Perry R, Chester D, Childs J. Entrustable professional activities of graduate accredited General Medical Sonographers in Australia - Industry perceptions. J Med Radiat Sci 2023;70:229-238. PMID: 37029950. PMCID: PMC10500106. DOI: 10.1002/jmrs.676.
Abstract
INTRODUCTION: Linking individual competencies to entrustable professional tasks provides a holistic view of sonography graduates' work readiness. The Australian Sonographers Accreditation Registry (ASAR) publishes a set of entrustable professional activities (EPAs) as part of its Standards for Accreditation of Sonography Courses. EPAs are distinct ultrasound examinations grouped within six critical practice units. This study reports on industry perspectives of the current EPAs and their classification for graduates completing general sonography courses in Australia. The article also examines the value of EPAs and links their function to the assessment of graduate competency.
METHODS: An online survey tool elicited stakeholder feedback on graduate EPAs across the six critical practice units and on the potential inclusion of a new Paediatric unit. Of an original sample of 655, 309 responded to questions about general sonography courses.
RESULTS: A majority (55.3%) recommended no changes to the existing EPA list, and 44.7% recommended amending it. Among respondents who recommended changes (138/309), all current EPAs received >80% agreement to be retained; in addition, nine new examinations received >70% agreement for inclusion at the graduate level. Whilst 42.7% (132/309) supported the current ASAR model requiring competency in five of the six critical practice units, 45.6% (141/309) recommended increasing this to all six; there was limited support (11.7%, 36/309) for reducing the number. Asked about adding a new Paediatric-specific critical practice unit, 61.8% (181/293) recommended its inclusion.
CONCLUSIONS: The findings demonstrate that the current list of EPAs aligns with industry expectations, whereas views on the modelling and grouping of critical practice units diverge. The article's critical analysis of the results and their implications provides stakeholders with a practical approach to clinical teaching and EPA assessment and helps inform any review of the accreditation standards.
Affiliation(s)
- Christopher Edwards: School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, Queensland, Australia
- Rebecca Perry: Allied Health and Human Performance, University of South Australia, Adelaide, South Australia, Australia
- Deanne Chester: School of Health, Medical and Applied Sciences, Central Queensland University, Brisbane, Queensland, Australia
- Jessie Childs: Allied Health and Human Performance, University of South Australia, Adelaide, South Australia, Australia
4
Spiegel MC, Lopez A, Kilb E. A Cross-Sectional Analysis of Internal Medicine Residency Program Directors' Adherence to Guidelines for Standardized Fellowship Letters of Recommendation. J Gen Intern Med 2023;38:2846-2848. PMID: 37436570. PMCID: PMC10506994. DOI: 10.1007/s11606-023-08312-2.
Affiliation(s)
- Michelle C Spiegel: Division of Pulmonary, Critical Care, Allergy, and Sleep Medicine, Department of Medicine, Medical University of South Carolina, Charleston, SC, USA
- Alexandra Lopez: Department of Medicine, Medical University of South Carolina, Charleston, SC, USA
- Edward Kilb: Division of Pulmonary, Critical Care, Allergy, and Sleep Medicine, Department of Medicine, Medical University of South Carolina, Charleston, SC, USA
5
Staudenmann D, Waldner N, Lörwald A, Huwendiek S. Medical specialty certification exams studied according to the Ottawa Quality Criteria: a systematic review. BMC Medical Education 2023;23:619. PMID: 37649019. PMCID: PMC10466740. DOI: 10.1186/s12909-023-04600-x.
Abstract
BACKGROUND: Medical specialty certification exams are high-stakes summative assessments used to determine which doctors have the necessary skills, knowledge, and attitudes to treat patients independently. Such exams are crucial for patient safety, candidates' career progression, and accountability to the public, yet they vary significantly among medical specialties and countries. It is therefore of paramount importance that the quality of specialty certification exams is studied in the scientific literature.
METHODS: In this systematic literature review we used the PICOS framework and searched seven databases, using a diverse set of search-term variations, for papers concerning medical specialty certification exams published in English between 2000 and 2020. Papers were screened by two researchers independently and scored for methodological quality and relevance to this review. Finally, they were categorized by country, medical specialty, and the seven Ottawa Criteria of good assessment: validity, reliability, equivalence, feasibility, acceptability, and catalytic and educational effect.
RESULTS: After removal of duplicates, 2852 papers were screened for inclusion, of which 66 met all relevant criteria. Over 43 different exams and more than 28 different specialties from 18 jurisdictions were studied. Around 77% of all eligible papers were based in English-speaking countries, with 55% of publications centered on just the UK and USA. General practice was the most frequently studied specialty among certification exams, the UK general practice exam having been particularly broadly analyzed. Papers received an average of 4.2/6 points on the quality score. Eligible studies analyzed 2.1/7 Ottawa Criteria on average, the most frequently studied criteria being reliability, validity, and acceptability.
CONCLUSIONS: The present systematic review shows a growing number of studies analyzing medical specialty certification exams over time, encompassing a wider range of medical specialties, countries, and Ottawa Criteria. Because of their reliance on multiple assessment methods and data points, aspects of programmatic assessment suggest a promising way forward in developing medical specialty certification exams that fulfill all seven Ottawa Criteria. Further research is needed to confirm these results, particularly analyses of examinations held outside the Anglosphere and studies analyzing entire certification exams or comparing multiple examination methods.
Affiliation(s)
- Noemi Waldner: University of Bern, Institute for Medical Education, Bern, Switzerland
- Andrea Lörwald: University of Bern, Institute for Medical Education, Bern, Switzerland
- Sören Huwendiek: University of Bern, Institute for Medical Education, Bern, Switzerland
6
Alkalash SH, Farag NA. Effect of Workplace-Based Assessment Utilization as a Formative Assessment for Learning Among Family Medicine Postgraduates at the Faculty of Medicine, Menoufia University: A Prospective Study. Cureus 2023;15:e35246. PMID: 36968896. PMCID: PMC10034738. DOI: 10.7759/cureus.35246.
Abstract
Background: Workplace-based assessment (WBA) is a group of assessment approaches that evaluate trainees' performance by observing and monitoring them in real clinical settings and then providing constructive, relevant feedback. Many WBA tools are available, including the mini-clinical evaluation exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussions, and multisource feedback (peers, seniors, and patients). WBA can help medical students improve their clinical competencies and ensure that qualified physicians graduate.
Methods: This prospective study was conducted in the family medicine department at the Menoufia Faculty of Medicine in Egypt and passed through two phases. Phase I introduced an orientation lecture on WBA for family medicine staff and a convenience sample of 21 family medicine postgraduates. Phase II involved conducting monthly mini-CEX and DOPS sessions for the postgraduates. Finally, students' satisfaction with the WBA was assessed, and all collected data were analyzed with the Statistical Package for the Social Sciences (SPSS) version 23 (IBM Corp., Armonk, NY).
Results: A total of 105 feedback sheets were obtained: 63 mini-CEX sheets (21 from each of three sessions) and 42 DOPS sheets (21 from each of two sessions), all of which were collected and analyzed. A significant improvement was detected in the postgraduates' feedback scores across consecutive sessions: 9.5 ± 2.7, 24.9 ± 2.5 and 27.29 ± 1.5 (P < 0.001) for mini-CEX, and 6.1 ± 1.8 versus 9.0 ± 1.2 (P < 0.001) for DOPS. About 93% of the postgraduates recommended the application of WBA for their peers, and 86% requested to undergo it again with other clinical cases and procedures.
Conclusion: Workplace-based assessment in the form of mini-CEX and DOPS demonstrated its ability to improve clinical knowledge and skills among family medicine postgraduates, who became motivated to undergo it again to improve their clinical performance and reduce their stress related to final summative and objective structured clinical examinations (OSCEs).
7
Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. Academic Medicine 2023;98:237-247. PMID: 35857396. DOI: 10.1097/acm.0000000000004819.
Abstract
PURPOSE: Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has measured neither improvement in narrative comment quality and accuracy nor the accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame-of-reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess the impact on entrustment rating accuracy.
METHOD: This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed the impact of participant demographics and baseline performance.
RESULTS: Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25) and accuracy (2.37 vs 2.06, P < .001, Cohen V = .20), and a higher mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments.
CONCLUSIONS: Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved the accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Affiliation(s)
- Jennifer R Kogan: associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. ORCID: https://orcid.org/0000-0001-8426-9506
- C Jessica Dine: associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. ORCID: https://orcid.org/0000-0001-5894-0861
- Lisa N Conforti: research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. ORCID: https://orcid.org/0000-0002-7317-6221
- Eric S Holmboe: chief, research, milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. ORCID: https://orcid.org/0000-0003-0108-6021
8
Khalife R, Gupta M, Gonsalves C, Park YS, Riddle J, Tekian A, Horsley T. Patient involvement in assessment of postgraduate medical learners: A scoping review. Medical Education 2022;56:602-613. PMID: 34981565. DOI: 10.1111/medu.14726.
Abstract
CONTEXT: Competency-based assessment of learners may benefit from a more holistic, inclusive approach to determining readiness for unsupervised practice. However, despite movements towards greater patient partnership in health care generally, inclusion of patients in postgraduate medical learners' assessment is largely absent.
METHODS: We conducted a scoping review to map the nature, extent and range of literature examining the inclusion (or exclusion) of patients within the assessment of postgraduate medical learners. Guided by Arksey and O'Malley's framework and informed by Levac et al. and Thomas et al., we searched two databases (MEDLINE and Embase) from inception until February 2021 using subheadings related to assessment, patients and postgraduate learners. Data analysis examined characteristics of, and factors influencing, patient involvement in assessment.
RESULTS: We identified 41 papers spanning four decades. Some literature suggests patients are willing to be engaged in assessment but may choose not to engage when, for example, language barriers exist. When stratified by specialty or clinical setting, the influence of factors such as gender, race, ethnicity or medical condition seems to remain consistent. Patients may participate in assessment as a stand-alone group or as part of a multisource feedback process. Patients generally provided high ratings but commented on observed professional behaviours and communication skills, in contrast with physicians, who focused on medical expertise.
CONCLUSION: Factors that influence patient involvement in assessment are multifactorial, including patients' own willingness, language and reading-comprehension challenges, and the resources available to training programmes to facilitate the integration of patient assessments. These barriers, however, are not insurmountable. While understudied, research examining patient involvement in assessment is increasing; our review suggests that the extent to which these unique insights will be taken up in postgraduate medical education may depend on assessment systems' readiness and, in particular, physicians' readiness to partner with patients in this way.
Affiliation(s)
- Roy Khalife: Department of Medicine (Hematology), The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Manika Gupta: Department of Medicine (Hematology), The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Carol Gonsalves: Department of Medicine (Hematology), The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Yoon Soo Park: Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Janet Riddle: Department of Medical Education, University of Illinois College of Medicine at Chicago, Chicago, Illinois, USA
- Ara Tekian: Department of Medical Education, University of Illinois College of Medicine at Chicago, Chicago, Illinois, USA
- Tanya Horsley: Research Unit, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
9
Magin P, Ralston A, Tapley A, Holliday E, Ball J, van Driel ML, Davey A, Klein L, FitzGerald K, Spike N, Fielding A. 'Low-value' clinical care in general practice: associations of low value care in GP trainees' practice, including formative and summative examination performance - protocol for cross-sectional and retrospective cohort study analyses using the QUestionable In Training Clinical Activities (QUIT-CA) index. BMJ Open 2022;12:e058989. PMID: 35545391. PMCID: PMC9096564. DOI: 10.1136/bmjopen-2021-058989.
Abstract
INTRODUCTION: 'Low-value' clinical care and overuse of medical services are 'questionable' clinical activities: the provision of medical services that are more likely to cause harm than good, or whose benefit is disproportionately low compared with their cost. This study seeks to establish clinical practice associations of a non-observed work-based assessment of general practitioner (GP) trainees' (registrars') questionable practice, the QUestionable In Training Clinical Activities (QUIT-CA) index. We will also explore the association of the QUIT-CA index with a formative observed work-based assessment, and establish whether registrars' QUIT-CA indexes are associated with summative examination performance.
METHODS AND ANALYSIS: We will conduct three analyses, all using data from the Registrar Clinical Encounters in Training (ReCEnT) study. ReCEnT is an ongoing (since 2010) cohort study in which Australian GP registrars record details of their in-consultation clinical and educational practice. The QUIT-CA index is compiled from ReCEnT consultation data. A cross-sectional analysis using negative binomial regression will establish clinical practice associations of the QUIT-CA index. A cross-sectional analysis using linear regression will establish associations of the QUIT-CA index with formative observed in-practice assessment (the General Practice Registrar-Competency Assessment Grid). A retrospective cohort analysis using linear regression will establish associations of the QUIT-CA index with summative examination performance (Royal Australian College of General Practitioners fellowship examination results).
ETHICS AND DISSEMINATION: The study has ethical approval from the University of Newcastle HREC (H-2009-0323). Findings will be disseminated in peer-reviewed journal articles and conference presentations.
Affiliation(s)
- Parker Magin: School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia; NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
- Anna Ralston: NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
- Amanda Tapley: School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia; NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
- Elizabeth Holliday: School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia
- Jean Ball: Clinical Research Design and Statistical Support Unit (CReDITSS), Hunter Medical Research Institute (HMRI), New Lambton, New South Wales, Australia
- Mieke L van Driel: Primary Care Clinical Unit, Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia
- Andrew Davey: School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia; NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
- Linda Klein: School of Medicine and Public Health, The University of Newcastle, Callaghan, New South Wales, Australia; NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
- Kristen FitzGerald: Australian General Practice Training, General Practice Training Tasmania (GPTT) Regional Training Organisation, Hobart, Tasmania, Australia; Tasmanian School of Medicine, University of Tasmania, Hobart, Tasmania, Australia
- Neil Spike: Eastern Victoria General Practice Training (EVGPT) Regional Training Organisation, Hawthorn, Victoria, Australia; Department of General Practice and Primary Health Care, University of Melbourne, Carlton, Victoria, Australia
- Alison Fielding: NSW & ACT Research and Evaluation Unit, GP Synergy Regional Training Organisation, Mayfield West, New South Wales, Australia
10
van der Meulen MW, Arah OA, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. When Feedback Backfires: Influences of Negative Discrepancies Between Physicians' Self and Assessors' Scores on Their Subsequent Multisource Feedback Ratings. The Journal of Continuing Education in the Health Professions 2021;41:94-103. PMID: 34009839. DOI: 10.1097/ceh.0000000000000347.
Abstract
INTRODUCTION: With multisource feedback (MSF), physicians might overrate their own performance compared with the scores received from assessors. However, there is limited insight into how perceived divergent feedback affects physicians' subsequent performance scores.
METHODS: During 2012 to 2018, 103 physicians were evaluated twice by 684 peers, 242 residents, 999 coworkers, and themselves in three MSF performance domains. Mixed-effect models quantified associations between the outcome variable "score changes" between first and second MSF evaluations and the explanatory variable "negative discrepancy score" (the number of items on which physicians rated themselves higher than their assessors' scores) at the first MSF evaluation. Whether associations differed across assessor groups and across a physician's years of experience as a doctor was also analyzed.
RESULTS: Forty-nine percent of physicians improved their total MSF score at the second evaluation, as assessed by others. The number of negative discrepancies was negatively associated with score changes in the domains "organization and (self)management" (b = -0.02; 95% confidence interval [CI], -0.03 to -0.02; SE = 0.004) and "patient-centeredness" (b = -0.03; 95% CI, -0.03 to -0.02; SE = 0.004). For "professional attitude," negative associations between score changes and negative discrepancies existed only for physicians with more than 6 years of experience (b for 6-10 years of experience = -0.03; 95% CI, -0.05 to -0.003; SE = 0.01; b for 16-20 years of experience = -0.03; 95% CI, -0.06 to -0.004; SE = 0.01).
DISCUSSION: The extent of performance improvement was smaller for physicians confronted with negative discrepancies. Performance scores actually declined when physicians overrated themselves on more than half of the feedback items. Professional attitude score changes of more experienced physicians confronted with negative discrepancies were affected more adversely. These physicians might have discounted feedback because of greater confidence in their own performance. Future work should investigate how MSF could improve physicians' performance while taking physicians' confidence into account.
Affiliation(s)
- Mirja W van der Meulen
- Dr. van der Meulen: is PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands. Dr. Arah: is professor, Department of Epidemiology, University of California, Los Angeles (UCLA), Los Angeles, the United States of America. Dr. Heeneman: is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, the Netherlands. Dr. oude Egbrink: is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten: is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. Lombarts: is professor, Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands
11
Wenghofer EF, Steele RS, Christiansen RG, Carter MH. Evaluation of a High Stakes Physician Competency Assessment: Lessons for Assessor Training, Program Accountability, and Continuous Improvement. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2021; 41:111-118. [PMID: 33929350 DOI: 10.1097/ceh.0000000000000362] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
INTRODUCTION There is a dearth of evidence evaluating postlicensure high-stakes physician competency assessment programs. Our purpose was to contribute to this evidence by evaluating a high-stakes assessment for assessor inter-rater reliability and for the relationship between performance on individual assessment components and overall performance. We did so to determine whether the assessment tools identify specific competency needs of the assessed physicians and to contribute to our understanding of physician dyscompetence more broadly. METHOD Four assessors independently reviewed 102 video-recorded assessments and scored physicians on seven assessment components and overall performance. Inter-rater reliability was measured with intraclass correlation coefficients using a multiple-rater, consistency, two-way random-effects model. Analysis of variance with least-significant-difference post hoc analyses examined whether mean component scores differed significantly by quartile range of overall performance. Linear regression analysis determined the extent to which each component score was associated with overall performance. RESULTS Intraclass correlation coefficients ranged between 0.756 and 0.876 for all components scored and were highest for overall performance. Regression indicated that individual component scores were positively associated with overall performance. Levels of variation in component scores differed significantly across quartile ranges, with higher variability among poorer performers. DISCUSSION High-stakes assessments can be conducted reliably and can identify performance gaps of potentially dyscompetent physicians. Physicians who performed well tended to do so in all aspects evaluated, whereas those who performed poorly demonstrated areas of both strength and weakness. Understanding that dyscompetence rarely means a complete or catastrophic lapse in competence is vital to understanding how educational needs change through a physician's career.
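The reliability figures above come from a multiple-rater, consistency, two-way random-effects intraclass correlation, commonly labelled ICC(C,k) or ICC3k. As an illustration only (not the authors' code), a minimal NumPy sketch of that coefficient:

```python
import numpy as np

def icc_consistency_avg(ratings: np.ndarray) -> float:
    """ICC(C,k): two-way model, consistency definition, average of k raters.

    ratings: n_subjects x k_raters matrix of scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    # Two-way ANOVA decomposition: subjects (rows) and raters (columns)
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)                  # between-subjects mean square
    ms_err = ss_err / ((n - 1) * (k - 1))        # residual mean square
    return (ms_rows - ms_err) / ms_rows
```

A consistency-type ICC discounts systematic rater leniency (the column effect), which is why several assessors applying different internal standards can still agree almost perfectly in rank order.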
Affiliation(s)
- Elizabeth F Wenghofer
- Dr. Wenghofer: Full Professor, School of Rural and Northern Health, Laurentian University, Sudbury, Ontario, Canada. Dr. Steele: Medical Director of Knowledge, Skills, Training, Assessment, and Training (KSTAR) Physician Programs, A&M Rural and Community Health Institute, Texas A&M University Health Science Center, College Station, TX. Dr. Christiansen: Professor of Medicine, Department of Medicine, University of Illinois College of Medicine, Rockford, IL. Dr. Carter: Clinical Assistant Professor of primary care medicine, Primary Care and Population Health, Texas A&M University Health Science Center, College Station, TX
12
Tu W, Hibbert R, Kontolemos M, Dang W, Wood T, Verma R, McInnes MDF. Diagnostic Radiology Residency Assessment Tools: A Scoping Review. Can Assoc Radiol J 2021; 72:651-660. [PMID: 33401932 DOI: 10.1177/0846537120981581] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
Abstract
PURPOSE The multifaceted nature of learning in diagnostic radiology residency requires a variety of assessment methods. However, the scope and quality of assessment tools have not been formally examined. A scoping review was performed to identify assessment tools available for radiology resident training and to evaluate the validity of these tools. METHODS A literature search was conducted through multiple databases and on-line resources. The inclusion criterion was any tool used in the assessment of radiology resident competence. Data regarding residents, evaluators and the specifics of each tool were extracted. Each tool was subjected to a validation process with a customized rating scale covering the 5 categories of validity evidence: content, response process, internal structure, relations to other variables, and consequences. RESULTS The initial search returned 447 articles; 35 were included. The most frequently evaluated competency was overall knowledge (31%), the most common publishing journal was Academic Radiology (24%), and evaluations were most commonly set in the United States (57%). In terms of validation, we found low adherence to the modern integrated view of validity, with only 34% of studies including a definition of validity. When the 5 domains of validation evidence were specifically examined, most were either absent or of low rigor (70%). Only one study presented a modern definition of validation (3%, 1/35). CONCLUSION We identified 35 evaluation tools covering a variety of competency areas. However, few of these tools have been validated. Development of new validated assessment tools, or validation of existing tools, is essential for the ongoing transition to a competency-based curriculum.
Affiliation(s)
- Wendy Tu
- Department of Radiology, University of Ottawa, Ontario, Canada
- Rebecca Hibbert
- Department of Radiology, University of Ottawa, Ontario, Canada
- Mario Kontolemos
- Department of Radiology, University of Ottawa, Ontario, Canada
- Wilfred Dang
- Department of Radiology, University of Ottawa, Ontario, Canada
- Tim Wood
- Department of Innovation in Medical Education, University of Ottawa, Ontario, Canada
- Raman Verma
- Department of Radiology, University of Ottawa, Ontario, Canada
- Matthew D F McInnes
- Department of Radiology, University of Ottawa, Ontario, Canada; Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
13
Taylor D, Park YS, Smith C, Cate OT, Tekian A. Constructing Approaches to Entrustable Professional Activity Development that Deliver Valid Descriptions of Professional Practice. TEACHING AND LEARNING IN MEDICINE 2021; 33:89-97. [PMID: 32634323 DOI: 10.1080/10401334.2020.1784740] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Issue: Entrustable Professional Activities (EPAs) describe the core tasks health professionals must be competent in performing before promotion and/or before moving into unsupervised practice. When used for learner assessment, they serve as gateways to increased responsibility and autonomy. It follows that identifying and describing EPAs is a high-stakes form of work analysis that aims to describe the core work of a profession. However, hasty creation and adoption of EPAs without rigorous attention to content threatens the quality of judgments subsequently made using EPA-based assessment tools. There is a clear need for approaches to identifying validity evidence for EPAs themselves prior to their deployment in workplace-based assessment. Evidence: For EPAs to realize their potential in health professions education, they must first be constructed to accurately reflect the work of that profession or specialty. If the EPAs fail to do so, they cannot predict a graduate's readiness for, or future performance in, professional practice. Evaluating the methods used for identification, description, and adoption of EPAs through a construct validity lens helps give leaders and stakeholders of EPA development confidence that the EPAs constructed are, in fact, an accurate representation of the profession's work. Implications: Application of a construct validity lens to EPA development affects all five commonly followed steps in EPA development: selection of experts; identification of candidate EPAs; iterative revisions; evaluation of proposed EPAs; and formal adoption of EPAs into curricula. It allows curriculum developers to avoid pitfalls, bias, and common mistakes. Further, construct validity evidence for EPA development provides assurance that the EPAs adopted are appropriate for use in workplace-based assessment and entrustment decision-making.
Affiliation(s)
- David Taylor
- Department of Medicine, Queen's University, Kingston, Canada
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago, Chicago, Illinois, USA
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Ara Tekian
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
14
15
Johnston J, Pinsk M. Daily Evaluation Cards Are Superior for Student Assessment Compared to Single Rater In-Training Evaluations. MEDICAL SCIENCE EDUCATOR 2020; 30:203-209. [PMID: 34457660 PMCID: PMC8368482 DOI: 10.1007/s40670-019-00855-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
INTRODUCTION The University of Manitoba's ambulatory pediatric clerkship transitioned from single in-training evaluation reports (ITERs) to daily encounter cards (DECs). The impact of this change on the quality of student assessment was unknown. Using the validated Completed Clinical Evaluation Report Rating (CCERR) scale, we compared the assessment quality of the single ITER to the DEC-based system. METHODS Block randomization was used to select from a cohort of ITER- and DEC-based assessments at equivalent points in clerkship training. Data were transcribed, anonymized and scored by two blinded raters using the CCERR. RESULTS Inter-rater reliability for total CCERR scores was substantial (> 0.6). The mean total CCERR score for the DEC cohort was significantly higher than for the ITER cohort (25.2 vs. 16.8, p < 0.001), as were the mean scores for each item (2.81 vs. 1.86, p < 0.05). Multivariate logistic regression supported the significant influence of assessment method on assessment quality. CONCLUSIONS The average quality of student assessments improved with the transition from an ITER-based system to a DEC-based system. However, the fact that the DEC cohort achieved only average CCERR scores suggests an unmet need for faculty development.
Affiliation(s)
- James Johnston
- Department of Pediatrics & Child Health, Max Rady College of Medicine, University of Manitoba, FE009-840 Sherbrook St, Winnipeg, MB R3A 1S1 Canada
- Maury Pinsk
- Department of Pediatrics & Child Health, Max Rady College of Medicine, University of Manitoba, FE009-840 Sherbrook St, Winnipeg, MB R3A 1S1 Canada
16
Fielding A, Mulquiney K, Canalese R, Tapley A, Holliday E, Ball J, Klein L, Magin P. A general practice workplace-based assessment instrument: Content and construct validity. MEDICAL TEACHER 2020; 42:204-212. [PMID: 31597048 DOI: 10.1080/0142159x.2019.1670336] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Introduction: Relatively few general practice (GP) workplace-based assessment (WBA) instruments have been psychometrically evaluated. This study aims to establish the content validity and internal consistency of the General Practice Registrar Competency Assessment Grid (GPR-CAG). Methods: The GPR-CAG was constructed as a formative assessment instrument for Australian GP registrars (trainees). GPR-CAG items were determined by an iterative literature review, expert opinion and pilot-testing process. Validation data were collected, between 2014 and 2016, during routine clinical teaching visits within registrars' first two general practice training terms (GPT1 and GPT2) for registrars across New South Wales and the Australian Capital Territory. Factor analysis and expert consensus were used to refine items and establish the GPR-CAG's internal structure. GPT1 and GPT2 competencies were analysed separately. Results: Data from 555 registrars undertaking GPT1 and 537 registrars undertaking GPT2 were included in the analyses. A four-factor, 16-item solution was identified for GPT1 competencies (Cronbach's alpha range: 0.71-0.83) and a seven-factor, 27-item solution for GPT2 competencies (Cronbach's alpha: 0.63-0.84). The emergent factor structures were clinically characterisable and resonant with existing medical education competency frameworks. Discussion: This study establishes initial evidence for the content validity and internal consistency of the GPR-CAG, which appears to have utility as a formative WBA instrument for GP training.
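Internal-consistency figures like the Cronbach's alpha ranges reported above are a simple function of item and total-score variances. A minimal NumPy sketch (illustrative only, not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an n_respondents x k_items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)
```

When items covary strongly, the total-score variance dwarfs the summed item variances and alpha approaches 1; weakly related items pull it toward 0.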
Affiliation(s)
- Alison Fielding
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Katie Mulquiney
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Amanda Tapley
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Elizabeth Holliday
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Jean Ball
- Clinical Research Design IT and Statistical Support, Hunter Medical Research Institute, New Lambton, Australia
- Linda Klein
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Parker Magin
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
17
Kohring JM, Harrast JJ, Stotts AK, Zhang C, Millar MM, Presson AP, Saltzman CL. Resident Independence Performing Common Orthopaedic Procedures at the End of Training: Perspective of the Graduated Resident. J Bone Joint Surg Am 2020; 102:e2. [PMID: 31567668 DOI: 10.2106/jbjs.18.01469] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education (ACGME) has established minimum exposure rates for specific orthopaedic procedures during residency but has not established the achievement of competence at the end of training. The determination of independence in performing surgical procedures remains undefined and may depend on the perspective of the observer. The purpose of this study was to understand the perceptions of recently graduated orthopaedic residents regarding the number of cases needed to achieve independence and their ability to perform common orthopaedic procedures at the end of training. METHODS We conducted a web survey of all 727 recently graduated U.S. orthopaedic residents sitting for the 2018 American Board of Orthopaedic Surgery Part I Examination in July 2018. The surveyed participants were asked to assess their ability to independently perform 26 common adult and pediatric orthopaedic procedures and to recommend the number of cases needed to achieve independence at the end of training. We compared these data to the ACGME Minimum Numbers and the average ACGME resident experience data for residents who graduated from 2010 to 2012. RESULTS For 14 (78%) of the 18 adult procedures, >80% of respondents reported the ability to perform independently, and for 7 (88%) of the 8 pediatric procedures, >90% reported the ability to perform independently. The resident-recommended number of cases for independence was greater than the ACGME Minimum Numbers for all but 1 adult procedure. For 18 of the 26 adult and pediatric procedures, the mean 2010 to 2012 graduated resident exposure was significantly less than the mean number recommended for independence by 2018 graduates (p < 0.05). CONCLUSIONS Overall, recently graduated residents reported high self-perceived independence in performing the majority of the common adult and pediatric orthopaedic surgical procedures included in this study.
In general, recently graduated residents recommended a greater number of case exposures to achieve independence than the ACGME Minimum Numbers.
Affiliation(s)
- Jessica M Kohring
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
- Alan K Stotts
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
- Chong Zhang
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
- Morgan M Millar
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
- Angela P Presson
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
- Charles L Saltzman
- Departments of Orthopaedics (J.M.K., A.K.S., A.P.P., and C.L.S.) and Internal Medicine (C.Z., M.M.M., and A.P.P.), University of Utah, Salt Lake City, Utah
18
Stotts AK, Kohring JM, Presson AP, Millar MM, Harrast JJ, Van Heest AE, Zhang C, Saltzman CL. Perceptions of the Recommended Resident Experience with Common Orthopaedic Procedures: A Survey of Program Directors and Early Practice Surgeons. J Bone Joint Surg Am 2019; 101:e63. [PMID: 31274728 PMCID: PMC6641477 DOI: 10.2106/jbjs.18.00149] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
BACKGROUND U.S. orthopaedic residency training is anchored by the Accreditation Council for Graduate Medical Education (ACGME) requirements, which include minimum numbers for 15 categories of procedures. The face validity of these recommendations, and expectations for exposure to other common procedures, has not been rigorously investigated. The main goals of this investigation were to understand the perceptions of program directors and early practice surgeons regarding the number of cases needed in residency training and to report which of the most commonly performed procedures residents should be able to perform independently upon graduation. METHODS We sent surveys to 157 current program directors of ACGME-approved orthopaedic surgery residency programs and to all examinees sitting for the American Board of Orthopaedic Surgery (ABOS) Part II Oral Examination in 2017, requesting that they estimate the minimum number of exposures for the 22 adult and 24 pediatric procedures that are most commonly performed during residency and the first 2 years in practice. Where applicable, we compared these with the ACGME "Minimum Numbers" and the average ACGME resident experience data from 2010 to 2012 for resident graduates. For each of the 46 procedures, participants were asked if every orthopaedic resident should be able to independently perform the procedure upon graduation. We compared the percent for independence between the early practice surgeons and the program directors. RESULTS For the majority of adult and pediatric procedures, the early practitioners reported significantly higher numbers of cases needing to be performed during residency than the program directors did. ACGME Minimum Numbers were always lower than the case numbers recommended by the early practice surgeons and the program directors.
Overall, we found good-to-excellent agreement for independence at graduation between program directors and early practitioners for adult cases (intraclass correlation coefficient [ICC], 0.98; 95% confidence interval [CI], 0.82 to 0.99) and moderate-to-good agreement for pediatric cases (ICC, 0.96; 95% CI, 0.74 to 0.99). CONCLUSIONS The program directors frequently perceived the need for resident operative case exposure to common orthopaedic procedures to be lower than that estimated by the early practice surgeons. Both program directors and early practice surgeons generally agreed on which common cases residents should be able to perform independently by graduation.
Affiliation(s)
- Alan K. Stotts
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
- Jessica M. Kohring
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
- Angela P. Presson
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
- Morgan M. Millar
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
- Ann E. Van Heest
- Department of Orthopaedic Surgery, University of Minnesota, Minneapolis, Minnesota
- Chong Zhang
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
- Charles L. Saltzman
- Department of Orthopaedics (A.K.S., J.M.K., A.P.P., and C.L.S.) and Division of Epidemiology, Department of Internal Medicine (A.P.P., M.M.M., and C.Z.), University of Utah, Salt Lake City, Utah
19
Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in Internal Medicine. BMC MEDICAL EDUCATION 2018; 18:218. [PMID: 30236097 PMCID: PMC6148769 DOI: 10.1186/s12909-018-1327-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2018] [Accepted: 09/13/2018] [Indexed: 05/16/2023]
Abstract
BACKGROUND Workplace-based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment, but it was developed in the surgical setting and it is not known whether it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that the tools used have proper psychometric data to support them. METHODS Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1), seniors (PGY2s and PGY3s) and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. The range of ratings varied from 2 ("I had to talk them through") to 5 ("I did not need to be there") for most items. Mean scores differed significantly by training level (p < .001), with juniors having lower ratings (M = 3.80 out of 5, SD = 0.49) than seniors (M = 4.22, SD = 0.47), who in turn had lower ratings than fellows (M = 4.70, SD = 0.36).
Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient, which corresponds to internal consistency, is 0.92. CONCLUSIONS This study's psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
Affiliation(s)
- Samantha Halman
- Department of Medicine, the University of Ottawa, The Ottawa Hospital General Campus, 501 Smyth Road, Box 209, Ottawa, Ontario K1H 8L6 Canada
- Janelle Rekman
- Department of Surgical Education, the University of Ottawa, The Ottawa Hospital Civic Campus, Loeb Research Building - Main Floor WM150b, 725 Parkdale Avenue, C/O Isabel Menard, Ottawa, Ontario K1Y 4E9 Canada
- Timothy Wood
- Department of Innovation in Medical Education, Faculty of Medicine, the University of Ottawa, 850 Peter Morand Crescent (Room 102), Ottawa, Ontario K1G 5Z3 Canada
- Andrew Baird
- Department of Medicine, the University of Ottawa, The Ottawa Hospital Parkdale Campus, Room 162, 1053 Carling Avenue, C/O Odile Kaufmann, Ottawa, Ontario K1Y 4E9 Canada
- Wade Gofton
- Department of Surgical Education, the University of Ottawa, Ottawa Hospital - Civic Campus, Suite J15, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9 Canada
- Nancy Dudek
- Department of Medicine, the University of Ottawa, The Rehabilitation Centre, 505 Smyth Road, Ottawa, Ontario K1H 8M2 Canada
20
Eva KW. Cognitive Influences on Complex Performance Assessment: Lessons from the Interplay between Medicine and Psychology. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2018. [DOI: 10.1016/j.jarmac.2018.03.008] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
21
Cheung WJ, Dudek NL, Wood TJ, Frank JR. Supervisor-trainee continuity and the quality of work-based assessments. MEDICAL EDUCATION 2017; 51:1260-1268. [PMID: 28971502 DOI: 10.1111/medu.13415] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Revised: 05/30/2017] [Accepted: 07/11/2017] [Indexed: 05/12/2023]
Abstract
CONTEXT Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS Daily encounter cards representing three differing degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r2 = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r2 = 0.25; p = 0.04, r2 = 0.19), but not in the off-service group (p = 0.62, r2 = 0.05). CONCLUSIONS Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs.
However, resident performance was found to affect assessor behaviours in the on-service group, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for residents demonstrating appropriate clinical competence progression.
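The group comparison above rests on a one-way ANOVA plus an eta-squared effect size; for a one-way design, eta-squared equals SS_between / (SS_between + SS_within). A minimal sketch of both quantities, for illustration only (not the authors' code):

```python
import numpy as np

def one_way_anova(groups):
    """F statistic and eta-squared for k independent groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    # Partition total variability into between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = all_vals.size - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)  # proportion of variance explained
    return f_stat, eta_sq
```

An eta-squared of 0.03, as reported in the study, means group membership explains about 3% of score variance: statistically detectable yet, as the authors argue, not educationally meaningful.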
Affiliation(s)
- Warren J Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Nancy L Dudek
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Jason R Frank
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
22
Holmboe ES. Work-based Assessment and Co-production in Postgraduate Medical Training. GMS JOURNAL FOR MEDICAL EDUCATION 2017; 34:Doc58. [PMID: 29226226 PMCID: PMC5704603 DOI: 10.3205/zma001135] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 11/09/2016] [Revised: 03/15/2017] [Accepted: 05/09/2017] [Indexed: 05/24/2023]
Abstract
Assessment has always been an essential component of postgraduate medical education and for many years focused predominantly on various types of examinations. While examinations of medical knowledge, and more recently of clinical skills with standardized patients, can assess learner capability in controlled settings and provide a level of assurance for the public, persistent and growing concerns regarding quality of care and patient safety worldwide have raised the importance of and need for better work-based assessments. Work-based assessments, when done effectively, can more authentically capture the abilities of learners to actually provide safe, effective, patient-centered care. Furthermore, we have entered the era of interprofessional care, in which effective teamwork among multiple health care professionals is now paramount. Work-based assessment methods are therefore essential in an interprofessional healthcare world. To better prepare learners for these newer competencies and the ever-growing complexity of healthcare, many postgraduate medical education systems across the globe have turned to outcomes-based models of education, codified through competency frameworks. This commentary provides a brief overview of key methods of work-based assessment, such as direct observation, multisource feedback, patient experience surveys and performance measures, that are needed in a competency-based world that places a premium on educational and clinical outcomes. However, the full potential of work-based assessments will only be realized if postgraduate learners play an active role in their own assessment program. This will require a substantial culture change, and culture change only occurs through actions and changed behaviors.
Co-production offers a practical and philosophical approach to engaging postgraduate learners as active, intrinsically motivated agents in their own professional development, helping to change the learning culture and contributing to improved programmatic assessment in postgraduate training.
Affiliation(s)
- Eric S. Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, USA
23
Elnicki DM, Aiyer MK, Cannarozzi ML, Carbo A, Chelminski PR, Chheda SG, Chudgar SM, Harrell HE, Hood LC, Horn M, Johl K, Kane GC, McNeill DB, Muntz MD, Pereira AG, Stewart E, Tarantino H, Vu TR. An Entrustable Professional Activity (EPA)-Based Framework to Prepare Fourth-Year Medical Students for Internal Medicine Careers. J Gen Intern Med 2017. [PMID: 28634908] [PMCID: PMC5653547] [DOI: 10.1007/s11606-017-4089-8]
Abstract
The purpose of the fourth year of medical school remains controversial. Competing demands during this transitional phase cause confusion for students and educators. In 2014, the Association of American Medical Colleges (AAMC) released 13 Core Entrustable Professional Activities for Entering Residency (CEPAERs). A committee comprising members of the Clerkship Directors in Internal Medicine and the Association of Program Directors in Internal Medicine applied these principles to preparing students for internal medicine residencies. The authors propose a curricular framework based on five CEPAERs that were felt to be most relevant to residency preparation, informed by prior stakeholder surveys. The critical areas outlined include entering orders, forming and answering clinical questions, conducting patient care handovers, collaborating interprofessionally, and recognizing patients requiring urgent care and initiating that care. For each CEPAER, the authors offer suggestions about instruction and assessment of competency. The fourth year of medical school can be rewarding for students, while adequately preparing them to begin residency, by addressing important elements defined in the core entrustable activities. Thus prepared, new residents can function safely and competently in supervised postgraduate settings.
Affiliation(s)
- D Michael Elnicki
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Meenakshy K Aiyer
- University of Illinois College of Medicine at Peoria, Peoria, IL, USA
- Alexander Carbo
- Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Paul R Chelminski
- University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Shobhina G Chheda
- University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- L Chad Hood
- University of Central Florida College of Medicine, Orlando, FL, USA
- Michelle Horn
- University of Mississippi School of Medicine, Jackson, MS, USA
- Karnjit Johl
- University of California-Davis School of Medicine, Sacramento, CA, USA
- Gregory C Kane
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, PA, USA
- Anne G Pereira
- University of Minnesota Medical School, Minneapolis, MN, USA
- Emily Stewart
- Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, PA, USA
- T Robert Vu
- Indiana University School of Medicine, Charlotte, NC, USA
24
Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR. A call to action: The controversy of and rationale for competency-based medical education. Med Teach 2017; 39:574-581. [PMID: 28598742] [DOI: 10.1080/0142159X.2017.1315067]
Abstract
Although medical education has enjoyed many successes over the last century, there is a recognition that health care is too often unsafe and of poor quality. Errors in diagnosis and treatment, communication breakdowns, poor care coordination, inappropriate use of tests and procedures, and dysfunctional collaboration harm patients and families around the world. These issues reflect on our current model of medical education and raise the question: Are physicians being adequately prepared for twenty-first century practice? Multiple reports have concluded the answer is "no." Concurrent with this concern is an increasing interest in competency-based medical education (CBME) as an approach to help reform medical education. The principles of CBME are grounded in providing better and safer care. As interest in CBME has increased, so have criticisms of the movement. This article summarizes and addresses objections and challenges related to CBME. These can provide valuable feedback to improve CBME implementation and avoid pitfalls. We strongly believe medical education reform should not be reduced to an "either/or" approach, but should blend theories and approaches to suit the needs and resources of the populations served. The incorporation of milestones and entrustable professional activities within existing competency frameworks speaks to the dynamic evolution of CBME, which should not be viewed as a fixed doctrine, but rather as a set of evolving concepts, principles, tools, and approaches that can enable important reforms in medical education that, in turn, enable the best outcomes for patients.
Affiliation(s)
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
- Robert Englander
- School of Medicine, University of Minnesota, Minneapolis, MN, USA
- Linda Snell
- Centre for Medical Education and Department of General Internal Medicine, McGill University, Montreal, Quebec, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
25
Lockyer J, Carraccio C, Chan MK, Hart D, Smee S, Touchie C, Holmboe ES, Frank JR. Core principles of assessment in competency-based medical education. Med Teach 2017; 39:609-616. [PMID: 28598746] [DOI: 10.1080/0142159X.2017.1315082]
Abstract
The meaningful assessment of competence is critical for the implementation of effective competency-based medical education (CBME). Timely ongoing assessments are needed along with comprehensive periodic reviews to ensure that trainees continue to progress. New approaches are needed to optimize the use of multiple assessors and assessments; to synthesize the data collected from multiple assessors and multiple types of assessments; to develop faculty competence in assessment; and to ensure that relationships between the givers and receivers of feedback are appropriate. This paper describes the core principles of assessment for learning and assessment of learning. It addresses several ways to ensure the effectiveness of assessment programs, including using the right combination of assessment methods and conducting careful assessor selection and training. It provides a reconceptualization of the role of psychometrics and articulates the importance of a group process in determining trainees' progress. In addition, it notes that, to reach its potential as a driver in trainee development, quality care, and patient safety, CBME requires effective information management and documentation as well as ongoing consideration of ways to improve the assessment system.
Affiliation(s)
- Jocelyn Lockyer
- Cumming School of Medicine, University of Calgary, Calgary, Canada
- Ming-Ka Chan
- Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, Canada
- Danielle Hart
- Hennepin County Medical Center, Minneapolis, MN, USA
- University of Minnesota Medical School, Minneapolis, MN, USA
- Sydney Smee
- Medical Council of Canada, Ottawa, Canada
- Claire Touchie
- Medical Council of Canada, Ottawa, Canada
- Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
26
McLachlan JC. Gaps and Bridges. Med Educ 2016; 50:984-985. [PMID: 27628714] [DOI: 10.1111/medu.12897]
27
Eva KW, Bordage G, Campbell C, Galbraith R, Ginsburg S, Holmboe E, Regehr G. Towards a program of assessment for health professionals: from training into practice. Adv Health Sci Educ Theory Pract 2016; 21:897-913. [PMID: 26590984] [DOI: 10.1007/s10459-015-9653-6]
Abstract
Despite multifaceted attempts to "protect the public," including the implementation of various assessment practices designed to identify individuals at all stages of training and practice who underperform, profound deficiencies in quality and safety continue to plague the healthcare system. The purpose of this reflections paper is to cast a critical lens on current assessment practices and to offer insights into ways in which they might be adapted to ensure alignment with modern conceptions of health professional education for the ultimate goal of improved healthcare. Three dominant themes will be addressed: (1) The need to redress unintended consequences of competency-based assessment; (2) The potential to design assessment systems that facilitate performance improvement; and (3) The importance of ensuring authentic linkage between assessment and practice. Several principles cut across each of these themes and represent the foundational goals we would put forward as signposts for decision making about the continued evolution of assessment practices in the health professions: (1) Increasing opportunities to promote learning rather than simply measuring performance; (2) Enabling integration across stages of training and practice; and (3) Reinforcing point-in-time assessments with continuous professional development in a way that enhances shared responsibility and accountability between practitioners, educational programs, and testing organizations. Many of the ideas generated represent suggestions for strategies to pilot test, for infrastructure to build, and for harmonization across groups to be enabled. These include novel strategies for OSCE station development, formative (diagnostic) assessment protocols tailored to shed light on the practices of individual clinicians, the use of continuous workplace-based assessment, and broadening the focus of high-stakes decision making beyond determining who passes and who fails. We conclude with reflections on systemic (i.e., cultural) barriers that may need to be overcome to move towards a more integrated, efficient, and effective system of assessment.
Affiliation(s)
- Kevin W Eva
- Centre for Health Education Scholarship, University of British Columbia, JPPN 3324, 910 West 10th Avenue, Vancouver, BC, V5Z 1M9, Canada
- Craig Campbell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Eric Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Glenn Regehr
- Centre for Health Education Scholarship, University of British Columbia, JPPN 3324, 910 West 10th Avenue, Vancouver, BC, V5Z 1M9, Canada
28
Castanelli DJ, Jowsey T, Chen Y, Weller JM. Perceptions of purpose, value, and process of the mini-Clinical Evaluation Exercise in anesthesia training. Can J Anaesth 2016; 63:1345-1356. [DOI: 10.1007/s12630-016-0740-9]
29
Wenghofer EF, Henzel TR, Miller SH, Norcross W, Boal P. Value of General Medical Knowledge Examinations in Performance Assessment of Practicing Physicians With Potential Competence and Performance Deficiencies. J Contin Educ Health Prof 2016; 36:113-118. [PMID: 27262154] [DOI: 10.1097/CEH.0000000000000063]
Abstract
INTRODUCTION Problems with a physician's performance may arise at any point in their career, so effective, valid tools and processes are needed to accurately identify deficiencies in competence or performance. Although scores on multiple-choice question examinations have been shown to predict some aspects of practicing physicians' performance, their relationship to overall clinical competence is uncertain, particularly after the first 10 years of practice. The purpose of this study was therefore to examine how a general medical knowledge multiple-choice examination relates to a comprehensive assessment of competence and performance in experienced practicing physicians with potential competence and performance deficiencies. METHODS The study included 233 physicians, of varying specialties, assessed by the University of California, San Diego Physician Assessment and Clinical Education Program (PACE) between 2008 and 2012, who completed the Post-Licensure Assessment System Mechanisms of Disease (MoD) examination. Logistic regression determined whether the examination score predicted a passing assessment outcome after adjusting for gender, international medical graduate status, certification status, and age. RESULTS Most physicians (89.7%) received an overall passing outcome on the PACE assessment. The mean MoD score was 66.9% correct (median 68.0%). The logistic regression was significant (P = .038), indicating that physicians with higher MoD examination scores were more likely to achieve a passing assessment outcome (odds ratio = 1.057). DISCUSSION Physician MoD scores are significant predictors of overall physician competence and performance as evaluated by the PACE assessment.
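The odds ratio reported here (1.057 per additional score point) is obtained by exponentiating the score coefficient of the fitted logistic model. A minimal sketch of that kind of analysis, on synthetic data with a single predictor and no covariates (all numbers and names are illustrative assumptions, not the PACE/MoD dataset):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.01, epochs=4000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient ascent on the log-likelihood.
    Returns the intercept b0 and slope b1 (log-odds change per unit of x)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Synthetic cohort: exam scores centred on the cohort mean; higher scores
# modestly raise an already high probability of a passing outcome.
random.seed(0)
scores = [random.gauss(0.0, 10.0) for _ in range(400)]
true_b0, true_b1 = 2.0, 0.06  # ~88% base pass rate, odds ratio ~1.06/point
passed = [
    1 if random.random() < 1.0 / (1.0 + math.exp(-(true_b0 + true_b1 * s))) else 0
    for s in scores
]

b0, b1 = fit_logistic(scores, passed)
odds_ratio = math.exp(b1)  # multiplicative change in pass odds per extra point
print(f"estimated odds ratio per point: {odds_ratio:.3f}")
```

Adjusting for the covariates used in the study (gender, international medical graduate status, certification status, age) would simply add further terms to the linear predictor; exponentiating the score coefficient still yields the per-point odds ratio.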
Affiliation(s)
- Elizabeth F Wenghofer
- Dr. Wenghofer: Associate Professor, School of Rural and Northern Health, Laurentian University, Sudbury, ON, Canada, and Research Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA
- Dr. Henzel: Research Analyst, Policy & Product Development, National Board of Medical Examiners, Philadelphia, PA
- Dr. Miller: Voluntary Clinical Professor of Family and Preventive Medicine and Surgery, University of California San Diego, San Diego, CA
- Dr. Norcross: Clinical Professor and Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA
- Mr. Boal: Associate Director, Physician Assessment and Clinical Education (PACE) Program, University of California San Diego, San Diego, CA
30
Hawkins RE, Welcher CM, Holmboe ES, Kirk LM, Norcini JJ, Simons KB, Skochelak SE. Implementation of competency-based medical education: are we addressing the concerns and challenges? Med Educ 2015; 49:1086-1102. [PMID: 26494062] [DOI: 10.1111/medu.12831]
Abstract
CONTEXT Competency-based medical education (CBME) has emerged as a core strategy to educate and assess the next generation of physicians. Advantages of CBME include: a focus on outcomes and learner achievement; requirements for multifaceted assessment that embraces formative and summative approaches; support of a flexible, time-independent trajectory through the curriculum; and increased accountability to stakeholders with a shared set of expectations and a common language for education, assessment and regulation. OBJECTIVES Despite the advantages of CBME, numerous concerns and challenges to the implementation of CBME frameworks have been described, including: increased administrative requirements; the need for faculty development; the lack of models for flexible curricula; and inconsistencies in terms and definitions. Additionally, there are concerns about reductionist approaches to assessment in CBME, lack of good assessments for some competencies, and whether CBME frameworks include domains of current importance. This study will outline these issues and discuss the responses of the medical education community. METHODS The concerns and challenges expressed are primarily categorised as: (i) those related to practical, administrative and logistical challenges in implementing CBME frameworks, and (ii) those with more conceptual or theoretical bases. The responses of the education community to these issues are then summarised. CONCLUSIONS The education community has begun to address the challenges involved in implementing CBME. Models and guidance exist to inform implementation strategies across the continuum of education, and focus on the more efficient use of resources and technology, and the use of milestones and entrustable professional activities-based frameworks. Inconsistencies in CBME definitions and frameworks remain a significant obstacle. Evolution in assessment approaches from in vitro task-based methods to in vivo integrated approaches is responsive to many of the theoretical and conceptual concerns about CBME, but much work remains to be done to bring rigour and quality to work-based assessment.
Affiliation(s)
- Richard E Hawkins
- Medical Education Outcomes, American Medical Association, Chicago, Illinois, USA
- Catherine M Welcher
- Medical Education Outcomes, American Medical Association, Chicago, Illinois, USA
- Eric S Holmboe
- Milestone Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois, USA
- Lynne M Kirk
- Department of Internal Medicine, Faculty of Medicine, University of Texas Southwestern, Dallas, Texas, USA
- John J Norcini
- Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, Pennsylvania, USA
- Kenneth B Simons
- Graduate Medical Education, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Susan E Skochelak
- Medical Education, American Medical Association, Chicago, Illinois, USA