1. Smith C, Patel M. 'Ticked off'? Can a new outcomes-based postgraduate curriculum utilising programmatic assessment reduce assessment burden in Intensive Care Medicine? J Intensive Care Soc 2023; 24:170-177. PMID: 37260422; PMCID: PMC10227897; DOI: 10.1177/17511437211061642.
Abstract
Context Increasing dissatisfaction with existing methods of assessment in the workplace, alongside a national drive towards outcomes-based postgraduate curricula, led to a recent overhaul of the way Intensive Care Medicine (ICM) trainees are assessed in the United Kingdom. Programmatic assessment methodology was utilised: the existing 'tick-box' approach using workplace-based assessment to demonstrate competencies was de-emphasised, and the expertise of trainers was used instead to assess capability against fewer, high-level outcomes relating to distinct areas of specialist practice. Methods A thematic analysis was undertaken investigating attitudes of 125 key stakeholders, including trainees and trainers, towards the new assessment strategy in relation to its impact on assessment burden and its acceptability. Results This qualitative study suggests increased satisfaction with the transition to an outcomes-based model in which capability is judged by educational supervisors. Reflecting frustration with existing assessment in the workplace, participants felt that assessment burden had been significantly reduced. The approach taken was felt to be an improved method for assessing professional practice, and there was enthusiasm for this change. However, the research highlights trainee and trainer anxiety regarding how to 'pass' these expert judgement decisions of capability in the real world. Concerns also emerged about the impact on subgroups of trainees, given the potential influence of implicit biases on the resultant fewer but 'higher stakes' interrogative judgements. Conclusion The move further towards a constructivist paradigm in workplace assessment in ICM reduces assessment burden yet can provoke anxiety amongst trainees and trainers, requiring considered implementation. Furthermore, the perception of potential for bias in global judgements of performance requires further exploration.
Affiliation(s)
- Christopher Smith
- Intensive Care Medicine Trainee ST6, North West School of ICM, Mersey, UK
- Mumtaz Patel
- North West School of ICM, Health Education England, UK
2. Schwitz F, Bartenstein A, Huwendiek S. [Workplace-based Assessments: A Needs Analysis of Residents and Supervisors]. Praxis 2022; 111:605-611. PMID: 35975414; DOI: 10.1024/1661-8157/a003877.
Abstract
During residency training, four workplace-based assessments (WBAs) are planned per year in the form of Mini-CEX and/or DOPS. They were introduced as a tool for giving feedback and defining learning objectives in the clinical setting. The aim of the present study was to identify facilitating and inhibiting factors, and the results provide information on how to improve the use of this learning tool to effectively promote learning in the workplace. First, all users must be trained in its use. In particular, it is important to provide immediate and specific feedback that identifies opportunities for improvement and sets achievable learning goals. Documentation should be user-friendly and provide an overview of the learning process. WBAs should not be perceived as a duty, but as a tool for valuable learning moments.
Affiliation(s)
- Fabienne Schwitz
- Universitätsklinik für Kardiologie, Inselspital, Universitätsspital Bern, Bern, Switzerland
- Both authors contributed equally to the manuscript
- Andreas Bartenstein
- Universitätsklinik für Kinderchirurgie, Inselspital, Universitätsspital Bern, Bern, Switzerland
- Both authors contributed equally to the manuscript
- Sören Huwendiek
- Abteilung für Assessment und Evaluation, Institut für Medizinische Lehre, Universität Bern, Bern, Switzerland
3. Long S, Rodriguez C, St-Onge C, Tellier PP, Torabi N, Young M. Factors affecting perceived credibility of assessment in medical education: A scoping review. Adv Health Sci Educ Theory Pract 2022; 27:229-262. PMID: 34570298; DOI: 10.1007/s10459-021-10071-w.
Abstract
Assessment is more educationally effective when learners engage with assessment processes and perceive the feedback received as credible. With the goal of optimizing the educational value of assessment in medical education, we mapped the primary literature to identify factors that may affect a learner's perceptions of the credibility of assessment and assessment-generated feedback (i.e., scores or narrative comments). For this scoping review, search strategies were developed and executed in five databases. Eligible articles were primary research studies that had medical learners (i.e., medical students to post-graduate fellows) as the focal population, discussed assessment of individual learners, and reported on perceived credibility in the context of assessment or assessment-generated feedback. We identified 4705 articles published between 2000 and November 16, 2020. Abstracts were screened by two reviewers; disagreements were adjudicated by a third reviewer. Full-text review resulted in 80 articles included in this synthesis. We identified three sets of intertwined factors that affect learners' perceived credibility of assessment and assessment-generated feedback: (i) elements of an assessment process, (ii) learners' level of training, and (iii) context of medical education. Medical learners make judgments regarding the credibility of assessments and assessment-generated feedback, and these judgments are influenced by a variety of individual, process, and contextual factors. Judgments of credibility appear to influence what information will or will not be used to improve later performance. For assessment to be educationally valuable, the design and use of assessment-generated feedback should consider how learners interpret, use, or discount that feedback.
Affiliation(s)
- Stephanie Long
- Department of Family Medicine, McGill University, Montreal, QC, Canada
- Charo Rodriguez
- Department of Family Medicine, McGill University, Montreal, QC, Canada
- Christina St-Onge
- Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, QC, Canada
- Nazi Torabi
- Science Collections, University of Toronto Libraries, Toronto, ON, Canada
- Meredith Young
- Institute of Health Sciences Education, McGill University, 1110 Pine Ave West, Montreal, QC, H3A 1A3, Canada.
- Department of Medicine, McGill University, Montreal, QC, Canada.
4. Anderson HL, Kurtz J, West DC. Implementation and Use of Workplace-Based Assessment in Clinical Learning Environments: A Scoping Review. Acad Med 2021; 96:S164-S174. PMID: 34406132; DOI: 10.1097/acm.0000000000004366.
Abstract
PURPOSE Workplace-based assessment (WBA) serves a critical role in supporting competency-based medical education (CBME) by providing assessment data to inform competency decisions and support learning. Many WBA systems have been developed, but little is known about how to effectively implement WBA. Filling this gap is important for creating suitable and beneficial assessment processes that support large-scale use of CBME. As a step toward filling this gap, the authors describe what is known about WBA implementation and use to identify knowledge gaps and future directions. METHOD The authors used Arksey and O'Malley's 6-stage scoping review framework to conduct the review, including: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with relevant stakeholders. RESULTS In 2019-2020, the authors searched and screened 726 papers for eligibility using defined inclusion and exclusion criteria. One hundred sixty-three met inclusion criteria. The authors identified 5 themes in their analysis: (1) Many WBA tools and programs have been implemented, and barriers are common across fields and specialties; (2) Theoretical perspectives emphasize the need for data-driven implementation strategies; (3) User perceptions of WBA vary and are often dependent on implementation factors; (4) Technology solutions could provide useful tools to support WBA; and (5) Many areas of future research and innovation remain. CONCLUSIONS Knowledge of WBA as an implemented practice to support CBME remains constrained. To remove these constraints, future research should aim to generate generalizable knowledge on WBA implementation and use, address implementation factors, and investigate remaining knowledge gaps.
Affiliation(s)
- Hannah L Anderson
- H.L. Anderson is research associate, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-9435-1535
- Joshua Kurtz
- J. Kurtz is a first-year resident, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Daniel C West
- D.C. West is professor of pediatrics, The Perelman School of Medicine at the University of Pennsylvania, and associate chair for education and senior director of medical education, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-0909-4213
5. Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident Perceptions of Assessment and Feedback in Competency-Based Medical Education: A Focus Group Study of One Internal Medicine Residency Program. Acad Med 2020; 95:1712-1717. PMID: 32195692; DOI: 10.1097/acm.0000000000003315.
Abstract
PURPOSE As key participants in the assessment dyad, residents must be engaged with the process. However, residents' experiences with competency-based medical education (CBME), and specifically with entrustable professional activity (EPA)-based assessments, have not been well studied. The authors explored junior residents' perceptions regarding the implementation of EPA assessment and feedback initiatives in an internal medicine program. METHOD From May to November 2018, 5 focus groups were conducted with 28 first-year internal medicine residents from the University of Toronto, exploring their experiences with facilitators of and barriers to EPA-based assessments in the first years of the CBME initiative. Residents were exposed to EPA-based feedback tools from early in residency. Themes were identified using constructivist grounded theory to develop a framework for understanding residents' perceptions of EPA assessment and feedback initiatives. RESULTS Residents' discussions reflected a growth mindset orientation, as they valued the idea of meaningful feedback through multiple low-stakes assessments. In practice, however, feedback seeking was onerous. While the quantity of feedback had increased, the quality had not; some residents felt it had worsened because the process had been reduced to a form-filling exercise. The assessments were felt to have increased daily workload, disrupted workflow, and blurred the lines between formative and summative assessment. CONCLUSIONS Residents embraced the driving principles behind CBME, but their experience suggested that changes are needed for CBME in the study site program to meet its goals. Efforts may be needed to reconcile the tension between assessment and feedback and to effectively embed meaningful feedback into CBME learning environments.
Affiliation(s)
- Leora Branfield Day
- L. Branfield Day is a fourth-year chief medical resident, internal medicine training program, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Amy Miles
- A. Miles is a fourth-year resident, geriatric medicine subspecialty training program, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg
- S. Ginsburg is staff physician, Division of Respirology, Mount Sinai Hospital and University Health Network, professor, Department of Medicine, Faculty of Medicine, University of Toronto, and scientist, Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada
- Lindsay Melvin
- L. Melvin is assistant professor, Department of Medicine, Faculty of Medicine, University of Toronto, and staff physician, Division of General Internal Medicine, University Health Network, Toronto, Ontario, Canada
6.
Abstract
OBJECTIVES The formative aspect of the mini-clinical evaluation exercise (mini-CEX) in postgraduate medical workplace-based assessment is intended to afford opportunities for active learning. Yet, there is little understanding of the perceived relationship between the mini-CEX and how trainees self-regulate their learning. Our objective was to explore trainees' perceptions of their mini-CEX experiences from a learning perspective, using Zimmerman's self-regulated learning theoretical framework as an interpretive lens. DESIGN Qualitative, using semi-structured interviews conducted in 2017. The interviews were analysed thematically. SETTING Geriatric medicine training. PARTICIPANTS Purposive sampling was employed to recruit geriatric medicine trainees in Melbourne, Australia. Twelve advanced trainees participated in the interviews. RESULTS Four themes were identified, with a cyclical inter-relationship among three of them: goal setting, task translation and perceived outcome. These themes reflect the phases of the self-regulated learning framework. Each phase was influenced by the fourth theme, supervisor co-regulation. Goal setting had motivational properties that had a significant impact on the later phases of the cycle. A 'tick box' goal aligned with an opportunistic approach and poorer perceived educational outcomes. Participants reported that external feedback following assessment was critical for their self-evaluation, affective responses and perceived outcomes. CONCLUSIONS Trainees perceived the performance of a mini-CEX as a complex, inter-related cyclical process, influenced at all stages by the supervisor. Based on our trainee perspectives of the mini-CEX, we conclude that supervisor engagement is essential to support trainees to individually regulate their learning in the clinical environment.
Affiliation(s)
- Eva Kipen
- Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
- Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Australia
- Alfred Hospital, Melbourne, Victoria, Australia
- Eleanor Flynn
- Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
- Robyn Woodward-Kron
- Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
7. Woolf K, Page M, Viney R. Assessing professional competence: a critical review of the Annual Review of Competence Progression. J R Soc Med 2019; 112:236-244. PMID: 31124405; DOI: 10.1177/0141076819848113.
Abstract
The Annual Review of Competence Progression is used to determine whether trainee doctors in the United Kingdom are safe and competent to progress to the next training stage. In this article we provide evidence to inform recommendations to enhance the validity of the summative and formative elements of the Annual Review of Competence Progression. The work was commissioned as part of a Health Education England review. We systematically searched the peer-reviewed and grey literature, synthesising findings with information from national, local and specialty-specific Annual Review of Competence Progression guidance, and critically evaluating the findings in the context of the literature on assessing competence in medical education. National guidance lacked detail, resulting in variability across locations and specialties and threatening validity and reliability. Trainees and trainers were concerned that the Annual Review of Competence Progression only reliably identifies the most poorly performing trainees. Feedback is not routinely provided, which can leave those with performance difficulties unsupported and high performers demotivated. Variability in the provision and quality of feedback can negatively affect learning. The Annual Review of Competence Progression functions as a high-stakes assessment, likely to have a significant impact on patient care. It should be subject to the same rigorous evaluation as other high-stakes assessments; there should be consistency in procedures across locations, specialties and grades; and all trainees should receive high-quality feedback.
Affiliation(s)
- Katherine Woolf
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
- Michael Page
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
- Rowena Viney
- Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
8.
Abstract
The aim of this study was to evaluate the quality of feedback provided to specialty trainees (ST3 or higher) in medical specialties during their workplace-based assessments (WBAs). The feedback given in WBAs was examined in detail for a group of 50 trainees at ST3 or higher, randomly selected from those taking part in a pilot study of changes to the WBA system conducted by the Joint Royal Colleges of Physicians Training Board. They were based in Health Education Northeast (Northern Deanery) and Health Education East of England (Eastern Deanery). Thematic analysis was used to identify commonly occurring themes. Feedback was mainly positive, but there were differences in quality between specialties. Problems with the feedback included insufficient detail, such that it was not possible to map the progression of the trainee; a lack of action plans; and feedback that was not contemporaneous (not given at the time of assessment). Recommendations included that feedback should be more specific, that feedback forms should give supervisors more options for comparing the trainee's performance with what is expected, and that action plans should be made.
Affiliation(s)
- Bill Burr
- Joint Royal Colleges of Physicians Training Board, London, UK
- Mairead Boohan
- Centre for Medical Education, Queen's University Belfast, Belfast, UK
9. Castanelli DJ, Jowsey T, Chen Y, Weller JM. Perceptions of purpose, value, and process of the mini-Clinical Evaluation Exercise in anesthesia training. Can J Anaesth 2016; 63:1345-1356. DOI: 10.1007/s12630-016-0740-9.
10. Batty L, McKinnon K, Skidmore A, McKinnon M. Supervised learning events: direct observation of procedural skills pilot. Occup Med (Lond) 2016; 66:656-661. DOI: 10.1093/occmed/kqw090.
11. Patel M, Agius S, Wilkinson J, Patel L, Baker P. Value of supervised learning events in predicting doctors in difficulty. Med Educ 2016; 50:746-756. PMID: 27295479; DOI: 10.1111/medu.12996.
Abstract
CONTEXT In the UK, supervised learning events (SLEs) replaced traditional workplace-based assessments for foundation-year trainees in 2012. A key element of SLEs was to incorporate trainee reflection and assessor feedback in order to drive learning and identify training issues early. Few studies, however, have investigated the value of SLEs in predicting doctors in difficulty. This study aimed to identify principles that would inform understanding of how and why SLEs do or do not work in identifying doctors in difficulty (DiD). METHODS A retrospective case-control study of North West Foundation School trainees' electronic portfolios was conducted. Cases comprised all known DiD. Controls were randomly selected from the same cohort. Free-text supervisor comments from each SLE were assessed against the four domains defined in the General Medical Council's Good Medical Practice guidance, and each was scored blindly for level of concern using a three-point ordinal scale. Cumulative scores for each SLE were then analysed quantitatively for their value in predicting actual DiD status. A qualitative thematic analysis was also conducted. RESULTS The prevalence of DiD in this sample was 6.5%. Receiver operating characteristic curve analysis showed that Team Assessment of Behaviour (TAB) was the only SLE strongly predictive of actual DiD status. The Educational Supervisor Report (ESR) was also strongly predictive of DiD status. Fisher's exact test showed significant associations of TAB and ESR with both predicted and actual DiD status, and also with the health and performance subtypes. None of the other SLEs showed significant associations. Qualitative data analysis revealed inadequate completion and a lack of constructive, particularly negative, feedback, indicating that SLEs were not used to their full potential. CONCLUSIONS TAB and the ESR are strongly predictive of DiD. However, SLEs are not being used to their full potential, and the quality of completion of reports on SLEs and of feedback needs to be improved in order to better identify and manage DiD.
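The quantitative step described in this abstract (summing ordinal concern scores per SLE, then testing discrimination with receiver operating characteristic analysis and association with Fisher's exact test) can be sketched in a few lines. The snippet below is illustrative only: it is not the authors' code, the scores and labels are hypothetical stand-ins for the study's portfolio data, and scikit-learn and SciPy are assumed to be available.

```python
# Illustrative sketch only -- hypothetical data, not the study's portfolio records.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical cumulative TAB concern scores per trainee (summed across the four
# Good Medical Practice domains) and binary doctors-in-difficulty (DiD) labels.
tab_scores = np.array([0, 1, 0, 2, 5, 0, 1, 6, 0, 3])
did_status = np.array([0, 0, 0, 0, 1, 0, 0, 1, 1, 0])  # 1 = doctor in difficulty

# ROC analysis: how well do cumulative concern scores discriminate DiD trainees?
auc = roc_auc_score(did_status, tab_scores)
fpr, tpr, thresholds = roc_curve(did_status, tab_scores)
print(f"ROC AUC for TAB scores vs actual DiD status: {auc:.2f}")

# Fisher's exact test on a 2x2 table: any concern flagged (score > 0) vs DiD status.
flagged = tab_scores > 0
table = [
    [int(np.sum(flagged & (did_status == 1))), int(np.sum(flagged & (did_status == 0)))],
    [int(np.sum(~flagged & (did_status == 1))), int(np.sum(~flagged & (did_status == 0)))],
]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```

In practice this kind of analysis would be repeated for each SLE type and for predicted as well as actual DiD status, as the abstract describes for TAB and the ESR.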
Affiliation(s)
- Mumtaz Patel
- Department of Renal Medicine, Manchester Royal Infirmary, Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Steven Agius
- Health Education England (North West Office), Manchester, UK
- Paul Baker
- Health Education England (North West Office), Manchester, UK
12. Massie J, Ali JM. Workplace-based assessment: a review of user perceptions and strategies to address the identified shortcomings. Adv Health Sci Educ Theory Pract 2016; 21:455-473. PMID: 26003590; DOI: 10.1007/s10459-015-9614-0.
Abstract
Workplace-based assessments (WBAs) are now commonplace in postgraduate medical training. User acceptability and engagement are essential to the success of any medical education innovation. To this end, insight into trainee and trainer perceptions of WBAs will help identify the major problems and permit strategies to be introduced to improve WBA implementation. A review of the literature was performed to identify studies examining trainee and trainer perceptions of WBAs. Studies were excluded if they were not in English or sampled a non-medical/dental population. The identified literature was synthesised for the purpose of this critical narrative review. It is clear that there is widespread negativity towards WBAs in the workplace, and this has reduced the effectiveness of WBA tools as learning aids. The negativity exists among trainees but also, to an extent, among their trainers. Insight gained from the literature reveals three dominant problems with WBA implementation: poor understanding of the purpose of WBAs; insufficient time available for undertaking the assessments; and inadequate training of trainers. Approaches to addressing these three problems are discussed; it is likely that a variety of solutions will be required. The prevalence of negativity towards WBAs is substantial among both trainees and trainers, eroding the effectiveness of the learning that should follow from them. The educational community must now listen to the concerns being raised by users and consider the range of strategies being proposed to improve the experiences of trainees and their trainers.
Affiliation(s)
- Jonathan Massie
- School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Jason M Ali
- Department of Surgery, University of Cambridge, BOX 202, Addenbrookes Hospital, Cambridge, CB2 0QQ, UK.