1. Favier R, Proot J, Matiasovic M, Roos A, Knaake F, van der Lee A, den Toom M, Paes G, van Oostrom H, Verstappen F, Beukers M, van den Herik T, Bergknut N. Towards a flexible and personalised development of veterinarians and veterinary nurses working in a companion animal referral care setting. Vet Med Sci 2024; 10:e1518. PMID: 38952266; PMCID: PMC11217593; DOI: 10.1002/vms3.1518.
Abstract
In the Netherlands, the demand for veterinarians and veterinary nurses (VNs) working within referral care is growing rapidly and currently exceeds the number of available board-certified specialists. At the same time, a transparent structure for guiding training and development, and for assessing the quality of non-specialist veterinarians and VNs working in a referral setting, is lacking. In response, we developed learning pathways, guided by an entrustable professional activity (EPA) framework and programmatic assessment, to support the personalised development and competence of veterinarians and VNs working in referral settings. Between 4 and 35 EPAs were developed per discipline (n = 11). To date, 20 trainees across five disciplines have been entrusted. Trainees from these learning pathways have gone on to acquire new EPAs in addition to their already entrusted set, or have progressed to specialist training during (n = 3) or after successfully completing (n = 1) the learning pathway. Because of their outcome-based approach, the learning pathways support flexible ways of development.
Affiliation(s)
- Joachim Proot
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Arno Roos
- Evidensia Dierenziekenhuis Nieuwegein, Nieuwegein, The Netherlands
- Frans Knaake
- Evidensia Dierenziekenhuis Den Haag, Den Haag, The Netherlands
- Geert Paes
- IVC Evidensia the Netherlands, Vleuten, The Netherlands
- Hugo van Oostrom
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Evidensia Dierenziekenhuis Arnhem, Arnhem, The Netherlands
- Martijn Beukers
- Evidensia Dierenziekenhuis Barendrecht, Barendrecht, The Netherlands
- Evidensia Dierenziekenhuis Hart van Brabant, Waalwijk, The Netherlands
- Niklas Bergknut
- Evidensia Dierenziekenhuis Hart van Brabant, Waalwijk, The Netherlands
2. Andreou V, Peters S, Eggermont J, Schoenmakers B. A needs assessment for enhancing workplace-based assessment: a grounded theory study. BMC Med Educ 2024; 24:659. PMID: 38872142; DOI: 10.1186/s12909-024-05636-3.
Abstract
OBJECTIVES Workplace-based assessment (WBA) has been vigorously criticized by medical educators for not fulfilling its educational purpose. A comprehensive exploration of stakeholders' needs regarding WBA is essential to optimize its implementation in clinical practice. METHOD Three homogeneous focus groups were conducted with three groups of stakeholders: General Practitioner (GP) trainees, GP trainers, and GP tutors. Because of COVID-19 measures, we opted for an online asynchronous format to enable participation. A constructivist grounded theory approach was used to identify stakeholders' needs for using WBA. RESULTS Three core needs for WBA were identified in the analysis. Within GP training, stakeholders found WBA essential primarily for establishing learning goals, secondarily for assessment purposes, and lastly for providing or receiving feedback. CONCLUSION All stakeholders perceive WBA as valuable when it fosters learning. The identified needs were notably shaped by four facilitating factors: agency, trust, availability, and mutual understanding. Embracing these insights can illuminate the landscape of workplace learning culture for clinical educators and guide a successful implementation of WBA.
Affiliation(s)
- Vasiliki Andreou
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium.
- Sanne Peters
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
- School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, The University of Melbourne, Melbourne, Australia
- Jan Eggermont
- Department of Cellular and Molecular Medicine, KU Leuven, Leuven, Belgium
- Birgitte Schoenmakers
- Academic Centre for General Practice, Department of Public Health and Primary Care, KU Leuven, Leuven, Belgium
3. Nguyen-Tri I, Tremblay-Laroche D, Lavigne F, Tremblay ML, Lafleur A. Feedback in an Entrustment-Based Objective Structured Clinical Examination: Analysis of Content and Scoring Methods. J Grad Med Educ 2024; 16:286-295. PMID: 38882423; PMCID: PMC11173042; DOI: 10.4300/jgme-d-23-00569.1.
Abstract
Background: The integration of entrustable professional activities (EPAs) within objective structured clinical examinations (OSCEs) has yielded a valuable avenue for delivering timely feedback to residents. However, concerns about feedback quality persist. Objective: This study aimed to assess the quality and content alignment of verbal feedback provided by examiners during an entrustment-based OSCE. Methods: We conducted a progress-test OSCE for internal medicine residents in 2022, assessing 7 EPAs. The immediate 2-minute feedback provided by examiners was recorded and analyzed using the Quality of Assessment of Learning (QuAL) score. We also analyzed the degree of alignment with EPA learning objectives: competency milestones and task-specific abilities. In a randomized crossover experiment, we compared the impact of 2 scoring methods used to assess residents' clinical performance (3-point entrustability scales vs task-specific checklists) on feedback quality and alignment. Results: Twenty-one examiners provided feedback to 67 residents. The feedback demonstrated high quality (mean QuAL score 4.3 of 5) and significant alignment with the learning objectives of the EPAs. On average, examiners addressed 2.5 milestones (61%) and 1.2 task-specific abilities (46%) in their feedback. The scoring method used had no significant impact on QuAL scores (95% CI -0.3, 0.1; P=.28), alignment with competency milestones (95% CI -0.4, 0.1; P=.13), or alignment with task-specific abilities (95% CI -0.3, 0.1; P=.29). Conclusions: In our entrustment-based OSCE, examiners consistently offered valuable feedback aligned with intended learning outcomes. Notably, we explored high-quality feedback and alignment as separate dimensions, finding no significant impact from either scoring method on either aspect.
Affiliation(s)
- Isabelle Nguyen-Tri
- Isabelle Nguyen-Tri, MD, DESS(Ed), is Associate Professor, Department of Medicine, Faculty of Medicine, Laval University, Quebec City, Quebec, Canada
- Dave Tremblay-Laroche
- Dave Tremblay-Laroche, MD, MScCH-HPTE, is Associate Professor, Department of Medicine, Faculty of Medicine, Laval University, Quebec City, Quebec, Canada
- Félix Lavigne
- Félix Lavigne, MD, is Internal Medicine Resident, Department of Medicine, Faculty of Medicine, Laval University, Quebec City, Quebec, Canada
- Marie-Laurence Tremblay
- Marie-Laurence Tremblay, PhD, MSc, MHPE, is Assistant Professor, Faculty of Pharmacy, Laval University, and Chairholder, Familiprix Educational Leadership Chair in Community Pharmacy, Quebec City, Quebec, Canada
- Alexandre Lafleur
- Alexandre Lafleur, MD, MHPE, is Associate Professor, Department of Medicine, Faculty of Medicine, Laval University, Quebec City, Quebec, Canada
4. Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. Acad Med 2023; 98:237-247. PMID: 35857396; DOI: 10.1097/acm.0000000000004819.
Abstract
PURPOSE Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. METHOD This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed impact of participant demographics and baseline performance. RESULTS Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. CONCLUSIONS Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- C Jessica Dine
- C.J. Dine is associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-5894-0861
- Lisa N Conforti
- L.N. Conforti is research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7317-6221
- Eric S Holmboe
- E.S. Holmboe is chief, research, milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
5. Pyke C, Anthony A, Archer J. Surgical Education: the RACS Model. Indian J Surg 2022. DOI: 10.1007/s12262-021-03156-8.
6. Dongre AR, Norcini J. Strengths, Weaknesses, and Suggestions for Improvement in Postgraduate Assessment in Community Medicine in India: A Delphi Study. Indian J Community Med 2021; 46:464-468. PMID: 34759489; PMCID: PMC8575216; DOI: 10.4103/ijcm.ijcm_776_20.
Abstract
Objectives: To understand the strengths and weaknesses of the current postgraduate assessment system in community medicine in India, to identify recommendations for change, and to build consensus around them. Materials and Methods: A conventional Delphi technique was used for consensus building among experts. We completed three Delphi rounds over a period of 4 weeks; 16 experts participated in the study. Content analysis was done for open-ended responses, and consensus analysis was done for the Likert-type scale questionnaire. In round three, we obtained the experts' top five preferences for change in assessment. Results: The experts agreed on an assessment system based on ongoing formative assessment and one end-of-year summative assessment. They also agreed on the various occasions for carrying out the formative assessment, and clearly agreed on measures such as blueprinting, improving test formats, and adequate briefing of test-taking students. Conclusion and Recommendations: Most of the consensus items were in alignment with modern assessment theory. The regulating body and policymakers should revise the current postgraduate assessment system in community medicine to enhance its validity and reliability.
Affiliation(s)
- Amol R Dongre
- Department of Extension Programmes (SPARSH), Pramukhswami Medical College (PSMC), Karamsad, India
- John Norcini
- SUNY Upstate Medical University, Syracuse, New York, USA
7. Read EK, Brown A, Maxey C, Hecker KG. Comparing Entrustment and Competence: An Exploratory Look at Performance-Relevant Information in the Final Year of a Veterinary Program. J Vet Med Educ 2021; 48:562-572. PMID: 33661087; DOI: 10.3138/jvme-2019-0128.
Abstract
Workplace-based assessments and entrustment scales have two primary goals: providing formative information to assist students with future learning, and determining if and when learners are ready for safe, independent practice. To date, the relationship between these pieces of performance-relevant information has not been evaluated in veterinary medicine. This study collected quantitative and qualitative data from a single cohort of final-year students (n = 27) across in-training evaluation reports (ITERs) and entrustment scales in a distributed veterinary hospital environment. We compared progression in scoring and performance within and across students, and within and across methods of assessment, over time. Narrative comments were quantified using the Completed Clinical Evaluation Report Rating (CCERR) instrument to assess the quality of written comments. Preliminary evidence suggests that the two methods may capture different aspects of performance. Specifically, entrustment scale scores significantly increased over time, while ITER scores did not. Typically, comments accompanying entrustment scale scores were more learner-specific, longer, and used more of a coaching voice. Longitudinal evaluation of learner performance is important for learning and for demonstrating competence; however, the method of data collection could influence how feedback is structured and how performance is ultimately judged.
8. Dudek N, Duffy MC, Wood TJ, Gofton W. The Ottawa Resident Observation Form for Nurses (O-RON): Assessment of Resident Performance through the Eyes of the Nurses. J Surg Educ 2021; 78:1666-1675. PMID: 34092533; DOI: 10.1016/j.jsurg.2021.03.014.
Abstract
OBJECTIVE Most workplace-based assessment relies on physician supervisors making observations of residents. Many areas of performance are not directly observed by physicians but rather by other healthcare professionals, most often nurses. Assessment of resident performance by nurses is captured with multi-source feedback tools. However, these tools combine the assessments of nurses with those of other healthcare professionals, and so the nursing perspective can be lost. A novel tool was developed and implemented to assess resident performance on a hospital ward from the perspective of the nurses. DESIGN Through a nominal group technique, nurses identified dimensions of performance reflective of high-quality physician performance on a hospital ward. These were included as items in the Ottawa Resident Observation Form for Nurses (O-RON). The O-RON was voluntarily completed during an 11-month period. Validity evidence related to quantitative and qualitative data was collected. SETTING The Orthopedic Surgery Residency Program at the University of Ottawa. PARTICIPANTS 49 nurses on the Orthopedic Surgery wards at The Ottawa Hospital (tertiary care). RESULTS The O-RON has 15 items rated on a 3-point frequency scale, one global yes/no judgment question regarding whether they would want the resident on their team, and a space for comments. 1079 O-RONs were completed on 38 residents. There was an association between the response to the global judgment question and the frequency of concerns (p < 0.01). With 8 forms per resident, the reliability of the O-RON was 0.80. Open-ended responses referred to aspects of interpersonal skills, responsiveness, dependability, communication skills, and knowledge. CONCLUSIONS The O-RON demonstrates promise as a workplace-based assessment tool to provide residents and training programs with feedback on aspects of their performance on a hospital ward through the eyes of the nurses. It appears to be easy to use, has solid validity evidence, and can provide reliable data with a small number of completed forms.
Affiliation(s)
- Nancy Dudek
- Department of Medicine (Division of Physical Medicine & Rehabilitation) and The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada.
- Melissa C Duffy
- Department of Educational Studies, College of Education (Wardlaw College), University of South Carolina, Columbia, South Carolina
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton
- Department of Surgery (Division of Orthopedic Surgery) and The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
9. Palis AG, Barrio-Barrio J, Mayorga EP, Mili-Boussen I, Noche CD, Swaminathan M, Golnik KC. The International Council of Ophthalmology Ophthalmic Clinical Evaluation Exercise. Indian J Ophthalmol 2021; 69:43-47. PMID: 33323570; PMCID: PMC7926108; DOI: 10.4103/ijo.ijo_154_20.
Abstract
Purpose: Fifteen years after the publication of the Ophthalmic Clinical Evaluation Exercise (OCEX), it was deemed necessary to review and revise it, and to validate it for an international audience of ophthalmologists. This study aimed to revise the OCEX and validate it for international use. Methods: The OCEX rubric was changed to a modified Dreyfus scale, and a behavioral descriptor was created for each category. An international panel of ophthalmic educators reviewed the international applicability and appropriateness of the tool. Results: The revised tool assesses and gives feedback on four aspects of clinical competence during the ophthalmic consultation: interview skills, examination, interpersonal and communication skills, and case presentation. The original scoring scheme was improved to a new behavioral one, and relevant comments and suggestions from international reviewers were incorporated. The new tool has face and content validity for an international audience. Conclusion: The OCEX is the only workplace assessment and feedback tool designed specifically for ophthalmology residents and the ophthalmic consultation. This improved and simplified version will facilitate its use and implementation in diverse programs around the world.
Affiliation(s)
- Ana G Palis
- Department of Ophthalmology, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Jesús Barrio-Barrio
- Department of Ophthalmology, Clínica Universidad de Navarra, Navarra Institute for Health Research (IdiSNA), Pamplona, Spain
- Eduardo P Mayorga
- Department of Ophthalmology, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Ilhem Mili-Boussen
- Department of Ophthalmology, Charles Nicolle University Hospital, University of Tunis El Manar, Tunis, Tunisia
- Christelle D Noche
- Higher Institute of Health Sciences, Université des Montagnes, Bangangte, Cameroon
- Karl C Golnik
- Cincinnati Eye Institute and the University of Cincinnati, United States of America
10. [The Maastricht Education System in Aachen - a Few Miles Away or Worlds Apart?]. Zentralbl Chir 2020; 146:30-36. PMID: 33152791; DOI: 10.1055/a-1265-7384.
Abstract
INTRODUCTION Education for residents in surgery varies not only throughout the world but also throughout Europe. Our clinic is closely connected to Maastricht University Medical Centre in the Netherlands (European Surgical Centre Aachen Maastricht). At the same time, there are clear differences between the resident programs. In the Netherlands, structured feedback according to the OSATS concept (Objective Structured Assessment of Technical Skills) is mandatory after every operation performed by residents. The aim of the present study was to transfer the OSATS concept from Maastricht to Aachen and to evaluate the feasibility and benefits of this concept for surgical education. MATERIAL AND METHODS The OSATS concept was implemented for 3 months in our clinic within a prospective clinical trial. Seven of the 10 residents working in our clinic at that time participated in the study (70%). Half of these were assigned to receive structured written feedback after every autonomously performed operation. Additionally, all participants performed structured written self-evaluation according to the OSATS concept. The primary endpoint was the feasibility of the OSATS concept in our clinic; secondary endpoints were the benefits for the residents and the differences between external evaluation and self-evaluation. RESULTS The OSATS concept was easily implemented in our clinic and met with wide acceptance. Evaluation was performed after a mean of 70% of operations. External evaluation was regarded as more beneficial for residents than self-evaluation. Structured written evaluation according to the OSATS concept was not time-consuming (< 3 minutes), and most residents (86%) supported permanent implementation of the OSATS concept in our clinic. CONCLUSION The OSATS concept is a suitable approach for providing structured feedback to residents in postgraduate training, and it can easily be implemented in resident education in Germany. Structured written feedback by senior physicians is perceived as beneficial by residents.
11. Halman S, Fu AYN, Pugh D. Entrustment within an objective structured clinical examination (OSCE) progress test: Bridging the gap towards competency-based medical education. Med Teach 2020; 42:1283-1288. PMID: 32805146; DOI: 10.1080/0142159x.2020.1803251.
Abstract
PURPOSE Progress testing aligns well with competency-based medical education (CBME) frameworks, which stress the importance of continuous improvement. Entrustment is a useful assessment concept in CBME models. The purpose of this study was to explore the use of an entrustability rating scale within the context of an objective structured clinical examination (OSCE) progress test. METHODS A 9-case OSCE progress test was administered to Internal Medicine residents (PGY 1-4). Residents were assessed using a checklist (CL), a global rating scale (GRS), a training level rating scale (TLRS), and an entrustability scale (ENT). Reliability was calculated using Cronbach's alpha. Differences in performance by training year were explored using ANOVA, and effect sizes were calculated using partial eta-squared. Examiners completed a post-examination survey. RESULTS Ninety-one residents and 42 examiners participated in the OSCE. Inter-station reliability was high for all instruments. There was an overall effect of training level for all instruments (p < 0.001), and effect sizes were large. Of the examiners, 88% completed the survey; most (62%) indicated feeling comfortable making entrustment decisions during the OSCE. CONCLUSIONS An entrustability scale can be used in an OSCE progress test to generate highly reliable ratings that discriminate between learners at different levels of training.
Affiliation(s)
- Samantha Halman
- Department of Medicine, The Ottawa Hospital, Ottawa, Ontario, Canada
- Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
- Angel Yi Nam Fu
- Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
- Debra Pugh
- Department of Medicine, The Ottawa Hospital, Ottawa, Ontario, Canada
- Faculty of Medicine, The University of Ottawa, Ottawa, Ontario, Canada
- Medical Council of Canada, Ottawa, Ontario, Canada
12. McEllistrem B, Barrett A, Hanley K. Performance in practice: exploring trainer and trainee experiences of user-designed formative assessment tools. Educ Prim Care 2020; 32:27-33. PMID: 33094687; DOI: 10.1080/14739879.2020.1815085.
Abstract
INTRODUCTION General practice training in Ireland currently uses various methods of formative assessment and feedback for trainees. In 2018 the Irish College of General Practitioners commissioned the creation of two new user-designed formative feedback tools that would allow trainee feedback to drive learning. These became known as the Performance in Practice (PiP) tools. AIMS To explore the experiences of general practice (GP) trainers and trainees after a 4-month pilot of the PiP tools. METHODS An explorative phenomenological approach was taken to understand the experiences of trainers and trainees. One-to-one interviews were conducted, and the transcripts were analysed for themes and sub-themes via template analysis. RESULTS User experiences focused on two main areas: educational value and acceptability. In relation to educational value, the PiP tools were seen as an improvement over established forms of formative feedback, as they were centred on the curriculum and therefore reflected the unique, multifaceted requirements of an independently practising GP. Acceptability centred on data governance and structures, as well as practical issues such as ease of software use. CONCLUSIONS Overall, the experience of using the PiP tools was positive for both trainers and trainees. Future plans to further explore implementation of the PiP tools have been significantly informed by this research.
Affiliation(s)
- B McEllistrem
- General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
- A Barrett
- General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
- K Hanley
- General Practice Training Unit, Irish College of General Practitioners, Dublin, Ireland
13. Fielding A, Mulquiney K, Canalese R, Tapley A, Holliday E, Ball J, Klein L, Magin P. A general practice workplace-based assessment instrument: Content and construct validity. Med Teach 2020; 42:204-212. PMID: 31597048; DOI: 10.1080/0142159x.2019.1670336.
Abstract
Introduction: Relatively few general practice (GP) workplace-based assessment instruments have been psychometrically evaluated. This study aims to establish the content validity and internal consistency of the General Practice Registrar Competency Assessment Grid (GPR-CAG). Methods: The GPR-CAG was constructed as a formative assessment instrument for Australian GP registrars (trainees). GPR-CAG items were determined by an iterative process of literature review, expert opinion, and pilot testing. Validation data were collected between 2014 and 2016, during routine clinical teaching visits within registrars' first two general practice training terms (GPT1 and GPT2), for registrars across New South Wales and the Australian Capital Territory. Factor analysis and expert consensus were used to refine items and establish the GPR-CAG's internal structure. GPT1 and GPT2 competencies were analysed separately. Results: Data from 555 registrars undertaking GPT1 and 537 registrars undertaking GPT2 were included in the analyses. A four-factor, 16-item solution was identified for GPT1 competencies (Cronbach's alpha range: 0.71-0.83) and a seven-factor, 27-item solution for GPT2 competencies (Cronbach's alpha: 0.63-0.84). The emergent factor structures were clinically characterisable and resonant with existing medical education competency frameworks. Discussion: This study establishes initial evidence for the content validity and internal consistency of the GPR-CAG, which appears to have utility as a formative WBA instrument for GP training.
Affiliation(s)
- Alison Fielding
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Katie Mulquiney
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Amanda Tapley
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Elizabeth Holliday
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Jean Ball
- Clinical Research Design IT and Statistical Support, Hunter Medical Research Institute, New Lambton, Australia
- Linda Klein
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Parker Magin
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
Collapse
|
14
|
van der Vleuten CPM, Schuwirth LWT. Assessment in the context of problem-based learning. Adv Health Sci Educ Theory Pract 2019;24:903-914. [PMID: 31578642] [PMCID: PMC6908559] [DOI: 10.1007/s10459-019-09909-1]
Abstract
Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for these instruments has been accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgment of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment program as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but its wide adoption, just like that of PBL, will take many years.
Affiliation(s)
- Cees P M van der Vleuten: School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Lambert W T Schuwirth: Prideaux Centre for Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, SA, 5042, Australia

15
Andreassen P, Malling B. How are formative assessment methods used in the clinical setting? A qualitative study. Int J Med Educ 2019;10:208-215. [PMID: 31759332] [PMCID: PMC7246116] [DOI: 10.5116/ijme.5db3.62e3]
Abstract
OBJECTIVES To explore how formative assessment methods are used and perceived by second-year junior doctors in different clinical settings. METHODS A focused ethnography study was carried out. Ten second-year junior doctors from different specialties were selected using purposive sampling. The junior doctors were observed during a day in their clinical workplace where formative assessment was in focus. They were subsequently phone interviewed using a semi-structured interview guide regarding their experiences and attitudes towards formative assessment. Field notes from observations and interview transcriptions were analyzed using an inductive content analysis approach, and the concept of "everyday resistance" was used as a theoretical lens. RESULTS Three themes were identified: First, there were several barriers to the use of formative assessment methods in the clinical context, including subtle tactics of everyday resistance such as avoidance, deprioritizing, and contesting formative assessment methods. Secondly, junior doctors made careful selections when arranging a formative assessment. Finally, junior doctors had ambiguous attitudes towards the use of mandatory formative assessment methods and mixed experiences with their educational impact. CONCLUSIONS This study emphasizes that the use of formative assessment methods in the clinical setting is not a neutral and context-independent exercise, but rather is affected by a myriad of factors such as collegial relations, educational traditions, emotional issues, and subtle forms of resistance. An important implication for the health care sector will be to address these issues for formative assessment methods to be properly implemented in the clinic.
Affiliation(s)
- Bente Malling: Centre for Health Sciences Education, Aarhus University, Denmark

16
van der Vleuten C, van den Eertwegh V, Giroldi E. Assessment of communication skills. Patient Educ Couns 2019;102:2110-2113. [PMID: 31351785] [DOI: 10.1016/j.pec.2019.07.007]
Abstract
OBJECTIVE This paper addresses how communication skills can best be assessed. Since assessment and learning are strongly connected, the way communication skills are best learned is also described. RESULTS Communication skills are best learned in a longitudinal fashion with ample practice in an authentic setting. Confrontation with one's own behavior initiates the learning process and should be supported by meaningful feedback based on direct observation. When done appropriately, a set of learned communication skills becomes integrated, skilled communication that can be used flexibly in purposeful, goal-oriented clinical communication. The assessment of communication skills should follow a modern approach to assessment in which the learning function of assessment is considered a priority. Individual assessments are feedback-oriented to promote further learning and development. The resulting rich information may be used to make progression decisions, usually by a group or committee. CONCLUSION This modern programmatic approach to assessment fits the learning of skilled communication well. PRACTICE IMPLICATIONS Implementation of a programmatic assessment approach to communication will entail a major educational innovation.
Affiliation(s)
- Cees van der Vleuten: Department of Educational Development and Research, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Valerie van den Eertwegh: Skillslab, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Esther Giroldi: Department of Educational Development and Research, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands; Department of Family Medicine, Care and Public Health Research Institute (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands

17
Dudek N. Faculty and Resident Perspectives on Using Entrustment Anchors for Workplace-Based Assessment. J Grad Med Educ 2019;11:287-294. [PMID: 31210859] [PMCID: PMC6570427] [DOI: 10.4300/jgme-d-18-01003.1]
Abstract
BACKGROUND Research suggests that workplace-based assessment (WBA) tools using entrustment anchors provide more reliable assessments than those using traditional anchors. There is a lack of evidence describing how and why entrustment anchors work. OBJECTIVE The purpose of this study is to better understand the experience of residents and faculty with respect to traditional and entrustment anchors. METHODS We used constructivist grounded theory to guide data collection and analysis (March-December 2017) and semistructured interviews to gather reflections on anchors. Phase 1 involved residents and faculty (n = 12) who had only used assessment tools with traditional anchors. Phase 2 involved participants who had used tools with entrustment anchors (n = 10). Data were analyzed iteratively. RESULTS Participants expressed that the pragmatic language of entrustment anchors made WBA (1) concrete and justifiable; (2) transparent, as they explicitly link clinical assessment and learning progress; and (3) aligned with training outcomes, enabling better feedback. Participants with no prior experience using entrustment anchors outlined contextual concerns regarding their use. Participants with experience described how they addressed these concerns. Participants expressed that entrustment anchors leave a gap in assessment information because they do not provide normative data. CONCLUSIONS Insights from this analysis contribute to a theoretical framework of benefits and challenges related to the adoption of entrustment anchors. This richer understanding of faculty and resident perspectives of entrustment anchors may assist WBA developers in creating more acceptable tools and inform the necessary faculty development initiatives that must accompany the use of these new WBA tools.
18
Lundsgaard KS, Tolsgaard MG, Mortensen OS, Mylopoulos M, Østergaard D. Embracing Multiple Stakeholder Perspectives in Defining Trainee Competence. Acad Med 2019;94:838-846. [PMID: 30730374] [DOI: 10.1097/acm.0000000000002642]
Abstract
PURPOSE To explore how multiple stakeholder groups contribute to the understanding of trainee competence. METHOD The authors conducted a constructivist qualitative study in 2015 using focus group discussions to explore the perceptions of different stakeholder groups (patients, nurses/nurse practitioners, supervisors/senior physicians, leaders/administrators, trainees) regarding trainee competence in the emergency department. The authors used a conventional content analysis, a comparative analysis of supervisors'/senior physicians' versus other stakeholders' perspectives, and a directed analysis informed by stakeholder theory to analyze the focus group transcripts. RESULTS Forty-six individuals participated in nine focus groups. Four categories of competence were identified: Core Clinical Activities, Patient Centeredness, Aligning Resources, and Code of Conduct. Stakeholders generally agreed in their overall expectations regarding trainee competence. Within individual categories, each stakeholder group identified new considerations, details, and conflicts, which were a replication, elaboration, or complication of a previously identified theme. All stakeholders stressed those aspects of trainee competence that were relevant to their work or values. Trainees were less aware of the patient perspective than the other stakeholder groups were. CONCLUSIONS Considering multiple stakeholder perspectives enriched the description and conceptualization of trainee competence. It can also inform the development of curricula and assessment tools and guide learning about inter- and intradisciplinary conflicts. Further research should explore how trainees' perceptions of value are influenced by their organizational context and, in particular, how trainees adapt their learning goals in response to the divergent demands of key stakeholders.
Affiliation(s)
- K.S. Lundsgaard: PhD student, University of Copenhagen, Department of Occupational and Social Medicine, Copenhagen University Hospital Holbæk, Holbæk, Denmark; ORCID: https://orcid.org/0000-0002-6517-8497
- M.G. Tolsgaard: associate professor, University of Copenhagen and Copenhagen Academy of Medical Education and Simulation, Capital Region, Denmark; ORCID: https://orcid.org/0000-0001-9197-5564
- O.S. Mortensen: professor, Department of Public Health, Section of Social Medicine, University of Copenhagen, and Department of Occupational and Social Medicine, Copenhagen University Hospital Holbæk, Holbæk, Denmark; ORCID: https://orcid.org/0000-0002-4655-8048
- M. Mylopoulos: associate professor, Department of Paediatrics, scientist, MD Program, and associate director, Wilson Centre, University of Toronto, Toronto, Ontario, Canada; ORCID: https://orcid.org/0000-0003-0012-5375
- D. Østergaard: director, Copenhagen Academy of Medical Education and Simulation, and professor, University of Copenhagen, Capital Region, Denmark; ORCID: https://orcid.org/0000-0001-8542-6999

19
Kamp R, Möltner A, Harendza S. "Princess and the pea" - an assessment tool for palpation skills in postgraduate education. BMC Med Educ 2019;19:177. [PMID: 31146715] [PMCID: PMC6543652] [DOI: 10.1186/s12909-019-1619-6]
Abstract
BACKGROUND In osteopathic medicine, palpation is considered to be the key skill to be acquired during training. Whether palpation skills are adequately acquired during undergraduate or postgraduate training is difficult to assess. The aim of our study was to test a palpation assessment tool developed for undergraduate medical education in a postgraduate medical education (PME) setting. METHODS We modified and standardized an assessment tool in which a coin has to be palpated under different layers of copy paper. For every layer depth we randomized the hiding positions with a random generator. The task was to palpate the coin or to determine that no coin was hidden in the stack. We recruited three groups of participants: 22 physicians with no training in osteopathic medicine, 25 participants in a PME course of osteopathic techniques before and after a palpation training program, and 31 physicians from an osteopathic expert group with at least 700 h of osteopathic skills training. The experts ran the test twice to check for test-retest reliability. Inferential statistical analyses were performed using generalized linear mixed models with the dichotomous variable "coin detected / not detected" as the dependent variable. RESULTS In the expert group, the test-retest reliability of the assessment tool as a whole (56 stations) was 0.67 (p < 0.001). For different paper layers, we found good retest reliabilities up to 300 sheets. The control group detected a coin significantly better at a depth of 150 sheets (p = 0.01) than the pre-training group did. The osteopathic training group showed significantly more correct coin localizations after the training at layer depths of 200 (p = 0.03) and 300 sheets (p = 0.05). This group also had significantly better palpation results than the expert group at the depth of 300 sheets (p = 0.001). When no coin was hidden, the expert group showed significantly better results than the post-training group (p = 0.01).
CONCLUSIONS Our tool can be used with reliable results to test palpation course achievements with 200 and 300 sheets of paper. Further refinements of this tool will be needed to use it in complex assessment designs for the evaluation of more sophisticated palpatory skills in postgraduate medical settings.
Affiliation(s)
- Rainer Kamp: Academy of Medical Education of the Medical Council Westphalia-Lippe (Ärztekammer Westfalen-Lippe and Kassenärztliche Vereinigung Westfalen-Lippe), Münster, Germany
- Andreas Möltner: Center of Excellence for Assessment in Medicine Baden-Württemberg, Ruprecht-Karls-University, Heidelberg, Germany
- Sigrid Harendza: III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany

20
Current status of urology surgical training in Europe: an ESRU–ESU–ESUT collaborative study. World J Urol 2019;38:239-246. [DOI: 10.1007/s00345-019-02763-1]
21
Harriman D, Singla R, Nguan C. The Resident Report Card: A Tool for Operative Feedback and Evaluation of Technical Skills. J Surg Res 2019;239:261-268. [PMID: 30884382] [DOI: 10.1016/j.jss.2019.02.006]
Abstract
BACKGROUND Competency-based medical education surgical curriculums will require frequent, recorded trainee performance evaluations. It is our hypothesis that written feedback after each operation can be used to chart surgical progress, can identify underperforming trainees, and will prove beneficial for resident learning. METHODS The resident report card (RRC) is an online, easy-to-use evaluation tool designed to facilitate the creation and distribution of resident technical assessments. RRC data were collected from urologic trainees and analyzed using ANOVA and post hoc testing to confirm our hypothesis. A standardized survey was sent to residents, gauging their views on the RRC. RESULTS Over a 5-y period, 958 RRCs with the resident listed as the primary operator were collected across 29 different procedures. Resident cohort and individual performance scores stratified by postgraduate year (PGY) were shown to significantly improve when all procedures (cohort, 6.5 ± 1.9 [PGY-1] to 9.1 ± 1.0 [PGY-5]; individual [resident M], 8.8 ± 1.8 [PGY-3] to 9.4 ± 0.7 [PGY-5], P < 0.01) and specific procedures (laparoscopic donor nephrectomy: cohort, 7.3 ± 1.3 [PGY-3] to 8.9 ± 1.0 [PGY-5]; individual [resident I], 7.2 ± 1.3 [PGY-3] to 9.5 ± 0.6 [PGY-5], P < 0.01) were analyzed. Individual residents were able to be compared to their own peer group and to the average scores across all evaluated residents. Surveyed residents were overwhelmingly positive about the RRC. CONCLUSIONS The RRC adds further evidence to the fact that standardized, formative, and timely assessment can capture trainee performance over time and against comparator cohorts in an acceptable format to residents and academic training programs.
Affiliation(s)
- David Harriman: Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Rohit Singla: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Christopher Nguan: Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada

22
de Jonge LPJWM, Timmerman AA, Govaerts MJB, Muris JWM, Muijtjens AMM, Kramer AWM, van der Vleuten CPM. Stakeholder perspectives on workplace-based performance assessment: towards a better understanding of assessor behaviour. Adv Health Sci Educ Theory Pract 2017;22:1213-1243. [PMID: 28155004] [PMCID: PMC5663793] [DOI: 10.1007/s10459-017-9760-7]
Abstract
Workplace-Based Assessment (WBA) plays a pivotal role in present-day competency-based medical curricula. Validity in WBA mainly depends on how stakeholders (e.g. clinical supervisors and learners) use the assessments, rather than on the intrinsic qualities of instruments and methods. Current research on assessment in clinical contexts seems to imply that variable behaviours of both assessors and learners during performance assessment may well reflect their respective beliefs and perspectives towards WBA. We therefore performed a Q methodological study to explore perspectives underlying stakeholders' behaviours in WBA in a postgraduate medical training program. Five different perspectives on performance assessment were extracted: Agency, Mutuality, Objectivity, Adaptivity and Accountability. These perspectives reflect both differences and similarities in stakeholder perceptions and preferences regarding the utility of WBA. In comparing and contrasting the various perspectives, we identified two key areas of disagreement, specifically 'the locus of regulation of learning' (i.e., self-regulated versus externally regulated learning) and 'the extent to which assessment should be standardised' (i.e., tailored versus standardised assessment). Differing perspectives may variously affect stakeholders' acceptance and use, and consequently the effectiveness, of assessment programmes. Continuous interaction between all stakeholders is essential to monitor, adapt and improve assessment practices and to stimulate the development of a shared mental model. Better understanding of underlying stakeholder perspectives could be an important step in bridging the gap between psychometric and socio-constructivist approaches in WBA.
Affiliation(s)
- Laury P J W M de Jonge: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Angelique A Timmerman: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Marjan J B Govaerts: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Jean W M Muris: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Arno M M Muijtjens: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Anneke W M Kramer: Department of Family Medicine, Leiden University, Leiden, The Netherlands
- Cees P M van der Vleuten: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands

23
Rekman J, Hamstra SJ, Dudek N, Wood T, Seabrook C, Gofton W. A New Instrument for Assessing Resident Competence in Surgical Clinic: The Ottawa Clinic Assessment Tool. J Surg Educ 2016;73:575-582. [PMID: 27052202] [DOI: 10.1016/j.jsurg.2016.02.003]
Abstract
BACKGROUND The shift toward competency-based medical education has created a demand for feasible workplace-based assessment tools. Perhaps more important than competence in assessing an individual patient is the ability to successfully manage a surgical clinic. Trainee performance in clinic is a critical component of learning to manage a surgical practice, yet no assessment tool currently exists to assess daily performance in outpatient clinics for surgery residents. The development of a competency-based assessment tool, the Ottawa Clinic Assessment Tool (OCAT), is described here to address this gap. STUDY DESIGN A consensus group of experts was gathered to generate dimensions of performance reflective of a competent "generalist" surgeon in clinic. A 6-month pilot study of the OCAT was conducted in orthopedics, general surgery, and obstetrics and gynecology, with quantitative and qualitative evidence of validity collected. In all, 2 subsequent feedback sessions and a survey for staff and residents evaluated the OCAT for clarity and utility. RESULTS The OCAT is a 9-item tool, with a global assessment item and 2 short-answer questions. Among the 2 divisions, 44 staff surgeons completed 132 OCAT assessments of 79 residents. Psychometric data were collected as evidence of validity. Analysis of feedback indicated that the entrustability rating scale was useful for surgeons and residents and that the items could be correlated with individual competencies. CONCLUSIONS Multiple sources of validity evidence collected in this study demonstrate that the OCAT can measure resident clinic competency in a valid and feasible manner.
Affiliation(s)
- Janelle Rekman: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Stanley J Hamstra: Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Nancy Dudek: Department of Medicine, The Ottawa Hospital Rehabilitation Center, The University of Ottawa, Ottawa, Ontario, Canada
- Timothy Wood: Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Christine Seabrook: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton: Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada

24
Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability Scales: Outlining Their Usefulness for Competency-Based Clinical Assessment. Acad Med 2016;91:186-190. [PMID: 26630609] [DOI: 10.1097/acm.0000000000001045]
Abstract
Meaningful residency education occurs at the bedside, along with opportunities for situated in-training assessment. A necessary component of workplace-based assessment (WBA) is the clinical supervisor, whose subjective judgments of residents' performance can yield rich and nuanced ratings but may also occasionally reflect bias. How to improve the validity of WBA instruments while simultaneously capturing meaningful subjective judgment is currently not clear. This Perspective outlines how "entrustability scales" may help bridge the gap between the assessment judgments of clinical supervisors and WBA instruments. Entrustment-based assessment evaluates trainees against what they will actually do when independent; thus "entrustability scales", defined as behaviorally anchored ordinal scales based on progression to competence, reflect a judgment that has clinical meaning for assessors. Rather than asking raters to assess trainees against abstract scales, entrustability scales provide raters with an assessment measure structured around the way evaluators already make day-to-day clinical entrustment decisions, which results in increased reliability. Entrustability scales help raters make assessments based on narrative descriptors that reflect real-world judgments, drawing attention to a trainee's readiness for independent practice rather than his/her deficiencies. These scales fit into milestone measurement both by allowing an individual resident to strive for independence in entrustable professional activities across the entire training period and by allowing residency directors to identify residents experiencing difficulty. Some WBA tools that have begun to use variations of entrustability scales show potential for allowing raters to produce valid judgments. This type of anchor scale should be brought into wider circulation.
Affiliation(s)
- J. Rekman: general surgery resident and master's in health professions education student, University of Ottawa, Ottawa, Ontario, Canada
- W. Gofton: orthopedic surgeon, University of Ottawa, Ottawa, Ontario, Canada
- N. Dudek: associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- T. Gofton: research associate (wissenschaftlicher Mitarbeiter), Department of Philosophy, Eberhard Karls Universität, Tübingen, Germany
- S.J. Hamstra: vice president, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois

25
Fokkema JPI. Innovating the practice of medical speciality training. Perspect Med Educ 2016;5:48-50. [PMID: 26754312] [PMCID: PMC4754224] [DOI: 10.1007/s40037-015-0245-1]
Abstract
Educational innovations are being introduced into medical speciality training. But how do the people who participate in medical speciality training (residents, consultants, programme directors) deal with these innovations? And what effects do educational innovations have according to these people? By addressing these questions, this thesis contributes to the knowledge about the challenging process of innovating medical speciality training.
26
Christen HJ, Kordonouri O, Lange K, Berendonk C. Pilot study on interprofessional feedback in paediatric postgraduate training [Pilotstudie zum interprofessionellen Feedback in der pädiatrischen Weiterbildung; in German]. Monatsschr Kinderheilkd 2015. [DOI: 10.1007/s00112-015-3324-9]
27
Fokkema JPI, Scheele F, Westerman M, van Exel J, Scherpbier AJJA, van der Vleuten CPM, Dörr PJ, Teunissen PW. Perceived effects of innovations in postgraduate medical education: a Q study focusing on workplace-based assessment. Acad Med 2014;89:1259-1266. [PMID: 24988425] [DOI: 10.1097/acm.0000000000000394]
Abstract
PURPOSE Anticipating users' perceptions of the effects an innovation will have in daily practice prior to implementation may lead to a more optimal innovation process. In this study, the authors aimed to identify the kinds of perceptions that exist concerning the effects of workplace-based assessment (WBA), an innovation that is widely used in medical education, among its users. METHOD In 2012, the authors used Q methodology to ascertain the principal user perceptions of effects of WBA in practice. Participating obstetrics-gynecology residents and attending physicians (including residency program directors) at six hospitals in the Netherlands performed individual Q sorts to rank 36 statements concerning WBA and WBA tools according to their level of agreement. The authors conducted by-person factor analysis to uncover patterns in the rankings of the statements. They used the statistical results and participant comments about their sorts to interpret and describe distinct perceptions. RESULTS The analysis of 65 Q sorts (completed by 22 residents and 43 attendings) identified five distinct user perceptions regarding the effects of WBA in practice, which the authors labeled enthusiasm, compliance, effort, neutrality, and skepticism. These perceptions were characterized by differences in views on three main issues: the intended goals of the innovation, its applicability (ease of applying it to practice), and its actual impact. CONCLUSIONS User perceptions of the effects of innovations in medical education can be typified and should be anticipated. This study's insights into five principal user perceptions can support the design and implementation of innovations in medical education.
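Q methodology inverts conventional factor analysis: the participants, not the statements, are treated as the variables, so the extracted factors group people who sorted the statements similarly. A hypothetical sketch of this by-person step (illustrative only, not the authors' analysis code; the `by_person_factors` helper and the simulated sorts are invented):

```python
import numpy as np

def by_person_factors(sorts, n_factors):
    """By-person factor analysis of Q sorts.
    sorts: (n_participants, n_statements) array of statement rankings.
    Returns an (n_participants, n_factors) array of factor loadings."""
    corr = np.corrcoef(sorts)                # person-by-person correlations
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]
    # Scale eigenvectors to loadings; clip guards against tiny negative
    # eigenvalues from floating-point error.
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

# Two hypothetical camps sorting 7 statements in roughly opposite orders:
rng = np.random.default_rng(1)
base = np.array([3.0, 2.0, 1.0, 0.0, -1.0, -2.0, -3.0])
sorts = np.vstack(
    [base + 0.2 * rng.normal(size=7) for _ in range(3)]
    + [-base + 0.2 * rng.normal(size=7) for _ in range(3)]
)
loadings = by_person_factors(sorts, 2)
# The first factor separates the two camps with opposite-signed loadings.
```

Participants who load on the same factor share a perspective, which is then interpreted qualitatively, as with the five labelled perceptions in this study.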
Affiliation(s)
- Joanne P I Fokkema
- Dr. Fokkema is a physician and PhD student, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. Scheele is professor, VU University Medical Center, Amsterdam, the Netherlands, and a gynecologist and residency program director, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. Westerman is a researcher, School of Medical Sciences, VU University Medical Center, Amsterdam, the Netherlands, and a resident in internal medicine, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. van Exel is associate professor, Institute of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, the Netherlands. Dr. Scherpbier is professor and dean, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Dörr, deceased, was professor, Department of Education and Teaching, Leiden University Medical Center, Leiden, the Netherlands, and a gynecologist, Medical Centre Haaglanden, Den Haag, the Netherlands. Dr. Teunissen is a resident in obstetrics-gynecology, VU University Medical Center, Amsterdam, the Netherlands, and associate professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
28
|
van der Vleuten CPM, Driessen EW. What would happen to education if we take education evidence seriously? PERSPECTIVES ON MEDICAL EDUCATION 2014; 3:222-232. [PMID: 24925627 PMCID: PMC4078056 DOI: 10.1007/s40037-014-0129-9] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Educational practice and educational research are not aligned with each other. Current educational practice heavily relies on information transmission or content delivery to learners. Yet evidence shows that delivery is only a minor part of learning. To illustrate the directions we might take to find better educational strategies, six areas of educational evidence are briefly reviewed. The flipped classroom idea is proposed to shift our expenditure and focus in education. All information delivery could be web distributed, thus creating more time for other more expensive educational strategies to support the learner. In research our focus should shift from comparing one curriculum to the other, to research that explains why things work in education and under which conditions. This may generate ideas for creative designers to develop new educational strategies. These best practices should be shared and further researched. At the same time attention should be paid to implementation and the realization that teachers learn in a way very similar to the people they teach. If we take the evidence seriously, our educational practice will look quite different to the way it does now.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands.
- E W Driessen
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands
29
|
Hamdorf J. Assessment of surgical trainees in the workplace. ANZ J Surg 2013; 83:400-1. [DOI: 10.1111/ans.12179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Jeffrey Hamdorf
- School of Surgery, The University of Western Australia, Perth, Western Australia, Australia