1
Thoonen BPA, Scherpbier-de Haan ND, Fluit CRMG, Stalmeijer RE. How Do Trainees Use EPAs to Regulate Their Learning in the Clinical Environment? A Grounded Theory Study. Perspectives on Medical Education 2024; 13:431-441. [PMID: 39247555] [PMCID: PMC11378707] [DOI: 10.5334/pme.1403]
Abstract
Introduction: Entrustable Professional Activities (EPAs) can potentially support self-regulated learning in the clinical environment. However, critics of EPAs express doubts, as they see potential harms such as checkbox behaviour. This study explores how GP trainees use EPAs in the clinical environment through the lens of self-regulated learning theory and addresses the question of whether EPAs help or hinder trainees' learning in a clinical environment.
Methods: Using constructivist grounded theory methodology, a purposive and theoretical sample of GP trainees across different years of training was interviewed. Two PICTOR interviews were added to refine and confirm the constructed theory. Data collection and analysis followed principles of constant comparative analysis.
Results and Discussion: Trainees experience both hindering and helping influences of EPAs and self-regulate their learning by balancing these influences throughout GP placements. Three consecutive stages were constructed, each with a different use of EPAs: adaptation, taking control, and checking the boxes. EPAs were most helpful in the 'taking control' stage. EPAs hindered self-regulated learning most during the final stage of training, as trainees had other learning goals and experienced the assessment of EPAs as bureaucratic and demotivating. Regularly discussing EPAs with supervisors helped trainees focus on specific learning goals, create opportunities for learning, and generate task-oriented feedback.
Conclusion: EPAs can both help and hinder self-regulated learning, and how trainees balance these influences changes over time. Placements therefore need to be long enough to enable trainees to gain and maintain control of their learning. Supervisors and teachers should assist trainees in balancing the hindering and helping influences of EPAs.
Affiliation(s)
- Bart P A Thoonen
- Development of Education in Primary Care at the Department of Primary and Community Care, Radboud University Medical Centre, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- Nynke D Scherpbier-de Haan
- Department of Primary and Long-term Care, University Medical Centre Groningen, Groningen, The Netherlands
- Renée E Stalmeijer
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
2
Blanchette P, Poitras ME, Lefebvre AA, St-Onge C. Making judgments based on reported observations of trainee performance: a scoping review in Health Professions Education. Canadian Medical Education Journal 2024; 15:63-75. [PMID: 39310309] [PMCID: PMC11415737] [DOI: 10.36834/cmej.75522]
Abstract
Background: Educators now use reported observations when assessing trainees' performance. Unfortunately, they have little information about how to design and implement assessments based on reported observations.
Objective: The purpose of this scoping review was to map the literature on the use of reported observations in judging health professions education (HPE) trainees' performances.
Methods: Arksey and O'Malley's (2005) method was used with four databases (ERIC, CINAHL, MEDLINE, PsycINFO). Eligibility criteria for articles were: (1) documents in English or French, including primary data, and initial or professional training; (2) training in an HPE program; (3) workplace-based assessment; and (4) assessment based on reported observations. The inclusion/exclusion and data extraction steps were performed (agreement rate > 90%). We developed a data extraction grid to chart the data. Descriptive analyses were used to summarize quantitative data, and the authors conducted thematic analysis for qualitative data.
Results: Based on 36 papers and 13 consultations, the team identified six steps characterizing trainee performance assessment based on reported observations in HPE: (1) making first contact, (2) observing and documenting the trainee performance, (3) collecting and completing assessment data, (4) aggregating assessment data, (5) inferring the level of competence, and (6) documenting and communicating the decision to the stakeholders.
Discussion: Describing the design and implementation of assessment based on reported observations is a first step towards quality implementation, guiding the educators and administrators responsible for graduating competent professionals. Future research might focus on understanding the context beyond assessor cognition to ensure the quality of meta-assessors' decisions.
3
Oswald A, Dubois D, Snell L, Anderson R, Karpinski J, Hall AK, Frank JR, Cheung WJ. Implementing Competence Committees on a National Scale: Design and Lessons Learned. Perspectives on Medical Education 2024; 13:56-67. [PMID: 38343555] [PMCID: PMC10854462] [DOI: 10.5334/pme.961]
Abstract
Competence committees (CCs) are a recent innovation to improve assessment decision-making in health professions education. CCs enable a group of trained, dedicated educators to review a portfolio of observations about a learner's progress toward competence and make systematic assessment decisions. CCs are aligned with competency-based medical education (CBME) and programmatic assessment. While there is an emerging literature on CCs, little has been published on their system-wide implementation. National-scale implementation of CCs is complex, owing to the culture change that underlies this shift in assessment paradigm and the logistics and skills needed to enable it. We present the Royal College of Physicians and Surgeons of Canada's experience implementing a national CC model, the challenges the Royal College faced, and some strategies to address them. With large-scale CC implementation, managing the tension between standardization and flexibility is a fundamental issue that needs to be anticipated and addressed, with careful consideration of individual program needs, resources, and engagement of invested groups. If implementation is to take place in a wide variety of contexts, an approach that uses multiple engagement and communication strategies to allow for local adaptations is needed. Large-scale implementation of CCs, like any transformative initiative, does not occur at a single point but is an evolutionary process requiring both upfront resources and ongoing support. As such, it is important to consider embedding a plan for program evaluation at the outset. We hope these shared lessons will be of value to other educators who are considering a large-scale CBME CC implementation.
Affiliation(s)
- Anna Oswald
- Division of Rheumatology, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Competency Based Medical Education, University of Alberta, Edmonton, AB, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- 8-130 Clinical Sciences building, 11350-83 Avenue, Edmonton, AB, Canada
- Daniel Dubois
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Anesthesiology and Pain Medicine, University of Ottawa, Ottawa, ON, Canada
- Linda Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Institute of Health Sciences Education and Department of Medicine, McGill University, Montreal, QC, Canada
- Robert Anderson
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Northern Ontario School of Medicine University, Sudbury, ON, Canada
- Jolanta Karpinski
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Andrew K. Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, ON, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Jason R. Frank
- Centre for Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, ON, Canada
- Royal College of Physicians and Surgeons of Canada, 1053 Carling Avenue, Rm F660, Ottawa, ON, Canada
4
Bremer AE, van de Pol MHJ, Laan RFJM, Fluit CRMG. An Innovative Undergraduate Medical Curriculum Using Entrustable Professional Activities. Journal of Medical Education and Curricular Development 2023; 10:23821205231164894. [PMID: 37123076] [PMCID: PMC10134152] [DOI: 10.1177/23821205231164894]
Abstract
The need to educate medical professionals in changing medical organizations has led to a revision of Radboudumc's undergraduate medical curriculum. Entrustable professional activities (EPAs) were used as a learning tool to support participation and encourage feedback-seeking behavior, in order to offer students the best opportunities for growth. This paper describes the development of Radboudumc's EPA-based Master's curriculum and how EPAs can facilitate continuity in learning in the clerkships. Four guiding principles were used to create a curriculum that offers possibilities for students' development: (1) working with EPAs, (2) establishing entrustment, (3) providing continuity in learning, and (4) organizing smooth transitions. The new curriculum was designed with the implementation of EPAs and an e-portfolio, based on these four principles. The authors found that the revised curriculum corresponds to daily practice in the clerkships. Students used their e-portfolios throughout all clerkships, which stimulated feedback-seeking behavior. Moreover, EPAs promoted continuity in learning while students rotated between clerkships every one to two months. This might encourage curriculum developers to use EPAs when aiming for greater continuity in students' development. Future research needs to focus on the effect of EPAs on transitions across clerkships in order to further improve the undergraduate medical curriculum.
Affiliation(s)
- Anne E Bremer
- Radboud Institute for Health Sciences, Department of Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, the Netherlands
- Anne E Bremer, Radboudumc Health Academy, Postbus 9101, 6500 HB, Nijmegen, The Netherlands.
- Marjolein H J van de Pol
- Department of Primary and Community Care, Radboud University Medical Center, Nijmegen, the Netherlands
- Roland F J M Laan
- Department of Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, the Netherlands
- Cornelia R M G Fluit
- Department of Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, the Netherlands
5
Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. Currents in Pharmacy Teaching & Learning 2022; 14:701-711. [PMID: 35809899] [DOI: 10.1016/j.cptl.2022.06.014]
Abstract
Introduction: Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directive for Specialists (CanMEDS) framework.
Methods: The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process.
Results: A good model fit was found for the adapted CanMEDS-based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies. Some supervisors were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director.
Conclusions: The adapted CanMEDS competency framework provided a valid structure for measuring competence. The reliability/precision of SPE measurements could not be established, and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Royal Dutch Pharmacists Association (KNMP), Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Andries S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Hester E M Daelmans
- Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
- Carlos F Collares
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands.
- Marcel L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands.
- Rashmi A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands.
6
Rassos J, Ginsburg S, Stalmeijer RE, Melvin LJ. The Senior Medical Resident's New Role in Assessment in Internal Medicine. Academic Medicine: Journal of the Association of American Medical Colleges 2022; 97:711-717. [PMID: 34879012] [DOI: 10.1097/acm.0000000000004552]
Abstract
Purpose: With the introduction of competency-based medical education, senior residents have taken on a new, formalized role of completing assessments of their junior colleagues. However, no prior studies have explored the role of near-peer assessment within the context of entrustable professional activities (EPAs) and competency-based medical education. This study explored internal medicine residents' perceptions of near-peer feedback and assessment in the context of EPAs.
Method: Semistructured interviews were conducted from September 2019 to March 2020 with 16 internal medicine residents (8 first-year residents and 8 second- and third-year residents) at the University of Toronto, Toronto, Ontario, Canada. Interviews were conducted and coded iteratively within a constructivist grounded theory approach until sufficiency was reached.
Results: Senior residents noted a tension between their dual roles of coach and assessor when completing EPAs. Senior residents managed the relationship with junior residents so as not to upset the learner and potentially harm the team dynamic, which led to the documentation of often inflated EPA ratings. Junior residents found senior residents to be credible providers of feedback; however, they were reluctant to view senior residents as credible assessors.
Conclusions: Although EPAs have formalized moments of feedback, senior residents struggled to include constructive feedback comments, knowing that these assessment decisions may inform the overall summative decisions made about their peers. As a result, EPA ratings were often inflated. The utility of having senior residents serve as assessors needs to be reexamined, because there is concern that this new role has taken away the benefits of having a senior resident act solely as a coach.
Affiliation(s)
- James Rassos
- J. Rassos is assistant professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shiphra Ginsburg
- S. Ginsburg is professor, Department of Medicine, and scientist, Wilson Centre for Education, University of Toronto, Toronto, Ontario, Canada
- Renée E Stalmeijer
- R.E. Stalmeijer is assistant professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Lindsay J Melvin
- L.J. Melvin is assistant professor, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
7
Abstract
If used thoughtfully and with intent, feedback and coaching will promote learning and growth as well as personal and professional development in our learners. Feedback is an educational tool as well as a social interaction between learner and supervisor, in the context of a respectful and trusting relationship. It challenges the learner's thinking and supports the learner's growth. Coaching is an educational philosophy dedicated to supporting learners' personal and professional development and growth and supporting them to reach their potential. In clinical education, feedback is most effective when it is explicitly distinguished from summative assessment. Importantly, feedback should be about firsthand observed behaviors (which can be direct or indirect) and not about information that comes from a third party. Learners are more receptive to feedback if it comes from a source that they perceive as credible, and with whom they have developed rapport. The coaching relationship between learner and supervisor should also be built on mutual trust and respect. Coaching can be provided in the moment (feedback on everyday clinical activities that leads to performance improvement, even in a short interaction with a supervisor) and over time (a longer-term relationship with a supervisor in which there is reflection on the learner's development and co-creation of new learning goals). Feedback and coaching are most valuable when the learner and teacher exhibit a growth mindset. At the organizational level, it is important that both the structures and training are in place to ensure a culture of effective feedback and coaching in the clinical workplace.
Conclusions: Having a thoughtful and intentional approach to feedback and coaching with learners, as well as applying evidence-based principles, will not only contribute in a significant way to their developmental progression, but will also provide them with the tools they need to have the best chance of achieving competence throughout their training.
What is Known:
• Feedback and coaching are key to advancing the developmental progression of trainees as they work towards achieving competence.
• Feedback is not a one-way delivery of specific information from supervisor to trainee, but rather a social interaction between two individuals in which trust and respect play a key role.
• Provision of effective feedback may be hampered by confusing formative (supporting trainee learning and development) and summative (the judgment that is made about a trainee's level of competence) purposes.
What is New:
• Approaches to both the provision of feedback/coaching and the assessment of competence must be developed in parallel to ensure success in clinical training programs.
• Faculty development is essential to provide clinical teachers with the skills to provide effective feedback and coaching.
• Coaching's effectiveness relies on nurturing strong trainee-supervisor relationships, ensuring high-quality feedback, nourishing a growth mindset, and encouraging an institutional culture that embraces feedback and coaching.
8
Pearce J, Tavares W. A philosophical history of programmatic assessment: tracing shifting configurations. Advances in Health Sciences Education: Theory and Practice 2021; 26:1291-1310. [PMID: 33893881] [DOI: 10.1007/s10459-021-10050-1]
Abstract
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry, focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution and entrenchment. For each, we describe the configurations of the scene, examine the underlying philosophical presuppositions driving changes, and detail the upshots for assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline. Rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment and suggests reasons for perceived benefits as well as potential tensions, contradictions and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study and by suggesting 'what next' for programmatic assessment in light of this endeavour.
Affiliation(s)
- J Pearce
- Tertiary Education (Assessment), Australian Council for Educational Research, 19 Prospect Hill Road, Camberwell, VIC, 3124, Australia.
- W Tavares
- The Wilson Centre and Post-MD Education, University Health Network and University of Toronto, Toronto, ON, Canada
9
Brand PLP, Jaarsma ADC, van der Vleuten CPM. Driving lesson or driving test? A metaphor to help faculty separate feedback from assessment. Perspectives on Medical Education 2021; 10:50-56. [PMID: 32902828] [PMCID: PMC7809072] [DOI: 10.1007/s40037-020-00617-w]
Abstract
Although there is consensus in the medical education world that feedback is an important and effective tool to support experiential workplace-based learning, learners tend to avoid the feedback associated with direct observation because they perceive it as a high-stakes evaluation with significant consequences for their future. The perceived dominance of the summative assessment paradigm throughout medical education reduces learners' willingness to seek feedback and encourages supervisors to conflate feedback with the provision of 'objective' grades or pass/fail marks. This eye-opener article argues that the provision and reception of effective feedback by clinical supervisors and their learners depend on both parties' awareness of the important distinction between using feedback to coach towards growth and development (assessment for learning) and reaching a high-stakes judgement on the learner's competence and fitness for practice (assessment of learning). Using driving lessons and the driving test as a metaphor for feedback and assessment helps supervisors and learners to understand this crucial difference and to act upon it. It is the supervisor's responsibility to ensure that supervisor and learner achieve a clear mutual understanding of the purpose of each interaction (i.e. feedback or assessment). To allow supervisors to use the driving lesson-driving test metaphor for this purpose in their interactions with learners, it should be included in faculty development initiatives, along with a discussion of the key importance of separating feedback from assessment, to promote a feedback culture of growth and to support programmatic assessment of competence.
Affiliation(s)
- Paul L P Brand
- Department of Medical Education and Faculty Development, Isala Hospital, Isala Academy, Zwolle, The Netherlands.
- Lifelong Learning, Education and Assessment Research Network (LEARN), University Medical Centre Groningen, Groningen, The Netherlands.
- A Debbie C Jaarsma
- Lifelong Learning, Education and Assessment Research Network (LEARN), University Medical Centre Groningen, Groningen, The Netherlands
- Centre for Educational Development and Research (CEDAR), University Medical Centre Groningen, Groningen, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
10
Tavares W, Young M, Gauthier G, St-Onge C. The Effect of Foregrounding Intended Use on Observers' Ratings and Comments in the Assessment of Clinical Competence. Academic Medicine: Journal of the Association of American Medical Colleges 2020; 95:777-785. [PMID: 31725463] [DOI: 10.1097/acm.0000000000003076]
Abstract
Purpose: Some educational programs have adopted the premise that the same assessment can serve both formative and summative goals; however, how observers understand and integrate the intended uses of an assessment may affect the way they execute the assessment task. The objective of this study was to explore the effect of foregrounding a different intended use (formative vs summative learner assessment) on observer contributions (ratings and comments).
Method: In this randomized, experimental, between-groups, mixed-methods study (May-September 2017), participants observed 3 prerecorded clinical performances under formative or summative assessment conditions. Participants rated the performances using a global rating tool and provided comments. Participants were then asked to reconsider their ratings from the alternative perspective (to which they were originally blinded). They were given the opportunity to alter their ratings and comments and to provide rationales for their decision to change or preserve their original ratings and comments. Outcomes included participant-observers' comments, ratings, changes to each, and stated rationales for changing or preserving their contributions.
Results: Foregrounding different intended uses of assessment data for participant-observers did not result in differences in ratings, in the number or type of comments (both emphasized evaluative over constructive statements), or in the ability to differentiate among performances. After adopting the alternative perspective, participant-observers made only small changes to ratings or comments. Participant-observers reported that they engage in the process in an evaluative manner despite different intended uses.
Conclusions: Foregrounding different intended uses for assessments did not result in significant systematic differences in the assessment data generated. Observers provided more evaluative than constructive statements overall, regardless of the intended use of the assessment. Future research is needed to explore whether these results hold in social/workplace-based contexts and how they might affect learners.
Affiliation(s)
- Walter Tavares
- W. Tavares is assistant professor and scientist, The Wilson Centre, and Post-MD Education, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; ORCID: https://orcid.org/0000-0001-8267-9448
- M. Young is associate professor, Department of Medicine, McGill University, Montreal, Quebec, Canada; ORCID: https://orcid.org/0000-0002-2036-2119
- G. Gauthier is adjunct professor, Medecine Interne, Université de Sherbrooke, Sherbrooke, Quebec, Canada; ORCID: https://orcid.org/0000-0001-7368-638X
- C. St-Onge is professor, Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5313-0456
11
Andreassen P, Malling B. How are formative assessment methods used in the clinical setting? A qualitative study. International Journal of Medical Education 2019; 10:208-215. [PMID: 31759332] [PMCID: PMC7246116] [DOI: 10.5116/ijme.5db3.62e3]
Abstract
Objectives: To explore how formative assessment methods are used and perceived by second-year junior doctors in different clinical settings.
Methods: A focused ethnography study was carried out. Ten second-year junior doctors from different specialties were selected using purposive sampling. The junior doctors were observed during a day in their clinical workplace when formative assessment was in focus. They were subsequently interviewed by phone, using a semi-structured interview guide, about their experiences with and attitudes towards formative assessment. Field notes from the observations and the interview transcriptions were analyzed using an inductive content analysis approach, and the concept of "everyday resistance" was used as a theoretical lens.
Results: Three themes were identified. First, there were several barriers to the use of formative assessment methods in the clinical context, including subtle tactics of everyday resistance such as avoiding, deprioritizing, and contesting formative assessment methods. Second, junior doctors made careful selections when arranging a formative assessment. Finally, junior doctors had ambiguous attitudes towards the use of mandatory formative assessment methods and mixed experiences with their educational impact.
Conclusions: This study emphasizes that the use of formative assessment methods in the clinical setting is not a neutral and context-independent exercise; rather, it is affected by a myriad of factors such as collegial relations, educational traditions, emotional issues, and subtle forms of resistance. An important implication for the health care sector is that these issues must be addressed if formative assessment methods are to be properly implemented in the clinic.
Affiliation(s)
- Bente Malling
- Centre for Health Sciences Education, Aarhus University, Denmark