51. Dart J, Rees C, Ash S, McCall L, Palermo C. Shifting the narrative and practice of assessing professionalism in dietetics education: An Australasian qualitative study. Nutr Diet 2023. PMID: 36916155. DOI: 10.1111/1747-0080.12804.
Abstract
AIM: We aimed to explore current approaches to assessing professionalism in dietetics education in Australia and New Zealand, asking what is working well and what needs to improve.
METHOD: We employed a qualitative interpretive approach and conducted interviews with academic and practitioner (workplace-based) educators (total sample n = 78) with a key stake in dietetics education across Australia and New Zealand. Data were analysed using team-based framework analysis.
RESULTS: Our findings suggest significant shifts in dietetics education in the area of professionalism assessment. Professionalism assessment is embedded in the formal curricula of dietetics programs and occurs in both university and placement settings. In particular, advances have been demonstrated in those programs assessing professionalism as part of programmatic assessment. Progress has been enabled by philosophical and curricular shifts; clearer articulation and shared understandings of professionalism standards; enhanced learner agency and reduced power distance; early identification of, and intervention in, professionalism lapses; and the increased confidence and capability of educators.
CONCLUSIONS: These findings suggest considerable advances in professionalism assessment in recent years, with shifts towards approaching professionalism through a more interpretivist lens, holistically, and in a more student-centred way. Professionalism assessment in dietetics education is a shared responsibility and requires further development and transformation to more fully embed and strengthen curricular approaches across programs. Further work should investigate strategies for building safer learning cultures and capacity for professionalism conversations, and for strengthening approaches to remediation.
Affiliation(s)
- Janeane Dart
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Charlotte Rees
- Head of School, School of Health Sciences, College of Health, Medicine and Wellbeing, University of Newcastle, Callaghan, New South Wales, Australia; Monash Centre for Scholarship in Health Education (MCSHE), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Susan Ash
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Louise McCall
- Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Claire Palermo
- Office of the Deputy Dean Education, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
52. Rietmeijer CBT, van Esch SCM, Blankenstein AH, van der Horst HE, Veen M, Scheele F, Teunissen PW. A phenomenology of direct observation in residency: Is Miller's 'does' level observable? Medical Education 2023; 57:272-279. PMID: 36515981. PMCID: PMC10107098. DOI: 10.1111/medu.15004.
Abstract
INTRODUCTION: Guidelines on direct observation (DO) present DO as an assessment of Miller's 'does' level, that is, the learner's ability to function independently in clinical situations. The literature, however, indicates that residents may behave 'inauthentically' when observed. To minimise this 'observer effect', learners are encouraged to 'do what they would normally do' so that they can receive feedback on their actual work behaviour. Recent phenomenological research on patients' experiences with DO challenges this approach: patients needed, and caused, some participation of the observing supervisor. Although guidelines advise supervisors to minimise their presence, we are poorly informed about how some deliberate supervisor participation affects residents' experience of DO situations. We therefore investigated what residents essentially experienced in DO situations.
METHODS: We performed an interpretive phenomenological interview study with six general practice (GP) residents. We collected and analysed our data using the four phenomenological lenses of lived body, lived space, lived time and lived relationship. We grouped our open codes by interpreting what they revealed about common structures of residents' pre-reflective experiences.
RESULTS: Residents experienced the observing supervisor not just as an observer or assessor; they also experienced them as both a senior colleague and the patient's familiar GP, which led to many additional interactions. When residents tried to act as if the supervisor was not there, they could feel insecure and handicapped, because the supervisor was there, changing the situation.
DISCUSSION: Our results indicate that the 'observer effect' is much more material than was previously understood. Consequently, observing residents' 'authentic' behaviour at Miller's 'does' level, as if the supervisor was not there, seems impossible and a misleading concept: misleading because it may frustrate residents and cause supervisors to neglect patients' and residents' needs in DO situations. We suggest that one-way DO is better replaced by bi-directional DO in working-and-learning-together sessions.
Affiliation(s)
- Chris B. T. Rietmeijer
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Suzanne C. M. van Esch
- Department of General Practice, Amsterdam UMC, location University of Amsterdam, Amsterdam, The Netherlands
- Annette H. Blankenstein
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Henriëtte E. van der Horst
- Department of General Practice, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Mario Veen
- Department of General Practice, Erasmus Medical Center, Rotterdam, The Netherlands
- Fedde Scheele
- School of Medical Sciences, Athena Institute for Transdisciplinary Research, Amsterdam UMC, location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Pim W. Teunissen
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
53. Westein MPD, Koster AS, Daelmans HEM, Bouvy ML, Kusurkar RA. How progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships: a mixed method study. Advances in Health Sciences Education: Theory and Practice 2023; 28:205-222. PMID: 36094680. PMCID: PMC9992254. DOI: 10.1007/s10459-022-10153-3.
Abstract
The combination of measuring performance and giving feedback creates tension between the formative and summative purposes of progress evaluations and can be challenging for supervisors. There are conflicting perspectives and evidence on the effects supervisor-trainee relationships have on assessing performance. The aim of this study was to learn how progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships. Progress evaluations in a two-year community-pharmacy specialization program were studied with a mixed-method approach. An adapted version of the Canadian Medical Education Directives for Specialists (CanMEDS) framework was used. The validity of the performance evaluation scores of 342 trainees was analyzed using repeated measures ANOVA. Semi-structured interviews were held with fifteen supervisors to investigate their response processes, the utility of the progress evaluations, and the influence of supervisor-trainee relationships. Time and CanMEDS roles affected the three-monthly progress evaluation scores. Interviews revealed that supervisors varied in their response processes. They were more committed to stimulating development than to scoring actual performance. Progress evaluations were used to discuss and give feedback on trainee development and to add structure to the learning process. A positive supervisor-trainee relationship was seen as the foundation for feedback, and supervisors preferred the roles of educator, mentor, and coach over the role of assessor. We found that progress evaluations are a good method for directing feedback in longitudinal supervisor-trainee relationships. The reliability of scoring performance was low. We recommend that progress evaluations be kept independent of formal assessments in order to minimize supervisors' role conflicts.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands
- Research in Education, Faculty of Medicine, Vrije Universiteit, Amsterdam, The Netherlands
- The Royal Dutch Pharmacists Association (KNMP), The Hague, The Netherlands
- A S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands
- H E M Daelmans
- Programme Director, Master of Medicine, Faculty of Medicine, Vrije Universiteit, Amsterdam, The Netherlands
- M L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG Utrecht, The Netherlands
- R A Kusurkar
- Research in Education, Faculty of Medicine, Vrije Universiteit, Amsterdam, The Netherlands
54. Adam P, Mauksch LB, Brandenburg DL, Danner C, Ross VR. Optimal training in communication model (OPTiCOM): A programmatic roadmap. Patient Education and Counseling 2023; 107:107573. PMID: 36410312. DOI: 10.1016/j.pec.2022.107573.
Abstract
OBJECTIVES: Teaching primary care residents patient communication skills is essential, complex, and impeded by barriers. We found no models that guide faculty in training residents in the workplace while integrating the necessary system components, the science of physician-patient communication training, and competency-based medical education. The aim of this project was to create such a model.
METHODS: We created OPTiCOM in four steps: (1) communication educator interviews, analysis and theme development; (2) initial model construction; (3) model refinement using expert feedback; (4) a structured literature review to validate, refine and finalize the model.
RESULTS: Our model contains ten interdependent building blocks organized into four developmental tiers. The Foundational value tier has one building block: Naming relationship as a core value. The Expertise and resources tier includes four building blocks: Curricular expertise, Curricular content, Leadership, and Time. The four building blocks in the Application and development tier are Observation form, Faculty development, Technology, and Formative assessment. The Language and culture tier comprises the final building block, Culture promoting continuous improvement in teaching communication.
CONCLUSIONS: OPTiCOM organizes ten interdependent systems building blocks to maximize and sustain resident learning of communication skills.
PRACTICE IMPLICATIONS: Residency faculty can use OPTiCOM for self-assessment, program creation and revision.
Affiliation(s)
- Patricia Adam
- Department of Family Medicine and Community Health, University of Minnesota, Smiley's Clinic, 2020 East 28th Street, Minneapolis, MN 55407, USA.
- Larry B Mauksch
- Emeritus, Department of Family Medicine, University of Washington, 6026 30th Ave NE, Seattle, WA 98115, USA
- Dana L Brandenburg
- Department of Family Medicine and Community Health, University of Minnesota, Smiley's Clinic, 2020 East 28th Street, Minneapolis, MN 55407, USA
- Christine Danner
- Department of Family Medicine and Community Health, University of Minnesota, Bethesda Clinic, 580 Rice St, St Paul, MN 55103, USA
- Valerie R Ross
- Department of Family Medicine, University of Washington, Family Medicine Residency Program, Box 356390, 331 N.E. Thornton Place, Seattle, WA 98125, USA
55. Ross S, Lawrence K, Bethune C, van der Goes T, Pélissier-Simard L, Donoff M, Crichton T, Laughlin T, Dhillon K, Potter M, Schultz K. Development, Implementation, and Meta-Evaluation of a National Approach to Programmatic Assessment in Canadian Family Medicine Residency Training. Academic Medicine 2023; 98:188-198. PMID: 35671407. PMCID: PMC9855760. DOI: 10.1097/acm.0000000000004750.
Abstract
The growing international adoption of competency-based medical education has created a desire for descriptions of innovative assessment approaches that generate appropriate and sufficient information to allow for informed, defensible decisions about learner progress. In this article, the authors provide an overview of the development and implementation of the approach to programmatic assessment in postgraduate family medicine training programs in Canada, called Continuous Reflective Assessment for Training (CRAFT). CRAFT is a principles-guided, high-level approach to workplace-based assessment that was intentionally designed to be adaptable to local contexts, including size of program, resources available, and structural enablers and barriers. CRAFT has been implemented in all 17 Canadian family medicine residency programs, with each program taking advantage of the high-level nature of the CRAFT guidelines to create bespoke assessment processes and tools appropriate for their local contexts. Similarities and differences in CRAFT implementation between 5 different family medicine residency training programs, representing both English- and French-language programs from both Western and Eastern Canada, are described. Despite the intentional flexibility of the CRAFT guidelines, notable similarities in assessment processes and procedures across the 5 programs were seen. A meta-evaluation of findings from programs that have published evaluation information supports the value of CRAFT as an effective approach to programmatic assessment. While CRAFT is currently in place in family medicine residency programs in Canada, given its adaptability to different contexts as well as promising evaluation data, the CRAFT approach shows promise for application in other training environments.
Affiliation(s)
- Shelley Ross
- S. Ross is professor and director, Research and Innovation, Teaching and Assessment Support Program, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada; ORCID: http://orcid.org/0000-0001-9581-3191
- Kathrine Lawrence
- K. Lawrence is associate professor and assessment director and provincial head, Family Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Cheri Bethune
- C. Bethune is professor, Northern Ontario School of Medicine, clinical professor, Memorial University of Newfoundland, Newfoundland, Canada, and clinician educator, College of Family Physicians of Canada; ORCID: http://orcid.org/0000-0002-6230-6262
- Theresa van der Goes
- T. van der Goes is family physician (retired), medical educator, and director of assessment, University of British Columbia Family Medicine Residency Program, Vancouver, British Columbia, Canada
- Luce Pélissier-Simard
- L. Pélissier-Simard is associate professor, Department of Family Medicine and Emergency Medicine, and associate academic director, Centre de Développement Professionnel, Faculté de médecine et des sciences de la santé de l’Université de Sherbrooke, Sherbrooke, Québec, Canada; ORCID: http://orcid.org/0000-0002-9402-1798
- Michel Donoff
- M. Donoff is family physician, professor, and associate chair, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Thomas Crichton
- T. Crichton is family physician and senior advisor, Postgraduate Medical Education, Northern Ontario School of Medicine, Thunder Bay, Ontario, Canada
- Thomas Laughlin
- T. Laughlin is an associate professor, Department of Family Medicine, Dalhousie University, Halifax, Nova Scotia, Canada, and clinical associate professor, Discipline of Family Medicine, Memorial University of Newfoundland, Newfoundland, Canada
- Kiran Dhillon
- K. Dhillon is clinical lecturer, Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada, and member, Certification Process and Assessment Committee, College of Family Physicians of Canada
- Martin Potter
- M. Potter is assistant professor, Family Medicine and Emergency Department, Université de Montréal, Montréal, Québec, Canada
- Karen Schultz
- K. Schultz is professor and assessment director, Department of Family Medicine, Queen’s University, Kingston, Ontario, Canada, and chair, Certification Process and Assessment Committee, College of Family Physicians of Canada; ORCID: http://orcid.org/0000-0003-0208-3981
56. Carraccio C, Lentz A, Schumacher DJ. "Dismantling Fixed Time, Variable Outcome Education: Abandoning 'Ready or Not, Here they Come' is Overdue". Perspectives on Medical Education 2023; 12:68-75. PMID: 36937800. PMCID: PMC10022540. DOI: 10.5334/pme.10.
Abstract
Two decades after competency-based medical education appeared in the lexicon of medical educators, the community continues to struggle with realizing its full potential. Implementation of the time-variable, fixed-outcome component has languished, owing to complexity compounded by resistance to change. Learners continue to transition from medical school to residency, and then to practice, primarily based on time rather than on having achieved the ability to meet the needs of the patient populations they will serve; only the few who demonstrate glaring deficiencies do not graduate. The authors urge the medical education community to move from the current fixed-time path of medical education toward a true continuum of time-variable, fixed-outcome education, training, and deliberate practice, the latter defined by purposeful learning, coaching, feedback, and repetition on the path to achieving and maintaining expertise. The opportunities afforded by such a time-variable, fixed-outcome approach include: (1) development of a career-long growth mindset; (2) the ability to address evolving population health needs and careers within the context of one's practice; and (3) continual improvement of care quality and outcomes for patients as providers journey towards expertise.
Affiliation(s)
- Daniel J. Schumacher
- Cincinnati Children’s Hospital Medical Center and University of Cincinnati College of Medicine, Cincinnati, Ohio, US
57. AlRadini FA. Perceptions of portfolio assessment in family medicine graduates: a qualitative interview study. BMC Medical Education 2022; 22:905. PMID: 36585681. PMCID: PMC9802017. DOI: 10.1186/s12909-022-03991-7.
Abstract
BACKGROUND: In medical education, a portfolio can serve as a tool for learning, assessment, and reflection on practice. This study focuses on perceptions of the portfolio assessment methodology among participants in the Saudi Diploma of Family Medicine program.
METHODS: In this qualitative interview study, data were collected and analysed using a grounded theory approach.
RESULTS: Nine codes emerged: (1) importance of understanding the definition, objectives, and process of portfolio assessment; (2) impact of different understandings on the part of various trainers; (3) role of the type of assessment; (4) workload and stress of portfolio assessment; (5) effectiveness of the portfolio contents; (6) role of the mentor's feedback; (7) role in the learning process; (8) role in practice; (9) suggestions for portfolio improvement. Open codes were then regrouped into three axial codes: context, strategy, and outcome of portfolio assessment.
CONCLUSION: This study developed a general explanation of portfolio assessment as shaped by the postgraduate students. It identifies the importance of understanding the portfolio for student acceptance of the assessment methodology; proper implementation is therefore vital to the success of assessing students by portfolio. The students perceived reflection as the most valuable part of the process, facilitating their learning, confidence, and self-assessment. Mentor feedback is a good strategy for coping with portfolio challenges. Our findings provide some evidence of positive outcomes of portfolio assessment in practice and professional development.
Affiliation(s)
- Faten A AlRadini
- Department of Clinical Sciences, College of Medicine, Princess Nourah bint Abdulrahman University, 11671, Riyadh, Saudi Arabia.
58. McDonald J, Ryan S, Heeneman S, Hu W. Informed self-assessment during the transition to medical school: a longitudinal narrative study. BMJ Open 2022; 12:e065203. PMID: 36581430. PMCID: PMC9806099. DOI: 10.1136/bmjopen-2022-065203.
Abstract
OBJECTIVES: To explore how medical students' narratives of informed self-assessment (ISA) change during their first 18 months of study.
DESIGN: This longitudinal study used student narratives, drawn from qualitative interviews and written reflections during the transition to medical school, to examine changes in ISA. Our analysis was informed by Situated Cognition Theory, which recognises the impact and interplay of personal and environmental factors in cognition.
SETTING: To study medicine, first year students need to adapt their self-regulated learning in the context of a new peer group, study demands and educational culture. During this adaptation, students need to seek and interpret available cues to inform their self-assessment.
PARTICIPANTS: Longitudinal data were collected at five time points over 18 months from a diverse sample of seven first year medical students in an undergraduate medical programme, including 13.5 hours of interviews and 12 written reflections.
RESULTS: Before and after starting medical school, the participants' self-assessments were informed by environmental influences (exam results and comparison with peers) and personal influences (fear of failure and anxiety about not belonging). Early uncertainty meant self-assessments were both overestimated and underestimated. By the end of first year, an enhanced sense of belonging coincided with less fear of failure, less emphasis on exam performance and reduced competition with peers. Self-assessments became increasingly informed by evidence of clinical skills and knowledge gained, related to future professional competence.
CONCLUSION: Influences on medical students' ISAs change during the transition to studying medicine. A greater sense of belonging, and evidence of progress towards clinical competence, became more important to self-assessment than comparison with peers and exam performance. Our findings reinforce the importance of formative assessments, opportunities to study and socialise with peers, and early clinical experiences during first year. These experiences enhance ISA skills during the transition to medical school.
Affiliation(s)
- Jenny McDonald
- School of Medicine, Western Sydney University, Penrith South, New South Wales, Australia
- Samantha Ryan
- School of Medicine, Western Sydney University, Penrith South, New South Wales, Australia
- Sylvia Heeneman
- Department of Pathology, Maastricht University, Maastricht, Netherlands
- Wendy Hu
- School of Medicine, Western Sydney University, Penrith South, New South Wales, Australia
59. Bladt F, Khanal P, Prabhu AM, Hauke E, Kingsbury M, Saleh SN. Medical students' perception of changes in assessments implemented during the COVID-19 pandemic. BMC Medical Education 2022; 22:844. PMID: 36476483. PMCID: PMC9727955. DOI: 10.1186/s12909-022-03787-9.
Abstract
BACKGROUND: COVID-19 posed many challenges to medical education in the United Kingdom (UK), including implementing assessments during 4 months of national lockdowns within a 2-year period, when in-person education was prohibited. This study aimed to identify medical school assessment formats that emerged under COVID-19 restrictions, investigate medical students' perspectives on these, and identify influencing factors.
METHODS: The study consisted of two phases: a questionnaire asking medical students about the assessment changes they experienced, their satisfaction with these changes, and their preference among the different assessment formats that emerged; and semi-structured interviews with medical students across the UK to provide a deeper, contextualized understanding of the complex factors influencing their perspectives.
RESULTS: In the questionnaire responses, open-book assessments had the highest satisfaction and were the preferred option. In the case of assessment cancellation, an increase in the weighting of future assessments was preferred over an increase in the weighting of past assessments. Students were also satisfied with formative or pass-fail assessments. Interview analyses indicate that although cancellation, or replacement of summative assessments with formative assessments, reduced the heightened anxiety from additional COVID-19 stressors, students worried about possible future knowledge gaps resulting from reduced motivation for assessment-related study. Students' satisfaction was also affected by the timeliness of communication from universities regarding changes and by student involvement in the decision-making processes. Perceived fairness and standardisation of test-taking conditions were ranked as the most important factors influencing student satisfaction, followed closely by familiarity with the format. In contrast, technical issues, lack of transparency about changes, perceived unfairness around invigilation, and uncertainty around changes in assessment format and weighting contributed to dissatisfaction.
CONCLUSIONS: Online open-book assessments were seen as the most ideal by all participants, and students who experienced these were the most satisfied with their assessment change; they were perceived as the most fair and the most authentic to real-life medical training. We seek to inform educators about student perceptions of successful assessment strategies under COVID-19 restrictions and to provide evidence for debate on ongoing assessment reform and innovation. While this work looks specifically at assessment changes during COVID-19, understanding the factors affecting student perceptions of assessment is applicable to examinations beyond COVID-19.
Affiliation(s)
- Francesca Bladt
- Imperial College School of Medicine, Imperial College London, London, UK
- Prakriti Khanal
- Imperial College School of Medicine, Imperial College London, London, UK
- Elizabeth Hauke
- Centre for Languages, Culture, and Communication, Imperial College London, London, UK
- Martyn Kingsbury
- Centre for Higher Education Research, Imperial College London, London, UK
- Sohag Nafis Saleh
- Imperial College School of Medicine, Imperial College London, London, UK
60. Jamieson J, Gibson S, Hay M, Palermo C. Teacher, Gatekeeper, or Team Member: supervisor positioning in programmatic assessment. Advances in Health Sciences Education: Theory and Practice 2022. PMID: 36469231. DOI: 10.1007/s10459-022-10193-9.
Abstract
Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken, yet influential, cultural and relationship dynamics that interact with programmatic assessment to influence success. Pulling at this thread, we conducted a secondary analysis of focus groups and interviews (n = 44 supervisors) using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous, with some supervisors ascribing a passive and disempowered position to students while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described: either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant in detecting inadequate student performance. Programmatic assessment challenged this positioning by reorienting supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to assent to power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provided insights into how supervisors can be supported in programmatic assessment.
Affiliation(s)
- Janica Jamieson
- Monash University, Melbourne, Australia.
- School of Medical and Health Sciences, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia.
61
Mooney CJ, Pascoe JM, Blatt AE, Lang VJ, Kelly MS, Braun MK, Burch JE, Stone RT. Predictors of faculty narrative evaluation quality in medical school clerkships. MEDICAL EDUCATION 2022; 56:1223-1231. [PMID: 35950329 DOI: 10.1111/medu.14911] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 08/01/2022] [Accepted: 08/08/2022] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Narrative approaches to assessment provide meaningful and valid representations of trainee performance. Yet, narratives are frequently perceived as vague, nonspecific and low quality. To date, there is little research examining factors associated with narrative evaluation quality, particularly in undergraduate medical education. The purpose of this study was to examine associations of faculty- and student-level characteristics with the quality of faculty members' narrative evaluations of clerkship students. METHODS The authors reviewed faculty narrative evaluations of 50 students' clinical performance in their inpatient medicine and neurology clerkships, resulting in 165 and 87 unique evaluations in the respective clerkships. The authors evaluated narrative quality using the Narrative Evaluation Quality Instrument (NEQI). The authors used linear mixed effects modelling to predict total NEQI score. Explanatory covariates included the following: time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender. RESULTS Significantly higher narrative evaluation quality was associated with a shorter time to evaluation completion, with NEQI scores decreasing by approximately 0.3 points every 10 days following students' rotations (p = .004). Additionally, women faculty had statistically higher quality narrative evaluations, with NEQI scores 1.92 points greater than men faculty (p = .012). No other covariates were significant. CONCLUSIONS The quality of faculty members' narrative evaluations of medical students was associated with time to evaluation completion and faculty gender but not faculty experience in clinical education, faculty weeks on service, or the amount of time spent with students.
Findings advance understanding of ways to improve the quality of narrative evaluations, which is imperative given assessment models that will increase the volume of, and reliance on, narratives.
Affiliation(s)
- Christopher J Mooney
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jennifer M Pascoe
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Amy E Blatt
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Valerie J Lang
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Melanie K Braun
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
- Jaclyn E Burch
- School of Medicine and Dentistry, University of Rochester, Rochester, New York, USA
62
Wright C, Matthews K. An intentional approach to the development and implementation of meaningful assessment in advanced radiation therapy practice curricula. Tech Innov Patient Support Radiat Oncol 2022; 24:13-18. [PMID: 36124225 PMCID: PMC9482137 DOI: 10.1016/j.tipsro.2022.08.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 08/16/2022] [Accepted: 08/29/2022] [Indexed: 11/03/2022] Open
Abstract
Creating meaningful assessment for advanced radiation therapy practice training programs is a challenge, because it requires a balance of formative and summative assessments that meets the academic and professional needs of the practitioner as well as the requirements of local service delivery and educational and professional standards. This paper discusses educational strategies and models used to integrate assessment into theoretical and clinical curricula, allowing practitioners to demonstrate higher order cognitive knowledge, advanced level clinical performance and attitudes/values associated with advanced practice. The discussion draws upon concepts of constructive alignment and programmatic approaches to assessment, which use Bloom's taxonomy, Benner's beginner to competent model of skill development, and Miller's pyramid of clinical competence. These models are analysed with respect to an advanced practice program in adaptive radiation therapy to provide context.
Affiliation(s)
- Kristie Matthews
- Department of Medical Imaging and Radiation Sciences, 10 Chancellor's Walk, Monash University, Clayton, VIC 3800, Australia
- Peter MacCallum Cancer Centre, 305 Grattan Street, Melbourne, VIC 3000, Australia
63
Maundu J, Galbraith K, Croft H, Clark B, Kirsa S, Wilkinson G, Abeyaratne C. Development of workplace-based assessment tools to support postgraduate training of provisionally registered pharmacists in Australia. JOURNAL OF THE AMERICAN COLLEGE OF CLINICAL PHARMACY 2022. [DOI: 10.1002/jac5.1714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Josephine Maundu
- Australian Pharmacy Council, Civic Square, Australian Capital Territory, Australia
- Kirsten Galbraith
- Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, Victoria, Australia
- Hayley Croft
- School of Biomedical Sciences and Pharmacy, University of Newcastle, Callaghan, New South Wales, Australia
- Bronwyn Clark
- Australian Pharmacy Council, Civic Square, Australian Capital Territory, Australia
- Sue Kirsa
- Australian Pharmacy Council, Civic Square, Australian Capital Territory, Australia
- Glenys Wilkinson
- Australian Pharmacy Council, Civic Square, Australian Capital Territory, Australia
- Carmen Abeyaratne
- Australian Pharmacy Council, Civic Square, Australian Capital Territory, Australia
64
Kinnear B, Schumacher DJ, Driessen EW, Varpio L. How argumentation theory can inform assessment validity: A critical review. MEDICAL EDUCATION 2022; 56:1064-1075. [PMID: 35851965 PMCID: PMC9796688 DOI: 10.1111/medu.14882] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 07/07/2022] [Accepted: 07/15/2022] [Indexed: 05/21/2023]
Abstract
INTRODUCTION Many health professions education (HPE) scholars frame assessment validity as a form of argumentation in which interpretations and uses of assessment scores must be supported by evidence. However, what are purported to be validity arguments are often merely clusters of evidence without a guiding framework to evaluate, prioritise, or debate their merits. Argumentation theory is a field of study dedicated to understanding the production, analysis, and evaluation of arguments (spoken or written). The aim of this study is to describe argumentation theory, articulate the unique insights it can offer to HPE assessment, and present how different argumentation orientations can help reconceptualise the nature of validity in generative ways. METHODS The authors followed a five-step critical review process consisting of iterative cycles of focusing, searching, appraising, sampling, and analysing the argumentation theory literature. The authors generated and synthesised a corpus of manuscripts on argumentation orientations deemed to be most applicable to HPE. RESULTS We selected two argumentation orientations that we considered particularly constructive for informing HPE assessment validity: new rhetoric and informal logic. In new rhetoric, the goal of argumentation is to persuade, with a focus on an audience's values and standards. Informal logic centres on identifying, structuring, and evaluating arguments in real-world settings, with a variety of normative standards used to evaluate argument validity. DISCUSSION Both new rhetoric and informal logic provide philosophical, theoretical, or practical groundings that can advance HPE validity argumentation. New rhetoric's foregrounding of audience aligns with HPE's social imperative to be accountable to specific stakeholders such as the public and learners. Informal logic provides tools for identifying and structuring validity arguments for analysis and evaluation.
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- School of Health Professions Education (SHE), Maastricht University, Maastricht, The Netherlands
- Daniel J. Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
- Erik W. Driessen
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Lara Varpio
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
65
Favier R. Entrustable professional activities: bridging the gap between veterinary education and clinical practice. Vet Rec 2022; 191:378-380. [DOI: 10.1002/vetr.2414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Robert Favier
- IVC Evidensia / IVC Evidensia Academy, Utrecht, Netherlands
66
Palermo C, Aretz HT, Holmboe ES. Editorial: Competency frameworks in health professions education. Front Med (Lausanne) 2022; 9:1034729. [PMID: 36275787 PMCID: PMC9583250 DOI: 10.3389/fmed.2022.1034729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Accepted: 09/05/2022] [Indexed: 11/13/2022] Open
Affiliation(s)
- Claire Palermo
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
- H. Thomas Aretz
- Mass General Brigham, Global Advisory, Harvard Medical School, Boston, MA, United States
- Eric S. Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, IL, United States
67
Roberts C, Khanna P, Bleasel J, Lane S, Burgess A, Charles K, Howard R, O'Mara D, Haq I, Rutzou T. Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. MEDICAL EDUCATION 2022; 56:901-914. [PMID: 35393668 PMCID: PMC9542097 DOI: 10.1111/medu.14807] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 03/28/2022] [Accepted: 04/05/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Fundamental challenges exist in researching complex changes of assessment practice from traditional objective-focused 'assessments of learning' towards programmatic 'assessment for learning'. The latter emphasise both the subjective and social in collective judgements of student progress. Our context was a purposively designed programmatic assessment system implemented in the first year of a new graduate entry curriculum. We applied critical realist perspectives to unpack the underlying causes (mechanisms) that explained student experiences of programmatic assessment, to optimise assessment practice for future iterations. METHODS Data came from 14 in-depth focus groups (N = 112/261 students). We applied a critical realist lens drawn from Bhasker's three domains of reality (the actual, empirical and real) and Archer's concept of structure and agency to understand the student experience of programmatic assessment. Analysis involved induction (pattern identification), abduction (theoretical interpretation) and retroduction (causal explanation). RESULTS As a complex educational and social change, the assessment structures and culture systems within programmatic assessment provided conditions (constraints and enablements) and conditioning (acceptance or rejection of new 'non-traditional' assessment processes) for the actions of agents (students) to exercise their learning choices. The emergent underlying mechanism that most influenced students' experience of programmatic assessment was one of balancing the complex relationships between learner agency, assessment structures and the cultural system. CONCLUSIONS Our study adds to debates on programmatic assessment by emphasising how the achievement of balance between learner agency, structure and culture suggests strategies to underpin sustained changes (elaboration) in assessment practice. 
These include: faculty and student learning development to promote collective reflexivity and agency; optimising assessment structures by enhancing the integration of theory with practice; and changing the learning culture by both enhancing existing and developing new social structures between faculty and the student body, to gain acceptance of and trust in the new norms, beliefs and behaviours in assessing for and of learning.
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Priya Khanna
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Jane Bleasel
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Stuart Lane
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Annette Burgess
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Kellie Charles
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Faculty of Medicine and Health, Sydney Pharmacy School, Discipline of Pharmacology, The University of Sydney, Sydney, New South Wales, Australia
- Rosa Howard
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Deborah O'Mara
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales
- Inam Haq
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Timothy Rutzou
- School of Medicine, The University of Notre Dame, Chippendale, New South Wales, Australia
68
Bissell V, Dawson LJ. Assessment and feedback in dental education: a journey. Br Dent J 2022; 233:499-502. [PMID: 36151182 PMCID: PMC9507962 DOI: 10.1038/s41415-022-4968-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 07/07/2022] [Indexed: 11/09/2022]
Abstract
The authors describe their personal experience of responding to changing perceptions of best practice and the expanding evidence base in relation to assessment and feedback in dental education. Changes at a particular dental school over the years are described, along with a more general outlook, culminating in suggestions for future directions.
Affiliation(s)
- Vince Bissell
- School of Dentistry, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Luke J Dawson
- School of Dentistry, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
69
Ten Cate O. How can Entrustable Professional Activities serve the quality of health care provision through licensing and certification? CANADIAN MEDICAL EDUCATION JOURNAL 2022; 13:8-14. [PMID: 36091739 PMCID: PMC9441117 DOI: 10.36834/cmej.73974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
This paper about Entrustable Professional Activities (EPAs) was solicited to support the discussion about the future of licensing within the Medical Council of Canada. EPAs, units of professional practice to be entrusted to learners or professionals once they have demonstrated sufficient competence, were proposed in 2005 to operationalize competency-based postgraduate medical education and have become widely popular for various health professions education programs in many countries. EPAs break the breadth of competence for license down to units of practice that can be overseen, assessed, monitored, documented, and entrusted. EPAs together may constitute an individual's portfolio of qualifications, and define a scope of practice. A medical license and a specialty certification can then be defined as the required combination of EPAs for which one is qualified at any specific moment in time. That 'snapshot' could change over time and reflect the professional development of the individual, both in their competence and in their privileges to practice. Micro-credentialing and digital badges might become an adequate option to showcase one's scope of practice at any time and operationalize the idea of a dynamic portfolio of EPAs.
70
From Traditional to Programmatic Assessment in Three (Not So) Easy Steps. EDUCATION SCIENCES 2022. [DOI: 10.3390/educsci12070487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Programmatic assessment (PA) has strong theoretical and pedagogical underpinnings, but its practical implementation brings a number of challenges—particularly in traditional university settings involving large cohort sizes. This paper presents a detailed case report of an in-progress programmatic assessment implementation involving a decade of assessment innovation occurring in three significant and transformative steps. The starting position and subsequent changes represented in each step are reflected against the framework of established principles and implementation themes of PA. This case report emphasises the importance of ongoing innovation and evaluative research, the advantage of a dedicated team with a cohesive plan, and the fundamental necessity of electronic data collection. It also highlights the challenge of traditional university cultures, the potential advantage of a major pandemic disruption, and the necessity for curriculum renewal to support significant assessment change. Our PA implementation began with a plan to improve the learning potential of individual assessments and over the subsequent decade expanded to encompass a cohesive and course-wide assessment program involving meaningful aggregation of assessment data. In our context (large cohort sizes and a university-wide assessment policy), regular progress review meetings and progress decisions based on aggregated qualitative and quantitative data (rather than assessment format) remain local challenges.
71
Concordance of Narrative Comments with Supervision Ratings Provided During Entrustable Professional Activity Assessments. J Gen Intern Med 2022; 37:2200-2207. [PMID: 35710663 PMCID: PMC9296736 DOI: 10.1007/s11606-022-07509-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 03/24/2022] [Indexed: 10/18/2022]
Abstract
BACKGROUND Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding. OBJECTIVE In this study, we investigate the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks. DESIGN Data from assessments completed for students enrolled in the clerkship phase over 2 academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel. PARTICIPANTS A review panel, composed of faculty with specific expertise related to their roles within the EPA program, provided a "gold standard" supervision rating using the comments provided by the original assessor. MAIN MEASURES Interrater reliability (IRR) between members of the review panel and correlation coefficients (CC) between expert ratings and supervision ratings from original assessors. KEY RESULTS IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for complete physical exam. CC (Kendall's correlation coefficient W) between panel members' assignment of supervision ratings and the ratings provided by the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The supervision ratings of the expert panel had the highest degree of correlation with ratings provided during assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between supervision ratings provided with the narrative comments at the time of observation and supervision ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and the comfort level with, assessment of the task in a given specialty.
CONCLUSIONS To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with the criteria.
72
LaRochelle JS, Aagaard EM. Crisis as the Catalyst for Meaningful Change. J Gen Intern Med 2022; 37:2135-2136. [PMID: 35578127 PMCID: PMC9109946 DOI: 10.1007/s11606-022-07667-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Affiliation(s)
- Eva M Aagaard
- Washington University School of Medicine, St. Louis, MO, USA
73
Marceau M, St-Onge C, Gallagher F, Young M. Validity as a social imperative: users' and leaders' perceptions. CANADIAN MEDICAL EDUCATION JOURNAL 2022; 13:22-36. [PMID: 35875440 PMCID: PMC9297243 DOI: 10.36834/cmej.73518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Recently, validity as a social imperative was proposed as an emerging conceptualization of validity in the assessment literature in health professions education (HPE). To further develop our understanding, we explored the perceived acceptability and anticipated feasibility of validity as a social imperative with users and leaders engaged with assessment in HPE in Canada. METHODS We conducted a qualitative interpretive description study. Purposeful and snowball sampling were used to recruit participants for semi-structured individual interviews and focus groups. Each transcript was analyzed by two team members and discussed with the team until consensus was reached. RESULTS We conducted five focus groups and eleven interviews with two different stakeholder groups (users and leaders). Our findings suggest that the participants perceived the concept of validity as a social imperative as acceptable. Regardless of group, participants shared similar considerations regarding: the limits of traditional validity models, the concept's timeliness and relevance, the need to clarify some terms used to characterize the concept, the similarities with modern theories of validity, and the anticipated challenges in applying the concept in practice. In addition, participants discussed some limits with current approaches to validity in the context of workplace-based and programmatic assessment. CONCLUSION Validity as a social imperative can be interwoven throughout existing theories of validity and may represent how HPE is adapting traditional models of validity in order to respond to the complexity of assessment in HPE; however, challenges likely remain in operationalizing the concept prior to its implementation.
Affiliation(s)
- Mélanie Marceau
- School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Frances Gallagher
- School of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Quebec, Canada
- Meredith Young
- Institute of Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Québec, Canada
74
Westein MPD, Koster AS, Daelmans HEM, Collares CF, Bouvy ML, Kusurkar RA. Validity evidence for summative performance evaluations in postgraduate community pharmacy education. CURRENTS IN PHARMACY TEACHING & LEARNING 2022; 14:701-711. [PMID: 35809899 DOI: 10.1016/j.cptl.2022.06.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 05/30/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
INTRODUCTION Workplace-based assessment of competencies is complex. In this study, the validity of summative performance evaluations (SPEs) made by supervisors in a two-year longitudinal supervisor-trainee relationship was investigated in a postgraduate community pharmacy specialization program in the Netherlands. The construct of competence was based on an adapted version of the 2005 Canadian Medical Education Directive for Specialists (CanMEDS) framework. METHODS The study had a case study design. Both quantitative and qualitative data were collected. The year 1 and year 2 SPE scores of 342 trainees were analyzed using confirmatory factor analysis and generalizability theory. Semi-structured interviews were held with 15 supervisors and the program director to analyze the inferences they made and the impact of SPE scores on the decision-making process. RESULTS A good model fit was found for the adapted CanMEDS based seven-factor construct. The reliability/precision of the SPE measurements could not be completely isolated, as every trainee was trained in one pharmacy and evaluated by one supervisor. Qualitative analysis revealed that supervisors varied in their standards for scoring competencies. Some supervisors were reluctant to fail trainees. The competency scores had little impact on the high-stakes decision made by the program director. CONCLUSIONS The adapted CanMEDS competency framework provided a valid structure to measure competence. The reliability/precision of SPE measurements could not be established and the SPE measurements provided limited input for the decision-making process. Indications of a shadow assessment system in the pharmacies need further investigation.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Royal Dutch Pharmacists Association (KNMP), Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Andries S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Hester E M Daelmans
- Master's programme of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
- Carlos F Collares
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
- Marcel L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Utrecht, the Netherlands
- Rashmi A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, the Netherlands
75
Bolster MB, Chandra S, Demaerschalk BM, Esper CD, Genkins JZ, Hayden EM, Tan-McGrory A, Schwamm LH. Crossing the Virtual Chasm: Practical Considerations for Rethinking Curriculum, Competency, and Culture in the Virtual Care Era. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2022; 97:839-846. [PMID: 35263303 DOI: 10.1097/acm.0000000000004660] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Virtual care, introduced previously as a replacement for in-person visits, is now being integrated into clinical care delivery models to complement in-person visits. The COVID-19 pandemic sped up this process. The rapid uptake of virtual care at the start of the pandemic prevented educators from taking deliberate steps to design the foundational elements of the related learning environment, including workflow, competencies, and assessment methods. Educators must now pursue an informed and purposeful approach to design a curriculum and implement virtual care in the learning environment. Engaging learners in virtual care offers opportunities for novel ways to teach and assess their performance and to effectively integrate technology such that it is accessible and equitable. It also offers opportunities for learners to demonstrate professionalism in a virtual environment, to obtain a patient's history incorporating interpersonal and communication skills, to interact with multiple parties during a patient encounter (patient, caregiver, translator, telepresenter, faculty member), to enhance physical examination techniques via videoconferencing, and ideally to optimize demonstrations of empathy through "webside manner." Feedback and assessment, important features of training in any setting, must be timely, specific, and actionable in the new virtual care environment. Recognizing the importance of integrating virtual care into education, leaders from across the United States convened on September 10, 2020, for a symposium titled, "Crossing the Virtual Chasm: Rethinking Curriculum, Competency, and Culture in the Virtual Care Era." In this article, the authors share recommendations that came out of this symposium for the implementation of educational tools in the evolving virtual care environment. 
They present core competencies, assessment tools, precepting workflows, and technology to optimize the delivery of high-quality virtual care that is safe, timely, effective, efficient, equitable, and patient-centered.
Collapse
Affiliation(s)
- Marcy B Bolster
- M.B. Bolster is associate professor of medicine, Harvard Medical School, and director, Rheumatology Fellowship Training Program, Massachusetts General Hospital, Boston, Massachusetts; ORCID: https://orcid.org/0000-0002-5413-9345
- Shruti Chandra
- S. Chandra is assistant professor of emergency medicine, Thomas Jefferson University, director of Phase 3, Sidney Kimmel Medical College, and program director, Digital Health and Telehealth Education, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-0294-9397
- Bart M Demaerschalk
- B.M. Demaerschalk is professor of neurology, Mayo Clinic College of Medicine and Science, and medical director, Video Telemedicine Center for Connected Care and Center for Digital Health, Mayo Clinic, Phoenix, Arizona; ORCID: https://orcid.org/0000-0001-7262-817X
- Christine D Esper
- C.D. Esper is assistant professor of neurology, Emory University School of Medicine, and clinical director, Emory Brain Health Motion Capture Laboratory, Atlanta, Georgia; ORCID: https://orcid.org/0000-0002-1093-6537
- Julian Z Genkins
- J.Z. Genkins is a clinical informatics fellow, Department of Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0001-7673-8827
- Emily M Hayden
- E.M. Hayden is director of telehealth, Department of Emergency Medicine, Massachusetts General Hospital, Boston, Massachusetts
- Aswita Tan-McGrory
- A. Tan-McGrory is director, Disparities Solutions Center, and administrative director, Mongan Institute, Massachusetts General Hospital, Boston, Massachusetts
- Lee H Schwamm
- L.H. Schwamm is professor of neurology, Harvard Medical School, director, Center for TeleHealth, Massachusetts General Hospital and Harvard Medical School, and vice president, Digital Health Virtual Care, Mass General Brigham, Boston, Massachusetts
76
Barman L, McGrath C, Josephsson S, Silén C, Bolander Laksov K. Safeguarding fairness in assessments-How teachers develop joint practices. Medical Education 2022; 56:651-659. [PMID: 35263464] [PMCID: PMC9310582] [DOI: 10.1111/medu.14789]
Abstract
INTRODUCTION In light of reforms demanding increased transparency of student performance assessments, this study offers an in-depth perspective of how teachers develop their assessment practice. Much is known about factors that influence assessments, and different solutions claim to improve the validity and reliability of assessments of students' clinical competency. However, little is known about how teachers go about improving their assessment practices. This study aims to contribute empirical findings about how teachers' assessment practice may change when shared criteria for assessing students' clinical competency are developed and implemented. METHODS Using a narrative-in-action research approach grounded in narrative theory about human sense-making, one group of nine health professions teachers was studied over a period of 1 year. Drawing upon data from observations, interviews, formal documents and written reflections from these teachers, we performed a narrative analysis to reveal how these teachers made sense of experiences associated with the development and implementation of joint grading criteria for assessing students' clinical performances. RESULTS The findings present a narrative showing how a shared assessment practice took years to develop and was based on the teachers' changed approach to scrutiny. The teachers became highly motivated to use grading criteria to ensure fairness in assessments but, more importantly, to fulfil their moral obligation towards patients. The narrative also demonstrates how these teachers reasoned about dilemmas that arose when they applied standardised assessment criteria. DISCUSSION The narrative analysis shows clearly how teachers' development and application of assessment standards are embedded in local practices. Our findings highlight the importance of teachers' joint discussions on how to interpret criteria applied in formative and summative assessments of students' performances.
In particular, teachers' different approaches to assessing 'pieces of skills' versus making holistic judgements on students' performances, regardless of whether the grading criteria are clear and well-articulated on paper, should be acknowledged. Understanding the journey that these teachers made gives new perspectives as to how faculty can be supported when assessments of professionalism and clinical competency are developed.
Affiliation(s)
- Linda Barman
- Department of Learning in Engineering Sciences, KTH Royal Institute of Technology, Stockholm, Sweden
- Cormac McGrath
- Department of Education, Stockholm University, Stockholm, Sweden
- Staffan Josephsson
- Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Charlotte Silén
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
- Klara Bolander Laksov
- Department of Education, Stockholm University, Stockholm, Sweden
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
77
de Jong LH, Bok HGJ, Schellekens LH, Kremer WDJ, Jonker FH, van der Vleuten CPM. Shaping the right conditions in programmatic assessment: how quality of narrative information affects the quality of high-stakes decision-making. BMC Medical Education 2022; 22:409. [PMID: 35643442] [PMCID: PMC9148525] [DOI: 10.1186/s12909-022-03257-2]
Abstract
BACKGROUND Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities are aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on perceived saturation of information during the interpretative process of high-stakes decision-making. METHODS In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS The examiners less frequently perceived saturation of information in the portfolios with low-quality narrative feedback. They also mentioned consistency of information as a factor that influenced their perception of saturation. Although examiners generally took an idiosyncratic approach to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and in the quality of narrative feedback. CONCLUSION The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback in making robust decisions within portfolios that are expected to be more difficult to assess. Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process in reaction to the intervention and other triggers, taking an iterative and responsive approach.
Affiliation(s)
- Lubberta H de Jong
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Harold G J Bok
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Lonneke H Schellekens
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Faculty of Social and Behavioural Sciences, Educational Consultancy and Professional Development, Utrecht University, Utrecht, The Netherlands
- Wim D J Kremer
- Department Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- F Herman Jonker
- Department Population Health Sciences, Section Farm Animal Health, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
78
Meyer EG, Harvey E, Durning SJ, Uijtdehaage S. Pre-clerkship EPA assessments: a thematic analysis of rater cognition. BMC Medical Education 2022; 22:347. [PMID: 35524304] [PMCID: PMC9077896] [DOI: 10.1186/s12909-022-03402-x]
Abstract
BACKGROUND Entrustable professional activity (EPA) assessments measure learners' competence with an entrustment or supervisory scale. Designed for workplace-based assessment, EPA assessments have also been proposed for undergraduate medical education (UME), where assessments frequently occur outside the workplace and may be less intuitive, raising validity concerns. This study explored how assessors make entrustment determinations in UME, with additional specific comparison based on familiarity with prior performance in the context of longitudinal student-assessor relationships. METHODS A qualitative approach using think-alouds was employed. Assessors assessed two students (familiar and unfamiliar) completing a history and physical examination using a supervisory scale and then thought aloud after each assessment. We conducted a thematic analysis of assessors' response processes and compared them based on their familiarity with a student. RESULTS Four themes and fifteen subthemes were identified. The most prevalent theme related to "student performance." The other three themes included "frame of reference," "assessor uncertainty," and "the patient." "Previous student performance" and "affective reactions" were subthemes more likely to inform scoring when faculty were familiar with a student, while unfamiliar faculty were more likely to reference "self" and "lack confidence in their ability to assess." CONCLUSIONS Student performance appears to be assessors' main consideration for all students, providing some validity evidence for the response process in EPA assessments. Several problematic themes could be addressed with faculty development, while others appear to be inherent to entrustment and may be more challenging to mitigate. Differences based on assessor familiarity with a student merit further research on how trust develops over time.
Affiliation(s)
- Eric G Meyer
- Department of Psychiatry, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Rd, Bethesda, MD, 20814, USA
- Emily Harvey
- The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
- Department of Medicine, Center for Health Professions Education, Bethesda, MD, USA
- Steven J Durning
- Department of Medicine, Center for Health Professions Education, Bethesda, MD, USA
79
Taher A, Chow J, Kwon MS, Hunter D, Lucewicz A, Samarawickrama C. Determining the learning curve for a novel microsurgical procedure using histopathology. BMC Medical Education 2022; 22:342. [PMID: 35509098] [PMCID: PMC9066982] [DOI: 10.1186/s12909-022-03407-6]
Abstract
PURPOSE Wet laboratories are becoming an increasingly important training tool as part of a push towards a proficiency-based training model. We created a microsurgical wet laboratory to investigate the utility of histopathology in assessing surgical outcomes and to determine the learning curve of a novel microsurgical procedure. METHODS A microsurgical wet laboratory was established using pig eyes to simulate the human cornea. Three novice surgeons and an experienced surgeon performed an anterior cornea lamellar dissection, and the duration of the procedure was recorded. With the aid of histological analysis, the thickness and characteristics of the dissected graft were recorded. The number of attempts to complete the experiment, defined as three successful dissections with mean thickness below 100 μm, was documented. RESULTS The use of histopathology was highly successful, allowing in-depth analysis of the dissected graft for each attempt. Trainees reached the endpoint of the study in 21, 26 and 36 attempts (mean: 28 attempts), whilst the corneal surgeon completed the experiment in 12 attempts (p = 0.07). Mean dissection thickness decreased over time for all participants. The mean dissection time for trainees was 10.6 ± 4.2 min, compared to a mean of 8.2 ± 3.1 min for the corneal surgeon (p = 0.03). CONCLUSION We propose a corneal wet laboratory model that allows for simple, efficient, and flexible microsurgical training. The use of histopathological analysis allows for careful graft analysis, providing objective feedback throughout the training exercise. Trainees demonstrated improvements in the three key aspects of the procedure: accuracy, as evidenced by decreasing histological thickness; confidence, by self-report; and fluidity, by decreasing duration of the procedure.
Affiliation(s)
- Amir Taher
- Royal Prince Alfred Hospital, 50 Missenden Rd, Camperdown, NSW, 2050, Australia
- Centre for Vision Research, Westmead Institute for Medical Research, 176 Hawkesbury Rd, Westmead, NSW, 2145, Australia
- Joanne Chow
- Centre for Vision Research, Westmead Institute for Medical Research, 176 Hawkesbury Rd, Westmead, NSW, 2145, Australia
- School of Medical Sciences, University of Sydney, Camperdown, NSW, 2006, Australia
- Min Sung Kwon
- Centre for Vision Research, Westmead Institute for Medical Research, 176 Hawkesbury Rd, Westmead, NSW, 2145, Australia
- Damien Hunter
- Centre for Vision Research, Westmead Institute for Medical Research, 176 Hawkesbury Rd, Westmead, NSW, 2145, Australia
- Westmead Clinical School, Discipline of Ophthalmology, University of Sydney, Darcy Rd, Westmead, NSW, 2145, Australia
- Ania Lucewicz
- University of Sydney, Darcy Rd, Westmead, NSW, 2145, Australia
- Chameen Samarawickrama
- Centre for Vision Research, Westmead Institute for Medical Research, 176 Hawkesbury Rd, Westmead, NSW, 2145, Australia
- Westmead Clinical School, Discipline of Ophthalmology, University of Sydney, Darcy Rd, Westmead, NSW, 2145, Australia
- Central Clinical School, Sydney University, Johns Hopkins Dr, Camperdown, NSW, 2050, Australia
- Westmead Hospital, Cnr Hawkesbury Road and Darcy Road, Westmead, NSW, 2145, Australia
80
O'Connor E, Doyle E. A Scoping Review of Assessment Methods Following Undergraduate Clinical Placements in Anesthesia and Intensive Care Medicine. Front Med (Lausanne) 2022; 9:871515. [PMID: 35449804] [PMCID: PMC9016165] [DOI: 10.3389/fmed.2022.871515]
Abstract
Introduction Anesthesia and intensive care medicine are relatively new undergraduate medical placements. Both present unique learning opportunities and educational challenges to trainers and medical students. In the context of ongoing advances in medical education assessment and the importance of robust assessment methods, our scoping review sought to describe current research on medical student assessment after anesthesia and intensive care placements. Methods Following Levac's six-step scoping review guide, we searched PubMed, EMBASE, EBSCO, SCOPUS, and Web of Science from 1980 to August 2021, including English-language original articles describing assessment after undergraduate medical placements in anesthesia and intensive care medicine. Results were reported in accordance with PRISMA scoping review guidelines. Results Nineteen articles published between 1983 and 2021 were selected for detailed review, with a mean of 119 participants and a median placement duration of 4 weeks. The most common assessment tools were multiple-choice questions (7 studies), written assessment (6 studies) and simulation (6 studies). Seven studies used more than one assessment tool. All pre-/post-test studies showed an improvement in learning outcomes following clinical placements. No studies used workplace-based assessments or entrustable professional activities. One study included an account of theoretical considerations in study design. Discussion A diverse range of evidence-based assessment tools has been used in undergraduate medical assessment after anesthesia and intensive care placements. There is little evidence that recent developments in workplace assessment, entrustable activities and programmatic assessment have translated to undergraduate anesthesia or intensive care practice. This represents an area for further research as well as for curricular and assessment development.
Affiliation(s)
- Enda O'Connor
- Department of Anesthesia and Intensive Care Medicine, St James's Hospital, Dublin, Ireland
- School of Medicine, Trinity College, Dublin, Ireland
- Evin Doyle
- Department of Anesthesia and Intensive Care Medicine, St James's Hospital, Dublin, Ireland
- School of Medicine, Trinity College, Dublin, Ireland
81
Do Resident Archetypes Influence the Functioning of Programs of Assessment? Education Sciences 2022. [DOI: 10.3390/educsci12050293]
Abstract
While most case studies consider how programs of assessment may influence residents' achievement, we engaged in a qualitative, multiple case study to model how resident engagement and performance can reciprocally influence the program of assessment. We conducted virtual focus groups with program leaders from four residency training programs from different disciplines (internal medicine, emergency medicine, neurology, and rheumatology) and institutions. We facilitated discussion with live screen-sharing to (1) improve upon a previously derived model of programmatic assessment and (2) explore how different resident archetypes (sample profiles) may influence their program of assessment. Participants agreed that differences in resident engagement and performance can influence their programs of assessment in some (mal)adaptive ways. For residents who are disengaged and weakly performing (of whom there are a few), significantly more time is spent making sense of problematic evidence, arriving at a decision, and generating recommendations, whereas for residents who are engaged and performing strongly (the vast majority), significantly less effort is thought to be spent on discussion and formalized recommendations. These findings motivate us to fulfill the potential of programmatic assessment by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may strain existing processes.
82
Abstract
Educational change in higher education is challenging and complex, requiring engagement with a multitude of perspectives and contextual factors. In this paper, we present a case study based on our experiences of enacting a fundamental educational change in a medical program; namely, the steps taken in the transition to programmatic assessment. Specifically, we reflect on the successes and failures in embedding a coaching culture into programmatic assessment. To do this, we refer to the principles of programmatic assessment as they apply to this case and conclude with some key lessons that we have learnt from engaging in this change process. Fostering a culture of programmatic assessment that supports learners to thrive through coaching has required compromise and adaptability, particularly in light of the changes to teaching and learning necessitated by the global pandemic. We continue to inculcate this culture and enact the principles of programmatic assessment with a focus on continuous quality improvement.
83
Gingerich A, Sebok-Syer SS, Lingard L, Watling CJ. The shift from disbelieving underperformance to recognising failure: A tipping point model. Medical Education 2022; 56:395-406. [PMID: 34668213] [DOI: 10.1111/medu.14681]
Abstract
CONTEXT Coming face to face with a trainee who needs to be failed is a stern test for many supervisors. In response, supervisors have been encouraged to report evidence of failure through numerous assessment redesigns. And yet, there are lingering signs that some remain reluctant to engage in assessment processes that could alter a trainee's progression in the programme. Failure is highly consequential for all involved and, although rare, requires explicit study. Recent work identified a phase of disbelief that preceded identification of underperformance. What remains unknown is how supervisors come to recognise that a trainee needs to be failed. METHODS Following constructivist grounded theory methodology, 42 physicians and surgeons in British Columbia, Canada shared their experiences supervising trainees who profoundly underperformed, required extensive remediation or were dismissed from the programme. We identified recurring themes using an iterative, constant comparative process. RESULTS The shift from disbelieving underperformance to recognising failure involves three patterns: accumulation of significant incidents, discovery of an egregious error after negligible deficits or illumination of an overlooked deficit when pointed out by someone else. Recognising failure was accompanied by anger, certainty and a sense of duty to prevent harm. CONCLUSION Coming to the point of recognising that a trainee needs to fail is akin to the psychological process of a tipping point where people first realise that noise is signal and cross a threshold where the pattern is no longer an anomaly. The co-occurrence of anger raises the possibility for emotions to be a driver of, and not only a barrier to, recognising failure. This warrants caution because tipping points, and anger, can impede detection of improvement. 
Our findings point towards possibilities for supporting earlier identification of underperformance and overcoming reluctance to report failure, along with countermeasures to compensate for difficulties in detecting improvement once failure has been verified.
Affiliation(s)
- Andrea Gingerich
- Division of Medical Sciences, University of Northern British Columbia, Prince George, British Columbia, Canada
- Lorelei Lingard
- Schulich School of Medicine & Dentistry, Centre for Education Research & Innovation, Western University, London, Ontario, Canada
- Christopher J Watling
- Schulich School of Medicine & Dentistry, Centre for Education Research & Innovation, Western University, London, Ontario, Canada
84
Meyer EG, Boulet JR, Monahan PB, Durning SJ, Uijtdehaage S. A Pilot Study of the Generalizability of Preclinical Entrustment Assessments in Undergraduate Medical Education. Academic Medicine 2022; 97:562-568. [PMID: 35020614] [DOI: 10.1097/acm.0000000000004590]
Abstract
PURPOSE The reproducibility and consistency of assessments of entrustable professional activities (EPAs) in undergraduate medical education (UME) have been identified as potential areas of concern. EPAs were designed to facilitate workplace-based assessments by faculty with a shared mental model of a task who could observe a trainee complete the task multiple times. In UME, trainees are frequently assessed outside the workplace by faculty who only observe a task once. METHOD In November 2019, the authors conducted a generalizability study (G-study) to examine the impact of student, faculty, case, and faculty familiarity with the student on the reliability of 162 entrustment assessments completed in a preclerkship environment. Three faculty were recruited to evaluate 18 students completing 3 standardized patient (SP) cases. Faculty familiarity with each student was determined. Decision studies were also completed. Secondary analysis of the relationship between student performance and entrustment (scoring inference) compared average SP checklist scores and entrustment scores. RESULTS G-study analysis revealed that entrustment assessments struggled to achieve moderate reliability. The student accounted for 30.1% of the variance in entrustment scores with minimal influence from faculty and case, while the relationship between student and faculty accounted for 26.1% of the variance. G-study analysis also revealed a difference in generalizability between assessments by unfamiliar (φ = 0.75) and familiar (φ = 0.27) faculty. Subanalyses showed that entrustment assessments by familiar faculty were moderately correlated to average SP checklist scores (r = 0.44, P < .001), while those by unfamiliar faculty were weakly correlated (r = 0.16, P = .13). 
CONCLUSIONS While faculty and case had a limited impact on the generalizability of entrustment assessments made outside the workplace in UME, faculty who were familiar with a student's ability had a notable impact on generalizability and potentially on the scoring validity of entrustment assessments, which warrants further study.
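The generalizability (phi) coefficients reported in this abstract come from G-theory: universe-score variance divided by universe-score variance plus absolute error variance, with each error component scaled by the number of conditions sampled. As a rough illustration only (the variance components and decision-study design below are invented, not taken from the study):

```python
def phi_coefficient(var_person, error_components, n_conditions):
    """Absolute generalizability coefficient (phi) from G-study variance
    components: var_person / (var_person + absolute error variance), where
    each error variance is divided by the number of sampled conditions."""
    absolute_error = sum(var / n_conditions[facet]
                         for facet, var in error_components.items())
    return var_person / (var_person + absolute_error)

# Hypothetical variance components: student is the object of measurement;
# rater and residual terms are error facets, loosely echoing the study's
# student/faculty/case design.
var_student = 0.30
error = {"rater": 0.02, "student_x_rater": 0.26, "residual": 0.42}
# Hypothetical decision study: 3 raters, each observation crossed so the
# residual is averaged over 9 rater-by-case conditions.
n = {"rater": 3, "student_x_rater": 3, "residual": 9}
print(round(phi_coefficient(var_student, error, n), 2))  # → 0.68
```

More raters or cases shrink the error terms and raise phi, which is why decision studies are used to pick how many observations a defensible entrustment decision needs.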
Affiliation(s)
- Eric G Meyer
- E.G. Meyer is associate professor, Department of Psychiatry, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: https://orcid.org/0000-0002-0538-4344
- John R Boulet
- J.R. Boulet is adjunct professor, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland
- Patrick B Monahan
- P.B. Monahan is assistant professor, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: https://orcid.org/0000-0003-4069-170X
- Steven J Durning
- S.J. Durning is professor, Department of Medicine, Division of Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0002-2107-0126
- Sebastian Uijtdehaage
- S. Uijtdehaage is professor, Department of Medicine, Division of Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: https://orcid.org/0000-0001-8598-4683
85
A mobile application to facilitate implementation of programmatic assessment in anaesthesia training. Br J Anaesth 2022; 128:990-996. [DOI: 10.1016/j.bja.2022.02.038]
86
Assessment for Learning: The University of Toronto Temerty Faculty of Medicine M.D. Program Experience. Education Sciences 2022. [DOI: 10.3390/educsci12040249]
Abstract
(1) Background: Programmatic assessment optimizes the coaching, learning, and decision-making functions of assessment. It utilizes multiple data points, fit for purpose, which on their own guide learning but taken together form the basis of holistic decision making. While its principles are agreed upon, implementation varies according to context. (2) Context: The University of Toronto MD program implemented programmatic assessment as part of a major curriculum renewal. (3) Design and implementation: This paper, structured around best practices in programmatic assessment, describes the implementation of programmatic assessment in the University of Toronto MD program, one of Canada's largest. The case study illustrates the components of the programmatic assessment framework, tracking and making sense of data, how academic decisions are made, and how data guide coaching and tailored support and learning plans for learners. (4) Lessons learned: Key implementation lessons are discussed, including the role of context, resources, alignment with curriculum renewal, and the role of faculty development and program evaluation. (5) Conclusions: Large-scale programmatic assessment implementation is resource intensive and requires commitment both initially and on a sustained basis, with ongoing improvement and steadfast championing of the cause of optimally leveraging the learning function of assessment.
87
The Importance of Professional Development in a Programmatic Assessment System: One Medical School's Experience. Education Sciences 2022. [DOI: 10.3390/educsci12030220]
Abstract
The Cleveland Clinic Lerner College of Medicine of Case Western Reserve University (CCLCM) was created in 2004 as a 5-year undergraduate medical education program with a mission to produce future physician-investigators. CCLCM’s assessment system aligns with the principles of programmatic assessment. The curriculum is organized around nine competencies, where each competency has milestones that students use to self-assess their progress and performance. Throughout the program, students receive low-stakes feedback from a myriad of assessors across courses and contexts. With support of advisors, students construct portfolios to document their progress and performance. A separate promotion committee makes high-stakes promotion decisions after reviewing students’ portfolios. This case study describes a systematic approach to provide both student and faculty professional development essential for programmatic assessment. Facilitators, barriers, lessons learned, and future directions are discussed.
88
Farrell RM, Gilbert GE, Betance L, Huck J, Hunt JA, Dundas J, Pope E. Evaluating validity evidence for 2 instruments developed to assess students' surgical skills in a simulated environment. Vet Surg 2022; 51:788-800. [PMID: 35261056] [PMCID: PMC9314123] [DOI: 10.1111/vsu.13791]
Abstract
Objective To gather and evaluate validity evidence, in the form of content and reliability of scores, for 2 surgical skills assessment instruments: 1) a checklist, and 2) a modified form of the Objective Structured Assessment of Technical Skills (OSATS) global rating scale (GRS). Study design Prospective randomized blinded study. Sample population Veterinary surgical skills educators (n = 10) evaluated content validity. Scores from students in their third preclinical year of veterinary school (n = 16) were used to assess reliability. Methods Content validity was assessed using Lawshe's method to calculate the Content Validity Index (CVI) for the checklist and modified OSATS GRS. The importance and relevance of each item was determined in relation to the skills needed to successfully perform supervised surgical procedures. The reliability of scores produced by both instruments was determined using generalizability (G) theory. Results Based on the results of the content validation, 39 of 40 checklist items were included; the 39-item checklist CVI was 0.81. One of the 6 OSATS GRS items was included; the 1-item GRS CVI was 0.80. The G-coefficients for the 40-item checklist and 6-item GRS were 0.85 and 0.79, respectively. Conclusion Content validity was very good for the 39-item checklist and good for the 1-item OSATS GRS. The reliability of scores from both instruments was acceptable for a moderate-stakes examination. Impact These results provide evidence to support the use of the checklist described and a modified 1-item OSATS GRS in moderate-stakes examinations when evaluating preclinical third-year veterinary students' technical surgical skills on low-fidelity models.
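Lawshe's method, cited in this abstract, reduces to simple arithmetic: each item gets a content validity ratio (CVR) of (n_e - N/2) / (N/2), where n_e of N panelists rate the item essential, and the CVI is the mean CVR of retained items. A minimal sketch under that standard definition (the panel ratings below are invented for illustration, not the study's data):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR for one item: (n_e - N/2) / (N/2), ranging from -1
    (nobody rates it essential) to +1 (everybody does)."""
    half = n_panelists / 2
    return (n_essential - half) / half

def content_validity_index(cvrs):
    """CVI of an instrument: the mean CVR over its retained items."""
    return sum(cvrs) / len(cvrs)

# Hypothetical 10-person panel (matching the study's panel size):
# three retained items rated essential by 9, 10, and 8 panelists.
ratios = [content_validity_ratio(ne, 10) for ne in (9, 10, 8)]
print(round(content_validity_index(ratios), 2))  # → 0.8
```

Because CVR is anchored at 0 when exactly half the panel rates an item essential, instrument-level CVIs near 0.8, like those reported here, indicate broad panel agreement.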
Affiliation(s)
- Robin M Farrell
- School of Veterinary Medicine, University College Dublin, Dublin, Ireland
- Gregory E Gilbert
- ΣigmaΣtats Consulting, LLC, Charleston, South Carolina, USA
- Biostatistics and Medical Writing, Real World Evidence Strategy & Analytics, ICON Commercialization & Outcomes Services, North Wales, Pennsylvania, USA
- Larry Betance
- School of Veterinary Medicine, Ross University, Basseterre, Saint Kitts and Nevis
- Jennifer Huck
- School of Veterinary Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Julie A Hunt
- College of Veterinary Medicine, Lincoln Memorial University, Harrogate, Tennessee, USA
- James Dundas
- Atlantic Veterinary College, Charlottetown, Prince Edward Island, Canada
- Eric Pope
- School of Veterinary Medicine, Ross University, Basseterre, Saint Kitts and Nevis

89
Lucey CR, Davis JA, Green MM. We Have No Choice but to Transform: The Future of Medical Education After the COVID-19 Pandemic. Acad Med 2022; 97:S71-S81. [PMID: 34789658] [PMCID: PMC8855762] [DOI: 10.1097/acm.0000000000004526]
Abstract
Medical education exists to prepare the physician workforce that our nation needs, but the COVID-19 pandemic threatened to disrupt that mission. At the same time, the national increase in awareness of social injustice exposed significant gaps in health care, medicine, and our medical education ecosystem. Crises in all industries often present leaders with no choice but to transform, or to fail. In this perspective, the authors suggest that medical education is at such an inflection point and propose a transformational vision of the medical education ecosystem, followed by a 10-year, 10-point plan that focuses on building the workforce that will achieve that vision. Broad themes include adopting a national vision; enhancing medicine's role in social justice through broadened curricula and a focus on communities; establishing equity in learning and the processes related to learning, including learner wellness, as a baseline; and realizing the promise of competency-based, time-variable training. Ultimately, 2020 can be viewed as a strategic inflection point in medical education if those who lead and regulate it analyze and apply lessons learned from the pandemic and its associated syndemics.
Affiliation(s)
- Catherine R. Lucey
- C.R. Lucey is professor of medicine, executive vice dean, and vice dean for education, University of California, San Francisco School of Medicine, San Francisco, California
- John A. Davis
- J.A. Davis is professor of medicine, associate dean for curriculum, and interim associate dean for students, University of California, San Francisco School of Medicine, San Francisco, California
- Marianne M. Green
- M.M. Green is Raymond H. Curry, MD Professor of Medical Education, professor of medicine, and vice dean for education, Northwestern University Feinberg School of Medicine, Chicago, Illinois

90
Lacasse M, Renaud JS, Côté L, Lafleur A, Codsi MP, Dove M, Pélissier-Simard L, Pitre L, Rheault C. [Feedback Guide for direct observation of family medicine residents in Canada: a francophone tool]. Can Med Educ J 2022; 13:29-54. [PMID: 35321416] [PMCID: PMC8909829] [DOI: 10.36834/cmej.72587]
Abstract
BACKGROUND There is no CanMEDS-FM-based milestone tool to guide feedback during direct observation (DO). We developed a guide to support the documentation of feedback during DO in Canadian family medicine (FM) programs. METHODS The Guide was designed in three phases in collaboration with five Canadian FM programs, each with at least one French-speaking teaching site: 1) literature review and needs assessment; 2) development of the DO Feedback Guide; 3) testing of the Guide in a video simulation context with qualitative content analysis. RESULTS Phase 1 demonstrated the need for a narrative guide aimed at 1) specifying mutual expectations according to the resident's level of training and the clinical context, 2) providing the supervisor with tools and structure for their observations, and 3) facilitating documentation of feedback. Phase 2 produced the Guide, in paper and electronic formats, meeting the identified needs. In phase 3, 15 supervisors used the Guide across three residency levels. Following this testing, the Guide was adjusted to prompt recall of phases of the clinical encounter that were often overlooked during feedback (before the consultation, diagnosis, and follow-up) and to suggest preferred feedback formats (stimulating questions, clarifying questions, reflections). CONCLUSION Grounded in evidence and a collaborative approach, this Guide will equip French-speaking Canadian supervisors and residents performing DO in family medicine.
Affiliation(s)
- Luc Côté
- Université Laval, Québec, Canada

91
Castanelli DJ, Weller JM, Molloy E, Bearman M. Trust, power and learning in workplace-based assessment: The trainee perspective. Med Educ 2022; 56:280-291. [PMID: 34433230] [PMCID: PMC9292503] [DOI: 10.1111/medu.14631]
Abstract
For trainees to participate meaningfully in workplace-based assessment (WBA), they must have trust in their assessor. However, the trainee's dependent position complicates such trust. Understanding how power and trust influence WBAs may help us make them more effective learning opportunities. We conducted semi-structured interviews with 17 postgraduate anaesthesia trainees across Australia and New Zealand. Sensitised by notions of power, we used constructivist grounded theory methodology to examine trainees' experiences with trusting their supervisors in WBAs. In our trainee accounts, we found that supervisors held significant power to mediate access to learning opportunities and influence trainee progress in training. All episodes where supervisors could observe trainees, from simply working together to formal WBAs, were seen to generate assessment information with potential consequences. In response, trainees actively acquiesced to a deferential role, which helped them access desirable expertise and minimise the risk of reputational harm. Trainees granted trust based on how they anticipated a supervisor would use the power inherent in their role. Trainees learned to ration exposure of their authentic practice to supervisors in proportion to their trust in them. Trainees were more trusting and open to learning when supervisors used their power for the trainee's benefit, and avoided WBAs with supervisors they perceived as less trustworthy. If assessment for learning is to flourish, then the trainee-supervisor power dynamic must evolve. Enhancing supervisor behaviour through reflection and professional development to better reward trainee trust would invite more trainee participation in assessment for learning. Modifying the assessment system design to nudge the power balance towards the trainee may also help. Modifications could include designated formative and summative assessments or empowering trainees to select which assessments count towards progress decisions. Attending to power and trust in WBA may stimulate progress towards the previously aspirational goal of assessment for learning in the workplace.
Affiliation(s)
- Damian J. Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, Victoria, Australia
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, Victoria, Australia
- Centre for Research and Assessment in Digital Learning (CRADLE), Deakin University, Geelong, Victoria, Australia
- Jennifer M. Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia, Auckland, New Zealand
- Elizabeth Molloy
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Margaret Bearman
- Centre for Research and Assessment in Digital Learning (CRADLE), Deakin University, Geelong, Victoria, Australia

92
McMahon CJ, Tretter JT, Redington AN, Bu’Lock F, Zühlke L, Heying R, Mattos S, Krishna Kumar R, Jacobs JP, Windram JD. Medical education and training within congenital cardiology: current global status and future directions in a post COVID-19 world. Cardiol Young 2022; 32:185-197. [PMID: 33843546] [PMCID: PMC8111178] [DOI: 10.1017/s1047951121001645]
Abstract
Despite enormous strides in our field with respect to patient care, there has been surprisingly limited dialogue on how to train and educate the next generation of congenital cardiologists. This paper reviews the current status of training and evolving developments in medical education pertinent to congenital cardiology. The adoption of competency-based medical education has been lauded as a robust framework for contemporary medical education over the last two decades. However, inconsistencies in frameworks across different jurisdictions remain, and bridging gaps between competency frameworks and clinical practice has proved challenging. Entrustable professional activities have been proposed as a solution, but integration of such activities into busy clinical cardiology practices will present its own challenges. Consequently, this pivot towards a more structured approach to medical education necessitates the widespread availability of appropriately trained medical educationalists, a development that will better inform curriculum development, instructional design, and assessment. Differentiation between superficial and deep learning, and the vital role of rich formative feedback and coaching, should guide our trainees to become self-regulated learners, capable of critical reasoning yet retaining an awareness of uncertainty and ambiguity. Furthermore, disruptive innovations such as "technology enhanced learning" may be leveraged to improve education, especially for trainees from low- and middle-income countries. Each of these initiatives will require resources, widespread advocacy and raised awareness, and publication of supporting data, and so it is especially gratifying that Cardiology in the Young has fostered a progressive approach, agreeing to publish one or two articles in this domain in each journal issue.
Affiliation(s)
- Colin J McMahon
- Department of Paediatric Cardiology, Children’s Health Ireland at Crumlin, Dublin, Ireland
- School of Medicine, University College Dublin, Belfield, Dublin 4, Ireland
- Justin T Tretter
- The Heart Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Andrew N Redington
- The Heart Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Frances Bu’Lock
- Department of Paediatric Cardiology, East Midlands Congenital Heart Centre, University Hospitals of Leicester NHS Trust, Leicester, UK
- Liesl Zühlke
- Division of Paediatric Cardiology, Department of Paediatrics, Red Cross War Memorial Children’s Hospital, University of Cape Town, Cape Town, South Africa
- Division of Cardiology, Department of Medicine, Groote Schuur Hospital, University of Cape Town, Cape Town, South Africa
- Ruth Heying
- Department of Paediatric Cardiology, Leuven, Belgium
- Sandra Mattos
- Department of Paediatric Cardiology, Royal Portuguese Hospital, Recife, Brazil
- R Krishna Kumar
- Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala, India
- Jeffrey P Jacobs
- Congenital Heart Center, Division of Thoracic and Cardiovascular Surgery, Department of Surgery, University of Florida, Gainesville, Florida, USA
- Jonathan D Windram
- Department of Cardiology, Mazankowski Heart Institute, University of Alberta, Edmonton, Alberta, Canada

93
Stringer JK, Gruppen LD, Ryan MS, Ginzburg SB, Cutrer WB, Wolff M, Santen SA. Measuring the Master Adaptive Learner: Development and Internal Structure Validity Evidence for a New Instrument. Med Sci Educ 2022; 32:183-193. [PMID: 35003878] [PMCID: PMC8726526] [DOI: 10.1007/s40670-021-01491-9]
Abstract
BACKGROUND The master adaptive learner (MAL) uses self-regulated learning skills to develop adaptive, efficient, and accurate skills in practice. Given rapid changes in healthcare, it is essential that medical students develop into MALs, and an instrument is needed that can capture MAL behaviors and characteristics. The objective of this study was to develop an instrument for measuring the MAL process in medical students and evaluate its psychometric properties. METHODS As part of curriculum evaluation, 818 students completed previously developed instruments with validity evidence, including the Self-Regulated Learning Perception Scale, Brief Resilience Scale, Goal Orientation Scale, and Jefferson Scale of Physician Lifelong Learning. The authors performed exploratory factor analysis to examine underlying relationships between items. Items with high factor loadings were retained, and Cronbach's alpha was computed. In parallel, the multi-institutional research team rated the same items to provide content validity evidence for the items' linkage to the MAL model. RESULTS The original 67 items were reduced to 28 items loading onto four factors: Planning (10 items, alpha = 0.88), Learning (6 items, alpha = 0.81), Resilience (6 items, alpha = 0.89), and Motivation (6 items, alpha = 0.81). The findings from the factor analyses aligned with the research team's ratings of linkage to the components of MAL. CONCLUSION These findings serve as a starting point for future work measuring master adaptive learning to identify and support learners. To fully measure the MAL construct, additional items may need to be developed.
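The Cronbach's alpha values reported per subscale come from a standard formula over item variances. Below is a minimal self-contained sketch of that computation; the response matrix is invented for illustration and is not the study's data.

```python
# Minimal sketch of Cronbach's alpha: the internal-consistency statistic
# reported for each subscale. alpha = k/(k-1) * (1 - sum(item vars) / var(total)).

def cronbach_alpha(scores: list[list[float]]) -> float:
    """scores: one row per respondent, one column per item."""
    n_items = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point responses: 4 respondents x 3 items.
data = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(data), 2))  # → 0.98
```

Items that covary strongly push alpha toward 1; uncorrelated items push it toward 0, which is why low-loading items were dropped before the subscale alphas were computed.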
Affiliation(s)
- J. K. Stringer
- Virginia Commonwealth University School of Medicine, 1201 E Marshall St, MMEC 4-214, Box 980565, Richmond, VA 23298-0565 USA
- Larry D. Gruppen
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI USA
- Michael S. Ryan
- Virginia Commonwealth University School of Medicine, 1201 E Marshall St, MMEC 4-214, Box 980565, Richmond, VA 23298-0565 USA
- Margaret Wolff
- University of Michigan Medical School, Ann Arbor, MI USA
- Sally A. Santen
- Virginia Commonwealth University School of Medicine, 1201 E Marshall St, MMEC 4-214, Box 980565, Richmond, VA 23298-0565 USA
- University of Cincinnati College of Medicine, Cincinnati, OH USA

94
Lane AS, Roberts C. Contextualised reflective competence: a new learning model promoting reflective practice for clinical training. BMC Med Educ 2022; 22:71. [PMID: 35093060] [PMCID: PMC8801113] [DOI: 10.1186/s12909-022-03112-4]
Abstract
BACKGROUND Reflection is a metacognitive process that allows self-regulation and the promotion of lifelong learning, and is an essential requirement for developing therapeutic relationships with patients and colleagues, as well as professional expertise. The medical literature lacks guidance for learners and educators on developing reflective abilities. METHODS Based on our program of research into junior doctors delivering open disclosure communication after medical error, we developed a framework called contextualised reflective competence to assist students/trainees and educators in developing, maintaining, and ensuring reflective practice in the context of professional experiences. RESULTS The contextualised reflective competence framework has its origins in the conscious competency framework, an established learning paradigm within healthcare professions education, and has been developed to encompass vital concepts that the conscious competency matrix lacked: the promotion of ongoing reflective practice, accurate assumptions about the learner's original mindset, variations in everyday performance, and erosion of skills. The contextualised reflective competence framework progresses the conscious competence framework from a 2x2 box diagram to a two-pronged flowchart. In our framework, if the learner possesses appropriate reflective practice (contextualised reflective competence), they move through a learning process where they achieve unconscious competence. If the learner does not possess contextualised reflective competence, they move through a learning process where they display generalised reflective incompetence, characterised by cognitive dissonance and rationalisation, leading to errors and non-learning. Generalised reflective incompetence is usually a temporary state with appropriate supervision. Our program of research demonstrated that contextualised reflective competence was related to critical cognitive frameworks, such as intellectual humility, situational awareness, the development of a 'growth mindset', and belongingness. CONCLUSIONS The contextualised reflective competence framework promotes learners' understanding of their core competencies and provides opportunities for personal critical reflection. It provides educators and supervisors with a diagnostic pathway for those with reflective incompetence. We anticipate its use in the clinical environment where issues of competence are raised in professional experiences.
Affiliation(s)
- Andrew Stuart Lane
- Sydney Medical School, University of Sydney, Sydney, Australia
- Intensive Care Medicine, Nepean Hospital, Derby Street, Penrith, Sydney, NSW, 2773, Australia
- Chris Roberts
- Health Professions Education, and Head of Faculty Development, Sydney Medical School, University of Sydney, Sydney, Australia

95
Lai H, Ameli N, Patterson S, Senior A, Lunardon D. Development of an electronic learning progression dashboard to monitor student clinical experiences. J Dent Educ 2022; 86:759-765. [PMID: 34989405] [DOI: 10.1002/jdd.12871]
Abstract
INTRODUCTION Clinical experience tracking mechanisms for students at dental schools provide feedback on patient assignment, student experience, and learning progression. The purpose of this study was to evaluate dental students' clinical experiences following the implementation of a learning progression dashboard (LPD). METHODS After developing and deploying an electronic LPD using PHP, a secondary data analysis of dental students' clinical experiences from 2017-2019 was conducted. Student experiences were compared between the year before continuous use of the LPD and the first year of its use. LPD data contained the required clinical procedures dentistry students must perform across all disciplines and the number of planned, in-progress, and completed tasks each student had accomplished. Univariate statistics and independent t-tests were conducted in R to detect differences in the number and categories of codes between the two time points. RESULTS The number and category of codes showed significant differences between the academic years 2017-2018 and 2018-2019 for both third- and fourth-year dental students after one and two terms. Overall, students recorded a 26% greater number of treatment codes and experienced a 26% greater number of code categories compared to the previous year. CONCLUSION Applying information management methods such as dashboards can better inform educators about student clinical experiences and improve clinical learning outcomes for students.
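The year-over-year comparison above rests on an independent two-sample t-test. As a rough sketch of that arithmetic (the study ran its tests in R; the per-student counts below are invented, loosely mirroring the reported ~26% increase), a Welch t statistic can be computed directly:

```python
# Hedged sketch of an independent two-sample (Welch's) t statistic on
# per-student treatment-code counts from two academic years. Data are hypothetical.
import math

def welch_t(a: list[float], b: list[float]) -> float:
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

codes_2017 = [40, 42, 38, 41, 39]  # hypothetical per-student code counts
codes_2018 = [50, 53, 48, 52, 51]  # ~26% higher, echoing the reported trend
print(round(welch_t(codes_2018, codes_2017), 2))  # → 9.7
```

A large positive t here indicates the post-dashboard year's counts sit well above the prior year's relative to the sampling noise; the p-value would then come from the t distribution with Welch-Satterthwaite degrees of freedom.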
Affiliation(s)
- Hollis Lai
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Nazila Ameli
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Steven Patterson
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Anthea Senior
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Doris Lunardon
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada

96
Saiyad S, Bhagat P, Virk A, Mahajan R, Singh T. Changing Assessment Scenarios: Lessons for Changing Practice. Int J Appl Basic Med Res 2021; 11:206-213. [PMID: 34912682] [PMCID: PMC8633695] [DOI: 10.4103/ijabmr.ijabmr_334_21]
Abstract
Assessment is a process that includes ascertainment of improvement in the performance of students over time, motivation of students to study, evaluation of teaching methods, and ranking of student capabilities. It is an important component of the educational process influencing student learning. Although we have embarked on a new curricular model, assessment has remained largely ignored despite being the hallmark of competency-based education. During the earlier stages, assessment was considered akin to "measurement," on the belief that competence is "generic, fixed and transferable across content," could be measured quantitatively, and could be expressed as a single score. Objective assessment was the norm, and subjective tools were considered unreliable and biased. It was soon realized that "competence is specific and nontransferable," mandating the use of multiple assessment tools across multiple content areas using multiple assessors. A paradigm change through "programmatic assessment" only occurred with the understanding that competence is "dynamic, incremental and contextual." Here, information about the students' competence and progress is gathered continually over time, analysed, and supplemented with purposefully collected additional information when needed, using a carefully selected combination of tools and assessor expertise, leading to an authentic, observation-driven, institutional assessment system. In the conduct of any performance assessment, the assessor remains an important part of the process, making assessor training indispensable. In this paper, we look at the changing paradigms of our understanding of clinical competence and corresponding global changes in assessment, and then make a case for adopting the prevailing trends in the assessment of clinical competence.
Affiliation(s)
- Shaista Saiyad
- Department of Physiology, Smt N H L Municipal Medical College, Ahmedabad, Gujarat, India
- Purvi Bhagat
- M and J Western Regional Institute of Ophthalmology, B. J. Medical College, Ahmedabad, Gujarat, India
- Amrit Virk
- Department of Community Medicine, Adesh Medical College and Hospital, Kurukshetra, Haryana, India
- Rajiv Mahajan
- Department of Pharmacology, Adesh Institute of Medical Sciences and Research, Bathinda, Punjab, India
- Tejinder Singh
- Department of Medical Education, Sri Guru Ram Das Institute of Medical Sciences and Research, Amritsar, Punjab, India

97
Ross S, Lee AS. Relationships, continuity and time in health professions education. Med Educ 2021; 55:1344-1346. [PMID: 34612531] [DOI: 10.1111/medu.14674]
Affiliation(s)
- Shelley Ross
- Department of Family Medicine, Faculty of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Ann S Lee
- Department of Family Medicine, Faculty of Medicine, University of Alberta, Edmonton, Alberta, Canada

98
Santen SA, Ryan M, Helou MA, Richards A, Perera RA, Haley K, Bradner M, Rigby FB, Park YS. Building reliable and generalizable clerkship competency assessments: Impact of 'hawk-dove' correction. Med Teach 2021; 43:1374-1380. [PMID: 34534035] [DOI: 10.1080/0142159x.2021.1948519]
Abstract
PURPOSE Systematic differences among raters' approaches to student assessment may result in leniency or stringency of assessment scores. This study examines the generalizability of medical student workplace-based competency assessments, including the impact of adjusting scores for rater leniency and stringency. METHODS Data were collected from summative clerkship assessments completed for 204 students during the 2017-2018 academic year at a single institution. Generalizability theory was used to explore the variance attributed to different facets (rater, learner, item, and competency domain) through three unbalanced random-effects models by clerkship, including models applying assessor stringency-leniency adjustments. RESULTS In the original assessments, only 4-8% of the variance was attributed to the student, with the remainder being rater variance and error. Aggregating items to create a composite score increased the variability attributable to the student (5-13% of variance). Applying a stringency-leniency ('hawk-dove') correction substantially increased the variance attributed to the student (14.8-17.8%) and the reliability. Controlling for assessor leniency/stringency reduced measurement error, decreasing the number of assessments required for generalizability from 16-50 to 11-14. CONCLUSIONS Similar to prior research, most of the variance in competency assessment scores was attributable to raters, with only a small proportion attributed to the student. Stringency-leniency corrections using rater-adjusted scores improved the psychometric characteristics of the assessment scores.
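The intuition behind the 'hawk-dove' correction can be sketched in a few lines: centering each rater's scores on that rater's own mean removes systematic stringency or leniency before variance is attributed to students. This toy example is hypothetical and much simpler than the study's unbalanced random-effects G-theory models; the rater names and scores are invented.

```python
# Hedged sketch of the 'hawk-dove' idea: subtract each rater's own mean so that
# systematic stringency/leniency drops out of the scores. Hypothetical data:
# rater "hawk" scores every student about 2 points below rater "dove".
from collections import defaultdict

ratings = [  # (student, rater, score) triples
    ("s1", "hawk", 6), ("s1", "dove", 8),
    ("s2", "hawk", 7), ("s2", "dove", 9),
    ("s3", "hawk", 5), ("s3", "dove", 7),
]

def rater_centered(ratings):
    """Return the same triples with each score expressed relative to its rater's mean."""
    by_rater = defaultdict(list)
    for _, rater, score in ratings:
        by_rater[rater].append(score)
    means = {r: sum(v) / len(v) for r, v in by_rater.items()}
    return [(s, r, score - means[r]) for s, r, score in ratings]

for student, rater, adjusted in rater_centered(ratings):
    print(student, rater, adjusted)
```

In this toy case the hawk and dove agree exactly on each student's relative standing once centered (s2 above average, s3 below), so all remaining score variance is student variance, which is the mechanism by which the correction raised the student-attributable variance and reliability in the study.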
Affiliation(s)
- Sally A Santen
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Michael Ryan
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Marieka A Helou
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Alicia Richards
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Robert A Perera
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Kellen Haley
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Melissa Bradner
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Fidelma B Rigby
- Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Yoon Soo Park
- College of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- The Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA

99
Dhaliwal G, Hauer KE. Excellence in medical training: developing talent-not sorting it. Perspect Med Educ 2021; 10:356-361. [PMID: 34415554] [PMCID: PMC8377327] [DOI: 10.1007/s40037-021-00678-5]
Abstract
Many medical schools have reconsidered or eliminated clerkship grades and honor society memberships. National testing organizations announced plans to eliminate numerical scoring for the United States Medical Licensing Examination Step 1 in favor of pass/fail results. These changes have led some faculty to wonder: "How will we recognize and reward excellence?" Excellence in undergraduate medical education has long been defined by high grades, top test scores, honor society memberships, and publication records. However, this model of learner excellence is misaligned with how students learn or what society values. This accolade-driven view of excellence is perpetuated by assessments that are based on gestalt impressions influenced by similarity between evaluators and students, and assessments that are often restricted to a limited number of traditional skill domains. To achieve a new model of learner excellence that values the trainee's achievement, growth, and responsiveness to feedback across multiple domains, we must envision a new model of teacher excellence. Such teachers would have a growth mindset toward assessing competencies and learning new competencies. Actualizing true learner excellence will require teachers to change from evaluators who conduct assessments of learning to coaches who do assessment for learning. Schools will also need to establish policies and structures that foster a culture that supports this change. In this new paradigm, a teacher's core duty is to develop talent rather than sort it.
Affiliation(s)
- Gurpreet Dhaliwal
- Department of Medicine, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Medical Service, San Francisco VA Medical Center, San Francisco, CA, USA
- Karen E Hauer
- Department of Medicine, University of California San Francisco School of Medicine, San Francisco, CA, USA

100
Roberts C, Khanna P, Lane AS, Reimann P, Schuwirth L. Exploring complexities in the reform of assessment practice: a critical realist perspective. Adv Health Sci Educ Theory Pract 2021; 26:1641-1657. [PMID: 34431028] [DOI: 10.1007/s10459-021-10065-8]
Abstract
Although the principles behind assessment for and as learning are well established, reforming traditional assessment of learning into a program that encompasses assessment for and as learning can be a struggle. When introducing and reporting reforms, tensions may arise among faculty because of differing beliefs about the relationship between assessment and learning and about the rules for the validity of assessments. Traditional systems of assessment of learning privilege objective, structured quantification of learners' performances and are done to the students. Newer systems of assessment promote assessment for learning, emphasise subjectivity, collate data from multiple sources, emphasise narrative-rich feedback to promote learner agency, and are done with the students. This contrast has implications for implementation and evaluative research. Research on assessment done to students typically asks "what works", whereas assessment done with students focuses on more complex questions such as "what works, for whom, in which context, and why?" We applied such a critical realist perspective, drawing on the interplay between structure and agency and a systems approach, to explore what theory says about introducing programmatic assessment in the context of pre-existing traditional approaches. Using a reflective technique, the internal conversation, we developed four factors that can assist educators considering major change to assessment practice in their own contexts: enabling positive learner agency and engagement; establishing argument-based validity frameworks; designing purposeful and eclectic evidence-based assessment tasks; and developing a shared narrative that promotes reflexivity in appreciating the complex relationships between assessment and learning.
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Priya Khanna
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Andrew Stuart Lane
- Faculty of Medicine and Health, Education Office, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Peter Reimann
- Centre for Research on Learning and Innovation (CRLI), The University of Sydney, Sydney, NSW, Australia
- Lambert Schuwirth
- Prideaux Discipline of Clinical Education, College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia