1
Kerins J, Ralston K, Stirling SA, Simpson N, Tallentire VR. Training as imagined? A critical realist analysis of Scotland's internal medicine simulation programme. Adv Simul (Lond) 2024; 9:27. PMID: 38926742. PMCID: PMC11210083. DOI: 10.1186/s41077-024-00299-y.
Abstract
BACKGROUND Evaluation of the impact of simulation-based education (SBE) has prioritised demonstrating a causal link to improved patient outcomes. Recent calls herald a move away from looking for causation towards understanding 'what else happened'. Inspired by Shorrock's varieties of human work from the patient safety literature, this study draws on the concept of work-as-done versus work-as-imagined. Applied to SBE, this recognises that some training impacts will be unexpected and that the realities of training will never be quite as imagined. This study takes a critical realist stance to explore the experience and consequences, intended and unintended, of the internal medicine training (IMT) simulation programme in Scotland, to better understand 'training-as-done'. METHODS Critical realism accepts that there is a reality to uncover but acknowledges that our knowledge of that reality is inevitably our own construction and cannot be truly objective. The IMT simulation programme comprises three courses over a 3-year period: a 3-day boot camp, a skills day and a 2-day registrar-ready course. Following ethical approval, interviews were conducted with trainees who had completed all three courses, as well as with faculty and stakeholders both immersed in and distant from course delivery. Interviews were audio-recorded, transcribed verbatim and analysed using critical realist analysis, influenced by Shorrock's proxies for work-as-done. RESULTS Between July and December 2023, 24 interviews were conducted with ten trainees, eight faculty members and six stakeholders. Data described proxies for training-as-done within three broad categories: design, experience and impact. Proxies for training design included training-as-prescribed, training-as-desired and training-as-prioritised, which compete to produce training-as-standardised. Experience included training-as-anticipated, with pre-simulation anxiety, and training-as-unintended, with the valued opportunity for social comparison as well as a sense of identity and social cohesion. The impact reached beyond the individual trainee, with faculty development and inspiration for other training ventures. CONCLUSION Our findings highlight unintended consequences of SBE, such as social comparison and feeling 'valued as a trainee, valued as a person'. They shed light on the fear of simulation, reinforcing the importance of psychological safety. A critical realist approach illuminated the 'bigger picture', revealing insights and underlying mechanisms that allow this study to present a new framework for conceptualising training evaluation.
Affiliation(s)
- Joanne Kerins
- Scottish Centre for Simulation and Clinical Human Factors, Forth Valley Royal Hospital, Larbert, FK5 4WR, UK.
- NHS Education for Scotland, Glasgow, UK.
- Nicholas Simpson
- Scottish Centre for Simulation and Clinical Human Factors, Forth Valley Royal Hospital, Larbert, FK5 4WR, UK
- Victoria Ruth Tallentire
- Scottish Centre for Simulation and Clinical Human Factors, Forth Valley Royal Hospital, Larbert, FK5 4WR, UK
- Medical Education Directorate, NHS Lothian, Edinburgh, UK
- NHS Education for Scotland, Glasgow, UK
2
Sims DA, Lucio-Ramirez CA, Cilliers FJ. Factors influencing clinician-educators' assessment practice in varied Southern contexts: a health behaviour theory perspective. Advances in Health Sciences Education: Theory and Practice 2024. PMID: 38811446. DOI: 10.1007/s10459-024-10341-3.
Abstract
In many contexts, responsibility for exit-level assessment design and implementation in undergraduate medical programmes lies with the individuals who convene clinical clerkships. Their assessment practice has significant consequences for students' learning and for the patients and communities that graduates will serve. Interventions to enhance assessment must involve these assessors, yet little is known about the factors influencing their assessment practice. The purpose of this study was to explore factors that influence the assessment practice of clerkship convenors in three varied low- and middle-income contexts in the global South. Taking assessment practice as a behaviour, Health Behaviour Theory (HBT) was deployed as a theoretical framework to explore, describe and explain assessor behaviour. Thirty-one clinician-educators responsible for designing and implementing high-stakes clerkship assessment were interviewed in South Africa and Mexico. Interacting personal and contextual factors influencing clinician-educator assessment intention and action were identified. These included attitude, influenced by impact and response appraisal, and perceived self-efficacy, along with interpersonal, physical and organisational, and distal contextual factors. Personal competencies and conducive environments supported the transition from intention to action. While previous research has typically explored such factors in isolation, the HBT framing enabled a systematic and coherent account of assessor behaviour. These findings add a particular contextual perspective to understanding assessment practice, yet also resonate with and extend existing work that predominantly emanates from high-income contexts in the global North. They provide a foundation for planning assessment change initiatives, such as targeted, multi-factorial faculty development.
Affiliation(s)
- Danica Anne Sims
- University of Oxford, Oxford, UK.
- University of Johannesburg, Johannesburg, South Africa.
3
Smith JF, Piemonte NM. The Problematic Persistence of Tiered Grading in Medical School. Teaching and Learning in Medicine 2023; 35:467-476. PMID: 35619232. DOI: 10.1080/10401334.2022.2074423.
Abstract
Issue: The evaluation of medical students is a critical, complex, and controversial process. It is tightly woven into the medical school curriculum, beginning at the inception of the medical student's professional journey. In this respect, medical student evaluation is among the first in a series of ongoing, lifelong assessments that influence the interpersonal, ethical, and socioeconomic dimensions necessary for an effective physician workforce. Yet tiered grading has a questionable historical pedagogic basis in American medical education, and evidence suggests that tiered grading itself is a source of student burnout, anxiety, depression, increased competitiveness, reduced group cohesion, and racial bias. Evidence: In its most basic form, medical student evaluation is an assessment of the initial cognitive and technical competencies ultimately needed for the safe and effective practice of contemporary medicine. At many American medical schools, such evaluation relies largely on norm-based comparisons, such as tiered grading. Yet tiered grading can cause student distress, is considered unfair by most students, is associated with biases against under-represented minorities, and demonstrates inconsistent correlation with residency performance. While arguments that tiered grading motivates student performance have enjoyed historical precedence in academia, such arguments are not supported by robust data or theories of motivation. Implications: Given the growing recognition of its deleterious effects on medical student mental health, cohesiveness, and diversity, the use of tiered grading in medical schools to measure or stimulate academic performance, or by residency program directors to distinguish residency applicants, remains questionable. Examination of tiered grading in its historical, psychometric, psychosocial, and moral dimensions, and of the various arguments used to maintain it, reveals a need for investigation of, if not transition to, alternative, non-tiered assessments of our medical students.
Affiliation(s)
- James F Smith
- Departments of Medical Education and Medical Humanities, Creighton University, Omaha, Nebraska, USA
- Nicole M Piemonte
- Departments of Medical Humanities and Student Affairs, Creighton University, Phoenix, Arizona, USA
4
Valentine N, Durning SJ, Shanahan EM, Schuwirth L. Fairness in Assessment: Identifying a Complex Adaptive System. Perspectives on Medical Education 2023; 12:315-326. PMID: 37520508. PMCID: PMC10377744. DOI: 10.5334/pme.993.
Abstract
Introduction Assessment design in health professions education is continuously evolving, with an increasing desire to better embrace human judgement in assessment. It is therefore essential to understand what makes such judgement fair. This study builds upon existing literature by examining how assessment leaders conceptualise the characteristics of fair judgement. Methods Sixteen assessment leaders from 15 medical schools in Australia and New Zealand participated in online focus groups. Data collection and analysis occurred concurrently and iteratively. We used the constant comparison method to identify themes and build on an existing conceptual model of fair judgement in assessment. Results Fairness is a multi-dimensional construct with components at the environment, system and individual levels. Components influencing fairness include articulated and agreed learning outcomes relating to the needs of society, together with a culture which allows for learner support, stakeholder agency and learning (environment level); the collection, interpretation and combination of evidence, and procedural strategies (system level); and appropriate individual assessments and assessor expertise and agility (individual level). Discussion We observed a fractal within the data, that is, a pattern repeating at different scales, suggesting that fair judgement should be considered a complex adaptive system. Within complex adaptive systems, it is primarily the interactions between entities, not simply the components themselves, that influence the outcomes produced. Viewing fairness in assessment through a lens of complexity, rather than as a linear causal model, has significant implications for how we design assessment programs and seek to utilise human judgement in assessment.
Affiliation(s)
- Nyoli Valentine
- Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia
- Steven J. Durning
- Department of Medicine, Director, Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, United States
- Lambert Schuwirth
- Prideaux Discipline of Clinical Education, Flinders University, Bedford Park, South Australia, Australia
5
Taoube L, Khanna P, Schneider C, Burgess A, Bleasel J, Haq I, Roberts C. Situated learning in community environments (SLICE): Systems design of an immersive and integrated curriculum for community-based learning. Medical Teacher 2023; 45:80-88. PMID: 35914523. DOI: 10.1080/0142159x.2022.2102468.
Abstract
PURPOSE We sought to design a micro-curriculum to structure supervised clinical placements for junior medical students within a variety of community-based settings across differing clinical disciplines. Given the gaps in the literature, this paper reflects on the opportunities and challenges of our design, implementation, and evaluation strategies in constructing an integrated, task-based micro-curriculum for interprofessional community-based learning in year 2 of a four-year graduate-entry program. METHODS The design was informed by a systems thinking framework and guided by contemporary curricular theories on self-directed and interprofessional learning. Extensive consultations with stakeholders were undertaken, and alignment with relevant national-level documents and curricular frameworks was ensured. RESULTS The systems thinking approach provided, first, an experience of applying thinking tools for a deeper understanding of how the various parts of this micro-curriculum and its subsystems should be integrated. Second, applying the toolkit uncovered tension points at which leverage could optimise future enhancements. Participants from eighteen health professions were recruited, including 105 general practitioners and 253 healthcare practitioners from a range of disciplines. CONCLUSION Systems thinking allows the various interacting elements within a curriculum to be identified and considered as part of an integrated whole. Insights from this model could inform the design of similar innovative curricula.
Affiliation(s)
- Linda Taoube
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Priya Khanna
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Carl Schneider
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Annette Burgess
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Jane Bleasel
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Inam Haq
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Chris Roberts
- The Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
6
Jamieson J, Gibson S, Hay M, Palermo C. Teacher, Gatekeeper, or Team Member: supervisor positioning in programmatic assessment. Advances in Health Sciences Education: Theory and Practice 2022. PMID: 36469231. DOI: 10.1007/s10459-022-10193-9.
Abstract
Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken yet influential cultural and relationship dynamics that interact with programmatic assessment to influence its success. Pulling at this thread, we conducted a secondary analysis of focus groups and interviews (n = 44 supervisors), using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous: some supervisors ascribed a passive and disempowered position onto students, while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described, either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant in detecting inadequate student performance. Programmatic assessment challenged this positioning by reorienting supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to assent to power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provides insights into how supervisors can be supported in programmatic assessment.
Affiliation(s)
- Janica Jamieson
- Monash University, Melbourne, Australia.
- School of Medical and Health Sciences, Edith Cowan University, 270 Joondalup Drive, Joondalup, WA, 6027, Australia.
7
Roberts C, Khanna P, Bleasel J, Lane S, Burgess A, Charles K, Howard R, O'Mara D, Haq I, Rutzou T. Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Medical Education 2022; 56:901-914. PMID: 35393668. PMCID: PMC9542097. DOI: 10.1111/medu.14807.
Abstract
BACKGROUND Fundamental challenges exist in researching complex changes of assessment practice, from traditional objective-focused 'assessments of learning' towards programmatic 'assessment for learning'; the latter emphasises both the subjective and the social in collective judgements of student progress. Our context was a purposively designed programmatic assessment system implemented in the first year of a new graduate-entry curriculum. We applied critical realist perspectives to unpack the underlying causes (mechanisms) that explained student experiences of programmatic assessment, in order to optimise assessment practice for future iterations. METHODS Data came from 14 in-depth focus groups (N = 112/261 students). We applied a critical realist lens drawn from Bhaskar's three domains of reality (the actual, the empirical and the real) and Archer's concepts of structure and agency to understand the student experience of programmatic assessment. Analysis involved induction (pattern identification), abduction (theoretical interpretation) and retroduction (causal explanation). RESULTS As a complex educational and social change, the assessment structures and cultural systems within programmatic assessment provided conditions (constraints and enablements) and conditioning (acceptance or rejection of new 'non-traditional' assessment processes) for the actions of agents (students) in exercising their learning choices. The emergent underlying mechanism that most influenced students' experience of programmatic assessment was one of balancing the complex relationships between learner agency, assessment structures and the cultural system. CONCLUSIONS Our study adds to debates on programmatic assessment by emphasising how the achievement of balance between learner agency, structure and culture suggests strategies to underpin sustained changes (elaboration) in assessment practice. These include: faculty and student learning development to promote collective reflexivity and agency; optimising assessment structures by enhancing the integration of theory with practice; and changing the learning culture by enhancing existing, and developing new, social structures between faculty and the student body to gain acceptance of, and trust in, the new norms, beliefs and behaviours involved in assessing for and of learning.
Affiliation(s)
- Chris Roberts
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Priya Khanna
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Jane Bleasel
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Stuart Lane
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Annette Burgess
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Kellie Charles
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Faculty of Medicine and Health, Sydney Pharmacy School, Discipline of Pharmacology, The University of Sydney, Sydney, New South Wales, Australia
- Rosa Howard
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Deborah O'Mara
- Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Inam Haq
- Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Timothy Rutzou
- School of Medicine, The University of Notre Dame, Chippendale, New South Wales, Australia
8
Abstract
Educational change in higher education is challenging and complex, requiring engagement with a multitude of perspectives and contextual factors. In this paper, we present a case study based on our experiences of enacting a fundamental educational change in a medical program; namely, the steps taken in the transition to programmatic assessment. Specifically, we reflect on the successes and failures in embedding a coaching culture into programmatic assessment. To do this, we refer to the principles of programmatic assessment as they apply to this case and conclude with some key lessons that we have learnt from engaging in this change process. Fostering a culture of programmatic assessment that supports learners to thrive through coaching has required compromise and adaptability, particularly in light of the changes to teaching and learning necessitated by the global pandemic. We continue to inculcate this culture and enact the principles of programmatic assessment with a focus on continuous quality improvement.
9
The Importance of Professional Development in a Programmatic Assessment System: One Medical School’s Experience. Education Sciences 2022. DOI: 10.3390/educsci12030220.
Abstract
The Cleveland Clinic Lerner College of Medicine of Case Western Reserve University (CCLCM) was created in 2004 as a 5-year undergraduate medical education program with a mission to produce future physician-investigators. CCLCM's assessment system aligns with the principles of programmatic assessment. The curriculum is organized around nine competencies, each with milestones that students use to self-assess their progress and performance. Throughout the program, students receive low-stakes feedback from a myriad of assessors across courses and contexts. With the support of advisors, students construct portfolios to document their progress and performance, and a separate promotion committee makes high-stakes promotion decisions after reviewing students' portfolios. This case study describes a systematic approach to providing the student and faculty professional development essential for programmatic assessment. Facilitators, barriers, lessons learned, and future directions are discussed.