1. Dart J, Rees C, Ash S, McCall L, Palermo C. Shifting the narrative and practice of assessing professionalism in dietetics education: An Australasian qualitative study. Nutr Diet 2023. PMID: 36916155. DOI: 10.1111/1747-0080.12804.

Abstract
AIM: To explore current approaches to assessing professionalism in dietetics education in Australia and New Zealand, asking what is working well and what needs to improve. METHODS: We employed a qualitative interpretive approach and conducted interviews with academic and practitioner (workplace-based) educators (n = 78) with a key stake in dietetics education across Australia and New Zealand. Data were analysed using team-based framework analysis. RESULTS: Our findings suggest significant shifts in dietetics education in the area of professionalism assessment. Professionalism assessment is embedded in the formal curricula of dietetics programs and occurs in both university and placement settings. In particular, advances have been demonstrated in programs assessing professionalism as part of programmatic assessment. Progress has been enabled by philosophical and curricular shifts; clearer articulation and shared understandings of professionalism standards; enhanced learner agency and reduced power distance; early identification of, and intervention in, professionalism lapses; and increased confidence and capability among educators. CONCLUSIONS: These findings suggest considerable advances in professionalism assessment in recent years, with shifts towards approaching professionalism through a more interpretivist lens, more holistically, and in a more student-centred way. Professionalism assessment in dietetics education is a shared responsibility and requires further development and transformation to more fully embed and strengthen curricular approaches across programs. Further work should investigate strategies to build safer learning cultures and capacity for professionalism conversations, and to strengthen approaches to remediation.

Affiliations
- Janeane Dart: Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Charlotte Rees: Head of School, School of Health Sciences, College of Health, Medicine and Wellbeing, University of Newcastle, Callaghan, New South Wales, Australia; Monash Centre for Scholarship in Health Education (MCSHE), Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Susan Ash: Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Louise McCall: Department of Nutrition, Dietetics and Food, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Claire Palermo: Office of the Deputy Dean Education, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia

2. Roberts C, Khanna P, Bleasel J, Lane S, Burgess A, Charles K, Howard R, O'Mara D, Haq I, Rutzou T. Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Medical Education 2022; 56:901-914. PMID: 35393668. PMCID: PMC9542097. DOI: 10.1111/medu.14807.

Abstract
BACKGROUND: Fundamental challenges exist in researching complex changes in assessment practice, from traditional objective-focused 'assessment of learning' towards programmatic 'assessment for learning'. The latter emphasises both the subjective and the social in collective judgements of student progress. Our context was a purposively designed programmatic assessment system implemented in the first year of a new graduate-entry curriculum. We applied critical realist perspectives to unpack the underlying causes (mechanisms) that explained student experiences of programmatic assessment, in order to optimise assessment practice for future iterations. METHODS: Data came from 14 in-depth focus groups (N = 112 of 261 students). We applied a critical realist lens drawn from Bhaskar's three domains of reality (the actual, the empirical and the real) and Archer's concepts of structure and agency to understand the student experience of programmatic assessment. Analysis involved induction (pattern identification), abduction (theoretical interpretation) and retroduction (causal explanation). RESULTS: As a complex educational and social change, the assessment structures and cultural systems within programmatic assessment provided conditions (constraints and enablements) and conditioning (acceptance or rejection of new 'non-traditional' assessment processes) for the actions of agents (students) exercising their learning choices. The emergent underlying mechanism that most influenced students' experience of programmatic assessment was one of balancing the complex relationships between learner agency, assessment structures and the cultural system. CONCLUSIONS: Our study adds to debates on programmatic assessment by emphasising how achieving balance between learner agency, structure and culture suggests strategies to underpin sustained changes (elaboration) in assessment practice. These include: faculty and student learning development to promote collective reflexivity and agency; optimising assessment structures by enhancing the integration of theory with practice; and changing learning culture by enhancing existing, and developing new, social structures between faculty and the student body to gain acceptance and trust in the new norms, beliefs and behaviours involved in assessing for and of learning.

Affiliations
- Chris Roberts: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Priya Khanna: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Jane Bleasel: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Stuart Lane: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Annette Burgess: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Kellie Charles: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia; Faculty of Medicine and Health, Sydney Pharmacy School, Discipline of Pharmacology, The University of Sydney, Sydney, New South Wales, Australia
- Rosa Howard: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Deborah O'Mara: Faculty of Medicine and Health, Sydney Medical School, Education Office, The University of Sydney, Sydney, New South Wales, Australia
- Inam Haq: Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Timothy Rutzou: School of Medicine, The University of Notre Dame, Chippendale, New South Wales, Australia

3. Torre D, Schuwirth L, Van der Vleuten C, Heeneman S. An international study on the implementation of programmatic assessment: Understanding challenges and exploring solutions. Medical Teacher 2022; 44:928-937. PMID: 35701165. DOI: 10.1080/0142159X.2022.2083487.

Abstract
INTRODUCTION: Programmatic assessment is an approach to assessment aimed at optimizing both the learning and the decision-making functions of assessment. It involves a set of key principles and ground rules that are important for its design and implementation. Despite its intuitive appeal, however, its implementation remains a challenge. The purpose of this paper is to gain a better understanding of the factors that affect the implementation of programmatic assessment and how specific implementation challenges are managed across different programs. METHODS: An explanatory multiple-case (collective) approach was used. We identified six medical programs that had implemented programmatic assessment, varying in health professions discipline, level of education and geographic location. We interviewed a key faculty member from each program and analyzed the data using inductive thematic analysis. RESULTS: We identified two major factors in managing the challenges and complexity of the implementation process: knowledge brokers and a strategic opportunistic approach. Knowledge brokers were the people who drove and designed the implementation process, translating evidence into practice and allowing real-time management of the complex processes of implementation. These knowledge brokers used a 'strategic opportunistic', or agile, approach to recognize new opportunities, secure leadership support, adapt to the context and take advantage of the unexpected. Engaging in an overall curriculum reform process was a critical factor for successful implementation of programmatic assessment. DISCUSSION: The study contributes to understanding the intricacies of implementing programmatic assessment across different institutions. Managing opportunities, adaptive planning and awareness of context were all critical aspects of thinking strategically and opportunistically in the implementation of programmatic assessment. Future research is needed to provide a more in-depth understanding of the values and beliefs that underpin the assessment culture of an organization, and how such values may affect implementation.

Affiliations
- Dario Torre: Director of Assessment and Professor of Medicine, University of Central Florida College of Medicine, Orlando, FL, USA
- Lambert Schuwirth: College of Medicine and Public Health, Flinders University, Adelaide, Australia
- Cees Van der Vleuten: Department of Educational Development and Research, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Sylvia Heeneman: Department of Pathology, School of Health Professions Education, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands

4. Wilkinson TJ. Four ways to get a grip on making robust decisions from workplace-based assessments. Canadian Medical Education Journal 2022; 13:43-46. PMID: 35875436. PMCID: PMC9297242. DOI: 10.36834/cmej.73361.

Abstract
Synthesising the results of workplace-based assessments to inform robust decisions is seen as both important and difficult. Concerns about failing to fail the trainee who is not ready to proceed have drawn disproportionate attention to assessors. This paper proposes a model for a more systems-based view, so that the value of the assessor's judgement is incorporated while preserving the value and robustness of collective decision-making. Our experience has shown it can facilitate robust decisions in some of the more difficult areas, such as professionalism.

5. Gingerich A, Sebok-Syer SS, Lingard L, Watling CJ. The shift from disbelieving underperformance to recognising failure: A tipping point model. Medical Education 2022; 56:395-406. PMID: 34668213. DOI: 10.1111/medu.14681.

Abstract
CONTEXT: Coming face to face with a trainee who needs to be failed is a stern test for many supervisors. In response, supervisors have been encouraged to report evidence of failure through numerous assessment redesigns. Yet there are lingering signs that some remain reluctant to engage in assessment processes that could alter a trainee's progression in the programme. Failure is highly consequential for all involved and, although rare, requires explicit study. Recent work identified a phase of disbelief that preceded identification of underperformance. What remains unknown is how supervisors come to recognise that a trainee needs to be failed. METHODS: Following constructivist grounded theory methodology, 42 physicians and surgeons in British Columbia, Canada shared their experiences of supervising trainees who profoundly underperformed, required extensive remediation or were dismissed from the programme. We identified recurring themes using an iterative, constant comparative process. RESULTS: The shift from disbelieving underperformance to recognising failure involves three patterns: accumulation of significant incidents; discovery of an egregious error after negligible deficits; or illumination of an overlooked deficit when pointed out by someone else. Recognising failure was accompanied by anger, certainty and a sense of duty to prevent harm. CONCLUSION: Coming to the point of recognising that a trainee needs to fail is akin to the psychological process of a tipping point, where people first realise that noise is signal and cross a threshold beyond which the pattern is no longer an anomaly. The co-occurrence of anger raises the possibility that emotions may be a driver of, and not only a barrier to, recognising failure. This warrants caution, because tipping points, and anger, can impede detection of improvement. Our findings point towards possibilities for supporting earlier identification of underperformance and overcoming reluctance to report failure, along with countermeasures to compensate for difficulties in detecting improvement once failure has been verified.

Affiliations
- Andrea Gingerich: Division of Medical Sciences, University of Northern British Columbia, Prince George, British Columbia, Canada
- Lorelei Lingard: Schulich School of Medicine & Dentistry, Centre for Education Research & Innovation, Western University, London, Ontario, Canada
- Christopher J Watling: Schulich School of Medicine & Dentistry, Centre for Education Research & Innovation, Western University, London, Ontario, Canada

6. Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. Ottawa 2020 consensus statement for programmatic assessment - 1. Agreement on the principles. Medical Teacher 2021; 43:1139-1148. PMID: 34344274. DOI: 10.1080/0142159X.2021.1957088.

Abstract
INTRODUCTION: In the Ottawa 2018 consensus framework for good assessment, a set of criteria was presented for systems of assessment. Programmatic assessment is currently being established in an increasing number of programmes. In this Ottawa 2020 consensus statement for programmatic assessment, insights from practice and research are used to define the principles of programmatic assessment. METHODS: For fifteen programmes in health professions education affiliated with members of an expert group (n = 20), an inventory was completed of the perceived components, rationale and importance of a programmatic assessment design. Input from attendees of a programmatic assessment workshop and symposium at the 2020 Ottawa conference was included. The outcome is discussed in concurrence with current theory and research. RESULTS AND DISCUSSION: Twelve principles are presented that are considered important and recognisable facets of programmatic assessment. Overall, these principles were used in curriculum and assessment design, albeit with a range of approaches and rigor, suggesting that programmatic assessment is an achievable education and assessment model, embedded in both practice and research. Sharing knowledge of how programmatic assessment is being operationalized may help support educators charting their own implementation journeys in their respective programmes.

Affiliations
- Sylvia Heeneman: Department of Pathology, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Lubberta H de Jong: Department of Population Health Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Luke J Dawson: School of Dentistry, University of Liverpool, Liverpool, UK
- Tim J Wilkinson: Education Unit, University of Otago, Christchurch, New Zealand
- Anna Ryan: Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
- Glendon R Tait: MD Program, Department of Psychiatry, and The Wilson Centre, University of Toronto, Toronto, Canada
- Neil Rice: College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Dario Torre: Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Adrian Freeman: College of Medicine and Health, University of Exeter Medical School, Exeter, UK
- Cees P M van der Vleuten: Department of Educational Development and Research, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands

7. Ryan AT, Wilkinson TJ. Rethinking Assessment Design: Evidence-Informed Strategies to Boost Educational Impact in the Anatomical Sciences. Anatomical Sciences Education 2021; 14:361-367. PMID: 33752261. DOI: 10.1002/ase.2075.

Abstract
University assessment is in the midst of transformation. Assessments are no longer designed solely to determine that students can remember and regurgitate lecture content, nor to rank students to aid some future selection process. Instead, assessments are expected to drive, support and enhance learning, and to contribute to student self-assessment and the development of skills and attributes for a lifetime of learning. While the traditional purposes of certifying achievement and determining readiness to progress remain important, these new expectations can create tensions in assessment design, selection and deployment. Recognizing these tensions, three contemporary approaches to assessment in medical education are described: careful consideration of the educational impact of assessment before, during (test- or recall-enhanced learning) and after assessments; development of student (and staff) assessment literacy; and planning of cohesive systems of assessment (with a range of assessment tools) designed to assess the various competencies demanded of future graduates. These approaches purposefully straddle the cross-purposes of assessment in modern health professions education. The implications of these models are explored within the context of medical education and then linked with contemporary work in the anatomical sciences to highlight current synergies and potential future innovations in using evidence-informed strategies to boost the educational impact of assessments.

Affiliations
- Anna T Ryan: Department of Medical Education, Melbourne Medical School, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Victoria, Australia
- Tim J Wilkinson: Education Unit, Otago Medical School, University of Otago, Christchurch, New Zealand

8. Nair B, Moonen-van Loon JMW, Parvathy M, van der Vleuten CPM. Composite Reliability of Workplace Based Assessment of International Medical Graduates. MedEdPublish 2021; 10:104. PMID: 38486602. PMCID: PMC10939524. DOI: 10.15694/mep.2021.000104.1.

Abstract
INTRODUCTION: All developed countries depend on International Medical Graduates (IMGs) to complement their workforce. However, assessing their fitness to practise and acculturation into the new system can be challenging. To improve this, we introduced workplace-based assessment (WBA) using a programmatic philosophy. This paper reports the reliability of this new approach. METHODS: Over the past 10 years we have assessed over 250 IMGs, each cohort assessed over a 6-month period. We used the Mini-CEX, case-based discussions (CBD) and multi-source feedback (MSF) to assess them. We analysed the reliability of each tool and the composite reliability of the 12 Mini-CEX, 5 CBD and 12 MSF assessments in the tool kit. RESULTS: A reliability coefficient of 0.78 with an SEM of 0.19 was obtained for the sample of 236 IMGs. We found the MSF to be the most reliable tool. By adding one more MSF to the assessment on two occasions, we can reach a reliability of 0.8 and an SEM of 0.18. CONCLUSIONS: The current assessment methodology has acceptable reliability, which can be improved by increasing the number of MSF assessments. The lessons from this study are generalisable to IMG assessment and to other medical education programs.

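The arithmetic behind composite reliability figures like those above can be illustrated with the classical Spearman-Brown prophecy formula, which predicts how reliability changes when a measurement programme is lengthened, together with the standard relation SEM = SD x sqrt(1 - reliability). This is only a generic sketch, not the analysis the authors performed: Spearman-Brown assumes equally reliable, parallel components, whereas the study's tool kit mixes tools of differing reliability (MSF being the most reliable), which is why targeted addition of MSF items can outperform uniform lengthening. The starting reliability of 0.78 is the figure from the abstract; the score SD used in the example is hypothetical.

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability after multiplying the number of (parallel)
    assessments by a factor k, given current composite reliability r."""
    return k * r / (1 + (k - 1) * r)


def sem(sd: float, r: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - reliability)."""
    return sd * (1 - r) ** 0.5


# Starting from the reported composite reliability of 0.78:
print(round(spearman_brown(0.78, 2), 3))  # doubling programme length -> 0.876
print(round(sem(0.41, 0.78), 3))          # hypothetical SD of 0.41 -> 0.192
```

Note how reliability gains flatten as the programme lengthens: the same formula gives 0.78 for k = 1 and only 0.876 for k = 2, so adding the single most reliable tool is often more efficient than adding more of everything.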
9. Dart J, Twohig C, Anderson A, Bryce A, Collins J, Gibson S, Kleve S, Porter J, Volders E, Palermo C. The Value of Programmatic Assessment in Supporting Educators and Students to Succeed: A Qualitative Evaluation. J Acad Nutr Diet 2021; 121:1732-1740. PMID: 33612437. DOI: 10.1016/j.jand.2021.01.013.

Abstract
BACKGROUND: Programmatic assessment has been proposed as the way forward for competency-based assessment, yet there is a dearth of literature describing the implementation and evaluation of programmatic assessment approaches. OBJECTIVE: To evaluate the implementation of a programmatic assessment and explore its ability to support students and assessors. DESIGN: A qualitative evaluation of programmatic assessment was employed. PARTICIPANTS/SETTING: Interviews with graduates (n = 8) and preceptors (n = 12), together with focus groups with faculty assessors (n = 9), from one Australian university explored experiences of the programmatic approach, the role of assessment in learning, and the defensibility of assessment decisions in determining competence. ANALYSIS PERFORMED: Data were analyzed into key themes using framework analysis. RESULTS: Where philosophical and practice shifts in approaches to assessment were embraced, the programmatic assessment increased confidence in the defensibility of assessment decisions, reduced the emotional burden of assessment, increased the value of assessment, and identified and remediated at-risk students earlier. CONCLUSIONS: Programmatic assessment supports a holistic approach to competency development and assessment and has multiple benefits for learners and assessors.

10. Schut S, Maggio LA, Heeneman S, van Tartwijk J, van der Vleuten C, Driessen E. Where the rubber meets the road - An integrative review of programmatic assessment in health care professions education. Perspectives on Medical Education 2021; 10:6-13. PMID: 33085060. PMCID: PMC7809087. DOI: 10.1007/s40037-020-00625-w.

Abstract
INTRODUCTION: Programmatic assessment was introduced as an approach to designing assessment programmes that aims to optimize the decision-making and learning functions of assessment simultaneously. An integrative review was conducted to review and synthesize results from studies investigating programmatic assessment in health care professions education in practice. METHODS: The authors systematically searched PubMed, Web of Science and ERIC to identify studies published since 2005 that reported empirical data on programmatic assessment. Characteristics of the included studies were extracted and synthesized using descriptive statistics and thematic analysis. RESULTS: Twenty-seven studies were included, using quantitative methods (n = 10), qualitative methods (n = 12) or mixed methods (n = 5). Most studies were conducted in clinical settings (77.8%). Programmatic assessment was found to enable meaningful triangulation for robust decision-making and to act as a catalyst for learning. However, several problems were identified, including overload of assessment information and the associated workload, the counterproductive impact of strict requirements and summative signals, lack of a shared understanding of the nature and purpose of programmatic assessment, and lack of supportive interpersonal relationships. Thematic analysis revealed that the successes and challenges of programmatic assessment were best understood through the interplay between the quantity and quality of assessment information, and the influence of social and personal aspects on assessment perceptions. CONCLUSION: Although some of the evidence may seem compelling in support of the effectiveness of programmatic assessment in practice, tensions will emerge when simultaneously stimulating the development of competencies and assessing their results. The identified factors and inferred strategies provide guidance for navigating these tensions.

Affiliations
- Suzanne Schut: School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Lauren A Maggio: Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Sylvia Heeneman: School of Health Professions Education, Department of Pathology, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands
- Jan van Tartwijk: Department of Education, Utrecht University, Utrecht, The Netherlands
- Cees van der Vleuten: School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Erik Driessen: School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands

11. Lane AS, Roberts C, Khanna P. Do We Know Who the Person With the Borderline Score Is, in Standard-Setting and Decision-Making? Health Professions Education 2020. DOI: 10.1016/j.hpe.2020.07.001.

12. Rich JV, Fostaty Young S, Donnelly C, Hall AK, Dagnone JD, Weersink K, Caudle J, Van Melle E, Klinger DA. Competency-based education calls for programmatic assessment: But what does this look like in practice? J Eval Clin Pract 2020; 26:1087-1095. PMID: 31820556. DOI: 10.1111/jep.13328.

Abstract
RATIONALE, AIMS AND OBJECTIVES: Programmatic assessment has been identified as a system-oriented approach to achieving the multiple purposes of assessment within competency-based medical education (CBME), i.e. formative, summative and program improvement. While there are well-established principles for designing and evaluating programs of assessment, few studies illustrate, and critically interpret, what a system of programmatic assessment looks like in practice. This study uses systems thinking and the 'two communities' metaphor to interpret a model of programmatic assessment and to identify challenges and opportunities in its operationalization. METHOD: An interpretive case study investigated how programmatic assessment is being operationalized within one competency-based residency program at a Canadian university. Qualitative data were collected from residents, faculty and program leadership via semi-structured group and individual interviews conducted nine months after CBME implementation. Data were analyzed using a combination of data-based inductive analysis and theory-derived deductive analysis. RESULTS: In this model, Academic Advisors had a central role in brokering assessment data between the communities responsible for producing and for using residents' performance information in decision-making (i.e. formative, summative/evaluative and program improvement). As system intermediaries, Academic Advisors were in a privileged position to see how the parts of the assessment system contributed to the functioning of the whole, and could identify which system components were not functioning as intended. Challenges were identified with the documentation of residents' performance information (i.e. system inputs); with the use of low-stakes formative assessments to inform high-stakes evaluative judgments about the achievement of competence standards; and with gaps in feedback mechanisms for closing learning loops. CONCLUSIONS: The findings suggest that program stakeholders can benefit from a systems perspective on how their assessment practices contribute to the efficacy of the system as a whole. Academic Advisors are well positioned to support educational development efforts focused on overcoming challenges in operationalizing programmatic assessment.

Affiliations
- Jessica V Rich: Faculty of Education, Queen's University, Kingston, Ontario, Canada
- Sue Fostaty Young: Centre for Teaching and Learning, Queen's University, Kingston, Ontario, Canada
- Catherine Donnelly: School of Rehabilitation Therapy, Queen's University, Kingston, Ontario, Canada
- Andrew K Hall: Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada; Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- J Damon Dagnone: Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Kristen Weersink: Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Jaelyn Caudle: Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Elaine Van Melle: Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Don A Klinger: Te Kura Toi Tangata, University of Waikato, Hamilton, New Zealand

13
|
|
14
|
Ali A, Anakin M, Tweed MJ, Wilkinson TJ. Towards a Definition of Distinction in Professionalism. Teaching and Learning in Medicine 2020;32:126-138. [PMID: 31884828 DOI: 10.1080/10401334.2019.1705826]
Abstract
Phenomenon: Professionalism can be characterized by a particular set of attributes that clinicians demonstrate in practice. Although much has been described on those attributes that define acceptable professionalism, the characteristics that define distinction in professionalism have not yet been well defined. Approach: In this exploratory project, qualitative methods were used to triangulate three sources of data collected from three campuses of one medical school: student assessment summaries, teacher interviews, and an institutional policy. Findings: One hundred and thirty student assessment summaries, eight teacher interviews, and one institutional policy were analyzed. Three characteristics emerged that define distinction in professionalism: improvement of oneself, helping others learn, and teamwork. These characteristics are in addition to students demonstrating a clear minimum standard in all other aspects of professionalism. Insights: Findings from this project offer a first step toward a definition of distinction in professionalism for assessing student performance. The characteristics can be demonstrated by students to varying degrees of proficiency and are potentially achievable by all students. Finally, the characteristics would be required in addition to demonstrating a clear minimum standard of performance in all other aspects of professionalism and cannot be inferred from the absence of negative or unprofessional behaviors. Recognizing that conceptions of professionalism have contextual and cultural influences, the characteristics of distinction identified by this project expand the language available for teachers and learners to discuss professionalism. Teachers may use these characteristics to help inform their teaching, learning, and feedback practices. Students will gain clarity about the expectations regarding their professional behavior.
Affiliation(s)
- Anthony Ali, Dean's Department, University of Otago, Christchurch, New Zealand
- Megan Anakin, Dean's Department, Dunedin School of Medicine, University of Otago, Dunedin, New Zealand
- Mike J Tweed, Department of Medicine, University of Otago, Wellington, New Zealand
- Tim J Wilkinson, Dean's Department, University of Otago, Christchurch, New Zealand

15
Torre DM, Schuwirth LWT, Van der Vleuten CPM. Theoretical considerations on programmatic assessment. Medical Teacher 2020;42:213-220. [PMID: 31622126 DOI: 10.1080/0142159x.2019.1672863]
Abstract
Introduction: Programmatic assessment (PA) is an approach to assessment aimed at optimizing learning which continues to gain educational momentum. However, the theoretical underpinnings of PA have not been clearly described. An explanation of the theoretical underpinnings of PA will allow educators to gain a better understanding of this approach and, perhaps, facilitate its use and effective implementation. The purpose of this article is twofold: first, to describe salient theoretical perspectives on PA; second, to examine how theory may help educators to develop effective PA programs, helping to overcome challenges around PA. Results: We outline a number of learning theories that underpin key educational principles of PA: constructivist and social constructivist theory supporting meaning making and longitudinality; a cognitivist and cognitive development orientation scaffolding the practice of a continuous feedback process; theory of instructional design underpinning assessment as learning; and self-determination theory (SDT), self-regulated learning (SRL) theory, and principles of deliberate practice providing theoretical tenets for student agency and accountability. Conclusion: The construction of a plausible and coherent link between key educational principles of PA and learning theories should enable educators to pose new and important inquiries, reflect on their assessment practices, and help overcome future challenges in the development and implementation of PA in their programs.
Affiliation(s)
- Dario M Torre, Department of Medicine, Uniformed Services University of Health Sciences, Bethesda, MD, USA
- L W T Schuwirth, Department of Education and Health Profession Education, Flinders Medical School, Adelaide, Australia
- C P M Van der Vleuten, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands; Faculty of Health Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands

16
Chou CL, Kalet A, Costa MJ, Cleland J, Winston K. Guidelines: The dos, don'ts and don't knows of remediation in medical education. Perspectives on Medical Education 2019;8:322-338. [PMID: 31696439 PMCID: PMC6904411 DOI: 10.1007/s40037-019-00544-5]
Abstract
INTRODUCTION Two developing forces have achieved prominence in medical education: the advent of competency-based assessments and a growing commitment to expand access to medicine for a broader range of learners with a wider array of preparation. Remediation is intended to support all learners to achieve sufficient competence. Therefore, it is timely to provide practical guidelines for remediation in medical education that clarify best practices, practices to avoid, and areas requiring further research, in order to guide work with both individual struggling learners and development of training program policies. METHODS Collectively, we generated an initial list of Do's, Don'ts, and Don't Knows for remediation in medical education, which was then iteratively refined through discussions and additional evidence-gathering. The final guidelines were then graded for the strength of the evidence by consensus. RESULTS We present 26 guidelines: two groupings of Do's (systems-level interventions and recommendations for individual learners), along with short lists of Don'ts and Don't Knows, and our interpretation of the strength of current evidence for each guideline. CONCLUSIONS Remediation is a high-stakes, highly complex process involving learners, faculty, systems, and societal factors. Our synthesis resulted in a list of guidelines that summarize the current state of educational theory and empirical evidence that can improve remediation processes at individual and institutional levels. Important unanswered questions remain; ongoing research can further improve remediation practices to ensure the appropriate support for learners, institutions, and society.
Affiliation(s)
- Calvin L Chou, Department of Medicine, University of California and Veterans Affairs Healthcare System, San Francisco, CA, USA
- Adina Kalet, Department of Medicine, New York University School of Medicine, New York, NY, USA
- Manuel Joao Costa, Life and Health Sciences Research Institute, School of Medicine, University of Minho, Minho, Portugal
- Jennifer Cleland, Centre for Healthcare Education Research and Innovation (CHERI), University of Aberdeen, Aberdeen, UK
- Kalman Winston, Department of Public Health and Primary Care, Cambridge University, Cambridge, UK

17
van der Vleuten CPM, Schuwirth LWT. Assessment in the context of problem-based learning. Advances in Health Sciences Education: Theory and Practice 2019;24:903-914. [PMID: 31578642 PMCID: PMC6908559 DOI: 10.1007/s10459-019-09909-1]
Abstract
Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for these instruments was accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgment of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment program as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but its wide adoption, just like that of PBL, will take many years.
Affiliation(s)
- Cees P M van der Vleuten, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Lambert W T Schuwirth, Prideaux Centre for Research in Health Professions Education, College of Medicine and Public Health, Flinders University, Sturt Road, Bedford Park, SA, 5042, Australia

18
Tweed M, Wilkinson T. Student progress decision-making in programmatic assessment: can we extrapolate from clinical decision-making and jury decision-making? BMC Medical Education 2019;19:176. [PMID: 31146714 PMCID: PMC6543577 DOI: 10.1186/s12909-019-1583-1]
Abstract
BACKGROUND Despite much effort in developing the robustness of information provided by individual assessment events, there is less literature on the aggregation of this information to make progression decisions about individual students. With the development of programmatic assessment, aggregation of information from multiple sources is required, and needs to be completed in a robust manner. The issues raised by this progression decision-making have parallels with similar issues in clinical decision-making and jury decision-making. MAIN BODY Clinical decision-making is used to draw parallels with progression decision-making, in particular the need to aggregate information and the considerations to be made when additional information is needed to make robust decisions. In clinical decision-making, diagnoses can be based on screening tests and diagnostic tests, and the balance of sensitivity and specificity can be applied to progression decision-making. There are risks and consequences associated with clinical decisions, and likewise with progression decisions. Both clinical decision-making and progression decision-making can be tough. Tough and complex clinical decisions can be improved by making decisions as a group, although the biases associated with decision-making can be amplified or attenuated by group processes; similar biases are seen in clinical and progression decision-making. Jury decision-making is an example of a group making high-stakes decisions when the correct answer is not known, much like progression decision panels. The leadership of both jury and progression panels is important for robust decision-making. Finally, the parallel between a jury's leniency towards the defendant and the failure-to-fail phenomenon is considered.
CONCLUSION It is suggested that decisions should be made by appropriately selected decision-making panels; educational institutions should have policies, procedures, and practice documentation related to progression decision-making; panels and panellists should be provided with sufficient information; panels and panellists should work to optimise their information synthesis and reduce bias; panellists should reach decisions by consensus; and that the standard of proof should be that student competence needs to be demonstrated.
Affiliation(s)
- Mike Tweed, Department of Medicine, University of Otago Wellington, Wellington, New Zealand
- Tim Wilkinson, University of Otago Christchurch, Christchurch, New Zealand

19
Higher Education for Professional and Civic Values: A Critical Review and Analysis. Sustainability 2018. [DOI: 10.3390/su10124442]
Abstract
Education for sustainable development (ESD) is generally thought to involve some degree of education for particular professional and civic values, attitudes and behaviours (for example, being environmentally, socially and culturally responsible), although it is notable that the application of ESD in higher education is contested. This conceptual article analyses literature that describes how higher education addresses professional and civic values, mindfully or unintentionally, in an attempt to bring clarity to the arguments involved in this contestation. The article uses three disciplinary lenses (education, psychology and professional education) in the context of four educational paradigms (experiential learning; role modelling; assessment/evaluation; critical thinking) to explore the theoretical and practical bases of values-education. Our conceptual analysis confirms that values are of great interest to higher education and a significant focus within experiential learning and role modelling, but challenging to define and even more challenging to assess or to evaluate the attainment of. Our three disciplinary lenses also lead us to conclude that encouraging students to develop a disposition to explore their world critically is a form of values-education, and that this may be the only truly legitimate form of values-education open to higher education.
20
Davenport R, Hewat S, Ferguson A, McAllister S, Lincoln M. Struggle and failure on clinical placement: a critical narrative review. International Journal of Language & Communication Disorders 2018;53:218-227. [PMID: 29159842 DOI: 10.1111/1460-6984.12356]
Abstract
BACKGROUND Clinical placements are crucial to the development of skills and competencies in speech-language pathology (SLP) education and, more generally, a requirement of all health professional training programmes. Literature from medical education provides a context for understanding how the environment can be vital to all students' learning. Given the increasing costs of education and demands on health services, students who struggle or fail on clinical placement place an additional burden on educators. Therefore, if more is known or understood about these students and their experience in relation to the clinical learning environment, appropriate strategies and support can be provided to reduce the burden. However, the existing literature does not specifically explore marginal or failing students and their experience. AIMS To review existing research that has explored failing and struggling health professional students undertaking clinical placements and, in particular, SLP students. METHODS & PROCEDURES A critical narrative review was undertaken. Three electronic databases, ProQuest, CINAHL and OVID (Medline 1948-), were searched for papers exploring marginal and failing students in clinical placement contexts across all health professions, published between 1988 and 2017. Data were extracted and examined to determine the breadth of the existing research, and publications were critically appraised and major research themes identified. MAIN CONTRIBUTION Sixty-nine papers were included in the review. The majority came from medicine and nursing in the United States and United Kingdom, with other allied health disciplines less well represented. The review identified key themes, with the majority of papers focused on identification of at-risk students and on support and remediation. The review also highlighted the absence of literature relating to the student voice and to the allied health professions.
CONCLUSIONS & IMPLICATIONS This review highlighted the limited research related to failing/struggling student learning in clinical contexts, and only a handful of papers have specifically addressed marginal or failing students in allied health professions. The complexity of interrelated factors in this field has been highlighted in this review. Further research needs to include the student's voice to develop greater understanding and insights of struggle and failure in clinical contexts.
Affiliation(s)
- Rachel Davenport, Speech Pathology, Newcastle University, Newcastle, NSW, Australia; Speech Pathology, La Trobe University, Melbourne, VIC, Australia
- Sally Hewat, Speech Pathology, Newcastle University, Newcastle, NSW, Australia
- Alison Ferguson, Deputy Dean, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia
- Sue McAllister, Associate Dean, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia
- Michelle Lincoln, Deputy Dean, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia

21
Wilkinson TJ, Tweed MJ. Deconstructing programmatic assessment. Advances in Medical Education and Practice 2018;9:191-197. [PMID: 29606896 PMCID: PMC5868629 DOI: 10.2147/amep.s144449]
Abstract
We describe programmatic assessment and the problems it might solve in relation to assessment and learning, identify some models implemented internationally, and then outline what we believe are programmatic assessment's key components and what these components might achieve. We then outline some issues around implementation, which include blueprinting, data collection, decision making, staff support, and evaluation. Rather than adopting an all-or-nothing approach, we suggest that elements of programmatic assessment can be gradually introduced into traditional assessment systems.
Affiliation(s)
- Tim J Wilkinson, Education Unit, University of Otago, Christchurch, New Zealand. Correspondence: Tim J Wilkinson, Education Unit, University of Otago, Christchurch, PO Box 4345, Christchurch, New Zealand 8140, Tel +64 3 364 0530, Email
- Michael J Tweed, Education Unit, University of Otago, Wellington, New Zealand

22
Wilbur K. Does faculty development influence the quality of in-training evaluation reports in pharmacy? BMC Medical Education 2017;17:222. [PMID: 29157239 PMCID: PMC5697106 DOI: 10.1186/s12909-017-1054-5]
Abstract
BACKGROUND In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher-quality reports. METHODS A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". RESULTS The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared to prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores for individual CCERR items were below acceptable thresholds for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS This study is the first to use the CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop, but strategies are identified to augment future rater training.
Affiliation(s)
- Kerry Wilbur, College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar

23
Jardine DL, McKenzie JM, Wilkinson TJ. Predicting medical students who will have difficulty during their clinical training. BMC Medical Education 2017;17:43. [PMID: 28222710 PMCID: PMC5320727 DOI: 10.1186/s12909-017-0879-2]
Abstract
BACKGROUND We aimed to classify the difficulties students had in passing their clinical attachments, and to explore factors which might predict these problems. METHODS We analysed data from regular student progress meetings held 2008-2012. Problem categories were: medical knowledge, professional behaviour and clinical skills. For each category we then undertook a predictive risk analysis. RESULTS Out of 561 students, 203 were found to have one or more problem category and so were defined as having difficulties. Prevalences of the categories were: clinical skills (67%), knowledge (59%) and professional behaviour (29%). A higher risk for all categories was associated with male gender, international entry and failure in the first half of the course, but not with any of the minority ethnic groups. Professional behaviour and clinical skills problems were associated with lower marks in the Undergraduate Medical Admissions Test paper 2. Clinical skills problems were less likely in graduate students. CONCLUSIONS In our students, difficulty with clinical skills was just as prevalent as deficits in medical knowledge. International entry students were at highest risk for clinical skills problems, probably because they were not selected by our usual criteria and had less time to become acculturated.
Affiliation(s)
- D. L. Jardine, Department of General Medicine, Christchurch Hospital, University of Otago, Riccarton Ave 2, Christchurch, 8011 New Zealand
- J. M. McKenzie, Department of General Medicine, Christchurch Hospital, University of Otago, Riccarton Ave 2, Christchurch, 8011 New Zealand
- T. J. Wilkinson, Department of General Medicine, Christchurch Hospital, University of Otago, Riccarton Ave 2, Christchurch, 8011 New Zealand

24
Bierer SB, Dannefer EF, Tetzlaff JE. Time to Loosen the Apron Strings: Cohort-based Evaluation of a Learner-driven Remediation Model at One Medical School. J Gen Intern Med 2015;30:1339-1343. [PMID: 26173525 PMCID: PMC4539324 DOI: 10.1007/s11606-015-3343-1]
Abstract
BACKGROUND Remediation in the era of competency-based assessment demands a model that empowers students to improve performance. AIM To examine a remediation model in which students, rather than faculty, develop remedial plans to improve performance. SETTING/PARTICIPANTS Private medical school, 177 medical students. PROGRAM DESCRIPTION A promotion committee uses student-generated portfolios and faculty referrals to identify struggling students, and has them develop formal remediation plans with personal reflections, improvement strategies, and performance evidence. Students submit reports to document progress until formally released from remediation by the promotion committee. PROGRAM EVALUATION Participants included 177 students from six classes (2009-2014). Twenty-six were placed in remediation, with most referrals occurring during Years 1 or 2 (n = 20, 76%). Unprofessional behavior represented the most common reason for referral in Years 3-5. Remedial students did not differ from classmates (n = 151) on baseline characteristics (age, gender, US citizenship, MCAT) or willingness to recommend their medical school to future students (p > 0.05). Two remedial students did not graduate and three did not pass USMLE licensure exams on first attempt. Most remedial students (92%) generated appropriate plans to address performance deficits. DISCUSSION Students can successfully design remedial interventions. This learner-driven remediation model promotes greater autonomy and reinforces self-regulated learning.
Affiliation(s)
- S Beth Bierer, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH, USA

25
Imanipour M, Jalili M. Development of a comprehensive clinical performance assessment system for nursing students: A programmatic approach. Jpn J Nurs Sci 2015;13:46-54. [PMID: 26108896 DOI: 10.1111/jjns.12085]
Abstract
AIM Evaluation of the achievement of learning objectives requires an accurate assessment program. Hence, nursing educators should move away from the use of individual assessment methods and apply a programmatic approach. The aim of this study was to develop a comprehensive assessment system for nursing students in their critical care rotation based on a programmatic approach. METHODS The population of this study was nursing students in their critical care course. The learning objectives of the course were determined using an expert panel and classified into three categories. Suitable assessment methods were identified for each category according to the consensus of experts. The assessment tools were then designed, and their content validity was established using the content validity ratio (CVR) and index (CVI). Reliability was determined by Cronbach's alpha coefficient. The satisfaction of the participants was investigated using a questionnaire. RESULTS According to the findings, all items of the assessment system had a high CVR (P < 0.05), and the CVI ranged from 0.93 to 0.97. The alpha coefficient was more than 0.90 for the whole system and ranged from 0.72 to 0.96 for subsystems. The findings showed that 87.5% of the instructors and 89.47% of the students believed that the new assessment system had a positive impact on learning. In addition, the majority of them were satisfied with the new assessment system. CONCLUSION A programmatic approach should be used for effective evaluation of the clinical performance of nursing students in critical care settings because of its high validity and reliability, multidimensionality, positive educational impact, and acceptability.
Affiliation(s)
- Masoomeh Imanipour, Nursing and Midwifery Care Research Center (NMCRC), Critical Care Department, School of Nursing and Midwifery, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Jalili, Emergency Medicine Department, Center for Educational Research in Medical Sciences (CERMS), Critical Care Department, Tehran University of Medical Sciences, Tehran, Iran

26
Wilbur K. Summative assessment in a doctor of pharmacy program: a critical insight. Advances in Medical Education and Practice 2015;6:119-126. [PMID: 25733948 PMCID: PMC4337416 DOI: 10.2147/amep.s77198]
Abstract
BACKGROUND The Canadian-accredited post-baccalaureate Doctor of Pharmacy program at Qatar University trains pharmacists to deliver advanced patient care. Emphasis on acquisition and development of the necessary knowledge, skills, and attitudes lies in the curriculum's extensive experiential component. A campus-based oral comprehensive examination (OCE) was devised to emulate a clinical viva voce and complement the extensive formative assessments conducted at experiential practice sites throughout the curriculum. We describe an evaluation of the final exit summative assessment for this graduate program. METHODS OCE results since the inception of the graduate program (3 years ago) were retrieved and recorded into a blinded database. Examination scores for each paired faculty examiner team were analyzed for inter-rater reliability and linearity of agreement using intraclass correlation and Spearman's correlation coefficient measurements, respectively. Graduate student rankings derived from individual examiner OCE scores were compared with rankings on other measures of relative student performance. RESULTS Sixty-one OCEs were administered to 30 graduate students over 3 years by a composite of eleven different pairs of faculty examiners. Examiner team reliability was low: only one examiner team in each academic year was found to have statistically significant inter-rater reliability, and linearity of agreement was inconsistent in all years. No association was found between examination performance rankings and other academic parameters. CONCLUSION Critical review of our final summative assessment suggests that it lacks robustness and defensibility. Measures are in place to continue the quality improvement process and to develop and implement an alternative means of evaluation within a more authentic context.
Collapse
Affiliation(s)
- Kerry Wilbur
- College of Pharmacy, Qatar University, Doha, Qatar
| |
Collapse
27
Wilkinson TJ, Hudson JN, McColl GJ, Hu WCY, Jolly BC, Schuwirth LWT. Medical school benchmarking - from tools to programmes. Med Teach 2015; 37:146-152. [PMID: 24989363] [DOI: 10.3109/0142159X.2014.932902] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 06/03/2023]
Abstract
BACKGROUND Benchmarking among medical schools is essential, but may result in unwanted effects. AIM To apply a conceptual framework to selected benchmarking activities of medical schools. METHODS We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. RESULTS The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. CONCLUSION Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.
28
Locke KA, Bates CK, Karani R, Chheda SG. A review of the medical education literature for graduate medical education teachers. J Grad Med Educ 2013; 5:211-218. [PMID: 24404262] [PMCID: PMC3693683] [DOI: 10.4300/jgme-d-12-00245.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Received: 08/21/2012] [Revised: 11/26/2012] [Accepted: 01/25/2013] [Indexed: 11/06/2022]
Abstract
BACKGROUND A rapidly evolving body of literature in medical education can impact the practice of clinical educators in graduate medical education. OBJECTIVE To aggregate studies published in the medical education literature in 2011 to provide teachers in general internal medicine with an overview of the current, relevant medical education literature. REVIEW We systematically searched major medical education journals and the general clinical literature for medical education studies with sound design and relevance to the educational practice of graduate medical education teachers. We chose 12 studies, grouped into themes, using a consensus method, and critiqued these studies. RESULTS Four themes emerged, encompassing (1) learner assessment, (2) duty hour limits and teaching in the inpatient setting, (3) innovations in teaching, and (4) learner distress. For each article we also present recommendations for how readers may use it as a resource to update their clinical teaching. While we sought to identify the studies with the highest quality and greatest relevance to educators, limitations of the studies selected include their single-site designs and small samples, and the frequent lack of objective outcome measures. These limitations are shared with the larger body of medical education literature. CONCLUSIONS The themes and the recommendations for how to incorporate this information into clinical teaching have the potential to inform the educational practice of general internist educators as well as that of teachers in other specialties.
29

30
Wilkinson TJ. Making sense of work-based assessments. Med Educ 2012; 46:436. [PMID: 22429180] [DOI: 10.1111/j.1365-2923.2011.04202.x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/31/2023]