251. Boulet JR, Durning SJ. What we measure … and what we should measure in medical education. Medical Education 2019;53:86-94. [PMID: 30216508] [DOI: 10.1111/medu.13652]
Abstract
CONTEXT As the practice of medicine evolves, the knowledge, skills and attitudes required to provide patient care will continue to change. These competency-based changes will necessitate the restructuring of assessment systems. High-quality assessment programmes are needed to fulfil health professions education's contract with society. OBJECTIVES We discuss several issues that are important to consider when developing assessments in health professions education. We organise the discussion along the continuum of medical education, outlining the tension between what has been deemed important to measure and what should be measured. We also attempt to alleviate some of the apprehension associated with measuring evolving competencies by discussing how emerging technologies, including simulation and artificial intelligence, can play a role. METHODS We focus our thoughts on the assessment of competencies that, at least historically, have been difficult to measure. We highlight several assessment challenges, discuss some of the important issues concerning the validity of assessment scores, and argue that medical educators must do a better job of justifying their use of specific assessment strategies. DISCUSSION As in most professions, there are clear tensions in medicine in relation to what should be assessed, who should be responsible for administering assessment content, and how much evidence should be gathered to support the evaluation process. Although there have been advances in assessment practices, there is still room for improvement. From the student's, resident's and practising physician's perspectives, assessments need to be relevant. Knowledge is certainly required, but there are other qualities and attributes that are important, and perhaps far more important. Research efforts spent now on delineating what makes a good physician, and on aligning new and upcoming assessment tools with the relevant competencies, will ensure that assessment practices, whether aimed at establishing competence or at fostering learning, are effective with respect to their primary goal: to produce qualified physicians.
Affiliation(s)
- John R Boulet
- Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, Pennsylvania, USA
- Steven J Durning
- Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
252. Duijn CCMA, Ten Cate O, Kremer WDJ, Bok HGJ. The Development of Entrustable Professional Activities for Competency-Based Veterinary Education in Farm Animal Health. Journal of Veterinary Medical Education 2018;46:218-224. [PMID: 30565977] [DOI: 10.3138/jvme.0617-073r]
Abstract
Entrustable professional activities (EPAs) are professional tasks that can be entrusted to a student under a given level of supervision once he or she has demonstrated competence in these tasks. The EPA construct was conceived to increase transparency in objectives for clinical workplace learning and to help ensure patient safety and the quality of care. A first step in implementing EPAs in a veterinary curriculum is to identify the core EPAs of the profession. The aim of this study was to develop EPAs for farm animal health. An initial set of 36 EPAs for farm animal health was prepared by a team of six veterinarians and curriculum developers and used in a modified Delphi study. In this iterative process, the EPAs were evaluated in successive rounds until agreement exceeded 80%. Of the 83 veterinarians who participated, 39 (47%) completed the Delphi procedure. After two rounds, the panel reached consensus. A small expert group further refined and reorganized the EPAs for educational purposes into seven core EPAs for farm animal health and 29 sub-EPAs. This study is an important step in optimizing competency-based training in veterinary medicine. Future steps are to implement the EPAs in the curriculum and to train supervisors to assess students' ability to perform EPAs with increasing levels of independence.
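A minimal Python sketch of the consensus rule described above (iterate until agreement exceeds 80%), scoring per-item agreement for one Delphi round. The EPA names, response options, panel votes, and modal-response agreement rule are illustrative assumptions, not the authors' instrument.

```python
# Hedged sketch of a Delphi consensus check: flag items whose panel
# agreement exceeds the 80% threshold; carry the rest to the next round.

def item_agreement(ratings):
    """Fraction of panellists giving the modal response for one item."""
    counts = {}
    for r in ratings:
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values()) / len(ratings)

# Toy round: two candidate EPAs rated keep/revise by a 10-member panel.
round_responses = {
    "herd health visit": ["keep"] * 9 + ["revise"],        # 90% agreement
    "surgical procedure": ["keep"] * 7 + ["revise"] * 3,   # 70% agreement
}

THRESHOLD = 0.80
for epa, votes in round_responses.items():
    a = item_agreement(votes)
    verdict = "consensus" if a > THRESHOLD else "carry to next round"
    print(f"{epa}: {a:.0%} -> {verdict}")
```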
253. Exploring assessment of medical students' competencies in pain medicine-A review. Pain Rep 2018;4:e704. [PMID: 30801044] [PMCID: PMC6370140] [DOI: 10.1097/pr9.0000000000000704]
Abstract
Introduction: Considering the continuing high prevalence and public health burden of pain, it is critical that medical students are equipped with competencies in the field of pain medicine. Robust assessment of student expertise is integral to the effective implementation of competency-based medical education. Objective: The aim of this review was to describe the literature regarding methods for assessing pain medicine competencies in medical students. Method: The PubMed, Medline, EMBASE, ERIC, Google Scholar, and BEME databases were searched for empirical studies primarily focusing on the assessment of any domain of pain medicine competencies in medical students published between January 1997 and December 2016. Results: A total of 41 studies met the inclusion criteria. Most assessments were performed for low-stakes summative purposes and did not reflect contemporary theories of assessment. Assessments were predominantly undertaken using written tests or clinical simulation methods. The most common pain medicine education topics assessed were pain pharmacology and the management of cancer and low-back pain. Most studies focussed on assessment of cognitive levels of learning as opposed to the more challenging domains of demonstrating skills and attitudes or developing and implementing pain management plans. Conclusion: This review highlights the need for more robust assessment tools that effectively measure the abilities of medical students to integrate pain-related competencies into clinical practice. A Pain Medicine Assessment Framework has been developed to encourage systematic planning of pain medicine assessment at medical schools internationally and to promote continuous multidimensional assessment in a variety of clinical contexts based on well-defined pain medicine competencies.
254. Bearman M, Ajjawi R. Actor-network theory and the OSCE: formulating a new research agenda for a post-psychometric era. Advances in Health Sciences Education: Theory and Practice 2018;23:1037-1049. [PMID: 29027040] [DOI: 10.1007/s10459-017-9797-7]
Abstract
The Objective Structured Clinical Examination (OSCE) is a ubiquitous part of medical education, although there is some debate about its value, particularly around its possible impact on learning. Literature and research regarding the OSCE are most often situated within the psychometric or competency discourses of assessment. This paper describes an alternative approach: actor-network theory (ANT), a sociomaterial approach to understanding practice and learning. ANT provides a means to productively examine tensions and limitations of the OSCE, in part by extending research to include social relationships and physical objects. Using a narrative example, the paper suggests three ANT-informed insights into the OSCE. We describe: (1) exploring the OSCE as a holistic combination of people and objects; (2) thinking about the influences a checklist can exert over the OSCE; and (3) the implications of ANT educational research for standardisation within the OSCE. We draw from this discussion to provide a practical agenda for ANT research into the OSCE. This agenda promotes new areas for exploration in an often taken-for-granted assessment format.
Affiliation(s)
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia.
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia
255. Gingerich A, Schokking E, Yeates P. Comparatively salient: examining the influence of preceding performances on assessors' focus and interpretations in written assessment comments. Advances in Health Sciences Education: Theory and Practice 2018;23:937-959. [PMID: 29980956] [DOI: 10.1007/s10459-018-9841-2]
Abstract
Recent literature places greater emphasis on assessment comments rather than relying solely on scores. Both, however, are variable, as both emanate from assessment judgements. One established source of variability is "contrast effects": scores are shifted away from the depicted level of competence in a preceding encounter. The shift could arise from an effect on the range-frequency of assessors' internal scales or on the salience of performance aspects within assessment judgements. As these suggest different potential interventions, we investigated assessors' cognition by using the insight provided by "clusters of consensus" to determine whether contrast effects induced any change in the salience of performance aspects. A dataset from a previous experiment contained scores and comments for three encounters: two with significant contrast effects and one without. Clusters of consensus were identified using F-sort and latent partition analysis both when contrast effects were significant and when they were non-significant. The proportion of assessors making similar comments differed significantly only when contrast effects were significant, with assessors more frequently commenting on aspects that were dissimilar to the standard of competence demonstrated in the preceding performance. Rather than simply influencing the range-frequency of assessors' scales, preceding performances may affect the salience of performance aspects through comparative distinctiveness: when juxtaposed with the context, some aspects are more distinct and selectively draw attention. Research is needed to determine whether changes in salience indicate biased or improved assessment information. The potential to augment existing benchmarking procedures in assessor training by cueing assessors' attention through observation of reference performances immediately prior to assessment should also be explored.
Affiliation(s)
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada.
- Edward Schokking
- Northern Medical Program, University of Northern British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada
- Peter Yeates
- Keele University School of Medicine, Keele, Staffordshire, UK
- Pennine Acute Hospitals NHS Trust, Bury, Lancashire, UK
256. Bok HGJ, de Jong LH, O'Neill T, Maxey C, Hecker KG. Validity evidence for programmatic assessment in competency-based education. Perspectives on Medical Education 2018;7:362-372. [PMID: 30430439] [PMCID: PMC6283777] [DOI: 10.1007/s40037-018-0481-2]
Abstract
INTRODUCTION Competency-based education (CBE) is now pervasive in health professions education. A foundational principle of CBE is to assess and identify the progression of competency development in students over time. It has been argued that a programmatic approach to assessment in CBE maximizes student learning. The aim of this study was to investigate whether programmatic assessment, i.e., a system of assessment, can be used within a CBE framework to track the progression of student learning within and across competencies over time. METHODS Three workplace-based assessment methods were used to measure the same seven competency domains. We performed a retrospective quantitative analysis of 327,974 assessment data points from 16,575 completed assessment forms from 962 students over 124 weeks, using both descriptive (visualization) and modelling (inferential) analyses, including multilevel random coefficient modelling and generalizability theory. RESULTS Random coefficient modelling indicated that variance due to differences in inter-student performance was highest (40%). The reliability coefficients of scores from assessment methods ranged from 0.86 to 0.90. Method and competency variance components were in the small-to-moderate range. DISCUSSION The current validation evidence provides cause for optimism regarding the explicit development and implementation of a program of assessment within CBE. The majority of the variance in scores appears to be student-related and reliable, supporting the psychometric properties as well as both formative and summative score applications.
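To make the reported variance-component results more tangible, here is a single-facet, generalizability-style reliability calculation in Python; the component values and the simple person-by-observation design are invented for illustration, whereas the study itself fitted multilevel random coefficient models to a far richer dataset.

```python
# Hedged sketch: reliability of a student's mean score over n observations,
# computed from (invented) variance components in the G-theory style:
#   G = var_student / (var_student + var_residual / n_obs)

def g_coefficient(var_student, var_residual, n_obs):
    return var_student / (var_student + var_residual / n_obs)

var_student = 0.40    # between-student (universe-score) variance
var_residual = 0.60   # pooled occasion/rater/residual variance
for n in (1, 5, 10, 20):
    print(f"n_obs={n:2d}  G={g_coefficient(var_student, var_residual, n):.2f}")
```

With these toy numbers, reliability climbs from 0.40 for a single observation to about 0.93 at 20 observations, which is the basic logic behind aggregating many low-stakes data points in programmatic assessment.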
Affiliation(s)
- Harold G J Bok
- Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands.
- Lubberta H de Jong
- Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Thomas O'Neill
- Department of Psychology, University of Calgary, Calgary, Canada
- Connor Maxey
- Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada
- Kent G Hecker
- Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada
257. Chung MP, Thang CK, Vermillion M, Fried JM, Uijtdehaage S. Exploring medical students' barriers to reporting mistreatment during clerkships: a qualitative study. Medical Education Online 2018;23:1478170. [PMID: 29848223] [PMCID: PMC5990956] [DOI: 10.1080/10872981.2018.1478170]
Abstract
BACKGROUND Despite widespread implementation of policies to address mistreatment, the proportion of medical students who experience mistreatment during clinical training is significantly higher than the proportion of students who report mistreatment. Understanding barriers to reporting mistreatment from students' perspectives is needed before effective interventions can be implemented to improve the clinical learning environment. OBJECTIVE We explored medical students' reasons for not reporting perceived mistreatment or abuse experienced during clinical clerkships at the David Geffen School of Medicine at UCLA (DGSOM). DESIGN This was a sequential two-phase qualitative study. In the first phase, we analyzed institutional survey responses to an open-ended questionnaire administered to the DGSOM graduating classes of 2013-2015 asking why students who experienced mistreatment did not seek help or report incidents. In the second phase, we conducted focus group interviews with third- and fourth-year medical students to explore their reasons for not reporting mistreatment. In total, 30 of 362 eligible students participated in five focus groups. On the whole, 63% of focus group participants felt they had experienced mistreatment, of whom over half chose not to report to any member of the medical school administration. Transcripts were analyzed via inductive thematic analysis. RESULTS The following major themes emerged: fear of reprisal even in the setting of anonymity; perception that medical culture includes mistreatment; difficulty reporting more subtle forms of mistreatment; incident is not important enough to report; reporting process damages the student-teacher relationship; reporting process is too troublesome; and empathy with the source of mistreatment. Differing perceptions arose as students debated whether or not reporting was beneficial to the clinical learning environment. CONCLUSIONS Multiple complex factors deeply rooted in the culture of medicine, along with negative connotations associated with reporting, prevent students from reporting incidents of mistreatment. Further research is needed to establish interventions that will help identify mistreatment and change the underlying culture.
Affiliation(s)
- Melody P. Chung
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Christine K. Thang
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Michelle Vermillion
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Joyce M. Fried
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
258. Castanelli DJ, Moonen-van Loon JMW, Jolly B, Weller JM. The reliability of a portfolio of workplace-based assessments in anesthesia training. Can J Anaesth 2018;66:193-200. [PMID: 30430441] [DOI: 10.1007/s12630-018-1251-7]
Abstract
PURPOSE Competency-based anesthesia training programs require robust assessment of trainee performance and commonly combine different types of workplace-based assessment (WBA) covering multiple facets of practice. This study measured the reliability of WBAs in a large existing database and explored how they could be combined to optimize reliability for assessment decisions. METHODS We used generalizability theory to measure the composite reliability of four different types of WBAs used by the Australian and New Zealand College of Anaesthetists: mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussion (CbD), and multi-source feedback (MSF). We then modified the number and weighting of WBA combinations to optimize reliability with fewer assessments. RESULTS We analyzed 67,405 assessments from 1,837 trainees and 4,145 assessors. We assumed acceptable reliability for interim (intermediate stakes) and final (high stakes) decisions of 0.7 and 0.8, respectively. When every assessment type carries the same weighting, 12 assessments allowed the 0.7 threshold to be reached and 20 were required for reliability to reach 0.8, depending on the combination of WBA types. If the weighting of the assessments is optimized, acceptable reliability for interim and final decisions is possible with nine (e.g., two DOPS, three CbD, two mini-CEX, two MSF) and 15 (e.g., two DOPS, eight CbD, three mini-CEX, two MSF) assessments, respectively. CONCLUSIONS Reliability is an important factor to consider when designing assessments, and measuring composite reliability can allow the selection of a WBA portfolio with adequate reliability to provide evidence for defensible decisions on trainee progression.
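The trade-off between weights, counts, and the reliability thresholds can be illustrated with a simplified composite-reliability calculation. The Python sketch below assumes (our simplification, not the paper's generalizability model) that every WBA type measures the same true score with independent errors, so that R = var_p / (var_p + sum_i w_i^2 * var_e_i / n_i) for weights summing to 1; all variance components, weights, and assessment counts are invented.

```python
# Hedged sketch: composite reliability of a weighted WBA portfolio under
# the simplifying assumptions stated above. Numbers are illustrative only.

def composite_reliability(var_p, portfolio):
    """portfolio: list of (weight, error_variance, n_assessments)."""
    assert abs(sum(w for w, _, _ in portfolio) - 1.0) < 1e-9
    composite_error = sum(w**2 * ve / n for w, ve, n in portfolio)
    return var_p / (var_p + composite_error)

var_p = 0.25  # shared universe-score (trainee) variance
# (weight, error variance, count) for mini-CEX, DOPS, CbD, MSF
portfolio = [(0.3, 0.9, 3), (0.2, 1.1, 2), (0.3, 0.8, 2), (0.2, 0.5, 2)]
print(f"composite reliability = {composite_reliability(var_p, portfolio):.2f}")
# -> 0.72 with these toy values: above the 0.7 interim threshold with
#    only nine assessments, echoing the effect of optimized weighting.
```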
Affiliation(s)
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, VIC, Australia
- Joyce M W Moonen-van Loon
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Brian Jolly
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Newcastle, NSW, Australia
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand
- Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
259. Weber K, Carter B, Jenkins G, Jamieson J. A dietetic clinical educator enhances the experience and assessment of clinical placement. Nutr Diet 2018;76:486-492. [PMID: 30393933] [DOI: 10.1111/1747-0080.12497]
Abstract
AIM The aim of this study was to evaluate the impact of a Clinical Educator model on the learning experience and environment for students, preceptors and managers. METHODS A Clinical Educator position was established for the 10-week dietetic clinical placement at Edith Cowan University. The Clinical Educator was responsible for overseeing the placement and assisting in the supervision of students. A qualitative descriptive approach using focus groups with purposive sampling explored the research question. Students (n = 10), preceptors (n = 21) and managers (n = 3) participated in separate focus groups. Data were thematically analysed with consideration given to participant and focus group commonalities and differences. RESULTS The findings revealed that the Clinical Educator (i) reduced the logistical burden of student placements and improved time efficiency; (ii) facilitated student assessment within a programme of assessment; (iii) was uniquely positioned to provide support and enhance student confidence; and (iv) enhanced capacity to manage underperforming and challenging students. CONCLUSIONS The Clinical Educator model increased student confidence, facilitated quality assessment and supported the management of underperforming students. This was achieved by reducing the burden of clinical placements, facilitating effective and timely communication between stakeholders and supporting the establishment of meaningful relationships which enriched learning. The results highlight the importance of the people involved in placement to facilitate a positive student learning environment and high quality assessment.
Affiliation(s)
- Katrina Weber
- School of Medical and Health Sciences, Edith Cowan University, Murdoch, Western Australia, Australia
- Dietetics Department, Fiona Stanley Hospital, Murdoch, Western Australia, Australia
- Brie Carter
- School of Medical and Health Sciences, Edith Cowan University, Murdoch, Western Australia, Australia
- Joondalup Health Campus, Dietetics Department, Joondalup, Western Australia, Australia
- Gemma Jenkins
- School of Medical and Health Sciences, Edith Cowan University, Murdoch, Western Australia, Australia
- Janica Jamieson
- School of Medical and Health Sciences, Edith Cowan University, Murdoch, Western Australia, Australia
260. Ross S, Binczyk NM, Hamza DM, Schipper S, Humphries P, Nichols D, Donoff MG. Association of a Competency-Based Assessment System With Identification of and Support for Medical Residents in Difficulty. JAMA Netw Open 2018;1:e184581. [PMID: 30646360] [PMCID: PMC6324593] [DOI: 10.1001/jamanetworkopen.2018.4581]
Abstract
IMPORTANCE Competency-based medical education is now established in health professions training. However, critics stress that there is a lack of published outcomes for competency-based medical education or competency-based assessment tools. OBJECTIVE To determine whether competency-based assessment is associated with better identification of and support for residents in difficulty. DESIGN, SETTING, AND PARTICIPANTS This cohort study of secondary data from archived files on 458 family medicine residents (2006-2008 and 2010-2016) was conducted between July 5, 2016, and March 2, 2018, using a large, urban family medicine residency program in Canada. EXPOSURES Introduction of the Competency-Based Achievement System (CBAS). MAIN OUTCOMES AND MEASURES Proportion of residents (1) with at least 1 performance or professionalism flag, (2) receiving flags on multiple distinct rotations, (3) classified as in difficulty, and (4) with flags addressed by the residency program. RESULTS Files from 458 residents were reviewed (pre-CBAS: n = 163; 81 [49.7%] women; 90 [55.2%] aged >30 years; 105 [64.4%] Canadian medical graduates; post-CBAS: n = 295; 144 [48.8%] women; 128 [43.4%] aged >30 years; 243 [82.4%] Canadian medical graduates). A significant reduction in the proportion of residents receiving at least 1 flag during training after CBAS implementation was observed (0.38; 95% CI, 0.377-0.383), as well as a significant decrease in the number of distinct rotations during which residents received flags on summative assessments (0.24; 95% CI, 0.237-0.243). Estimates of the proportion of residents in difficulty after CBAS ranged from 0.13 (95% CI, 0.128-0.132) to 0.17 (95% CI, 0.168-0.172), depending on the strictness of the criteria defining a resident in difficulty. Furthermore, there was a significant increase in narrative documentation that a flag was discussed with the resident between the pre-CBAS and post-CBAS conditions (0.18; 95% CI, 0.178-0.183). CONCLUSIONS AND RELEVANCE The CBAS approach to assessment appeared to be associated with better identification of residents in difficulty, facilitating the program's ability to address learners' deficiencies in competence. After implementation of CBAS, residents experiencing challenges were better supported and their deficiencies did not recur on later rotations. A key argument for shifting to competency-based medical education is to change assessment approaches; these findings suggest that competency-based assessment may be useful.
Affiliation(s)
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Natalia M. Binczyk
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Deena M. Hamza
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Shirley Schipper
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Paul Humphries
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Darren Nichols
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
- Michel G. Donoff
- Department of Family Medicine, University of Alberta, Edmonton, Alberta, Canada
261. Carr SE, Celenza A, Mercer AM, Lake F, Puddey IB. Predicting performance of junior doctors: Association of workplace based assessment with demographic characteristics, emotional intelligence, selection scores, and undergraduate academic performance. Medical Teacher 2018;40:1175-1182. [PMID: 29355068] [DOI: 10.1080/0142159x.2018.1426840]
Abstract
INTRODUCTION Predicting the workplace performance of junior doctors from data available before entry to or during medical school is difficult, and the available evidence is limited. This study explored the association between selected predictor variables and workplace based performance in junior doctors during their first postgraduate year. METHODS Two cohorts of medical students (n = 200) from one university in Western Australia participated in the longitudinal study. Pearson correlation coefficients and multivariate analyses utilizing linear regression were used to assess the relationships between performance on the Junior Doctor Assessment Tool (JDAT) and its sub-components with demographic characteristics, selection scores for medical school entry, emotional intelligence, and undergraduate academic performance. RESULTS Grade Point Average (GPA) at the completion of undergraduate studies had the most significant association with better performance on the overall JDAT and each subscale. Increased age was a negative predictor of junior doctor performance on the Clinical management subscale, and understanding emotion was a predictor of performance on the JDAT Communication subscale. Secondary school performance, measured by the Tertiary Entry Rank score on entry to medical school, predicted GPA but not junior doctor performance. DISCUSSION The GPA, as a composite measure of ability and performance in medical school, is associated with junior doctor assessment scores. Using this variable to identify students at risk of difficulty could assist planning for appropriate supervision, support, and training for medical graduates transitioning to the workplace.
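As a concrete illustration of the analytic approach named in the Methods (Pearson correlations plus multivariate linear regression), here is a short Python sketch on synthetic data; the variable names, effect sizes, and noise level are invented stand-ins for the study's measures.

```python
# Hedged sketch: correlation and multivariate OLS of a JDAT-like outcome
# on GPA and age, using synthetic data (all values invented).
import numpy as np

rng = np.random.default_rng(0)
n = 200
gpa = rng.normal(70, 5, n)                 # final undergraduate GPA
age = rng.normal(26, 3, n)                 # age at graduation
jdat = 0.6 * gpa - 0.4 * age + rng.normal(0, 4, n)  # toy JDAT score

r = np.corrcoef(gpa, jdat)[0, 1]           # bivariate Pearson r
X = np.column_stack([np.ones(n), gpa, age])
beta, *_ = np.linalg.lstsq(X, jdat, rcond=None)     # multivariate OLS
print(f"r(GPA, JDAT) = {r:.2f}; intercept/GPA/age betas = {beta.round(2)}")
```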
Affiliation(s)
- Sandra E Carr
- Division of Health Professions Education, School of Allied Health, The University of Western Australia, Perth, Australia
- Antonio Celenza
- UWA Medical School, The University of Western Australia, Perth, Australia
- Annette M Mercer
- Division of Health Professions Education, School of Allied Health, The University of Western Australia, Perth, Australia
- Fiona Lake
- UWA Medical School, The University of Western Australia, Perth, Australia
- Ian B Puddey
- UWA Medical School, The University of Western Australia, Perth, Australia
262. Kinnear B, Warm EJ, Hauer KE. Twelve tips to maximize the value of a clinical competency committee in postgraduate medical education. Medical Teacher 2018;40:1110-1115. [PMID: 29944025] [DOI: 10.1080/0142159x.2018.1474191]
Abstract
Medical education has shifted to a competency-based paradigm, leading to calls for improved learner assessment methods and validity evidence for how assessment data are interpreted. Clinical competency committees (CCCs) use the collective input of multiple people to improve the validity and reliability of decisions made and actions taken based on assessment data. Significant heterogeneity in CCC structure and function exists across postgraduate medical education programs and specialties, and while there is no "one-size-fits-all" approach, there are ways to maximize value for learners and programs. This paper collates available evidence and the authors' experiences to provide practical tips on CCC purpose, membership, processes, and outputs. These tips can benefit programs looking to start a CCC and those that are improving their current CCC processes.
Affiliation(s)
- Benjamin Kinnear
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Eric J Warm
- Richard W. Vilter Professor of Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Karen E Hauer
- Medicine, University of California, San Francisco School of Medicine, San Francisco, CA, USA
263. Al Askar BA, Al Sweleh FS, Al Wasill EI, Amin Z. Restructuring Saudi Board in Restorative Dentistry (SBRD) curriculum using CanMEDS competency. Medical Teacher 2018;40:S30-S36. [PMID: 29792543] [DOI: 10.1080/0142159x.2018.1469740]
Abstract
OBJECTIVE The purpose of this paper is to describe the process of adopting the Canadian Medical Education Directions for Specialists (CanMEDS) 2015 competency framework in a dental specialty program to reconstruct the Saudi Board in Restorative Dentistry (SBRD) curriculum, and to disseminate the lessons learned. METHOD AND DEVELOPMENT PROCESS Curriculum development began with the selection of the SBRD curriculum committee and a review of the CanMEDS framework. The committee conducted a needs assessment among the stakeholders and adopted the CanMEDS 2015 competencies through a careful process. A modeled curriculum was developed after taking feedback, reviewing the existing literature, and considering the unique context of dentistry. CURRICULUM Several unique features are incorporated. For example, milestones and a continuum of learning were developed to enable residents to develop competencies at different stages (transition to discipline, foundation of discipline, and core of discipline). Academic activities were restructured to encourage interactive, student-centered approaches, teamwork, intellectual curiosity, and scholarship. Learning outcomes are integrated throughout several modules. Many formative assessment tools were adopted to promote learning and evaluate clinical skills. CONCLUSIONS This is the first published example of adopting the CanMEDS competency framework in a dental specialty program. The success of developing the SBRD curriculum has encouraged other dental specialties to adopt the CanMEDS 2015 framework for their own curricula.
Affiliation(s)
- Fahad Saleh Al Sweleh
- Dental Clinics, College of Dentistry, King Saud University, Riyadh, Kingdom of Saudi Arabia
- Zubair Amin
- Department of Pediatrics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
264. Curran VR, Deacon D, Schulz H, Stringer K, Stone CN, Duggan N, Coombs-Thorne H. Evaluation of the Characteristics of a Workplace Assessment Form to Assess Entrustable Professional Activities (EPAs) in an Undergraduate Surgery Core Clerkship. Journal of Surgical Education 2018;75:1211-1222. [PMID: 29609893] [DOI: 10.1016/j.jsurg.2018.02.013]
Abstract
OBJECTIVE Entrustable Professional Activities (EPAs) are explicit, directly observable tasks requiring the demonstration of specific knowledge, skills, and behaviors that learners are expected to perform without direct supervision once they have gained sufficient competence. Undergraduate-level implementation of EPAs is relatively new. We examined the characteristics of a workplace assessment form (clinic card) as part of a formative programmatic assessment process for EPAs in a core undergraduate surgery rotation. DESIGN A clinic card was introduced to assess progression towards EPA achievement in the clerkship curriculum phase. Students completing their core eight-week clerkship surgery rotation submitted at least one clinic card per week. We compiled assessment scores for the 2015 to 2016 academic year, in which EPAs were introduced, and analyzed relationships between scores and time, EPA, training site, and assessor role. We surveyed preceptors and students, and conducted a focus group with clinical discipline coordinators of all core rotations. SETTING This study took place at the Faculty of Medicine, Memorial University in St. John's, Newfoundland, Canada. PARTICIPANTS Third-year medical students (n = 79) who completed their core eight-week surgery clerkship rotation during the 2015 to 2016 academic year, preceptors, and clinical discipline coordinators participated in this study. RESULTS EPAs reflecting tasks commonly performed by students were more likely to be assessed. EPAs frequently observed during preceptor-student encounters had higher entrustment ratings. Most EPAs showed increased entrustment scores over time, and there were no significant differences in ratings between teaching sites or between preceptors and residents. Survey and focus group feedback suggest clinic cards fostered direct observation by preceptors and promoted constructive feedback on clinical tasks. A binary rating scale (entrustable/pre-entrustable) was not educationally beneficial. CONCLUSIONS The findings support the feasibility, utility, and catalytic and educational benefits of clinic cards in assessing EPAs in a core surgery rotation in undergraduate medical education.
Affiliation(s)
- Vernon R Curran
- Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada.
- Diana Deacon
- Medical Education Scholarship Centre (MESC), Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada
- Henry Schulz
- Faculty of Education, Memorial University, St. John's, Newfoundland, Canada
- Katherine Stringer
- Discipline of Family Medicine, Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada
- Craig N Stone
- Discipline of Surgery, Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada
- Norah Duggan
- Discipline of Family Medicine, Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada
- Heidi Coombs-Thorne
- Medical Education Scholarship Centre (MESC), Faculty of Medicine, Memorial University, St. John's, Newfoundland, Canada
265. Rotthoff T. Standing up for Subjectivity in the Assessment of Competencies. GMS Journal for Medical Education 2018;35:Doc29. [PMID: 30186939] [PMCID: PMC6120153] [DOI: 10.3205/zma001175]
Affiliation(s)
- Thomas Rotthoff
- Heinrich-Heine-University Düsseldorf, Medical Faculty, Office of the dean of studies, Düsseldorf, Germany
266. Cutrer WB, Atkinson HG, Friedman E, Deiorio N, Gruppen LD, Dekhtyar M, Pusic M. Exploring the characteristics and context that allow Master Adaptive Learners to thrive. Medical Teacher 2018;40:791-796. [PMID: 30033795] [DOI: 10.1080/0142159x.2018.1484560]
Abstract
Because change is ubiquitous in healthcare, clinicians must constantly make adaptations to their practice to provide the highest quality care to patients. In a previous article, Cutrer et al. described a metacognitive approach to learning based on self-regulation, which facilitates the development of the Master Adaptive Learner (MAL). The MAL process helps individuals to cultivate and demonstrate adaptive expertise, allowing them to investigate new concepts (learn) and create new solutions (innovate). An individual's ability to learn in this manner is driven by several internal characteristics and is also impacted by numerous aspects of their context. In this article, the authors examine the important internal and contextual factors that can impede or foster Master Adaptive Learning.
Affiliation(s)
- William B Cutrer
- Department of Pediatrics, Vanderbilt University School of Medicine, Nashville, TN, USA
- Nicole Deiorio
- Department of Emergency Medicine, Oregon Health and Science University, Portland, OR, USA
267. Boscardin C, Fergus KB, Hellevig B, Hauer KE. Twelve tips to promote successful development of a learner performance dashboard within a medical education program. Medical Teacher 2018;40:855-861. [PMID: 29117744] [DOI: 10.1080/0142159x.2017.1396306]
Abstract
Easily accessible and interpretable performance data constitute critical feedback for learners that facilitates informed self-assessment and learning planning. To provide this feedback, there has been a proliferation of educational dashboards in recent years. An educational (learner) dashboard systematically delivers timely and continuous feedback on performance and can provide easily visualized and interpreted performance data. In this paper, we provide practical tips for developing a functional, user-friendly individual learner performance dashboard, together with a review of the literature on dashboard development, assessment theory, and users' perspectives. Considering key design principles and maximizing current technological advances in data visualization techniques can increase dashboard utility and enhance the user experience. By bridging current technology with assessment strategies that support learning, educators can continue to improve the field of learning analytics and the design of information management tools such as dashboards in support of improved learning outcomes.
268. Hauer KE, Vandergrift J, Lipner RS, Holmboe ES, Hood S, McDonald FS. National Internal Medicine Milestone Ratings: Validity Evidence From Longitudinal Three-Year Follow-up. Academic Medicine 2018;93:1189-1204. [PMID: 29620673] [DOI: 10.1097/acm.0000000000002234]
Abstract
PURPOSE To evaluate validity evidence for internal medicine milestone ratings across programs for three resident cohorts by quantifying "not assessable" ratings; reporting mean longitudinal milestone ratings for individual residents; and correlating medical knowledge ratings across training years with certification examination scores to determine the predictive validity of milestone ratings for certification outcomes. METHOD This retrospective study examined milestone ratings for postgraduate year (PGY) 1-3 residents in U.S. internal medicine residency programs. Data sources included milestone ratings, program characteristics, and certification examination scores. RESULTS Among 35,217 participants, the percentage with "not assessable" ratings decreased across years: 1,566 (22.5%) PGY1s in 2013-2014 versus 1,219 (16.6%) in 2015-2016 (P = .01), and 342 (5.1%) PGY3s in 2013-2014 versus 177 (2.6%) in 2015-2016 (P = .04). For individual residents with three years of ratings, mean milestone ratings increased from around 3 (behaviors of an early learner or advancing resident) in PGY1 (ranging from a mean of 2.73 to 3.19 across subcompetencies) to around 4 (ready for unsupervised practice) in PGY3 (mean of 4.00 to 4.22 across subcompetencies, P < .001 for all subcompetencies). For each increase of 0.5 units in the two medical knowledge (MK1, MK2) subcompetency ratings, the difference in examination scores for PGY3s was 19.5 points for MK1 (P < .001) and 19.0 for MK2 (P < .001). CONCLUSIONS These findings provide evidence of the validity of the milestones by showing how training programs have applied them over time and how milestones predict other training outcomes.
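To make the medical-knowledge result concrete, the arithmetic below converts the reported 19.5-points-per-0.5-unit figure into a per-unit slope and a predicted examination-score gap between two hypothetical PGY3 milestone profiles; the example ratings are invented, and only the 19.5-point figure comes from the abstract.

```python
# Hedged sketch: per-unit slope implied by the reported MK1 effect and the
# predicted score gap for two hypothetical PGY3 ratings (3.5 vs 4.5).

points_per_half_unit = 19.5                 # reported MK1 effect
slope = points_per_half_unit / 0.5          # 39.0 points per milestone unit

rating_low, rating_high = 3.5, 4.5          # hypothetical MK1 ratings
gap = slope * (rating_high - rating_low)
print(f"predicted exam-score gap: {gap:.1f} points")  # -> 39.0
```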
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean for assessment and professor, Department of Medicine, University of California at San Francisco, San Francisco, California. J. Vandergrift is a health services researcher, American Board of Internal Medicine (ABIM), Philadelphia, Pennsylvania. R.S. Lipner is senior vice president of assessment and research, ABIM, Philadelphia, Pennsylvania. E.S. Holmboe is senior vice president of milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. S. Hood is director of initial certification, ABIM, Philadelphia, Pennsylvania. F.S. McDonald is senior vice president of academic and medical affairs, ABIM, Philadelphia, Pennsylvania
269. Rietmeijer CBT, Huisman D, Blankenstein AH, de Vries H, Scheele F, Kramer AWM, Teunissen PW. Patterns of direct observation and their impact during residency: general practice supervisors' views. Medical Education 2018;52:981-991. [PMID: 30043397] [PMCID: PMC6120450] [DOI: 10.1111/medu.13631]
Abstract
CONTEXT Direct observation (DO) of residents' performance, despite the importance that is ascribed to it, does not readily fit in with the practice of postgraduate medical education (PGME); it is infrequent and the quality of observation may be poor in spite of ongoing efforts towards improvement. In recent literature, DO is mostly portrayed as a means to gather information on the performance of residents for purposes of feedback and assessment. The role of DO in PGME is likely to be more complex and poorly understood in the era of outcome-based education. By exploring the possible complexity of DO in workplace learning, our research aims to contribute to a better use of DO in the practice of PGME. METHODS Constructivist grounded theory informed our data collection and analysis. Data collection involved focus group sessions with supervisors in Dutch general practice who were invited to discuss the manifestations, meanings and effects of DO of technical skills. Theoretical sufficiency was achieved after four focus groups, with a total of 28 participants being included. RESULTS We found four patterns of DO of technical skills: initial planned DO sessions; resident-initiated ad hoc DO; supervisor-initiated ad hoc DO, and continued planned DO sessions. Different patterns of DO related to varying meanings, such as checking or trusting, and effects, such as learning a new skill or experiencing emotional discomfort, all of them concerning the training relationship, patient safety or residents' learning. CONCLUSIONS Direct observation, to supervisors, means much more than gathering information for purposes of feedback and assessment. Planned DO sessions are an important routine during the initiation phase of a training relationship. Continued planned bidirectional DO sessions, although infrequently practised, potentially combine most benefits with least side-effects of DO. Ad hoc DO, although much relied upon, is often hampered by internal tensions in supervisors, residents or both.
Affiliation(s)
- Chris B T Rietmeijer
- Department of General Practice and Elderly Care Medicine, VU University Medical Centre, Amsterdam, The Netherlands
- Daniëlle Huisman
- Department of General Practice and Elderly Care Medicine, VU University Medical Centre, Amsterdam, The Netherlands
- Annette H Blankenstein
- Department of General Practice and Elderly Care Medicine, VU University Medical Centre, Amsterdam, The Netherlands
- Henk de Vries
- Department of General Practice and Elderly Care Medicine, VU University Medical Centre, Amsterdam, The Netherlands
- Fedde Scheele
- School of Medical Sciences, VU University Medical Centre, Amsterdam, The Netherlands
- Athena Institute for Transdisciplinary Research, VU University, Amsterdam, The Netherlands
- Anneke W M Kramer
- Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, The Netherlands
- Pim W Teunissen
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
270. Sargeant J, Lockyer JM, Mann K, Armson H, Warren A, Zetkulic M, Soklaridis S, Könings KD, Ross K, Silver I, Holmboe E, Shearer C, Boudreau M. The R2C2 Model in Residency Education: How Does It Foster Coaching and Promote Feedback Use? Academic Medicine 2018;93:1055-1063. [PMID: 29342008] [DOI: 10.1097/acm.0000000000002131]
Abstract
PURPOSE The authors previously developed and tested a reflective model for facilitating performance feedback for practice improvement, the R2C2 model. It consists of four phases: relationship building, exploring reactions, exploring content, and coaching. This research studied the use and effectiveness of the model across different residency programs and the factors that influenced its effectiveness and use. METHOD From July 2014 to October 2016, case study methodology was used to study R2C2 model use and the influence of context on use within and across five cases. Five residency programs (family medicine, psychiatry, internal medicine, surgery, and anesthesia) from three countries (Canada, the United States, and the Netherlands) were recruited. Data collection included audiotaped site assessment interviews, feedback sessions, and debriefing interviews with residents and supervisors, and completed learning change plans (LCPs). Content, thematic, template, and cross-case analysis were conducted. RESULTS An average of nine resident-supervisor dyads per site were recruited. The R2C2 feedback model, used with an LCP, was reported to be effective in engaging residents in a reflective, goal-oriented discussion about performance data, supporting coaching, and enabling collaborative development of a change plan. Use varied across cases, influenced by six general factors: supervisor characteristics, resident characteristics, qualities of the resident-supervisor relationship, assessment approaches, program culture and context, and supports provided by the authors. CONCLUSIONS The R2C2 model was reported to be effective in fostering a productive, reflective feedback conversation focused on resident development and in facilitating collaborative development of a change plan. Factors contributing to successful use were identified.
Affiliation(s)
- Joan Sargeant
- J. Sargeant is professor, Continuing Professional Development Program and Division of Medical Education, Faculty of Medicine, Dalhousie University, Halifax, Nova Scotia, Canada. J.M. Lockyer is professor, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada. K. Mann was professor emeritus, Division of Medical Education, Faculty of Medicine, Dalhousie University, Halifax, Nova Scotia, Canada. H. Armson is assistant dean, Continuing Professional Development, and associate professor, Department of Family Medicine, University of Calgary, Calgary, Alberta, Canada. A. Warren is associate professor, Department of Pediatrics, and associate dean, Postgraduate Medical Education, Faculty of Medicine, Dalhousie University, Halifax, Nova Scotia, Canada. M. Zetkulic is assistant professor, Seton Hall School of Medicine, Director of Medical Education, Department of Medicine, Hackensack University Hospital, Hackensack, New Jersey. S. Soklaridis is assistant professor, Department of Psychiatry, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada. K.D. Könings is associate professor, Department of Educational Development & Research and School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. K. Ross is research associate, Department of Evaluation, Research and Development, American Board of Internal Medicine, Philadelphia, Pennsylvania. I. Silver is vice president of education, Centre for Addiction and Mental Health, and professor, Department of Psychiatry, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada. E. Holmboe is senior vice president of milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois, adjunct professor of medicine, Yale University, New Haven, Connecticut, and adjunct professor, Uniformed Services University of the Health Sciences, Bethesda, Maryland. C. Shearer is evaluation specialist, Postgraduate Medical Education, Dalhousie University, Halifax, Nova Scotia, Canada. M. Boudreau is evaluation specialist, Continuing Professional Development, Dalhousie University, Halifax, Nova Scotia, Canada
271. Stiefel F, de Vries M, Bourquin C. Core components of Communication Skills Training in oncology: A synthesis of the literature contrasted with consensual recommendations. Eur J Cancer Care (Engl) 2018;27:e12859. [PMID: 29873149] [DOI: 10.1111/ecc.12859]
Abstract
This systematic review synthesises the literature on Communication Skills Training (CST) programmes for oncology professionals to identify their core components and compare them with the recommendations formulated in a position paper based on a European expert consensus meeting. A systematic literature search was conducted using MEDLINE (OVID and PUBMED), CINAHL, EMBASE, PSYCHINFO, Web of Science and the Cochrane Library. The analytic approach relied on an a priori framework based on the position paper's recommendations, generating several themes. Forty-nine articles were included. The CST programmes reported between 2010 and 2016 were heterogeneous. Some recommendations, especially those regarding content and pedagogic tools, were followed by most providers, while others, such as setting, objectives and participants, were not. This synthesis raises questions on how CST programmes are conceived and how they could or should be conceived in future. While medicine, especially clinical communication, is socially and culturally embedded, some recommendations regarding CST programmes seem to be universally valuable, helping to ensure quality and enhanced credibility, and thus endorsement and sustained implementation, of CST programmes in the oncology setting.
Affiliation(s)
- Friedrich Stiefel
- Psychiatric Liaison Service, Lausanne University Hospital, Lausanne, Switzerland
- Mirjam de Vries
- Psychiatric Liaison Service, Lausanne University Hospital, Lausanne, Switzerland
- Céline Bourquin
- Psychiatric Liaison Service, Lausanne University Hospital, Lausanne, Switzerland
272. Marceau M, Gallagher F, Young M, St-Onge C. Validity as a social imperative for assessment in health professions education: a concept analysis. Medical Education 2018;52:641-653. [PMID: 29878449] [DOI: 10.1111/medu.13574]
Abstract
CONTEXT Assessment can have far-reaching consequences for future health care professionals and for society. Thus, it is essential to establish the quality of assessment. Few modern approaches to validity are well situated to ensure the quality of complex assessment approaches, such as authentic and programmatic assessments. Here, we explore and delineate the concept of validity as a social imperative in the context of assessment in health professions education (HPE) as a potential framework for examining the quality of complex and programmatic assessment approaches. METHODS We conducted a concept analysis using Rodgers' evolutionary method to describe the concept of validity as a social imperative in the context of assessment in HPE. Supported by an academic librarian, we developed and executed a search strategy across several databases for literature published between 1995 and 2016. From a total of 321 citations, we identified 67 articles that met our inclusion criteria. Two team members analysed the texts using a specified approach to qualitative data analysis. Consensus was achieved through full team discussions. RESULTS Attributes that characterise the concept were: (i) demonstration of the use of evidence considered credible by society to document the quality of assessment; (ii) validation embedded through the assessment process and score interpretation; (iii) documented validity evidence supporting the interpretation of the combination of assessment findings, and (iv) demonstration of a justified use of a variety of evidence (quantitative and qualitative) to document the quality of assessment strategies. CONCLUSIONS The emerging concept of validity as a social imperative highlights some areas of focus in traditional validation frameworks, whereas some characteristics appear unique to HPE and move beyond traditional frameworks. The study reflects the importance of embedding consideration for society and societal concerns throughout the assessment and validation process, and may represent a potential lens through which to examine the quality of complex and programmatic assessment approaches.
Collapse
Affiliation(s)
- Mélanie Marceau
- Department of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
| | - Frances Gallagher
- Department of Nursing, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
| | - Meredith Young
- Department of Medicine and Center for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
| | - Christina St-Onge
- Department of Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, Quebec, Canada
| |
Collapse
|
273
|
Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners' perceptions within programmatic assessment. MEDICAL EDUCATION 2018; 52:654-663. [PMID: 29572920 PMCID: PMC6001565 DOI: 10.1111/medu.13532] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Revised: 12/12/2017] [Accepted: 01/08/2018] [Indexed: 05/12/2023]
Abstract
OBJECTIVES Within programmatic assessment, the ambition is to simultaneously optimise the feedback and the decision-making function of assessment. In this approach, individual assessments are intended to be low stakes. In practice, however, learners often perceive assessments designed to be low stakes as high stakes. In this study, we explored how learners perceive assessment stakes within programmatic assessment and which factors influence these perceptions. METHODS Twenty-six learners from three countries and five programmes, ranging from undergraduate to postgraduate medical education, were interviewed. The interviews explored learners' experience with and perception of assessment stakes. An open and qualitative approach to data gathering and analysis, inspired by constructivist grounded theory, was used to analyse the data and reveal underlying mechanisms influencing learners' perceptions. RESULTS Learners' sense of control emerged from the analysis as key to understanding learners' perception of assessment stakes. Several design factors of the assessment programme provided or hindered learners' opportunities to exercise control over the assessment experience, mainly the opportunities to influence assessment outcomes, to collect evidence and to improve. Teacher-learner relationships characterised by learner autonomy, in which learners felt safe, were important for learners' perceived ability to exercise control and to use assessment to support their learning. CONCLUSIONS Knowledge of the factors that influence the perception of assessment stakes can help in designing effective assessment programmes in which assessment supports learning. Learners' opportunities for agency, a supportive programme structure and the role of the teacher are particularly powerful mechanisms for stimulating the learning value of programmatic assessment.
Collapse
Affiliation(s)
- Suzanne Schut
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
| | - Erik Driessen
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
| | | | - Cees van der Vleuten
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
| | - Sylvia Heeneman
- Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
| |
Collapse
|
274
|
Duijn CCMA, Welink LS, Bok HGJ, Ten Cate OTJ. When to trust our learners? Clinical teachers' perceptions of decision variables in the entrustment process. PERSPECTIVES ON MEDICAL EDUCATION 2018; 7:192-199. [PMID: 29713908 PMCID: PMC6002285 DOI: 10.1007/s40037-018-0430-0] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
INTRODUCTION Clinical training programs increasingly use entrustable professional activities (EPAs) as the focus of assessment. However, questions remain about which information should ground decisions to trust learners. This qualitative study aimed to identify decision variables in the workplace that clinical teachers find relevant to the entrustment decision process. The findings can substantiate entrustment decision-making in the clinical workplace. METHODS Focus groups were conducted with medical and veterinary clinical teachers, using the structured consensus method of the Nominal Group Technique to generate decision variables. A ranking was made based on a relevance score assigned by the clinical teachers to the different decision variables. Field notes, audio recordings and flip chart lists were analyzed and subsequently translated and, as a form of axial coding, merged into one list, combining decision variables that were similar in meaning. RESULTS Lists of 11 and 17 decision variables were acknowledged as relevant by the medical and veterinary teacher groups, respectively. The focus groups yielded 21 unique decision variables that were considered relevant to inform readiness to perform a clinical task at a designated level of supervision. The decision variables consisted of skills, generic qualities, characteristics, previous performance or other information. We were able to group the decision variables into five categories: ability, humility, integrity, reliability and adequate exposure. DISCUSSION To entrust a learner to perform a task at a specific level of supervision, a supervisor needs information to support such a judgement. This trust cannot be grounded in a single case at a single moment of assessment; it requires different variables and multiple sources of information. This study provides an overview of decision variables that substantiates the multifactorial process of making an entrustment decision.
Collapse
Affiliation(s)
- Chantal C M A Duijn
- Center for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands.
| | - Lisanne S Welink
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
| | - Harold G J Bok
- Center for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
| | - Olle T J Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
| |
Collapse
|
275
|
Berendonk C, Rogausch A, Gemperli A, Himmel W. Variability and dimensionality of students' and supervisors' mini-CEX scores in undergraduate medical clerkships - a multilevel factor analysis. BMC MEDICAL EDUCATION 2018; 18:100. [PMID: 29739387 PMCID: PMC5941409 DOI: 10.1186/s12909-018-1207-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 04/20/2018] [Indexed: 05/28/2023]
Abstract
BACKGROUND The mini clinical evaluation exercise (mini-CEX) - a tool used to assess student-patient encounters - is increasingly being applied as a learning device to foster clinical competencies. Although the importance of eliciting self-assessment for learning is widely acknowledged, little is known about the validity of self-assessed mini-CEX scores. The aims of this study were (1) to explore the variability of medical students' self-assessed mini-CEX scores and to compare them with the scores obtained from their clinical supervisors, and (2) to ascertain whether learners' self-assessed mini-CEX scores represent a global dimension of clinical competence or discrete clinical skills. METHODS In year 4, medical students conducted one to three mini-CEX per clerkship in gynaecology, internal medicine, paediatrics, psychiatry and surgery. Students and clinical supervisors rated the students' performance on a 10-point scale (1 = great need for improvement; 10 = little need for improvement) in the six domains history taking, physical examination, counselling, clinical judgement, organisation/efficiency and professionalism, as well as in overall performance. Correlations between students' self-ratings and ratings from clinical supervisors were calculated (Pearson's correlation coefficient) based on averaged scores per domain and overall. To investigate the dimensionality of the mini-CEX domain scores, we performed factor analyses using linear mixed models that accounted for the multilevel structure of the data. RESULTS A total of 1773 mini-CEX from 164 students were analysed. Mean scores for the six domains ranged from 7.5 to 8.3 (student ratings) and from 8.8 to 9.3 (supervisor ratings). Correlations between the ratings of students and supervisors for the different domains varied between r = 0.29 and 0.51 (all p < 0.0001). Mini-CEX domain scores revealed a single-factor solution for both students' and supervisors' ratings, with high loadings of all six domains between 0.58 and 0.83 (students) and 0.58 and 0.84 (supervisors). CONCLUSIONS These findings call into question the validity of mini-CEX domain scores for formative purposes, as neither the scores obtained from students nor those obtained from clinical supervisors revealed the specific strengths and weaknesses of individual students' clinical competence.
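The two analytic moves described in this abstract lend themselves to a compact illustration. The sketch below is a minimal assumption-laden example, not the authors' code: the file name and column names are hypothetical, and it ignores the multilevel nesting of encounters within students and raters that the authors modelled with linear mixed models. It correlates averaged student and supervisor domain scores and fits a naive one-factor model to the supervisor ratings.

```python
# Minimal sketch (not the authors' code): domain-level correlations between
# self- and supervisor ratings, plus a naive single-factor check.
# "mini_cex.csv" and its column names are assumptions for illustration.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

DOMAINS = ["history", "exam", "counselling", "judgement",
           "organisation", "professionalism"]

df = pd.read_csv("mini_cex.csv").dropna()  # one row per averaged student record

# (1) Pearson correlations between student and supervisor domain scores.
for d in DOMAINS:
    r, p = pearsonr(df[f"student_{d}"], df[f"supervisor_{d}"])
    print(f"{d:15s} r = {r:.2f} (p = {p:.4f})")

# (2) One-factor model of the six supervisor domain scores; high, uniform
# loadings would be consistent with a single global performance dimension.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(df[[f"supervisor_{d}" for d in DOMAINS]])
print(dict(zip(DOMAINS, fa.components_[0].round(2))))
```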
Collapse
Affiliation(s)
- Christoph Berendonk
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Konsumstrasse 13, 3010 Bern, CH Switzerland
| | - Anja Rogausch
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Konsumstrasse 13, 3010 Bern, CH Switzerland
| | - Armin Gemperli
- Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland
- Swiss Paraplegic Research, Nottwil, Switzerland
| | - Wolfgang Himmel
- Department of General Practice, University Medical Center Göttingen, Göttingen, Germany
| |
Collapse
|
276
|
Schüttpelz-Brauns K, Kadmon M, Kiessling C, Karay Y, Gestmann M, Kämmer JE. Identifying low test-taking effort during low-stakes tests with the new Test-taking Effort Short Scale (TESS) - development and psychometrics. BMC MEDICAL EDUCATION 2018; 18:101. [PMID: 29739405 PMCID: PMC5941641 DOI: 10.1186/s12909-018-1196-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Accepted: 04/20/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND Low-stakes tests are becoming increasingly important in international assessments of educational progress, and the validity of their results is essential, especially as these results are often used for benchmarking. Scores on these tests not only mirror students' ability but also depend on their test-taking effort. One way to obtain more valid scores from participating samples is to identify test-takers with low test-taking effort and to exclude them from further analyses. Self-assessment is a convenient and quick way of measuring test-taking effort. We present the newly developed Test-taking Effort Short Scale (TESS), which comprises three items measuring attainment value/intrinsic value, utility value, and perceived benefits, respectively. METHODS In a multicenter validation study with N = 1837 medical students sitting a low-stakes progress test, we analyzed item and test statistics, including construct and external validity. RESULTS TESS showed very good psychometric properties. We propose an approach using stanine norms to determine a cutoff value for identifying participants with low test-taking effort. CONCLUSION With just three items, TESS is shorter than most established self-assessment scales; it is thus well suited for administration after low-stakes progress testing. However, further studies are necessary to establish its suitability for routine use in assessment outside progress testing.
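As a concrete illustration of the stanine idea, the following sketch converts simulated TESS sum scores to stanines via the standard 4-11-23-40-60-77-89-96 cumulative percentile boundaries and flags the lowest stanines as low effort. Everything here is an assumption for illustration: the authors' actual cutoff and scoring procedure are not reproduced.

```python
# Hedged sketch: stanine assignment for TESS sum scores. The cutoff of
# stanine <= 2 is an illustrative assumption, not the published value.
import numpy as np

STANINE_PERCENTILES = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative % bounds

def to_stanines(scores: np.ndarray) -> np.ndarray:
    cuts = np.percentile(scores, STANINE_PERCENTILES)
    return np.digitize(scores, cuts) + 1  # stanines 1..9

rng = np.random.default_rng(0)
tess_sums = rng.integers(3, 16, size=500)  # simulated 3-item sum scores
stanines = to_stanines(tess_sums)
low_effort = stanines <= 2                 # candidates for exclusion
print(f"{low_effort.mean():.1%} of test-takers flagged as low effort")
```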
Collapse
Affiliation(s)
- Katrin Schüttpelz-Brauns
- Medical Faculty Mannheim at Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
| | - Martina Kadmon
- Carl von Ossietzky University Oldenburg, Carl-von-Ossietzky-Straße 9-11, 26129 Oldenburg, Germany
| | - Claudia Kiessling
- Brandenburg Medical School Theodor Fontane, Fehrbelliner Straße 38, 16816 Neuruppin, Germany
| | - Yassin Karay
- Medical Faculty, University of Cologne, Joseph-Stelzmann-Straße 20 (Building 42), 50931 Cologne, Germany
| | - Margarita Gestmann
- Medical Faculty, University of Duisburg-Essen, Hufelandstraße 55, 45147 Essen, Germany
| | - Juliane E. Kämmer
- AG Progress Test Medizin, Charité Universitätsmedizin Berlin, Hannoversche Straße 19, 10115 Berlin, Germany
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
| |
Collapse
|
277
|
Cookson J, Miller A, Fleet Z. The future of internal medicine: a new curriculum for 2019. Br J Hosp Med (Lond) 2018; 79:298. [PMID: 29727226 DOI: 10.12968/hmed.2018.79.5.298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- John Cookson
- Emeritus Professor and Development Dean, University of Worcester, Worcester WR2 6AJ
| | - Alastair Miller
- Chair of Internal Medicine Committee and Deputy Medical Director, Joint Royal Colleges of Physicians Training Board, London NW1 4LE
| | - Zoë Fleet
- Curriculum and Assessment Manager, Joint Royal Colleges of Physicians Training Board, London
| |
Collapse
|
278
|
Abstract
Clinical skills remain fundamental to the practice of medicine and form a core component of the professional identity of the physician. However, evidence exists to suggest that the practice of some clinical skills is declining, particularly in the United States. A decline in the practice of any skill can lead to a decline in its teaching and assessment, with further decline in practice as a result. Consequently, assessment not only drives the learning of clinical skills, but also their practice. This article summarizes contemporary approaches to clinical skills assessment that, if more widely adopted, could support the maintenance and reinvigoration of bedside clinical skills.
Collapse
Affiliation(s)
- Andrew Elder
- Department of Acute Medicine for Older People, Edinburgh Medical School, Western General Hospital, Crewe Road, Edinburgh EH4 2XU, UK.
| |
Collapse
|
279
|
Christensen MK, Lykkegaard E, Lund O, O'Neill LD. Qualitative analysis of MMI raters' scorings of medical school candidates: A matter of taste? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:289-310. [PMID: 28956195 DOI: 10.1007/s10459-017-9794-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Accepted: 09/20/2017] [Indexed: 05/25/2023]
Abstract
Recent years have seen leading medical educationalists repeatedly call for a paradigm shift in the way we view, value and use subjectivity in assessment. The argument is that subjective expert raters generally bring desired quality, not just noise, to performance evaluations. While several reviews document the psychometric qualities of the Multiple Mini-Interview (MMI), we currently lack qualitative studies examining what we can learn from MMI raters' subjectivity. The present qualitative study therefore investigates rater subjectivity or taste in MMI selection interview. Taste (Bourdieu 1984) is a practical sense, which makes it possible at a pre-reflective level to apply 'invisible' or 'tacit' categories of perception for distinguishing between good and bad. The study draws on data from explorative in-depth interviews with 12 purposefully selected MMI raters. We find that MMI raters spontaneously applied subjective criteria-their taste-enabling them to assess the candidates' interpersonal attributes and to predict the candidates' potential. In addition, MMI raters seemed to share a taste for certain qualities in the candidates (e.g. reflectivity, resilience, empathy, contact, alikeness, 'the good colleague'); hence, taste may be the result of an ongoing enculturation in medical education and healthcare systems. This study suggests that taste is an inevitable condition in the assessment of students' performance. The MMI set-up should therefore make room for MMI raters' taste and their connoisseurship, i.e. their ability to taste, to improve the quality of their assessment of medical school candidates.
Collapse
Affiliation(s)
| | - Eva Lykkegaard
- Centre for Health Sciences Education, Aarhus University, Aarhus, Denmark
| | - Ole Lund
- Centre for Health Sciences Education, Aarhus University, Aarhus, Denmark
| | - Lotte D O'Neill
- SDU Centre for Teaching and Learning, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
280
|
Affiliation(s)
- Rehan Ahmed Khan
- Professor of Surgery and Assistant Dean of Medical Education, Islamic International Medical College, Riphah International University, Islamabad, Pakistan.
| |
Collapse
|
281
|
Chan T, Sebok‐Syer S, Thoma B, Wise A, Sherbino J, Pusic M. Learning Analytics in Medical Education Assessment: The Past, the Present, and the Future. AEM EDUCATION AND TRAINING 2018; 2:178-187. [PMID: 30051086 PMCID: PMC6001721 DOI: 10.1002/aet2.10087] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 01/30/2018] [Indexed: 05/09/2023]
Abstract
With the implementation of competency-based medical education (CBME) in emergency medicine, residency programs will amass substantial amounts of qualitative and quantitative data about trainees' performances. This increased volume of data will challenge traditional processes for assessing trainees and remediating training deficiencies. At the intersection of trainee performance data and statistical modeling lies the field of medical learning analytics. At a local training program level, learning analytics has the potential to assist program directors and competency committees with interpreting assessment data to inform decision making. On a broader level, learning analytics can be used to explore system questions and identify problems that may impact our educational programs. Scholars outside of health professions education have been exploring the use of learning analytics for years and their theories and applications have the potential to inform our implementation of CBME. The purpose of this review is to characterize the methodologies of learning analytics and explore their potential to guide new forms of assessment within medical education.
Collapse
Affiliation(s)
- Teresa Chan
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
| | - Stefanie Sebok‐Syer
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Saskatoon, Saskatchewan, Canada
| | - Brent Thoma
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
| | - Alyssa Wise
- Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY
| | - Jonathan Sherbino
- Faculty of Health Science, Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Ontario, Canada
- McMaster program for Education Research, Innovation, and Theory (MERIT), Hamilton, Ontario, Canada
| | - Martin Pusic
- Department of Emergency Medicine, NYU School of Medicine, New York, NY
Collapse
|
282
|
Bing-You R, Varaklis K, Hayes V, Trowbridge R, Kemp H, McKelvy D. The Feedback Tango: An Integrative Review and Analysis of the Content of the Teacher-Learner Feedback Exchange. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:657-663. [PMID: 28991848 DOI: 10.1097/acm.0000000000001927] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
PURPOSE To conduct an integrative review and analysis of the literature on the content of feedback to learners in medical education. METHOD Following completion of a scoping review in 2016, the authors analyzed a subset of articles published through 2015 describing the analysis of feedback exchange content in various contexts: audiotapes, clinical examination, feedback cards, multisource feedback, videotapes, and written feedback. Two reviewers extracted data from these articles and identified common themes. RESULTS Of the 51 included articles, about half (49%) were published since 2011. Most involved medical students (43%) or residents (43%). A leniency bias was noted in many (37%), as there was frequently reluctance to provide constructive feedback. More than one-quarter (29%) indicated the feedback was low in quality (e.g., too general, limited amount, no action plans). Some (16%) indicated faculty dominated conversations, did not use feedback forms appropriately, or provided inadequate feedback, even after training. Multiple feedback tools were used, with some articles (14%) describing varying degrees of use, completion, or legibility. Some articles (14%) noted the impact of the gender of the feedback provider or learner. CONCLUSIONS The findings reveal that the exchange of feedback is troubled by low-quality feedback, leniency bias, faculty deficient in feedback competencies, challenges with multiple feedback tools, and gender impacts. Using the tango dance form as a metaphor for this dynamic partnership, the authors recommend ways to improve feedback for teachers and learners willing to partner with each other and engage in the complexities of the feedback exchange.
Collapse
Affiliation(s)
- Robert Bing-You
- R. Bing-You is professor, Tufts University School of Medicine, Boston, Massachusetts, and vice president for medical education, Maine Medical Center, Portland, Maine. K. Varaklis is clinical associate professor, Tufts University School of Medicine, Boston, Massachusetts, and designated institutional official, Maine Medical Center, Portland, Maine. V. Hayes is clinical assistant professor, Tufts University School of Medicine, Boston, Massachusetts, and faculty member, Department of Family Medicine, Maine Medical Center, Portland, Maine. R. Trowbridge is associate professor, Tufts University School of Medicine, Boston, Massachusetts, and director of undergraduate medical education, Department of Medicine, Maine Medical Center, Portland, Maine. H. Kemp is medical librarian, Maine Medical Center, Portland, Maine. D. McKelvy is manager of library and knowledge services, Maine Medical Center, Portland, Maine
| | | | | | | | | | | |
Collapse
|
283
|
Griffin B, Bayl-Smith P, Hu W. Predicting patterns of change and stability in student performance across a medical degree. MEDICAL EDUCATION 2018; 52:438-446. [PMID: 29349791 DOI: 10.1111/medu.13508] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Revised: 10/02/2017] [Accepted: 11/13/2017] [Indexed: 05/17/2023]
Abstract
CONTEXT Evidence of predictive validity is essential for making robust selection decisions in high-stakes contexts such as medical student selection. Currently available evidence is limited to the prediction of academic performance at single points in time, with little understanding of the factors that might undermine the predictive validity of tests of academic and non-academic qualities considered important for success. This study addressed these issues by predicting students' changing performance across a medical degree and assessing whether factors outside an institution's control (such as the uptake of commercial coaching) impact validity. METHODS Data from three cohorts of students (n = 301) enrolled in an undergraduate medical degree between 2007 and 2013 were used to identify trajectories of student academic performance using growth mixture modelling. Multinomial logistic regression assessed whether past academic performance, a test of cognitive ability and a multiple mini-interview could predict a student's likely trajectory, and whether this predictive validity differed for those who undertook commercial coaching compared with those who did not. RESULTS Among the medical students who successfully graduated (n = 268), four unique trajectories of academic performance were identified. In three trajectories, performance changed at the point when learning became more self-directed and focused on clinical specialties. Scores on all selection tests, with the exception of a test of abstract reasoning, significantly affected the odds of following a trajectory that was consistently below average. However, the selection tests could not distinguish those whose performance improved across time from those whose performance declined after an average start. Commercial coaching increased the odds of being among the below-average performers, but did not alter the predictive validity of the selection tests. CONCLUSION Identifying distinct groups of students has important implications for selection, but also for educating medical students. Commercial coaching may result in selecting students who are less suited to coping with the rigours of medical studies.
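To make the second analytic step concrete, here is a minimal sketch of a multinomial logistic regression relating selection scores and coaching status to trajectory membership. The file and column names are hypothetical, and the growth mixture modelling that produces the trajectory labels is not reproduced here.

```python
# Hedged sketch, not the study's code: multinomial logit predicting
# trajectory class (e.g. 0..3, from a prior growth mixture model) from
# hypothetical selection-test and coaching variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("selection_outcomes.csv")  # one row per graduate (assumed)

model = smf.mnlogit(
    "trajectory ~ prior_academic + cognitive_test + mmi_score + coached",
    data=df,
).fit()
print(model.summary())  # log-odds of each trajectory vs. the reference class
```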
Collapse
Affiliation(s)
- Barbara Griffin
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
| | - Piers Bayl-Smith
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
| | - Wendy Hu
- Western Sydney University School of Medicine, Sydney, New South Wales, Australia
| |
Collapse
|
284
|
Hauer KE, O'Sullivan PS, Fitzhenry K, Boscardin C. Translating Theory Into Practice: Implementing a Program of Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:444-450. [PMID: 29116979 DOI: 10.1097/acm.0000000000001995] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
PROBLEM A program of assessment addresses challenges in learner assessment using a centrally planned, coordinated approach that emphasizes assessment for learning. This report describes the steps taken to implement a program-of-assessment framework within a medical school. APPROACH A literature review on best practices in assessment highlighted six principles that guided implementation of the program of assessment in 2016-2017: (1) a centrally coordinated plan for assessment aligns with and supports a curricular vision; (2) multiple assessment tools used longitudinally generate multiple data points; (3) learners require ready access to information-rich feedback to promote reflection and informed self-assessment; (4) mentoring is essential to facilitate effective data use for reflection and learning planning; (5) the program of assessment fosters self-regulated learning behaviors; and (6) expert groups make summative decisions about grades and readiness for advancement. Implementation incorporated stakeholder engagement, use of multiple assessment tools, design of a coaching program, and creation of a learner performance dashboard. OUTCOMES The assessment team monitors adherence to the principles defining the program of assessment and gathers and responds to regular feedback from key stakeholders, including faculty, staff, and students. NEXT STEPS Next steps include systematically collecting evidence for the validity of individual assessments and the program overall. Iterative review of student performance data informs curricular improvements. The program of assessment also highlights technology needs that will be addressed with information technology experts. Ultimately, the goal is to show validity evidence that the program produces physicians who engage in lifelong learning and provide high-quality patient care.
Collapse
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: http://orcid.org/0000-0002-8812-4045. P.S. O'Sullivan is professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: http://orcid.org/0000-0002-8706-4095. K. Fitzhenry is manager of student assessment, School of Medicine, University of California, San Francisco, San Francisco, California. C. Boscardin is associate professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: http://orcid.org/0000-0002-9070-8859
| | | | | | | |
Collapse
|
285
|
Gruppen LD, Ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced Requirements for Assessment in a Competency-Based, Time-Variable Medical Education System. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:S17-S21. [PMID: 29485482 DOI: 10.1097/acm.0000000000002066] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Competency-based, time-variable medical education has reshaped the perceptions and practices of teachers, curriculum designers, faculty developers, clinician educators, and program administrators. This increasingly popular approach highlights the fact that learning among different individuals varies in duration, foundation, and goal. Time variability places particular demands on the assessment data that are so necessary for making decisions about learner progress. These decisions may be formative (e.g., feedback for improvement) or summative (e.g., decisions about advancing a student). This article identifies challenges to collecting assessment data and to making assessment decisions in a time-variable system. These challenges include managing assessment data, defining and making valid assessment decisions, innovating in assessment, and modeling the considerable complexity of assessment in real-world settings and richly interconnected social systems. There are hopeful signs of creativity in assessment both from researchers and practitioners, but the transition from a traditional to a competency-based medical education system will likely continue to create much controversy and offer opportunities for originality and innovation in assessment.
Collapse
Affiliation(s)
- Larry D Gruppen
- L.D. Gruppen is professor, Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan. O. ten Cate is professor of medical education, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands. L.A. Lingard is professor, Department of Medicine, and director, Centre for Education Research & Innovation, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada. P.W. Teunissen is professor, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and maternal fetal medicine specialist, VU University Medical Center, Amsterdam, the Netherlands. J.R. Kogan is professor of medicine, Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
| | | | | | | | | |
Collapse
|
286
|
Patel US, Tonni I, Gadbury-Amyot C, Van der Vleuten CPM, Escudier M. Assessment in a global context: An international perspective on dental education. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2018; 22 Suppl 1:21-27. [PMID: 29601682 DOI: 10.1111/eje.12343] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/12/2018] [Indexed: 05/08/2023]
Abstract
Assessments are widely used in dental education to record the academic progress of students and ultimately determine whether they are ready to begin independent dental practice. Whilst some would consider this a "rite of passage" of learning, the concept of assessment in education is being challenged to allow the evolution of "assessment for learning." This serves as an economical use of learning resources whilst allowing learners to prove their knowledge and skills and demonstrate competence. The Association for Dental Education in Europe and the American Dental Education Association held a joint international meeting in London in May 2017, allowing experts in dental education to come together for the purposes of Shaping the Future of Dental Education. Assessment in a Global Context was one topic in which international leaders could discuss different methods of assessment, identifying the positives and the pitfalls and critiquing the method of implementation to determine the optimum assessment for a learner studying to be a healthcare professional. A post-workshop survey identified that educators were thinking differently about assessment: instead of working as individuals providing isolated assessments, the general consensus was that a longitudinally orientated, systematic and programmatic approach to assessment provides greater reliability and improves the ability to demonstrate learning.
Collapse
Affiliation(s)
- U S Patel
- School of Dentistry, University of Birmingham, Birmingham, UK
| | - I Tonni
- Department of Orthodontics, University of Brescia, Brescia, Italy
| | - C Gadbury-Amyot
- The University of Missouri-Kansas City (UMKC), Kansas City, MO, USA
| | - C P M Van der Vleuten
- Department of Educational Development and Research in the Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
| | - M Escudier
- Department of Clinical and Diagnostic Sciences, King's College London Dental Institute, London, UK
| |
Collapse
|
287
|
Lucey CR, Thibault GE, Ten Cate O. Competency-Based, Time-Variable Education in the Health Professions: Crossroads. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:S1-S5. [PMID: 29485479 DOI: 10.1097/acm.0000000000002080] [Citation(s) in RCA: 88] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Health care systems around the world are transforming to align with the needs of 21st-century patients and populations. Transformation must also occur in the educational systems that prepare the health professionals who deliver care, advance discovery, and educate the next generation of physicians in these evolving systems. Competency-based, time-variable education, a comprehensive educational strategy guided by the roles and responsibilities that health professionals must assume to meet the needs of contemporary patients and communities, has the potential to catalyze optimization of educational and health care delivery systems. By designing educational and assessment programs that require learners to meet specific competencies before transitioning between the stages of formal education and into practice, this framework assures the public that every physician is capable of providing high-quality care. By engaging learners as partners in assessment, competency-based, time-variable education prepares graduates for careers as lifelong learners. While the medical education community has embraced the notion of competencies as a guiding framework for educational institutions, the structure and conduct of formal educational programs remain more aligned with a time-based, competency-variable paradigm. The authors outline the rationale behind this recommended shift to a competency-based, time-variable education system. They then introduce the other articles included in this supplement to Academic Medicine, which summarize the history of, theories behind, examples demonstrating, and challenges associated with competency-based, time-variable education in the health professions.
Collapse
Affiliation(s)
- Catherine R Lucey
- C.R. Lucey is executive vice dean, vice dean for education, and professor of medicine, University of California, San Francisco, School of Medicine, San Francisco, California. G.E. Thibault is president, Josiah Macy Jr. Foundation, New York, New York. O. ten Cate is professor of medical education, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands
| | | | | |
Collapse
|
288
|
Bacon R, Kellett J, Dart J, Knight-Agarwal C, Mete R, Ash S, Palermo C. A Consensus Model: Shifting assessment practices in dietetics tertiary education. Nutr Diet 2018; 75:418-430. [PMID: 29468799 DOI: 10.1111/1747-0080.12415] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Revised: 01/12/2018] [Accepted: 01/16/2018] [Indexed: 11/27/2022]
Abstract
AIM The aim of this research was to evaluate a Consensus Model for competency-based assessment. METHODS An evaluative case study was used to allow a holistic examination of a constructivist-interpretivist programmatic model of assessment. Using a modified Delphi process, the competence of all 29 students enrolled in the final year of a Master of Nutrition and Dietetics course was assessed by a panel (with expertise in competency-based assessment and with industry and academic representation) from a course e-portfolio (which included the judgements of student performance made by worksite educators) and a panel interview. Data were triangulated with assessments from a capstone internship. Qualitative descriptive studies with worksite educators (focus groups: n = 4, n = 5, n = 8) and students (individual interviews: n = 29) explored stakeholder experiences, which were analysed using thematic analysis. RESULTS Panel consensus was achieved for all cases by the third round and was corroborated by internship outcomes. For 34% of students, this differed from the 'interpretations' of their performance made by their worksite educator/s. Emerging qualitative themes from the stakeholder data found that the model: (i) supported sustainable assessment practices; (ii) shifted the power relationship between students and worksite educators; and (iii) provided a fair method of assessing competence. To maximise benefits, further refinement, resources and training are required. CONCLUSIONS This research questions competency-based assessment practices based on discrete placement units and supports a constructivist-interpretivist programmatic approach in which evidence from across a whole course of study is considered by a panel of assessors.
Collapse
Affiliation(s)
- Rachel Bacon
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Bruce, Australian Capital Territory, Australia
| | - Jane Kellett
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Bruce, Australian Capital Territory, Australia
| | - Janeane Dart
- Nutrition and Dietetics, School of Exercise and Nutritional Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Cathy Knight-Agarwal
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Bruce, Australian Capital Territory, Australia
| | - Rebecca Mete
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Bruce, Australian Capital Territory, Australia
| | - Susan Ash
- Nutrition and Dietetics, School of Exercise and Nutritional Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
| | - Claire Palermo
- Monash Centre for Scholarship in Health Education, Faculty of Medicine, Nursing and Health Sciences, Monash University, Notting Hill, Victoria, Australia
| |
Collapse
|
289
|
Perry M, Linn A, Munzer BW, Hopson L, Amlong A, Cole M, Santen SA. Programmatic Assessment in Emergency Medicine: Implementation of Best Practices. J Grad Med Educ 2018; 10:84-90. [PMID: 29467979 PMCID: PMC5821020 DOI: 10.4300/jgme-d-17-00094.1] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Revised: 07/17/2017] [Accepted: 10/31/2017] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Programmatic assessment is the intentional collection of key data from multiple sources for both assessment of learning and assessment for learning. OBJECTIVE We developed a system of programmatic assessment (PA) to identify competency progression (summative) and to assist residents in their formative development (assessment for learning). METHODS The programmatic assessment was designed iteratively from 2014 through 2016. All assessments were first categorized by competency domain and source of assessment. The number of assessment modalities for each competency domain was collected. These multisource assessments were then mapped by program leadership to the milestones to develop a master PA blueprint. A resident learning management system provided the platform for aggregating formative and summative data, allowing residents and faculty ongoing access to guide learning and assessment. A key component of programmatic assessment was to support residents' integration of assessment information through feedback by faculty after shifts and during monthly formal assessments, semiannual resident reviews, and summative judgments by the Clinical Competency Committee. RESULTS Through the PA, the 6 competency domains are assessed through multiple modalities: patient care (22 different assessments), professionalism (18), systems-based practice (17), interpersonal and communication skills (16), medical knowledge (11), and practice-based learning and improvement (6). Each assessment provides feedback to the resident in various formats. Our programmatic assessment has been utilized for more than 2 years with iterative improvements. CONCLUSIONS The implementation of programmatic assessment allowed our program to organize diverse, multisourced feedback to drive both formative and summative assessments.
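The blueprint the authors describe is, at heart, a mapping from competency domains to assessment modalities. A toy sketch of that data structure follows; the counts come from the abstract, while the modality lists themselves would be program-specific and are not shown.

```python
# Toy sketch of a programmatic-assessment blueprint: competency domain ->
# number of distinct assessment modalities, per the counts in the abstract.
blueprint = {
    "patient care": 22,
    "professionalism": 18,
    "systems-based practice": 17,
    "interpersonal and communication skills": 16,
    "medical knowledge": 11,
    "practice-based learning and improvement": 6,
}

total = sum(blueprint.values())
print(f"{total} domain-to-assessment mappings across {len(blueprint)} domains")
```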
Collapse
|
290
|
Yardley S, Westerman M, Bartlett M, Walton JM, Smith J, Peile E. The do's, don'ts and don't knows of supporting transition to more independent practice. PERSPECTIVES ON MEDICAL EDUCATION 2018; 7:8-22. [PMID: 29383578 PMCID: PMC5807269 DOI: 10.1007/s40037-018-0403-3] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
INTRODUCTION Transitions are traditionally viewed as challenging for clinicians. Throughout medical career pathways, clinicians need to successfully navigate successive transitions as they become progressively more independent practitioners. In these guidelines, we aim to synthesize the evidence from the literature to provide guidance for supporting clinicians in their development of independence, and to highlight areas for further research. METHODS Drawing upon D3 method guidance, four key themes universal to medical career transitions and progressive independence were identified by all authors through discussion and consensus, based on our own experience and expertise: workplace learning, independence and responsibility, mentoring and coaching, and patient perspectives. A scoping review of the literature was conducted using Medline database searches in addition to the authors' personal archives and reference snowballing searches. RESULTS 387 articles were identified and screened. 210 were excluded as not relevant to medical transitions (50 at title screen; 160 at abstract screen). 177 full-text articles were assessed for eligibility; a further 107 were rejected (97 did not include career transitions in their study design; 10 were review articles, whose primary references were screened for inclusion). 70 articles were included, of which 60 provided extractable data for the final qualitative synthesis. Across the four key themes, seven do's, two don'ts and seven don't knows were identified, and the strength of evidence was graded for each of these recommendations. CONCLUSION The two strongest messages arising from the current literature are, first, that transitions should not be viewed as single moments in time: career trajectories are a continuum with valuable opportunities for personal and professional development throughout. Second, learning needs to be embedded in practice, and learners must be provided with authentic and meaningful learning opportunities. In this paper, we propose evidence-based guidelines aimed at facilitating such transitions through the fostering of progressive independence.
Collapse
Affiliation(s)
- Sarah Yardley
- Central and North West London NHS Foundation Trust, London, UK.
| | | | | | - J Mark Walton
- McMaster Children's Hospital, Hamilton, Ontario, Canada
| | | | - Ed Peile
- St Catherine's College, Oxford, UK
| |
Collapse
|
291
|
Fransen F, Martens H, Nagtzaam I, Heeneman S. Use of e-learning in clinical clerkships: effects on acquisition of dermatological knowledge and learning processes. INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2018; 9:11-17. [PMID: 29352748 PMCID: PMC5834826 DOI: 10.5116/ijme.5a47.8ab0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2017] [Accepted: 12/30/2017] [Indexed: 06/07/2023]
Abstract
OBJECTIVES To obtain a deeper understanding of how the e-learning program Education in Dermatology (ED) affects the acquisition of dermatological knowledge and the underlying learning processes of medical students in their clinical phase. METHODS The study used a mixed-methods design with convergent parallel collection of data. Medical students (n=62) from Maastricht University (The Netherlands) were randomized to either a conventional teaching group (control group, n=30) or a conventional teaching plus e-learning program (smartphone application) group (e-learning group, n=32). Pre- and post-intervention knowledge test results were analysed using an independent t-test. Individual semi-structured interviews (n=9) were conducted, and verbatim-transcribed recordings were analysed using King's template analysis. RESULTS The e-learning program positively influenced students' level of knowledge and their process of learning. A significant difference was found between the post-test scores of the control group (M=51.4, SD=6.43) and the e-learning group (M=73.09, SD=5.12); t(60)=-14.75, p<0.001. Interview data showed that the e-learning program stimulated students' learning, as the application promoted the identification and recognition of skin disorders, the use of references, the creation of documents and the sharing of information with colleagues. CONCLUSIONS This study demonstrated that use of the e-learning program led to a significant improvement in basic dermatological knowledge. The underlying learning processes indicated that the e-learning program filled a vital gap in the understanding of clinical reasoning in dermatology. These results might be useful when developing (clinical) teaching formats with a special focus on visual disciplines.
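The headline statistic here is an ordinary independent-samples t-test on post-test scores. A minimal sketch under an assumed data layout (file and column names are hypothetical) is shown below.

```python
# Hedged sketch: independent-samples t-test of post-test knowledge scores,
# e-learning vs. control. File and column names are assumptions.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("dermatology_scores.csv")  # columns: group, post_score
elearn = df.loc[df["group"] == "e-learning", "post_score"]
control = df.loc[df["group"] == "control", "post_score"]

t, p = ttest_ind(elearn, control)  # Student's t; equal_var=False for Welch
dof = len(elearn) + len(control) - 2
print(f"t({dof}) = {t:.2f}, p = {p:.4g}")
```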
Collapse
Affiliation(s)
- Frederike Fransen
- Department of Dermatology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | - Herm Martens
- Department of Dermatology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | - Ivo Nagtzaam
- Department of Dermatology, Maastricht University Medical Centre, Maastricht, The Netherlands
| | - Sylvia Heeneman
- Department of Pathology, School of Health Profession Education, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
292
|
Abstract
Summary: Psychiatrists have a role in teaching all medical undergraduates and foundation year doctors generic skills to become good doctors, but they also have to appeal to and nurture the interests of future psychiatrists by maintaining core psychiatric skills/knowledge in their teaching. They must tackle poor recruitment to psychiatry and stigma against both the profession and its patients. Medical students and junior doctors tend to be strategic learners, motivated by passing assessments, and psychiatrists are often guilty of gearing their teaching only to this. This article explores the assessment process itself and ways to optimise it, and presents a case for going beyond teaching how to pass exams in order to address wider issues relating to psychiatry.
Learning objectives:
• Identify the extent of current problems of recruitment and stigma in psychiatry and recognise the role of psychiatrists in addressing these through teaching
• Be aware of the impact and limitations of tailoring teaching to assessment only
• Identify ways of improving your own practice, taking account of the literature and strategies suggested
Collapse
|
293
|
McIntosh C, Patterson J, Miller S. First year midwifery students' experience with self-recorded and assessed video of selected midwifery practice skills at Otago Polytechnic in New Zealand. Nurse Educ Pract 2018; 28:54-59. [DOI: 10.1016/j.nepr.2017.09.016] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Revised: 09/12/2017] [Accepted: 09/17/2017] [Indexed: 10/18/2022]
|
294
|
Chan TM. Nuance and Noise: Lessons Learned From Longitudinal Aggregated Assessment Data. J Grad Med Educ 2017; 9:724-729. [PMID: 29270262 PMCID: PMC5734327 DOI: 10.4300/jgme-d-17-00086.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2017] [Revised: 07/04/2017] [Accepted: 08/22/2017] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Competency-based medical education requires frequent assessment to tailor learning experiences to the needs of trainees. In 2012, we implemented the McMaster Modular Assessment Program, which captures shift-based assessments of resident global performance. OBJECTIVE We described patterns (ie, trends and sources of variance) in aggregated workplace-based assessment data. METHODS Emergency medicine residents and faculty members from 3 Canadian university-affiliated, urban, tertiary care teaching hospitals participated in this study. During each shift, supervising physicians rated residents' performance using a behaviorally anchored scale that hinged on endorsements for progression. We used a multilevel regression model to examine the relationship between global rating scores and time, adjusting for data clustering by resident and rater. RESULTS We analyzed data from 23 second-year residents between July 2012 and June 2015, which yielded 1498 unique ratings (65 ± 18.5 per resident) from 82 raters. The model estimated an average score of 5.7 ± 0.6 at baseline, with an increase of 0.005 ± 0.01 for each additional assessment. There was significant variation among residents' starting score (y-intercept) and trajectory (slope). CONCLUSIONS Our model suggests that residents begin at different points and progress at different rates. Meta-raters such as program directors and Clinical Competency Committee members should bear in mind that progression may take time and learning trajectories will be nuanced. Individuals involved in ratings should be aware of sources of noise in the system, including the raters themselves.
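The model described here, fixed growth over successive assessments with resident-specific intercepts and slopes, can be sketched with statsmodels. The long-format file and column names below are assumptions, and the paper's adjustment for rater clustering is omitted for brevity.

```python
# Hedged sketch of the longitudinal model: random intercept and slope per
# resident for score over successive assessments. Rater effects omitted.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("mcmap_ratings.csv")  # resident, assessment_n, score

model = smf.mixedlm(
    "score ~ assessment_n",      # fixed effect: average change per assessment
    data=ratings,
    groups="resident",           # observations clustered within residents
    re_formula="~assessment_n",  # resident-specific intercepts and slopes
).fit()
print(model.summary())
```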
Collapse
|
295
|
Timmerman AA, Dijkstra J. A practical approach to programmatic assessment design. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2017; 22:1169-1182. [PMID: 28120259 PMCID: PMC5663798 DOI: 10.1007/s10459-017-9756-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2016] [Accepted: 01/12/2017] [Indexed: 05/25/2023]
Abstract
Assessment of complex tasks integrating several competencies calls for a programmatic design approach. As single instruments do not provide the information required to reach a robust judgment of integral performance, 73 guidelines for programmatic assessment design were developed. When simultaneously applying these interrelated guidelines, it is challenging to keep a clear overview of all assessment activities. The goal of this study was to provide practical support for applying a programmatic approach to assessment design, not bound to any specific educational paradigm. The guidelines were first applied in a postgraduate medical training setting, and a process analysis was conducted. This resulted in the identification of four steps for programmatic assessment design: evaluation, contextualisation, prioritisation and justification. Firstly, the (re)design process starts with sufficiently detailing the assessment environment and formulating the principal purpose. Key stakeholders with sufficient (assessment) expertise need to be involved in the analysis of strengths and weaknesses and identification of developmental needs. Central governance is essential to balance efforts and stakes with the principal purpose and decide on prioritisation of design decisions and selection of relevant guidelines. Finally, justification of assessment design decisions, quality assurance and external accountability close the loop, to ensure sound underpinning and continuous improvement of the assessment programme.
Collapse
Affiliation(s)
- A A Timmerman
- Department of Family Medicine, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
| | - J Dijkstra
- Academic Affairs, University Office, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
| |
Collapse
|
296
|
|
297
|
Heeneman S, Driessen EW. The use of a portfolio in postgraduate medical education - reflect, assess and account, one for each or all in one? GMS JOURNAL FOR MEDICAL EDUCATION 2017; 34:Doc57. [PMID: 29226225 PMCID: PMC5704619 DOI: 10.3205/zma001134] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Revised: 02/07/2017] [Accepted: 03/20/2017] [Indexed: 05/26/2023]
Abstract
Competency-based education has become central to the training and assessment of post-graduate medical trainees or residents [1]. In competency-based education, there is a strong focus on outcomes and professional performance. Typically, holistic tasks are used to train, practice and assess the defined outcomes or competencies. In residency training, these tasks are part of day-to-day clinical practice. The performance of residents in the workplace needs to be captured and stored. A portfolio has been used as an instrument for the storage and collection of workplace-based assessment and feedback in various countries, such as the Netherlands and the United States. The collection of information in a portfolio can serve several purposes: (i) the collection of work samples, assessments, feedback and evaluations enables the learner to look back, analyze and reflect; (ii) the content is used for assessment or for making decisions about progress; and (iii) the portfolio serves as an instrument for quality assurance processes. In post-graduate medical education, these purposes can be combined, but this is not always reported transparently. In this paper, we discuss these different perspectives, how a portfolio can serve the three purposes, and the opportunities and challenges of combining multiple purposes.
Collapse
Affiliation(s)
- Sylvia Heeneman
- Maastricht University/MUMC, Department of Pathology, HX Maastricht, The Netherlands
- Maastricht University, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands
| | - Erik W. Driessen
- Maastricht University, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht, The Netherlands
- Maastricht University/MUMC, Department of Educational Development and Research, HX Maastricht, The Netherlands
| |
Collapse
|
298
|
Jamieson J, Jenkins G, Beatty S, Palermo C. Designing programmes of assessment: A participatory approach. MEDICAL TEACHER 2017; 39:1182-1188. [PMID: 28776435 DOI: 10.1080/0142159x.2017.1355447] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Programmatic approaches to assessment provide purposeful and meaningful assessment, yet few examples of their development exist. The aim of this study was to describe the development of a programme of assessment using a participatory action research (PAR) approach. Nine work-based assessors and three academics met on six occasions to explore the current approach to competency-based assessment in the placement component of a dietetics university course; the findings were used to design a programme of assessment. The findings revealed a disconnect between current assessment approaches and best practice. The PAR methodology fostered a shared vision for the design of a programmatic approach to assessment, and strong leadership proved essential. Participants experienced a philosophical shift in their views on assessment, supporting the implementation of a new assessment programme. This paper is the first to describe a PAR approach as a feasible and effective way forward in the design of programmatic assessment. The approach engaged stakeholders, strengthened their abilities as work-based assessors, and produced champions for best-practice assessment.
Collapse
Affiliation(s)
- Janica Jamieson
- School of Medical and Health Sciences, Edith Cowan University, Perth, Australia
| | - Gemma Jenkins
- School of Medical and Health Sciences, Edith Cowan University, Perth, Australia
| | - Shelley Beatty
- School of Medical and Health Sciences, Edith Cowan University, Perth, Australia
| | - Claire Palermo
- Department of Nutrition and Dietetics, School of Clinical Sciences at Monash Health, Faculty of Medicine, Nursing and Health Sciences, Melbourne, Australia
| |
Collapse
|
299
|
Basehore PM, Mortensen LH, Katsaros E, Linsenmeyer M, McClain EK, Sexton PS, Wadsworth N. Entrustable Professional Activities for Entering Residency: Establishing Common Osteopathic Performance Standards in the Transition From Medical School to Residency. J Osteopath Med 2017; 117:712-718. [DOI: 10.7556/jaoa.2017.137] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Entrustable professional activities (EPAs) are measurable units of observable professional practice that can be entrusted to a trainee to perform without supervision once sufficient competence has been demonstrated. They were first introduced as a method of operationalizing competency-based medical education in graduate medical education. The Association of American Medical Colleges subsequently used EPAs to establish the core skills that medical students must be able to perform before they enter residency training. A recently published guide provides descriptions, guidelines and a rationale for implementing and assessing the core EPAs from an osteopathic approach. These osteopathically informed EPAs allow schools to assess a learner's whole-person approach to the patient more appropriately, in alignment with the philosophy of the profession. As the single accreditation system for graduate medical education moves forward, it will be critical to integrate EPAs into osteopathic medical education to demonstrate entrustment of medical school graduates. The authors describe the collaborative process used to establish the osteopathic considerations added to the EPAs and explore the challenges and opportunities for undergraduate osteopathic medical education.
Collapse
|
300
|
Hauer KE, Nishimura H, Dubon D, Teherani A, Boscardin C. Competency assessment form to improve feedback. CLINICAL TEACHER 2017; 15:472-477. [PMID: 29045060 DOI: 10.1111/tct.12726] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
BACKGROUND In-training evaluation reports are a commonly used assessment method for clinical learners that can characterise the development of competence in essential domains of practice. Strategies are needed to increase the usefulness and specificity of the written narrative comments about learner performance in these reports so that they can guide learning. Soliciting narrative comments by competency domain from supervising doctors on in-training evaluation reports could improve the quality of written feedback to students. METHODS This is a pre-post study examining narrative comments derived from assessments of core clerkship students by faculty members and resident supervisors in seven clerkships, using two assessment forms in academic years 2013/14 (pre; two comment fields: summative and constructive) and 2014/15 (post; seven comment fields: six competency domains plus constructive comments). Using a purposive sample of 60 students selected on the basis of overall clerkship performance, we conducted a content analysis of the written comments to compare comment quality in terms of word count, competencies addressed, and reinforcing or constructive content. Differences between the two forms across these three components of quality were compared using Student's t-tests. RESULTS The revised form elicited more narrative comments in all seven clerkships, with more competencies addressed, but led to a decrease in the proportion of constructive comments about students' performances. DISCUSSION Structural changes to a medical student assessment form to elicit narrative comments by competency improved some measures of the quality of the narrative comments provided by faculty members and residents. Additional study is needed to determine how learners use this information to improve their clinical practice.
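To make the comparison described in the methods concrete, the following minimal Python sketch shows how narrative-comment word counts from the pre and post forms could be compared with a Student's t-test, the equal-variance test named in the abstract. This is not the authors' actual analysis pipeline; the comment strings and variable names are hypothetical illustrations only.

from scipy import stats

# Hypothetical narrative comments from the original two-field form (pre)
# and the revised competency-based form (post).
pre_comments = [
    "Solid fund of knowledge.",
    "Good rapport with patients; read more around cases.",
]
post_comments = [
    "Patient care: developed thorough, prioritised plans for complex patients.",
    "Communication: clear, empathic counselling; could invite more questions.",
]

def word_counts(comments):
    # Word count per comment, one simple proxy for comment quality.
    return [len(c.split()) for c in comments]

pre_wc = word_counts(pre_comments)
post_wc = word_counts(post_comments)

# Student's t-test: equal variances assumed, as in the abstract.
t_stat, p_value = stats.ttest_ind(post_wc, pre_wc, equal_var=True)
print(f"mean pre = {sum(pre_wc) / len(pre_wc):.1f} words, "
      f"mean post = {sum(post_wc) / len(post_wc):.1f} words, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")

In the actual study the same comparison would be run not only on word counts but also on the number of competencies addressed and on counts of reinforcing versus constructive content, one test per quality component.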
Collapse
Affiliation(s)
- Karen E Hauer
- University of California at San Francisco, San Francisco, California, USA
| | - Holly Nishimura
- University of California at San Francisco, San Francisco, California, USA
| | - Diego Dubon
- University of California at Berkeley, Berkeley, California, USA
| | - Arianne Teherani
- University of California at San Francisco, San Francisco, California, USA
| | - Christy Boscardin
- University of California at San Francisco, San Francisco, California, USA
| |
Collapse
|