201
Santen SA, Myklebust L, Cabrera C, Patton J, Grichanik M, Zaidi NLB. Creating a learner performance dashboard for programmatic assessment. Clin Teach 2019;17:261-266. [DOI: 10.1111/tct.13106]
Affiliation(s)
- Sally A Santen
- Department of Emergency Medicine, School of Medicine, Virginia Commonwealth University, Richmond, Virginia, USA
- Leif Myklebust
- University of Michigan Medical School, Ann Arbor, Michigan, USA
- Clare Cabrera
- University of Michigan Medical School, Ann Arbor, Michigan, USA
- Johmarx Patton
- University of Michigan Medical School, Ann Arbor, Michigan, USA
- Mark Grichanik
- Rush Medical College of Rush University, Chicago, Illinois, USA
202
Jahn HK, Kwan J, O'Reilly G, Geduld H, Douglass K, Tenner A, Wallis L, Tupesis J, Mowafi HO. Towards developing a consensus assessment framework for global emergency medicine fellowships. BMC Emerg Med 2019;19:68. [PMID: 31711428] [PMCID: PMC6849247] [DOI: 10.1186/s12873-019-0286-6]
Abstract
Background The number of Global Emergency Medicine (GEM) fellowship training programs is increasing worldwide, yet there is no agreed-upon approach to the assessment of GEM trainees. Main body To address the lack of standardized assessment in GEM fellowship training, a working group was established between the International EM Fellowship Consortium (IEMFC) and the International Federation for Emergency Medicine (IFEM). A needs assessment survey of IEMFC members and a literature review were undertaken to identify the assessment tools currently in use by GEM fellowship programs, the relevant frameworks that exist, and the elements common to programs with a wide diversity of emphases. A consensus framework was developed through iterative working group discussions. Thirty-two of 40 GEM fellowships responded (80% response rate). The use and format of formal assessment varied between programs. Thirty programs reported training GEM fellows in the last 3 years (94%). Eighteen (56%) reported only informal assessments of trainees. Twenty-seven (84%) reported regular meetings for assessment of trainees. Eleven (34%) reported use of a structured assessment of any sort for GEM fellows and, of these, only 2 (18%) used validated instruments modified from general EM residency assessment tools. Only 3 (27%) programs reported incorporating formal written feedback from partners in other countries. Using these results, along with a review of the available assessment tools in GEM, the working group developed a set of principles to guide GEM fellowship assessments, together with a sample assessment for use by GEM fellowship programs seeking to create their own customized assessments. Conclusion There are currently no widely used assessment frameworks for GEM fellowship training. The working group recommends developing standardized assessments that align with the competencies defined by the programs, characterize the goals and objectives of training, and document trainees' progress towards achieving those goals. Frameworks should include the perspectives of multiple stakeholders, including partners in the countries where trainees conduct field work. Future work may evaluate the usability, validity and reliability of assessment frameworks in GEM fellowship training.
Affiliation(s)
- Haiko Kurt Jahn
- FRCPCH, Belfast Health and Social Care Trust, Belfast, UK; Friedrich Schiller University, Jena, Germany
- James Kwan
- FRCEM, FAMS, Tan Tock Seng Hospital and Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Heike Geduld
- MBChB, DipPEC, MMed, Stellenbosch University, Cape Town, South Africa
- Andrea Tenner
- MPH, University of California, San Francisco, CA, USA
- Lee Wallis
- FCEM(SA), PhD, University of Cape Town, Cape Town, South Africa
203
Pearce J, Prideaux D. When I say … programmatic assessment in postgraduate medical education. Med Educ 2019;53:1074-1076. [PMID: 31432549] [DOI: 10.1111/medu.13949]
Affiliation(s)
- Jacob Pearce
- Australian Council for Educational Research, Assessment and Psychometric Research, Camberwell, Victoria, Australia
- David Prideaux
- Flinders University, Prideaux Centre for Research in Health Professions Education, Adelaide, South Australia, Australia
204
Williams JC, Ireland T, Warman S, Cake MA, Dymock D, Fowler E, Baillie S. Instruments to measure the ability to self-reflect: A systematic review of evidence from workplace and educational settings including health care. Eur J Dent Educ 2019;23:389-404. [PMID: 31108006] [DOI: 10.1111/eje.12445]
Abstract
INTRODUCTION Self-reflection has become recognised as a core skill in dental education, and the ability to self-reflect is valued and measured within several professions. This review appraises the evidence for instruments available to measure the self-reflective ability of adults studying or working within any setting, not just health care. MATERIALS AND METHODS A systematic review was conducted of 20 electronic databases (including Medline, ERIC, CINAHL and Business Source Complete) from 1975 to 2017, supplemented by citation searches. Data were extracted from each study, and the studies were graded against quality indicators by at least two independent reviewers using a coding sheet. Reviewers completed a utility analysis of the assessment instruments described within the included studies, appraising their reported reliability, validity, educational impact, acceptability and cost. RESULTS A total of 131 studies met the inclusion criteria. Eighteen were judged to provide higher quality evidence for the review, and three broad types of instrument were identified: rubrics (or scoring guides), self-reported scales and observed behaviour. CONCLUSIONS Three types of instrument were identified to assess the ability to self-reflect. It was not possible to recommend a single most effective instrument because the criteria necessary for a full utility analysis of each were under-reported. The use of more than one instrument may therefore be appropriate, depending on acceptability to faculty, assessors and students, and on cost. Future research should report on the utility of assessment instruments and provide guidance on what constitutes a threshold of acceptable or unacceptable ability to self-reflect, and how this should be managed.
Affiliation(s)
- Julie C Williams
- Bristol Dental School, Faculty of Health Sciences, University of Bristol, Bristol, UK
- Tony Ireland
- Bristol Dental School, Faculty of Health Sciences, University of Bristol, Bristol, UK
- Sheena Warman
- Bristol Veterinary School, Faculty of Health Sciences, University of Bristol, Bristol, UK
- Martin A Cake
- School of Veterinary and Biomedical Sciences, Murdoch University, Perth, Western Australia, Australia
- David Dymock
- Bristol Dental School, Faculty of Health Sciences, University of Bristol, Bristol, UK
- Ellayne Fowler
- Centre for Medical Education, University of Bristol, Bristol, UK
- Sarah Baillie
- Bristol Veterinary School, Faculty of Health Sciences, University of Bristol, Bristol, UK
205
Norcini J. What's Next? Developing Systems of Assessment for Educational Settings. Acad Med 2019;94:S7-S8. [PMID: 31365393] [DOI: 10.1097/acm.0000000000002908]
Affiliation(s)
- John Norcini
- J. Norcini is research professor, SUNY Upstate Medical School, Syracuse, New York, and president emeritus, Foundation for Advancement of International Medical Education and Research, Philadelphia, Pennsylvania.
206
McQueen S, McKinnon V, VanderBeek L, McCarthy C, Sonnadara R. Video-Based Assessment in Surgical Education: A Scoping Review. J Surg Educ 2019;76:1645-1654. [PMID: 31175065] [DOI: 10.1016/j.jsurg.2019.05.013]
Abstract
BACKGROUND AND OBJECTIVE Video-based assessment of residents' surgical skills may offer several advantages over direct observations of clinical performance in terms of objectivity, time-efficiency, and feasibility. Although video-based assessment is becoming more common in surgical training, a broad understanding of its utility is lacking. This scoping review explores video-based assessment in surgical training and presents the evidence supporting its use. DESIGN A literature search was conducted using the Web of Science database with key words related to video-based assessment and surgical training. Exclusion criteria included articles not published in English and articles on undergraduate medical education, continuing professional development, or non-surgical disciplines. Initially, 702 articles were identified; after title, abstract, and full-text screening by two independent reviewers (SM and VM), 199 articles remained. RESULTS We present the benefits of video-based assessment, including the ability to capture clinical ability in the operating room without decreasing intraoperative efficiency, as well as the potential to improve formative assessment and feedback practices. We describe the validity, reliability, and challenges of video-based assessment, as well as the use of video-based methods in clinical and simulated settings. We conclude by discussing questions that remain to be addressed. CONCLUSIONS Although further research and cost-benefit analyses are required, greater adoption of video-based assessment into surgical training may help meet increased assessment demands in an era of competency-based medical education.
Affiliation(s)
- Sydney McQueen
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Victoria McKinnon
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Laura VanderBeek
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Colm McCarthy
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Ranil Sonnadara
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Department of Surgery, McMaster University, Hamilton, Ontario, Canada
207
Friedlander LT, Meldrum AM, Lyons K. Curriculum development in final year dentistry to enhance competency and professionalism for contemporary general dental practice. Eur J Dent Educ 2019;23:498-506. [PMID: 31373742] [DOI: 10.1111/eje.12458]
Abstract
INTRODUCTION General dentistry is the most common area of practice, and new dentists must have the competency and skills to deliver patient care safely. In New Zealand (NZ), completion of a 5-year Bachelor of Dental Surgery (BDS) degree enables graduates to register with the Dental Council of NZ. This requires that the clinical component of the curriculum in final year dentistry (BDS5) transparently delivers learning opportunities and evaluates competency for independent practice. A review of the BDS5 Clinical Practice course was undertaken in 2015 and a revised curriculum introduced in 2016. CURRICULUM We present a BDS5 curriculum for a Clinical Practice course that is learner focused, with emphasis on comprehensive patient-centred care, competency and professional practice. Learning opportunities and assessment processes are described alongside teacher training. These changes have provided students with scaffolding to support clinical and professional development and accommodate different learning preferences. The outcomes align with the competency requirements of the NZ regulatory body for registration as a general dental practitioner. Since its introduction 3 years ago, ongoing feedback from students and staff has been positive and indicates the curriculum is effective in achieving its objectives. CONCLUSIONS This curriculum provides a firm foundation for students transitioning to independent clinical practice in the community and supports the professional development of clinical teachers. It may also be translated to other areas of health education to ensure the delivery of quality holistic patient care.
Affiliation(s)
- Lara T Friedlander
- Faculty of Dentistry, Sir John Walsh Research Institute, University of Otago, Dunedin, New Zealand
- Alison M Meldrum
- Faculty of Dentistry, Sir John Walsh Research Institute, University of Otago, Dunedin, New Zealand
- Karl Lyons
- Faculty of Dentistry, Sir John Walsh Research Institute, University of Otago, Dunedin, New Zealand
208
van der Vleuten C, van den Eertwegh V, Giroldi E. Assessment of communication skills. Patient Educ Couns 2019;102:2110-2113. [PMID: 31351785] [DOI: 10.1016/j.pec.2019.07.007]
Abstract
OBJECTIVE This paper addresses how communication skills can best be assessed. Since assessment and learning are strongly connected, the way communication skills are best learned is also described. RESULTS Communication skills are best learned in a longitudinal fashion, with ample practice in an authentic setting. Confrontation with one's own behavior initiates the learning process and should be supported by meaningful feedback based on direct observation. When done appropriately, a set of learned communication skills becomes integrated, skilled communication that can be used flexibly in purposeful, goal-oriented clinical communication. The assessment of communication skills should follow a modern approach in which the learning function of assessment is a priority. Individual assessments are feedback-oriented to promote further learning and development. The resulting rich information may be used to make progression decisions, usually by a group or committee. CONCLUSION This modern programmatic approach to assessment fits the learning of skilled communication well. PRACTICE IMPLICATIONS Implementing a programmatic approach to the assessment of communication will entail a major educational innovation.
Affiliation(s)
- Cees van der Vleuten
- Maastricht University, Department of Educational Development and Research, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht, the Netherlands
- Valerie van den Eertwegh
- Maastricht University, Skillslab, Faculty of Health, Medicine and Life Sciences, Maastricht, the Netherlands
- Esther Giroldi
- Maastricht University, Department of Educational Development and Research, School of Health Professions Education (SHE), Faculty of Health, Medicine and Life Sciences, Maastricht, the Netherlands; Maastricht University, Department of Family Medicine, Care and Public Health Research Institute (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht, the Netherlands
209
Bullock JL, Lai CJ, Lockspeiser T, O'Sullivan PS, Aronowitz P, Dellmore D, Fung CC, Knight C, Hauer KE. In Pursuit of Honors: A Multi-Institutional Study of Students' Perceptions of Clerkship Evaluation and Grading. Acad Med 2019;94:S48-S56. [PMID: 31365406] [DOI: 10.1097/acm.0000000000002905]
Abstract
PURPOSE To examine medical students' perceptions of the fairness and accuracy of core clerkship assessment, the clerkship learning environment, and contributors to students' achievement. METHOD Fourth-year medical students at 6 institutions completed a survey in 2018 assessing perceptions of the fairness and accuracy of clerkship evaluation and grading, the learning environment including clerkship goal structures (mastery- or performance-oriented), racial/ethnic stereotype threat, and student performance (honors earned). Factor analysis of 5-point Likert items (1 = strongly disagree, 5 = strongly agree) provided scale scores of perceptions. Using multivariable regression, investigators examined predictors of honors earned. Qualitative content analysis of responses to an open-ended question yielded students' recommendations to improve clerkship grading. RESULTS The overall response rate was 71.1% (666/937). Students believed that final grades were most influenced by being liked and by working with particular supervisors. Only 44.4% agreed that grading was fair. Students felt the clerkship learning environment promoted both mastery and performance avoidance behaviors (88.0% and 85.6%, respectively). Students from backgrounds underrepresented in medicine were more likely to experience stereotype threat vulnerability (55.7% vs 10.9%, P < .0005). Honors earned was positively associated with perceived accuracy of grading and interest in competitive specialties, and negatively associated with stereotype threat. Students recommended strategies to improve clerkship grading: eliminating honors, training evaluators, and rewarding improvement on clerkships. CONCLUSIONS Participants had concerns about the fairness and accuracy of clerkship evaluation and grading and about potential bias. Students expressed a need to redefine the culture of assessment on core clerkships to create more favorable learning environments for all students.
Affiliation(s)
- Justin L Bullock
- J.L. Bullock is a first-year resident in internal medicine, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California. The author was a fourth-year medical student at the time of writing. C.J. Lai is director of internal medicine clerkships and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California. T. Lockspeiser is director of the assessment/competency committee and associate professor, Department of Pediatrics, University of Colorado School of Medicine, Aurora, Colorado. P.S. O'Sullivan is director of research and development in medical education and professor, Department of Medicine and Department of Surgery, University of California, San Francisco School of Medicine, San Francisco, California. P. Aronowitz is clerkship director of internal medicine and professor, Department of Internal Medicine, University of California, Davis School of Medicine, Davis, California. D. Dellmore is director of medical student education and associate professor, Department of Psychiatry and Behavioral Sciences, University of New Mexico School of Medicine, Albuquerque, New Mexico. C.-C. Fung is assistant dean for medical education and associate professor, Keck School of Medicine of USC, Los Angeles, California. C. Knight is associate clerkship director and associate professor, Division of General Internal Medicine, University of Washington School of Medicine, Seattle, Washington. K.E. Hauer is associate dean for competency assessment and professional standards and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California
210
Bonvin R, Nendaz M, Frey P, Schnabel KP, Huwendiek S, Schirlo C. Looking back: twenty years of reforming undergraduate medical training and curriculum frameworks in Switzerland. GMS J Med Educ 2019;36:Doc64. [PMID: 31815174] [PMCID: PMC6883239] [DOI: 10.3205/zma001272]
Abstract
Introduction: To date, hardly any reports exist that outline the reforms of medical studies in Switzerland from the first partial reforms in the 1970s until today. Methods: This article outlines, from the perspective of the authors, the recent history of medical curricula, their reforms in the early 1970s and, building on these, the key reasons for the major curricular reforms of the 2000s. Results: The various projects, initiatives and legislative elements at the national level include the introduction of new quality control instruments (the federal examination and programme accreditation), the introduction of a national catalogue of learning objectives and its two follow-up editions, as well as the implementation of the Bologna reform in undergraduate medical curricula. Key new elements found in all medical training in Switzerland include the interdisciplinary orientation of learning content in organ- and functional-system-oriented subject areas or modules, the enhanced valorisation of practical clinical training, the introduction of problem-oriented formats, and the integration of partly formative, partly summative exams in the format of the objective structured clinical examination (OSCE). Characteristics unique to the four medical faculties and their medical training programmes are also highlighted. Discussion: The described projects, initiatives and legislative elements have led to a dynamic, continuous development of medical curricula in Switzerland. The close cooperation between the faculties and the Federal Office of Public Health (FOPH) has also resulted in a redefinition of the roles and responsibilities of the universities and the Federal Government under the new Law on Medical Professions. This guarantees the medical faculties a great deal of autonomy, without neglecting quality assurance.
Affiliation(s)
- Raphael Bonvin
- Universität Fribourg, Unité Pédagogie Médicale, Fribourg, Switzerland
- Mathieu Nendaz
- Hôpitaux Universitaires Genève, Institut de médecine de premier recours, Genève, Switzerland
- Peter Frey
- Universität Bern, Medizinische Fakultät, Studiendekanat, Bern, Switzerland
- Kai P. Schnabel
- Universität Bern, Institut für medizinische Lehre, Abteilung für Unterricht und Medien, Bern, Switzerland
- Sören Huwendiek
- Universität Bern, Institut für medizinische Lehre, Abteilung für Assessment und Evaluation AAE, Bern, Switzerland
- Christian Schirlo
- Universität Zürich, Geschäftsstelle Direktorium UMZH, Medizinische Fakultät, Geschäftsbereich Struktur & Entwicklung, Zürich, Switzerland
211
van Bockel EAP, Walstock PA, van Mook WNKA, Arbous MS, Tepaske R, van Hemel TJD, Müller MCA, Delwig H, Tulleken JE. Entrustable professional activities (EPAs) for postgraduate competency based intensive care medicine training in the Netherlands: The next step towards excellence in intensive care medicine training. J Crit Care 2019;54:261-267. [PMID: 31733630] [DOI: 10.1016/j.jcrc.2019.09.012]
Abstract
INTRODUCTION The Competency Based Training in Intensive Care Education (CoBaTrICE) programme developed common standards for ICM training by describing the competencies of an intensivist. Entrustable professional activities (EPAs) for intensive care medicine (ICM), here termed EPAs-ICM, are presented as a new workplace-based assessment tool in the competency-based training of intensivists. EPAs are activities to be entrusted to a trainee once he or she has attained competence; they emphasise the role of trust between trainees and supervisors and bridge the gap between competencies and competence. METHODS An expert panel of ICM (vice-)programme directors and intensivists in the Netherlands integrated the CoBaTrICE and CanMEDS competencies into EPAs-ICM. Comments and feedback were sought from other ICM programme directors and educational experts and processed into the final version of the EPAs-ICM before implementation in the Dutch ICM training programme. RESULTS A list of 15 EPAs-ICM is considered to reflect the spectrum of clinical practice while incorporating the competencies of CoBaTrICE and CanMEDS. The grading system is a 5-point entrustment scale based on the amount of supervision a trainee needs, aligning with intensivists' daily judgement of trainees. CONCLUSION The EPAs-ICM are an assessment tool that formalises entrustment decisions and can be a valuable addition to international ICM training.
Affiliation(s)
- Esther A P van Bockel
- Department of Critical Care, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB Groningen, the Netherlands
- Pieter A Walstock
- Department of Critical Care, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB Groningen, the Netherlands
- Walther N K A van Mook
- Department of Intensive Care Medicine, Maastricht University Medical Center, P. Debyelaan 25, 6202 AZ Maastricht, the Netherlands; School of Health Professions Education, Maastricht University, the Netherlands
- M Sesmu Arbous
- Department of Intensive Care Medicine, Leiden University Medical Center, Albinusdreef 2, 2333 ZA Leiden, the Netherlands; Department of Clinical Epidemiology, Leiden University Medical Center, Albinusdreef 2, 2333 ZA Leiden, the Netherlands
- Robert Tepaske
- Amsterdam UMC, University of Amsterdam, Department of Intensive Care Medicine, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Tina J D van Hemel
- Department of Intensive Care Medicine, Leiden University Medical Center, Albinusdreef 2, 2333 ZA Leiden, the Netherlands
- Marcella C A Müller
- Amsterdam UMC, University of Amsterdam, Department of Intensive Care Medicine, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Hans Delwig
- Department of Critical Care, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB Groningen, the Netherlands
- Jaap E Tulleken
- Department of Critical Care, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700 RB Groningen, the Netherlands
212
Malau-Aduli BS, Alele FO, Heggarty P, Teague PA, Sen Gupta T, Hays R. Perceived clinical relevance and retention of basic sciences across the medical education continuum. Adv Physiol Educ 2019;43:293-299. [PMID: 31246508] [DOI: 10.1152/advan.00012.2019]
Abstract
Medical programs are under pressure to maintain currency with scientific and technical advances, as well as prepare graduates for clinical work and a wide range of postgraduate careers. The value of the basic sciences in primary medical education was assessed by exploring the perceived clinical relevance and test performance trends among medical students, interns, residents, and experienced clinicians. A pilot study conducted in 2014 involved administration of a voluntary 60-item multiple-choice question test to 225 medical students and 4 interns. These participants and 26 teaching clinicians rated the items for clinical relevance. In 2016, a similarly constructed test (main study) was made a mandatory formative assessment, attempted by 563 students in years 2, 4, and 6 and by 120 commencing general practice residents. Test scores, performance trends, clinical relevance ratings, and correlations were assessed using relevant parametric and nonparametric tests. Rank order and pass-fail decisions were also reviewed. The mean test scores were 57% (SD 7.1) and 52% (SD 6.1) for the pilot and main studies, respectively. Highest scores were observed in pathology and social sciences. Overall performance increased with increasing year of study. Test scores were positively correlated with perceived relevance. There were moderate correlations (r = 0.50-0.63; P < 0.001) between participants' scores in the basic science and summative exams. Assessments may be key to fostering relevance and integration of the basic sciences. Benchmarking knowledge retention and result comparisons across topics are useful in program evaluation.
Affiliation(s)
- Bunmi S Malau-Aduli
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Faith O Alele
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Paula Heggarty
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Peta-Ann Teague
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Tarun Sen Gupta
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Richard Hays
- Division of Tropical Health and Medicine, College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
213
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Acad Med 2019;94:1384-1397. [PMID: 31460937] [DOI: 10.1097/acm.0000000000002767]
Abstract
PURPOSE To collect and examine, using an argument-based validity approach, validity evidence for questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature for articles, published from inception to October 2016, about questionnaire-based tools for assessing physicians' professional performance. They included studies reporting validity evidence for tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results, and evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools in formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469. A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007. S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075. M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598. C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
214
Schuwirth LW, van der Vleuten CP. How ‘Testing’ Has Become ‘Programmatic Assessment for Learning’. Health Prof Educ 2019. [DOI: 10.1016/j.hpe.2018.06.005]
215
Emke AR. Workplace-Based Assessments Using Pediatric Critical Care Entrustable Professional Activities. J Grad Med Educ 2019;11:430-438. [PMID: 31440338] [PMCID: PMC6699545] [DOI: 10.4300/jgme-d-18-01006.1]
Abstract
BACKGROUND Workplace-based assessment (WBA) is critical to graduating competent physicians. Developing assessment tools that combine the needs of faculty, trainees, and governing bodies is challenging but imperative. Entrustable professional activities (EPAs) are emerging as a clinically oriented framework for trainee assessment. OBJECTIVE We sought to develop an EPA-based WBA tool for pediatric critical care medicine (PCCM) fellows. The goals of the tool were to promote learning through benchmarking and to track entrustment. METHODS A single PCCM EPA was iteratively subdivided into observable practice activities (OPAs) based on national and local data. Using a mixed-methods approach following van der Vleuten's conceptual model for assessment tool utility and Messick's unified validity framework, we sought validity evidence for acceptability, content, internal structure, relation to other variables, response process, and consequences. RESULTS Evidence was gathered after 1 year of use. Items for assessment were selected based on the correlation between the number of times each item was assessed and the frequency with which the professional activity occurred. Phi-coefficient reliability was 0.65. Narrative comments demonstrated that all factors influencing trust identified in the current literature were cited when determining the level of entrustment granted. Mean entrustment levels increased significantly between fellow training years (P = .001). Compliance with once- and twice-weekly tool completion was 50% and 100%, respectively. Average time spent completing the assessment was less than 5 minutes. CONCLUSIONS Using an EPA-OPA framework, we demonstrated utility and validity evidence supporting the tool's outcomes. In addition, narrative comments about entrustment decisions provide important insights the training program can use to improve individual fellows' advancement toward autonomy.
216
Westein MP, de Vries H, Floor A, Koster AS, Buurma H. Development of a Postgraduate Community Pharmacist Specialization Program Using CanMEDS Competencies, and Entrustable Professional Activities. Am J Pharm Educ 2019;83:6863. [PMID: 31507284] [PMCID: PMC6718509] [DOI: 10.5688/ajpe6863]
Abstract
Objectives. To develop and implement a postgraduate, workplace-based curriculum for community pharmacy specialists in the Netherlands, conduct a thorough evaluation of the program, and revise any deficiencies found. Methods. The experiences of the Dutch Advisory Board for Postgraduate Curriculum Development for Medical Specialists were used as a guideline for the development of a competency-based postgraduate education program for community pharmacists. To ensure that community pharmacists achieved competence in 10 task areas and seven roles defined by the Canadian Medical Education Directions for Specialists (CanMEDS), a two-year workplace-based curriculum was built. A development path along four milestones was constructed using 40 entrustable professional activities (EPAs). The assessment program consisted of 155 workplace-based assessments, with the supervisor serving as the main assessor. Also, 360-degree feedback and 22 days of classroom courses were included in the curriculum. In 2014, the curriculum was evaluated by two focus groups and a review committee. Results. Eighty-two first-year trainees enrolled in the community pharmacy specialist program in 2012. That number increased to 130 trainees by 2016 (a 59% increase). In 2015, based on feedback from pharmacy supervisors, trainees, and other stakeholders, 22.5% of the EPAs were changed and the number of workplace-based assessments was reduced by 48.5%. Conclusion. Using design approaches from the medical field in the development of postgraduate workplace-based pharmacy education programs proved to be feasible and successful. How to address the concerns and challenges encountered in developing and maintaining competency-based postgraduate pharmacy education programs merits further research.
Affiliation(s)
- Marnix P.D. Westein
- Royal Dutch Pharmacists Association (KNMP), The Hague, Netherlands
- Utrecht Institute of Pharmaceutical Sciences, Utrecht University, Utrecht, Netherlands
- Harry de Vries
- HPC the Human Perspective in Consulting, The Hague, Netherlands
- Annemieke Floor
- Royal Dutch Pharmacists Association (KNMP), The Hague, Netherlands
- SIR Institute for Pharmacy Practice and Policy, Leiden, Netherlands
- Andries S. Koster
- Utrecht Institute of Pharmaceutical Sciences, Utrecht University, Utrecht, Netherlands
- Henk Buurma
- Royal Dutch Pharmacists Association (KNMP), The Hague, Netherlands
- SIR Institute for Pharmacy Practice and Policy, Leiden, Netherlands
217
Henry D, West DC. The Clinical Learning Environment and Workplace-Based Assessment: Frameworks, Strategies, and Implementation. Pediatr Clin North Am 2019;66:839-854. [PMID: 31230626] [DOI: 10.1016/j.pcl.2019.03.010]
Abstract
This article provides an overview of the role played by the clinical learning environment in providing opportunities for assessment of trainee performance and how those assessments can guide learning. It reviews the importance of competency models as frameworks to facilitate the creation of a shared mental model of what is to be learned between learners and supervisors. In addition, it discusses how assessment can be used to drive mastery learning as well as the components necessary for a program of assessment.
Affiliation(s)
- Duncan Henry
- Department of Pediatrics, University of California San Francisco, 550 16th Street, 5th floor, Box 0110, San Francisco, CA 94143-0110, USA.
- Daniel C West
- Department of Pediatrics, University of California San Francisco, 550 16th Street, 4th floor, Box 0110, San Francisco, CA 94143-0110, USA
218
Griffiths J, Dalgarno N, Schultz K, Han H, van Melle E. Competency-Based Medical Education implementation: Are we transforming the culture of assessment? Med Teach 2019;41:811-818. [PMID: 30955390] [DOI: 10.1080/0142159x.2019.1584276]
Abstract
Purpose: Adopting CBME is challenging in medicine. It mandates a change in processes and approach and, ultimately, a change in institutional culture, with stakeholders ideally embracing and valuing the new processes. Adopting the transformational change model, this study describes the shift in assessment culture among Academic Advisors (AAs) and preceptors over three years of CBME implementation in one Department of Family Medicine. Methods: A qualitative grounded theory method was used for this two-part study. Interviews were conducted with 12 AAs in 2013 and nine AAs in 2016 using similar interview questions. Data were analyzed through a constant comparative method. Results: Three overarching themes emerged from the data: (1) specific identified shifts in assessment culture, (2) factors supporting the shifts in culture, and (3) outcomes related to the culture shift. Conclusions: In both parts of the study, participants noted that assessment took more time and effort. In Part 2, however, the effort was mitigated by a sense of value for all stakeholders. Supported by the mandate of regulatory bodies, local leadership, the department, faculty development and an electronic platform, a cultural transformation occurred in assessment that enhanced learning and teaching, the use of embedded standards for performance decisions, and the tracking and documentation of performance.
Affiliation(s)
- Jane Griffiths
- Department of Family Medicine, Queen's University, Kingston, Canada
- Nancy Dalgarno
- Faculty of Health Sciences, Queen's University, Kingston, Canada
- Karen Schultz
- Department of Family Medicine, Queen's University, Kingston, Canada
- Han Han
- Centre for Studies in Primary Care, Queen's University, Kingston, Canada
- Elaine van Melle
- Department of Family Medicine, Queen's University, Kingston, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
219
Humphrey-Murto S, LeBlanc A, Touchie C, Pugh D, Wood TJ, Cowley L, Shaw T. The Influence of Prior Performance Information on Ratings of Current Performance and Implications for Learner Handover: A Scoping Review. Acad Med 2019;94:1050-1057. [PMID: 30946129] [DOI: 10.1097/acm.0000000000002731]
Abstract
PURPOSE Learner handover (LH) is the sharing of information about trainees between faculty supervisors. This scoping review aimed to summarize key concepts across disciplines surrounding the influence of prior performance information (PPI) on current performance ratings and the implications for LH in medical education. METHOD The authors used the Arksey and O'Malley framework to systematically select and summarize the literature. Cross-disciplinary searches were conducted in six databases in 2017-2018 for articles published after 1969. To represent PPI relevant to LH in medical education, eligible studies included within-subject indirect PPI for work-type performance and a rating of an individual's current performance. Quantitative and thematic analyses were conducted. RESULTS Of 24,442 records identified through database searches and 807 through other searches, 23 articles containing 24 studies were included. Twenty-two studies (92%) reported an assimilation effect (current ratings were biased toward the direction of the PPI). Factors modifying the effect of PPI were observed, with larger effects for highly polarized PPI, negative (vs positive) PPI, and early (vs subsequent) performances. Specific standards, rater motivation, and certain rater characteristics mitigated context effects, whereas increased rater processing demands heightened them. Mixed effects were seen for the nature of the performance and for rater expertise and training. CONCLUSIONS PPI appears likely to influence ratings of current performance, and an assimilation effect is seen with indirect PPI. Whether these findings generalize to medical education is unknown, but they should be considered by educators wanting to implement LH. Future studies should explore PPI in medical education contexts and real-world settings.
Affiliation(s)
- Susan Humphrey-Murto
- S. Humphrey-Murto is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. A. LeBlanc is a fifth-year respirology resident, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. C. Touchie is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. D. Pugh is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. T.J. Wood is full professor, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada. L. Cowley is a research assistant, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada. T. Shaw is lecturer, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
220
Pack R, Lingard L, Watling CJ, Chahine S, Cristancho SM. Some assembly required: tracing the interpretative work of Clinical Competency Committees. Med Educ 2019;53:723-734. [PMID: 31037748] [DOI: 10.1111/medu.13884]
Abstract
OBJECTIVES This qualitative study describes the social processes of evidence interpretation employed by Clinical Competency Committees (CCCs), explicating how they interpret, grapple with and weigh assessment data. METHODS Over 8 months, two researchers observed 10 CCC meetings across four postgraduate programmes at a Canadian medical school, spanning over 25 hours and 100 individual decisions. After each CCC meeting, a semi-structured interview was conducted with one member. Following constructivist grounded theory methodology, data collection and inductive analysis were conducted iteratively. RESULTS Members of the CCCs held an assumption that they would be presented with high-quality assessment data that would enable them to make systematic and transparent decisions. This assumption was frequently challenged by the discovery of what we have termed 'problematic evidence' (evidence that CCC members struggled to meaningfully interpret) within the catalogue of learner data. When CCCs were confronted with problematic evidence, they engaged in lengthy, effortful discussions, aided by contextual data, in order to make meaning of the evidence in question. This process of effortful discussion enabled CCCs to arrive at progression decisions that took account of, rather than ignored, problematic evidence. CONCLUSIONS Small groups involved in the review of trainee assessment data should be prepared to encounter evidence that is uncertain, absent, incomplete, or otherwise difficult to interpret, and should openly discuss strategies for addressing these challenges. The answer to the problem of effortful data interpretation and problematic evidence is not as simple as generating more data with strong psychometric properties. Rather, it involves grappling with the discrepancies between our interpretive frameworks and the inescapably subjective nature of assessment data and judgement.
Affiliation(s)
- Rachael Pack
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Lorelei Lingard
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Faculty of Education, Western University, London, Ontario, Canada
- Department of Medicine, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Christopher J Watling
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Saad Chahine
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Faculty of Education, Western University, London, Ontario, Canada
- Department of Medicine, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Sayra M Cristancho
- Centre for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Faculty of Education, Western University, London, Ontario, Canada
- Department of Surgery, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
221
Caretta-Weyer HA, Gisondi MA. Design Your Clinical Workplace to Facilitate Competency-Based Education. West J Emerg Med 2019;20:651-653. [PMID: 31316706] [PMCID: PMC6625682] [DOI: 10.5811/westjem.2019.4.43216]
Affiliation(s)
- Holly A Caretta-Weyer
- Stanford University School of Medicine, Department of Emergency Medicine, Palo Alto, California
- Michael A Gisondi
- Stanford University School of Medicine, Department of Emergency Medicine, Palo Alto, California
222
Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, Estrada CA, Ballard T, Artino AR, Sergio Da Silva A, Cleary T, Stojan J, Gruppen LD. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med 2019;94:902-912. [PMID: 30720527] [DOI: 10.1097/acm.0000000000002618]
Abstract
PURPOSE An evidence-based approach to assessment is critical for ensuring the development of clinical reasoning (CR) competence. The wide array of CR assessment methods creates challenges for selecting assessments fit for the purpose; thus, a synthesis of the current evidence is needed to guide practice. A scoping review was performed to explore the existing menu of CR assessments. METHOD Multiple databases were searched from their inception to 2016 following PRISMA guidelines. Articles of all study design types were included if they studied a CR assessment method. The articles were sorted by assessment methods and reviewed by pairs of authors. Extracted data were used to construct descriptive appendixes, summarizing each method, including common stimuli, response formats, scoring, typical uses, validity considerations, feasibility issues, advantages, and disadvantages. RESULTS A total of 377 articles were included in the final synthesis. The articles broadly fell into three categories: non-workplace-based assessments (e.g., multiple-choice questions, extended matching questions, key feature examinations, script concordance tests); assessments in simulated clinical environments (objective structured clinical examinations and technology-enhanced simulation); and workplace-based assessments (e.g., direct observations, global assessments, oral case presentations, written notes). Validity considerations, feasibility issues, advantages, and disadvantages differed by method. CONCLUSIONS There are numerous assessment methods that align with different components of the complex construct of CR. Ensuring competency requires the development of programs of assessment that address all components of CR. Such programs are ideally constructed of complementary assessment methods to account for each method's validity and feasibility issues, advantages, and disadvantages.
Affiliation(s)
- Michelle Daniel
- M. Daniel is assistant dean for curriculum and associate professor of emergency medicine and learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0001-8961-7119. J. Rencic is associate program director of the internal medicine residency program and associate professor of medicine, Tufts University School of Medicine, Boston, Massachusetts; ORCID: http://orcid.org/0000-0002-2598-3299. S.J. Durning is director of graduate programs in health professions education and professor of medicine and pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland. E. Holmboe is senior vice president of milestone development and evaluation, Accreditation Council for Graduate Medical Education, and adjunct professor of medicine, Northwestern Feinberg School of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0003-0108-6021. S.A. Santen is senior associate dean and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: http://orcid.org/0000-0002-8327-8002. V. Lang is associate professor of medicine, University of Rochester School of Medicine and Dentistry, Rochester, New York; ORCID: http://orcid.org/0000-0002-2157-7613. T. Ratcliffe is associate professor of medicine, University of Texas Long School of Medicine at San Antonio, San Antonio, Texas. D. Gordon is medical undergraduate education director, associate residency program director of emergency medicine, and associate professor of surgery, Duke University School of Medicine, Durham, North Carolina. B. Heist is clerkship codirector and assistant professor of medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania. S. Lubarsky is assistant professor of neurology, McGill University, and faculty of medicine and core member, McGill Center for Medical Education, Montreal, Quebec, Canada; ORCID: http://orcid.org/0000-0001-5692-1771. C.A. Estrada is staff physician, Birmingham Veterans Affairs Medical Center, and director, Division of General Internal Medicine, and professor of medicine, University of Alabama, Birmingham, Alabama; ORCID: https://orcid.org/0000-0001-6262-7421. T. Ballard is plastic surgeon, Ann Arbor Plastic Surgery, Ann Arbor, Michigan. A.R. Artino Jr is deputy director for graduate programs in health professions education and professor of medicine, preventive medicine, and biometrics pathology, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A. Sergio Da Silva is senior lecturer in medical education and director of the masters in medical education program, Swansea University Medical School, Swansea, United Kingdom; ORCID: http://orcid.org/0000-0001-7262-0215. T. Cleary is chair, Applied Psychology Department, CUNY Graduate School and University Center, New York, New York, and associate professor of applied and professional psychology, Rutgers University, New Brunswick, New Jersey. J. Stojan is associate professor of internal medicine and pediatrics, University of Michigan Medical School, Ann Arbor, Michigan. L.D. Gruppen is director of the master of health professions education program and professor of learning health sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0002-2107-0126
223
Han H, Ju A. The complexity of mentoring observed through engagement with programmatic assessment. MEDICAL EDUCATION 2019; 53:542-544. [PMID: 31106889 DOI: 10.1111/medu.13906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Affiliation(s)
- Heeyoung Han, Department of Medical Education, Southern Illinois University School of Medicine, Springfield, Illinois, USA
- Ahreum Ju, Department of Education Policy, Organization & Leadership, College of Education, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
224
Chan TM, Kuehl DR. On Lampposts, Sneetches, and Stars: A Call to Go Beyond Bibliometrics for Determining Academic Value. Acad Emerg Med 2019; 26:688-694. [PMID: 30706552 DOI: 10.1111/acem.13707] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Teresa M. Chan, Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Damon R. Kuehl, Department of Emergency Medicine, Virginia Tech Carilion School of Medicine, Roanoke, VA
225
de Jong LH, Bok HGJ, Kremer WDJ, van der Vleuten CPM. Programmatic assessment: Can we provide evidence for saturation of information? MEDICAL TEACHER 2019; 41:678-682. [PMID: 30707848 DOI: 10.1080/0142159x.2018.1555369] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Purpose: According to the principles of programmatic assessment, a valid high-stakes assessment of students' performance should, amongst other things, be based on multiple data points, supposedly leading to saturation of information. Saturation of information is reached when an additional data point does not add important information for the assessor. In establishing saturation of information, institutions often set minimum requirements for the number of assessment data points to be included in the portfolio. Methods: In this study, we aimed to provide validity evidence for saturation of information by investigating the relationship between the number of data points exceeding the minimum requirements in a portfolio and the consensus between two independent assessors. Data were analyzed using a multiple logistic regression model. Results: The results showed no relation between the number of data points and the consensus. This suggests that either the consensus is predicted by other factors only or, more likely, that assessors had already reached saturation of information. This study took a first step in investigating saturation of information; further research is necessary to gain in-depth insight into this matter in relation to the complex process of decision-making.
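To make the reported analysis concrete, the sketch below fits a logistic regression of assessor consensus on the number of data points a portfolio contains beyond the required minimum, as the abstract describes. All data and column names are hypothetical; the paper's actual variables and model specification may differ.

```python
# Hypothetical sketch of the analysis described above: logistic regression of
# assessor consensus (1 = two independent assessors awarded equal grades,
# 0 = they disagreed) on the number of data points beyond the required minimum.
import pandas as pd
import statsmodels.api as sm

portfolios = pd.DataFrame({
    "extra_data_points": [0, 1, 2, 3, 5, 8, 2, 4, 0, 6],
    "assessors_agree":   [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
})

X = sm.add_constant(portfolios[["extra_data_points"]])
model = sm.Logit(portfolios["assessors_agree"], X).fit(disp=0)
print(model.summary())  # a non-significant slope would mirror the reported finding
```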
Affiliation(s)
- Lubberta H de Jong, Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
- Harold G J Bok, Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
- Wim D J Kremer, Faculty of Veterinary Medicine, Centre for Quality Improvement in Veterinary Education, Utrecht University, Utrecht, The Netherlands
- Cees P M van der Vleuten, Department of Educational Development and Research, Faculty of Health Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
226
Kamp R, Möltner A, Harendza S. "Princess and the pea" - an assessment tool for palpation skills in postgraduate education. BMC MEDICAL EDUCATION 2019; 19:177. [PMID: 31146715 PMCID: PMC6543652 DOI: 10.1186/s12909-019-1619-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Accepted: 05/22/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND In osteopathic medicine, palpation is considered the key skill to be acquired during training. Whether palpation skills are adequately acquired during undergraduate or postgraduate training is difficult to assess. The aim of our study was to test a palpation assessment tool developed for undergraduate medical education in a postgraduate medical education (PME) setting. METHODS We modified and standardized an assessment tool in which a coin has to be palpated under different layers of copy paper. For every layer depth we randomized the hiding positions with a random generator. The task was to palpate the coin or to determine that no coin was hidden in the stack. We recruited three groups of participants: 22 physicians with no training in osteopathic medicine; 25 participants in a PME course of osteopathic techniques, assessed before and after a palpation training program; and 31 physicians from an osteopathic expert group with at least 700 h of osteopathic skills training. The experts ran the test twice to check test-retest reliability. Inferential statistical analyses were performed using generalized linear mixed models with the dichotomous variable "coin detected / not detected" as the dependent variable. RESULTS For the assessment tool as a whole (56 stations), we measured a test-retest reliability of 0.67 in the expert group (p < 0.001). For the individual paper layers, we found good retest reliabilities up to 300 sheets. The control group detected a coin at a depth of 150 sheets significantly better (p = 0.01) than the pre-training group. The osteopathic training group showed significantly more correct coin localizations after the training at layer depths of 200 (p = 0.03) and 300 sheets (p = 0.05). This group also had significantly better palpation results than the expert group at a depth of 300 sheets (p = 0.001). When no coin was hidden, the expert group showed significantly better results than the post-training group (p = 0.01). CONCLUSIONS Our tool can be used to reliably test palpation course achievements with 200 and 300 sheets of paper. Further refinement of this tool will be needed to use it in complex assessment designs for the evaluation of more sophisticated palpatory skills in postgraduate medical settings.
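The authors fitted generalized linear mixed models; the sketch below deliberately simplifies this to a plain logistic regression (no random effects for participants) of coin detection on layer depth and group, with entirely hypothetical data, just to illustrate the structure of the analysis.

```python
# Illustrative simplification of the analysis described above: a plain logistic
# regression of coin detection on sheet depth and participant group. The paper's
# mixed models additionally include random effects, omitted here for brevity.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "detected": [1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0],
    "sheets":   [100, 150, 300, 100, 300, 400, 150, 200, 400, 100, 200, 300],
    "group":    ["expert", "control", "control", "expert", "expert", "control",
                 "expert", "expert", "control", "control", "expert", "control"],
})

model = smf.logit("detected ~ sheets + C(group)", data=trials).fit(disp=0)
print(model.params)  # detection odds should fall as the paper stack gets deeper
```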
Affiliation(s)
- Rainer Kamp, Academy of Medical Education of the Medical Council Westphalia-Lippe, Ärztekammer Westfalen-Lippe and Kassenärztliche Vereinigung Westfalen-Lippe, Münster, Germany
- Andreas Möltner, Ruprecht-Karls-University, Center of Excellence for Assessment in Medicine – Baden Württemberg, Heidelberg, Germany
- Sigrid Harendza, III. Department of Internal Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
227
Abstract
OBJECTIVES The formative aspect of the mini-clinical evaluation exercise (mini-CEX) in postgraduate medical workplace-based assessment is intended to afford opportunities for active learning. Yet, there is little understanding of the perceived relationship between the mini-CEX and how trainees self-regulate their learning. Our objective was to explore trainees' perceptions of their mini-CEX experiences from a learning perspective, using Zimmerman's self-regulated learning theoretical framework as an interpretive lens. DESIGN Qualitative, using semi-structured interviews conducted in 2017. The interviews were analysed thematically. SETTING Geriatric medicine training. PARTICIPANTS Purposive sampling was employed to recruit geriatric medicine trainees in Melbourne, Australia. Twelve advanced trainees participated in the interviews. RESULTS Four themes were found, with a cyclical inter-relationship between three of them: goal setting, task translation and perceived outcome. These themes reflect the phases of the self-regulated learning framework. Each phase was influenced by the fourth theme, supervisor co-regulation. Goal setting had motivational properties that had a significant impact on the later phases of the cycle. A 'tick box' goal was aligned with an opportunistic approach and poorer perceived educational outcomes. Participants reported that external feedback following assessment was critical for their self-evaluation, affective responses and perceived outcomes. CONCLUSIONS Trainees perceived the performance of a mini-CEX as a complex, inter-related cyclical process, influenced at all stages by the supervisor. Based on our trainee perspectives of the mini-CEX, we conclude that supervisor engagement is essential to support trainees in individually regulating their learning in the clinical environment.
Affiliation(s)
- Eva Kipen, Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia; Central Clinical School, Faculty of Medicine Nursing and Health Sciences, Monash University, Australia; Alfred Hospital, Melbourne, Victoria, Australia
- Eleanor Flynn, Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
- Robyn Woodward-Kron, Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
228
Woolf K, Page M, Viney R. Assessing professional competence: a critical review of the Annual Review of Competence Progression. J R Soc Med 2019; 112:236-244. [PMID: 31124405 DOI: 10.1177/0141076819848113] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The Annual Review of Competence Progression is used to determine whether trainee doctors in the United Kingdom are safe and competent to progress to the next training stage. In this article we provide evidence to inform recommendations to enhance the validity of the summative and formative elements of the Annual Review of Competence Progression. The work was commissioned as part of a Health Education England review. We systematically searched the peer-reviewed and grey literature, synthesising findings with information from national, local and specialty-specific Annual Review of Competence Progression guidance, and critically evaluating the findings in the context of the literature on assessing competence in medical education. National guidance lacked detail, resulting in variability across locations and specialties and threatening validity and reliability. Trainees and trainers were concerned that the Annual Review of Competence Progression only reliably identifies the most poorly performing trainees. Feedback is not routinely provided, which can leave those with performance difficulties unsupported and high performers demotivated. Variability in the provision and quality of feedback can negatively affect learning. The Annual Review of Competence Progression functions as a high-stakes assessment, likely to have a significant impact on patient care. It should be subject to the same rigorous evaluation as other high-stakes assessments; there should be consistency in procedures across locations, specialties and grades; and all trainees should receive high-quality feedback.
Affiliation(s)
- Katherine Woolf, Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
- Michael Page, Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
- Rowena Viney, Research Department of Medical Education, UCL Medical School, Royal Free Hospital, London NW3 2PF, UK
229
Heeneman S, de Grave W. Development and initial validation of a dual-purpose questionnaire capturing mentors' and mentees' perceptions and expectations of the mentoring process. BMC MEDICAL EDUCATION 2019; 19:133. [PMID: 31068162 PMCID: PMC6505175 DOI: 10.1186/s12909-019-1574-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Accepted: 04/24/2019] [Indexed: 05/28/2023]
Abstract
BACKGROUND In health professions education, learners are often coached by mentors for the development of competencies, self-direction of learning and professionalism. It is important that the mentee-mentor relationship is aligned in terms of mutual expectations. METHODS A dual-purpose questionnaire capturing both mentor and mentee perceptions of the actual and preferred mentoring functions was designed and validated by performing a principal component analysis (PCA) on the data of mentees (n = 103) and mentors (n = 23) of a medical course. As a proof of concept, alignment of needs and changes in mentoring perceptions in mentee groups of different years were determined. RESULTS PCA showed that specific sets of questions addressed important elements in the mentoring process, such as self-direction of learning and reflection (Scale 1), guidance of behavioural change (Scale 4), addressing personal issues and professional identity development (Scales 3 and 5), and how the mentor and mentee present themselves in the mentoring relationship (Scale 2). Mentors and mentees perceived comparable situations as critical for an effective mentoring process, such as mentor presence and guidance of reflection, although there was also evidence of gaps, such as the perception of cultural issues. Comparison of the mentee groups in the different years of the program made the dynamic, evolving nature of the mentoring process evident: mentees experienced more emphasis by the mentor on reflection (Scale 1), at a constant level of mentor presence (Scale 2). CONCLUSION Given the individualized, context-specific, and dynamic nature of mentoring, programmes would benefit from a regular evaluation of mentoring practices, e.g. by using questionnaires, in order to facilitate organizational revisions and further development of mentoring competencies.
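As an illustration of the PCA step described above, a minimal sketch follows, assuming hypothetical Likert-scale questionnaire responses; the real items and the five-scale component structure are reported in the paper.

```python
# A minimal sketch of a principal component analysis on questionnaire data,
# using scikit-learn. Rows = respondents, columns = items; all data hypothetical
# (126 respondents echoes the 103 mentees + 23 mentors in the abstract).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(126, 20)).astype(float)  # 5-point Likert items

scaled = StandardScaler().fit_transform(responses)
pca = PCA(n_components=5)
scores = pca.fit_transform(scaled)

print(pca.explained_variance_ratio_)    # variance captured by each component
print(np.round(pca.components_[0], 2))  # item loadings on the first component
```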
Affiliation(s)
- Sylvia Heeneman, Department of Pathology, Faculty of Health, Medicine and Life Sciences, Maastricht University/MUMC, Peter Debyelaan 25, 6229 HX Maastricht, The Netherlands
- Willem de Grave, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, The Netherlands
230
Johnson CE, Keating JL, Farlie MK, Kent F, Leech M, Molloy EK. Educators' behaviours during feedback in authentic clinical practice settings: an observational study and systematic analysis. BMC MEDICAL EDUCATION 2019; 19:129. [PMID: 31046776 PMCID: PMC6498493 DOI: 10.1186/s12909-019-1524-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2018] [Accepted: 03/17/2019] [Indexed: 05/30/2023]
Abstract
BACKGROUND Verbal feedback plays a critical role in health professions education, but it is not clear which components of effective feedback have been successfully translated from the literature into supervisory practice in the workplace, and which have not. The purpose of this study was to observe and systematically analyse educators' behaviours during authentic feedback episodes in contemporary clinical practice. METHODS Educators and learners videoed themselves during formal feedback sessions in routine hospital training. Researchers compared educators' practice to a published set of 25 educator behaviours recommended for quality feedback. Individual educator behaviours were rated 0 = not seen, 1 = done somewhat, 2 = consistently done. To characterise an individual educator's practice, their behaviour scores were summed. To describe how commonly each behaviour was observed across all the videos, mean scores were calculated. RESULTS Researchers analysed 36 videos involving 34 educators (26 medical, 4 nursing, 4 physiotherapy professionals) and 35 learners across different health professions, specialties, levels of experience and genders. There was considerable variation both in educators' feedback practices, indicated by total scores for individual educators ranging from 5.7 to 34.2 (maximum possible 48), and in how frequently specific feedback behaviours were seen across all the videos, indicated by mean scores for each behaviour ranging from 0.1 to 1.75 (maximum possible 2). Educators commonly provided performance analysis, described how the task should be performed, and were respectful and supportive. However, a number of recommended feedback behaviours were rarely seen, such as clarifying the session purpose and expectations, promoting learner involvement, creating an action plan or arranging a subsequent review. CONCLUSIONS These findings clarify contemporary feedback practice and inform the design of educational initiatives to help health professional educators and learners better realise the potential of feedback.
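The scoring scheme described above lends itself to a short worked example: ratings of 0, 1 or 2 per behaviour, summed per educator and averaged per behaviour. The sketch below uses hypothetical ratings and assumed column names.

```python
# A worked sketch of the scoring described above: behaviour ratings of
# 0 (not seen), 1 (done somewhat) or 2 (consistently done), with per-session
# totals and per-behaviour means. Rows = videoed feedback sessions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
ratings = pd.DataFrame(
    rng.integers(0, 3, size=(36, 25)),  # 36 videos x 25 recommended behaviours
    columns=[f"behaviour_{i + 1}" for i in range(25)],
)

educator_totals = ratings.sum(axis=1)   # characterises an individual's practice
behaviour_means = ratings.mean(axis=0)  # how commonly each behaviour was observed (0-2)
print(educator_totals.describe())
print(behaviour_means.sort_values().head())  # the rarest behaviours
```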
Affiliation(s)
- Christina E. Johnson, Monash Doctors Education, Monash Health, and Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Jennifer L. Keating, Department of Physiotherapy, School of Primary and Allied Health Care, Faculty of Medicine Nursing and Health Science, Monash University, Melbourne, Australia
- Melanie K. Farlie, Allied Health Workforce Innovation, Strategy, Education & Research (WISER) Unit, Monash Health, and School of Primary and Allied Health Care, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Fiona Kent, Education Portfolio, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Michelle Leech, Monash School of Medicine, Faculty of Medicine, Nursing & Health Sciences, Monash University and Monash Health, Melbourne, Australia
- Elizabeth K. Molloy, Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Australia
231
Brouwers M, Custers J, Bazelmans E, van Weel C, Laan R, van Weel-Baumgarten E. Assessment of medical students' integrated clinical communication skills: development of a tailor-made assessment tool. BMC MEDICAL EDUCATION 2019; 19:118. [PMID: 31035995 PMCID: PMC6489308 DOI: 10.1186/s12909-019-1557-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Accepted: 04/15/2019] [Indexed: 05/17/2023]
Abstract
BACKGROUND Since patient-centered communication is directly connected to clinical performance, it should be integrated with medical knowledge and clinical skills. Therefore, clinical communication skills should be trained and assessed as an integral part of the student's clinical performance. We were unable to identify a tool that helps raters assess patient-centered communication skills as an integrated component of medical history taking ('the integrated medical interview'). Therefore, we decided to design a new tailor-made assessment tool, the BOCC (Dutch: BeOordeling Communicatie en Consultvoering; English: Assessment of Communication and Consultation), to help raters assess students' integrated clinical communication skills, with the emphasis on patient-centred communication combined with the correct medical content. This is a first initiative to develop such a tool, and this paper describes the first steps in this process. METHODS We investigated the tool in a group of third-year medical students (n = 672) interviewing simulated patients. Internal structure and internal consistency were assessed. Regression analysis was conducted to investigate the relationship between scores on the instrument and general grading. Applicability to another context was tested in a group of fourth-year medical students (n = 374). RESULTS PCA showed five components (Communication Skills, Problem Clarification, Specific History, Problem Influence and Integration Skills) with varying Cronbach's alpha scores. The Problem Clarification component made the strongest unique contribution to grade prediction. Applicability was good when investigated in another context. CONCLUSIONS The BOCC is designed to help raters assess students' integrated communication skills and was assessed on internal structure and internal consistency. The tool is a first step in the assessment of the integrated medical interview and a basis for further research to develop it into a true measurement instrument of clinical communication skills.
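Internal consistency of the kind reported above is usually quantified with Cronbach's alpha. A minimal from-scratch sketch follows, using the standard formula on hypothetical item data; the BOCC's actual items and scales are described in the paper.

```python
# A minimal sketch of an internal-consistency check: Cronbach's alpha computed
# from scratch for the items of one scale, alpha = k/(k-1) * (1 - sum(var_i)/var_total).
# Data and item grouping are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(2)
trait = rng.normal(size=(50, 1))                      # shared underlying trait
scale_items = trait + 0.8 * rng.normal(size=(50, 6))  # 6 correlated items
print(round(cronbach_alpha(scale_items), 2))          # high alpha for a coherent scale
```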
Affiliation(s)
- M. Brouwers, Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- J. Custers, Department of Medical Psychology, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- E. Bazelmans, Department of Medical Psychology, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- C. van Weel, Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands; Department of Health Services Research and Policy, Australian National University, Canberra, Australia
- R. Laan, Health Academy, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- E. van Weel-Baumgarten, Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
232
[Competence-based assessment in the national licensing examination in Germany]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2019; 61:171-177. [PMID: 29230515 DOI: 10.1007/s00103-017-2668-9] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
In Germany, future physicians have to pass a national licensing examination at the end of their medical studies. Passing this examination is the requirement for the license to practice medicine. The Masterplan Medizinstudium 2020, with its 41 measures, aims to shift the paradigm in medical education and medical licensing examinations. The main goals of the Masterplan include the development towards competency-based and practice-oriented medical education and examination, as well as the strengthening of general medicine. Healthcare policy takes into account social developments that are very important for medical education and the licensing examination. Seven measures of the Masterplan relate to the realignment of the licensing examinations. Their function of driving learning should better support students in achieving the study goal defined in the German Medical Licensure Act: to educate a scientifically and practically trained medical doctor who is qualified for autonomous and independent professional practice, postgraduate education and continuous training.
233
Hauer KE, Lucey CR. Core Clerkship Grading: The Illusion of Objectivity. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:469-472. [PMID: 30113359 DOI: 10.1097/acm.0000000000002413] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Core clerkship grading creates multiple challenges that produce high stress for medical students, interfere with learning, and create inequitable learning environments. Students and faculty alike succumb to the illusion of objectivity: that quantitative ratings converted to grades convey accurate measures of the complexity of clinical performance. Clerkship grading is the first high-stakes assessment within medical school and occurs just as students are newly immersed full-time in an environment in which patient care supersedes their needs as learners. Students earning high marks situate themselves to earn entry into competitive residency programs and selective specialties. However, there is no commonly accepted standard for how to assign clerkship grades, and the process is vulnerable to imprecision and bias. Rewarding learners for the speed with which they adapt inherently favors students who bring advantages acquired before medical school and discounts the goal of all learners achieving competence. The authors propose that, rather than focusing on assigning core clerkship grades, assessment of student performance should incorporate expert judgment of learning progress. Competency-based medical education is predicated on the articulation of stepwise expectations for learners, with the support and time allocated for each learner to meet those expectations. Concurrently, students should ideally review their own performance data with coaches to self-assess areas of relative strength and areas for further growth. Eliminating grades in favor of competency-based assessment for learning holds promise to engage learners in developing essential patient care and teamwork skills and to foster their development of lifelong learning habits.
Affiliation(s)
- K.E. Hauer is associate dean for assessment and professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- C.R. Lucey is vice dean for education and professor, Department of Medicine, University of California, San Francisco, San Francisco, California
234
Meyer EG, Cozza KL, Konara RMR, Hamaoka D, West JC. Inflated Clinical Evaluations: a Comparison of Faculty-Selected and Mathematically Calculated Overall Evaluations Based on Behaviorally Anchored Assessment Data. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2019; 43:151-156. [PMID: 30091071 DOI: 10.1007/s40596-018-0957-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2018] [Accepted: 07/12/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVE This retrospective study compared faculty-selected evaluation scores with those mathematically calculated from behaviorally anchored assessments. METHODS Data from 1036 psychiatry clerkship clinical evaluations (2012-2015) were reviewed. These clinical evaluations required faculty to assess clinical performance using 14 behaviorally anchored questions followed by a faculty-selected overall evaluation. An explicit rubric was included in the overall evaluation to assist the faculty in interpreting their 14 assessment responses. Using the same rubric, mathematically calculated evaluations of the same assessment responses were generated and compared to the faculty-selected evaluations. RESULTS Comparison of faculty-selected to mathematically calculated evaluations revealed that, while the two methods were reliably correlated (Cohen's kappa = 0.314, Pearson's coefficient = 0.658, p < 0.001), there was a notable difference in the results (t = 24.5, p < 0.0001). The average faculty-selected evaluation was 1.58 (SD = 0.61) with a mode of "1" or "outstanding," while the mathematically calculated evaluation had an average of 2.10 (SD = 0.90) with a mode of "3" or "satisfactory." Of the faculty-selected evaluations, 51.0% matched the mathematically calculated results; 46.1% were higher and 2.9% were lower. CONCLUSIONS Clerkship clinical evaluation forms that require faculty to make an overall evaluation generate results that are significantly higher than what would have been assigned solely using behaviorally anchored assessment questions. Focusing faculty attention on assessing specific behaviors rather than overall evaluations may reduce this inflation and improve validity. Clerkships may want to consider removing overall evaluation questions from their clinical evaluation tools.
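The reported statistics (Cohen's kappa, Pearson's r, and a paired comparison between the two sets of evaluations) can be reproduced in outline as follows, on hypothetical data with an assumed 1 = outstanding to 3 = satisfactory scale.

```python
# A sketch of the comparison described above: agreement between faculty-selected
# and mathematically calculated overall evaluations (lower = better grade).
# All data are hypothetical; "inflation" simulates faculty picking a better grade.
import numpy as np
from scipy.stats import pearsonr, ttest_rel
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
calculated = rng.integers(1, 4, size=200)             # 1 = outstanding .. 3 = satisfactory
inflation = (rng.random(200) < 0.45).astype(int)      # ~45% inflated, echoing the abstract
faculty = np.clip(calculated - inflation, 1, None)

print("kappa:", cohen_kappa_score(faculty, calculated))
print("r, p :", pearsonr(faculty, calculated))
print("t, p :", ttest_rel(faculty, calculated))       # systematic shift between methods
```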
Affiliation(s)
- Eric G Meyer, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Kelly L Cozza, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Derrick Hamaoka, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- James C West, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
235
Favier RP, Vernooij JCM, Jonker FH, Bok HGJ. Inter-Rater Reliability of Grading Undergraduate Portfolios in Veterinary Medical Education. JOURNAL OF VETERINARY MEDICAL EDUCATION 2019; 46:415-422. [PMID: 30920333 DOI: 10.3138/jvme.0917-128r1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The reliability of high-stakes assessment of portfolios containing an aggregation of quantitative and qualitative data based on programmatic assessment is under debate, especially when multiple assessors are involved. In this study, carried out at the Faculty of Veterinary Medicine, Utrecht University, the Netherlands, two independent assessors graded the portfolios of students in their second year of the 3-year clinical phase. The similarity of grades (i.e., equal grades) and the level of the grades were studied to estimate inter-rater reliability, taking into account the potential effects of the assessor's background (i.e., originating from a clinical or non-clinical department) and the student's cohort group, gender, and chosen master track (Companion Animal Health, Equine Health, or Farm Animal/Public Health). Whereas the similarity between the two grades increased from 58% in the first year the grading system was introduced to around 80% afterwards, the grade level was lower over the next 3 years. The assessor's background had a minor effect on the proportion of similar grades, as well as on grading level. The assessor intraclass correlation was low (i.e., all assessors scored with a similar grading pattern, using the same range of grades). The grades awarded to female students were higher but more often dissimilar. We conclude that the grading system was well implemented and has high inter-rater reliability.
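A minimal sketch of the core similarity measure, the proportion of portfolios on which two independent assessors award equal grades, broken down by year, follows; grades and column names are hypothetical.

```python
# A minimal sketch of the similarity measure described above: the share of
# portfolios for which two independent assessors awarded equal grades, per year.
# Data and column names are hypothetical.
import pandas as pd

grades = pd.DataFrame({
    "year":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "assessor_a": [7, 8, 6, 7, 9, 8, 6, 8, 7],
    "assessor_b": [8, 8, 7, 7, 9, 8, 6, 8, 8],
})

grades["similar"] = grades["assessor_a"] == grades["assessor_b"]
print(grades.groupby("year")["similar"].mean())  # proportion of equal grades per year
```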
Affiliation(s)
- Robert P Favier, Department of Clinical Sciences of Companion Animals, Faculty of Veterinary Medicine, Utrecht University
- Johannes C M Vernooij, Biostatistician and Teacher in Methodology and Statistics, Department of Farm Animal Health, Faculty of Veterinary Medicine, Utrecht University
- F Herman Jonker, Chair of the Portfolio Evaluation Committee and Teacher in Reproduction, Department of Farm Animal Health, Faculty of Veterinary Medicine
- Harold G J Bok, Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, 3508 TC Utrecht
236
Ten Cate O, Regehr G. The Power of Subjectivity in the Assessment of Medical Trainees. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:333-337. [PMID: 30334840 DOI: 10.1097/acm.0000000000002495] [Citation(s) in RCA: 88] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Objectivity in the assessment of students and trainees has been a hallmark of quality since the introduction of multiple-choice items in the 1960s. In medical education, this has extended to the structured examination of clinical skills and workplace-based assessment. Competency-based medical education, a pervasive movement that started roughly around the turn of the century, similarly calls for rigorous, objective assessment to ensure that all medical trainees meet standards to assure quality of health care. At the same time, measures of objectivity, such as reliability, have consistently shown disappointing results. This raises questions about the extent to which objectivity in such assessments can be ensured. In fact, the legitimacy of "objective" assessment of individual trainees, particularly in the clinical workplace, may be questioned. Workplaces are highly dynamic and ratings by observers are inherently subjective, as they are based on expert judgment, and experts do not always agree, for good, idiosyncratic reasons. Thus, efforts to "objectify" these assessments may be problematically distorting the assessment process itself. In addition, "competence" must meet standards, but it is also context dependent. Educators are now arriving at the insight that subjective expert judgments by medical professionals are not only unavoidable but actually should be embraced as the core of assessment of medical trainees. This paper elaborates on the case for subjectivity in assessment.
Affiliation(s)
- O. ten Cate is professor of medical education and senior scientist, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, the Netherlands; ORCID: https://orcid.org/0000-0002-6379-8780
- G. Regehr is professor, Department of Surgery, and associate director of research, Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada; ORCID: http://orcid.org/0000-0002-3144-331X
237
Jamieson J, Palermo C, Hay M, Gibson S. Assessment Practices for Dietetics Trainees: A Systematic Review. J Acad Nutr Diet 2019; 119:272-292.e23. [DOI: 10.1016/j.jand.2018.09.010] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Revised: 09/05/2018] [Accepted: 09/17/2018] [Indexed: 11/29/2022]
238
Scarff CE, Bearman M, Chiavaroli N, Trumble S. Keeping mum in clinical supervision: private thoughts and public judgements. MEDICAL EDUCATION 2019; 53:133-142. [PMID: 30328138 DOI: 10.1111/medu.13728] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Revised: 05/30/2018] [Accepted: 07/31/2018] [Indexed: 06/08/2023]
Abstract
CONTEXT The seemingly obvious claim that people prefer to keep mum about undesirable messages - termed 'the MUM effect' - was initially reported in the psychology literature in the 1970s. More recently, it has been discussed in contexts including performance appraisals and the reporting of unsuccessful projects in workplace settings, but only sparsely in educational ones. We wished to review the published literature on the MUM effect in order to understand its implications for clinical assessment. METHODS We performed a narrative literature review on the MUM effect and clustered findings into three themes: those that describe what MUM behaviours look like, those that explore potential reasons for the MUM effect, and those that consider factors that can influence MUM behaviours. RESULTS This paper summarises the extensive literature on the MUM effect, including its manifestations and modifiers, and discusses how the effect may be used to consider issues faced by many clinical supervisors when delivering 'negative' assessment messages to trainees. DISCUSSION We suggest that, as a pervasive phenomenon, the MUM effect can both help to explain the difficulties that some assessors face when delivering undesirable messages (including feedback or ratings) and offer new insights into how to deal with such issues.
Affiliation(s)
- Catherine E Scarff, Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Margaret Bearman, Centre for Research in Assessment and Digital Learning, Deakin University, Geelong, Victoria, Australia
- Neville Chiavaroli, Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Steve Trumble, Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
239
Duitsman ME, Fluit CRMG, van der Goot WE, ten Kate-Booij M, de Graaf J, Jaarsma DADC. Judging residents' performance: a qualitative study using grounded theory. BMC MEDICAL EDUCATION 2019; 19:13. [PMID: 30621674 PMCID: PMC6325830 DOI: 10.1186/s12909-018-1446-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 12/28/2018] [Indexed: 05/12/2023]
Abstract
BACKGROUND Although program directors judge residents' performance for summative decisions, little is known about how they do this. This study examined what information program directors use, how they value this information in making a judgment of residents' performance, and what residents think of this process. METHODS Sixteen semi-structured interviews were held with residents and program directors from different hospitals in the Netherlands in 2015-2016. Participants were recruited from internal medicine, surgery and radiology. Transcripts were analysed using grounded theory methodology. Concepts and themes were identified by iterative constant comparison. RESULTS When approaching semi-annual meetings with residents, program directors report gathering information primarily from assessment tools, from faculty members, and from their own experience with residents. They put more value on faculty members' comments during meetings and in the corridors than on feedback provided in the assessment tools, and they are influenced by their own beliefs about learning and education in valuing feedback. Residents are aware that faculty members discuss their performance in meetings, but they believe the assessment tools provide the most important proof of their clinical competency. CONCLUSIONS Residents think that feedback in the assessment tools is the most important proof of their performance, whereas program directors scarcely use this feedback to form a judgment about residents' performance, relying heavily on faculty remarks in meetings instead. Residents' performance may therefore be better judged in group meetings that are organised to enhance optimal information sharing and decision making about residents' performance.
Affiliation(s)
- Marrigje E. Duitsman, Department of Internal Medicine and Radboud Health Academy, Radboud University Medical Centre, Gerard van Swietenlaan 4, Postbus 9101, 6500 HB Nijmegen, the Netherlands
- Cornelia R. M. G. Fluit, Health Academy, Department of Research in Learning and Education, Radboud University Medical Centre, Nijmegen, the Netherlands
- Wieke E. van der Goot, Martini Hospital, Groningen, the Netherlands; Centre for Education Development and Research in Health Professions, University Medical Centre Groningen, Groningen, the Netherlands
- Marianne ten Kate-Booij, Department of Obstetrics and Gynaecology, Erasmus University Medical Centre, Rotterdam, the Netherlands
- Jacqueline de Graaf, Department of Internal Medicine, Radboudumc Nijmegen, Nijmegen, the Netherlands
- Debbie A. D. C. Jaarsma, Centre for Education Development and Research in Health Professions, University Medical Centre Groningen, Groningen, the Netherlands
240
Harrison C. Can we redesign the MRCGP assessment to support lifelong learning? EDUCATION FOR PRIMARY CARE 2019; 30:9-12. [DOI: 10.1080/14739879.2018.1563507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
241
Links MJ, Wilkinson T, Campbell C. Discourses of professionalism: Metaphors, theory and practice. MEDICAL TEACHER 2019; 41:91-98. [PMID: 29575950 DOI: 10.1080/0142159x.2018.1442565] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
UNLABELLED Professionalism is a contested concept, and different discourses have differed in scope and epistemology. The theory of communicative action integrates epistemology (knowledge interests) with scope (lifeworld). AIM To pragmatically inform the learning of professionalism. METHODS We applied the theory of communicative action to professionalism discourses. RESULTS Previous professionalism discourses translated into four frames: technical, communicative, improvement, and critical. These can be viewed as four metaphors: the scale, the conversation, the consensus conference, and the protest. The theory of communicative action demonstrated that a critical frame was often lacking from discussions of professionalism; this frame emphasizes critiquing the assumptions made, the way power is utilized, and the ends to which actions are directed. Using these frameworks connected discourses on professionalism to other key medical discourses, particularly quality improvement, patient centeredness, social justice, and professional well-being. CONCLUSION The theory of communicative action adds value by introducing criteria for the evaluation of individual truth claims that expand the discussion beyond accuracy to include sincerity, ethics and coherence, and by emphasizing the promotion of free speech and the inclusion of diverse views and stakeholders. The theory of communicative action provides a coherent and useful framework for viewing professionalism that integrates with broader discussions about philosophy, truth claims, and post-modern society.
Affiliation(s)
- Matthew Jon Links, Medical Education Unit, Gold Coast University Hospital and Health Service, Southport, Australia
- Tim Wilkinson, Medical Education Unit, Christchurch School of Medicine & Health Sciences, University of Otago, Christchurch, New Zealand
- Craig Campbell, Department of Professional Development, Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
242
Rafii F, Ghezeljeh TN, Nasrollah S. Design and implementation of clinical competency evaluation system for nursing students in medical-surgical wards. J Family Med Prim Care 2019; 8:1408-1413. [PMID: 31143730 PMCID: PMC6510106 DOI: 10.4103/jfmpc.jfmpc_47_19] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Background: In nursing, it is important to ensure that students' clinical competency is evaluated using a valid and reliable evaluation system. The aim of this study was to design a clinical competency evaluation system for nursing students in medical-surgical wards and to determine its validity and reliability. Methods: This cross-sectional study was conducted on nursing students who were completing their practicum courses in medical-surgical wards. First, the educational objectives and applicable evaluation tools were determined. Then, three tools were selected as appropriate: Direct Observation of Procedural Skills (DOPS), the Mini Clinical Evaluation Exercise (Mini-CEX), and Clinical Work Sampling (CWS). Finally, the evaluation system was designed and its validity was confirmed using the content validity index (CVI) and content validity ratio (CVR). Reliability of the tools was calculated using Cronbach's alpha coefficient. Results: The CWS tool had CVI = 0.91 and CVR = 0.93, the DOPS tool had CVI = 0.98 and CVR = 0.94, and the Mini-CEX tool had CVI = 0.93 and CVR = 1. These results indicated desirable validity of the designed evaluation system, and all items had an appropriate CVR. Reliability was also higher than 0.7. A significant difference was found between the results of students' evaluation using the school's current evaluation method and the designed evaluation system. From the perspective of teachers and students, the designed evaluation system was accepted. Conclusion: The designed evaluation system had high reliability and validity, and its application satisfied the majority of teachers and students. It can therefore be used as a useful evaluation system for assessing clinical competencies in medical-surgical wards.
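The CVI and CVR figures above follow standard definitions: Lawshe's formula for the CVR and, for the item-level CVI, the proportion of experts rating an item relevant. A worked sketch with hypothetical panel data follows.

```python
# A worked sketch of the validity indices named above, under standard definitions:
# Lawshe's content validity ratio, CVR = (ne - N/2) / (N/2), where ne of N experts
# rate an item "essential"; and the item-level content validity index, CVI = share
# of experts rating the item relevant (e.g. 3 or 4 on a 4-point scale).
def cvr(n_essential: int, n_experts: int) -> float:
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cvi(relevance_ratings: list[int], threshold: int = 3) -> float:
    return sum(r >= threshold for r in relevance_ratings) / len(relevance_ratings)

print(round(cvr(n_essential=14, n_experts=15), 2))  # 0.87 -> item retained
print(cvi([4, 4, 3, 4, 2, 4, 3, 4, 4, 4]))          # 0.9
```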
Affiliation(s)
- Forough Rafii, Department of Medical-Surgical, School of Nursing and Midwifery, Iran University of Medical Sciences, Tehran, Iran
- Tahereh Najafi Ghezeljeh, Department of Intensive Care and Cardiovascular Perfusion Technology, School of Nursing and Midwifery, Iran University of Medical Sciences, Tehran, Iran
- Sepideh Nasrollah, Department of Medical-Surgical, School of Nursing and Midwifery, International Campus, Iran University of Medical Sciences, Tehran, Iran
243
Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. MEDICAL EDUCATION 2019; 53:76-85. [PMID: 30073692 DOI: 10.1111/medu.13645] [Citation(s) in RCA: 187] [Impact Index Per Article: 37.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2018] [Revised: 04/04/2018] [Accepted: 05/24/2018] [Indexed: 05/25/2023]
Abstract
CONTEXT Models of sound assessment practices increasingly emphasise assessment's formative role. As a result, assessment must not only support sound judgements about learner competence, but also generate meaningful feedback to guide learning. Reconciling the tension between assessment's focus on judgement and decision making and feedback's focus on growth and development represents a critical challenge for researchers and educators. METHODS We synthesise the literature related to this tension, framed around four trends in education research: (i) shifting perspectives on assessment; (ii) shifting perspectives on feedback; (iii) increasing attention on learners' perceptions of assessment and feedback, and (iv) increasing attention on the influence of culture on assessment and feedback. We describe factors that produce and sustain this tension. RESULTS The lines between assessment and feedback frequently blur in medical education. Models of programmatic assessment deliberately use the same data for both purposes: low-stakes individual data points are used formatively, but then are added together to support summative judgements. However, the translation of theory to practice is not straightforward. Efforts to embed meaningful feedback in programmes of learning face a multitude of threats. Learners may perceive assessment with formative intent as summative, restricting their engagement with it as feedback, and thus diminishing its learning value. A learning culture focused on assessment may limit learners' sense of safety to explore, to experiment, and sometimes to fail. CONCLUSIONS Successfully blending assessment and feedback demands clarity of purpose, support for learners, and a system and organisational commitment to a culture of improvement rather than a culture of performance.
Affiliation(s)
- Christopher J Watling, Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Shiphra Ginsburg, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
244
Boulet JR, Durning SJ. What we measure … and what we should measure in medical education. MEDICAL EDUCATION 2019; 53:86-94. [PMID: 30216508 DOI: 10.1111/medu.13652] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2018] [Revised: 03/06/2018] [Accepted: 05/31/2018] [Indexed: 05/20/2023]
Abstract
CONTEXT As the practice of medicine evolves, the knowledge, skills and attitudes required to provide patient care will continue to change. These competency-based changes will necessitate the restructuring of assessment systems. High-quality assessment programmes are needed to fulfil health professions education's contract with society. OBJECTIVES We discuss several issues that are important to consider when developing assessments in health professions education. We organise the discussion along the continuum of medical education, outlining the tension between what has been deemed important to measure and what should be measured. We also attempt to alleviate some of the apprehension associated with measuring evolving competencies by discussing how emerging technologies, including simulation and artificial intelligence, can play a role. METHODS We focus our thoughts on the assessment of competencies that, at least historically, have been difficult to measure. We highlight several assessment challenges, discuss some of the important issues concerning the validity of assessment scores, and argue that medical educators must do a better job of justifying their use of specific assessment strategies. DISCUSSION As in most professions, there are clear tensions in medicine in relation to what should be assessed, who should be responsible for administering assessment content, and how much evidence should be gathered to support the evaluation process. Although there have been advances in assessment practices, there is still room for improvement. From the student's, resident's and practising physician's perspectives, assessments need to be relevant. Knowledge is certainly required, but there are other qualities and attributes that are important, and perhaps far more important. Research efforts spent now on delineating what makes a good physician, and on aligning new and upcoming assessment tools with the relevant competencies, will ensure that assessment practices, whether aimed at establishing competence or at fostering learning, are effective with respect to their primary goal: to produce qualified physicians.
Affiliation(s)
- John R Boulet, Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, Pennsylvania, USA
- Steven J Durning, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
245
Duijn CCMA, Ten Cate O, Kremer WDJ, Bok HGJ. The Development of Entrustable Professional Activities for Competency-Based Veterinary Education in Farm Animal Health. JOURNAL OF VETERINARY MEDICAL EDUCATION 2018; 46:218-224. [PMID: 30565977 DOI: 10.3138/jvme.0617-073r] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Entrustable professional activities (EPAs) are professional tasks that can be entrusted to a student under a given level of supervision once he or she has demonstrated competence in these tasks. The EPA construct was conceived to increase transparency in objectives for clinical workplace learning and to help ensure patient safety and the quality of care. A first step in implementing EPAs in a veterinary curriculum is to identify the core EPAs of the profession. The aim of this study was to develop EPAs for farm animal health. An initial set of 36 EPAs for farm animal health was prepared by a team of six veterinarians and curriculum developers and used in a modified Delphi study. In this iterative process, the EPAs were evaluated until higher than 80% agreement was reached. Of 83 veterinarians who participated, 39 (47%) completed the Delphi procedure. After two rounds, the panel reached consensus. A small expert group further refined and reorganized the EPAs for educational purposes into seven core EPAs for farm animal health and 29 sub-EPAs. This study is an important step in optimizing competency-based training in veterinary medicine. Future steps are to implement EPAs in the curriculum and train supervisors to assess students' ability to perform EPAs with increasing levels of independence.
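The modified Delphi consensus rule described above (agreement above 80%) reduces to a simple per-item computation, sketched below with hypothetical panel responses.

```python
# A minimal sketch of the consensus rule described above: in each Delphi round,
# retain only the proposed EPAs on which more than 80% of panelists agree.
# EPA labels and vote counts are hypothetical.
import pandas as pd

votes = pd.DataFrame({
    "epa":   ["herd health visit", "obstetric care", "necropsy", "surgery"],
    "agree": [36, 33, 25, 34],   # panelists endorsing the EPA
    "total": [39, 39, 39, 39],   # panelists completing the round
})

votes["agreement"] = votes["agree"] / votes["total"]
consensus = votes[votes["agreement"] > 0.80]  # items reaching consensus this round
print(consensus[["epa", "agreement"]])
```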
246
Exploring assessment of medical students' competencies in pain medicine-A review. Pain Rep 2018; 4:e704. [PMID: 30801044 PMCID: PMC6370140 DOI: 10.1097/pr9.0000000000000704] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Revised: 10/21/2018] [Accepted: 11/01/2018] [Indexed: 12/21/2022] Open
Abstract
Introduction: Considering the continuing high prevalence and public health burden of pain, it is critical that medical students are equipped with competencies in the field of pain medicine. Robust assessment of student expertise is integral to effective implementation of competency-based medical education. Objective: The aim of this review was to describe the literature regarding methods for assessing pain medicine competencies in medical students. Method: The PubMed, Medline, EMBASE, ERIC, Google Scholar, and BEME databases were searched for empirical studies primarily focusing on the assessment of any domain of pain medicine competencies in medical students, published between January 1997 and December 2016. Results: A total of 41 studies met the inclusion criteria. Most assessments were performed for low-stakes summative purposes and did not reflect contemporary theories of assessment. Assessments were predominantly undertaken using written tests or clinical simulation methods. The most common pain medicine education topics assessed were pain pharmacology and the management of cancer and low-back pain. Most studies focused on the assessment of cognitive levels of learning, as opposed to the more challenging domains of demonstrating skills and attitudes or developing and implementing pain management plans. Conclusion: This review highlights the need for more robust assessment tools that effectively measure the abilities of medical students to integrate pain-related competencies into clinical practice. A Pain Medicine Assessment Framework has been developed to encourage systematic planning of pain medicine assessment at medical schools internationally and to promote continuous multidimensional assessment in a variety of clinical contexts, based on well-defined pain medicine competencies.
247
Bearman M, Ajjawi R. Actor-network theory and the OSCE: formulating a new research agenda for a post-psychometric era. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:1037-1049. [PMID: 29027040 DOI: 10.1007/s10459-017-9797-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2017] [Accepted: 10/07/2017] [Indexed: 06/07/2023]
Abstract
The Objective Structured Clinical Examination (OSCE) is a ubiquitous part of medical education, although there is some debate about its value, particularly around its possible impact on learning. Literature and research regarding the OSCE are most often situated within the psychometric or competency discourses of assessment. This paper describes an alternative: actor-network theory (ANT), a sociomaterial approach to understanding practice and learning. ANT provides a means to productively examine tensions and limitations of the OSCE, in part by extending research to include social relationships and physical objects. Using a narrative example, the paper suggests three ANT-informed insights into the OSCE. We describe: (1) exploring the OSCE as a holistic combination of people and objects; (2) thinking about the influences a checklist can exert over the OSCE; and (3) the implications of ANT educational research for standardisation within the OSCE. We draw from this discussion to provide a practical agenda for ANT research into the OSCE. This agenda promotes new areas for exploration in an often taken-for-granted assessment format.
Affiliation(s)
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia.
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia
248
|
Gingerich A, Schokking E, Yeates P. Comparatively salient: examining the influence of preceding performances on assessors' focus and interpretations in written assessment comments. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:937-959. [PMID: 29980956 DOI: 10.1007/s10459-018-9841-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2018] [Accepted: 07/03/2018] [Indexed: 06/08/2023]
Abstract
Recent literature places greater emphasis on assessment comments rather than relying solely on scores. Both, however, are variable products of assessment judgements. One established source of variability is the "contrast effect": scores are shifted away from the level of competence depicted in a preceding encounter. This shift could arise from an effect on the range-frequency of assessors' internal scales or from a change in the salience of performance aspects within assessment judgements. Because these explanations suggest different interventions, we investigated assessors' cognition, using the insight provided by "clusters of consensus" to determine whether contrast effects induced any change in the salience of performance aspects. A dataset from a previous experiment contained scores and comments for three encounters: two with significant contrast effects and one without. Clusters of consensus were identified using F-sort and latent partition analysis both when contrast effects were significant and when they were not. The proportion of assessors making similar comments differed significantly only when contrast effects were significant, with assessors more frequently commenting on aspects that were dissimilar to the standard of competence demonstrated in the preceding performance. Rather than simply influencing the range-frequency of assessors' scales, preceding performances may affect the salience of performance aspects through comparative distinctiveness: when juxtaposed with the context, some aspects are more distinct and selectively draw attention. Research is needed to determine whether changes in salience indicate biased or improved assessment information. The potential to augment existing benchmarking procedures in assessor training, by cueing assessors' attention through observation of reference performances immediately prior to assessment, should be explored.
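For readers unfamiliar with contrast effects, the minimal simulation below illustrates the phenomenon the study builds on: a rating of the same target performance is pushed away from the competence level of the preceding encounter. The scoring scale, effect strength, and noise level are invented for illustration and are not estimates from this experiment.

```python
# A toy simulation of a contrast effect in assessor scoring.
# Parameter values are hypothetical, chosen only to show the direction
# of the shift described in the abstract.
import random

TRUE_SCORE = 5.0       # assumed "true" quality of the target performance
CONTRAST_WEIGHT = 0.4  # assumed strength of the contrast effect
NOISE_SD = 0.5         # assessor-to-assessor noise


def observed_score(preceding_level: float) -> float:
    """Score given to the target performance after a preceding encounter."""
    # The score is pulled away from the preceding performance's level.
    contrast = CONTRAST_WEIGHT * (TRUE_SCORE - preceding_level)
    return TRUE_SCORE + contrast + random.gauss(0.0, NOISE_SD)


random.seed(1)
after_weak = [observed_score(preceding_level=3.0) for _ in range(1000)]
after_strong = [observed_score(preceding_level=7.0) for _ in range(1000)]

# Mean scores are inflated after a weak performance (~5.8) and deflated
# after a strong one (~4.2), even though the target performance is identical.
print(sum(after_weak) / len(after_weak))
print(sum(after_strong) / len(after_strong))
```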
Affiliation(s)
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada.
- Edward Schokking
- Northern Medical Program, University of Northern British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada
- Peter Yeates
- Keele University School of Medicine, Keele, Staffordshire, UK
- Pennine Acute Hospitals NHS Trust, Bury, Lancashire, UK
249
|
Bok HGJ, de Jong LH, O'Neill T, Maxey C, Hecker KG. Validity evidence for programmatic assessment in competency-based education. PERSPECTIVES ON MEDICAL EDUCATION 2018; 7:362-372. [PMID: 30430439 PMCID: PMC6283777 DOI: 10.1007/s40037-018-0481-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
INTRODUCTION Competency-based education (CBE) is now pervasive in health professions education. A foundational principle of CBE is to assess and identify the progression of competency development in students over time. It has been argued that a programmatic approach to assessment in CBE maximizes student learning. The aim of this study is to investigate whether programmatic assessment, i.e., a system of assessment, can be used within a CBE framework to track the progression of student learning within and across competencies over time. METHODS Three workplace-based assessment methods were used to measure the same seven competency domains. We performed a retrospective quantitative analysis of 327,974 assessment data points, drawn from 16,575 completed assessment forms from 962 students over 124 weeks, using both descriptive (visualization) and inferential (modelling) analyses, including multilevel random coefficient modelling and generalizability theory. RESULTS Random coefficient modelling indicated that variance due to differences in inter-student performance was the largest component (40%). The reliability coefficients of scores from the assessment methods ranged from 0.86 to 0.90. Method and competency variance components were in the small-to-moderate range. DISCUSSION The current validity evidence provides cause for optimism regarding the explicit development and implementation of a program of assessment within CBE. The majority of the variance in scores appears to be student-related and reliable, supporting the psychometric properties of the system as well as both formative and summative applications of the scores.
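As a rough illustration of the generalizability-theory machinery referenced here, the sketch below estimates variance components and a G coefficient for a simulated one-facet design (students crossed with assessment occasions). This is deliberately far simpler than the multi-method, multi-competency models analysed in the paper, and all numbers are simulated.

```python
# Minimal one-facet G-study sketch: students crossed with occasions.
# Data are simulated; variance magnitudes are arbitrary assumptions.
import random

random.seed(7)
N_STUDENTS, N_OCCASIONS = 50, 8

# Simulated scores: overall mean + student effect + occasion effect + noise.
student_fx = [random.gauss(0, 1.0) for _ in range(N_STUDENTS)]
occasion_fx = [random.gauss(0, 0.3) for _ in range(N_OCCASIONS)]
scores = [[5 + s + o + random.gauss(0, 0.7) for o in occasion_fx]
          for s in student_fx]

grand = sum(map(sum, scores)) / (N_STUDENTS * N_OCCASIONS)
student_means = [sum(row) / N_OCCASIONS for row in scores]
occasion_means = [sum(scores[p][i] for p in range(N_STUDENTS)) / N_STUDENTS
                  for i in range(N_OCCASIONS)]

# Sums of squares for a persons-by-occasions crossed ANOVA.
ss_students = N_OCCASIONS * sum((m - grand) ** 2 for m in student_means)
ss_occasions = N_STUDENTS * sum((m - grand) ** 2 for m in occasion_means)
ss_total = sum((x - grand) ** 2 for row in scores for x in row)
ss_residual = ss_total - ss_students - ss_occasions

ms_students = ss_students / (N_STUDENTS - 1)
ms_residual = ss_residual / ((N_STUDENTS - 1) * (N_OCCASIONS - 1))

# Universe-score (student) variance, and the G coefficient for relative
# decisions averaged over N_OCCASIONS occasions.
var_student = (ms_students - ms_residual) / N_OCCASIONS
g_coefficient = var_student / (var_student + ms_residual / N_OCCASIONS)
print(f"student variance ~ {var_student:.2f}, G coefficient ~ {g_coefficient:.2f}")
```

In this toy design, most variance is attributable to students, so the G coefficient is high; the paper's analysis makes the analogous argument at much larger scale, across methods and competency domains.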
Affiliation(s)
- Harold G J Bok
- Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands.
- Lubberta H de Jong
- Centre for Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Thomas O'Neill
- Department of Psychology, University of Calgary, Calgary, Canada
- Connor Maxey
- Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada
- Kent G Hecker
- Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, Calgary, Canada
250
|
Chung MP, Thang CK, Vermillion M, Fried JM, Uijtdehaage S. Exploring medical students' barriers to reporting mistreatment during clerkships: a qualitative study. MEDICAL EDUCATION ONLINE 2018; 23:1478170. [PMID: 29848223 PMCID: PMC5990956 DOI: 10.1080/10872981.2018.1478170] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
BACKGROUND Despite the widespread implementation of policies to address mistreatment, the proportion of medical students who experience mistreatment during clinical training is significantly higher than the proportion who report it. Understanding barriers to reporting from students' perspectives is needed before effective interventions can be implemented to improve the clinical learning environment. OBJECTIVE We explored medical students' reasons for not reporting perceived mistreatment or abuse experienced during clinical clerkships at the David Geffen School of Medicine at UCLA (DGSOM). DESIGN This was a sequential two-phase qualitative study. In the first phase, we analyzed institutional survey responses to an open-ended questionnaire administered to the DGSOM graduating classes of 2013-2015, asking why students who experienced mistreatment did not seek help or report incidents. In the second phase, we conducted focus group interviews with third- and fourth-year medical students to explore their reasons for not reporting mistreatment. In total, 30 of 362 eligible students participated in five focus groups; 63% of focus group participants felt they had experienced mistreatment, of whom more than half chose not to report it to any member of the medical school administration. Transcripts were analyzed using inductive thematic analysis. RESULTS The following major themes emerged: fear of reprisal, even in the setting of anonymity; the perception that mistreatment is part of medical culture; difficulty reporting more subtle forms of mistreatment; the sense that an incident is not important enough to report; concern that reporting damages the student-teacher relationship; the view that the reporting process is too troublesome; and empathy with the source of mistreatment. Differing perceptions arose as students debated whether reporting was beneficial to the clinical learning environment. CONCLUSIONS Multiple complex factors deeply rooted in the culture of medicine, along with the negative connotations associated with reporting, prevent students from reporting incidents of mistreatment. Further research is needed to establish interventions that will help identify mistreatment and change the underlying culture.
Affiliation(s)
- Melody P. Chung
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Christine K. Thang
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Michelle Vermillion
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Joyce M. Fried
- Dean's Office, David Geffen School of Medicine at UCLA, Los Angeles, California, USA