51
Wilbur K, Wilby KJ, Pawluk S. Pharmacy Preceptor Judgments of Student Performance and Behavior During Experiential Training. American Journal of Pharmaceutical Education 2018; 82:6451. [PMID: 30643308] [PMCID: PMC6325462] [DOI: 10.5688/ajpe6451]
Abstract
Objective. To report the findings of how Canadian preceptors perceive and subsequently evaluate diverse levels of trainees during pharmacy clerkships. Methods. Using a modified Delphi technique, 17 Doctor of Pharmacy (PharmD) preceptors from across Canada categorized 16 student narrative descriptions according to their perception of the described student performance: exceeds, meets, or falls below their expectations. Results. Twelve (75%) student narrative profiles were categorized unanimously in the final round, six of which were below expectations. In 115 (98%) of the 117 below-expectations ratings assigned by responding preceptors, the post-baccalaureate PharmD student described would have failed the rotation. Conversely, if the same narrative instead profiled a resident or an entry-to-practice PharmD student, rotation failure decreased to 95 (81%) and 89 (76%) ratings, respectively. Conclusion. Pharmacy preceptors do not uniformly judge the same described student performance, and they inconsistently apply failing rotation grades even when they agree that performance falls below expectations.
Affiliation(s)
- Kerry Wilbur
- Faculty of Pharmaceutical Sciences, University of British Columbia, Vancouver, BC, Canada
- Kyle J. Wilby
- School of Pharmacy, University of Otago, Dunedin, New Zealand
- Shane Pawluk
- College of Pharmacy, Qatar University, Doha, Qatar
52
Castanelli DJ, Moonen-van Loon JMW, Jolly B, Weller JM. The reliability of a portfolio of workplace-based assessments in anesthesia training. Can J Anaesth 2018; 66:193-200. [PMID: 30430441] [DOI: 10.1007/s12630-018-1251-7]
Abstract
PURPOSE Competency-based anesthesia training programs require robust assessment of trainee performance and commonly combine different types of workplace-based assessment (WBA) covering multiple facets of practice. This study measured the reliability of WBAs in a large existing database and explored how they could be combined to optimize reliability for assessment decisions. METHODS We used generalizability theory to measure the composite reliability of four different types of WBAs used by the Australian and New Zealand College of Anaesthetists: mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), case-based discussion (CbD), and multi-source feedback (MSF). We then modified the number and weighting of WBA combinations to optimize reliability with fewer assessments. RESULTS We analyzed 67,405 assessments from 1,837 trainees and 4,145 assessors. We assumed acceptable reliability for interim (intermediate-stakes) and final (high-stakes) decisions to be 0.7 and 0.8, respectively. Depending on the combination of WBA types, when every assessment carried equal weight, 12 assessments allowed the 0.7 threshold to be reached, while 20 were required to reach 0.8. If the weighting of the assessments is optimized, acceptable reliability for interim and final decisions is possible with nine (e.g., two DOPS, three CbD, two mini-CEX, two MSF) and 15 (e.g., two DOPS, eight CbD, three mini-CEX, two MSF) assessments, respectively. CONCLUSIONS Reliability is an important factor to consider when designing assessments, and measuring composite reliability can allow the selection of a WBA portfolio with adequate reliability to provide evidence for defensible decisions on trainee progression.
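The weighting idea can be made concrete with a small calculation. The following is a minimal Python sketch of composite reliability under simplifying assumptions that are not taken from the paper (a single shared person variance, uncorrelated residual errors across WBA types, and invented variance components); the published analysis estimated its components from the ANZCA data with a full generalizability analysis.

```python
# Minimal sketch of composite reliability for a weighted WBA portfolio.
# Assumes one shared person (true-score) variance and uncorrelated residual
# errors across instruments; all variance components are invented.

def composite_reliability(var_person, instruments):
    """instruments: list of (weight, residual_variance, n_assessments)."""
    total_w = sum(w for w, _, _ in instruments)
    error = sum((w / total_w) ** 2 * var_res / n for w, var_res, n in instruments)
    return var_person / (var_person + error)

var_p = 0.30  # hypothetical person variance
# (weight, residual variance, number of assessments) per WBA type
equal = [(1.0, 1.2, 2),   # DOPS
         (1.0, 0.9, 3),   # CbD
         (1.0, 1.0, 2),   # mini-CEX
         (1.0, 0.7, 2)]   # MSF
print(f"equal weights:     {composite_reliability(var_p, equal):.2f}")

# Shifting weight toward the more reliable instruments raises the composite
# reliability without adding assessments.
optimized = [(0.8, 1.2, 2), (1.4, 0.9, 3), (1.0, 1.0, 2), (1.2, 0.7, 2)]
print(f"optimized weights: {composite_reliability(var_p, optimized):.2f}")
```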
Affiliation(s)
- Damian J Castanelli
- School of Clinical Sciences at Monash Health, Monash University, Clayton, VIC, Australia; Department of Anaesthesia and Perioperative Medicine, Monash Health, Clayton, VIC, Australia
- Joyce M W Moonen-van Loon
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Brian Jolly
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Newcastle, NSW, Australia
- Jennifer M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, Auckland, New Zealand
53
Burggraf M, Kristin J, Wegner A, Beck S, Herbstreit S, Dudda M, Jäger M, Kauther MD. Willingness of medical students to be examined in a physical examination course. BMC Medical Education 2018; 18:246. [PMID: 30373579] [PMCID: PMC6206683] [DOI: 10.1186/s12909-018-1353-5]
Abstract
BACKGROUND Physical examination courses are an essential part of the education of medical students. The aim of this study was to ascertain the factors influencing students' motivation and willingness to participate in a physical examination course. METHODS Students were asked to complete a questionnaire subdivided into five domains: anthropometric data, religiousness, motivation to take part in physical examination courses, willingness to be physically examined at 11 different body regions by peers or a professional tutor, and a field for free text. RESULTS The questionnaire was completed by 142 medical students. The importance of the examination course was rated 8.7 / 10 points, and the score for students' motivation was 7.8 / 10 points. Willingness to be physically examined ranged from 6% to 100% depending on body part and examiner. Female students were significantly less willing to be examined at sensitive body parts (breast, upper body, groin and the hip joint; p = .003 to < .001), depending on group composition and/or examiner. Strictly religious students showed significantly less willingness to undergo examination of any part of the body except the hand (p = .02 to < .001). Considering BMI, willingness to be examined was comparable for normal-weight and under-/overweight students in general (80% vs. 77%). Concerning the composition of the group for physical examination skills courses, students preferred self-assembled groups over mixed-gender and same-gender groups. CONCLUSIONS Peer physical examination is a method to improve students' skills. While motivation to participate in and acceptance of the physical examination course appears to be high, willingness to be examined is low for certain parts of the body, e.g. breast and groin, depending on religiousness, gender and examiner. Examination by a professional medical tutor did not lead to higher acceptance. Most students would prefer to choose their team for physical examination courses themselves rather than be assigned to a group.
Affiliation(s)
- Manuel Burggraf
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Julia Kristin
- Department of Otorhinolaryngology, University Hospital Duesseldorf, Heinrich Heine University Duesseldorf, Duesseldorf, Germany
- Alexander Wegner
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Sascha Beck
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Stephanie Herbstreit
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Marcel Dudda
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Marcus Jäger
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
- Max Daniel Kauther
- Department of Orthopaedics and Trauma Surgery, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, 45147 Essen, Germany
54
Abstract
BACKGROUND Clinical handover is a core skill that needs to be learned by students and junior clinical staff to improve patient safety. Despite this, training is frequently lacking and of poor quality. A user-friendly assessment tool can assist clinicians to provide training and feedback. CONTEXT This tool was developed in the context of medical students on short placements in remote health services, supervised by registered nurses, in outback Australia. Students make telephone handover calls to the Royal Flying Doctor Service (RFDS), which provides generalist and aeromedical retrieval services to its communities. METHODS Doctors in the RFDS at Broken Hill used a clinical handover assessment tool (CHAT), based on the introduction, situation, background, assessment and recommendation (ISBAR) handover mnemonic, for assessment and training on telephone handovers given by medical students in this remote setting. In this way we explored the acceptability and educational impact of doctors giving immediate feedback on medical students' handover skills using CHAT. Medical students were invited to complete surveys and doctors completed interviews about their experience of giving or receiving handovers. Students highly valued the experience of learning handovers in a clinical setting. Doctors in the RFDS found the tool helpful for assessment and for giving feedback in their routine work. We identified no concerns about the safety of patients or students. CONCLUSIONS We suggest that work-based handover assessment and feedback provided by clinicians are feasible and should be developed further. Students can learn to give handovers safely even in a remote setting. Clinicians may find CHAT helpful in the learning and teaching of structured handovers in other clinical settings.
Affiliation(s)
- Malcolm Moore
- Rural Clinical School, Australian National University Medical School, Canberra, Australian Capital Territory, Australia; University Department of Rural Health, University of Sydney, Broken Hill, New South Wales, Australia
- Chris Roberts
- Office of Education, The University of Sydney School of Medicine, Sydney, New South Wales, Australia
55
Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in Internal Medicine. BMC Medical Education 2018; 18:218. [PMID: 30236097] [PMCID: PMC6148769] [DOI: 10.1186/s12909-018-1327-7]
Abstract
BACKGROUND Workplace-based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment, but it was developed in the surgical setting and it is not known whether it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that the tools used have proper psychometric data to support them. METHODS Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1s), seniors (PGY2s and PGY3s) and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. The range of ratings varied from 2 (I had to talk them through) to 5 (I did not need to be there) for most items. Mean scores differed significantly by training level (p < .001), with juniors having lower ratings (M = 3.80 (out of 5), SD = 0.49) than seniors (M = 4.22, SD = 0.47), who in turn had lower ratings than fellows (M = 4.70, SD = 0.36). Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient that corresponds to internal consistency is 0.92. CONCLUSIONS This study's psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
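As a small illustration of the level-of-training comparison, the sketch below runs a one-way ANOVA on hypothetical OCAT ratings; the group scores are invented for the example and scipy is a stand-in for whatever software the authors actually used.

```python
# Sketch: one-way ANOVA on OCAT scores across training levels, mirroring
# the study design. Ratings below are invented for illustration.
import numpy as np
from scipy import stats

juniors = [3.2, 3.8, 4.0, 3.6, 3.9, 4.1]   # PGY1s
seniors = [4.0, 4.3, 4.5, 4.1, 4.2, 4.4]   # PGY2-3s
fellows = [4.6, 4.8, 4.7, 4.9, 4.5, 4.7]   # PGY4-5s

f_stat, p_value = stats.f_oneway(juniors, seniors, fellows)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Effect size (eta squared) from the same groups
scores = np.concatenate([juniors, seniors, fellows])
grand_mean = scores.mean()
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2
                 for g in (juniors, seniors, fellows))
eta_sq = ss_between / ((scores - grand_mean) ** 2).sum()
print(f"eta^2 = {eta_sq:.2f}")
```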
Affiliation(s)
- Samantha Halman
- Department of Medicine, the University of Ottawa, The Ottawa Hospital General Campus, 501 Smyth Road, Box 209, Ottawa, Ontario K1H 8L6, Canada
- Janelle Rekman
- Department of Surgical Education, the University of Ottawa, The Ottawa Hospital Civic Campus, Loeb Research Building - Main Floor WM150b, 725 Parkdale Avenue, C/O Isabel Menard, Ottawa, Ontario K1Y 4E9, Canada
- Timothy Wood
- Department of Innovation in Medical Education, Faculty of Medicine, the University of Ottawa, 850 Peter Morand Crescent (Room 102), Ottawa, Ontario K1G 5Z3, Canada
- Andrew Baird
- Department of Medicine, the University of Ottawa, The Ottawa Hospital Parkdale Campus, Room 162, 1053 Carling Avenue, C/O Odile Kaufmann, Ottawa, Ontario K1Y 4E9, Canada
- Wade Gofton
- Department of Surgical Education, the University of Ottawa, Ottawa Hospital - Civic Campus, Suite J15, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9, Canada
- Nancy Dudek
- Department of Medicine, the University of Ottawa, The Rehabilitation Centre, 505 Smyth Road, Ottawa, Ontario K1H 8M2, Canada
56
Young JQ, Hasser C, Hung EK, Kusz M, O'Sullivan PS, Stewart C, Weiss A, Williams N. Developing End-of-Training Entrustable Professional Activities for Psychiatry: Results and Methodological Lessons. Academic Medicine 2018; 93:1048-1054. [PMID: 29166349] [DOI: 10.1097/acm.0000000000002058]
Abstract
PURPOSE To develop entrustable professional activities (EPAs) for psychiatry and to demonstrate an innovative, validity-enhancing methodology that may be relevant to other specialties. METHOD A national task force employed a three-stage process from May 2014 to February 2017 to develop EPAs for psychiatry. In stage 1, the task force used an iterative consensus-driven process to construct proposed EPAs. Each included a title, full description, and relevant competencies. In stage 2, the task force interviewed four nonpsychiatric experts in EPAs and further revised the EPAs. In stage 3, the task force performed a Delphi study of national experts in psychiatric education and assessment. All survey participants completed a brief training program on EPAs. Quantitative and qualitative analysis led to further modifications. Essentialness was measured on a five-point scale. EPAs were included if the content validity index was at least 0.8 and the lower end of the asymmetric confidence interval was not lower than 4.0. RESULTS Stages 1 and 2 yielded 24 and 14 EPAs, respectively. In stage 3, 31 of the 39 invited experts participated in both rounds of the Delphi study. Round 1 reduced the proposed EPAs to 13. Ten EPAs met the inclusion criteria in Round 2. CONCLUSIONS The final EPAs provide a strong foundation for competency-based assessment in psychiatry. Methodological features such as critique by nonpsychiatry experts, a national Delphi study with frame-of-reference training, and stringent inclusion criteria strengthen the content validity of the findings and may serve as a model for future efforts in other specialties.
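The inclusion rule lends itself to a worked example. The sketch below assumes hypothetical panel ratings and one common definition of the content validity index (the share of experts rating the EPA 4 or 5 on the five-point essentialness scale); a percentile bootstrap stands in for the asymmetric confidence interval the task force computed, whose exact construction is not described in the abstract.

```python
# Sketch of the Delphi inclusion rule. Ratings are invented; CVI here is
# the share of experts rating the EPA 4 or 5 on the 5-point essentialness
# scale, and a percentile bootstrap stands in for the asymmetric CI.
import numpy as np

rng = np.random.default_rng(0)
ratings = np.array([5, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5])  # one EPA

cvi = (ratings >= 4).mean()
boot = [rng.choice(ratings, size=ratings.size, replace=True).mean()
        for _ in range(10_000)]
ci_low = np.percentile(boot, 2.5)      # lower bound of 95% bootstrap CI

include = cvi >= 0.8 and ci_low >= 4.0
print(f"CVI = {cvi:.2f}, lower bound = {ci_low:.2f}, include = {include}")
```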
Affiliation(s)
- John Q Young
- J.Q. Young is professor, Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, New York, New York. C. Hasser is assistant professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. E.K. Hung is associate professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. M. Kusz is research assistant, Department of Psychiatry, Hofstra Northwell School of Medicine, New York, New York. P.S. O'Sullivan is professor, Department of Medicine and Surgery, UCSF School of Medicine, San Francisco, California. C. Stewart is assistant professor, Department of Psychiatry, Georgetown School of Medicine, Washington, DC. A. Weiss is associate professor, Department of Psychiatry and Behavioral Sciences, Albert Einstein School of Medicine, New York, New York. N. Williams is professor, Department of Psychiatry, University of Iowa Carver College of Medicine, Iowa City, Iowa.
57
Lörwald AC, Lahner FM, Nouns ZM, Berendonk C, Norcini J, Greif R, Huwendiek S. The educational impact of Mini-Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) and its association with implementation: A systematic review and meta-analysis. PLoS One 2018; 13:e0198009. [PMID: 29864130] [PMCID: PMC5986126] [DOI: 10.1371/journal.pone.0198009]
Abstract
Introduction Mini Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) are used as formative assessments worldwide. Since an up-to-date comprehensive synthesis of the educational impact of Mini-CEX and DOPS is lacking, we performed a systematic review. Moreover, as the educational impact might be influenced by characteristics of the setting in which Mini-CEX and DOPS take place or their implementation status, we additionally investigated these potential influences. Methods We searched Scopus, Web of Science, and Ovid, including All Ovid Journals, Embase, ERIC, Ovid MEDLINE(R), and PsycINFO, for original research articles investigating the educational impact of Mini-CEX and DOPS on undergraduate and postgraduate trainees from all health professions, published in English or German from 1995 to 2016. Educational impact was operationalized and classified using Barr’s adaptation of Kirkpatrick’s four-level model. Where applicable, outcomes were pooled in meta-analyses, separately for Mini-CEX and DOPS. To examine potential influences, we used Fisher’s exact test for count data. Results We identified 26 articles demonstrating heterogeneous effects of Mini-CEX and DOPS on learners’ reactions (Kirkpatrick Level 1) and positive effects of Mini-CEX and DOPS on trainees’ performance (Kirkpatrick Level 2b; Mini-CEX: standardized mean difference (SMD) = 0.26, p = 0.014; DOPS: SMD = 3.33, p<0.001). No studies were found on higher Kirkpatrick levels. Regarding potential influences, we found two implementation characteristics, “quality” and “participant responsiveness”, to be associated with the educational impact. Conclusions Despite the limited evidence, the meta-analyses demonstrated positive effects of Mini-CEX and DOPS on trainee performance. Additionally, we revealed implementation characteristics to be associated with the educational impact. Hence, we assume that considering implementation characteristics could increase the educational impact of Mini-CEX and DOPS.
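For readers unfamiliar with the pooled quantity, the sketch below shows inverse-variance (fixed-effect) pooling of standardized mean differences with invented study-level inputs; the published meta-analysis may well have used a random-effects model, so this illustrates the estimator, not the paper's numbers.

```python
# Sketch: inverse-variance (fixed-effect) pooling of standardized mean
# differences (SMDs). Study-level values are invented for illustration;
# the published analysis may have used a random-effects model.
import numpy as np

smd = np.array([0.15, 0.30, 0.40])   # hypothetical per-study SMDs
var = np.array([0.02, 0.05, 0.04])   # hypothetical sampling variances

w = 1.0 / var                         # inverse-variance weights
pooled = (w * smd).sum() / w.sum()
se = np.sqrt(1.0 / w.sum())
print(f"pooled SMD = {pooled:.2f} (SE = {se:.2f}, z = {pooled / se:.2f})")
```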
Affiliation(s)
- Andrea C. Lörwald
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Felicitas-Maria Lahner
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Zineb M. Nouns
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- Christoph Berendonk
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
- John Norcini
- FAIMER, Philadelphia, Pennsylvania, United States of America
- Robert Greif
- Department of Anaesthesiology and Pain Therapy, Bern University Hospital, University of Bern, Bern, Switzerland
- Sören Huwendiek
- Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland
58
Eva KW. Cognitive Influences on Complex Performance Assessment: Lessons from the Interplay between Medicine and Psychology. Journal of Applied Research in Memory and Cognition 2018. [DOI: 10.1016/j.jarmac.2018.03.008]
59
Krupat E. Critical Thoughts About the Core Entrustable Professional Activities in Undergraduate Medical Education. Academic Medicine 2018; 93:371-376. [PMID: 28857790] [DOI: 10.1097/acm.0000000000001865]
Abstract
The Core Entrustable Professional Activities for Entering Residency (Core EPAs) have taken a strong hold on undergraduate medical education (UME). This Perspective questions their value added and considers the utility of the Core EPAs along two separate dimensions: (1) the ways they change the content and focus of the goals of UME; and (2) the extent to which entrustable professional activity (EPA)-based assessment conforms to basic principles of measurement theory as practiced in the social sciences. Concerning content and focus, the author asks whether the 13 Core EPAs frame UME too narrowly, putting competencies into the background and overlooking certain aspirational, but important and measurable, objectives of UME. The author also discusses the unevenness of EPAs in terms of their breadth and their developmental status as core activities. Regarding measurement and assessment, the author raises concerns that the EPA metric introduces layers of inference that may cause distortions and hinder accuracy and rater agreement. In addition, the use of weak anchors and multidimensional scales is also of concern. The author concludes with a proposal for reframing the Core EPAs and Accreditation Council for Graduate Medical Education competencies into broadly defined sets of behaviors, referred to as "Tasks of Medicine," and calls for the development of a systematic and longitudinal research agenda. The author asserts that "slowing down when you should" applies to medical education as well as patient care, and calls for a reevaluation of the Core EPAs before further commitment to them.
Affiliation(s)
- Edward Krupat
- E. Krupat is associate professor, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts. At the time of writing, the author was also director, Center for Evaluation, Harvard Medical School, Boston, Massachusetts.
60
Carraccio C. Strengthening the Connection of Medical Education to the Vision of Improving Child Health. Pediatrics 2018; 141:peds.2017-3427. [PMID: 29444817] [DOI: 10.1542/peds.2017-3427]
Abstract
This is the text version of Dr Carol Carraccio's acceptance speech upon receiving the 2017 Joseph W. St. Geme, Jr, Leadership Award.
61
Patel US, Tonni I, Gadbury-Amyot C, Van der Vleuten CPM, Escudier M. Assessment in a global context: An international perspective on dental education. European Journal of Dental Education 2018; 22 Suppl 1:21-27. [PMID: 29601682] [DOI: 10.1111/eje.12343]
Abstract
Assessments are widely used in dental education to record the academic progress of students and ultimately determine whether they are ready to begin independent dental practice. Whilst some would consider this a "rite of passage" of learning, the concept of assessment in education is being challenged to allow the evolution of "assessment for learning." This serves as an economical use of learning resources whilst allowing learners to prove their knowledge and skills and to demonstrate competence. The Association for Dental Education in Europe and the American Dental Education Association held a joint international meeting in London in May 2017, allowing experts in dental education to come together for the purposes of Shaping the Future of Dental Education. Assessment in a Global Context was one topic through which international leaders could discuss different methods of assessment, identifying the positives and the pitfalls and critiquing the method of implementation to determine the optimum assessment for a learner studying to be a healthcare professional. A post-workshop survey identified that educators were thinking differently about assessment: instead of working as individuals providing isolated assessments, the general consensus was that a longitudinally orientated, systematic and programmatic approach to assessment provides greater reliability and improves the ability to demonstrate learning.
Affiliation(s)
- U S Patel
- School of Dentistry, University of Birmingham, Birmingham, UK
- I Tonni
- Department of Orthodontics, University of Brescia, Brescia, Italy
- C Gadbury-Amyot
- The University of Missouri-Kansas City (UMKC), Kansas City, MO, USA
- C P M Van der Vleuten
- Department of Educational Development and Research in the Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- M Escudier
- Department of Clinical and Diagnostic Sciences, King's College London Dental Institute, London, UK
62
Moore M, Roberts C, Newbury J, Crossley J. Am I getting an accurate picture: a tool to assess clinical handover in remote settings? BMC Medical Education 2017; 17:213. [PMID: 29141622] [PMCID: PMC5688655] [DOI: 10.1186/s12909-017-1067-0]
Abstract
BACKGROUND Good clinical handover is critical to safe medical care. Little research has investigated handover in rural settings. In a remote setting where nurses and medical students give telephone handover to an aeromedical retrieval service, we developed a tool by which the receiving clinician might assess the handover, and investigated factors impacting on the reliability and validity of that assessment. METHODS Researchers consulted with clinicians to develop an assessment tool, based on the ISBAR handover framework, combining validity evidence and the existing literature. The tool was applied 'live' by receiving clinicians and to recorded handovers by academic assessors. The tool's performance was analysed using generalisability theory. Receiving clinicians and assessors provided feedback. RESULTS Reliability for assessing a call was good (G = 0.73 with 4 assessments). The scale had a single-factor structure with good internal consistency (Cronbach's alpha = 0.8). The group mean for the global score for nurses and students was 2.30 (SD 0.85) out of a maximum 3.0, with no difference between these sub-groups. CONCLUSIONS We have developed and evaluated a tool to assess high-stakes handover in a remote setting. It showed good reliability and was easy for working clinicians to use. Further investigation and use beyond this setting is warranted.
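The reported figure of G = 0.73 with four assessments implies a single-call reliability via the Spearman-Brown relation, assuming assessments are exchangeable; the sketch below back-calculates it and projects how many calls a 0.8 target would need. This extrapolation is mine, not the authors'.

```python
# Back-calculate single-call reliability from the reported G = 0.73 with
# n = 4 assessments (Spearman-Brown), then project the number of calls
# needed for other targets. Assumes exchangeable assessments.
import math

def single_from_composite(G, n):
    # Invert Spearman-Brown: G = n*g / (1 + (n - 1)*g)
    return G / (n - (n - 1) * G)

def n_for_target(g1, target):
    # Solve target = n*g1 / (1 + (n - 1)*g1) for n, rounding up
    return math.ceil(target * (1 - g1) / (g1 * (1 - target)))

g1 = single_from_composite(0.73, 4)
print(f"single-call reliability = {g1:.2f}")                  # ~0.40
print(f"calls needed for G = 0.80: {n_for_target(g1, 0.80)}")  # ~6
```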
Affiliation(s)
- Malcolm Moore
- Rural Clinical School, Australian National University Medical School, 54 Mills Rd, Acton, ACT 2601, Australia
- Broken Hill University Department of Rural Health, University of Sydney, Broken Hill, Australia
- Chris Roberts
- Northern Clinical School, Sydney Medical School, University of Sydney, Sydney, Australia
- Jonathan Newbury
- Rural Clinical School, University of Adelaide, Adelaide, Australia
- Jim Crossley
- Medical School, University of Sheffield, Sheffield, UK
63
Zijlstra-Shaw S, Roberts T, Robinson PG. Evaluation of an assessment system for professionalism amongst dental students. European Journal of Dental Education 2017; 21:e89-e100. [PMID: 27440069] [DOI: 10.1111/eje.12226]
Abstract
INTRODUCTION Dental professionalism is an essential requirement for the practice of dentistry that covers both abilities and personal qualities. Therefore, a programme of assessment that promotes personal and professional development throughout the undergraduate dental education course is needed. This study aimed to develop and validate a system to assess dental students' professionalism based on a previously developed conceptual framework. METHODS Using the framework, an assessment programme was designed to encourage students to reflect on and explain their observed behaviours, with appropriate feedback. The programme was panel-tested and then administered to a cohort of senior dental students. Internal reliability, criterion validity and construct validity were evaluated quantitatively, whilst the usefulness of the programme was evaluated qualitatively. RESULTS Mean student, staff and agreed grades were similar, and there were no floor or ceiling effects. All item-total correlations were >0.6 and Cronbach's alpha = 0.95, indicating acceptable internal reliability. All items correlated significantly with global ratings, indicating good criterion validity. All hypothesized correlations were significant, and grades were not related to age or gender. Qualitative data produced three themes: assessment process, educational value and suggestions for improvement. CONCLUSION The assessment programme has good internal reliability and validity, suggesting that basing an assessment system on an explicit theoretical model yields a valuable educational tool.
Affiliation(s)
- S Zijlstra-Shaw
- Academic Unit of Primary Dental Care, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- T Roberts
- Leeds Institute of Medical Education, University of Leeds, Leeds, UK
- P G Robinson
- School of Oral and Dental Sciences, University of Bristol, Bristol, UK
64
Gingerich A, Ramlo SE, van der Vleuten CPM, Eva KW, Regehr G. Inter-rater variability as mutual disagreement: identifying raters' divergent points of view. Advances in Health Sciences Education 2017; 22:819-838. [PMID: 27651046] [DOI: 10.1007/s10459-016-9711-8]
Abstract
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting 'idiosyncratic rater variance' is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical assessments have used open response formats to gather raters' comments and justifications. This design choice allows participants to use idiosyncratic response styles that could result in a distorted representation of the underlying rater cognition and skew subsequent analyses. In this study we explored rater variability using the structured response format of Q methodology. Physician raters viewed video-recorded clinical performances and provided Mini Clinical Evaluation Exercise (Mini-CEX) assessment ratings through a web-based system. They then shared their assessment impressions by sorting statements that described the most salient aspects of the clinical performance onto a forced quasi-normal distribution ranging from "most consistent with my impression" to "most contrary to my impression". Analysis of the resulting Q-sorts revealed distinct points of view for each performance shared by multiple physicians. The points of view corresponded with the ratings physicians assigned to the performance. Each point of view emphasized different aspects of the performance with either rapport-building and/or medical expertise skills being most salient. It was rare for the points of view to diverge based on disagreements regarding the interpretation of a specific aspect of the performance. As a result, physicians' divergent points of view on a given clinical performance cannot be easily reconciled into a single coherent assessment judgment that is impacted by measurement error. If inter-rater variability does not wholly reflect error of measurement, it is problematic for our current measurement models and poses challenges for how we are to adequately analyze performance assessment ratings.
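The core computational step of Q methodology can be sketched briefly: whole Q-sorts are correlated across raters (persons are treated as variables) and the correlation matrix is factored, so that raters who load together share a point of view. The sorts below are random stand-ins, and the plain eigendecomposition is a simplified substitute for the centroid or principal-components extraction with rotation that Q studies typically use.

```python
# Sketch of the core Q-methodology step: correlate whole Q-sorts across
# raters (persons as variables), then factor the correlation matrix.
# Sorts are random stand-ins for forced quasi-normal distributions.
import numpy as np

rng = np.random.default_rng(1)
sorts = rng.integers(-3, 4, size=(6, 20)).astype(float)  # 6 raters x 20 statements

corr = np.corrcoef(sorts)                 # 6 x 6 rater-by-rater correlations
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]         # sort factors by variance explained

print("share of variance, first two factors:",
      np.round(eigvals[order][:2] / eigvals.sum(), 2))

# Loadings: raters with high loadings on the same factor share one point
# of view about the performance; split loadings indicate divergent views.
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order][:2])
print(np.round(loadings, 2))
```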
Affiliation(s)
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, 3333 University Way, Prince George, BC, V2N 4Z9, Canada
- Susan E Ramlo
- Department of Engineering and Science Technology, University of Akron, Akron, OH, USA
- Kevin W Eva
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, BC, Canada
- Glenn Regehr
- Centre for Health Education Scholarship, University of British Columbia, Vancouver, BC, Canada
65
Leep Hunderfund AN, Reed DA, Starr SR, Havyer RD, Lang TR, Norby SM. Ways to Write a Milestone: Approaches to Operationalizing the Development of Competence in Graduate Medical Education. Academic Medicine 2017; 92:1328-1334. [PMID: 28353504] [DOI: 10.1097/acm.0000000000001660]
Abstract
PURPOSE To identify approaches to operationalizing the development of competence in Accreditation Council for Graduate Medical Education (ACGME) milestones. METHOD The authors reviewed all 25 "Milestone Project" documents available on the ACGME Web site on September 11, 2013, using an iterative process to identify approaches to operationalizing the development of competence in the milestones associated with each of 601 subcompetencies. RESULTS Fifteen approaches were identified. Ten focused on attributes and activities of the learner, such as their ability to perform different, increasingly difficult tasks (304/601; 51%), perform a task better and faster (171/601; 28%), or perform a task more consistently (123/601; 20%). Two approaches focused on context, inferring competence from performing a task in increasingly difficult situations (236/601; 39%) or an expanding scope of engagement (169/601; 28%). Two used socially defined indicators of competence such as progression from "learning" to "teaching," "leading," or "role modeling" (271/601; 45%). One approach focused on the supervisor's role, inferring competence from a decreasing need for supervision or assistance (151/601; 25%). Multiple approaches were often combined within a single set of milestones (mean 3.9, SD 1.6). CONCLUSIONS Initial ACGME milestones operationalize the development of competence in many ways. These findings offer insights into how physicians understand and assess the developmental progression of competence and an opportunity to consider how different approaches may affect the validity of milestone-based assessments. The results of this analysis can inform the work of educators developing or revising milestones, interpreting milestone data, or creating assessment tools to inform milestone-based performance measures.
Affiliation(s)
- Andrea N Leep Hunderfund
- A.N. Leep Hunderfund is assistant professor of neurology, Mayo Clinic, Rochester, Minnesota. D.A. Reed is associate professor of medical education and medicine and senior associate dean of academic affairs, Mayo Medical School, Mayo Clinic, Rochester, Minnesota. S.R. Starr is assistant professor of pediatric and adolescent medicine and director of science of health care delivery education, Mayo Medical School, Mayo Clinic, Rochester, Minnesota. R.D. Havyer is assistant professor of medicine, Mayo Clinic, Rochester, Minnesota. T.R. Lang is assistant professor of pediatric and adolescent medicine, Mayo Clinic, Rochester, Minnesota (now at Gundersen Health System, LaCrosse, Wisconsin). S.M. Norby is associate professor of medicine, Mayo Clinic, Rochester, Minnesota.
66
Gaunt A, Patel A, Rusius V, Royle TJ, Markham DH, Pawlikowska T. 'Playing the game': How do surgical trainees seek feedback using workplace-based assessment? Medical Education 2017; 51:953-962. [PMID: 28833426] [DOI: 10.1111/medu.13380]
Abstract
OBJECTIVES Although trainees and trainers find feedback interactions beneficial, difficulties in giving and receiving feedback are reported. Few studies have explored what drives trainees to seek feedback. This study explores how workplace-based assessments (WBAs) influence the ways surgical trainees seek feedback and how feedback interactions unfold. METHODS Utilising a template analysis approach, we conducted 10 focus groups with 42 surgical trainees from four regions across the UK. Data were independently coded by three researchers, incorporating three a priori themes identified from a previous quantitative study. Further themes emerged from exploration of these data. The final template, agreed by the three researchers, was applied to all focus group transcripts. The themes were linked in diagrammatical form to allow critical exploration of the data. RESULTS Trainees' perceptions of the purpose of WBA, whether for learning or as an assessment of learning, and their relationship with their trainer affected how they chose to use WBA. Perceiving WBA as a test led trainees to 'play the game': seeking positive and avoiding negative feedback through WBA. Perceiving WBA as a chance to learn led trainees to seek negative feedback. Some trainees sought negative feedback outside WBA. Negative feedback was more important for changing practice than positive feedback, which enabled trainees to 'look good' but had less of an effect on changing clinical practice. The timing of feedback relative to WBA was also important, with immediate feedback being more beneficial for learning; however, delayed feedback was still sought using WBA. DISCUSSION Trainees' perceptions of the purpose of WBA and their relationship with their trainer informed when they chose to seek feedback. Trainees who perceived WBA as a test were led to 'play the game' by seeking positive and avoiding negative feedback. Outside of WBA, trainees sought negative feedback, which was most important for changing practice.
Affiliation(s)
- Anne Gaunt
- Education Development, Warwick Medical School, University of Warwick, Coventry, UK
- Department of General Surgery, University Hospital North Midlands, Stoke on Trent, UK
- Abhilasha Patel
- Department of General Surgery, University Hospital North Midlands, Stoke on Trent, UK
- Victoria Rusius
- Department of General Surgery, Royal Blackburn Hospital, Blackburn, UK
- T James Royle
- Department of Colorectal Surgery, Sunderland Royal Hospital, Sunderland, UK
- Deborah H Markham
- Department of General Surgery, South Warwickshire Foundation Trust, Warwick, UK
67
Embo M, Helsloot K, Michels N, Valcke M. A Delphi study to validate competency-based criteria to assess undergraduate midwifery students' competencies in the maternity ward. Midwifery 2017; 53:1-8. [PMID: 28708987] [DOI: 10.1016/j.midw.2017.07.005]
Abstract
BACKGROUND workplace learning plays a crucial role in midwifery education. Twelve midwifery schools in Flanders (Belgium) aimed to implement a standardised and evidence-based method to learn and assess competencies in practice. This study focuses on the validation of competency-based criteria to guide and assess undergraduate midwifery students' postnatal care competencies in the maternity ward. METHOD an online Delphi study was carried out. During three consecutive sessions, experts from workplaces and schools were invited to score the assessment criteria for relevance and feasibility, and to comment on their content and formulation. A descriptive quantitative analysis and a qualitative thematic content analysis of the comments were carried out. A Mann-Whitney U-test was used to investigate differences between expert groups. FINDINGS eleven competencies and fifty-six assessment criteria were found appropriate for assessing midwifery students' competencies in the maternity ward. Overall median scores were high and consensus was obtained for all criteria, except for one during the first round. Although all initial assessment criteria (N=89) were scored as relevant, some of them appeared not feasible in practice. Little difference was found between the expert groups. Comments mainly included remarks about concreteness and measurability. CONCLUSION this study resulted in validated criteria to assess postnatal care competencies in the maternity ward.
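A minimal sketch of the between-group comparison follows, assuming hypothetical relevance ratings from the workplace and school expert panels; scipy's Mann-Whitney U test is used as a generic implementation of the test named in the abstract.

```python
# Sketch: Mann-Whitney U test comparing relevance ratings between the two
# expert groups (workplace vs. school). Ratings are invented.
from scipy.stats import mannwhitneyu

workplace = [4, 5, 4, 4, 5, 3, 4, 5]   # hypothetical scores, workplace experts
school    = [5, 4, 4, 5, 5, 4, 4, 4]   # hypothetical scores, school experts

u_stat, p_value = mannwhitneyu(workplace, school, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```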
Affiliation(s)
- M Embo
- Midwifery Department, University College Arteveldehogeschool Ghent, Voetweg 66, 9000 Ghent, Belgium; Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, 9000 Ghent, Belgium
- K Helsloot
- Midwifery Department, University College Arteveldehogeschool Ghent, Voetweg 66, 9000 Ghent, Belgium
- N Michels
- Skills lab and Centre for General Practice, Faculty of Medicine and Health Sciences Antwerp, Universiteitslaan 1, 2610 Antwerp, Belgium
- M Valcke
- Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, 9000 Ghent, Belgium
68
Warm EJ, Englander R, Pereira A, Barach P. Improving Learner Handovers in Medical Education. Academic Medicine 2017; 92:927-931. [PMID: 27805952] [DOI: 10.1097/acm.0000000000001457]
Abstract
Multiple studies have demonstrated that the information included in the Medical Student Performance Evaluation fails to reliably predict medical students' future performance. This faulty transfer of information can lead to harm when poorly prepared students fail out of residency or, worse, are shuttled through the medical education system without an honest accounting of their performance. Such poor learner handovers likely arise from two root causes: (1) the absence of agreed-on outcomes of training and/or accepted assessments of those outcomes, and (2) the lack of standardized ways to communicate the results of those assessments. To improve the current learner handover situation, an authentic, shared mental model of competency is needed; high-quality tools to assess that competency must be developed and tested; and transparent, reliable, and safe ways to communicate this information must be created. To achieve these goals, the authors propose using a learner handover process modeled after a patient handover process. The CLASS model includes a description of the learner's Competency attainment, a summary of the Learner's performance, an Action list and statement of Situational awareness, and Synthesis by the receiving program. This model also includes coaching oriented towards improvement along the continuum of education and care. Just as studies have evaluated patient handover models using metrics that matter most to patients, studies must evaluate this learner handover model using metrics that matter most to providers, patients, and learners.
Affiliation(s)
- Eric J Warm
- E.J. Warm is the Sue P. and Richard W. Vilter Professor of Medicine and categorical medicine residency program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. R. Englander is associate dean for undergraduate medical education, University of Minnesota Medical School, Minneapolis, Minnesota. A. Pereira is associate professor and assistant dean for clinical education, University of Minnesota Medical School, Minneapolis, Minnesota. P. Barach is clinical professor, Department of Pediatrics, Wayne State University School of Medicine, Detroit, Michigan.
69
Harris P, Bhanji F, Topps M, Ross S, Lieberman S, Frank JR, Snell L, Sherbino J. Evolving concepts of assessment in a competency-based world. Medical Teacher 2017; 39:603-608. [PMID: 28598736] [DOI: 10.1080/0142159x.2017.1315071]
Abstract
Competency-based medical education (CBME) is an approach to the design of educational systems or curricula that focuses on graduate abilities or competencies. It has been adopted in many jurisdictions, and in recent years an explosion of publications has examined its implementation and provided a critique of the approach. Assessment in a CBME context is often based on observations or judgments about an individual's level of expertise; it emphasizes frequent, direct observation of performance along with constructive and timely feedback to ensure that learners, including clinicians, have the expertise they need to perform entrusted tasks. This paper explores recent developments since the publication in 2010 of Holmboe and colleagues' description of CBME assessment. Seven themes regarding assessment that arose at the second invitational summit on CBME, held in 2013, are described: competency frameworks, the reconceptualization of validity, qualitative methods, milestones, feedback, assessment processes, and assessment across the medical education continuum. Medical educators interested in CBME, or assessment more generally, should consider the implications for their practice of the review of these emerging concepts.
Affiliation(s)
- Peter Harris
- Office of Medical Education, University of New South Wales, Sydney, Australia
- Farhan Bhanji
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Centre for Medical Education and Department of General Internal Medicine, McGill University, Montreal, Quebec, Canada
- Maureen Topps
- Cumming School of Medicine, University of Calgary, Calgary, Canada
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, Canada
- Steven Lieberman
- Office of the Dean of Medicine, University of Texas Medical Branch, Galveston, TX, USA
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
- Linda Snell
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Centre for Medical Education and Department of General Internal Medicine, McGill University, Montreal, Quebec, Canada
- Jonathan Sherbino
- Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, Canada
70
Kinnear B, Bensman R, Held J, O'Toole J, Schauer D, Warm E. Critical Deficiency Ratings in Milestone Assessment: A Review and Case Study. Academic Medicine 2017; 92:820-826. [PMID: 28557948] [DOI: 10.1097/acm.0000000000001383]
Abstract
PURPOSE The Accreditation Council for Graduate Medical Education (ACGME) requires programs to report learner progress using specialty-specific milestones. It is unclear how milestones can best identify critical deficiencies (CDs) in trainee performance. Specialties developed milestones independently of one another, and not every specialty included CDs within milestone ratings. This study examined the proportion of ACGME milestone sets that include CD ratings and described one residency program's experience using CD ratings in assessment. METHOD The authors reviewed ACGME milestones for all 99 specialties in November 2015, determining which rating scales contained CDs. The authors also reviewed three years of data (July 2012-June 2015) from the University of Cincinnati Medical Center (UCMC) internal medicine residency assessment system, which is based on observable practice activities mapped to ACGME milestones. Data were analyzed by postgraduate year, assessor type, rotation, academic year, and core competency. The Mantel-Haenszel chi-square test was used to test for changes over time. RESULTS Specialties demonstrated heterogeneity in accounting for CDs in ACGME milestones, with 22% (22/99) of specialties having no language describing CDs in milestone assessment. Thirty-three percent (63/189) of UCMC internal medicine residents received at least one CD rating, with CDs accounting for 0.18% (668/364,728) of all assessment ratings. The authors identified CDs across multiple core competencies and rotations. CONCLUSIONS Despite some specialties not accounting for CDs in milestone assessment, UCMC's experience demonstrates that a significant proportion of residents may be rated as having a CD during training. Identification of CDs may allow programs to develop remediation and improvement plans.
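The trend test named in the abstract can be illustrated with the (N - 1)r² identity for the Mantel-Haenszel chi-square for a linear trend in proportions. The year-by-year split below is invented (only the totals, 668 CD ratings out of 364,728, match the abstract), so the output is purely illustrative.

```python
# Sketch: Mantel-Haenszel chi-square for a linear trend in the proportion
# of CD ratings over academic years, via the identity chi2 = (N - 1) * r^2.
# The year-by-year split is invented; only the totals match the abstract.
import numpy as np
from scipy.stats import chi2

years = np.array([1.0, 2.0, 3.0])                     # ordered year scores
cd    = np.array([260.0, 220.0, 188.0])               # CD ratings per year
total = np.array([120000.0, 121000.0, 123728.0])      # all ratings per year

n  = total.sum()
mx = (total * years).sum() / n          # mean year score over all ratings
my = cd.sum() / n                       # overall CD proportion

cov  = (cd * years).sum() / n - mx * my               # y is 0/1 per rating
varx = (total * (years - mx) ** 2).sum() / n
vary = my * (1 - my)
r = cov / np.sqrt(varx * vary)

chi2_mh = (n - 1) * r ** 2
print(f"chi2_MH = {chi2_mh:.2f}, p = {chi2.sf(chi2_mh, df=1):.4f}")
```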
Affiliation(s)
- Benjamin Kinnear
- B. Kinnear is assistant professor and residency assistant program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, and Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. R. Bensman is clinical fellow, Department of Pediatrics, Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. J. Held is assistant professor and residency associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. J. O'Toole is associate professor and residency associate program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, and Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio. D. Schauer is associate professor and residency associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. E. Warm is Richard W. Vilter Professor of Medicine and residency program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
71
Gowland EH, Birns J, Bryant C, Ball KL. Trials and tribulations of the annual review of competence progression - lessons learned from core medical training in London. Future Healthc J 2017; 4:92-98. [PMID: 31098442] [PMCID: PMC6502634] [DOI: 10.7861/futurehosp.4-2-92]
Abstract
The annual review of competence progression (ARCP) was introduced as a way of keeping records and reviewing satisfactory progress through a medical curriculum for doctors in training. It provides public assurance that doctors are trained to a satisfactory standard and are fit for purpose. A routine external review of the core medical training (CMT) ARCPs in London revealed documentation of satisfactory progression of trainees to the next level of training without the evidence to support their completion of the curriculum. An internal review was subsequently conducted and a series of process interventions implemented to improve the quality and standardisation of the ARCPs. This paper reviews these interventions, discusses the lessons learned from the internal review and highlights issues applicable to any ARCP process.
72
Dawson LJ, Mason BG, Bissell V, Youngson C. Calling for a re-evaluation of the data required to credibly demonstrate a dental student is safe and ready to practice. European Journal of Dental Education 2017; 21:130-135. [PMID: 27027651] [PMCID: PMC5396269] [DOI: 10.1111/eje.12191]
Affiliation(s)
- L. J. Dawson
- University of Liverpool School of Dentistry, Liverpool, UK
- B. G. Mason
- University of Liverpool School of Dentistry, Liverpool, UK
- V. Bissell
- University of Glasgow School of Dentistry, Glasgow, UK
- C. Youngson
- University of Liverpool School of Dentistry, Liverpool, UK
73
Roberts C, Jorm C, Gentilcore S, Crossley J. Peer assessment of professional behaviours in problem-based learning groups. Medical Education 2017; 51:390-400. [PMID: 28078685] [DOI: 10.1111/medu.13151]
Abstract
CONTEXT Peer assessment of professional behaviour within problem-based learning (PBL) groups can support learning and provide opportunities to identify and remediate problem behaviours. OBJECTIVES We investigated whether a peer assessment of learning behaviours in PBL is sufficiently valid to support decision making about student professional behaviours. METHODS Data were available for two cohorts of students, in which each student was rated by all of their PBL group peers using a modified version of a previously validated scale. Following the provision of feedback to the students, their behaviours were again peer-assessed. A generalisability study was undertaken to estimate the students' professional behaviour scores, the sources of error that affected the reliability of the assessment, changes in student rating behaviour, and changes in mean scores after the delivery of feedback. RESULTS Peer assessment of professional learning behaviour was highly reliable for within-group comparisons (G = 0.81-0.87), but poor for across-group comparisons (G = 0.47-0.53). Feedback increased the range of ratings given by assessors and brought their mean ratings into closer alignment. More of the increased variance was attributable to assessee performance than to assessor stringency, and hence there was a slight improvement in reliability, especially for comparisons across groups. Mean professional behaviour scores were unchanged. CONCLUSIONS Peer assessment of professional learning behaviours may be unreliable for decision making outside a PBL group. Faculty members should not draw conclusions from peer assessment about a student's behaviour compared with that of their peers in the cohort, and such a tool may not be appropriate for summative assessment. Health professional educators interested in assessing student professional behaviours in PBL groups might focus on opportunities for the provision of formative peer feedback and its impact on learning.
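Why within-group comparisons are reliable while across-group comparisons are not falls out of the variance structure: group-level effects cancel inside a group but become error across groups. The sketch below uses invented variance components chosen only to land near the reported G ranges; the actual components would come from the study's generalisability analysis.

```python
# Sketch: reliability (G) of peer ratings for comparing students within one
# PBL group vs. across groups. Variance components are invented; in the
# study they would be estimated in a generalisability analysis.

def g_within(var_student, var_residual, n_raters):
    # Group effects are shared by everyone in the group, so they cancel
    # when students are compared only with their own group's peers.
    return var_student / (var_student + var_residual / n_raters)

def g_across(var_student, var_group, var_residual, n_raters):
    # Across groups, group-level differences (e.g., rater stringency)
    # no longer cancel and add to the error term.
    return var_student / (var_student + var_group + var_residual / n_raters)

var_s, var_g, var_res, n = 0.40, 0.30, 0.50, 8   # hypothetical components
print(f"within-group G = {g_within(var_s, var_res, n):.2f}")          # ~0.86
print(f"across-group G = {g_across(var_s, var_g, var_res, n):.2f}")   # ~0.52
```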
Affiliation(s)
- Chris Roberts
- Sydney Medical School - Northern, University of Sydney, Sydney, Australia
- Christine Jorm
- Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Stacey Gentilcore
- Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Jim Crossley
- The Medical School, University of Sheffield, Sheffield, UK
74
Wilbur K, Hassaballa N, Mahmood OS, Black EK. Describing student performance: a comparison among clinical preceptors across cultural contexts. MEDICAL EDUCATION 2017; 51:411-422. [PMID: 28220518 DOI: 10.1111/medu.13223] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2016] [Revised: 02/26/2016] [Accepted: 09/09/2016] [Indexed: 06/06/2023]
Abstract
CONTEXT Health professional student evaluation during experiential training is notably subjective, and assessor judgements may be affected by socio-cultural influences. OBJECTIVES This study sought to explore how clinical preceptors in pharmacy conceptualise varying levels of student performance and to identify any contextual differences that may exist across different countries. METHODS The qualitative research design employed semi-structured interviews. A sample of 20 clinical preceptors for post-baccalaureate Doctor of Pharmacy programmes in Canada and the Middle East gave personal accounts of how students they had supervised fell below, met or exceeded their expectations. Discussions were analysed following constructivist grounded theory principles. RESULTS Seven major themes encompassing how clinical pharmacy preceptors categorise levels of student performance and behaviour were identified: knowledge, team interaction, motivation, skills, patient care, communication, and professionalism. Expectations were outlined using both positive and negative descriptions. Pharmacists typically described supervisory experiences representing a series of these categories, but arrived at concluding judgements in a holistic fashion: if the valued traits of motivation and a positive attitude were present, overall favourable impressions of a student could be maintained despite observations of a few deficiencies. Some prioritised dimensions could not be mapped to defined existing educational outcomes. There was no difference between participants in the two regions in the thresholds used to distinguish student performance. CONCLUSIONS The present research findings are congruent with current literature on the constructs used by clinical supervisors in health professional student workplace-based assessment and provide additional insight into cross-national perspectives in pharmacy. As previously determined in social work and medicine, further study of how evaluation instruments and associated processes can integrate these judgements should be pursued in this discipline.
Affiliation(s)
- Kerry Wilbur
- College of Pharmacy, Qatar University, Doha, Qatar
- Emily K Black
- College of Pharmacy, Dalhousie University, Halifax, Nova Scotia, Canada
75
Patel M, Agius S. Cross-cultural comparisons of assessment of clinical performance. MEDICAL EDUCATION 2017; 51:348-350. [PMID: 28299843 DOI: 10.1111/medu.13262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
76
Tweed M, Purdie G, Wilkinson T. Low performing students have insightfulness when they reflect-in-action. MEDICAL EDUCATION 2017; 51:316-323. [PMID: 28084033 DOI: 10.1111/medu.13206] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2016] [Revised: 04/04/2016] [Accepted: 08/15/2016] [Indexed: 06/06/2023]
Abstract
CONTEXT Measuring the appropriateness of certainty in responses to a progress test, using descriptors authentic to practice as reflection-in-action, builds on existing theories of self-monitoring. Clinicians making decisions require the ability to accurately self-monitor, including their certainty of being correct. Inappropriate certainty could lead to medical error. Self-assessment and certainty of assessment performance have been measured in a variety of ways. Previous work has shown that those with less experience are less accurate in self-assessment, but such studies examined self-assessment using methods less authentic to clinical practice. This study investigates how correctness varies with certainty, allowing for experience and performance. METHODS Students in Years 2-5 rated the certainty of their responses to two iterations of a progress test during one calendar year. Analyses compared correctness against certainty of response, test number, student year cohort and performance level, defined by criterion scores. RESULTS The odds of a correct response increased with student certainty for all subsets, allowing for year group and ability, including subsets of students with less experience and subsets in lower-performance groups. CONCLUSION Unlike previous work showing poorer accuracy of self-assessment for those with less experience or ability, we found similar increases in correctness with increasing certainty even in the less experienced and lower-performing groups. We postulate that this relates to certainty descriptors being worded in a way that is authentic to clinical practice and, in turn, related to reflection-in-action.
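The core relationship reported here, odds of a correct response rising with certainty, is simple arithmetic on a correctness-by-certainty table. A toy illustration with invented counts:

```python
# Toy correctness-by-certainty table; all counts are invented.
correct = {"low": 55, "moderate": 140, "high": 230}
incorrect = {"low": 45, "moderate": 60, "high": 20}

for level in ("low", "moderate", "high"):
    odds = correct[level] / incorrect[level]
    print(f"{level:>8} certainty: odds of a correct response = {odds:.2f}")
# low 1.22, moderate 2.33, high 11.50 -- odds rise monotonically with certainty
```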
Affiliation(s)
- Mike Tweed
- School of Medicine and Health Sciences, University of Otago, Wellington, New Zealand
- Gordon Purdie
- School of Medicine and Health Sciences, University of Otago, Wellington, New Zealand
- Tim Wilkinson
- School of Medicine and Health Sciences, University of Otago, Christchurch, New Zealand
77
Weller JM, Castanelli DJ, Chen Y, Jolly B. Making robust assessments of specialist trainees' workplace performance. Br J Anaesth 2017; 118:207-214. [PMID: 28100524 DOI: 10.1093/bja/aew412] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/15/2016] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND Workplace-based assessments should provide a reliable measure of trainee performance, but have met with mixed success. We proposed that using an entrustability scale, where supervisors score trainees on the level of supervision required for the case, would improve the utility of compulsory mini-clinical evaluation exercise (mini-CEX) assessments in a large anaesthesia training program. METHODS We analysed mini-CEX scores from all Australian and New Zealand College of Anaesthetists trainees submitted to an online database over a 12-month period. Supervisors' scores were adjusted for the expected supervision requirement for the case for trainees at different stages of training. We used generalisability theory to determine score reliability. RESULTS In all, 7808 assessments were available for analysis. Supervision requirements decreased significantly (P < 0.05) with increased duration and level of training, supporting validity. We found moderate reliability (G > 0.7) with a feasible number of assessments. Adjusting scores against the expected supervision requirement considerably improved reliability, with G > 0.8 achieved with only nine assessments. Three per cent of trainees generated average mini-CEX scores below the expected standard. CONCLUSIONS Using an entrustment scoring system, where supervisors score trainees on the level of supervision required, mini-CEX scores demonstrated moderate reliability within a feasible number of assessments, and evidence of validity. When scores were adjusted against an expected standard, underperforming trainees could be identified and reliability was much improved. Taken together with other evidence of trainee ability, the mini-CEX is of sufficient reliability for inclusion in high-stakes decisions on trainee progression towards independent specialist practice.
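The adjustment described amounts to scoring each trainee relative to the supervision level expected at their stage of training. A minimal sketch, with placeholder stage labels and expected levels rather than the college's actual standards:

```python
# Placeholder expected supervision levels by training stage (1 = close
# supervision required, 5 = no supervision required). These are invented
# values, not the ANZCA standard.
EXPECTED_LEVEL = {"basic": 2.0, "intermediate": 3.0, "advanced": 4.0}

def adjusted_minicex(observed_level: float, stage: str) -> float:
    """Score relative to expectation: positive means the trainee needed
    less supervision than expected for their stage."""
    return observed_level - EXPECTED_LEVEL[stage]

print(adjusted_minicex(3.5, "intermediate"))  # +0.5, ahead of expectation
print(adjusted_minicex(3.5, "advanced"))      # -0.5, behind expectation
```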
Affiliation(s)
- J M Weller
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, New Zealand; Department of Anaesthesia, Auckland City Hospital, New Zealand
- D J Castanelli
- Department of Anaesthesia and Perioperative Medicine, Monash Health, Victoria, Australia; Department of Anaesthesia and Perioperative Medicine, Monash University, Clayton, Victoria, Australia
- Y Chen
- Centre for Medical and Health Sciences Education, School of Medicine, University of Auckland, New Zealand
- B Jolly
- Medical Education Unit, School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, New South Wales, Australia
78
van der Meulen MW, Boerebach BCM, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Arah OA, Lombarts KMJMH. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2017; 37:9-18. [PMID: 28212117 DOI: 10.1097/ceh.0000000000000143] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
INTRODUCTION Multisource feedback (MSF) instruments must feasibly provide reliable and valid data on physicians' performance from multiple perspectives. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. The psychometric qualities and feasibility of the INCEPT were investigated using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α, and generalizability analyses. RESULTS For all respondent groups, three factors were identified, although constructed slightly differently: "professional attitude," "patient-centeredness," and "organization and (self)-management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents and three to four coworkers were sufficient. DISCUSSION The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
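For readers unfamiliar with the internal-consistency statistic reported here, a minimal sketch of Cronbach's α computed on a synthetic respondents-by-items matrix; the data, item count, and noise level are invented:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of ratings."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(0)
signal = rng.normal(size=(200, 1))                  # shared performance signal
ratings = signal + 0.6 * rng.normal(size=(200, 8))  # eight correlated items
print(round(cronbach_alpha(ratings), 2))            # ~0.95 for this toy data
```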
Affiliation(s)
- Mirja W van der Meulen
- Ms. van der Meulen: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
- Dr. Boerebach: Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
- Dr. Smirnova: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
- Dr. Heeneman: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands.
- Dr. oude Egbrink: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands.
- Dr. van der Vleuten: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands.
- Dr. Arah: Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA, and UCLA Center for Health Policy Research, Los Angeles, CA.
- Dr. Lombarts: Professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
79
Mellinger JD, Williams RG, Sanfey H, Fryer JP, DaRosa D, George BC, Bohnen JD, Schuller MC, Sandhu G, Minter RM, Gardner AK, Scott DJ. Teaching and assessing operative skills: From theory to practice. Curr Probl Surg 2016; 54:44-81. [PMID: 28212782 DOI: 10.1067/j.cpsurg.2016.11.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2016] [Accepted: 11/22/2016] [Indexed: 11/22/2022]
Affiliation(s)
- John D Mellinger
- Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL
- Reed G Williams
- Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL; Department of Surgery, Indiana University School of Medicine, Indianapolis, IN
- Hilary Sanfey
- Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL; American College of Surgeons, Chicago, IL
- Jonathan P Fryer
- Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Debra DaRosa
- Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Brian C George
- Department of Surgery, University of Michigan, Ann Arbor, MI
- Jordan D Bohnen
- Department of General Surgery, Massachusetts General Hospital and Harvard University, Boston, MA
- Mary C Schuller
- Department of Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL
- Gurjit Sandhu
- Department of Surgery, University of Michigan, Ann Arbor, MI; Department of Learning Health Sciences, University of Michigan, Ann Arbor, MI
- Rebecca M Minter
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX
- Aimee K Gardner
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX; UT Southwestern Simulation Center, University of Texas Southwestern Medical Center, Dallas, TX
- Daniel J Scott
- Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX; UT Southwestern Simulation Center, University of Texas Southwestern Medical Center, Dallas, TX
80
Williams RG, Kim MJ, Dunnington GL. Practice Guidelines for Operative Performance Assessments. Ann Surg 2016; 264:934-948. [DOI: 10.1097/sla.0000000000001685] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
81
Bacon R, Holmes K, Palermo C. Exploring subjectivity in competency-based assessment judgements of assessors. Nutr Diet 2016; 74:357-364. [DOI: 10.1111/1747-0080.12326] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2016] [Revised: 08/08/2016] [Accepted: 08/19/2016] [Indexed: 11/29/2022]
Affiliation(s)
- Rachel Bacon
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Canberra, Australian Capital Territory, Australia
- Kay Holmes
- Discipline of Nutrition and Dietetics, Faculty of Health, University of Canberra, Canberra, Australian Capital Territory, Australia
- Claire Palermo
- Department of Nutrition and Dietetics, Monash University, Melbourne, Victoria, Australia
82
Hughes J, Wilson WJ, MacBean N, Hill AE. A tool for assessing case history and feedback skills in audiology students working with simulated patients. Int J Audiol 2016; 55:765-774. [PMID: 27696974 DOI: 10.1080/14992027.2016.1214758] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
OBJECTIVE To develop a tool for assessing audiology students taking a case history and giving feedback with simulated patients (SPs). DESIGN Single-observation, single-group design. STUDY SAMPLE Twenty-four first-year audiology students, five simulated patients, two clinical educators, and three evaluators. RESULTS The Audiology Simulated Patient Interview Rating Scale (ASPIRS) was developed, consisting of six items assessing specific clinical skills, non-verbal communication, verbal communication, interpersonal skills, interviewing skills, and professional practice skills. These items are applied once for taking a case history and again for giving feedback. The ASPIRS showed very high internal consistency (α = 0.91-0.97; mean inter-item r = 0.64-0.85) and fair-to-moderate agreement between evaluators (29.2-54.2% exact and 79.2-100% near agreement; weighted κ up to 0.60). It also showed fair-to-moderate absolute agreement amongst evaluators for single-evaluator scores (intraclass correlation coefficient [ICC] r = 0.35-0.59) and substantial consistency of agreement amongst evaluators for three-evaluator averaged scores (ICC r = 0.62-0.81). Factor analysis showed the ASPIRS' 12 items fell into two components, one containing all feedback items and one containing all case history items. CONCLUSION The ASPIRS shows promise as the first published tool for assessing audiology students taking a case history and giving feedback with an SP.
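The weighted κ reported for inter-evaluator agreement penalizes disagreements by their distance on the ordinal scale. A self-contained sketch with invented ratings, not the study's data:

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories, quadratic=True):
    """Weighted kappa for two raters on an ordinal 0..n_categories-1 scale."""
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    # Chance-expected agreement from the two raters' marginal distributions.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_categories, n_categories))
    w = (i - j) ** 2 if quadratic else np.abs(i - j)
    w = w / w.max()  # normalized disagreement weights
    return 1 - (w * observed).sum() / (w * expected).sum()

# Two evaluators scoring five students on a 4-point scale (invented data).
print(round(weighted_kappa([0, 1, 2, 3, 3], [0, 1, 2, 2, 3], 4), 2))  # ~0.92
```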
Affiliation(s)
- Jane Hughes
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- Wayne J Wilson
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- Naomi MacBean
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
- Anne E Hill
- School of Health and Rehabilitation Sciences, The University of Queensland, Australia
83
Warm EJ, Held JD, Hellmann M, Kelleher M, Kinnear B, Lee C, O'Toole JK, Mathis B, Mueller C, Sall D, Tolentino J, Schauer DP. Entrusting Observable Practice Activities and Milestones Over the 36 Months of an Internal Medicine Residency. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:1398-1405. [PMID: 27355780 DOI: 10.1097/acm.0000000000001292] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
PURPOSE Competency-based medical education and milestone reporting have led to increased interest in work-based assessments using entrustment over time as an assessment framework. Little is known about data collected from these assessments during residency. This study describes the results of entrustment of discrete work-based skills over 36 months in the University of Cincinnati internal medicine (IM) residency program. METHOD Attending physician and peer/allied health assessors provided entrustment ratings of resident performance on work-based observable practice activities (OPAs) mapped to Accreditation Council for Graduate Medical Education/American Board of Internal Medicine reporting milestones for IM. These data were translated into milestones data and tracked longitudinally. The authors analyzed data from this new entrustment system's first 36 months (July 2012-June 2015). RESULTS During the 36-month period, assessors made 364,728 milestone assessments (mapped from OPAs) of 189 residents. Residents received an annualized average of 83 assessment encounters, producing means of 3,987 milestone assessments and 4,325 words of narrative assessment. Mean entrustment ratings (range 1-5) from all assessors for all milestones rose from 2.46 for first-month residents to 3.92 for 36th-month residents (r = 0.9252, P < .001). Attending physicians' entrustment ratings were lower than peer/allied health assessors' ratings. Medical knowledge and patient care milestones were rated lower than professionalism and interpersonal and communication skills milestones. CONCLUSIONS Entrustment of milestones appears to rise progressively over time, with differences by assessor type, competency, milestone, and resident. Further research is needed to elucidate the validity of these data in promotion, remediation, and reporting decisions.
Affiliation(s)
- Eric J Warm
- E.J. Warm is Sue P. and Richard W. Vilter Professor of Medicine and categorical medicine program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- J.D. Held is assistant professor and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- M. Hellmann is a fellow in pulmonary/critical care, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- M. Kelleher is assistant professor and fellow in medical education, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- B. Kinnear is assistant professor and assistant program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- C. Lee is assistant professor and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- J.K. O'Toole is assistant professor and associate program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- B. Mathis is associate professor, associate chair for clinical affairs, and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- C. Mueller is professor and program director, Medicine-Pediatrics, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- D. Sall is assistant professor and fellow in medical education, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
- J. Tolentino is visiting associate professor and program director, Medicine-Pediatrics, Stony Brook University School of Medicine, Stony Brook, New York.
- D.P. Schauer is associate professor and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.
84
Williams RG, Mellinger JD, Dunnington GL. A problem-oriented approach to resident performance ratings. Surgery 2016; 160:936-945. [PMID: 27460933 DOI: 10.1016/j.surg.2016.04.040] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2016] [Revised: 02/20/2016] [Accepted: 04/13/2016] [Indexed: 10/21/2022]
Abstract
BACKGROUND Global, end-of-rotation evaluations are often difficult to interpret because of their high level of abstraction (eg, excellent, good, poor) and the bias toward high ratings. This study documents the utility and measurement characteristics of serious problem items, an alternative item format. METHODS This report is based on 4,234 faculty performance ratings for 105 general surgery residents. Faculty members reported whether each resident had a serious problem in each of 8 areas of clinical performance and 6 areas of professional behavior. RESULTS A total of 263 serious problems were reported. The performance category with the most serious problems noted was knowledge; the fewest were noted for relations with patients and family members. Seven residents accounted for 86.9% of all serious problem reports, and each of these residents had serious problems in multiple performance areas. Problems were reported most frequently in knowledge, management, technical/procedural skills, ability to assume responsibility within level of competence, and problem identification. Citations of these serious problems were most common in year 3. Traditional global performance ratings were not an adequate means of identifying residents with serious performance problems. CONCLUSION Serious problem ratings can communicate faculty concerns about residents more directly and can be used as a complement to conventional global rating scales without substantially increasing faculty workload.
Affiliation(s)
- Reed G Williams
- Department of Surgery, Indiana University School of Medicine, Indianapolis, IN
- John D Mellinger
- Department of Surgery, Southern Illinois University School of Medicine, Springfield, IL
- Gary L Dunnington
- Department of Surgery, Indiana University School of Medicine, Indianapolis, IN
85
Rekman J, Hamstra SJ, Dudek N, Wood T, Seabrook C, Gofton W. A New Instrument for Assessing Resident Competence in Surgical Clinic: The Ottawa Clinic Assessment Tool. JOURNAL OF SURGICAL EDUCATION 2016; 73:575-82. [PMID: 27052202 DOI: 10.1016/j.jsurg.2016.02.003] [Citation(s) in RCA: 65] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2016] [Accepted: 02/13/2016] [Indexed: 05/26/2023]
Abstract
BACKGROUND The shift toward competency-based medical education has created a demand for feasible workplace-based assessment tools. Perhaps more important than competence in assessing an individual patient is the ability to successfully manage a surgical clinic. Trainee performance in clinic is a critical component of learning to manage a surgical practice, yet no assessment tool currently exists to assess surgical residents' daily performance in outpatient clinics. The development of a competency-based assessment tool, the Ottawa Clinic Assessment Tool (OCAT), is described here to address this gap. STUDY DESIGN A consensus group of experts was gathered to generate dimensions of performance reflective of a competent "generalist" surgeon in clinic. A 6-month pilot study of the OCAT was conducted in orthopedics, general surgery, and obstetrics and gynecology, with quantitative and qualitative evidence of validity collected. Two subsequent feedback sessions and a survey for staff and residents evaluated the OCAT for clarity and utility. RESULTS The OCAT is a 9-item tool with a global assessment item and 2 short-answer questions. Across the divisions, 44 staff surgeons completed 132 OCAT assessments of 79 residents. Psychometric data were collected as evidence of validity. Analysis of feedback indicated that the entrustability rating scale was useful for surgeons and residents and that the items could be correlated with individual competencies. CONCLUSIONS Multiple sources of validity evidence collected in this study demonstrate that the OCAT can measure resident clinic competency in a valid and feasible manner.
Affiliation(s)
- Janelle Rekman
- Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Stanley J Hamstra
- Milestones Research and Evaluation at the Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Nancy Dudek
- Department of Medicine, The Ottawa Hospital Rehabilitation Center, The University of Ottawa, Ottawa, Ontario, Canada
- Timothy Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada
- Christine Seabrook
- Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
- Wade Gofton
- Department of Surgical Education, The University of Ottawa, Ottawa, Ontario, Canada
86
Choe JH, Knight CL, Stiling R, Corning K, Lock K, Steinberg KP. Shortening the Miles to the Milestones: Connecting EPA-Based Evaluations to ACGME Milestone Reports for Internal Medicine Residency Programs. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:943-50. [PMID: 27028030 DOI: 10.1097/acm.0000000000001161] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
The Next Accreditation System requires internal medicine training programs to provide the Accreditation Council for Graduate Medical Education (ACGME) with semiannual information about each resident's progress in 22 subcompetency domains. Evaluation of resident "trustworthiness" in performing entrustable professional activities (EPAs) may offer a more tangible assessment construct than evaluations based on expectations of usual progression toward competence. However, translating results from EPA-based evaluations into ACGME milestone progress reports has proven challenging because the constructs that underlie these two systems differ. The authors describe a process to bridge the gap between rotation-specific EPA-based evaluations and ACGME milestone reporting. Developed at the University of Washington in 2012 and 2013, this method involves mapping EPA-based evaluation responses to "milestone elements," the narrative descriptions within the columns of each of the 22 internal medicine subcompetencies. As faculty members complete EPA-based evaluations, the mapped milestone elements are automatically marked as "confirmed." Programs can maintain a database that tallies the number of times each milestone element is confirmed for a resident; these data can be used to produce graphical displays of resident progress along the internal medicine milestones. Using this count of milestone elements allows programs to bridge the gap between faculty assessments of residents based on rotation-specific observed activities and semiannual ACGME reports based on the internal medicine milestones. Although potentially useful for all programs, this method is especially beneficial to large programs, where clinical competency committee members may not have the opportunity to observe all residents directly.
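Mechanically, the bridge described is a many-to-many map from evaluation items to milestone elements plus a per-resident counter. A minimal sketch; the EPA names and milestone element codes below are invented, not the University of Washington's:

```python
from collections import Counter

# Invented mapping from EPA-based evaluation items to milestone elements.
EPA_TO_ELEMENTS = {
    "manages_ward_patients": ["PC1.3", "SBP2.1"],
    "leads_family_meeting": ["ICS1.2", "PROF3.1"],
}

def confirmed_elements(completed_epa_evaluations):
    """Tally how often each milestone element is confirmed for a resident."""
    tally = Counter()
    for epa in completed_epa_evaluations:
        tally.update(EPA_TO_ELEMENTS[epa])
    return tally

print(confirmed_elements(
    ["manages_ward_patients", "manages_ward_patients", "leads_family_meeting"]
))
# Counter({'PC1.3': 2, 'SBP2.1': 2, 'ICS1.2': 1, 'PROF3.1': 1})
```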
Affiliation(s)
- John H Choe
- J.H. Choe is assistant professor, Department of Medicine, and associate program director, Internal Medicine Residency Program, University of Washington School of Medicine, Seattle, Washington.
- C.L. Knight is associate professor, Department of Medicine, and associate program director, Internal Medicine Residency Program, University of Washington School of Medicine, Seattle, Washington.
- R. Stiling was program operations specialist, Internal Medicine Residency Program, University of Washington, Seattle, Washington, at the time this article was written.
- K. Corning is associate director, Internal Medicine Residency Program, University of Washington, Seattle, Washington.
- K. Lock is program operations specialist, Internal Medicine Residency Program, University of Washington, Seattle, Washington.
- K.P. Steinberg is professor, Department of Medicine, and program director, Internal Medicine Residency Program, University of Washington School of Medicine, Seattle, Washington.
87
Gauthier G, St-Onge C, Tavares W. Rater cognition: review and integration of research findings. MEDICAL EDUCATION 2016; 50:511-22. [PMID: 27072440 DOI: 10.1111/medu.12973] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2015] [Revised: 07/20/2015] [Accepted: 11/13/2015] [Indexed: 05/21/2023]
Abstract
BACKGROUND Given the complexity of competency frameworks, associated skills and abilities, and contexts in which they are to be assessed in competency-based education (CBE), there is an increased reliance on rater judgements when considering trainee performance. This increased dependence on rater-based assessment has led to the emergence of rater cognition as a field of research in health professions education. The topic, however, is often conceptualised and ultimately investigated using many different perspectives and theoretical frameworks. Critically analysing how researchers think about, study and discuss rater cognition or the judgement processes in assessment frameworks may provide meaningful and efficient directions in how the field continues to explore the topic. METHODS We conducted a critical and integrative review of the literature to explore common conceptualisations and unified terminology associated with rater cognition research. We identified 1045 articles on rater-based assessment in health professions education using Scopus, Medline and ERIC, and 78 articles were included in our review. RESULTS We propose a three-phase framework of observation, processing and integration. We situate nine specific mechanisms and sub-mechanisms described across the literature within these phases: (i) generating automatic impressions about the person; (ii) formulating high-level inferences; (iii) focusing on different dimensions of competencies; (iv) categorising through well-developed schemata based on (a) personal concept of competence, (b) comparison with various exemplars and (c) task and context specificity; (v) weighting and synthesising information differently; (vi) producing narrative judgements; and (vii) translating narrative judgements into scales. CONCLUSION Our review has allowed us to identify common underlying conceptualisations of observed rater mechanisms and subsequently to propose a comprehensive, although complex, framework for the dynamic and contextual nature of the rating process. This framework could help bridge the gap between researchers adopting different perspectives when studying rater cognition and enable the interpretation of contradictory findings of raters' performance by determining which mechanism is enabled or disabled in any given context.
Affiliation(s)
- Christina St-Onge
- Médecine interne, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Walter Tavares
- Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Centennial College, School of Community and Health Studies, Toronto, Ontario, Canada
- ORNGE Transport Medicine, Faculty of Medicine, Mississauga, Ontario, Canada
88
Carraccio C, Englander R, Van Melle E, Ten Cate O, Lockyer J, Chan MK, Frank JR, Snell LS. Advancing Competency-Based Medical Education: A Charter for Clinician-Educators. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:645-9. [PMID: 26675189 DOI: 10.1097/acm.0000000000001048] [Citation(s) in RCA: 192] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
The International Competency-Based Medical Education (ICBME) Collaborators have been working since 2009 to promote understanding of competency-based medical education (CBME) and accelerate its uptake worldwide. This article presents a charter, supported by a literature-based rationale, which is meant to provide a shared mental model of CBME that will serve as a path forward in its widespread implementation. At a 2013 summit, the ICBME Collaborators laid the groundwork for this charter. Here, the fundamental principles of CBME and the professional responsibilities of medical educators in its implementation process are described. The authors outline three fundamental principles: (1) Medical education must be based on the health needs of the populations served; (2) the primary focus of education and training should be the desired outcomes for learners rather than the structure and process of the educational system; and (3) the formation of a physician should be seamless across the continuum of education, training, and practice. Building on these principles, medical educators must demonstrate commitment to teaching, assessing, and role modeling the range of identified competencies. In the clinical setting, they must provide supervision that balances patient safety with the professional development of learners, being transparent with stakeholders about the level of supervision needed. They must use effective and efficient assessment strategies and tools for basing transition decisions on competence rather than time in training, empowering learners to be active participants in their learning and assessment. Finally, advancing CBME requires program evaluation and research, faculty development, and a collaborative approach to realize its full potential.
Affiliation(s)
- Carol Carraccio
- C. Carraccio is vice president, Competency-Based Assessment, American Board of Pediatrics, Chapel Hill, North Carolina.
- R. Englander was senior director of competency-based learning and assessment, Association of American Medical Colleges, Washington, DC, at the time this was written.
- E. Van Melle is education researcher, Queen's University, Kingston, Ontario, Canada, and education scientist, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada.
- O. ten Cate is professor of medical education and director, Center for Research and Development of Education, University Medical Center, Utrecht, the Netherlands.
- J. Lockyer is senior associate dean-education and professor, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada.
- M.-K. Chan is associate professor, Department of Pediatrics and Child Health, University of Manitoba, Winnipeg, Manitoba, Canada, and clinician educator, CanMEDS & Faculty Development, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada.
- J.R. Frank is director, Specialty Education, Strategy, and Standards, Office of Specialty Education, Royal College of Physicians and Surgeons of Canada, and director of educational research and development, Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada.
- L.S. Snell is professor of medicine, Centre for Medical Education, McGill University, Montreal, Quebec, Canada, and senior clinician educator, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada.
89
Roberts LJ, Jones O. Assessing anaesthesia trainees at work: opportunities and challenges. Anaesth Intensive Care 2016; 44:194-7. [PMID: 27029670 DOI: 10.1177/0310057x1604400204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Affiliation(s)
- L J Roberts
- Departments of Anaesthesia and Pain Management, Sir Charles Gairdner Hospital, Nedlands, Western Australia; Education Unit, Australian and New Zealand College of Anaesthetists, Melbourne, Victoria
90
The Correlation of Workplace Simulation-Based Assessments With Interns’ Infant Lumbar Puncture Success. Simul Healthc 2016; 11:126-33. [DOI: 10.1097/sih.0000000000000135] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
91
Leep Hunderfund AN, Rubin DI, Laughlin RS, Sorenson EJ, Watson JC, Jones LK, Juul D, Park YS. Validity and feasibility of the EMG direct observation tool (EMG-DOT). Neurology 2016; 86:1627-34. [DOI: 10.1212/wnl.0000000000002609] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2015] [Accepted: 01/13/2016] [Indexed: 11/15/2022] Open
92
Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability Scales: Outlining Their Usefulness for Competency-Based Clinical Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:186-90. [PMID: 26630609 DOI: 10.1097/acm.0000000000001045] [Citation(s) in RCA: 171] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Meaningful residency education occurs at the bedside, along with opportunities for situated in-training assessment. A necessary component of workplace-based assessment (WBA) is the clinical supervisor, whose subjective judgments of residents' performance can yield rich and nuanced ratings but may also occasionally reflect bias. How to improve the validity of WBA instruments while simultaneously capturing meaningful subjective judgment is currently not clear. This Perspective outlines how "entrustability scales" may help bridge the gap between the assessment judgments of clinical supervisors and WBA instruments. Entrustment-based assessment evaluates trainees against what they will actually do when independent; thus, "entrustability scales," defined as behaviorally anchored ordinal scales based on progression to competence, reflect a judgment that has clinical meaning for assessors. Rather than asking raters to assess trainees against abstract scales, entrustability scales provide raters with an assessment measure structured around the way evaluators already make day-to-day clinical entrustment decisions, which results in increased reliability. Entrustability scales help raters make assessments based on narrative descriptors that reflect real-world judgments, drawing attention to a trainee's readiness for independent practice rather than his or her deficiencies. These scales fit into milestone measurement both by allowing an individual resident to strive for independence in entrustable professional activities across the entire training period and by allowing residency directors to identify residents experiencing difficulty. Some WBA tools that have begun to use variations of entrustability scales show potential for allowing raters to produce valid judgments. This type of anchor scale should be brought into wider circulation.
Affiliation(s)
- Janelle Rekman
- J. Rekman is a general surgery resident and master's in health professions education student, University of Ottawa, Ottawa, Ontario, Canada.
- W. Gofton is an orthopedic surgeon, University of Ottawa, Ottawa, Ontario, Canada.
- N. Dudek is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.
- T. Gofton is Wissenschaftlicher Mitarbeiter, Department of Philosophy, Eberhard Karls Universität, Tübingen, Germany.
- S.J. Hamstra is vice president, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois.
93
Palermo C, Davidson ZE, Hay M. A cross-sectional study exploring the different roles of individual and group assessment methods in assessing public health nutrition competence. J Hum Nutr Diet 2016; 29:523-8. [PMID: 26781685 DOI: 10.1111/jhn.12351] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
BACKGROUND Competency in the practice of public health is essential for dietitians, yet little is known about credible and dependable assessment in this field. The present study aimed to investigate the role of individual and group assessment tasks as elements of a public health nutrition competency-based assessment system. METHODS Assessment performance data from 158 dietetics students (three group tasks and one individual task) who had completed a practical placement learning experience in a public health nutrition setting were examined using nonparametric techniques. All 158 students were deemed individually 'competent' on completion of the placement. RESULTS The median mark was significantly lower for the individual task than for the group tasks, with a greater range of marks achieved in the individual assessment. There was a weak relationship between individual and group marks for the whole cohort (n = 158) (Spearman's rho correlation coefficient = 0.193, P = 0.015). Bland-Altman analysis showed that the mean (SD) agreement between the two assessment tasks was -5.9 (17.7) marks. Systematic bias between the two tasks was also demonstrated: students with the lowest average mark across the two assessments scored lower on the individual assessment task than on their group task, whereas those with a higher average mark scored higher on the individual assessment than on their group task. CONCLUSIONS Student performance in public health differs between individual and group assessment. Individual assessment appears to differentiate between students, yet group work is essential for the development of teamwork skills. Both should be considered in the judgement of public health nutrition competency.
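The two agreement analyses reported here are straightforward to reproduce. A sketch on synthetic marks; the simulated relationship is an assumption chosen to mimic a weak correlation and a negative bias, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
group_marks = rng.normal(70, 8, size=158)
individual_marks = 0.2 * group_marks + rng.normal(50, 12, size=158)

rho, p = spearmanr(individual_marks, group_marks)  # weak rank correlation

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = individual_marks - group_marks
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"rho = {rho:.2f} (p = {p:.3f}); bias = {bias:.1f}, limits = +/- {loa:.1f}")
```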
Affiliation(s)
- C Palermo
- Department of Nutrition and Dietetics, Faculty of Medicine, Nursing and Health Sciences, Monash University, Notting Hill, VIC, Australia
- Z E Davidson
- Department of Nutrition and Dietetics, Faculty of Medicine, Nursing and Health Sciences, Monash University, Notting Hill, VIC, Australia
- M Hay
- Office of the Deputy Dean (Education), Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
94
Rangel JC, Cartmill C, Kuper A, Martimianakis MA, Whitehead CR. Setting the standard: Medical Education's first 50 years. MEDICAL EDUCATION 2016; 50:24-35. [PMID: 26695464 DOI: 10.1111/medu.12765] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/02/2015] [Revised: 03/03/2015] [Accepted: 03/20/2015] [Indexed: 05/15/2023]
Abstract
CONTEXT By understanding its history, the medical education community gains insight into why it thinks and acts as it does. This piece provides a Foucauldian archaeological critical discourse analysis (CDA) of the journal Medical Education on the publication of its 50th volume. This analysis draws upon critical social science perspectives to allow the examination of unstated assumptions that underpin and shape educational tools and practices. METHODS A Foucauldian form of CDA was utilised to examine the journal over its first half-century. This approach emphasises the importance of language, and the ways in which the words used affect and are affected by educational practices and priorities. An iterative methodology was used to organise the very large dataset (12,000 articles). A distilled dataset, within which particular focus was placed on the editorial pieces in the journal, was analysed. RESULTS A major finding was the diversity of the journal as a site that has permitted multiple, and sometimes contradictory, discursive trends to emerge. One particularly dominant discursive tension across the time span of the journal is that between a persistent drive for standardisation and a continued questioning of the desirability of standardisation. This tension was traced across three prominent areas of focus in the journal: objectivity and the nature of medical education knowledge; universality and local contexts; and the place of medical education between academia and the community. CONCLUSIONS The journal has provided the medical education community with a place in which to both discuss practical pedagogical concerns and ponder conceptual and social issues affecting the medical education community. This dual nature of the journal brings together educators and researchers; it also gives particular focus to a major and rarely cited tension in medical education between the quest for objective standards and the limitations of standard measures.
Affiliation(s)
- Jaime C Rangel
- Department of Sociology, University of Toronto, Toronto, ON, Canada
- Wilson Centre, University Health Network, Toronto, ON, Canada
- Carrie Cartmill
- Wilson Centre, University Health Network, Toronto, ON, Canada
- Ayelet Kuper
- Wilson Centre, University Health Network, Toronto, ON, Canada
- Department of Medicine, Sunnybrook Health Sciences, Toronto, ON, Canada
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Maria A Martimianakis
- Wilson Centre, University Health Network, Toronto, ON, Canada
- Department of Paediatrics, Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
- Cynthia R Whitehead
- Wilson Centre, University Health Network, Toronto, ON, Canada
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Centre for Ambulatory Care Education, Women's College Hospital, Toronto, ON, Canada
95
Onishi H. Assessment of Clinical Reasoning by Listening to Case Presentations: VSOP Method for Better Feedback. JOURNAL OF MEDICAL EDUCATION AND CURRICULAR DEVELOPMENT 2016; 3:JMECD.S30035. [PMID: 29349321 PMCID: PMC5736286 DOI: 10.4137/jmecd.s30035] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Revised: 08/21/2016] [Accepted: 08/23/2016] [Indexed: 06/02/2023]
Abstract
Case presentation is used as a teaching and learning tool in almost all clinical education, and it is also associated with clinical reasoning ability. Despite this, no specific assessment tool utilizing case presentations has yet been established. SNAPPS (summarize, narrow, analyze, probe, plan, and select) and the One-minute Preceptor are well-known educational tools for teaching how to improve consultations. However, these tools do not include a specific rating scale to determine the diagnostic reasoning level. The mini clinical evaluation exercise (mini-CEX) and RIME (reporter, interpreter, manager, and educator) are comprehensive assessment tools with appropriate reliability and validity. The vague, structured, organized and pertinent (VSOP) model, previously proposed in Japan and derived from the RIME model, is a tool for formative assessment and teaching of trainees through case presentations. Uses of the VSOP model in real settings are also discussed.
Affiliation(s)
- Hirotaka Onishi
- International Research Center for Medical Education, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
96
Ingham G, Fry J, Morgan S, Ward B. ARCADO - Adding random case analysis to direct observation in workplace-based formative assessment of general practice registrars. BMC MEDICAL EDUCATION 2015; 15:218. [PMID: 26655455 PMCID: PMC4676174 DOI: 10.1186/s12909-015-0503-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/05/2015] [Accepted: 12/05/2015] [Indexed: 06/05/2023]
Abstract
BACKGROUND Workplace-based formative assessments using consultation observation are currently conducted during the Australian general practice training program. Assessment reliability is improved by using multiple assessment methods. The aim of this study was to explore the experiences of general practice medical educator assessors and registrars (trainees) when adding random case analysis to direct observation (ARCADO) during formative workplace-based assessments. METHODS A sample of general practice medical educators and matched registrars were recruited. Following the ARCADO workplace assessment, semi-structured qualitative interviews were conducted. The data were analysed thematically. RESULTS Ten registrars and eight medical educators participated. Four major themes emerged: formative versus summative assessment; strengths (acceptability, flexibility, time efficiency, complementarity and authenticity); weaknesses (reduced observation and integrity risks); and contextual factors (variation in assessment content, assessment timing, registrar-medical educator relationship, medical educator's approach and registrar ability). CONCLUSION ARCADO is a well-accepted workplace-based formative assessment perceived by registrars and assessors to be valid and flexible. The use of ARCADO enabled complementary insights that would not have been achieved with direct observation alone. Whilst there are some contextual factors to be considered in its implementation, ARCADO appears to have utility as a formative assessment and, subject to further evaluation, as a high-stakes assessment.
Affiliation(s)
- Gerard Ingham
- Beyond Medical Education, PO Box 3064, Bendigo, Victoria, 3550, Australia
- Jennifer Fry
- Beyond Medical Education, PO Box 3064, Bendigo, Victoria, 3550, Australia
- Simon Morgan
- GP Training Valley to Coast, Hunter Regional Mail Centre, PO Box 573, Newcastle, NSW, 2310, Australia
- Bernadette Ward
- School of Rural Health, Monash University, PO Box 666, Bendigo, 3550, Victoria, Australia
97
On the Assessment of Paramedic Competence: A Narrative Review with Practice Implications. Prehosp Disaster Med 2015; 31:64-73. [DOI: 10.1017/s1049023x15005166] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
INTRODUCTION Paramedicine is experiencing significant growth in scope of practice, autonomy, and role in the health care system. Despite clinical governance models, the degree to which paramedicine ultimately can be safe and effective will be dependent on the individuals the profession deems suited to practice. This creates an imperative for those responsible for these decisions to ensure that assessments of paramedic competence are indeed accurate, trustworthy, and defensible. PURPOSE The purpose of this study was to explore and synthesize relevant theoretical foundations and literature informing best practices in performance-based assessment (PBA) of competence, as it might be applied to paramedicine, for the design or evaluation of assessment programs. METHODS A narrative review methodology was applied to focus intentionally, but broadly, on purpose-relevant, theoretically derived research that could inform assessment protocols in paramedicine. Primary and secondary studies from a number of health professions that contributed to and informed best practices related to the assessment of paramedic clinical competence were included and synthesized. RESULTS Multiple conceptual frameworks, psychometric requirements, and emerging lines of research are forwarded. Seventeen practice implications are derived to promote understanding as well as best practices and evaluation criteria for educators, employers, and/or licensing/certifying bodies when considering the assessment of paramedic competence. CONCLUSIONS The assessment of paramedic competence is a complex process requiring an understanding, appreciation for, and integration of conceptual and psychometric principles. The field of PBA is advancing rapidly, with numerous opportunities for research.
98
Rogausch A, Beyeler C, Montagne S, Jucker-Kupper P, Berendonk C, Huwendiek S, Gemperli A, Himmel W. The influence of students' prior clinical skills and context characteristics on mini-CEX scores in clerkships--a multilevel analysis. BMC MEDICAL EDUCATION 2015; 15:208. [PMID: 26608836 PMCID: PMC4658793 DOI: 10.1186/s12909-015-0490-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2015] [Accepted: 11/19/2015] [Indexed: 05/23/2023]
Abstract
BACKGROUND In contrast to objective structured clinical examinations (OSCEs), mini-clinical evaluation exercises (mini-CEXs) take place at the clinical workplace. As both mini-CEXs and OSCEs assess clinical skills, but within different contexts, this study analyzed the degree to which students' mini-CEX scores can be predicted by their recent OSCE scores and/or context characteristics. METHODS Medical students participated in an end-of-Year-3 OSCE and in 11 mini-CEXs during 5 different clerkships of Year 4. Each student's mean score across 9 clinical skills OSCE stations and mean 'overall' and 'domain' mini-CEX scores, averaged over all of that student's mini-CEXs, were computed. Linear regression analyses including random effects were used to predict mini-CEX scores from OSCE performance and characteristics of clinics, trainers, students, and assessments. RESULTS A total of 512 trainers in 45 clinics provided 1783 mini-CEX ratings for 165 students; OSCE results were available for 144 students (87%). The strongest predictor of 'overall' mini-CEX scores was the trainer's clinical position, with a regression coefficient of 0.55 (95% CI: 0.26-0.84; p < .001) for residents compared with heads of department. Highly complex tasks and assessments taking place in large clinics also significantly increased 'overall' mini-CEX scores. In contrast, high OSCE performance did not significantly increase 'overall' mini-CEX scores. CONCLUSION In our study, mini-CEX scores depended more on context characteristics than on students' clinical skills as demonstrated in an OSCE. Approaches are discussed that focus either on enhancing the validity of the scores or on using narrative comments only.
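To make the analytic approach concrete, the following is a minimal sketch of a multilevel (random-intercept) regression of the kind described in the METHODS, written in Python with statsmodels. The input file and all column names (minicex_overall, osce_score, trainer_position, task_complexity, clinic_size, clinic) are hypothetical stand-ins, not the study's actual variables.

    # Hypothetical sketch of a multilevel model for mini-CEX ratings.
    # One row per rating; all names below are illustrative only.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("minicex_ratings.csv")  # hypothetical input file

    # Random intercept for clinic captures clustering of ratings within
    # clinics; fixed effects mirror the predictors named in the abstract.
    model = smf.mixedlm(
        "minicex_overall ~ osce_score + C(trainer_position)"
        " + task_complexity + clinic_size",
        data=df,
        groups=df["clinic"],
    )
    result = model.fit()
    print(result.summary())  # coefficients with 95% CIs and p-values

A fuller specification would add crossed or nested random effects for trainers and students; statsmodels supports these via variance components (vc_formula), though the original analysis may have used a different modelling package.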
Affiliation(s)
- Anja Rogausch: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland; Clinic Sonnenhalde, Riehen, Switzerland.
- Christine Beyeler: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland.
- Stephanie Montagne: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland.
- Patrick Jucker-Kupper: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland.
- Christoph Berendonk: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland.
- Sören Huwendiek: Department of Assessment and Evaluation, Institute of Medical Education, University of Bern, Bern, Switzerland.
- Armin Gemperli: Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland; Swiss Paraplegic Research, Nottwil, Switzerland.
- Wolfgang Himmel: Department of General Practice, University Medical Center, Göttingen, Germany.
99
Peterson LN, Rusticus SA, Wilson DA, Eva KW, Lovato CY. Readiness for Residency: A Survey to Evaluate Undergraduate Medical Education Programs. Academic Medicine 2015; 90:S36-42. [PMID: 26505099 DOI: 10.1097/acm.0000000000000903] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 05/28/2023]
Abstract
BACKGROUND Health professions programs continue to search for meaningful and efficient ways to evaluate the quality of education they provide and to support ongoing program improvement. Despite flaws inherent in self-assessment, recent research suggests that aggregated self-assessments reliably rank aspects of competence attained during preclerkship MD training. Given the novelty of those observations, the purpose of this study was to test their generalizability by evaluating an MD program as a whole. METHOD The Readiness for Residency Survey (RfR) was developed and aligned with the published Readiness for Clerkship Survey (RfC), but focused on the competencies expected to be achieved at graduation. The RfC and RfR were administered electronically four months after the start of clerkship and six months after the start of residency, respectively. Generalizability and decision studies examined the extent to which specific competencies were achieved relative to one another. RESULTS The reliability of scores assigned by a single resident was G = 0.32. However, a reliability of G = 0.80 could be obtained by averaging over as few as nine residents. Whereas the highly rated competencies in the RfC resided within the CanMEDS domains of professional, communicator, and collaborator, five additional medical expert competencies emerged as strengths when residents evaluated the program after completing it. CONCLUSIONS Aggregated resident self-assessments obtained using the RfR reliably differentiate aspects of competence attained over four years of undergraduate training. Together, the RfR and RfC can be used as evaluation tools to identify areas of strength and weakness in an undergraduate medical education program.
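The step from G = 0.32 for one resident to G = 0.80 for nine is what a decision (D) study projects when averaging over raters, and it matches the standard Spearman-Brown form. A quick check in Python, assuming only the figures quoted in the abstract (a sketch, not the authors' analysis, which may involve additional facets):

    # Spearman-Brown projection used in generalizability D-studies:
    # G_k = k * G_1 / (1 + (k - 1) * G_1), where G_1 is the single-rater
    # coefficient and k is the number of raters averaged over.
    def projected_g(g1: float, k: int) -> float:
        return k * g1 / (1 + (k - 1) * g1)

    g1 = 0.32  # single-resident reliability reported in the abstract
    for k in (1, 5, 9, 15):
        print(f"k = {k:2d}: G = {projected_g(g1, k):.2f}")
    # k = 9 yields G of about 0.81, consistent with the reported 0.80.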
100
Stroud L, Bryden P, Kurabi B, Ginsburg S. Putting performance in context: the perceived influence of environmental factors on work-based performance. Perspectives on Medical Education 2015; 4:233-243. [PMID: 26458930 PMCID: PMC4602013 DOI: 10.1007/s40037-015-0209-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 05/26/2023]
Abstract
INTRODUCTION Context shapes behaviours yet is seldom considered when assessing competence. Our objective was to explore attending physicians' and trainees' perceptions of the Internal Medicine Clinical Teaching Unit (CTU) environment and how they thought contextual factors affected their performance. METHOD Twenty-nine individuals who had recently completed CTU rotations participated in nine level-specific focus groups (two with attending physicians, three with senior residents, two with junior residents, and two with students). Participants were asked to identify environmental factors on the CTU and to describe how these factors influenced their own performance across CanMEDS roles. Discussions were analyzed using constructivist grounded theory. RESULTS Five major contextual factors were identified: Busyness, Multiple Hats, Other People, Educational Structures, and Hospital Resources and Policies. Busyness emerged as the most important, but all factors had a substantial perceived impact on performance. Participants felt their performance in the Manager and Scholar roles was most affected by environmental factors, mostly negatively, through decreased efficiency and impaired learning. CONCLUSIONS In complex workplace environments, numerous factors shape performance. These contextual factors and their impact need to be considered in observations and judgements made about workplace performance; without this understanding, conclusions about competence may be flawed.
Affiliation(s)
- Lynfa Stroud: Department of Medicine, University of Toronto, Toronto, Canada; Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Canada.
- Pier Bryden: Department of Psychiatry, University of Toronto, Toronto, Canada.
- Bochra Kurabi: Department of Medicine, University of Toronto, Toronto, Canada.
- Shiphra Ginsburg: Department of Medicine, University of Toronto, Toronto, Canada; Wilson Centre for Research in Education, University Health Network, University of Toronto, Toronto, Canada.