1. Bodard S, Levi LI. Introduction of Objective Structured Clinical Examinations (OSCEs) in France: A paradigm shift in Medical Education. Bull Cancer 2024;111:1005-1007. [PMID: 39277438] [DOI: 10.1016/j.bulcan.2024.08.006]
Affiliation(s)
- Sylvain Bodard
- University of Paris Cité, Necker Hospital, AP-HP, Department of Radiology, 75015 Paris, France; Sorbonne Université, CNRS, INSERM, Laboratoire d'Imagerie Biomédicale, Paris, France; Center for Transplantation Sciences, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Laura I Levi
- University of Paris Cité, Saint-Louis and Lariboisière Hospitals, AP-HP, Department of Infectious and Tropical Diseases, 75010 Paris, France

2. Lim A, Abeyaratne C, Reeve E, Desforges K, Malone D. Using Kane's Validity Framework to Compare an Integrated and Single-Skill Objective Structured Clinical Examination. Am J Pharm Educ 2024;88:100756. [PMID: 39002863] [DOI: 10.1016/j.ajpe.2024.100756]
Abstract
OBJECTIVE The aim of this study was to compare the validity of an integrated objective structured clinical examination (OSCE) station assessing both oral and written components with that of an OSCE station assessing a single skill (oral only), both targeted at assessing the taking of a best possible medication history. METHODS A convergent mixed-methods design that used the 4 inferences of Kane's validity framework (scoring, generalization, extrapolation, and implications) as a scaffold to integrate qualitative data (post-OSCE reflections) and quantitative data (assessment grades and categories of medication errors) was applied. RESULTS In 2022, 216 students completed the OSCE station with the oral component alone, while in 2023, 254 students completed the integrated (oral and written) OSCE station. Students in 2023 performed significantly better, with a median score of 88% vs 80% in 2022. There was a greater proportion of commission errors in the integrated assessment (20.4% vs 15.3%), but fewer omission errors (29.9% vs 31.8%) and patient profile errors (5.1% vs 69.4%). Student reflections revealed that conversations were rushed in the integrated assessment, with a greater focus on written formatting, but an appreciation for the authenticity and structured format of the integrated OSCE compared with the single-skill OSCE alone. CONCLUSION Students completing the integrated OSCE (with oral and written components) had fewer patient profile and medication omission errors than students who completed the oral-only OSCE. Considering Kane's validity framework, there was a stronger argument for the more authentic integrated OSCE in terms of the inferences of extrapolation and implications.
Affiliation(s)
- Angelina Lim
- Monash University, Faculty of Pharmacy and Pharmaceutical Sciences, Parkville, Australia; Royal Children's Hospital, Murdoch Children's Research Institute, Parkville, Australia.
- Carmen Abeyaratne
- Monash University, Faculty of Pharmacy and Pharmaceutical Sciences, Parkville, Australia
- Emily Reeve
- Monash University, Faculty of Pharmacy and Pharmaceutical Sciences, Parkville, Australia; University of South Australia, Quality Use of Medicines and Pharmacy Research Centre, Adelaide, Australia; University of South Australia, Clinical and Health Sciences, Adelaide, Australia
- Katherine Desforges
- Monash University, Faculty of Pharmacy and Pharmaceutical Sciences, Parkville, Australia; Université de Montréal, Faculty of Pharmacy, Montréal, Canada
- Daniel Malone
- Monash University, Faculty of Pharmacy and Pharmaceutical Sciences, Parkville, Australia

3. Garcia-Ros R, Ruescas-Nicolau MA, Cezón-Serrano N, Flor-Rufino C, Martin-Valenzuela CS, Sánchez-Sánchez ML. Improving assessment of procedural skills in health sciences education: a validation study of a rubrics system in neurophysiotherapy. BMC Psychol 2024;12:147. [PMID: 38486300] [PMCID: PMC10941460] [DOI: 10.1186/s40359-024-01643-7]
Abstract
BACKGROUND The development of procedural skills is essential in health sciences education. Rubrics can be useful for learning and assessing these skills. To this end, a set of rubrics was developed for neurophysiotherapy maneuvers for undergraduates. Although students found the rubrics to be valid and useful in previous courses, the analysis of the practical exam results showed the need to change them in order to improve their validity and reliability, especially when used for summative purposes. After revising the rubrics, this paper analyzes their validity and reliability for promoting the learning of neurophysiotherapy maneuvers and assessing the acquisition of the procedural skills they involve. METHODS In this cross-sectional and psychometric study, six experts and 142 undergraduate students of a neurophysiotherapy subject from a Spanish university participated. The rubrics' validity (content and structural) and reliability (inter-rater and internal consistency) were analyzed. The students' scores in the subject practical exam derived from the application of the rubrics, as well as the rubrics' criteria difficulty and discrimination indices, were also determined. RESULTS The rubrics' content validity was found to be adequate (Content Validity Index > 0.90). They showed a unidimensional structure, and acceptable internal consistency (α = 0.71) and inter-rater reliability (Fleiss' κ = 0.44, ICC = 0.94). The scores of the subject practical exam covered practically the entire range of possible theoretical scores, with all criteria showing medium-low to medium difficulty indices, except for the one related to the physical therapist position. All criteria exhibited adequate discrimination indices (rpbis > 0.39), as did the rubric as a whole (Ferguson's δ = 0.86). Students highlighted the rubrics' usefulness for learning the maneuvers, as well as their validity and reliability for formative and summative assessment. CONCLUSIONS The revised rubrics constitute a valid and reliable instrument for evaluating the execution quality of neurophysiotherapy maneuvers from a summative evaluation viewpoint. This study facilitates the development of rubrics aimed at promoting different practical skills in health sciences education.
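To make the reported reliability indices concrete, the following minimal Python sketch computes Cronbach's α and corrected item-total (discrimination) correlations from a students × criteria score matrix. The score matrix is hypothetical illustration data, not the study's; only the standard formulas are assumed.

```python
import numpy as np

# Hypothetical rubric scores: rows = students, columns = rubric criteria (0-4 scale).
scores = np.array([
    [3, 4, 2, 3, 4],
    [2, 2, 3, 2, 3],
    [4, 4, 4, 3, 4],
    [1, 2, 1, 2, 2],
    [3, 3, 2, 4, 3],
    [2, 3, 3, 3, 2],
], dtype=float)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each criterion with the total of the remaining criteria."""
    return np.array([
        np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print("Corrected item-total correlations:", np.round(corrected_item_total(scores), 2))
```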
Affiliation(s)
- Rafael Garcia-Ros
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Valencia, Blasco Ibáñez Av. no. 21, Valencia, 46010, Spain
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Maria-Arantzazu Ruescas-Nicolau
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain.
- Physiotherapy in Motion. Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain.
- Natalia Cezón-Serrano
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Physiotherapy in Motion. Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Cristina Flor-Rufino
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Constanza San Martin-Valenzuela
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Research unit in Clinical biomechanics - UBIC, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- M Luz Sánchez-Sánchez
- Neurophysiotherapy Teaching Innovation Group, Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain
- Physiotherapy in Motion. Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, Faculty of Physiotherapy, University of Valencia, Gascó Oliag Street no. 5, Valencia, 46010, Spain

4. Chang O, Holbrook AM, Lohit S, Deng J, Xu J, Lee M, Cheng A. Comparability of Objective Structured Clinical Examinations (OSCEs) and Written Tests for Assessing Medical School Students' Competencies: A Scoping Review. Eval Health Prof 2023;46:213-224. [PMID: 36959750] [PMCID: PMC10443966] [DOI: 10.1177/01632787231165797]
Abstract
Objective Structured Clinical Examinations (OSCEs) and written tests are commonly used to assess health professional students, but it remains unclear whether the additional human resources and expenses required for OSCEs, both in-person and online, are worthwhile for assessing competencies. This scoping review summarized literature identified by searching MEDLINE and EMBASE comparing 1) OSCEs and written tests and 2) in-person and online OSCEs, for assessing health professional trainees' competencies. For Q1, 21 studies satisfied inclusion criteria. The most examined health profession was medical trainees (19, 90.5%), the comparison was most frequently OSCEs versus multiple-choice questions (MCQs) (18, 85.7%), and 18 (87.5%) examined the same competency domain. Most (77.5%) total score correlation coefficients between testing methods were weak (r < 0.40). For Q2, 13 articles were included. In-person and online OSCEs were most used for medical trainees (9, 69.2%), checklists were the most prevalent evaluation scheme (7, 63.6%), and 14/17 overall score comparisons were not statistically significantly different. Generally low correlations exist between MCQ and OSCE scores, providing insufficient evidence as to whether OSCEs provide sufficient value to be worth their additional cost. Online OSCEs may be a viable alternative to in-person OSCEs for certain competencies where technical challenges can be met.
Affiliation(s)
- Oswin Chang
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Faculty of Health Sciences, McMaster University
- Anne M. Holbrook
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Division of Clinical Pharmacology and Toxicology, McMaster University
- Simran Lohit
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Faculty of Health Sciences, McMaster University
- Jiawen Deng
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Faculty of Health Sciences, McMaster University
- Janice Xu
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Faculty of Health Sciences, McMaster University
- Munil Lee
- Schulich School of Medicine and Dentistry, University of Western Ontario
- Alan Cheng
- Clinical Pharmacology and Toxicology Research, St Joseph’s Healthcare Hamilton
- Faculty of Health Sciences, McMaster University

5. Andersen SAW, Nayahangan LJ, Park YS, Konge L. Use of Generalizability Theory for Exploring Reliability of and Sources of Variance in Assessment of Technical Skills: A Systematic Review and Meta-Analysis. Acad Med 2021;96:1609-1619. [PMID: 33951677] [DOI: 10.1097/acm.0000000000004150]
Abstract
PURPOSE Competency-based education relies on the validity and reliability of assessment scores. Generalizability (G) theory is well suited to explore the reliability of assessment tools in medical education but has only been applied to a limited extent. This study aimed to systematically review the literature using G-theory to explore the reliability of structured assessment of medical and surgical technical skills and to assess the relative contributions of different factors to variance. METHOD In June 2020, 11 databases, including PubMed, were searched from inception through May 31, 2020. Eligible studies included the use of G-theory to explore reliability in the context of assessment of medical and surgical technical skills. Descriptive information on study, assessment context, assessment protocol, participants being assessed, and G-analyses was extracted. Data were used to map G-theory and explore variance components analyses. A meta-analysis was conducted to synthesize the extracted data on the sources of variance and reliability. RESULTS Forty-four studies were included; of these, 39 had sufficient data for meta-analysis. The total pool included 35,284 unique assessments of 31,496 unique performances of 4,154 participants. Person variance had a pooled effect of 44.2% (95% confidence interval [CI], 36.8%-51.5%). Only assessment tool type (Objective Structured Assessment of Technical Skills-type vs task-based checklist-type) had a significant effect on person variance. The pooled reliability (G-coefficient) was 0.65 (95% CI, .59-.70). Most studies included decision studies (39, 88.6%) and generally seemed to have higher ratios of performances to assessors to achieve a sufficiently reliable assessment. CONCLUSIONS G-theory is increasingly being used to examine reliability of technical skills assessment in medical education, but more rigor in reporting is warranted. Contextual factors can potentially affect variance components and thereby reliability estimates and should be considered, especially in high-stakes assessment. Reliability analysis should be a best practice when developing assessment of technical skills.
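For readers unfamiliar with the G-theory quantities pooled in this meta-analysis, the sketch below estimates variance components for a hypothetical one-facet G-study (performances fully crossed with raters) from the expected mean squares and derives the relative G coefficient. The data are simulated for illustration only; the decomposition follows the standard person × rater model, not any particular included study.

```python
import numpy as np

# Simulated fully crossed one-facet design: n_p performances each scored by n_r raters.
rng = np.random.default_rng(0)
n_p, n_r = 30, 3
person = rng.normal(0.0, 1.0, size=(n_p, 1))    # true differences between performances
rater = rng.normal(0.0, 0.3, size=(1, n_r))     # rater leniency/severity
noise = rng.normal(0.0, 0.8, size=(n_p, n_r))   # interaction + residual error
scores = 5.0 + person + rater + noise

grand = scores.mean()
row_means = scores.mean(axis=1)                  # per-performance means
col_means = scores.mean(axis=0)                  # per-rater means

# Mean squares for the person x rater design (one observation per cell).
ms_p = n_r * ((row_means - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((col_means - grand) ** 2).sum() / (n_r - 1)
resid = scores - row_means[:, None] - col_means[None, :] + grand
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

# Variance component estimates from expected mean squares.
var_pr = ms_pr                      # sigma^2(pr,e): interaction/error
var_p = (ms_p - ms_pr) / n_r        # sigma^2(p): person (true-score) variance
var_r = (ms_r - ms_pr) / n_p        # sigma^2(r): rater variance

# Relative G coefficient for a D-study that averages over n_r raters.
g_coef = var_p / (var_p + var_pr / n_r)
total = var_p + var_r + var_pr
print(f"person variance: {100 * var_p / total:.1f}% of total")
print(f"G coefficient (relative, {n_r} raters): {g_coef:.2f}")
```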
Affiliation(s)
- Steven Arild Wuyts Andersen
- S.A.W. Andersen is postdoctoral researcher, Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, Capital Region of Denmark, and Department of Otolaryngology, The Ohio State University, Columbus, Ohio, and resident in otorhinolaryngology, Department of Otorhinolaryngology-Head & Neck Surgery, Rigshospitalet, Copenhagen, Denmark; ORCID: https://orcid.org/0000-0002-3491-9790
- Leizl Joy Nayahangan
- L.J. Nayahangan is researcher, CAMES, Center for Human Resources and Education, Capital Region of Denmark, Copenhagen, Denmark; ORCID: https://orcid.org/0000-0002-6179-1622
- Yoon Soo Park
- Y.S. Park is director of health professions education research, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts; ORCID: https://orcid.org/0000-0001-8583-4335
- Lars Konge
- L. Konge is professor of medical education, University of Copenhagen, and head of research, CAMES, Center for Human Resources and Education, Capital Region of Denmark, Copenhagen, Denmark; ORCID: https://orcid.org/0000-0002-1258-5822

6. van den Bos-Boon A, van Dijk M, Adema J, Gischler S, van der Starre C. Professional Assessment Tool for Team Improvement: An assessment tool for paediatric intensive care unit nurses' technical and nontechnical skills. Aust Crit Care 2021;35:159-166. [PMID: 34167890] [DOI: 10.1016/j.aucc.2021.03.002]
Abstract
BACKGROUND Cardiorespiratory arrests are rare in paediatric intensive care units, yet intensive care nurses must be able to initiate resuscitation before medical assistance is available. For resuscitation to be successful, instant decision-making, team communication, and the coordinating role of the first responsible nurse are crucial. In-house resuscitation training for nurses includes technical and nontechnical skills. OBJECTIVES The aim of this study was to develop a valid, reliable, and feasible assessment instrument, called the Professional Assessment Tool for Team Improvement, for the first responsible nurse's technical and nontechnical skills. METHODS Instrument development followed the COnsensus-based Standards for the selection of health Measurement Instruments guidelines and professionals' expertise. To establish content validity, experts reached consensus via group discussions about the content and the operationalisation of this team role. The instrument was tested using two resuscitation assessment scenarios. Inter-rater reliability was established by assessing 71 nurses in live scenario sessions and videotaped sessions, using intraclass correlation coefficients and Cohen's kappa. Internal consistency for the total instrument was established using Cronbach's alpha. Construct validity was assessed by examining the associations between raters' assessments and nurses' self-assessment scores. RESULTS The final instrument included 12 items, divided into four categories: Team role, Teamwork and communication, Technical skills, and Reporting. Intraclass correlation coefficients were good in both live and videotaped sessions (0.78-0.87). Cronbach's alpha was stable around 0.84. Feasibility was approved (assessment time reduced by >30%). CONCLUSIONS The Professional Assessment Tool for Team Improvement appears to be a promising valid and reliable instrument to assess both technical and nontechnical skills of the first responsible paediatric intensive care unit nurse. The ability of the instrument to detect change over time (i.e., improvement of skills after training) needs to be established.
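As a small illustration of the agreement statistics used in this kind of instrument testing, the sketch below computes raw agreement and Cohen's kappa for two raters scoring the same set of scenario performances. The rating data are simulated, not the study's; only scikit-learn's standard cohen_kappa_score implementation is assumed.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Simulated item ratings by two raters for 71 observed scenario performances
# (0 = not done, 1 = partially done, 2 = done well), with roughly 75% exact agreement.
rng = np.random.default_rng(42)
rater_a = rng.integers(0, 3, size=71)
agree = rng.random(71) < 0.75
rater_b = np.where(agree, rater_a, rng.integers(0, 3, size=71))

print(f"Raw agreement: {(rater_a == rater_b).mean():.2f}")
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```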
Affiliation(s)
- Ada van den Bos-Boon
- Pediatric Intensive Care Unit and Department of Pediatric Surgery, Erasmus University Medical Centre-Sophia Children's Hospital, Rotterdam, the Netherlands.
- Monique van Dijk
- Pediatric Intensive Care Unit and Department of Pediatric Surgery, Erasmus University Medical Centre-Sophia Children's Hospital, Rotterdam, the Netherlands
- Jan Adema
- Cito, Institute for Educational Testing, Arnhem, the Netherlands
- Saskia Gischler
- Pediatric Intensive Care Unit and Department of Pediatric Surgery, Erasmus University Medical Centre-Sophia Children's Hospital, Rotterdam, the Netherlands
- Cynthia van der Starre
- Pediatric Intensive Care Unit and Department of Pediatric Surgery, Erasmus University Medical Centre-Sophia Children's Hospital, Rotterdam, the Netherlands; Neonatal Intensive Care Unit, Erasmus University Medical Centre-Sophia Children's Hospital, Rotterdam, the Netherlands

7. Hock SM, Shah SC, Perumalsamy PD, Sergel M. Comparison of Two Simulated Procedural Assessment Formats in Attending Emergency Physicians. Cureus 2021;13:e14943. [PMID: 34123640] [PMCID: PMC8189535] [DOI: 10.7759/cureus.14943]
Abstract
Background Emergency physicians must be proficient at inserting central venous catheters and performing lumbar punctures to provide life-saving therapies to critically ill patients. An assessment of procedural skill is rarely performed after an emergency physician has completed residency. Current board certification exams for emergency medicine focus only on verbal descriptions of procedures to assess skill. We compared two methods of procedural skill assessment, simulated task trainer and verbal description, to assess the range of skill in central venous catheter insertion and lumbar punctures of emergency attending physicians at a large, urban, academic tertiary care institution. Methodology This is a prospective cohort study of simulated internal jugular central venous catheter insertion and lumbar puncture skill by emergency attending physicians on a task trainer versus verbal description. A total of 17 attending emergency medicine physicians consented to participate in the study during a yearly procedural skills session. For each subject, two expert raters used previously published checklists to assess procedural skill and give a global rating score. Results More checklist items were performed correctly on the task trainer than on verbal assessment for central line (task trainer = 78.4% ± 8.32% and verbal = 68.26% ± 8.9%) and lumbar puncture (task trainer = 85.57% ± 7.6% and verbal = 73.53% ± 10.34%) procedures, both with significant differences (p < 0.001). Of the participants, 82% strongly preferred the task trainer format to the verbal description assessment format. Conclusions The higher scores on the simulated format compared to the current verbal format imply that a shift towards simulated procedural assessment techniques may benefit examinees. More work is needed to determine if objective checklist scores for practicing attending emergency physicians correlate with subjective expert assessments of their procedural skills.
Affiliation(s)
- Sara M Hock
- Department of Emergency Medicine, Rush University Medical Center, Chicago, USA
- Shital C Shah
- Department of Health Systems Management, College of Health Sciences, Rush University Medical Center, Chicago, USA
- Priya D Perumalsamy
- Department of Emergency Medicine, Cape Regional Medical Center, Cape May Court House, USA
- Michelle Sergel
- Department of Emergency Medicine, Cook County Health and Hospital System, Chicago, USA

8. Gomes MM, Driman D, Park YS, Wood TJ, Yudkowsky R, Dudek NL. Teaching and assessing intra-operative consultations in competency-based medical education: development of a workplace-based assessment instrument. Virchows Arch 2021;479:803-813. [PMID: 33966099] [PMCID: PMC8516791] [DOI: 10.1007/s00428-021-03113-6]
Abstract
Competency-based medical education (CBME) is being implemented worldwide. In CBME, residency training is designed around competencies required for unsupervised practice and uses entrustable professional activities (EPAs) as workplace “units of assessment”. Well-designed workplace-based assessment (WBA) tools are required to document competence of trainees in authentic clinical environments. In this study, we developed a WBA instrument to assess residents' performance of intra-operative pathology consultations and conducted a validity investigation. The entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC) was developed through a national iterative consultation and used clinical supervisors to assess residents' performance at an anatomical pathology program. Psychometric analyses and focus groups were conducted to explore the sources of evidence using modern validity theory: content, response process, internal structure, relations to other variables, and consequences of assessment. The content was considered appropriate, the assessment was feasible and acceptable to residents and supervisors, and it had a positive educational impact by improving performance of intra-operative consultations and feedback to learners. The results had low reliability, which seemed to be related to assessment biases, and supervisors were reluctant to fully entrust trainees due to cultural issues. With CBME implementation, new workplace-based assessment tools are needed in pathology. In this study, we showcased the development of the first instrument for assessing residents' performance of a prototypical entrustable professional activity in pathology using modern education principles and validity theory.
Affiliation(s)
- Marcio M Gomes
- Department of Pathology and Laboratory Medicine, University of Ottawa, Ottawa, Canada.
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada.
- The Ottawa Hospital, Ottawa, Canada.
- David Driman
- Department of Pathology and Laboratory Medicine, Western University, London, Canada
- Yoon Soo Park
- Department of Medical Education, University of Illinois At Chicago, Chicago, IL, USA
- Timothy J Wood
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Canada
- Rachel Yudkowsky
- Department of Medical Education, University of Illinois At Chicago, Chicago, IL, USA
- Nancy L Dudek
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- The Ottawa Hospital, Ottawa, Canada
- Department of Medicine, University of Ottawa, Ottawa, Canada

9. Horita S, Park YS, Son D, Eto M. Computer-based test (CBT) and OSCE scores predict residency matching and National Board assessment results in Japan. BMC Med Educ 2021;21:85. [PMID: 33531010] [PMCID: PMC7856777] [DOI: 10.1186/s12909-021-02520-2]
Abstract
CONTEXT The Japan Residency Matching Program (JRMP) launched in 2003 and is now a significant event for graduating medical students and postgraduate residency hospitals. The environment surrounding the JRMP has changed due to Japanese health policy, resulting in an increase in the number of unsuccessfully matched students. Beyond policy issues, we suspected there were also common characteristics among the students who do not match with residency hospitals. METHODS In total, 237 out of 321 graduates of The University of Tokyo Faculty of Medicine from 2018 to 2020 participated in the study. The students answered the questionnaire and gave written consent for the use of their personal information, including their JRMP placement and their scores on the pre-clinical clerkship (CC) Objective Structured Clinical Examinations (OSCE), the Computer-Based Test (CBT), the National Board Examination (NBE), and domestic examinations. The collected data were statistically analyzed. RESULTS The JRMP placements were correlated with some of the pre-CC OSCE factors/stations and/or total scores/global scores. Above all, the result of the neurological examination station had the most significant correlation with the JRMP placements. On the other hand, the CBT result had no correlation with the JRMP results, but the CBT results were significantly correlated with the NBE results. CONCLUSIONS Our data suggest that the pre-clinical clerkship OSCE score and the CBT score, both undertaken before the clinical clerkship, predict important outcomes including the JRMP and the NBE. These results also suggest that educational resources should be concentrated on students who did not score well in the pre-clinical clerkship OSCE and the CBT, to help them avoid failure in the JRMP and the NBE.
Affiliation(s)
- Shoko Horita
- Office for Clinical Practice and Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.
- Yoon-Soo Park
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Daisuke Son
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Department of Community-based Family Medicine, School of Medicine, Tottori University Faculty of Medicine, Yonago, Japan
- Masato Eto
- International Research Center for Medical Education, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan

10. Abbassi Z, Sgardello SD, Chevallay M, Toso C, Ris F, Jung M, Peloso A. The modified competency assessment tool in surgical training. Am J Surg 2020;221:777-779. [PMID: 32958158] [DOI: 10.1016/j.amjsurg.2020.09.011]
Affiliation(s)
- Ziad Abbassi
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland.
- Sebastian Douglas Sgardello
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland
- Mickael Chevallay
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland
- Christian Toso
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland
- Frédéric Ris
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland
- Minoa Jung
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland
- Andrea Peloso
- Division of Abdominal Surgery, Department of Surgery, Geneva University Hospitals, University of Geneva, Geneva, Switzerland

11. O'Keeffe DA, Losty M, Traynor O, Doherty EM. Objective assessment of surgical trainees' non-technical skills: Improved performance following a two-year program of instruction. Am J Surg 2020;220:1566-1571. [PMID: 32444063] [DOI: 10.1016/j.amjsurg.2020.04.039]
Abstract
BACKGROUND Non-technical skills (NTS) encompass personal skills such as communication, situational awareness, decision making, teamwork and leadership. Poor performance of these skills has been shown to contribute to medical error. The Royal College of Surgeons in Ireland (RCSI) has delivered a mandatory program of instruction in NTS to all surgical trainees since 2005. We investigated whether the NTS of surgical trainees improved after the first two years of this program. METHODS Baseline data was collected in a three-station OSCE assessment of NTS at the beginning of Year one and again at end of Year two of surgical training. RESULTS Trainees' mean percentage NTS scores improved significantly over the two-year period for the NTS assessment (P < .001). A significant difference was demonstrated using within-subject (paired) t-tests between the Year one and two time points for all three OSCE stations: Consent (-5.39; P < .001); Colleague Conflict (-8.63; P < .001); and Disclosure of Error (-7.56; P < .001). CONCLUSIONS RCSI offers a unique mandatory program of instruction in NTS. There was a statistically and practically significant improvement in the NTS scores of surgical trainees over the two-year period of the program.
Affiliation(s)
- Dara A O'Keeffe
- National Surgical and Clinical Skills Centre, Royal College of Surgeons in Ireland, RCSI House, 121 St. Stephen's Green, Dublin, 2, Ireland.
- Mairead Losty
- National Surgical and Clinical Skills Centre, Royal College of Surgeons in Ireland, RCSI House, 121 St. Stephen's Green, Dublin, 2, Ireland.
- Oscar Traynor
- National Surgical and Clinical Skills Centre, Royal College of Surgeons in Ireland, RCSI House, 121 St. Stephen's Green, Dublin, 2, Ireland.
- Eva M Doherty
- National Surgical and Clinical Skills Centre, Royal College of Surgeons in Ireland, RCSI House, 121 St. Stephen's Green, Dublin, 2, Ireland.

12. Assessing Competence in Central Venous Catheter Placement by Pediatric Critical Care Fellows: A National Survey Study. Crit Care Med 2020;47:e654-e661. [PMID: 31135502] [DOI: 10.1097/ccm.0000000000003821]
Abstract
OBJECTIVES To describe the current approach to initial training, ongoing skill maintenance, and assessment of competence in central venous catheter placement by pediatric critical care medicine fellows, a subset of trainees in whom this skill is required. DESIGN Cross-sectional internet-based survey with deliberate sampling. SETTING United States pediatric critical care medicine fellowship programs. SUBJECTS Pediatric critical care medicine program directors of Accreditation Council for Graduate Medical Education-accredited fellowship programs. INTERVENTIONS None. MEASUREMENTS AND MAIN RESULTS A working group of the Education in Pediatric Intensive Care Investigators research collaborative conducted a national study to assess the degree of standardization of training and competence assessment of central venous catheter placement across pediatric critical care medicine fellowship programs. After piloting, the survey was sent to all program directors (n = 67) of Accreditation Council for Graduate Medical Education-accredited pediatric critical care medicine programs between July 2017 and September 2017. The response rate was 85% (57/67). Although 98% of programs provide formalized central venous catheter placement training for first-year fellows, only 42% of programs provide ongoing maintenance training as part of fellowship. Over half (55%) of programs use a global assessment tool and 33% use a checklist-based tool when evaluating fellow central venous catheter placement competence under direct supervision. Only two programs (4%) currently use an assessment tool previously published and validated by the Education in Pediatric Intensive Care group. A majority (82%) of responding program directors believe that a standardized approach to assessment of central venous catheter competency across programs is important. CONCLUSIONS Despite national mandates for skill competence by many accrediting bodies, no standardized system currently exists across programs for assessing central venous catheter placement. Most pediatric critical care medicine programs use a global assessment and decisions around the ability of a fellow to place a central venous catheter under indirect supervision are largely based upon subjective assessment of performance. Further investigation is needed to determine if this finding is consistent in other specialties/subspecialties, if utilization of standardized assessment methods can improve program directors' abilities to ensure trainee competence in central venous catheter insertion in the setting of variable training approaches, and if these findings are consistent with other procedures across critical care medicine training programs, adult and pediatric.

13. Jasemi M, Ahangarzadeh Rezaie S, Hemmati Maslakpak M, Parizad N. Are workplace-based assessment methods (DOPS and Mini-CEX) effective in nursing students' clinical skills? A single-blind randomized, parallel group, controlled trial. Contemp Nurse 2020;55:565-575. [PMID: 32107975] [DOI: 10.1080/10376178.2020.1735941]
Abstract
Background: Evaluation of clinical skills is critically important for nursing students. However, the quality of evaluation tools is poor. Objectives: To evaluate the effectiveness of Direct Observation of Procedural Skills (DOPS) and the Mini-Clinical Evaluation Exercise (Mini-CEX) on the clinical skills of nursing students. Methods: This study was conducted among 108 senior nursing students. Mini-CEX and DOPS were utilized to evaluate clinical skills in the intervention group. Results: The mean of students' scores in all five procedures was significantly higher in the intervention group compared to the control group. Students' scores for the procedures rose significantly from the first stage of DOPS and Mini-CEX to the third stage. Conclusions: Utilization of DOPS and Mini-CEX for the evaluation of clinical skills in nursing students effectively enhances their learning ability. Implementing such assessment methods promotes students' clinical skills, which eventually helps them provide high-quality care for their patients.
Affiliation(s)
- Madineh Jasemi
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran
- Masumeh Hemmati Maslakpak
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran
- Maternal and Childhood Obesity Research Center, Urmia University of Medical Sciences, Urmia, Iran
- Naser Parizad
- Faculty of Nursing and Midwifery, Urmia University of Medical Sciences, Urmia, Iran
- Patient Safety Research Center, Urmia University of Medical Sciences, Urmia, Iran

14. Naylor KA, Torres KC. Translation of learning objectives in medical education using high- and low-fidelity simulation: Learners' perspectives. J Taibah Univ Med Sci 2020;14:481-487. [PMID: 31908634] [PMCID: PMC6940622] [DOI: 10.1016/j.jtumed.2019.10.006]
Abstract
Objectives The mastering of learnt procedures by medical students is triggered by numerous elements, including the ability to understand educational goals for specific tasks. In this study, the authors investigated the processes for identifying learning objectives set forth by medical students and the possibility of the chosen simulation fidelity influencing this ability in Basic Clinical Skills and Elderly Medicine courses at the Medical University of Lublin. Methods A total of 121 medical students assessed the extent to which learning objectives were implemented in two courses with high- and low-fidelity simulation. Using an online survey with closed-ended questions, a list of learning objectives assigned to the courses was sent to participants. The authors evaluated how the courses were generally assessed in terms of their substantive value and general applicability. The Spearman rank correlation (Spearman's rho), χ2, and descriptive statistics were used for investigating research problems. Results Students correctly identified established learning objectives embedded in the courses and positively assessed both courses. Participants' affirmative opinions were related to the high substantive value of both courses. Conclusions Teachers and course creators could benefit from students' feedback about the clarity of learning objectives. The application of some of their ideas would promote a student-centred approach in medical simulation. This approach could be considered input for task selection and optimisation of learning.
Affiliation(s)
- Katarzyna A Naylor
- Department of Didactics and Medical Simulation, Medical University of Lublin, Lublin, Poland
- Kamil C Torres
- Department of Didactics and Medical Simulation, Medical University of Lublin, Lublin, Poland

15. Cevik AA, Abu-Zidan F. Clinical Procedure Experience of Medical Students Improves Their Objective Structured Clinical Examination Station Scores in Emergency Medicine Clerkship. Cureus 2019;11:e6261. [PMID: 31819841] [PMCID: PMC6886735] [DOI: 10.7759/cureus.6261]
Abstract
OBJECTIVE We aimed to study the correlation between procedure experiences in the clinical setting and objective structured clinical examination (OSCE) scores achieved at the end of an emergency medicine clerkship for final-year medical students. METHODS This is a retrospective analysis of prospectively collected clinical data of 141 final-year medical students and their OSCE scores for two consecutive academic years (2015-2017). The experience of practical skills including suturing, extended focused assessment sonography for trauma (EFAST), airway management, and cardiopulmonary resuscitation was correlated with the final OSCE scores in the same areas. RESULTS Weighted experiences of the four procedures were significantly correlated with the total OSCE station scores (p = 0.027, Spearman's rho = 0.19). Suturing OSCE scores were significantly higher than those of the other stations (p < 0.0001, Wilcoxon signed-rank test). There was a significant correlation between suturing experience and its OSCE score (p = 0.036, Spearman's rho = 0.18). There was also a strong trend in the correlation between EFAST experience and its OSCE score (p = 0.063, Spearman's rho = 0.16). There was a significant difference in weighted experience between each of the four procedures (p < 0.0001, Wilcoxon signed-rank test). At all OSCE score cut-off levels (75-95), students with higher scores showed higher weighted procedure experience; statistical significance was reached only for students who scored more than 90% on the OSCE. CONCLUSION Clinical experience of procedures improved OSCE scores of the same procedures. The top students showed significantly higher weighted procedure experience.
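The nonparametric tests reported above can be run with SciPy as in the sketch below. The arrays are hypothetical stand-ins for the per-student data (weighted procedure experience and station scores), so the numbers will not match the study's results.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(7)

# Hypothetical per-student weighted procedure experience and the matching OSCE station score.
experience = rng.poisson(4, size=141).astype(float)
osce_score = np.clip(60 + 3 * experience + rng.normal(0, 12, size=141), 0, 100)

rho, p_rho = spearmanr(experience, osce_score)
print(f"Spearman's rho = {rho:.2f} (p = {p_rho:.4f})")

# Paired comparison of two stations taken by the same students (e.g. suturing vs. EFAST).
suturing = osce_score
efast = np.clip(suturing - rng.normal(6, 8, size=141), 0, 100)
stat, p_w = wilcoxon(suturing, efast)
print(f"Wilcoxon signed-rank statistic = {stat:.1f} (p = {p_w:.4f})")
```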
Affiliation(s)
- Arif Alper Cevik
- Internal Medicine, Emergency Medicine Section, United Arab Emirates University, College of Medicine and Health Sciences, Al Ain, ARE

16. Higham H, Greig PR, Rutherford J, Vincent L, Young D, Vincent C. Observer-based tools for non-technical skills assessment in simulated and real clinical environments in healthcare: a systematic review. BMJ Qual Saf 2019;28:672-686. [DOI: 10.1136/bmjqs-2018-008565]
Abstract
Background: Over the past three decades multiple tools have been developed for the assessment of non-technical skills (NTS) in healthcare. This study was designed primarily to analyse how they have been designed and tested but also to consider guidance on how to select them. Objectives: To analyse the context of use, method of development, evidence of validity (including reliability) and usability of tools for the observer-based assessment of NTS in healthcare. Design: Systematic review. Data sources: Search of electronic resources, including PubMed, Embase, CINAHL, ERIC, PsycNet, Scopus, Google Scholar and Web of Science. Additional records identified through searching grey literature (OpenGrey, ProQuest, AHRQ, King's Fund, Health Foundation). Study selection: Studies of observer-based tools for NTS assessment in healthcare professionals (or undergraduates) were included if they: were available in English; published between January 1990 and March 2018; assessed two or more NTS; were designed for simulated or real clinical settings and had provided evidence of validity plus or minus usability. 11,101 articles were identified. After limits were applied, 576 were retrieved for evaluation and 118 articles included in this review. Results: One hundred and eighteen studies describing 76 tools for assessment of NTS in healthcare met the eligibility criteria. There was substantial variation in the method of design of the tools and the extent of validity and usability testing. There was considerable overlap in the skills assessed, and the contexts of use of the tools. Conclusion: This study suggests a need for rationalisation and standardisation of the way we assess NTS in healthcare and greater consistency in how tools are developed and deployed.

17. Bearman M, Ajjawi R. Actor-network theory and the OSCE: formulating a new research agenda for a post-psychometric era. Adv Health Sci Educ Theory Pract 2018;23:1037-1049. [PMID: 29027040] [DOI: 10.1007/s10459-017-9797-7]
Abstract
The Objective Structured Clinical Examination (OSCE) is a ubiquitous part of medical education, although there is some debate about its value, particularly around possible impact on learning. Literature and research regarding the OSCE is most often situated within the psychometric or competency discourses of assessment. This paper describes an alternative approach: Actor-network-theory (ANT), a sociomaterial approach to understanding practice and learning. ANT provides a means to productively examine tensions and limitations of the OSCE, in part through extending research to include social relationships and physical objects. Using a narrative example, the paper suggests three ANT-informed insights into the OSCE. We describe: (1) exploring the OSCE as a holistic combination of people and objects; (2) thinking about the influences a checklist can exert over the OSCE; and (3) the implications of ANT educational research for standardisation within the OSCE. We draw from this discussion to provide a practical agenda for ANT research into the OSCE. This agenda promotes new areas for exploration in an often taken-for-granted assessment format.
Affiliation(s)
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia.
- Rola Ajjawi
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, VIC, Australia

18. Nicholls D, Sweet L, Muller A, Hyett J. A model to teach concomitant patient communication during psychomotor skill development. Nurse Educ Today 2018;60:121-126. [PMID: 29096384] [DOI: 10.1016/j.nedt.2017.09.004]
Abstract
Many health professionals use psychomotor or task-based skills in clinical practice that require concomitant communication with a conscious patient. Verbally engaging with the patient requires highly developed verbal communication skills, enabling the delivery of patient-centred care. Historically, priority has been given to learning the psychomotor skills essential to clinical practice. However, there has been a shift towards also ensuring competent communication with the patient during skill performance. While there is literature outlining the steps to teach and learn verbal communication skills, little is known about the most appropriate instructional approach to teach how to verbally engage with the patient when also learning to perform a task. A literature review was performed, and it identified no model or proven approach for integrating the learning of both psychomotor and communication skills. This paper reviews the steps to teach a communication skill and provides a suggested model to guide the acquisition and development of the concomitant communication skills required with a patient at the time a psychomotor skill is performed.
Affiliation(s)
- Delwyn Nicholls
- College of Nursing and Health Sciences, Flinders University, Adelaide, Australia; Sydney Ultrasound for Women, Sydney, Australia.
- Linda Sweet
- College of Nursing and Health Sciences, Flinders University, Adelaide, Australia
- Amanda Muller
- College of Nursing and Health Sciences, Flinders University, Adelaide, Australia
- Jon Hyett
- RPA Women and Babies, Royal Prince Alfred Hospital, Sydney, Australia; Discipline of Obstetrics, Gynaecology and Neonatology, Faculty of Medicine, University of Sydney, Australia

19. Sobh AH, Austin Z, Izham MIM, Diab MI, Wilby KJ. Application of a systematic approach to evaluating psychometric properties of a cumulative exit-from-degree objective structured clinical examination (OSCE). Curr Pharm Teach Learn 2017;9:1091-1098. [PMID: 29233377] [DOI: 10.1016/j.cptl.2017.07.011]
Abstract
BACKGROUND AND PURPOSE Objective structured clinical examinations (OSCEs) are considered gold standard performance-based assessments yet comprehensive evaluation data is currently lacking. The objective of this study was to critically evaluate the psychometric properties of a cumulative OSCE for graduating pharmacy students in Qatar for which policies and procedures were adapted from a Canadian context. EDUCATIONAL ACTIVITY AND SETTING A 10-station OSCE was conducted for graduating students in Qatar. Evaluation included assessment of pass rates, predictive validity, concurrent validity, internal validity, content validity, interrater reliability, and internal consistency. FINDINGS Twenty-six students completed the OSCE. Three stations achieved pass rates < 80%. Scores from professional skills and case-based learning courses, formative OSCEs, and cumulative grade point averages correlated with OSCE scores (p < 0.05). Average correlation between assessors' analytical and global scoring was moderate (r = 0.52). Average interrater reliability was excellent for analytical scoring (ICC = 0.88) and moderate for global scoring (ICC = 0.61). Excellent internal consistency was demonstrated for overall performance (α = 0.927). Students generally agreed stations represented real practice scenarios (range per station, 30-100%). DISCUSSION AND SUMMARY The evaluation model identified strengths and weaknesses in assessment and curricular considerations. The OSCE demonstrated acceptable validity and reliability as an adapted assessment.
Affiliation(s)
- Zubin Austin
- Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, Ontario, Canada.
- Mohammad I Diab
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
- Kyle John Wilby
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.

20. Muthusami A, Mohsina S, Sureshkumar S, Anandhi A, Elamurugan TP, Srinivasan K, Mahalakshmy T, Kate V. Efficacy and Feasibility of Objective Structured Clinical Examination in the Internal Assessment for Surgery Postgraduates. J Surg Educ 2017;74:398-405. [PMID: 27913082] [DOI: 10.1016/j.jsurg.2016.11.004]
Abstract
INTRODUCTION Traditionally, assessment in medical training programs has been through subjective faculty evaluations or multiple-choice questions. Conventional examinations provide assessment of global performance rather than individual competencies, thus making the final feedback less meaningful. The objective structured clinical examination (OSCE) is a relatively new multidimensional tool for evaluating training. This study was carried out to determine the efficacy and feasibility of the OSCE as a tool for the internal assessment of surgery residents. METHODS This study was carried out on the marks obtained by surgery residents at different levels of training in a single tertiary center in India over the 4 OSCEs conducted in the years 2015 and 2016. The marks of the OSCE were collected from the departmental records and analyzed. Reliability was assessed as internal consistency using Cronbach's α. Validity was calculated by item-total correlation. Content validation was done by obtaining expert reviews from 5 experts using a proforma, to assess the content and checklist of each station of the OSCE. RESULTS A total of 49 surgery residents were assessed in small batches during the above-mentioned period. Of the 4 OSCEs conducted by us, 3 had a high value of Cronbach's α of greater than 0.9, as opposed to the set standard of 0.7. Out of the 23 stations used across the 4 examinations, only 3 stations were found to have a low correlation coefficient (item-total correlation), and hence a low validity. The remaining 20 stations were found to have high validity. Expert review showed unanimous validation of the content of 17 out of the 23 stations, with few suggestions for change in the remaining 6 stations. The material and manpower used were minimal and easy to obtain, thus making the OSCE feasible to conduct. CONCLUSION The OSCE is a reliable, valid, and feasible method for evaluating surgery residents at various levels of training.
Affiliation(s)
- Anitha Muthusami
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Subair Mohsina
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Sathasivam Sureshkumar
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Amaranathan Anandhi
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Thirthar Palanivelu Elamurugan
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Krishnamachari Srinivasan
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Thulasingam Mahalakshmy
- Department of Preventive and Social Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India
- Vikram Kate
- Department of Surgery, Pondicherry, Jawaharlal Institute of Postgraduate Medical Education and Research, Pondicherry, India.

21. Pugh D, Cavalcanti RB, Halman S, Ma IWY, Mylopoulos M, Shanks D, Stroud L. Using the Entrustable Professional Activities Framework in the Assessment of Procedural Skills. J Grad Med Educ 2017;9:209-214. [PMID: 28439355] [PMCID: PMC5398142] [DOI: 10.4300/jgme-d-16-00282.1]
Abstract
BACKGROUND The entrustable professional activity (EPA) framework has been identified as a useful approach to assessment in competency-based education. To apply an EPA framework for assessment, essential skills necessary for entrustment to occur must first be identified. OBJECTIVE Using an EPA framework, our study sought to (1) define the essential skills required for entrustment for 7 bedside procedures expected of graduates of Canadian internal medicine (IM) residency programs, and (2) develop rubrics for the assessment of these procedural skills. METHODS An initial list of essential skills was defined for each procedural EPA by focus groups of experts at 4 academic centers using the nominal group technique. These lists were subsequently vetted by representatives from all Canadian IM training programs through a web-based survey. Consensus (more than 80% agreement) about inclusion of each item was sought using a modified Delphi exercise. Qualitative survey data were analyzed using a framework approach to inform final assessment rubrics for each procedure. RESULTS Initial lists of essential skills for procedural EPAs ranged from 10 to 24 items. A total of 111 experts completed the national survey. After 2 iterations, consensus was reached on all items. Following qualitative analysis, final rubrics were created, which included 6 to 10 items per procedure. CONCLUSIONS These EPA-based assessment rubrics represent a national consensus by Canadian IM clinician educators. They provide a practical guide for the assessment of procedural skills in a competency-based education model, and a robust foundation for future research on their implementation and evaluation.
Collapse
|
22
|
Ponton-Carss A, Kortbeek JB, Ma IW. Assessment of technical and nontechnical skills in surgical residents. Am J Surg 2016; 212:1011-1019. [DOI: 10.1016/j.amjsurg.2016.03.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Revised: 02/24/2016] [Accepted: 03/22/2016] [Indexed: 01/03/2023]
|
23
|
Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability Scales: Outlining Their Usefulness for Competency-Based Clinical Assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:186-90. [PMID: 26630609 DOI: 10.1097/acm.0000000000001045] [Citation(s) in RCA: 171] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Meaningful residency education occurs at the bedside, along with opportunities for situated in-training assessment. A necessary component of workplace-based assessment (WBA) is the clinical supervisor, whose subjective judgments of residents' performance can yield rich and nuanced ratings but may also occasionally reflect bias. How to improve the validity of WBA instruments while simultaneously capturing meaningful subjective judgment is currently not clear. This Perspective outlines how "entrustability scales" may help bridge the gap between the assessment judgments of clinical supervisors and WBA instruments. Entrustment-based assessment evaluates trainees against what they will actually do when independent; thus, "entrustability scales"-defined as behaviorally anchored ordinal scales based on progression to competence-reflect a judgment that has clinical meaning for assessors. Rather than asking raters to assess trainees against abstract scales, entrustability scales provide raters with an assessment measure structured around the way evaluators already make day-to-day clinical entrustment decisions, which results in increased reliability. Entrustability scales help raters make assessments based on narrative descriptors that reflect real-world judgments, drawing attention to a trainee's readiness for independent practice rather than his/her deficiencies. These scales fit into milestone measurement both by allowing an individual resident to strive for independence in entrustable professional activities across the entire training period and by allowing residency directors to identify residents experiencing difficulty. Some WBA tools that have begun to use variations of entrustability scales show potential for allowing raters to produce valid judgments. This type of anchor scale should be brought into wider circulation.
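To make the idea of a behaviorally anchored ordinal scale concrete, the sketch below represents one as a small data structure. The level names and anchor wording are illustrative only and are not taken from any published instrument.

```python
from enum import IntEnum

class Entrustment(IntEnum):
    """Hypothetical entrustability scale: ordinal levels with behavioral anchors."""
    OBSERVE_ONLY = 1          # anchor: "I had to do the task myself"
    DIRECT_SUPERVISION = 2    # anchor: "I had to talk the trainee through it"
    INDIRECT_SUPERVISION = 3  # anchor: "I had to prompt the trainee from time to time"
    READY_FOR_PRACTICE = 4    # anchor: "I needed to be available just in case"
    SUPERVISES_OTHERS = 5     # anchor: "The trainee could supervise a junior colleague"

def ready_for_independence(rating: Entrustment) -> bool:
    """Flag ratings at or above the level associated with independent practice."""
    return rating >= Entrustment.READY_FOR_PRACTICE

print(ready_for_independence(Entrustment.INDIRECT_SUPERVISION))  # False
```

The ordinal structure is what lets the same scale serve both purposes named in the abstract: tracking an individual resident's progression toward independence and flagging residents who remain at low levels late in training.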
Collapse
Affiliation(s)
- Janelle Rekman
- J. Rekman is a general surgery resident and master's in health professions education student, University of Ottawa, Ottawa, Ontario, Canada. W. Gofton is an orthopedic surgeon, University of Ottawa, Ottawa, Ontario, Canada. N. Dudek is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada. T. Gofton is a research associate (wissenschaftlicher Mitarbeiter), Department of Philosophy, Eberhard Karls Universität, Tübingen, Germany. S.J. Hamstra is vice president, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
| | | | | | | | | |
Collapse
|
24
|
Pietrantonio F, Orlandini F, Moriconi L, La Regina M. Acute Complex Care Model: An organizational approach for the medical care of hospitalized acute complex patients. Eur J Intern Med 2015; 26:759-65. [PMID: 26365373 DOI: 10.1016/j.ejim.2015.08.011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Revised: 08/17/2015] [Accepted: 08/18/2015] [Indexed: 10/23/2022]
Abstract
BACKGROUND Chronic diseases are the major cause of death (59%) and disability worldwide, representing 46% of the global disease burden. According to the Future Hospital Commission of the Royal College of Physicians, the Medical Division (MD) will be responsible for all hospital medical services, from emergency to specialist wards. The Hospital Acute Care Hub will bring together the clinical areas of the MD that focus on the management of acute medical patients. The Chronic Care Model (CCM) places the patient at the center of the care system, enhancing the community's social and health support, pathways, and structures to keep chronic, frail, poly-pathological people at home or out of the hospital. How to manage such patients in the hospital, however, remains an open problem. Here, we propose an innovative model for the management of the hospital's acute complex patients, which is the hospital counterpart of the CCM. ACUTE COMPLEX CARE MODEL (ACCM) The target population is acutely ill, complex, poly-pathological patients (AICPPs) admitted to hospital and requiring high-technology resources. The mission is to improve the management of medical admissions through pre-defined intra-hospital tracks and a global, multidisciplinary, patient-centered approach. The ACCM leader is an internal medicine specialist (IMS) who summarizes health problems, establishes priorities, and restores health balance in AICPPs. CONCLUSIONS The epidemiological transition, leading to a progressive increase in "chronically unstable" and complex patients needing frequent hospital treatment, inevitably enhances the role of the hospital IMS in the coordination and delivery of care. The ACCM represents a practical response to this epochal change of roles.
Collapse
Affiliation(s)
| | - Francesco Orlandini
- SC Medicina Interna 1, Ospedale S. Andrea, ASL5 "Spezzino", La Spezia, Italy
| | - Luca Moriconi
- Azienda Ospedaliera S. Giovanni-Addolorata, UOC Medicina 1 per l'Urgenza, Roma, Italy
| | - Micaela La Regina
- SC Medicina Interna 1, Ospedale S. Andrea, ASL5 "Spezzino", La Spezia, Italy
| |
Collapse
|
26
|
Criscione-Schreiber LG, Sloane RJ, Hawley J, Jonas BL, O'Rourke KS, Bolster MB. Expert panel consensus on assessment checklists for a rheumatology objective structured clinical examination. Arthritis Care Res (Hoboken) 2015; 67:898-904. [PMID: 25580581 DOI: 10.1002/acr.22543] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2014] [Revised: 10/22/2014] [Accepted: 12/23/2014] [Indexed: 11/08/2022]
Abstract
OBJECTIVE While several regional fellowship groups conduct rheumatology objective structured clinical examinations (ROSCEs), none have been validated for use across programs. We aimed to establish agreement among subspecialty experts regarding checklist items for several ROSCE stations. METHODS We administered a 1-round survey to assess the importance of 173 assessment checklist items for 11 possible ROSCE stations. We e-mailed the survey to 127 rheumatology educators from across the US. Participants rated each item's importance on a 5-point Likert scale (1 = not important to 5 = very important). Consensus for high importance was predefined as a lower bound of the 95% confidence interval ≥4.0. RESULTS Twenty-five individuals (20%) completed the expert panel survey. A total of 133 of the 173 items (77%) met the statistical cutoff for retention. Several items with population means ≥4.0 that did not meet the predetermined definition of consensus were rejected. The percentage of retained items for individual stations ranged from 24% to 100%; all items were retained for the core elements of the patient counseling and radiograph interpretation tasks. Only 24% of items were retained for a rehabilitation medicine station and 60% for a microscope use/synovial fluid analysis station. CONCLUSION This single-round expert panel survey established consensus on 133 items to assess across 11 proposed ROSCE stations. The method used in this study, which can engage diverse geographic representation and employs rigorous statistical methods to establish checklist content agreement, can be applied in any medical field.
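The retention rule in this abstract (keep an item when the lower bound of the 95% confidence interval for its mean importance rating is at least 4.0) is a small, self-contained calculation. The sketch below is a minimal illustration using a t-based confidence interval; the Likert ratings are hypothetical, not the survey's data.

```python
import numpy as np
from scipy import stats

# Hypothetical importance ratings for one checklist item on a 1-5 Likert scale.
ratings = np.array([5, 4, 5, 4, 4, 5, 3, 5, 4, 4], dtype=float)

mean = ratings.mean()
sem = stats.sem(ratings)  # standard error of the mean (ddof=1 by default)

# Two-sided 95% CI for the mean, t distribution with n-1 degrees of freedom.
lower, upper = stats.t.interval(0.95, df=len(ratings) - 1, loc=mean, scale=sem)

retain = lower >= 4.0
print(f"mean={mean:.2f}, 95% CI=({lower:.2f}, {upper:.2f}), retain={retain}")
```

This also shows why an item with a mean of 4.0 or higher can still be rejected, as reported above: a wide interval (small sample or high disagreement) pushes the lower bound below the cutoff.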
Collapse
Affiliation(s)
| | | | | | | | - Kenneth S O'Rourke
- Wake Forest University School of Medicine, Winston-Salem, North Carolina
| | | |
Collapse
|
27
|
Pugh D, Touchie C, Humphrey-Murto S, Wood TJ. The OSCE progress test--Measuring clinical skill development over residency training. MEDICAL TEACHER 2015; 38:168-173. [PMID: 25909896 DOI: 10.3109/0142159x.2015.1029895] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
PURPOSE The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills. METHODS Data from eight administrations of an IM-OSCE were analyzed retrospectively. Data were scaled to a mean of 500 and a standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents' performance progressed over time. RESULTS Residents' total IM-OSCE scores (n = 244) increased over training from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores including only those residents who participated in the IM-OSCE in all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills scores increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97). CONCLUSIONS The IM-OSCE can be used to demonstrate progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
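The scaling step described here (transforming raw scores to a mean of 500 and SD of 100 so that administrations can be compared) amounts to a linear rescaling of z-scores. The sketch below is a minimal illustration; the raw scores are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical raw OSCE scores (e.g., percentages) from one administration.
raw = np.array([62.0, 71.5, 80.0, 55.0, 68.0, 74.5])

z = (raw - raw.mean()) / raw.std(ddof=1)  # standardize within the administration
scaled = 500 + 100 * z                    # rescale to mean 500, SD 100

print(scaled.round(0))
print(f"mean={scaled.mean():.0f}, sd={scaled.std(ddof=1):.0f}")
```

Once each administration is on this common scale, mean scores can be compared across PGY levels as the repeated-measures factor described in the abstract.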
Collapse
|