1
Shehata D, Ghaderi I, Nepomnayshy D. The evolution of surgical skills simulation education: Advanced laparoscopic skills. Surgery 2025;181:109248. PMID: 39952019. DOI: 10.1016/j.surg.2025.109248.
Abstract
The rapid evolution of modern surgical practice has dramatically expanded the technical skills required of surgical trainees and faculty. Today's surgeons must be proficient in a wide range of techniques, from open to robotic surgery. As the demands of the field grow, training programs face increasing pressure to provide comprehensive education while ensuring high standards in skill development and assessment. To meet these challenges, surgical simulation curricula designed with the principles of educational science offer a critical solution, providing a structured and effective framework for teaching and evaluating these competencies. Over the past 2 decades, several proficiency-based curricula have been developed and implemented to establish a minimum standard and reduce inconsistencies in surgical training. The Society of American Gastrointestinal and Endoscopic Surgeons introduced the Fundamentals of Laparoscopic Surgery, a curriculum for fundamental skills in laparoscopic procedures. In addition, the Society of American Gastrointestinal and Endoscopic Surgeons developed the Fundamentals of Endoscopic Surgery and the Fundamental Use of Surgical Energy to address endoscopic and electrosurgery skills. Additional curricula, such as the Fundamentals of Robotic Surgery and Emerging Minimally Invasive Gynecologic Surgery, address skills fundamental to robotic and gynecologic surgery. However, these curricula do not address variability in training at the more advanced levels of skill required of senior residents and fellows. This gap was highlighted in a nationwide survey of fellowship program directors and fellows, which underscored the need for improved training in laparoscopic skills, specifically laparoscopic suturing. To address this need, the Association of Surgical Education launched the Advanced Training in Laparoscopic Suturing curriculum in 2022.
Validity evidence has been established to support broad adoption of this curriculum, and dissemination efforts are underway. This article discusses the need for robust development of surgical skills curricula on the basis of the evolution of clinical surgical practice and provides a brief description of a curriculum development process and validity evidence for 1 such expert-based curriculum. We believe that it can be used as a framework for the development of other expert-based training curricula to ensure that trainees universally achieve the necessary skills to maintain high standards of patient safety and care upon completion of training.
Affiliation(s)
- Dena Shehata
- Department of Surgery, Lahey Hospital and Medical Center, Burlington, MA.
- Iman Ghaderi
- Department of Surgery, University of Arizona, Tucson, AZ. https://twitter.com/ImanGhaderi
- Dmitry Nepomnayshy
- Department of Surgery, Lahey Hospital and Medical Center, Burlington, MA.
2
Gotzmann A, Boulet J, Zhang Y, McCormick J, Wojcik M, Bartman I, Pugh D. Conducting an objective structured clinical examination under COVID-restricted conditions. BMC Med Educ 2024;24:801. PMID: 39061036. PMCID: PMC11282689. DOI: 10.1186/s12909-024-05774-8.
Abstract
BACKGROUND The administration of performance assessments during the coronavirus disease 2019 (COVID-19) pandemic posed many challenges, especially for examinations employed as part of certification and licensure. The National Assessment Collaboration (NAC) Examination, an Objective Structured Clinical Examination (OSCE), was modified during the pandemic. The purpose of this study was to gather evidence to support the reliability and validity of the modified NAC Examination. METHODS The modified NAC Examination was delivered to 2,433 candidates in 2020 and 2021. Cronbach's alpha, decision consistency, and decision accuracy values were calculated. Validity evidence includes comparisons of scores and sub-scores for demographic groups: gender (male vs. female), type of International Medical Graduate (IMG) (Canadians Studying Abroad (CSA) vs. non-CSA), postgraduate training (PGT) (no PGT vs. PGT), and language of examination (English vs. French). Criterion relationships were summarized using correlations within and between the NAC Examination and the Medical Council of Canada Qualifying Examination (MCCQE) Part I scores. RESULTS Reliability estimates were consistent with other OSCEs similar in length and with previous NAC Examination administrations. Both total score and sub-score differences for gender were statistically significant. Total score differences by type of IMG and PGT were not statistically significant, but sub-score differences were statistically significant. Administration language was not statistically significant for either the total scores or sub-scores. Correlations were all statistically significant, with relationships ranging from small or moderate (0.20 to 0.40) to large (> 0.40). CONCLUSIONS The NAC Examination yields reliable total scores and pass/fail decisions.
Expected differences in total scores and sub-scores for defined groups were consistent with previous literature, and internal relationships amongst NAC Examination sub-scores and their external relationships with the MCCQE Part I supported both discriminant and criterion-related validity arguments. Modifications to OSCEs to address health restrictions can be implemented without compromising the overall quality of the assessment. This study outlines some of the validity and reliability analyses for OSCEs that required modifications due to COVID.
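The internal-consistency estimates this study reports follow the standard Cronbach's alpha formula. As a minimal illustration, the sketch below computes alpha from an examinee-by-station score matrix; the function name and toy data are hypothetical and are not drawn from the NAC Examination itself:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an examinee-by-station score matrix.

    scores: 2-D array-like, rows = examinees, columns = OSCE stations.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of stations
    station_vars = scores.var(axis=0, ddof=1)    # per-station score variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return (k / (k - 1.0)) * (1.0 - station_vars.sum() / total_var)

# Perfectly consistent stations yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values near 1 indicate that stations rank candidates consistently; OSCEs of typical length generally report lower alphas than long multiple-choice examinations, which is why comparisons against similar-length OSCEs (as done here) are the relevant benchmark.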
Affiliation(s)
- Andrea Gotzmann
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada.
- John Boulet
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
- Yichi Zhang
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
- Judy McCormick
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
- Mathieu Wojcik
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
- Ilona Bartman
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
- Debra Pugh
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
3
Mao X, Boulet JR, Sandella JM, Oliverio MF, Smith L. A validity study of COMLEX-USA Level 3 with the new test design. J Osteopath Med 2024;124:257-265. PMID: 38498662. DOI: 10.1515/jom-2023-0011.
Abstract
CONTEXT The National Board of Osteopathic Medical Examiners (NBOME) administers the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), a three-level examination designed for licensure for the practice of osteopathic medicine. The examination design for COMLEX-USA Level 3 (L3) was changed in September 2018 to a two-day computer-based examination with two components: a multiple-choice question (MCQ) component with single best answer and a clinical decision-making (CDM) case component with extended multiple-choice (EMC) and short answer (SA) questions. Continued validation of the L3 examination, especially with the new design, is essential for the appropriate interpretation and use of the test scores. OBJECTIVES The purpose of this study is to gather evidence to support the validity of the L3 examination scores under the new design utilizing sources of evidence based on Kane's validity framework. METHODS Kane's validity framework contains four components of evidence to support the validity argument: Scoring, Generalization, Extrapolation, and Implication/Decision. In this study, we gathered data from various sources and conducted analyses to provide evidence that the L3 examination is validly measuring what it is supposed to measure. These include reviewing content coverage of the L3 examination, documenting scoring and reporting processes, estimating the reliability and decision accuracy/consistency of the scores, quantifying associations between the scores from the MCQ and CDM components and between scores from different competency domains of the L3 examination, exploring the relationships between L3 scores and scores from a performance-based assessment that measures related constructs, performing subgroup comparisons, and describing and justifying the criterion-referenced standard setting process. 
The analysis data comprised first-attempt test scores for 8,366 candidates who took the L3 examination between September 2018 and December 2019. The performance-based assessment utilized as a criterion measure in this study is the COMLEX-USA Level 2 Performance Evaluation (L2-PE). RESULTS All assessment forms were built through an automated test assembly (ATA) procedure to maximize parallelism in content coverage and statistical properties across forms. Scoring and reporting follow industry-standard quality-control procedures. The inter-rater reliability of SA ratings and the decision accuracy and decision consistency of pass/fail classifications are all very high. There is a statistically significant positive association between the MCQ and CDM components of the L3 examination. The patterns of associations, both within the L3 subscores and with the L2-PE domain scores, fit with what is being measured. Subgroup comparisons by gender, race, and first language showed expected small differences in mean scores between the subgroups within each category and yielded findings consistent with those described in the literature. The L3 pass/fail standard was established through a defensible criterion-referenced procedure. CONCLUSIONS This study provides additional validity evidence for the L3 examination based on Kane's validity framework. The validity of any measurement must be established through ongoing evaluation of the related evidence. The NBOME will continue to collect evidence to support validity arguments for the COMLEX-USA examination series.
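Subgroup comparisons of the kind described here are commonly summarized with a standardized mean difference. The sketch below shows one conventional way to compute Cohen's d between two groups' scores; the function name and toy data are hypothetical, not NBOME code:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference between two score distributions."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled standard deviation across both groups
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(cohens_d([1, 2, 3], [2, 3, 4]))  # → -1.0
```

Values near 0 correspond to the "expected small differences" between subgroups reported in the study; by a common rule of thumb, |d| below about 0.2 is considered small.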
Affiliation(s)
- Xia Mao
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
- John R Boulet
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
- Jeanne M Sandella
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
- Michael F Oliverio
- Adjunct Clinical Faculty, Departments of Family Practice and OMM, NYIT-COM, North Bellmore, NY, USA
- Larissa Smith
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
4
Sawyer T, Gray MM. Competency-based assessment in neonatal simulation-based training. Semin Perinatol 2023;47:151823. PMID: 37748942. DOI: 10.1016/j.semperi.2023.151823.
Abstract
Simulation is a cornerstone of training in neonatal clinical care, allowing learners to practice skills in a safe and controlled environment. Competency-based assessment provides a systematic approach to evaluating technical and behavioral skills observed in the simulation environment to ensure the learner is prepared to safely perform the skill in a clinical setting. Accurate assessment of competency requires the creation of tools with evidence of validity and reliability. There has been considerable work on the use of competency-based assessment in the field of neonatology. In this chapter, we review neonatal simulation-based training, examine competency-based assessment tools, explore methods to gather evidence of their validity and reliability, and review an evidence-based approach to competency-based assessment using simulation.
Affiliation(s)
- Taylor Sawyer
- Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle Children's Hospital, Seattle, Washington, United States; Neonatal Education and Simulation-based Training (NEST) Program, Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, United States.
- Megan M Gray
- Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle Children's Hospital, Seattle, Washington, United States; Neonatal Education and Simulation-based Training (NEST) Program, Division of Neonatology, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington, United States
5
Wenghofer E, Boulet J. Medical Council of Canada Qualifying Examinations and performance in future practice. Can Med Educ J 2022;13:53-61. PMID: 36091726. PMCID: PMC9441123. DOI: 10.36834/cmej.73770.
Abstract
The purpose of medical licensing examinations is to protect the public from practitioners who do not have adequate knowledge, skills, and abilities to provide acceptable patient care, and therefore evaluating the validity of these examinations is a matter of accountability. Our objective was to discuss the Medical Council of Canada's Qualifying Examinations (MCCQEs) Part I (QE1) and Part II (QE2) in terms of how well they reflect future performance in practice. We examined the supposition that satisfactory performance on the MCCQEs is an important determinant of practice performance and, ultimately, patient outcomes. We examined the literature before the implementation of the QE2 (pre-1992), post QE2 but prior to the implementation of the new Blueprint (1992-2018), and post Blueprint (2018-present). The literature suggests that MCCQE performance is predictive of future physician behaviours, that the relationship between examination performance and outcomes did not attenuate with practice experience, and that associations between examination performance and outcomes made sense clinically. While the evidence suggests the MCC qualifying examinations measure the intended constructs and are predictive of future performance, the validity argument is never complete. As new competency requirements emerge, we will need to develop valid and reliable mechanisms for determining practice readiness in these areas.
Affiliation(s)
- Elizabeth Wenghofer
- School of Kinesiology and Health Sciences, Laurentian University; Division of Human Sciences, Northern Ontario School of Medicine, Ontario, Canada
- John Boulet
- National Board of Osteopathic Medical Examiners (NBOME); Uniformed Services University of the Health Sciences (USUHS), Bethesda, Maryland, USA
6
Nepomnayshy D, Whitledge J, Fitzgibbons S, Nijjar B, Gardner A, Alseidi A, Birkett R, Deal S, Duvra RR, Anton N, Stefanidis D. Advanced laparoscopic skills: Understanding the relationship between simulation-based practice and clinical performance. Am J Surg 2019;218:527-532. DOI: 10.1016/j.amjsurg.2019.01.024.
7
Duffy MC, Ibrahim M, Lachapelle K. Development of a saphenous vein harvest model for simulation-based assessment. J Thorac Cardiovasc Surg 2018;157:1082-1089. PMID: 30195588. DOI: 10.1016/j.jtcvs.2018.07.042.
Abstract
OBJECTIVE There is a need to develop a realistic model of open saphenous vein harvesting for simulation training and assessment. The purpose of this study was to develop a novel simulated model of this procedure and to assess its viability by examining participants' performance and feedback. METHODS A total of 14 participants (cardiac surgeons, residents, students) conducted open saphenous vein harvesting on a portable, noncommercial, simulated vein model (complete with artificial vein, subcutaneous tissue, and skin) within an operating room. Surgical assistance was provided by a cardiac resident. Participants provided feedback through questionnaires and interviews. Technical performance was rated by 2 blinded raters using a global rating scale for operative technical skills. RESULTS Quantitative analyses demonstrated that participants considered the model to be realistic and useful. Analyses of performance ratings indicated that the model could be used as a reliable indicator of skill level, given that raters were able to use performance scores to discriminate among participants according to their level of experience with high accuracy. Participants with a higher level of experience performed significantly better than those with a lower level of experience. Qualitative analyses revealed the model was considered to be most beneficial for learning the procedural steps of vein harvesting and basic surgical skills. CONCLUSIONS Results provide support for the technical fidelity of this model and its ability to identify skill level for assessment of vein harvesting. Future work should examine transfer of surgical skills from simulator to clinical practice to assess its viability for training.
Affiliation(s)
- Melissa C Duffy
- Department of Educational Studies, University of South Carolina, Columbia, SC
- Marina Ibrahim
- Division of Cardiac Surgery, McGill University, Montreal, Quebec, Canada
- Kevin Lachapelle
- Division of Cardiac Surgery, McGill University, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada.
8
Rivière E, Saucier D, Lafleur A, Lacasse M, Chiniara G. Twelve tips for efficient procedural simulation. Med Teach 2018;40:743-751. PMID: 29065750. DOI: 10.1080/0142159x.2017.1391375.
Abstract
Procedural simulation (PS) is increasingly being used worldwide in healthcare for training caregivers in psychomotor competencies. It has been demonstrated to improve learners' confidence and competence in technical procedures, with consequent positive impacts on patient outcomes and safety. Several frameworks can guide healthcare educators in using PS as an educational tool. However, no theory-informed practical framework exists to guide them in including PS in their training programs. We present 12 practical tips for efficient PS training that translates educational concepts from theory to practice, based on the existing literature. In doing this, we aim to help healthcare educators to adequately incorporate and use PS both for optimal learning and for transfer into professional practice.
Affiliation(s)
- Etienne Rivière
- Department of Internal Medicine, Haut-Leveque Hospital, University Hospital Centre of Bordeaux, Pessac, France
- Apprentiss Centre (Simulation Centre), Laval University, Quebec City, Canada
- Centre of Applied Research to Educative Methods (CAREM), University of Bordeaux, Bordeaux, France
- Danielle Saucier
- Department of Family and Emergency Medicine, Laval University, Quebec City, Canada
- Office of Education and Continuing Professional Development (Vice-décanat à la pédagogie et au développement professionnel continu), Laval University, Quebec City, Canada
- Alexandre Lafleur
- Office of Education and Continuing Professional Development (Vice-décanat à la pédagogie et au développement professionnel continu), Laval University, Quebec City, Canada
- Department of Medicine, Laval University, Quebec City, Canada
- Miriam Lacasse
- Office of Education and Continuing Professional Development (Vice-décanat à la pédagogie et au développement professionnel continu), Laval University, Quebec City, Canada
- Department of Medicine, Laval University, Quebec City, Canada
- Gilles Chiniara
- Apprentiss Centre (Simulation Centre), Laval University, Quebec City, Canada
- Department of Anaesthesiology and Intensive Care, Laval University, Quebec City, Canada
9
Evaluation of a Low-Fidelity Surgical Simulator for Large Loop Excision of the Transformation Zone (LLETZ). Simul Healthc 2017;12:304-307. DOI: 10.1097/sih.0000000000000242.
10
Noveanu J, Amsler F, Ummenhofer W, von Wyl T, Zuercher M. Assessment of Simulated Emergency Scenarios: Are Trained Observers Necessary? Prehosp Emerg Care 2017;21:511-524. DOI: 10.1080/10903127.2017.1302528.
11
Schuster C, Stahl B, Murray C, Keleekai NL, Glover K. Development and Testing of a Short Peripheral Intravenous Catheter Insertion Skills Checklist. J Assoc Vasc Access 2016. DOI: 10.1016/j.java.2016.08.003.
Abstract
To date, there is no published, psychometrically validated, short peripheral intravenous catheter (PIVC) insertion skills checklist. Creating a valid, reliable, and generalizable checklist to measure PIVC skill is a key step in assessing baseline competence and skill mastery. Based on recognized standards and best practices, the PIVC Insertion Skills Checklist was developed to measure all the steps necessary for a best practice PIVC insertion. This includes the entire process from reading the prescriber's orders to documentation and, if the first attempt is unsuccessful, a second attempt option. Content validity was established using 3 infusion therapy experts. Evidence in support of response process validity is described. The PIVC Insertion Skills Checklist was used by 8 trained raters to assess the PIVC insertion skills, in a simulated environment, of 63 practicing clinicians working on medical and surgical units in a US teaching hospital. Internal consistency of the PIVC Insertion Skills Checklist was α = 0.84. Individual item intraclass correlation coefficients (ICCs) between rater and gold standard observations ranged from −0.01 to 1.00, and the total score ICC was 0.99 (95% confidence interval, 0.99–0.99). The current study offers validity and reliability evidence to support the use of the PIVC Insertion Skills Checklist to measure PIVC insertion skill of clinicians in a simulated environment.
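The ICCs reported in this study come in several variants. As a hedged illustration, the sketch below computes a one-way random-effects ICC(1,1) from a targets-by-raters matrix; the function name and toy ratings are hypothetical, and a real rater study may use a two-way design instead:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a targets-by-raters matrix."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape                      # n targets, k raters per target
    grand = r.mean()
    row_means = r.mean(axis=1)
    # Between-target and within-target mean squares
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((r - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(icc_oneway([[1, 1], [2, 2], [3, 3]]))  # → 1.0 (perfect agreement)
```

An ICC near 1 (like the total-score ICC of 0.99 above) means nearly all score variance is attributable to true differences between the people being rated rather than to rater disagreement.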
12
Abstract
This review examines the current environment of neonatal procedural learning, describes an updated model of skills training, defines the role of simulation in assessing competency, and discusses potential future directions for simulation-based competency assessment. To maximize impact, simulation-based procedural training programs should follow a standardized, evidence-based approach to designing and evaluating educational activities. Simulation can be used to facilitate the evaluation of competency but must incorporate validated assessment tools to ensure quality and consistency. True competency evaluation cannot be accomplished with simulation alone: competency assessment must also include evaluation of procedural skill during actual clinical care. Future work in this area is needed to measure and track clinically meaningful patient outcomes resulting from simulation-based training, to examine the use of simulation to assist physicians undergoing re-entry to practice, and to examine the use of procedural skills simulation as part of maintenance of competency and lifelong learning.
Affiliation(s)
- Taylor Sawyer
- Division of Neonatology, Department of Pediatrics, Neonatal Education and Simulation-Based Training (NEST) Program, University of Washington School of Medicine and Seattle Children's Hospital, 1959 NE Pacific St, RR451 HSB, Box 356320, Seattle, WA.
- Megan M Gray
- Division of Neonatology, Department of Pediatrics, Neonatal Education and Simulation-Based Training (NEST) Program, University of Washington School of Medicine and Seattle Children's Hospital, 1959 NE Pacific St, RR451 HSB, Box 356320, Seattle, WA