151. Scott SI, Dalsgaard T, Jepsen JV, von Buchwald C, Andersen SAW. Design and validation of a cross-specialty simulation-based training course in basic robotic surgical skills. Int J Med Robot 2020;16:1-10. PMID: 32721072. DOI: 10.1002/rcs.2138.
Abstract
BACKGROUND The aim of this study was to design and validate a cross-specialty basic robotic surgical skills training program on the RobotiX Mentor virtual reality simulator. METHODS A Delphi panel reached consensus on six modules to include in the training program. Validity evidence was collected according to Messick's framework, with three performances in each simulator module by 11 experienced robotic surgeons and 11 residents without robotic surgical experience. RESULTS For five of the six modules, a compound metrics-based score could significantly discriminate between the performances of novices and experienced robotic surgeons. Pass/fail levels were established, with very few novices passing on their first attempt. CONCLUSIONS This validated course can be used for structured simulation-based training of basic robotic surgical skills within a mastery learning framework, in which individual trainees practice each module until they achieve proficiency and can then continue training on other modalities more specific to their specialty.
Affiliation(s)
- Susanne I Scott: Department of Otorhinolaryngology, Head & Neck Surgery and Audiology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Torur Dalsgaard: Department of Gynaecology, Endometriosis Team and Robotic Surgery Section, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Jan Vibjerg Jepsen: Department of Urology, Herlev Hospital, Herlev, Denmark; Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR, Copenhagen, Denmark
- Christian von Buchwald: Department of Otorhinolaryngology, Head & Neck Surgery and Audiology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark
- Steven Arild Wuyts Andersen: Department of Otorhinolaryngology, Head & Neck Surgery and Audiology, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark; Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR, Copenhagen, Denmark
152. Sawyer T. Educational Perspectives: Educational Strategies to Improve Outcomes from Neonatal Resuscitation. Neoreviews 2020;21:e431-e441. PMID: 32611561. DOI: 10.1542/neo.21-7-e431.
Abstract
Since 1987, the Neonatal Resuscitation Program (NRP) course has taught the cognitive, technical, and behavioral skills required to effectively resuscitate newborns. To remain relevant and effective, the NRP course needs to continually evolve and embrace evidence-based educational strategies proven to improve outcomes from resuscitation. In this Educational Perspectives article, 6 educational strategies that can be applied to neonatal resuscitation education are reviewed. These educational strategies include mastery learning and deliberate practice, spaced practice, contextual learning, feedback and debriefing, assessment, and innovative educational strategies. Then, knowledge translation and implementation of these educational strategies through passive and active knowledge translation, change theory, design thinking, performance measurement, deadoption strategies, continuous quality improvement, incentives and penalties, and psychological marketing are explored. Finally, ways to optimize faculty development of NRP instructors, including both initial instructor training and ongoing instructor development, are examined. The goal of this review is to help NRP program developers and instructors use evidence-based educational strategies to improve neonatal resuscitation outcomes.
Affiliation(s)
- Taylor Sawyer: Department of Pediatrics, Division of Neonatology, University of Washington School of Medicine and Seattle Children's Hospital, Seattle, WA
153. Comparing Surgical Experience and Skill Using a High-Fidelity, Total Laparoscopic Hysterectomy Model. Obstet Gynecol 2020;136:97-108. DOI: 10.1097/aog.0000000000003897.
154. A model to measure self-assessed proficiency in electronic medical records: Validation using maturity survey data from Canadian community-based physicians. Int J Med Inform 2020;141:104218. PMID: 32574925. DOI: 10.1016/j.ijmedinf.2020.104218.
Abstract
OBJECTIVE Adoption of electronic medical records (EMRs) does not necessarily translate to proficiency, referred to here as EMR maturity. To realize the full benefit of wide-scale EMR adoption, the focus must shift from adoption to advancing mature use. This calls for validated assessment models so that researchers, health system planners, and digital health developers can better understand what contributes to maturity among physicians. This research aims to validate a measurement model for self-assessed EMR maturity among community-based physicians. METHODS As part of an Ontario government-funded EMR adoption program, the EMR Maturity Model for community-based practices was adapted from a hospital-based EMR maturity model. A survey instrument was developed on the foundation of the new model and revised by experts and stakeholders. Content validity, face validity, and user acceptance were established before survey administration. Internal consistency and construct validity of the model were tested after survey data were collected. Finally, physicians' comments collected via the survey were qualitatively analyzed to provide additional insights that can be applied to refinement of the model and survey. RESULTS As of August 1, 2019, 1588 physicians had completed the survey. Ordinal alpha tests for reliability and content validity yielded an alpha value of 0.86 across all key measures specifically associated with maturity. Among most of these, there was a pattern of weak to moderate significant (p < .0001) positive Spearman inter-correlations. One factor was extracted for items measuring dimensions of maturity, and all factor loadings of the key measures were greater than 0.40. The fit of the one-factor model was moderately adequate. This indicates the model is valid and reliable, with consistency across key measures for measuring one factor: maturity.
CONCLUSIONS This is the first known validated model published in English that measures EMR maturity among community-based physicians. While the model is shown to be statistically valid and reliable, and the qualitative analysis supports this, there is room for improvement. Both the statistical analysis and portions of the qualitative analysis suggest areas of exploration to strengthen the model and survey. Future efforts will include refining the survey to improve the user interface and accrue further data, as the sample to date is insufficient for generalizability.
155. Nayahangan LJ, Konge L, Eiberg J. An addition to the systematic review of simulation in open abdominal aortic aneurysm repair. J Vasc Surg 2020;72:381-382. PMID: 32553408. DOI: 10.1016/j.jvs.2020.02.031.
Affiliation(s)
- Lars Konge: Faculty of Medicine and the Health Sciences, Copenhagen, Denmark
- Jonas Eiberg: Department of Vascular Surgery, Rigshospitalet, Copenhagen, Denmark
156. Oviedo-Peñata CA, Tapia-Araya AE, Lemos JD, Riaño-Benavides C, Case JB, Maldonado-Estrada JG. Validation of Training and Acquisition of Surgical Skills in Veterinary Laparoscopic Surgery: A Review. Front Vet Sci 2020;7:306. PMID: 32582781. PMCID: PMC7283875. DOI: 10.3389/fvets.2020.00306.
Abstract
At present, veterinary laparoscopic surgery training lacks experiences that provide a controlled and safe environment in which surgeons can practice specific techniques while receiving expert feedback. Surgical skills acquired using simulators must be certified and transferable to the operating room. Most models for practicing laparoscopic skills in veterinary minimally invasive surgery are general task trainers and consist of boxes (simulators) designed for training in human surgery. These simulators exhibit several limitations, including anatomic and procedural differences between species, as well as general psychomotor training rather than in vivo skill recreation. In this paper, we review the existing methods of training, evaluation, and validation of technical skills in veterinary laparoscopic surgery. Content includes global and specific scales, and the conditions a structured curriculum should meet to improve the performance of novice surgeons during and after training. A focus on trainee-specific assessment and tailored technical instruction should inform training programs. We provide a comprehensive analysis of current theories and concepts related to the evaluation and validation of simulators for laparoscopic surgery training in small animals. We also highlight the need to develop new training models and complementary evaluation scales for the validation of training and the acquisition of basic and advanced skills in veterinary laparoscopic surgery.
Affiliation(s)
- Carlos A Oviedo-Peñata: Tropical Animal Production Research Group, Faculty of Veterinary Medicine and Zootechny, University of Cordoba, Monteria, Colombia; Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
- Juan D Lemos: Bioinstrumentation and Clinical Engineering Research Group (GIBIC), Bioengineering Department, Engineering Faculty, Universidad de Antioquia, Medellín, Colombia
- Carlos Riaño-Benavides: Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
- J Brad Case: Department of Small Animal Clinical Sciences, College of Veterinary Medicine, University of Florida, Gainesville, FL, United States
- Juan G Maldonado-Estrada: Surgery and Theriogenology Branch OHVRI-Group, College of Veterinary Medicine, University of Antioquia, Medellin, Colombia
157. Design and validation of a low-cost, high-fidelity model for robotic pyeloplasty simulation training. J Pediatr Urol 2020;16:332-339. PMID: 32173325. DOI: 10.1016/j.jpurol.2020.02.003.
Abstract
INTRODUCTION/BACKGROUND Owing to restrictions in operative experiences, urology residents can no longer rely solely on 'hands-on' operative time to master their surgical skills by the end of residency. Simulation training could help residents master basic surgical skills and the steps of a procedure to maximize time in the operating room. However, simulators can be expensive or tedious to set up, limiting their availability to residents and training programs. OBJECTIVE The authors sought to develop and validate an inexpensive, high-fidelity training model for robotic pyeloplasty. STUDY DESIGN Pyeloplasty models were created using Dragon Skin® FX-Pro tissue-mimicking silicone cast over 3-dimensional molds. Urology faculty and trainees completed a demographic questionnaire. The participants viewed a brief instructional video and then independently performed robotic dismembered pyeloplasty on the model. Acceptability and content validity were evaluated via post-task evaluation of the model. Construct validity was evaluated by comparing procedure completion time, the Global Evaluative Assessment of Robotic Skills (GEARS) score, blinded subjective physical evaluation of repair quality (1-10 scale), and flow rate between experts and novices. RESULTS In total, 5 urology faculty, 6 fellows, and 14 residents participated. The median robotic console experience among faculty, fellows, and residents was 8 years (interquartile range [IQR] = 6-11), 3.5 years (IQR = 2-4 years), and 0 years (IQR = 0-0.5 years), respectively. The median procedure completion time was 29 min (IQR = 26-40 min), and the median flow rate was 1.11 mL/s (IQR = 0-1.34 mL/s). All faculty had flow rates >1.25 mL/s and procedure times <30 min compared with 2 of 6 fellows and none of the residents (P < 0.001). All faculty, half of the fellows, and none of the residents achieved a GEARS score ≥20, with a median resident score of 12.5 (IQR = 8-13) (P < 0.001).
For repair quality, all faculty scored ≥9 (out of 10), all fellows scored ≥8, and the median score among residents was 6 (IQR = 2-6) (P < 0.001). The material cost was $1.32/model, and the average production time was 0.12 person-hours/model. DISCUSSION AND CONCLUSION This low-cost pyeloplasty model exhibits acceptability and content validity. Construct validity is supported by significant correlation between participant expertise and simulator performance across multiple assessment domains. The model has excellent potential to be used as a training tool in urology and allows for repetitive practice of pyeloplasty skills before live cases.
158. Peters S, Clarebout G, Aertgeerts B, Michels N, Pype P, Stammen L, Roex A. Provoking a Conversation Around Students' and Supervisors' Expectations Regarding Workplace Learning. Teach Learn Med 2020;32:282-293. PMID: 31880173. DOI: 10.1080/10401334.2019.1704764.
Abstract
Construct: This study presents a tool that can facilitate a conversation about students' and supervisors' expectations concerning responsibilities during workplace learning. Background: It is often unclear who is responsible for facilitating learning opportunities in the workplace. To increase learning opportunities, it is important that students' and supervisors' expectations are discussed and aligned. This study collected and interpreted validity evidence for a tool that aims to provoke such a conversation. Approach: Three types of validity evidence were collected: response process, content, and consequences evidence. Educational leaders, medical teachers, and students of four medical schools were involved. The data collection consisted of cognitive interviews, a modified Delphi approach (with three rounds of inquiry), completed tools, and narrative comments. Findings: This study showed that the expectations of most students and supervisors were not initially aligned. The conversation, for which the tool aims to be a catalyst, facilitated better alignment of expectations about responsibilities during workplace learning. Moreover, the students' perceived degree of consensus and satisfaction after the conversation were very high. Conclusions: This study underlined the relevance and usefulness of a tool that facilitates conversation about expectations regarding responsibilities, potentially enhancing learning opportunities in the workplace.
Affiliation(s)
- Sanne Peters: Academic Center for General Practice, KU Leuven, Leuven, Belgium
- Geraldine Clarebout: School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; Center for Instructional Psychology and Technology, KU Leuven, Leuven, Belgium
- Bert Aertgeerts: Academic Center for General Practice, KU Leuven, Leuven, Belgium
- Nele Michels: Center for General Practice, University of Antwerp, Antwerp, Belgium
- Peter Pype: Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
- Lorette Stammen: School of Health Professions Education, Maastricht University, Maastricht, the Netherlands
- Ann Roex: Department of Clinical Sciences, Faculty of Medicine & Pharmacy, VUB, Brussels, Belgium
159. Blanié A, Amorim MA, Meffert A, Perrot C, Dondelli L, Benhamou D. Assessing validity evidence for a serious game dedicated to patient clinical deterioration and communication. Adv Simul (Lond) 2020;5:4. PMID: 32514382. PMCID: PMC7251894. DOI: 10.1186/s41077-020-00123-3.
Abstract
Background A serious game (SG) is a useful tool for nurse training. The objective of this study was to assess validity evidence for a new SG designed to improve nurses' ability to detect patient clinical deterioration. Methods The SG (LabForGames Warning) was developed through interaction between clinical and pedagogical experts and one developer. For the game study, consenting nurses were divided into three groups: nursing students (pre-graduate) (group S), recently graduated nurses (graduated < 2 years before the study) (group R), and expert nurses (graduated > 4 years before the study and working in an ICU) (group E). Each volunteer played three cases of the game (haemorrhage, brain trauma and obstructed intestinal tract). Validity evidence was assessed following Messick's framework: content, response process (questionnaire, observational analysis), internal structure, relations to other variables (by scoring each case and measuring playing time) and consequences (a posteriori analysis). Results Content validity was supported by the game design produced by clinical, pedagogical and interprofessional experts in accordance with the French nurse training curriculum, a literature review and pilot testing. Seventy-one nurses participated in the study: S (n = 25), R (n = 25) and E (n = 21). The content validity of all three cases was highly rated by group E. The response process evidence was supported by good security control. There was no significant difference in the three groups' high rating of the game's realism, satisfaction and educational value. All participants stated that their knowledge of the different steps of the clinical reasoning process had improved. Regarding internal structure, the factor analysis showed a common source of variance between the steps of the clinical reasoning process and the communication or situational awareness errors made predominantly by students.
No statistical difference was observed between groups regarding scores and playing time. A posteriori analysis of the results of final examinations assessing study-related topics found no significant difference between group S participants and students who did not participate in the study. Conclusion While it appears that this SG cannot be used for summative assessment (score validity was not demonstrated), it is positively valued as an educational tool. Trial registration: ClinicalTrials.gov NCT03092440.
Affiliation(s)
- Antonia Blanié: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France; CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
- Michel-Ange Amorim: CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
- Arnaud Meffert: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France
- Dan Benhamou: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France; CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
160. IJgosse W, van Goor H, Rosman C, Luursema JM. Construct Validity of a Serious Game for Laparoscopic Skills Training: Validation Study. JMIR Serious Games 2020;8:e17222. PMID: 32379051. PMCID: PMC7243133. DOI: 10.2196/17222.
Abstract
Background Surgical residents underutilize opportunities for traditional laparoscopic simulation training. Serious gaming may increase residents’ motivation to practice laparoscopic skills. However, little is known about the effectiveness of serious gaming for laparoscopic skills training. Objective The aim of this study was to establish construct validity for the laparoscopic serious game Underground. Methods All study participants completed 2 levels of Underground. Performance for 2 novel variables (time and error) was compared between novices (n=65, prior experience <10 laparoscopic procedures), intermediates (n=26, prior experience 10-100 laparoscopic procedures), and experts (n=20, prior experience >100 laparoscopic procedures) using analysis of covariance. We corrected for gender and video game experience. Results Controlling for gender and video game experience, the effects of prior laparoscopic experience on the time variable differed significantly (F2,106=4.77, P=.01). Both experts and intermediates outperformed novices in terms of task completion speed; experts did not outperform intermediates. A similar trend was seen for the rate of gameplay errors. Both gender (F1,106=14.42, P<.001 in favor of men) and prior video game experience (F1,106=5.20, P=.03 in favor of experienced gamers) modulated the time variable. Conclusions We established construct validity for the laparoscopic serious game Underground. Serious gaming may aid laparoscopic skills development. Previous gaming experience and gender also influenced Underground performance. The in-game performance metrics were not suitable for statistical evaluation. To unlock the full potential of serious gaming for training, a more formal approach to performance metric development is needed.
Affiliation(s)
- Wouter IJgosse: Department of Surgery, Radboud University Medical Center, Nijmegen, Netherlands
- Harry van Goor: Department of Surgery, Radboud University Medical Center, Nijmegen, Netherlands
- Camiel Rosman: Department of Surgery, Radboud University Medical Center, Nijmegen, Netherlands
161. Law BHY, Cheung PY, van Os S, Fray C, Schmölzer GM. Effect of monitor positioning on visual attention and situation awareness during neonatal resuscitation: a randomised simulation study. Arch Dis Child Fetal Neonatal Ed 2020;105:285-291. PMID: 31375503. DOI: 10.1136/archdischild-2019-316992.
Abstract
OBJECTIVES To compare situation awareness (SA), visual attention (VA) and protocol adherence in simulated neonatal resuscitations using two different monitor positions. DESIGN Randomised controlled simulation study. SETTING Simulation lab at the Royal Alexandra Hospital, Edmonton, Canada. PARTICIPANTS Healthcare providers (HCPs) with Neonatal Resuscitation Program (NRP) certification within the last 2 years and trained in neonatal endotracheal intubation. INTERVENTION HCPs were randomised to either central (eye-level on the radiant warmer) or peripheral (above eye-level, wall-mounted) monitor positions. Each led a complex resuscitation with a high-fidelity mannequin and a standardised assistant. To measure SA, the situation awareness global assessment tool (SAGAT) was used: simulations were paused at three predetermined points, with five questions asked at each pause. Videos were analysed for SAGAT and adherence to an NRP checklist. MAIN OUTCOME MEASURE The main outcome was SA as measured by composite SAGAT score. Secondary outcomes included VA and adherence to the NRP checklist. RESULTS Thirty simulations were performed; 29 were completed per protocol and analysed. Twenty-two eye-tracking recordings were of sufficient quality and analysed. Median composite SAGAT was 11.5/15 central versus 11/15 peripheral, p=0.56. Checklist scores were 46/50 central versus 46/50 peripheral, p=0.75. Most VA was directed at the mannequin (30.6% central vs 34.1% peripheral, p=0.76) and the monitor (28.7% central vs 20.5% peripheral, p=0.06). CONCLUSIONS Simulation, SAGAT and eye-tracking can be used to evaluate human factors of neonatal resuscitation. During simulated neonatal resuscitation, monitor position did not affect SA, VA or protocol adherence.
Affiliation(s)
- Brenda Hiu Yan Law: Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada; Centre for the Studies of Asphyxia and Resuscitation, Royal Alexandra Hospital, Edmonton, Alberta, Canada
- Po-Yin Cheung: Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada; Centre for the Studies of Asphyxia and Resuscitation, Royal Alexandra Hospital, Edmonton, Alberta, Canada
- Sylvia van Os: Centre for the Studies of Asphyxia and Resuscitation, Royal Alexandra Hospital, Edmonton, Alberta, Canada
- Caroline Fray: Centre for the Studies of Asphyxia and Resuscitation, Royal Alexandra Hospital, Edmonton, Alberta, Canada
- Georg M Schmölzer: Department of Pediatrics, University of Alberta, Edmonton, Alberta, Canada; Centre for the Studies of Asphyxia and Resuscitation, Royal Alexandra Hospital, Edmonton, Alberta, Canada
162. Bhanji F, Miller G, Cheung WJ, Puligandla PS, Winthrop A, Baird R, Davies D, Lopushinsky SR, Webber EM. The future is here! Pediatric surgery and the move to the Royal College of Physicians and Surgeons of Canada's Competence by Design. J Pediatr Surg 2020;55:796-799. PMID: 32085917. DOI: 10.1016/j.jpedsurg.2020.01.031.
Abstract
This interactive session was held at the 51st Annual Meeting of the Canadian Association of Pediatric Surgeons (CAPS) in preparation for the transition of Pediatric Surgery training in Canada to Competence by Design, a competency-based medical education (CBME) model of residency training developed by the Royal College of Physicians and Surgeons of Canada.
Affiliation(s)
- Farhan Bhanji: Royal College of Physicians and Surgeons of Canada; Professor of Pediatrics, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Grant Miller: University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Warren J Cheung: Department of Emergency Medicine, University of Ottawa; The Ottawa Hospital, Ottawa, Ontario, Canada
- Pramod S Puligandla: The Harvey E. Beardmore Division of Pediatric Surgery, Department of Pediatric Surgery, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Andrea Winthrop: Queen's University School of Medicine, Kingston, Ontario, Canada
- Robert Baird: University of British Columbia, British Columbia Children's Hospital, Vancouver, British Columbia, Canada
- Dafydd Davies: Faculty of Medicine, Dalhousie University, IWK Health Centre, Dartmouth, Nova Scotia, Canada
- Eric M Webber: Queen's University School of Medicine, Kingston, Ontario, Canada
163. Usero-Pérez MDC, Jiménez-Rodríguez ML, González-Aguña A, González-Alonso V, Orbañanos-Peiro L, Santamaría-García JM, Gómez-González JL. Validation of an evaluation instrument for responders in tactical casualty care simulations. Rev Lat Am Enfermagem 2020;28:e3251. PMID: 32321042. PMCID: PMC7164920. DOI: 10.1590/1518-8345.3052.3251.
Abstract
OBJECTIVE To construct and validate a tool for the evaluation of responders in tactical casualty care simulations. METHOD Three rubrics for the application of a tourniquet, an emergency bandage, and haemostatic agents recommended by the Hartford Consensus were developed and validated. Validity and reliability were studied. Validation was performed by 4 experts in the field and 36 nursing participants who were selected through convenience sampling. Three rubrics with 8 items were evaluated (except for the application of an emergency bandage, for which 7 items were evaluated). Each simulation was evaluated by 3 experts. RESULTS An excellent score was obtained for the correlation index for the 3 simulations and the 2 levels that were evaluated (competent and expert). The mean score for the application of a tourniquet was 0.897, the mean score for the application of an emergency bandage was 0.982, and the mean score for the application of topical haemostats was 0.805. CONCLUSION This instrument for the evaluation of nurses in tactical casualty care simulations is considered useful, valid, and reliable for training in a prehospital setting, for both professionals who lack experience in tactical casualty care and those who are considered experts.
Affiliation(s)
- Alexandra González-Aguña: Universidad de Alcalá de Henares, Facultad de Ciencias de la Computación, Alcalá de Henares, Madrid, Spain
- Jose María Santamaría-García: Universidad de Alcalá de Henares, Facultad de Ciencias de la Computación, Alcalá de Henares, Madrid, Spain
- Jorge Luís Gómez-González: Universidad de Alcalá de Henares, Facultad de Ciencias de la Computación, Alcalá de Henares, Madrid, Spain
164. Roadmap for Developing Complex Virtual Reality Simulation Scenarios: Subpial Neurosurgical Tumor Resection Model. World Neurosurg 2020;139:e220-e229. DOI: 10.1016/j.wneu.2020.03.187.
Abstract
BACKGROUND Advancement of current virtual reality (VR) surgical simulation technologies is integral to improving the available armamentarium of surgical skill education, especially in high-risk surgical specialties. Fields such as neurosurgery are beginning to explore the use of VR simulation in the assessment and training of psychomotor skills. An important limitation of available VR simulation technologies is that their scenarios lack the complexity needed to replicate the visual and haptic realities of complex neurosurgical procedures. There is therefore a need to create more realistic and complex scenarios, with the appropriate visual and haptic realities, to maximize the potential of VR technology. METHODS We outline a roadmap for creating complex VR neurosurgical simulation scenarios using a step-wise description of our team's subpial tumor resection project as a model. RESULTS The creation of complex neurosurgical simulations involves integrating multiple modules into a scenario-building roadmap. The components of each module are described, outlining the important stages in the process of complex VR simulation creation. CONCLUSIONS Our roadmap, a stepwise approach to the creation of complex VR-simulated neurosurgical procedures, may also serve as a guide for developing VR scenarios in a variety of surgical fields. The generation of new complex VR-simulated neurosurgical procedures, by surgeons for surgeons with the help of computer scientists and engineers, may improve the assessment and training of residents and ultimately improve patient care.
Collapse
|
165
|
Kwan C, Pusic M, Pecaric M, Weerdenburg K, Tessaro M, Boutis K. The Variable Journey in Learning to Interpret Pediatric Point-of-care Ultrasound Images: A Multicenter Prospective Cohort Study. AEM EDUCATION AND TRAINING 2020; 4:111-122. [PMID: 32313857 PMCID: PMC7163207 DOI: 10.1002/aet2.10375] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Revised: 06/17/2019] [Accepted: 06/20/2019] [Indexed: 06/01/2023]
Abstract
OBJECTIVES To complement bedside learning of point-of-care ultrasound (POCUS), we developed an online learning assessment platform for the visual interpretation component of this skill. This study examined the amount and rate of skill acquisition in POCUS image interpretation in a cohort of pediatric emergency medicine (PEM) physician learners. METHODS This was a multicenter prospective cohort study. PEM physicians learned POCUS using a computer-based image repository and learning assessment system that allowed participants to deliberately practice image interpretation of 400 images from four pediatric POCUS applications (soft tissue, lung, cardiac, and focused assessment with sonography for trauma [FAST]). Participants completed at least one application (100 cases) over a 4-week period. RESULTS We enrolled 172 PEM physicians (114 attendings, 65 fellows). The increase in accuracy from the initial to final 25 cases was 11.6%, 9.8%, 7.4%, and 8.6% for soft tissue, lung, cardiac, and FAST, respectively. For all applications, the average learners (50th percentile) required 0 to 45, 25 to 97, 66 to 175, and 141 to 290 cases to reach 80, 85, 90, and 95% accuracy, respectively. The least efficient (95th percentile) learners required 60 to 288, 109 to 456, 160 to 666, and 243 to 1040 cases to reach these same accuracy benchmarks. Generally, the soft tissue application required participants to complete the fewest cases to reach a given proficiency level, while the cardiac application required the most. CONCLUSIONS Deliberate practice of pediatric POCUS image cases using an online learning and assessment platform may lead to skill improvement in POCUS image interpretation. Importantly, there was a highly variable rate of achievement across learners and applications. These data inform our understanding of POCUS image interpretation skill development and could complement bedside learning and performance assessments.
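The cases-to-benchmark percentiles reported above can be illustrated with a rolling-accuracy learning-curve sketch. Everything below is hypothetical (simulated learners, a 25-case rolling window, a 90% benchmark); it is not the study's learner model or data.

```python
import numpy as np

def cases_to_reach(responses, threshold, window=25):
    """First case count at which rolling accuracy over `window`
    consecutive cases reaches `threshold`; None if never reached."""
    for i in range(window, len(responses) + 1):
        if responses[i - window:i].mean() >= threshold:
            return i
    return None

rng = np.random.default_rng(42)

# Hypothetical cohort: per-case probability of a correct interpretation
# rises from ~0.60 toward ~0.97 at a learner-specific rate.
n_learners, n_cases = 200, 400
results = []
for _ in range(n_learners):
    rate = rng.uniform(0.005, 0.05)      # learning speed varies widely
    p = 0.97 - 0.37 * np.exp(-rate * np.arange(n_cases))
    correct = rng.random(n_cases) < p    # simulated case responses
    results.append(cases_to_reach(correct, threshold=0.90))

reached = np.array([r for r in results if r is not None])
p50, p95 = np.percentile(reached, [50, 95])
print(f"median learner: {p50:.0f} cases; least efficient (95th pct): {p95:.0f} cases")
```

The spread between the 50th and 95th percentiles in such a simulation mirrors the study's central observation: competency arrives at very different case counts for different learners.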
Collapse
Affiliation(s)
- Charisse Kwan
- From the Division of Pediatric Emergency MedicineDepartment of PediatricsHospital for Sick Children and University of TorontoTorontoOntarioCanada
| | - Martin Pusic
- Department of Emergency Medicine and Division of Learning AnalyticsNYU School of MedicineNew YorkNY
| | | | - Kirstin Weerdenburg
- Department of Emergency MedicineIWK Health Centre and Dalhousie UniversityHalifaxNova ScotiaCanada
| | - Mark Tessaro
- From the Division of Pediatric Emergency MedicineDepartment of PediatricsHospital for Sick Children and University of TorontoTorontoOntarioCanada
| | - Kathy Boutis
- From the Division of Pediatric Emergency MedicineDepartment of PediatricsHospital for Sick Children and University of TorontoTorontoOntarioCanada
| |
Collapse
|
166
|
|
167
|
Abstract
Assessment of endoscopist competence is an increasingly important component of colonoscopy quality assurance. In this study from the Joint Advisory Group on Gastrointestinal Endoscopy, validity evidence is provided for the use of the Direct Observation of Procedural Skills assessment tool in the formative setting during training. In this national UK dataset, overall colonoscopy competence was typically achieved after 200-249 procedures, although certain complex procedural skills ("proactive problem solving" and "loop management") had not reached the threshold for competence even after 300 procedures. These data will help inform the development and/or refinement of certification policies and practices in jurisdictions around the world.
Collapse
|
168
|
Colonoscopy Direct Observation of Procedural Skills Assessment Tool for Evaluating Competency Development During Training. Am J Gastroenterol 2020; 115:234-243. [PMID: 31738285 DOI: 10.14309/ajg.0000000000000426] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
INTRODUCTION Formative colonoscopy direct observation of procedural skills (DOPS) assessments were updated in 2016 and incorporated into UK training but lack validity evidence. We aimed to appraise the validity of DOPS assessments, benchmark performance, and evaluate competency development during training in diagnostic colonoscopy. METHODS This prospective national study identified colonoscopy DOPS submitted over an 18-month period to the UK training e-portfolio. Generalizability analyses were conducted to evaluate internal structure validity and reliability. Benchmarking was performed using receiver operating characteristic analyses. Learning curves for DOPS items and domains were studied, and multivariable analyses were performed to identify predictors of DOPS competency. RESULTS Across 279 training units, 10,749 DOPS submitted for 1,199 trainees were analyzed. The acceptable reliability threshold (G > 0.70) was achieved with 3 assessors performing 2 DOPS each. DOPS competency rates correlated with the unassisted cecal intubation rate (rho 0.404, P < 0.001). Demonstrating competency in 90% of assessed items provided optimal sensitivity (90.2%) and specificity (87.2%) for benchmarking overall DOPS competence. This threshold was attained in the following order: "preprocedure" (50-99 procedures), "endoscopic nontechnical skills" and "postprocedure" (150-199), "management" (200-249), and "procedure" (250-299) domains. At the item level, competency in "proactive problem solving" (rho 0.787) and "loop management" (rho 0.780) correlated most strongly with the overall DOPS rating (P < 0.001) and was the last to develop. Lifetime procedure count, DOPS count, trainer specialty, easier case difficulty, and higher cecal intubation rate were significant multivariable predictors of DOPS competence.
DISCUSSION This study establishes milestones for competency acquisition during colonoscopy training and provides novel validity and reliability evidence to support colonoscopy DOPS as a competency assessment tool.
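Benchmarking a competency cutoff for optimal sensitivity and specificity, as described above, is commonly done by maximising Youden's J on an ROC analysis. A minimal sketch on invented assessment data (the scores and labels below are illustrative, not the study's):

```python
import numpy as np

def youden_threshold(scores, is_competent):
    """Pick the cutoff on `scores` (e.g. proportion of DOPS items rated
    competent) that maximises Youden's J = sensitivity + specificity - 1."""
    scores = np.asarray(scores, dtype=float)
    is_competent = np.asarray(is_competent, dtype=bool)
    best = (None, -1.0, None, None)
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & is_competent).sum() / is_competent.sum()
        spec = (~pred & ~is_competent).sum() / (~is_competent).sum()
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j, sens, spec)
    return best

# Hypothetical assessments: fraction of items competent vs overall rating.
frac_items = [0.55, 0.65, 0.70, 0.80, 0.85, 0.90, 0.90, 0.95, 1.00, 1.00]
overall = [False, False, False, False, True, True, True, True, True, True]
cutoff, j, sens, spec = youden_threshold(frac_items, overall)
print(f"optimal cutoff: {cutoff:.2f} (sens {sens:.0%}, spec {spec:.0%})")
```

On real data the classes overlap, so the chosen cutoff trades sensitivity against specificity rather than separating the groups perfectly as in this toy example.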
Collapse
|
169
|
Yiasemidou M, Glassman D, Khan K, Downing J, Sivakumar R, Fawole A, Biyani CS. Validation of a cost-effective appendicectomy model for surgical training. Scott Med J 2020; 65:46-51. [PMID: 31959075 DOI: 10.1177/0036933019900340] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
BACKGROUND Appendicitis is a commonly occurring condition worldwide. The gold standard treatment is appendicectomy. Although training models are commercially available for this procedure, they are often associated with high cost. Here we present a cost-effective model. AIM To establish construct validity of a cost-effective laparoscopic appendicectomy simulation model. METHODS Three groups of surgeons were recruited: novices (n = 31), surgeons of intermediate expertise (n = 13), and experts (n = 5), who were asked to perform a simulated laparoscopic appendicectomy using the new model. Their performance was assessed by a faculty member and compared between the three groups using a validated scoring system (Global Operative Assessment of Laparoscopic Skills [GOALS] score). RESULTS One-way ANOVA showed a significant difference in task performance between groups (p < 0.0001). Post-hoc comparisons after the application of Bonferroni correction (statistically significant p value <0.017) demonstrated a significant difference in performance between all groups for all GOALS categories as well as the total score. Effect size calculations showed that experience level had a moderate (Eta-squared >0.5 and <0.8) to large (>0.8) impact on the performance of the simulated procedure. CONCLUSION The model described in this study is cost-effective, valid, and can adequately simulate appendicectomy. The authors recommend inclusion of this model in postgraduate surgical training.
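The analysis named above (one-way ANOVA with eta-squared as the effect size) has a compact form. A minimal sketch with hypothetical GOALS totals; the data and group sizes are invented for illustration, not taken from the study:

```python
import numpy as np

def one_way_anova(*groups):
    """F statistic and eta-squared (SS_between / SS_total) for k groups."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    F = (ss_between / df_b) / (ss_within / df_w)
    eta_sq = ss_between / (ss_between + ss_within)
    return F, eta_sq

# Hypothetical GOALS totals for novice / intermediate / expert groups.
novice = np.array([10, 12, 11, 13, 12, 11])
intermediate = np.array([16, 17, 15, 18, 16])
expert = np.array([22, 23, 21, 24])
F, eta_sq = one_way_anova(novice, intermediate, expert)
print(f"F = {F:.1f}, eta-squared = {eta_sq:.2f}")
```

Eta-squared here is simply the share of total score variance explained by group membership, which is why it serves as the effect size for experience level.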
Collapse
Affiliation(s)
- Marina Yiasemidou
- Honorary Research Fellow, Leeds Institute of Biomedical and Clinical Sciences, University of Leeds, St. James University Hospital, Leeds, UK.,Specialty Registrar Colorectal Surgery, Mid Yorkshire NHS Trust, West Yorkshire, UK
| | - Daniel Glassman
- TIG Oncoplastic Fellow Breast Surgery, York Teaching Hospital, York, UK
| | - Khalid Khan
- Registrar Colorectal Surgery, Hull and East Riding NHS Trust, Hull, UK
| | - Justine Downing
- Specialty Registrar Breast Surgery, Barnsley District General Hospital, Barnsley, UK
| | | | - Adeshina Fawole
- Consultant Colorectal Surgeon, Mid Yorkshire NHS Trust, West Yorkshire, UK
| | | |
Collapse
|
170
|
Buek J. What Is New in Validation and Simulation Training?: Best Articles From the Past Year. Obstet Gynecol 2019; 134:1358-1360. [PMID: 31764750 DOI: 10.1097/aog.0000000000003592] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
This month we focus on current research in validation and simulation training. Dr. Buek discusses four recent publications, each concluding with a "bottom line" take-home message. A complete reference for each can be found on this page along with direct links to the abstracts.
Collapse
Affiliation(s)
- John Buek
- Dr. Buek is from the Department of Obstetrics and Gynecology at the Medstar Washington Hospital Center, Washington, DC;
| |
Collapse
|
171
|
Braun LT, Lenzer B, Fischer MR, Schmidmaier R. Complexity of clinical cases in simulated learning environments: proposal for a scoring system. GMS JOURNAL FOR MEDICAL EDUCATION 2019; 36:Doc80. [PMID: 31844652 PMCID: PMC6905356 DOI: 10.3205/zma001288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Figures] [Subscribe] [Scholar Register] [Received: 04/15/2018] [Revised: 11/24/2018] [Accepted: 01/22/2019] [Indexed: 06/10/2023]
Affiliation(s)
- Leah Theresa Braun
- Ludwig-Maximilians-University (LMU) Munich, Klinikum der Universität München, Medizinische Klinik und Poliklinik IV, Munich, Germany
| | - Benedikt Lenzer
- Ludwig-Maximilians-University (LMU) Munich, Klinikum der Universität München, Institut für Didaktik und Ausbildungsforschung in der Medizin, Munich, Germany
| | - Martin R. Fischer
- Ludwig-Maximilians-University (LMU) Munich, Klinikum der Universität München, Institut für Didaktik und Ausbildungsforschung in der Medizin, Munich, Germany
| | - Ralf Schmidmaier
- Ludwig-Maximilians-University (LMU) Munich, Klinikum der Universität München, Medizinische Klinik und Poliklinik IV, Munich, Germany
| |
Collapse
|
172
|
Breitkopf DM, Green IC, Hopkins MR, Torbenson VE, Camp CL, Turner NS. Use of Asynchronous Video Interviews for Selecting Obstetrics and Gynecology Residents. Obstet Gynecol 2019; 134 Suppl 1:9S-15S. [DOI: 10.1097/aog.0000000000003432] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
173
|
Cheng A, Nadkarni VM, Mancini MB, Hunt EA, Sinz EH, Merchant RM, Donoghue A, Duff JP, Eppich W, Auerbach M, Bigham BL, Blewer AL, Chan PS, Bhanji F. Resuscitation Education Science: Educational Strategies to Improve Outcomes From Cardiac Arrest: A Scientific Statement From the American Heart Association. Circulation 2019; 138:e82-e122. [PMID: 29930020 DOI: 10.1161/cir.0000000000000583] [Citation(s) in RCA: 201] [Impact Index Per Article: 33.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The formula for survival in resuscitation describes educational efficiency and local implementation as key determinants in survival after cardiac arrest. Current educational offerings in the form of standardized online and face-to-face courses are falling short, with providers demonstrating a decay of skills over time. This translates to suboptimal clinical care and poor survival outcomes from cardiac arrest. In many institutions, guidelines taught in courses are not thoughtfully implemented in the clinical environment. A current synthesis of the evidence supporting best educational and knowledge translation strategies in resuscitation is lacking. In this American Heart Association scientific statement, we provide a review of the literature describing key elements of educational efficiency and local implementation, including mastery learning and deliberate practice, spaced practice, contextual learning, feedback and debriefing, assessment, innovative educational strategies, faculty development, and knowledge translation and implementation. For each topic, we provide suggestions for improving provider performance that may ultimately optimize patient outcomes from cardiac arrest.
Collapse
|
174
|
Paediatric Colonoscopy Direct Observation of Procedural Skills: Evidence of Validity and Competency Development. J Pediatr Gastroenterol Nutr 2019; 69:18-23. [PMID: 30889133 DOI: 10.1097/mpg.0000000000002321] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
INTRODUCTION The paediatric series of direct observation of procedural skills (DOPS) assessments was introduced into the UK national endoscopy training curriculum in 2016, but lacks validity evidence. We aimed to present validity evidence for paediatric colonoscopy DOPS and study competency development in a national trainee cohort. METHODS This prospective UK-wide study analysed formative paediatric colonoscopy DOPS which were submitted to the e-Portfolio between 2016 and 2018. Item, domain, and average DOPS scores were correlated with the overall DOPS rating to evidence internal structure validity. Overall DOPS ratings were compared over lifetime procedure count to demonstrate learning curves (discriminant validity). Consequential validity was established using receiver operating characteristic curve analyses. RESULTS A total of 203 DOPS assessments were completed for 29 trainees from 11 UK training centres. Internal structure validity was provided through item-total correlation analyses. DOPS scores positively correlated with trainee seniority (P < 0.001) and lifetime procedure count (P < 0.001). Competency acquisition followed the order of: "preprocedure," "postprocedure," "endoscopic nontechnical skills," "management," "procedure" domains, followed by overall DOPS competency, which was achieved in 81% of the cohort after 125 to 149 procedures. Mean DOPS scores could be used to predict overall procedure competence (area under receiver operating characteristic curve 0.969, P < 0.001), with a mean score of 3.9 demonstrating optimal sensitivity (93.5%) and specificity (87.6%). CONCLUSIONS This study provides validity evidence supporting the use of paediatric colonoscopy DOPS as an in-training competence assessment tool. DOPS may also be used to measure competency development and benchmark performance during training, which may be of value to trainees, trainers, and training programmes.
Collapse
|
175
|
Jeyalingam T, Walsh CM. Video-based assessments: a promising step in improving polypectomy competency. Gastrointest Endosc 2019; 89:1231-1233. [PMID: 31104751 DOI: 10.1016/j.gie.2019.04.203] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Accepted: 04/02/2019] [Indexed: 02/08/2023]
Affiliation(s)
- Thurarshen Jeyalingam
- Division of Gastroenterology, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; The Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Catharine M Walsh
- The Wilson Centre, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Division of Gastroenterology, Hepatology, and Nutrition and the Research and Learning Institutes, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
176
|
Higham H, Greig PR, Rutherford J, Vincent L, Young D, Vincent C. Observer-based tools for non-technical skills assessment in simulated and real clinical environments in healthcare: a systematic review. BMJ Qual Saf 2019; 28:672-686. [DOI: 10.1136/bmjqs-2018-008565] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 04/17/2019] [Accepted: 04/23/2019] [Indexed: 12/18/2022]
Abstract
BACKGROUND Over the past three decades, multiple tools have been developed for the assessment of non-technical skills (NTS) in healthcare. This study was designed primarily to analyse how they have been designed and tested, but also to consider guidance on how to select them. OBJECTIVES To analyse the context of use, method of development, evidence of validity (including reliability), and usability of tools for the observer-based assessment of NTS in healthcare. DESIGN Systematic review. DATA SOURCES Search of electronic resources, including PubMed, Embase, CINAHL, ERIC, PsycNet, Scopus, Google Scholar, and Web of Science, with additional records identified through searching grey literature (OpenGrey, ProQuest, AHRQ, King's Fund, Health Foundation). STUDY SELECTION Studies of observer-based tools for NTS assessment in healthcare professionals (or undergraduates) were included if they: were available in English; were published between January 1990 and March 2018; assessed two or more NTS; were designed for simulated or real clinical settings; and provided evidence of validity, with or without usability evidence. 11,101 articles were identified. After limits were applied, 576 were retrieved for evaluation and 118 articles were included in this review. RESULTS One hundred and eighteen studies describing 76 tools for assessment of NTS in healthcare met the eligibility criteria. There was substantial variation in the method of design of the tools and in the extent of validity and usability testing. There was considerable overlap in the skills assessed and in the contexts of use of the tools. CONCLUSION This study suggests a need for rationalisation and standardisation of the way we assess NTS in healthcare and greater consistency in how tools are developed and deployed.
Collapse
|
177
|
Direct observation of procedural skills (DOPS) assessment in diagnostic gastroscopy: nationwide evidence of validity and competency development during training. Surg Endosc 2019; 34:105-114. [PMID: 30911922 PMCID: PMC6946748 DOI: 10.1007/s00464-019-06737-7] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 03/06/2019] [Indexed: 12/12/2022]
Abstract
Background Validated competency assessment tools and the data supporting milestone development during gastroscopy training are lacking. We aimed to assess the validity of the formative direct observation of procedural skills (DOPS) assessment tool in diagnostic gastroscopy and study competency development using DOPS. Methods This was a prospective multicentre (N = 275) analysis of formative gastroscopy DOPS assessments. Internal structure validity was tested using exploratory factor analysis and reliability estimated using generalisability theory. Item and global DOPS scores were stratified by lifetime procedure count to define learning curves, using a threshold determined from receiver operator characteristics (ROC) analysis. Multivariable binary logistic regression analysis was performed to identify independent predictors of DOPS competence. Results In total, 10086 DOPS were submitted for 987 trainees. Exploratory factor analysis identified three distinct item groupings, representing ‘pre-procedure’, ‘technical’, and ‘post-procedure non-technical’ skills. From generalisability analyses, sources of variance in overall DOPS scores included trainee ability (31%), assessor stringency (8%), assessor subjectivity (18%), and trainee case-to-case variation (43%). The combination of three assessments from three assessors was sufficient to achieve the reliability threshold of 0.70. On ROC analysis, a mean score of 3.9 provided optimal sensitivity and specificity for determining competency. This threshold was attained in the order of ‘pre-procedure’ (100–124 procedures), ‘technical’ (150–174 procedures), ‘post-procedure non-technical’ skills (200–224 procedures), and global competency (225–249 procedures). Higher lifetime procedure count, DOPS count, surgical trainees and assessors, higher trainee seniority, and lower case difficulty were significant multivariable predictors of DOPS competence. 
Conclusion This study establishes milestones for competency acquisition during gastroscopy training and provides validity and reliability evidence to support gastroscopy DOPS as a competency assessment tool. Electronic supplementary material The online version of this article (10.1007/s00464-019-06737-7) contains supplementary material, which is available to authorised users.
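The reliability projection in the abstract (reaching the 0.70 threshold with three assessments from three assessors) can be sketched with a simplified D-study calculation using the reported variance shares. Treating assessor stringency plus subjectivity as a single assessor-linked facet and case-to-case variation as residual error is an assumption made here for illustration; it is not the paper's exact variance model.

```python
# Simplified D-study: project the reliability (G coefficient) of a mean
# score over n_assessors, each contributing n_per assessments, from the
# variance shares reported in the abstract (trainee 31%, assessor
# stringency 8%, assessor subjectivity 18%, case-to-case variation 43%).
def g_coefficient(n_assessors, n_per,
                  var_trainee=0.31,
                  var_assessor=0.08 + 0.18,   # stringency + subjectivity
                  var_residual=0.43):         # case-to-case variation
    # Averaging shrinks assessor-linked error by n_assessors and
    # residual error by the total number of observations.
    error = var_assessor / n_assessors + var_residual / (n_assessors * n_per)
    return var_trainee / (var_trainee + error)

for n_a, n_p in [(1, 1), (2, 2), (3, 3)]:
    print(f"{n_a} assessors x {n_p} DOPS each: G = {g_coefficient(n_a, n_p):.2f}")
```

Under these simplifying assumptions, a single assessment is dominated by error (G well below 0.70), while three assessments from each of three assessors brings the projected G to roughly the 0.70 threshold, consistent with the combination the abstract reports.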
Collapse
|
178
|
Zaccagnini M. Assessing noncognitive domains of respiratory therapy applicants: Messick's framework appraisal of the multiple mini-interview. CANADIAN JOURNAL OF RESPIRATORY THERAPY : CJRT = REVUE CANADIENNE DE LA THERAPIE RESPIRATOIRE : RCTR 2019; 55:31-35. [PMID: 31297445 PMCID: PMC6591782 DOI: 10.29390/cjrt-2019-002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Educators who assess applicants to a health professional training program look for a wide array of cognitive and noncognitive skills that best predict success in the program and as a future practicing professional. While aptitude tests generally measure cognitive skills, noncognitive constructs are more difficult to measure appropriately. The traditional method of measuring noncognitive constructs has been the panel interview, which has been described as inconsistent in measuring noncognitive domains and is consistently reported as unreliable and susceptible to bias. An alternative method used in many health professions schools is the multiple mini-interview (MMI), which was specifically designed to assess noncognitive domains in health professions education. This paper discusses the purpose of using the MMI, how the MMI is conducted, the specific domains of focus for the MMI, and the feasibility of creating an MMI. Finally, the paper uses Messick's validity framework to guide consideration of the MMI.
Collapse
Affiliation(s)
- Marco Zaccagnini
- Department of Anesthesia & Critical Care, McGill University Health Centre, Montréal, QC, Canada
| |
Collapse
|
179
|
Rasmussen KMB, Hertz P, Laursen CB, Arshad A, Saghir Z, Clementsen PF, Konge L. Ensuring Basic Competence in Thoracentesis. Respiration 2019; 97:463-471. [PMID: 30625480 DOI: 10.1159/000495686] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Accepted: 11/25/2018] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Trocar pigtail catheter thoracentesis (TPCT) is a common procedure often performed by junior physicians. Simulation-based training may effectively train physicians in the procedure prior to performing it on patients. An assessment tool with solid validity evidence is necessary to ensure sufficient procedural competence. OBJECTIVES Our study objectives were (1) to collect validity evidence for a newly developed pigtail catheter assessment tool (Thoracentesis Assessment Tool [ThorAT]) for the evaluation of TPCT performance and (2) to establish a pass/fail score for summative assessment. METHODS We assessed the validity evidence for the ThorAT using Messick's recommended validity framework. Thirty-four participants completed two consecutive procedures, and their performance was assessed by two blinded, independent raters using the ThorAT. We compared performance scores to test whether the assessment tool could discriminate between the two groups, and a pass/fail score was established. RESULTS The assessment tool was able to discriminate between the two groups in terms of competence level. Experienced physicians received significantly higher test scores than novices in both the first and second procedure. A pass/fail score of 25.2 points was established, resulting in 4 novices (17%) passing and 1 experienced participant (9%) failing the first procedure. In the second procedure, 9 novices (39%) passed and 2 experienced participants (18%) failed. CONCLUSIONS This study provides a tool for summative assessment of competence in TPCT. Strong validity evidence was gathered from five sources. A simulation-based training program using the ThorAT could ensure competence before performing thoracentesis on patients.
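One common way to derive a pass/fail score from novice and experienced performances is the contrasting-groups method, which places the cutoff where the two groups' score distributions cross. The abstract does not state the exact standard-setting procedure used, so the sketch below is illustrative, with invented score data.

```python
import math
import statistics

def contrasting_groups_cutoff(novice_scores, expert_scores):
    """Contrasting-groups standard setting: fit a normal curve to each
    group's scores and return the point between the group means where a
    score becomes more likely under the experienced-group curve."""
    m1, s1 = statistics.mean(novice_scores), statistics.stdev(novice_scores)
    m2, s2 = statistics.mean(expert_scores), statistics.stdev(expert_scores)

    def pdf(x, m, s):
        return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

    # Scan the interval between the two means for the density crossover.
    xs = [m1 + (m2 - m1) * i / 1000 for i in range(1001)]
    return min(xs, key=lambda x: abs(pdf(x, m1, s1) - pdf(x, m2, s2)))

# Hypothetical assessment totals (not the study's data).
novices = [17, 19, 20, 21, 22, 18, 20, 23]
experienced = [28, 30, 31, 29, 32, 30]
cutoff = contrasting_groups_cutoff(novices, experienced)
print(f"contrasting-groups pass/fail score: {cutoff:.1f}")
```

Because the cutoff sits where the two fitted curves intersect, some novices will score above it and some experienced participants below it, which is exactly the pattern of passing novices and failing experienced physicians the abstract reports.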
Collapse
Affiliation(s)
| | - Peter Hertz
- Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Copenhagen, Denmark
| | - Christian B Laursen
- Regional Center for Technical Simulation, Region of Southern Denmark, Odense, Denmark.,Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark
| | - Arman Arshad
- Regional Center for Technical Simulation, Region of Southern Denmark, Odense, Denmark.,Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark
| | - Zaigham Saghir
- Department of Respiratory Medicine, Bispebjerg Hospital, Copenhagen, Denmark
| | - Paul Frost Clementsen
- Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Copenhagen, Denmark.,Department of Internal Medicine, Zealand University Hospital, Roskilde, Denmark
| | - Lars Konge
- Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Copenhagen, Denmark
| |
Collapse
|
180
|
Yazbeck Karam V, Park YS, Tekian A, Youssef N. Evaluating the validity evidence of an OSCE: results from a new medical school. BMC MEDICAL EDUCATION 2018; 18:313. [PMID: 30572876 PMCID: PMC6302424 DOI: 10.1186/s12909-018-1421-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 12/05/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND To address the problems of traditional clinical evaluation, the "Objective Structured Clinical Examination (OSCE)" was introduced by Harden as a more valid and reliable assessment instrument. However, an essential condition for a high-quality and effective OSCE is evidence supporting the validity of its scores. This study examines the psychometric properties of OSCE scores, with an emphasis on consequential and internal structure validity evidence. METHODS Fifty-three first-year medical students took part in a summative OSCE at the Lebanese American University School of Medicine. Evidence to support consequential validity was gathered by using criterion-based standard-setting methods. Internal structure validity evidence was gathered by examining various psychometric measures both at the station level and across the complete OSCE. RESULTS Compared with our existing method of computing results, the introduction of standard setting resulted in lower average student grades and a higher cut score. Across stations, Cronbach's alpha was moderately low. CONCLUSION Gathering consequential and internal structure validity evidence with multiple metrics provides support for or against the quality of an OSCE. It is critical that this analysis be performed routinely on local iterations of given tests, and that the results be used to enhance the quality of assessment.
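Cronbach's alpha across stations, mentioned in the results, has a short closed form: alpha = k/(k-1) * (1 - sum of station variances / variance of total scores). A minimal sketch with hypothetical station scores (the matrix below is invented, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: examinees x stations matrix.
    alpha = k/(k-1) * (1 - sum of station variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical station scores (rows: students, columns: OSCE stations).
osce = np.array([
    [6, 7, 5, 6],
    [8, 9, 8, 9],
    [5, 5, 6, 5],
    [7, 8, 7, 8],
    [9, 9, 8, 9],
])
print(f"Cronbach's alpha = {cronbach_alpha(osce):.2f}")
```

A "moderately low" alpha, as the study found, typically reflects stations measuring heterogeneous skills (or too few stations) rather than a defect in any single station.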
Collapse
Affiliation(s)
- Vanda Yazbeck Karam
- Lebanese American University-School of Medicine, P.O. Box: 113288, Zahar Street, Beirut, Lebanon
| | - Yoon Soo Park
- Department of Medical Education, University of Illinois, Chicago, USA
| | - Ara Tekian
- Department of Medical Education, University of Illinois, Chicago, USA
| | - Nazih Youssef
- Lebanese American University-School of Medicine, P.O. Box: 113288, Zahar Street, Beirut, Lebanon
| |
Collapse
|
181
|
Validity Evidence for Direct Observation of Procedural Skills in Paediatric Gastroscopy. J Pediatr Gastroenterol Nutr 2018; 67:e111-e116. [PMID: 30216204 DOI: 10.1097/mpg.0000000000002089] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
OBJECTIVES Direct observation of procedural skills (DOPS) are competence-assessment tools in endoscopy. Formative paediatric gastroscopy DOPS were implemented into the UK curriculum in 2016 but lack validity evidence; we aimed to assess validity evidence using a recognised contemporary validity framework. METHODS We performed a prospective UK-wide analysis of formative paediatric gastroscopy DOPS submitted to the e-Portfolio over 1 year. Internal structure validity was assessed using interitem correlations between DOPS items, average domain, and skillset scores and with the overall competency rating. Overall competence scores and mean DOPS scores were compared by trainee seniority and procedure count (discriminative validity). Receiver operating characteristic curve analysis was performed to explore if DOPS scores could be used to delineate procedural competency (consequential validity). RESULTS A total of 157 DOPS assessments were completed by 20 trainers for 17 trainees. Strengths of correlations varied between DOPS components, with overall competency correlating most with technical-predominant items, domains and skillsets. Both the overall assessor's rating and mean DOPS scores increased with trainee seniority (P < 0.001) and lifetime procedure count (P < 0.001). Overall competency could be delineated using mean DOPS scores (area under receiver operating characteristic curve 0.95, P < 0.001), with a threshold of 3.9 providing optimal sensitivity (94.4%) and specificity (89.7%). CONCLUSIONS Competencies in paediatric gastroscopy, as assessed using DOPS, vary in their correlation with overall competence and increase with trainee experience. Formative DOPS thresholds could be used to indicate readiness for summative assessment. Our study therefore provides evidence of internal structure, discriminative, and consequential validity in support of formative paediatric gastroscopy DOPS.
|
182
|
Xu J, Campisi P, Forte V, Carrillo B, Vescan A, Brydges R. Effectiveness of discovery learning using a mobile otoscopy simulator on knowledge acquisition and retention in medical students: a randomized controlled trial. J Otolaryngol Head Neck Surg 2018; 47:70. [PMID: 30458877 PMCID: PMC6247612 DOI: 10.1186/s40463-018-0317-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 11/04/2018] [Indexed: 01/10/2023] Open
Abstract
Background Portable educational technologies, like simulators, afford students the opportunity to learn independently. A key question in education is how to pair self-regulated learning (SRL) with direct instruction. A cloud-based portable otoscopy simulator was employed to compare two curricula involving SRL. Pre-clerkship medical students used a prototype smartphone application, a 3D ear attachment, and an otoscope to complete either curriculum. Methods Pre-clerkship medical students were recruited and randomized to two curriculum designs. The “Discovery then Instruction” group received the simulator one week before a traditional lecture, while the “Instruction then Discovery” group received it after the lecture. To assess participants’ ability to identify otoscopic pathology, we used a 100-item test at baseline, post-intervention, and 2-week retention time points. Secondary outcomes included self-reported comfort, time spent using the device, and a survey on learning preferences. Results Thirty-four students completed the study. Analysis of knowledge acquisition and retention showed improvement in scores of both groups and no significant effect of group (F(1,31) = 0.53, p = 0.47). An analysis of participants’ self-reported comfort showed a significant group × test interaction (F(1,36) = 4.61, p = 0.04), where only the discovery then instruction group’s comfort improved significantly. Overall device usage was low: the discovery then instruction group spent 21.47 ± 26.28 min, while the instruction then discovery group spent 13.84 ± 18.71 min. The discovery-first group’s time spent with the simulator correlated moderately with their post-test score (r = 0.42, p = 0.07). After the intervention, most participants in both groups (63–68%) stated that they would prefer the instruction then discovery sequence. Conclusions Both curricular sequences led to improved knowledge scores with no statistically significant differences between groups. When given minimal guidance, students engaged in discovery learning only minimally. There is value in SRL in simulation education, and we plan to further improve our curricular design by considering learner behaviours identified in this study.
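The usage-time/post-test relationship reported in this abstract is an ordinary Pearson correlation. A self-contained sketch with hypothetical minutes-of-use and score pairs (illustrative only, not the study's r = 0.42 data):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation between paired observations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical minutes of simulator use vs. post-test score (fabricated for illustration).
minutes = [5, 10, 20, 40, 60]
scores = [52, 55, 61, 70, 78]
print(round(pearson_r(minutes, scores), 3))  # strongly positive for this made-up data
```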
Affiliation(s)
- Josie Xu, Department of Otolaryngology-Head and Neck Surgery, University of Toronto, 190 Elizabeth Street 3S-438, Toronto, M5G 2C4, Canada
- Paolo Campisi, Department of Otolaryngology-Head and Neck Surgery, University of Toronto, 190 Elizabeth Street 3S-438, Toronto, M5G 2C4, Canada
- Vito Forte, Department of Otolaryngology-Head and Neck Surgery, University of Toronto, 190 Elizabeth Street 3S-438, Toronto, M5G 2C4, Canada
- Allan Vescan, Department of Otolaryngology-Head and Neck Surgery, University of Toronto, 190 Elizabeth Street 3S-438, Toronto, M5G 2C4, Canada
- Ryan Brydges, The Wilson Centre, University Health Network & University of Toronto, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada; Allan Waters Family Simulation Centre, St. Michael's Hospital, Toronto, Canada
|
183
|
The Development and Validation of a Concise Instrument for Formative Assessment of Team Leader Performance During Simulated Pediatric Resuscitations. Simul Healthc 2018; 13:77-82. [PMID: 29117092 DOI: 10.1097/sih.0000000000000267] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
AIM The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. METHODS This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. RESULTS Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. CONCLUSIONS These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.
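The generalizability result above (person 33% of variance, rater approximately 0%) is simply each estimated variance component expressed as a share of the total. A minimal sketch, with hypothetical component estimates chosen so the person share works out to the reported 33%:

```python
# Hypothetical variance-component estimates from a person x scenario x rater
# G-study; only their relative sizes matter for the reported percentages.
components = {"person": 0.40, "scenario": 0.25, "rater": 0.00, "residual": 0.55}

total = sum(components.values())
percent = {facet: round(100 * var / total) for facet, var in components.items()}
print(percent)  # person contributes 33%, rater 0%
```

A near-zero rater share, as here, is what supports the abstract's claim of high interrater reliability.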
|
184
|
Hodwitz K, Tays W, Reardon R. Redeveloping a workplace-based assessment program for physicians using Kane's validity framework. CANADIAN MEDICAL EDUCATION JOURNAL 2018; 9:e14-e24. [PMID: 30140344 PMCID: PMC6104320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This paper describes the use of Kane's validity framework to redevelop a workplace-based assessment (WBA) program for practicing physicians administered by the College of Physicians and Surgeons of Ontario. The developmental process is presented according to the four inferences in Kane's model. Scoring was addressed through the creation of specialty-specific assessment criteria and global, narrative-focused reports. Generalization was addressed through standardized sampling protocols and assessor training and consensus-building. Extrapolation was addressed through the use of real-world performance data and an external review of the scoring tools by practicing physicians. Implications were theoretically supported through adherence to formative assessment principles and will be assessed through an evaluation accompanying the implementation of the redeveloped program. Kane's framework was valuable for guiding the redevelopment process and for systematically collecting validity evidence throughout to support the use of the assessment for its intended purpose. As the use of WBA programs for physicians continues to increase, practical examples are needed of how to develop and evaluate these programs using established frameworks. The dissemination of comprehensive validity arguments is vital for sharing knowledge about the development and evaluation of WBA programs and for understanding the effects of these assessments on physician practice improvement.
Affiliation(s)
- Kathryn Hodwitz, The College of Physicians and Surgeons of Ontario, Ontario, Canada
- William Tays, The College of Physicians and Surgeons of Ontario, Ontario, Canada
- Rhoda Reardon, The College of Physicians and Surgeons of Ontario, Ontario, Canada
|
186
|
Vereijken MWC, van der Rijst RM, van Driel JH, Dekker FW. Student learning outcomes, perceptions and beliefs in the context of strengthening research integration into the first year of medical school. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:371-385. [PMID: 29128900 PMCID: PMC5882629 DOI: 10.1007/s10459-017-9803-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Accepted: 11/03/2017] [Indexed: 05/04/2023]
Abstract
Research integrated into undergraduate education is important in order for medical students to understand and value research for later clinical practice. Therefore, attempts are being made to strengthen the integration of research into teaching from the first year onwards. First-year students may interpret attempts made to strengthen research integration differently than intended by teachers. This might be explained by student beliefs about learning and research as well as student perceptions of the learning environment. In general, student perceptions of the learning environment play a pivotal role in fostering student learning outcomes. This study aims to determine whether a curriculum change intended to promote research integration fosters student learning outcomes and student perceptions of research integrated into teaching. To serve this purpose, three subsequent cohorts of first-year students were compared, one before and two after a curriculum change. Learning outcomes were measured using the scores of 921 students on a national progress test and assessments of a sample of 100 research reports from a first-year student research project; 746 students completed the Student Perceptions of Research Integration Questionnaire. The findings suggest that learning outcomes of these students, that is, scores on research-related test items of the progress test and the quality of research reports, were better than those of students before the curriculum change.
Affiliation(s)
- Mayke W. C. Vereijken, ICLON Graduate School of Teaching, Leiden University, P.O. Box 905, 2300 AX Leiden, The Netherlands
- Roeland M. van der Rijst, ICLON Graduate School of Teaching, Leiden University, P.O. Box 905, 2300 AX Leiden, The Netherlands
- Jan H. van Driel, Graduate School of Education, University of Melbourne, 234 Queensberry Street, Melbourne, VIC 3010, Australia
- Friedo W. Dekker, Department of Clinical Epidemiology, Leiden University Medical Centre, P.O. Box 9600, 2300 RC Leiden, The Netherlands
|
187
|
Griswold S, Fralliccardi A, Boulet J, Moadel T, Franzen D, Auerbach M, Hart D, Goswami V, Hui J, Gordon JA. Simulation-based Education to Ensure Provider Competency Within the Health Care System. Acad Emerg Med 2018; 25:168-176. [PMID: 28963862 DOI: 10.1111/acem.13322] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2017] [Revised: 09/05/2017] [Accepted: 09/06/2017] [Indexed: 11/27/2022]
Abstract
The acquisition and maintenance of individual competency is a critical component of effective emergency care systems. This article summarizes consensus working group deliberations and recommendations focusing on the topic "Simulation-based education to ensure provider competency within the healthcare system." The authors presented this work for discussion and feedback at the 2017 Academic Emergency Medicine Consensus Conference on "Catalyzing System Change Through Healthcare Simulation: Systems, Competency, and Outcomes," held on May 16, 2017, in Orlando, Florida. Although simulation-based training is a quality and safety imperative in other high-reliability professions such as aviation, nuclear power, and the military, health care professions still lag behind in applying simulation more broadly. This is likely a result of a number of factors, including cost, assessment challenges, and resistance to change. This consensus subgroup focused on identifying current gaps in knowledge and process related to the use of simulation for developing, enhancing, and maintaining individual provider competency. The resulting product is a research agenda informed by expert consensus and literature review.
Affiliation(s)
- Sharon Griswold, Department of Emergency Medicine, Drexel University College of Medicine, Philadelphia, PA
- Alise Fralliccardi, Department of Emergency Medicine, University of Connecticut School of Medicine, Hartford, CT
- John Boulet, Foundation for Advancement of International Medical Education and Research, Philadelphia, PA
- Tiffany Moadel, Department of Emergency Medicine, Hofstra Northwell School of Medicine, Hempstead, NY
- Douglas Franzen, Department of Emergency Medicine, University of Washington School of Medicine, Seattle, WA
- Marc Auerbach, Department of Emergency Medicine and Pediatrics, Yale University School of Medicine, New Haven, CT
- Danielle Hart, Department of Emergency Medicine, Hennepin County Medical Center, St. Paul, MN
- Varsha Goswami, Department of Emergency Medicine, Drexel University College of Medicine, Philadelphia, PA
- Joshua Hui, Department of Emergency Medicine, Kaiser Permanente Los Angeles Medical Center, Los Angeles, CA
- James A. Gordon, MGH Learning Laboratory and Division of Medical Simulation, Department of Emergency Medicine, Massachusetts General Hospital, and the Gilbert Program in Medical Simulation, Harvard Medical School, Boston, MA
|
188
|
Hart D, Bond W, Siegelman JN, Miller D, Cassara M, Barker L, Anders S, Ahn J, Huang H, Strother C, Hui J. Simulation for Assessment of Milestones in Emergency Medicine Residents. Acad Emerg Med 2018; 25:205-220. [PMID: 28833892 DOI: 10.1111/acem.13296] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2017] [Revised: 08/01/2017] [Accepted: 08/16/2017] [Indexed: 11/29/2022]
Abstract
OBJECTIVES All residency programs in the United States are required to report their residents' progress on the milestones to the Accreditation Council for Graduate Medical Education (ACGME) biannually. Since the development and institution of this competency-based assessment framework, residency programs have been attempting to ascertain the best ways to assess resident performance on these metrics. Simulation was recommended by the ACGME as one method of assessment for many of the milestone subcompetencies. We developed three simulation scenarios with scenario-specific milestone-based assessment tools. We aimed to gather validity evidence for this tool. METHODS We conducted a prospective observational study to investigate the validity evidence for three mannequin-based simulation scenarios for assessing individual residents on emergency medicine (EM) milestones. The subcompetencies (i.e., patient care [PC]1, PC2, PC3) included were identified via a modified Delphi technique using a group of experienced EM simulationists. The scenario-specific checklist (CL) items were designed based on the individual milestone items within each EM subcompetency chosen for assessment and reviewed by experienced EM simulationists. Two independent live raters who were EM faculty at the respective study sites scored each scenario following brief rater training. The inter-rater reliability (IRR) of the assessment tool was determined by measuring the intraclass correlation coefficient (ICC) for the sum of the CL items as well as the global rating scales (GRSs) for each scenario. Comparison of GRS and CL scores between postgraduate year (PGY) levels was performed with analysis of variance. RESULTS Eight subcompetencies were chosen to assess with three simulation cases, using 118 subjects. Evidence of test content, internal structure, response process, and relations with other variables was found. The ICCs for the sum of the CL items and the GRSs were >0.8 for all cases, with one exception (clinical management GRS = 0.74 in the sepsis case). The sum of CL items and the GRSs discriminated between PGY levels on all cases (p < 0.05). However, when the specific CL items were mapped back to milestones in various proficiency levels, the milestones in the higher proficiency levels (level 3 [L3] and 4 [L4]) did not often discriminate between PGY levels: L3 milestone items discriminated between PGY levels on five of the 12 occasions they were assessed, and L4 items on only two of 12. CONCLUSION Three simulation cases with scenario-specific assessment tools allowed evaluation of EM residents on proficiency levels L1 to L4 within eight of the EM milestone subcompetencies. Evidence of test content, internal structure, response process, and relations with other variables was found. Good to excellent IRR and the ability to discriminate between PGY levels were found for both the sum of CL items and the GRSs. However, there was no positive relationship between advancing PGY level and the completion of higher-level milestone items (L3 and L4).
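The inter-rater ICCs reported above come from an ANOVA-style variance decomposition. A minimal one-way random-effects ICC(1,1) sketch, with hypothetical checklist sums from two raters; both the data and the choice of ICC form are illustrative, since the abstract does not specify which ICC variant was used:

```python
def icc_oneway(ratings):
    # ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW) for an n-subjects x k-raters table.
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)  # between-subject mean square
    msw = sum((x - m) ** 2 for row, m in zip(ratings, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical checklist sums from two independent raters for four residents.
ratings = [[2, 3], [4, 5], [6, 5], [8, 7]]
print(round(icc_oneway(ratings), 2))  # 0.89
```

Values above 0.8, as the study reports for most cases, are conventionally read as good to excellent agreement.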
Affiliation(s)
- Danielle Hart, Emergency Medicine, Hennepin County Medical Center, University of Minnesota Medical School, Minneapolis, MN
- William Bond, Department of Emergency Medicine, Lehigh Valley Health Network, Allentown, PA
- Daniel Miller, Department of Emergency Medicine, University of Iowa, Iowa City, IA
- Michael Cassara, Department of Emergency Medicine, Hofstra University North Shore Long Island Jewish SOM, Northwell Health Center, Lake Success, NY
- Lisa Barker, Department of Emergency Medicine, University of Illinois College of Medicine at Peoria, Peoria, IL
- Shilo Anders, Department of Anesthesiology, Vanderbilt University, Nashville, TN
- James Ahn, Department of Emergency Medicine, University of Chicago, Chicago, IL
- Hubert Huang, Division of Education, Lehigh Valley Health Network, Allentown, PA
- Joshua Hui, Department of Emergency Medicine, Kaiser Permanente, Los Angeles Medical Center, Los Angeles, CA
|
189
|
McGrath JL, Taekman JM, Dev P, Danforth DR, Mohan D, Kman N, Crichlow A, Bond WF. Using Virtual Reality Simulation Environments to Assess Competence for Emergency Medicine Learners. Acad Emerg Med 2018; 25:186-195. [PMID: 28888070 DOI: 10.1111/acem.13308] [Citation(s) in RCA: 72] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2017] [Revised: 09/05/2017] [Accepted: 09/06/2017] [Indexed: 01/13/2023]
Abstract
Immersive learning environments that use virtual simulation (VS) technology are increasingly relevant as medical learners train in an environment of restricted clinical training hours and a heightened focus on patient safety. We conducted a consensus process with a breakout group of the 2017 Academic Emergency Medicine Consensus Conference "Catalyzing System Change Through Health Care Simulation: Systems, Competency, and Outcomes." This group examined the current uses of VS in training and assessment, including limitations and challenges in implementing VS into medical education curricula. We discuss the role of virtual environments in formative and summative assessment. Finally, we offer recommended areas of focus for future research examining VS technology for assessment, including high-stakes assessment in medical education. Specifically, we discuss needs for determination of areas of focus for VS training and assessment, development and exploration of virtual platforms, automated feedback within such platforms, and evaluation of effectiveness and validity of VS education.
Affiliation(s)
- Jillian L. McGrath, Department of Emergency Medicine, The Ohio State University Wexner Medical Center, Columbus, OH
- Parvati Dev, Stanford University School of Medicine, Los Altos, CA
- Douglas R. Danforth, Department of Obstetrics and Gynecology, The Ohio State University Wexner Medical Center, Columbus, OH
- Deepika Mohan, Department of Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA
- Nicholas Kman, Department of Emergency Medicine, The Ohio State University Wexner Medical Center, Columbus, OH
- Amanda Crichlow, Department of Emergency Medicine, Drexel University College of Medicine, Philadelphia, PA
|
190
|
Chiavaroli NG, Beck EJ, Itsiopoulos C, Wilkinson P, Gibbons K, Palermo C. Development and validation of a written credentialing examination for overseas-educated dietitians. Nutr Diet 2018; 75:235-243. [PMID: 29314662 DOI: 10.1111/1747-0080.12403] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Revised: 11/09/2017] [Accepted: 11/20/2017] [Indexed: 11/28/2022]
Abstract
AIM Health professionals seeking employment in foreign countries are commonly required to undertake competency assessment in order to practice. The present study aims to outline the development and validation of a written examination for Dietetic Skills Recognition (DSR), to assess the knowledge, skills, capabilities and professional judgement of overseas-educated dietitians against the competency standards applied to dietetic graduates in Australia. METHODS The present study reviews the design, rationale, validation and outcomes of a multiple choice question (MCQ) written examination for overseas-educated dietitians based on 5 years of administration. The validity of the exam is evaluated using Messick's validity framework, which focuses on five potential sources of validity evidence: content, internal structure, relationships with other variables, response process and consequences. The reference point for the exam pass mark or "cutscore" is the minimum standard required for safe practice. RESULTS In total, 114 candidates have completed the MCQ examination at least once, with an overall pass rate of 52% on the first attempt. Pass rates are higher for candidates from countries where dietetic education more closely reflects the Australian model. While the pass rate for each exam tends to vary with each cohort, the cutscore has remained relatively stable over eight administrations. CONCLUSIONS The findings provide important data supporting the validity of the MCQ exam. A more complete evaluation of the validity of the exam must be sought within the context of the whole DSR program of assessment. The DSR written component may serve as a model for use of the MCQ format by dietetic and other professional credentialing organisations.
Affiliation(s)
- Neville G Chiavaroli, Department of Medical Education, University of Melbourne, Melbourne, Victoria, Australia
- Eleanor J Beck, School of Medicine, University of Wollongong, Wollongong, New South Wales, Australia
- Paul Wilkinson, Recognition and Journal Services, Dietitians Association of Australia, Canberra, Australian Capital Territory, Australia
- Kay Gibbons, College of Health and Biomedicine, Victoria University, Melbourne, Victoria, Australia
- Claire Palermo, Department of Nutrition, Dietetics and Food, Monash University, Melbourne, Victoria, Australia
|
191
|
Pammi M, Lingappan K, Carbajal MM, Suresh GK. Focused Evidence-Based Medicine Curriculum for Trainees in Neonatal-Perinatal Medicine. MEDEDPORTAL : THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2017; 13:10664. [PMID: 30800864 PMCID: PMC6338140 DOI: 10.15766/mep_2374-8265.10664] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2017] [Accepted: 11/26/2017] [Indexed: 06/09/2023]
Abstract
INTRODUCTION While evidence-based medicine (EBM) is an Accreditation Council for Graduate Medical Education core competency, EBM teaching in pediatric subspecialties is rarely reported. Therefore, we designed, implemented, and evaluated this focused EBM curriculum for trainees in neonatal-perinatal medicine. METHODS This EBM curriculum consists of seven weekly 1-hour sessions. Specific EBM skills taught in the sessions include formulating a structured clinical question, conducting an efficient literature search, critically appraising published literature in both intervention and diagnostic studies, and incorporating evidence into clinical decision-making. The course was evaluated with a neonatology-adapted Fresno test (NAFT) and neonatology case vignettes, administered to learners before and after the curriculum. This publication includes the needs assessment survey, PowerPoint slides for the seven sessions, the NAFT, and the scoring rubric for the test. RESULTS The NAFT was internally reliable, with a Cronbach's alpha of .74. Interrater reliability among the three raters was excellent, with an intraclass correlation coefficient of .98. Mean test scores in 14 learners increased significantly (by 54 points, p < .001) after the EBM curriculum, indicating an increase in EBM-related knowledge and skills. DISCUSSION This focused EBM curriculum enhances trainees' knowledge and skills and fosters evidence-based practice. The curriculum can be easily adapted for learners in pediatrics, as well as family medicine, in order to enhance trainees' EBM skills and knowledge.
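Cronbach's alpha, used above to gauge the test's internal reliability, has a short closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch on hypothetical item scores (not the NAFT data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    # `items` is one score list per item; columns line up across respondents.
    k = len(items)
    item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical scores for 3 items answered by 4 learners (columns = learners).
items = [[1, 2, 3, 4],
         [1, 2, 3, 4],
         [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 1.0 for perfectly consistent items
```

Less consistent item sets, like the study's observed .74, fall below 1; values near .7 are usually taken as acceptable internal reliability.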
Affiliation(s)
- Mohan Pammi, Associate Professor, Department of Pediatrics, Section of Neonatology, Baylor College of Medicine
- Krithika Lingappan, Assistant Professor, Department of Pediatrics, Section of Neonatology, Baylor College of Medicine
- Melissa M. Carbajal, Assistant Professor, Department of Pediatrics, Section of Neonatology, Baylor College of Medicine
- Gautham K. Suresh, Professor, Department of Pediatrics, Section of Neonatology, Baylor College of Medicine
|
192
|
Lineberry M, Ritter EM. Psychometric properties of the Fundamentals of Endoscopic Surgery (FES) skills examination. Surg Endosc 2017; 31:5219-5227. [DOI: 10.1007/s00464-017-5590-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 05/02/2017] [Indexed: 01/24/2023]
|
193
|
Cheng A, Kessler D, Mackinnon R, Chang TP, Nadkarni VM, Hunt EA, Duval-Arnould J, Lin Y, Pusic M, Auerbach M. Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network. Adv Simul (Lond) 2017; 2:6. [PMID: 29450007 PMCID: PMC5806260 DOI: 10.1186/s41077-017-0039-0] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Accepted: 02/08/2017] [Indexed: 01/29/2023] Open
Abstract
Simulation-based research has grown substantially over the past two decades; however, relatively few published simulation studies are multicenter in nature. Multicenter research confers many distinct advantages over single-center studies, including larger sample sizes for more generalizable findings, sharing resources amongst collaborative sites, and promoting networking. Well-executed multicenter studies are more likely to improve provider performance and/or have a positive impact on patient outcomes. In this manuscript, we offer a step-by-step guide to conducting multicenter, simulation-based research based upon our collective experience with the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE). Like multicenter clinical research, simulation-based multicenter research can be divided into four distinct phases. Each phase has specific differences when applied to simulation research: (1) Planning phase, to define the research question, systematically review the literature, identify outcome measures, and conduct pilot studies to ensure feasibility and estimate power; (2) Project Development phase, when the primary investigator identifies collaborators, develops the protocol and research operations manual, prepares grant applications, obtains ethical approval and executes subsite contracts, registers the study in a clinical trial registry, forms a manuscript oversight committee, and conducts feasibility testing and data validation at each site; (3) Study Execution phase, involving recruitment and enrollment of subjects, clear communication and decision-making, quality assurance measures and data abstraction, validation, and analysis; and (4) Dissemination phase, where the research team shares results via conference presentations, publications, traditional media, social media, and implements strategies for translating results to practice. 
With this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues.
Affiliation(s)
- Adam Cheng, Department of Pediatrics, Alberta Children’s Hospital, KidSim-ASPIRE Research Program, Section of Emergency Medicine, University of Calgary, 2888 Shaganappi Trail NW, Calgary, AB, Canada T3B 6A8
- David Kessler, Division of Pediatric Emergency Medicine, Columbia University Medical School, 3959 Broadway, CHN-1-116, New York, NY 10032, USA
- Ralph Mackinnon, Department of Paediatric Anaesthesia and NWTS, First Floor Theatres, Royal Manchester Children’s Hospital, Hathersage Road, Manchester, UK M13 9WL
- Todd P. Chang, Children’s Hospital Los Angeles, 4650 Sunset Blvd, Mailstop 113, Los Angeles, CA 90027, USA
- Vinay M. Nadkarni, The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, 3401 Civic Center Blvd, Philadelphia, PA 19104, USA
- Elizabeth A. Hunt, Charlotte R. Bloomberg Children’s Center, Johns Hopkins University School of Medicine, 1800 Orleans St, Room 6321, Baltimore, MD 21287, USA
- Jordan Duval-Arnould, Charlotte R. Bloomberg Children’s Center, Johns Hopkins University School of Medicine, 1800 Orleans St, Room 6321, Baltimore, MD 21287, USA
- Yiqun Lin, Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary, 2888 Shaganappi Trail NW, Calgary, AB, Canada T3B 6A8
- Martin Pusic, Institute for Innovations in Medical Education, 550 First Ave, MSB G109, New York, NY 10016, USA
- Marc Auerbach, Section of Pediatric Emergency Medicine, 100 York Street, Suite 1F, New Haven, CT 06520, USA
|