101
Strategies for increasing the feasibility of performance assessments during competency-based education: Subjective and objective evaluations correlate in the operating room. Am J Surg 2016; 214:365-372. PMID: 27634423; DOI: 10.1016/j.amjsurg.2016.07.017.
Abstract
BACKGROUND Competency-based education requires assessments that determine whether trainees have acquired specific competencies. The evidence on the ability of internal raters (staff surgeons) to provide accurate assessments is mixed; however, this question has not yet been directly explored in the operating room. This study's objective was to compare the ratings given by internal raters with those of an expert external rater (independent of the training process) in the operating room. METHODS Raters assessed the technical and nontechnical performance of general surgery residents during laparoscopic cholecystectomy. RESULTS Fifteen cases were observed. There was a moderately positive correlation (rs = .618, P = .014) for technical performance and a strong positive correlation (rs = .731, P = .002) for nontechnical performance. The internal raters were less stringent for both technical (mean rank 3.33 vs 8.64, P = .007) and nontechnical (mean rank 3.83 vs 8.50, P = .01) performance. CONCLUSIONS This study provides evidence to help operationalize competency-based assessments.
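The correlations above (rs = .618 and rs = .731) are Spearman rank correlations between the internal and external raters' scores. As a hypothetical illustration of how such a coefficient is computed from paired ratings, here is a minimal pure-Python sketch; the `spearman` helper and any sample data are ours, not drawn from the study:

```python
def ranks(xs):
    """1-based ranks, with tied values given the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfect agreement in rank order gives rho = 1 and a perfect reversal gives rho = -1; values such as those reported above sit in between.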
102
Goderstad JM, Sandvik L, Fosse E, Lieng M. Assessment of Surgical Competence: Development and Validation of Rating Scales Used for Laparoscopic Supracervical Hysterectomy. J Surg Educ 2016; 73:600-8. PMID: 26966082; DOI: 10.1016/j.jsurg.2016.01.001.
Abstract
OBJECTIVE To develop a procedure-specific rating scale for laparoscopic supracervical hysterectomy (LSH) and to compare its construct validity and reliability with those of a general rating scale for laparoscopic surgery, the global operative assessment of laparoscopic skills (GOALS). DESIGN Prospective interobserver study. In collaboration with an expert group, we developed the procedure-specific rating scale, the competence assessment tool for laparoscopic supracervical hysterectomy (CAT-LSH). LSH was performed by gynecologists with different levels of surgical competence (13 procedures were performed by inexperienced trainees, 13 by intermediately experienced surgeons, and 15 by laparoscopic experts). All procedures were video-recorded. Surgical performance in all procedures was evaluated with both the CAT-LSH and GOALS by the surgical assistant, as well as by 2 blinded observers who reviewed the video recordings. SETTING University teaching hospital. PARTICIPANTS Laparoscopic experts, consultants, and gynecological registrars from the Department of Gynecology. RESULTS There were significant differences between the 3 proficiency groups on both rating scales. Mean GOALS scores from the operating assistant vs. the 2 observers were 16.4 vs. 13.6 (p < 0.01) for inexperienced surgeons, 22.6 vs. 19.5 (p < 0.05) for intermediately experienced surgeons, and 26.1 vs. 22.4 (p < 0.01) for expert surgeons. Corresponding CAT-LSH scores were 41.0/34.6 (p < 0.01), 49.2/43.1 (p < 0.01), and 58.7/51.1 (p < 0.01), respectively. The interrater reliability between the surgical assistant and the 2 blinded observers, measured by the intraclass correlation coefficient, was 0.71 for GOALS and 0.75 for the CAT-LSH. CONCLUSIONS Both GOALS and the CAT-LSH appear to have construct validity and high interrater reliability. Assessment of surgical competence during LSH is feasible in daily practice with objective rating scales such as the CAT-LSH and GOALS.
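The interrater reliabilities above (0.71 and 0.75) are intraclass correlation coefficients. As a rough sketch of how an ICC can be computed from a targets-by-raters score matrix, here is an illustrative pure-Python implementation of the one-way random-effects variant ICC(1); note this is a simpler model than the two-way ICCs many studies use, and the function and any sample data are ours, not the study's:

```python
from statistics import mean

def icc1(scores):
    """One-way random-effects ICC(1) for an n-targets x k-raters score matrix."""
    n, k = len(scores), len(scores[0])
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    # between-target and within-target mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(scores, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

When raters agree perfectly on every target, the within-target mean square is zero and the ICC is 1.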
Affiliation(s)
- Leiv Sandvik: Oslo Center for Biostatistics and Epidemiology, Oslo University Hospital, Oslo, Norway
- Erik Fosse: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Gynecology, Oslo University Hospital, Oslo, Norway
- Marit Lieng: Department of Gynecology, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway
103
Boza C, León F, Buckel E, Riquelme A, Crovari F, Martínez J, Aggarwal R, Grantcharov T, Jarufe N, Varas J. Simulation-trained junior residents perform better than general surgeons on advanced laparoscopic cases. Surg Endosc 2016; 31:135-141. DOI: 10.1007/s00464-016-4942-6.
104
Non-technical skills in minimally invasive surgery teams: a systematic review. Surg Endosc 2016; 30:5185-5199. PMID: 27066972; DOI: 10.1007/s00464-016-4890-1.
Abstract
BACKGROUND Root cause analyses show that up to 70% of adverse events are caused by human error. Strong non-technical skills (NTS) can prevent or reduce these errors, considerable numbers of which occur in the operating theatre. Minimally invasive surgery (MIS) requires manipulation of more complex equipment than open procedures, likely requiring a different set of NTS for each kind of team. The aims of this study were to identify MIS teams' key NTS and to investigate the effect of training and assessment of NTS on MIS teams. METHODS The PubMed, Cochrane Library, Embase, PsycINFO, and Scopus databases were systematically searched according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Articles containing outcome measures related to MIS teams' key NTS, training, or assessment of NTS were included. RESULTS The search yielded 1984 articles, 11 of which were included. All were observational studies without blinding, and they differed in aims, types of evaluation, and outcomes. Only two studies evaluated patient outcomes other than operative time, and overall, the studies' quality of evidence was low. Different communication types were encountered in MIS compared with open surgery, mainly due to equipment- and patient-related challenges. Fixed teams improved teamwork and safety levels, while deficient planning and poor teamwork were found to obstruct workflow and increase errors. Training NTS mitigates these issues and improves staff attitudes towards NTS. CONCLUSIONS MIS teams' NTS are important for workflow and the prevention of errors and can be enhanced by working in fixed teams. In the technologically complex sphere of MIS, communication revolves around equipment- and patient-related topics far more than in open surgery. To date, only a few studies of heterogeneous design have examined this. In the future, the focus should shift to systematically identifying key NTS and developing effective, evidence-based team training programmes in MIS.
105
Hagelsteen K, Sevonius D, Bergenfelz A, Ekelund M. Simball Box for Laparoscopic Training With Advanced 4D Motion Analysis of Skills. Surg Innov 2016; 23:309-16. DOI: 10.1177/1553350616628678.
Abstract
Background. Laparoscopic skills training and evaluation outside the operating room is important for all surgeons learning new skills. To study feasibility, a video box trainer tracking 4-dimensional (4D) metrics was evaluated as a laparoscopic training tool. Method. The Simball Box is a video box trainer with authentic surgical instruments and a recording camera, equipped with 4D motion analysis registered through the trocars using machine vision technology. Residents attending a 3-day laparoscopy course were evaluated while performing a laparoscopic surgical knot at the start, middle, and end of the course, and metrics were obtained. Feedback data were presented in reference to expert/tutorial performance. Results. Ten right-handed residents were included. Median time (range) to finish the task was 359 (253-418), 129 (95-166), and 95 (52-156) seconds, corresponding to 655%, 236%, and 174% of tutorial performance, with significant differences pre-/midcourse (P < .0001), pre-/postcourse (P < .0001), and mid-/postcourse (P = .0050). Combined median total instrument motion decreased from 1208 cm (845-1751) precourse to 522 cm (411-810) midcourse (P = .042) and 405 cm (246-864) postcourse (pre-/postcourse P < .0001), corresponding to 673%, 291%, and 225% of tutorial performance. Total angular distance in radians (range) was 150 (87-251), 65 (42-116), and 50 (33-136), with significant differences pre-/midcourse (P = .022) and pre-/postcourse (P = .0002). Right-handed average speed (cm/s) increased: 1.94 (1.11-2.27) precourse, 2.39 (1.56-2.83) midcourse, and 2.60 (1.67-3.19) postcourse, with significant differences pre-/midcourse (P = .022) and pre-/postcourse (P = .002). Average acceleration (mm/s²) and motion smoothness (µm/s³) showed no differences. Conclusion. The Simball Box obtained metrics that mirrored skill progression well, making it a promising device for laparoscopic training and evaluation.
Affiliation(s)
- Dan Sevonius: Skåne University Hospital, Sweden; Lund University, Lund, Sweden
- Mikael Ekelund: Skåne University Hospital, Sweden; Lund University, Lund, Sweden
106
Husslein H, Shirreff L, Shore EM, Lefebvre GG, Grantcharov TP. The Generic Error Rating Tool: A Novel Approach to Assessment of Performance and Surgical Education in Gynecologic Laparoscopy. J Surg Educ 2015; 72:1259-65. PMID: 26111823; DOI: 10.1016/j.jsurg.2015.04.029.
Abstract
OBJECTIVE Global rating scales are commonly used to rate surgeons' skill level; however, these tools lack the granularity required for specific skill feedback. Recently, an alternative framework has been developed that is designed to measure technical errors during laparoscopy. The purpose of the present study was to gather validity evidence for the Generic Error Rating Tool (GERT) in gynecologic laparoscopy. DESIGN Video recordings of total laparoscopic hysterectomies were analyzed by 2 blinded reviewers using the GERT and the Objective Structured Assessment of Technical Skills (OSATS) scale. Several sources of validity evidence were examined according to the unitary framework of validity. Main outcomes were interrater and intrarater reliability for the total number of errors and events. Further, surgeons were grouped according to OSATS scores (OSATS ≥ 28 = high performers; OSATS < 28 = low performers), and the number of errors and events was compared between groups. Correlation analysis between GERT and OSATS scores was performed. Lastly, error distribution within procedure steps was explored and compared between high- and low-performing surgeons. SETTING University teaching hospital. PARTICIPANTS A total of 20 anonymized video recordings of total laparoscopic hysterectomies. RESULTS Interrater and intrarater reliability was high (intraclass correlation coefficient >0.95) for the total number of errors and events. Low performers made significantly more errors than high performers (median = 49.5 [interquartile range: 34.5-66] vs median = 31 [interquartile range: 16.75-35.25], p = 0.002). There was a significant negative correlation between individual OSATS scores and total number of errors (Spearman ρ = -0.76, p < 0.001, and ρ = -0.88, p < 0.001, for raters 1 and 2, respectively). Error distribution varied between operative steps, and low performers made more errors in some steps but not in others. CONCLUSION The GERT allows objective and reproducible assessment of technical errors during gynecologic laparoscopy and could be used for performance analysis and personalized surgical education and training.
Affiliation(s)
- Heinrich Husslein: Division of Obstetrics and Gynaecology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada; Department of Obstetrics and Gynecology, Medical University Vienna, Vienna, Austria
- Lindsay Shirreff: Division of Obstetrics and Gynaecology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Eliane M Shore: Division of Obstetrics and Gynaecology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Guylaine G Lefebvre: Division of Obstetrics and Gynaecology, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Teodor P Grantcharov: Division of General Surgery, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
107
Ghaderi I, Harris I, Park YS, Ott M, Juul D, Farrell T. Response to "An Addition to the Technical Skills Assessment Toolbox". Ann Surg 2015; 266:e37. PMID: 26445475; DOI: 10.1097/sla.0000000000001430.
Affiliation(s)
- Iman Ghaderi and co-authors: University of Arizona, Tucson, AZ; Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, IL; Department of Surgery, Western University, London, Canada; American Board of Psychiatry and Neurology, Inc, Deerfield, IL; Department of Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, NC
108
Hong J, de Montbrun SL, Roberts PL, MacRae H. Assessing technical competency during colon and rectal surgery training. Semin Colon Rectal Surg 2015. DOI: 10.1053/j.scrs.2015.04.009.
109
Konge L, Clementsen PF, Ringsted C, Minddal V, Larsen KR, Annema JT. Simulator training for endobronchial ultrasound: a randomised controlled trial. Eur Respir J 2015; 46:1140-9. PMID: 26160875; DOI: 10.1183/13993003.02352-2015.
Abstract
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is very operator dependent and has a long learning curve. Simulation-based training might shorten the learning curve, and an assessment tool with solid validity evidence could ensure basic competency before unsupervised performance. A total of 16 respiratory physicians without EBUS experience were randomised to either virtual-reality simulator training or traditional apprenticeship training on patients, after which each physician performed EBUS-TBNA procedures on three patients. Three blinded, independent assessors rated the video recordings of the procedures using a newly developed EBUS assessment tool (EBUSAT). The internal consistency was high (Cronbach's α = 0.95), the generalisability coefficient was good (0.86), and the tool had discriminatory ability (p < 0.001). Procedures performed by simulator-trained novices were rated higher than those performed by apprenticeship-trained novices: mean±SD 24.2±7.9 points versus 20.2±9.4 points, respectively; p = 0.006. A pass/fail standard of 28.9 points was established using the contrasting groups method, with 16 (67%) procedures by simulator-trained novices and 20 (83%) by apprenticeship-trained novices failing the test; p < 0.001. The EBUSAT could be used to provide reliable and valid assessment of competence in EBUS-TBNA and act as an aid in certification. Virtual-reality simulator training was shown to be more effective than traditional apprenticeship training.
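The 28.9-point pass/fail standard above was derived with the contrasting groups method, which places the cutoff where the score distributions of a competent and a not-yet-competent group intersect. As a hypothetical sketch, assuming normally distributed scores (the function name and all numbers in the example are ours; the study's standard came from its own data):

```python
import math

def normal_pdf(x, m, s):
    """Density of a normal distribution with mean m and SD s."""
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def contrasting_groups_cutoff(m_pass, s_pass, m_fail, s_fail):
    """Score at which the two groups' normal curves intersect, between the means."""
    # Setting normal_pdf(x, m_pass, s_pass) = normal_pdf(x, m_fail, s_fail)
    # and taking logs yields the quadratic a*x^2 + b*x + c = 0:
    a = 1 / s_pass ** 2 - 1 / s_fail ** 2
    b = -2 * (m_pass / s_pass ** 2 - m_fail / s_fail ** 2)
    c = (m_pass ** 2 / s_pass ** 2 - m_fail ** 2 / s_fail ** 2
         + 2 * math.log(s_pass / s_fail))
    if abs(a) < 1e-12:                 # equal SDs: curves cross at the midpoint
        return -c / b
    disc = math.sqrt(b * b - 4 * a * c)
    roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
    lo, hi = sorted((m_fail, m_pass))
    return next(r for r in roots if lo <= r <= hi)
```

With equal spreads the cutoff is simply the midpoint of the two group means; with unequal spreads it shifts toward the tighter distribution.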
Affiliation(s)
- Lars Konge: Centre for Clinical Education, University of Copenhagen and The Capital Region of Denmark, Copenhagen, Denmark
- Charlotte Ringsted: The Wilson Centre and Dept of Anesthesiology, University of Toronto and University Health Network, Toronto, ON, Canada
- Valentina Minddal: Dept of Pulmonology, Gentofte Hospital, University of Copenhagen, Hellerup, Denmark
- Klaus Richter Larsen: Dept of Pulmonology, Bispebjerg Hospital, University of Copenhagen, Copenhagen, Denmark
- Jouke T Annema: Dept of Pulmonology, Leiden University Medical Center, Leiden, The Netherlands; Dept of Pulmonology, Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands
110
Watanabe Y, Bilgic E, Lebedeva E, McKendy KM, Feldman LS, Fried GM, Vassiliou MC. A systematic review of performance assessment tools for laparoscopic cholecystectomy. Surg Endosc 2015; 30:832-44. PMID: 26092014; DOI: 10.1007/s00464-015-4285-8.
Abstract
BACKGROUND Multiple tools are available to assess clinical performance of laparoscopic cholecystectomy (LC), but there are no guidelines on how best to implement and interpret them in educational settings. The purpose of this systematic review was to identify and critically appraise LC assessment tools and their measurement properties in order to make recommendations for their implementation in surgical training. METHODS A systematic search (1989-2013) was conducted in MEDLINE, Embase, Scopus, Cochrane, and grey literature sources. Evidence for validity (content, response process, internal structure, relations to other variables, and consequences) and the conditions under which the evidence was obtained were evaluated. RESULTS A total of 54 articles were included for qualitative synthesis. Fifteen technical skills and two non-technical skills assessment tools were identified. The 17 tools were used for recorded procedures (nine tools, 60%), direct observation (five tools, 30%), or both (three tools, 18%). Fourteen (82%) tools reported inter-rater reliability, and one reported a generalizability theory coefficient. Nine (53%) had evidence for validity based on clinical experience, and 11 (65%) compared scores with other assessments. Consequences of scores, educational impact, applications to residency training, and how raters were trained were not clearly reported, and no studies mentioned cost. CONCLUSIONS The most commonly reported validity evidence was inter-rater reliability and relationships to other known variables. Consequences of assessments and rater training were not clearly reported. These data and the evidence for validity should be taken into consideration when deciding how to select and implement a tool to assess performance of LC, and especially how to interpret the results.
Affiliation(s)
- Yusuke Watanabe: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Avenue, L9.316, Montreal, QC, H3G 1A4, Canada; Department of Gastroenterological Surgery II, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido, Japan
- Elif Bilgic: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Ekaterina Lebedeva: The Henry K.M. De Kuyper Education Centre, McGill University Health Centre, Montreal, QC, Canada
- Katherine M McKendy: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Liane S Feldman: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Gerald M Fried: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
- Melina C Vassiliou: Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, QC, Canada
111
Stefanidis D, Sevdalis N, Paige J, Zevin B, Aggarwal R, Grantcharov T, Jones DB. Simulation in Surgery. Ann Surg 2015; 261:846-53. DOI: 10.1097/sla.0000000000000826.
112
Dedy NJ, Szasz P, Louridas M, Bonrath EM, Husslein H, Grantcharov TP. Objective structured assessment of nontechnical skills: Reliability of a global rating scale for the in-training assessment in the operating room. Surgery 2015; 157:1002-13. PMID: 25704419; DOI: 10.1016/j.surg.2014.12.023.
Abstract
BACKGROUND Nontechnical skills are critical for patient safety in the operating room (OR). As a result, regulatory bodies for accreditation and certification have mandated the integration of these competencies into postgraduate education. A generally accepted approach to the in-training assessment of nontechnical skills, however, is lacking. The goal of the present study was to develop an evidence-based and reliable tool for the in-training assessment of residents' nontechnical performance in the OR. METHODS The Objective Structured Assessment of Nontechnical Skills tool was designed as a 5-point global rating scale with descriptive anchors for each item, based on existing evidence-based frameworks of nontechnical skills, as well as resident training requirements. The tool was piloted on scripted videos and refined in an iterative process. The final version was used to rate residents' performance in recorded OR crisis simulations and during live observations in the OR. RESULTS A total of 37 simulations and 10 live procedures were rated. Interrater agreement was good for total mean scores, both in simulation and in the real OR, with intraclass correlation coefficients >0.90 in all settings for average and single measures. Internal consistency of the scale was high (Cronbach's alpha = 0.80). CONCLUSION The Objective Structured Assessment of Nontechnical Skills global rating scale was developed as an evidence-based tool for the in-training assessment of residents' nontechnical performance in the OR. Unique descriptive anchors allow for a criterion-referenced assessment of performance. Good reliability was demonstrated in different settings, supporting applications in research and education.
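The internal consistency reported above (Cronbach's alpha = 0.80) summarizes how strongly the scale's items vary together across ratings. A minimal illustrative computation follows; the function and the toy data in the example are ours, not the study's ratings:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is one list of respondent scores per scale item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

When every item orders respondents identically, the item variances account for a minimal share of the total-score variance and alpha approaches 1.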
Affiliation(s)
- Nicolas J Dedy: Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Peter Szasz: Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Marisa Louridas: Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Esther M Bonrath: Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Heinrich Husslein: Department of Obstetrics and Gynaecology, St. Michael's Hospital, University of Toronto, Toronto, Canada
- Teodor P Grantcharov: Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
113
Ghaderi I, Auvergne L, Park YS, Farrell TM. Quantitative and qualitative analysis of performance during advanced laparoscopic fellowship: a curriculum based on structured assessment and feedback. Am J Surg 2015; 209:71-8. DOI: 10.1016/j.amjsurg.2014.08.029.
114
Wagner JP, Chen DC, Donahue TR, Quach C, Hines OJ, Hiatt JR, Tillou A. Assessment of resident operative performance using a real-time mobile Web system: preparing for the milestone age. J Surg Educ 2014; 71:e41-e46. PMID: 25037504; DOI: 10.1016/j.jsurg.2014.06.008.
Abstract
OBJECTIVE To satisfy trainees' operative competency requirements while improving feedback validity and timeliness using a mobile Web-based platform. DESIGN The Southern Illinois University Operative Performance Rating Scale (OPRS) was embedded into a website formatted for mobile devices. From March 2013 to February 2014, faculty members were instructed to complete the OPRS form while providing verbal feedback to the operating resident at the conclusion of each procedure. Submitted data were compiled automatically within a secure Web-based spreadsheet. Conventional end-of-rotation performance (CERP) evaluations filed 2006 to 2013 and OPRS performance scores were compared by year of training using serial and independent-samples t tests. The mean CERP scores and OPRS overall resident operative performance scores were directly compared using a linear regression model. OPRS mobile site analytics were reviewed using a Web-based reporting program. SETTING Large university-based general surgery residency program. PARTICIPANTS General Surgery faculty used the mobile Web OPRS system to rate resident performance. Residents and the program director reviewed evaluations semiannually. RESULTS Over the study period, 18 faculty members and 37 residents logged 176 operations using the mobile OPRS system. There were 334 total OPRS website visits. Median time to complete an evaluation was 45 minutes from the end of the operation, and faculty spent an average of 134 seconds on the site to enter 1 assessment. In the 38,506 CERP evaluations reviewed, mean performance scores showed a positive linear trend of 2% change per year of training (p = 0.001). OPRS overall resident operative performance scores showed a significant linear (p = 0.001), quadratic (p = 0.001), and cubic (p = 0.003) trend of change per year of clinical training, reflecting the resident operative experience in our training program. Differences between postgraduate year-1 and postgraduate year-5 overall performance scores were greater with the OPRS (mean = 0.96, CI: 0.55-1.38) than with CERP measures (mean = 0.37, CI: 0.34-0.41). Additionally, there were consistent increases in each of the OPRS subcategories. CONCLUSIONS In contrast to CERPs, the OPRS fully satisfies the Accreditation Council for Graduate Medical Education and American Board of Surgery operative assessment requirements. The mobile Web platform provides a convenient interface, broad accessibility, automatic data compilation, and compatibility with common database and statistical software. Our mobile OPRS system encourages candid feedback dialog and generates a comprehensive review of individual and group-wide operative proficiency in real time.
Affiliation(s)
- Justin P Wagner, David C Chen, Timothy R Donahue, Chi Quach, O Joe Hines, Jonathan R Hiatt, Areti Tillou: David Geffen School of Medicine, University of California, Los Angeles, California