1
Jones-Carr ME, McLeod C, Baker S, Lindeman B. Framing our Expectations: Variability in Entrustable Professional Activity Assessments. J Surg Educ 2024; 81:1355-1361. [PMID: 39163720 DOI: 10.1016/j.jsurg.2024.07.025]
Abstract
OBJECTIVE To determine the ability of surgical trainees and faculty to correctly interpret the entrustability of a resident learner in a modeled patient care scenario.
DESIGN Prospective study using a web-based survey with 4 previously recorded short videos of resident learners targeted to specific levels of the American Board of Surgery's (ABS) Entrustment Scale. Respondents were asked to choose the entrustment level that best corresponded to their observations of the learner in the video. Responses were subcategorized into low and high entrustment.
SETTING Online, using the Qualtrics survey platform.
PARTICIPANTS US surgical trainees and surgical faculty, recruited via email and social media. We received 31 complete responses and 2 responses that completed > 1 video assessment question without demographic information (n = 33). Respondents included 10 trainees (32%) and 21 attending surgeons (68%).
RESULTS Neither faculty nor trainees readily identified the targeted entrustment level for Question 1 (preoperative care of a patient with acute appendicitis with high entrustment, 36% correct), though evaluations of the remaining questions (2 through 4) were more accurate (70%, 84%, and 75% correct, respectively). Faculty were more readily able than trainees to identify low entrustment (level Limited Participation) in intraoperative inguinal hernia repair (95% vs 60%, p = 0.03). After subcategorization into high and low entrustment, both residents and faculty identified entrustment accurately 95% of the time overall.
CONCLUSIONS Both trainees and attending surgeons were able to identify high- and low-performing residents in short video demonstrations using the ABS EPA entrustment scale. This provides additional evidence supporting the need for frequent observations of EPAs to account for variability in raters' perceptions as well as the complexity of clinical scenarios. Frame-of-reference training via a video-based platform may also be beneficial for both residents and faculty as an ongoing EPA implementation strategy.
Affiliation(s)
- Chandler McLeod
- Department of Surgery, University of Alabama at Birmingham, Birmingham, AL
- Samantha Baker
- Department of Surgery, Louisiana State University Health Sciences Center, New Orleans, LA
- Brenessa Lindeman
- Department of Surgery, University of Alabama at Birmingham, Birmingham, AL
2
Berrens AC, Scheltema M, Maurer T, Herrmann K, Hamdy FC, Knipper S, Dell'Oglio P, Mazzone E, de Barros HA, Sorger JM, van Oosterom MN, Stricker PD, van Leeuwen PJ, Rietbergen DDD, Valdes Olmos RA, Vidal-Sicart S, Carroll PR, Buckle T, van der Poel HG, van Leeuwen FWB. Delphi consensus project on prostate-specific membrane antigen (PSMA)-targeted surgery: outcomes from an international multidisciplinary panel. Eur J Nucl Med Mol Imaging 2024; 51:2893-2902. [PMID: 38012448 DOI: 10.1007/s00259-023-06524-6]
Abstract
PURPOSE Prostate-specific membrane antigen (PSMA) is increasingly considered as a molecular target to achieve precision surgery for prostate cancer. A Delphi consensus was conducted to explore expert views in this emerging field and to identify knowledge and evidence gaps as well as unmet research needs that may help change practice and improve oncological outcomes for patients.
METHODS One hundred and five statements (scored on a 9-point Likert scale) were distributed through SurveyMonkey®. Following evaluation, a second round was performed to evaluate consensus (16 statements; 89% response rate). Consensus was defined using the disagreement index, assessed by the research and development (RAND)/University of California, Los Angeles (UCLA) appropriateness method.
RESULTS Eighty-six panelists (72.1% clinicians, 8.1% industry, 15.1% scientists, and 4.7% other) participated, most with a urological background (57.0%), followed by nuclear medicine (22.1%). Consensus was obtained on the following: (1) the diagnostic PSMA-ligand PET/CT should ideally be acquired < 1 month before surgery; 1-3 months is acceptable; (2) a 16-20-h interval between injection of the tracer and surgery seems to be preferred; (3) PSMA targeting is most valuable for identification of nodal metastases; (4) gamma, fluorescence, and hybrid imaging are the preferred guidance technologies; and (5) randomized controlled clinical trials are required to define oncological value. Regarding surgical margin assessment, the view on the value of PSMA-targeted surgery was neutral or inconclusive. A high rate of "cannot answer" responses indicates that further study is necessary to address knowledge gaps (e.g., Cerenkov or beta-emissions).
CONCLUSIONS This Delphi consensus provides guidance for clinicians and researchers who implement or develop PSMA-targeted surgery technologies. Ultimately, however, the consensus should be backed by randomized clinical trial data before it is implemented within guidelines.
Affiliation(s)
- Anne-Claire Berrens
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Matthijs Scheltema
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Department of Urology, Amsterdam University Medical Center, Location VUmc, Amsterdam, The Netherlands
- Tobias Maurer
- Martini-Klinik Prostate Cancer Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Urology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ken Herrmann
- Department of Nuclear Medicine, University of Duisburg-Essen, German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany
- National Center for Tumor Diseases (NCT), NCT West, Heidelberg, Germany
- Freddie C Hamdy
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
- Sophie Knipper
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Paolo Dell'Oglio
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Elio Mazzone
- Unit of Urology/Division of Oncology, Gianfranco Soldera Prostate Cancer Laboratory, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Hilda A de Barros
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Matthias N van Oosterom
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Philip D Stricker
- Department of Urology, St Vincent's Hospital Sydney, Sydney, Australia
- St Vincent's Prostate Cancer Research Center Sydney, Sydney, Australia
- Garvan Institute Sydney, Sydney, Australia
- Pim J van Leeuwen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Daphne D D Rietbergen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Department of Nuclear Medicine, Leiden University Medical Center, Leiden, The Netherlands
- Renato A Valdes Olmos
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Sergi Vidal-Sicart
- Department of Nuclear Medicine, Hospital Clínic Barcelona, Barcelona, Spain
- Peter R Carroll
- Department of Urology, University of California, San Francisco, CA, USA
- Tessa Buckle
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Henk G van der Poel
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Department of Urology, Amsterdam University Medical Center, Location VUmc, Amsterdam, The Netherlands
- Fijs W B van Leeuwen
- Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
3
Olsen RG, Svendsen MBS, Tolsgaard MG, Konge L, Røder A, Bjerrum F. Automated performance metrics and surgical gestures: two methods for assessment of technical skills in robotic surgery. J Robot Surg 2024; 18:297. [PMID: 39068261 PMCID: PMC11283394 DOI: 10.1007/s11701-024-02051-0]
Abstract
The objective of this study was to compare automated performance metrics (APM) and surgical gestures for technical skills assessment during simulated robot-assisted radical prostatectomy (RARP). Ten novices and six experienced RARP surgeons performed simulated RARPs on the RobotiX Mentor (Surgical Science, Sweden). Simulator APM were automatically recorded, and surgical videos were manually annotated with five types of surgical gestures. The consequences of the pass/fail levels, which were based on the contrasting groups method, were compared for APM and surgical gestures. Intra-class correlation coefficient (ICC) analysis and a Bland-Altman plot were used to explore the correlation between APM and surgical gestures. Pass/fail levels for both APM and surgical gestures could fully distinguish between the skill levels of the surgeons with a specificity and sensitivity of 100%. The overall ICC (one-way, random) was 0.70 (95% CI: 0.34-0.88), showing moderate agreement between the methods. The Bland-Altman plot showed high agreement between the two methods when assessing experienced surgeons but disagreement on the novice surgeons' skill level. APM and surgical gestures could both fully distinguish between novices and experienced surgeons in a simulated setting. Both methods of analyzing technical skills have their advantages and disadvantages, and, as of now, both are available only to a limited extent in the clinical setting. Developing assessment methods in a simulated setting enables testing before implementation in a clinical setting.
Affiliation(s)
- Rikke Groth Olsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Røder
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Gastrounit, Surgical Section, Copenhagen University Hospital-Amager and Hvidovre, Hvidovre, Denmark
4
Knigin D, Brezinov Y, Salvador S, Lau S, Gotlieb WH. Surgery Advances in Gynecologic Tumors: The Evolution and Outcomes of Robotic Surgery for Gynecologic Cancers in a Tertiary Center. Curr Oncol 2024; 31:2400-2409. [PMID: 38785460 PMCID: PMC11120242 DOI: 10.3390/curroncol31050179]
Abstract
The integration of innovation into routine clinical practice faces many challenges. In 2007, we received the mandate to evaluate how the introduction of a robotic program in gynecologic oncology affected patient-centered care by studying its impact on clinical outcomes and hospital resource utilization. Here we summarize the history and experience of developing a robotic surgery program for gynecologic cancers over 16 years. Analysis of the data indicates that robotic surgery improved perioperative clinical parameters; decreased blood loss, complications, and hospital stay; maintained oncologic outcomes; and was cost-effective, leading it to become the dominant surgical approach in gynecologic oncology in a tertiary cancer care institution.
Affiliation(s)
- David Knigin
- Division of Gynecologic Oncology, Jewish General Hospital, McGill University, Montreal, QC H3T 1E2, Canada
- Segal Cancer Center, Sir Mortimer B. Davis Institute of Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
- Yoav Brezinov
- Segal Cancer Center, Sir Mortimer B. Davis Institute of Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
- Shannon Salvador
- Division of Gynecologic Oncology, Jewish General Hospital, McGill University, Montreal, QC H3T 1E2, Canada
- Segal Cancer Center, Sir Mortimer B. Davis Institute of Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
- Susie Lau
- Division of Gynecologic Oncology, Jewish General Hospital, McGill University, Montreal, QC H3T 1E2, Canada
- Segal Cancer Center, Sir Mortimer B. Davis Institute of Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
- Walter H. Gotlieb
- Division of Gynecologic Oncology, Jewish General Hospital, McGill University, Montreal, QC H3T 1E2, Canada
- Segal Cancer Center, Sir Mortimer B. Davis Institute of Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
5
Yiu A, Lam K, Simister C, Clarke J, Kinross J. Adoption of routine surgical video recording: a nationwide freedom of information act request across England and Wales. EClinicalMedicine 2024; 70:102545. [PMID: 38685926 PMCID: PMC11056472 DOI: 10.1016/j.eclinm.2024.102545]
Abstract
Background Surgical video contains data with significant potential to improve surgical outcome assessment, quality assurance, education, and research. Current utilisation of surgical video recording is unknown, and related policies/governance structures are unclear.
Methods A nationwide Freedom of Information (FOI) request concerning surgical video recording, technology, consent, access, and governance was sent to all acute National Health Service (NHS) trusts/boards in England/Wales between 20th February and 20th March 2023.
Findings 140/144 (97.2%) trusts/boards in England/Wales responded to the FOI request. Surgical procedures were routinely recorded in 22 trusts/boards. The median estimate of consultant surgeons routinely recording their procedures was 20%. Surgical video was stored on internal systems (n = 27), third-party products (n = 29), and both (n = 9). 32/140 (22.9%) trusts/boards ask for consent to record procedures as part of routine care. Consent for recording included non-clinical purposes in 55/140 (39.3%) trusts/boards. Policies for surgeon/patient access to surgical video were available in 48/140 (34.3%) and 32/140 (22.9%) trusts/boards, respectively. Surgical video was used for non-clinical purposes in 64/140 (45.7%) trusts/boards. Governance policies covering surgical video recording, use, and/or storage were available from 59/140 (42.1%) trusts/boards.
Interpretation There is significant heterogeneity in surgical video recording practices in England and Wales. A minority of trusts/boards routinely record surgical procedures, with large variation in recording/storage practices indicating scope for NHS-wide coordination. Revision of surgical video consent, accessibility, and governance policies should be prioritised by trusts/boards to protect key stakeholders. Increased availability of surgical video is essential for patients and surgeons to maximally benefit from the ongoing digital transformation of surgery.
Funding KL is supported by an NIHR Academic Clinical Fellowship and acknowledges infrastructure support for this research from the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre (BRC).
Affiliation(s)
- Andrew Yiu
- Department of Surgery and Cancer, Imperial College London, UK
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, UK
- Jonathan Clarke
- Department of Surgery and Cancer, Imperial College London, UK
- James Kinross
- Department of Surgery and Cancer, Imperial College London, UK
6
Deol ES, Tollefson MK, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Mynderse LA, Lomas DJ, Avant RA, Miller AR, Elliott DS, Boorjian SA, Wolf T, Asselmann D, Khanna A. Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities. Front Artif Intell 2024; 7:1375482. [PMID: 38525302 PMCID: PMC10958784 DOI: 10.3389/frai.2024.1375482]
Abstract
Objective Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements.
Materials and methods Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then used to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, using a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard.
Results A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%).
Conclusion We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
Affiliation(s)
- Ekamjit S. Deol
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Maya Zohar
- theator.io, Palo Alto, CA, United States
- Omri Bar
- theator.io, Palo Alto, CA, United States
- Derek J. Lomas
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Ross A. Avant
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Adam R. Miller
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Tamir Wolf
- theator.io, Palo Alto, CA, United States
- Abhinav Khanna
- Department of Urology, Mayo Clinic, Rochester, MN, United States
7
Zuluaga L, Rich JM, Gupta R, Pedraza A, Ucpinar B, Okhawere KE, Saini I, Dwivedi P, Patel D, Zaytoun O, Menon M, Tewari A, Badani KK. AI-powered real-time annotations during urologic surgery: The future of training and quality metrics. Urol Oncol 2024; 42:57-66. [PMID: 38142209 DOI: 10.1016/j.urolonc.2023.11.002]
Abstract
INTRODUCTION AND OBJECTIVE Real-time artificial intelligence (AI) annotation of the surgical field has the potential to automatically extract information from surgical videos, helping to create a robust surgical atlas. This content can be used for surgical education and quality initiatives. We demonstrate the first use of AI in urologic robotic surgery to capture live surgical video and annotate key surgical steps and safety milestones in real time.
SUMMARY BACKGROUND DATA While AI models can generate automated annotations from collections of video images, the real-time implementation of such technology in urological robotic surgery to aid surgeons and training staff has yet to be studied.
METHODS We conducted an educational symposium, which broadcast 2 live procedures: a robotic-assisted radical prostatectomy (RARP) and a robotic-assisted partial nephrectomy (RAPN). A surgical AI platform (Theator, Palo Alto, CA) generated real-time annotations and identified operative safety milestones. This was achieved through trained algorithms, conventional video recognition, and novel Video Transfer Network technology, which captures clips in full context, enabling automatic recognition and surgical mapping in real time.
RESULTS Real-time AI annotations for procedure #1, RARP, are found in Table 1. The safety milestone annotations included the apical safety maneuver and deliberate views of structures such as the external iliac vessels and the obturator nerve. Real-time AI annotations for procedure #2, RAPN, are found in Table 1. Safety milestones included deliberate views of structures such as the gonadal vessels and the ureter. AI-annotated surgical events included intraoperative ultrasound, temporary clip application and removal, hemostatic powder application, and notable hemorrhage.
CONCLUSIONS For the first time, surgical intelligence successfully showcased real-time AI annotations of 2 separate urologic robotic procedures during a live telecast. These annotations may provide the technological framework for sending automatic notifications to clinical or operational stakeholders. This technology is a first step toward real-time intraoperative decision support, leveraging big data to improve the quality of surgical care, potentially improve surgical outcomes, and support training and education.
Affiliation(s)
- Laura Zuluaga
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Jordan Miller Rich
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Raghav Gupta
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Adriana Pedraza
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Burak Ucpinar
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Kennedy E Okhawere
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Indu Saini
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Priyanka Dwivedi
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Dhruti Patel
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Osama Zaytoun
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ashutosh Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ketan K Badani
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
8
El-Sayed C, Yiu A, Burke J, Vaughan-Shaw P, Todd J, Lin P, Kasmani Z, Munsch C, Rooshenas L, Campbell M, Bach SP. Measures of performance and proficiency in robotic assisted surgery: a systematic review. J Robot Surg 2024; 18:16. [PMID: 38217749 DOI: 10.1007/s11701-023-01756-y]
Abstract
Robotic assisted surgery (RAS) has seen a global rise in adoption. Despite this, there is neither a standardised training curriculum nor a standardised measure of performance. We performed a systematic review across the surgical specialties in RAS and evaluated tools used to assess surgeons' technical performance. Following the PRISMA 2020 guidelines, PubMed, Embase and the Cochrane Library were searched systematically for full texts published between January 2020 and January 2022. Observational studies and RCTs were included; review articles and systematic reviews were excluded. Quality and risk of bias were assessed using the Newcastle-Ottawa Scale for the observational studies and the Cochrane risk-of-bias tool for the RCTs. The initial search yielded 1189 papers, of which 72 met the eligibility criteria. 27 unique performance metrics were identified. Global assessments were the most common assessment tools (n = 13); the most used was GEARS (Global Evaluative Assessment of Robotic Skills). 11 metrics (42%) were objective tools of performance. Automated performance metrics (APMs) were the most widely used objective metrics, whilst the remaining (n = 15, 58%) were subjective. The results demonstrate variation in the tools used to assess technical performance in RAS. A large proportion of the metrics are subjective measures, which increases the risk of bias amongst users. A standardised objective metric that measures all domains of technical performance, from global to cognitive, is required. The metric should be applicable to all RAS procedures and easily implementable. APMs have demonstrated promise as widely applicable, accurate measures.
Affiliation(s)
- Charlotte El-Sayed
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- A Yiu
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Burke
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Vaughan-Shaw
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Todd
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Lin
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- Z Kasmani
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- C Munsch
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- L Rooshenas
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- M Campbell
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- S P Bach
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
9
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600 PMCID: PMC10771126 DOI: 10.1093/bjs/znad331]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula.
METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy-only or open-skills-only tools. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias.
RESULTS Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy rates in the laboratory, with 60 per cent of methods reporting accuracies over 90 per cent, compared to real surgery, where accuracies ranged from 67 to 100 per cent.
CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal: The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou: Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai: The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK
- Walaa Ghamrawi: The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos: Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis: Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins, Ashwin Sridhar, John Kelly: Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Computer Science, UCL, London, UK
- Nader K Francis: The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
10
Marcus HJ, Ramirez PT, Khan DZ, Layard Horsfall H, Hanrahan JG, Williams SC, Beard DJ, Bhat R, Catchpole K, Cook A, Hutchison K, Martin J, Melvin T, Stoyanov D, Rovers M, Raison N, Dasgupta P, Noonan D, Stocken D, Sturt G, Vanhoestenberghe A, Vasey B, McCulloch P. The IDEAL framework for surgical robotics: development, comparative evaluation and long-term monitoring. Nat Med 2024; 30:61-75. [PMID: 38242979] [DOI: 10.1038/s41591-023-02732-7]
Abstract
The next generation of surgical robotics is poised to disrupt healthcare systems worldwide, requiring new frameworks for evaluation. However, evaluation during a surgical robot's development is challenging because of its complex, evolving nature, its potential for wider system disruption, and its integration with complementary technologies such as artificial intelligence. Comparative clinical studies require attention to intervention context, learning curves and standardized outcomes, and long-term monitoring needs to transition toward collaborative, transparent and inclusive consortiums for real-world data collection. Here, the Idea, Development, Exploration, Assessment and Long-term monitoring (IDEAL) Robotics Colloquium proposes recommendations for evaluation during development, comparative study and clinical monitoring of surgical robots, providing practical guidance for developers, clinicians, patients and healthcare systems. Multiple perspectives are considered, including economics, surgical training, human factors, ethics, patient perspectives and sustainability. Further work is needed on standardized metrics, health economic assessment models and the global applicability of the recommendations.
Affiliation(s)
- Hani J Marcus: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Pedro T Ramirez: Department of Obstetrics and Gynaecology, Houston Methodist Hospital Neal Cancer Center, Houston, TX, USA
- Danyal Z Khan, Hugo Layard Horsfall, John G Hanrahan, Simon C Williams: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- David J Beard: RCS Surgical Interventional Trials Unit (SITU) & Robotic and Digital Surgery Initiative (RADAR), Nuffield Dept Orthopaedics, Rheumatology and Musculo-skeletal Sciences, University of Oxford, Oxford, UK
- Rani Bhat: Department of Gynaecological Oncology, Apollo Hospital, Bengaluru, India
- Ken Catchpole: Department of Anaesthesia and Perioperative Medicine, Medical University of South Carolina, Charleston, SC, USA
- Andrew Cook: NIHR Coordinating Centre and Clinical Trials Unit, University of Southampton, Southampton, UK
- Janet Martin: Department of Anesthesia & Perioperative Medicine, University of Western Ontario, Ontario, Canada
- Tom Melvin: Department of Medical Gerontology, School of Medicine, Trinity College Dublin, Dublin, Republic of Ireland
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), London, UK
- Maroeska Rovers: Department of Medical Imaging, Radboudumc, Nijmegen, the Netherlands
- Nicholas Raison: Department of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Prokar Dasgupta: King's Health Partners Academic Surgery, King's College London, London, UK
- Deborah Stocken: RCSEng Surgical Trials Centre, Leeds Institute of Clinical Trials Research, University of Leeds, Leeds, UK
- Anne Vanhoestenberghe: School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Baptiste Vasey: Department of Surgery, Geneva University Hospital, Geneva, Switzerland; Nuffield Department of Surgical Sciences, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Peter McCulloch: Nuffield Department of Surgical Sciences, University of Oxford, John Radcliffe Hospital, Oxford, UK
11
Chen G, Li L, Hubert J, Luo B, Yang K, Wang X. Effectiveness of a vision-based handle trajectory monitoring system in studying robotic suture operation. J Robot Surg 2023; 17:2791-2798. [PMID: 37728690] [DOI: 10.1007/s11701-023-01713-9]
Abstract
Data on surgical robots are not openly accessible, limiting further study of the operation trajectory of surgeons' hands. A trajectory monitoring system is therefore needed to examine objective indicators reflecting the characteristic parameters of operations. Twenty robotic experts and 20 first-year residents without robotic experience were included in this study, and a dry-lab suture task was used to acquire hand-performance data. Novices completed simulator training and then performed the task, while the experts completed it after a warm-up. Stitching errors were measured using a visual recognition method; videos of the operations were obtained from a camera array mounted on the robot, and the surgeons' hand trajectories were reconstructed. Stitching accuracy, robotic control parameters, balance and dexterity parameters, and operation efficiency parameters were compared. Experts had a smaller center distance (p < 0.001) and a larger proximal distance between the hands (p < 0.001) than novices. The path and volume ratios between the left and right hands of novices were larger than those of experts (both p < 0.001), and the total volume of the experts' operation range was smaller (p < 0.001). The surgeon-trajectory optical monitoring system is an effective, non-subjective method of distinguishing skill differences, demonstrating its potential for pan-platform use to evaluate task completion and help surgeons improve their robotic learning curve.
Affiliation(s)
- Gaojie Chen, Lu Li, Kun Yang, Xinghuan Wang: Department of Urology, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China; Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, Wuhan, Hubei, China
- Jacques Hubert: Department of Urology, CHRU Nancy Brabois University Hospital, Vandoeuvre-lès-Nancy, France; IADI-UL-INSERM (U1254), University Hospital, Vandoeuvre-lès-Nancy, France
- Bin Luo: Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, Wuhan, Hubei, China; State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, Hubei, China
12
Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Surgical tooltip motion metrics assessment using virtual marker: an objective approach to skill assessment for minimally invasive surgery. Int J Comput Assist Radiol Surg 2023; 18:2191-2202. [PMID: 37597089] [DOI: 10.1007/s11548-023-03007-9]
Abstract
PURPOSE Surgical skill assessment has primarily been performed using checklists or rating scales, which are prone to bias and subjectivity. To tackle this shortcoming, assessment of surgical tool motion can be implemented to objectively classify skill levels. Because of the challenges involved in motion tracking of surgical tooltips in minimally invasive surgery, formerly used assessment approaches may not be feasible for real-world skill assessment. We proposed an assessment approach based on a virtual marker on the surgical tooltip to derive the tooltip's 3D position, and introduced a novel metric for surgical skill assessment. METHODS We obtained the 3D tooltip position from markers placed on the tool handle and derived tooltip motion metrics to identify those that differentiate skill levels for objective surgical skill assessment. We proposed a new tooltip motion metric, motion inconsistency, that can assess both the skill level and the stage of skill learning. Peg transfer, dual transfer, and rubber band translocation tasks were included; nine novices, five surgical residents and five attending general surgeons participated. RESULTS Tooltip path length (p ≤ 0.007) and path length along the instrument axis (p ≤ 0.014) differed across the three skill levels in all tasks and decreased with skill level. Tooltip motion inconsistency showed significant differences among the three skill levels in the dual transfer (p ≤ 0.025) and rubber band translocation (p ≤ 0.021) tasks. Lastly, bimanual dexterity differed across the three skill levels in all tasks (p ≤ 0.012) and increased with skill level.
CONCLUSION Depth perception ability (indicated by shorter tooltip path lengths along the instrument axis), bimanual dexterity, tooltip motion consistency, and economical tooltip movements (shorter tooltip path lengths) are related to surgical skill. Our findings can contribute to objective surgical skill assessment, reducing subjectivity, bias, and associated costs.
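The path-length metrics this abstract relies on are straightforward to compute from sampled 3D tooltip positions. The following sketch is purely illustrative (toy data, not the authors' code or coordinate conventions): it computes a tooltip path length and a bimanual path ratio of the kind compared across skill levels.

```python
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total distance travelled by a tooltip, given an (N, 3) array of
    3D positions sampled over time. Shorter paths indicate more
    economical movement."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

def path_ratio(left: np.ndarray, right: np.ndarray) -> float:
    """Ratio of left- to right-hand path length; values near 1.0
    suggest balanced bimanual use."""
    return path_length(left) / path_length(right)

# Toy check: a straight line from (0,0,0) to (1,0,0) has path length 1.
t = np.linspace(0.0, 1.0, 11)
straight = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])
assert abs(path_length(straight) - 1.0) < 1e-9
```

In practice the per-axis component (path length along the instrument axis) is obtained the same way after projecting each displacement onto the axis direction.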
Affiliation(s)
- Farzad Aghazadeh, Hossein Rouhani: Department of Mechanical Engineering, 10-390 Donadeo Innovation Centre for Engineering, University of Alberta, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
- Bin Zheng: Department of Surgery, University of Alberta, Edmonton, AB, Canada
- Mahdi Tavakoli: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
13
Park B, Chi H, Park B, Lee J, Jin HS, Park S, Hyung WJ, Choi MK. Visual modalities-based multimodal fusion for surgical phase recognition. Comput Biol Med 2023; 166:107453. [PMID: 37774560] [DOI: 10.1016/j.compbiomed.2023.107453]
Abstract
Surgical workflow analysis is essential to help optimize surgery by encouraging efficient communication and use of resources. However, phase recognition performance is limited when it relies only on information about the presence of surgical instruments. To address this, we propose visual modality-based multimodal fusion for surgical phase recognition, overcoming this limited diversity of information. Using the proposed method, we extracted a visual kinematics-based index (VKI) related to instrument use, such as movement and the interrelations between instruments during surgery, and improved recognition performance using an effective convolutional neural network (CNN)-based method for fusing visual features with the VKI. The index improves understanding of a surgical procedure because it captures instrument interaction; it can be extracted in any environment, including laparoscopic surgery, and provides complementary information when system kinematics logs are erroneous. The methodology was applied to two multimodal datasets, a virtual reality (VR) simulator-based dataset (PETRAW) and a private distal gastrectomy surgery dataset, to verify that it can improve recognition performance in clinical environments. We also explored the influence of the index on recognizing each surgical phase through instrument presence and instrument trajectory. Experimental results on the distal gastrectomy video dataset validate the effectiveness of the proposed fusion approach: the relatively simple, index-incorporated fusion yields significant performance improvements over CNN-only training and trains effectively compared with Transformer-based fusion, which requires a large amount of pre-training data.
Affiliation(s)
- Bogyu Park, Hyeongyu Chi, Bokyung Park, Jiwon Lee, Hye Su Jin, Min-Kook Choi: AI Dev. Group, Hutom, Dokmak-ro 279, Mapo-gu, 04151, Seoul, Republic of Korea
- Sunghyun Park: Yonsei University College of Medicine, Yonsei-ro 50, Seodaemun-gu, 03722, Seoul, Republic of Korea
- Woo Jin Hyung: AI Dev. Group, Hutom, Seoul, Republic of Korea; Yonsei University College of Medicine, Seoul, Republic of Korea
14
Kaoukabani G, Gokcal F, Fanta A, Liu X, Shields M, Stricklin C, Friedman A, Kudsi OY. A multifactorial evaluation of objective performance indicators and video analysis in the context of case complexity and clinical outcomes in robotic-assisted cholecystectomy. Surg Endosc 2023; 37:8540-8551. [PMID: 37789179] [DOI: 10.1007/s00464-023-10432-z]
Abstract
BACKGROUND The increased digitization of robotic surgical procedures enables surgeons to quantify their movements through data captured directly from the robotic system. These calculations, called objective performance indicators (OPIs), offer unprecedented detail on surgical performance. In this study, we link case- and surgical-step-specific OPIs to case complexity, surgical experience and console utilization, and post-operative clinical complications across 87 robotic cholecystectomy (RC) cases. METHODS Videos of RCs performed by a principal surgeon with and without fellows were segmented into eight surgical steps and linked to patients' clinical data. Data for OPI calculations were extracted from an Intuitive Data Recorder and the da Vinci® robotic system. Each RC case was assigned a Nassar and a Parkland grading score and categorized as standard or complex. OPIs were compared across complexity groups, console attributions, and post-surgical complication severities to determine objective relationships across variables. RESULTS Differences in the principal surgeon's camera control and head positioning metrics were observed between standard and complex cases. OPI differences between the principal surgeon and the fellow(s) were observed in standard cases, including differences in arm swapping, camera control, and clutching behaviors; differences in monopolar coagulation energy usage were also observed. Durations of selected surgical steps differed across complexities and console attributions, and additional task analyses identified adhesion removal and liver bed hemostasis as the steps most relevant to case complexity and post-surgical complications, respectively. CONCLUSION This is the first study to establish an association between OPIs, case complexity, and clinical complications in RC. We identified OPI differences in intra-operative behaviors and post-surgical complications dependent on surgeon expertise and case complexity, opening the door to more standardized assessment of teaching cases, surgical behaviors and case complexity.
Affiliation(s)
- Fahri Gokcal: Good Samaritan Medical Center, Brockton, MA, USA
- Abeselom Fanta, Xi Liu, Mallory Shields: Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
15
Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hui A, Gomez O, Cen S, Ghazi A, Miles BJ, Lau C, Davis JW, Goldenberg MG, Hung AJ. A Multi-institution Study on the Association of Virtual Reality Skills with Continence Recovery after Robot-assisted Radical Prostatectomy. Eur Urol Focus 2023; 9:1044-1051. [PMID: 37277274] [PMCID: PMC10693649] [DOI: 10.1016/j.euf.2023.05.011]
Abstract
BACKGROUND Virtual reality (VR) simulators are increasingly being used for surgical skills training. It is unclear which skills are best improved via VR, translate to live surgical skill, and influence patient outcomes. OBJECTIVE To assess surgeons in VR and live surgery using a suturing assessment tool and evaluate the association between technical skills and a clinical outcome. DESIGN, SETTING, AND PARTICIPANTS This prospective five-center study enrolled participants who completed VR suturing exercises and provided live surgical video. Graders provided skill assessments using the validated End-To-End Assessment of Suturing Expertise (EASE) suturing evaluation tool. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS A hierarchical Poisson model was used to compare skill scores among cohorts and evaluate the association of scores with clinical outcomes. Spearman's method was used to assess correlation between VR and live skills. RESULTS AND LIMITATIONS Ten novices, ten surgeons with intermediate expertise (median 64 cases, interquartile range [IQR] 6-80), and 26 expert surgeons (median 850 cases, IQR 375-3000) participated. Intermediate and expert surgeons were significantly more likely than novices to have ideal scores for the subskills needle hold angle, wrist rotation, and wrist rotation needle withdrawal (p < 0.01). For both intermediate and expert surgeons, VR and live skills were positively correlated for needle hold angle (p < 0.05). For expert surgeons, ideal scores for the VR needle hold angle and driving smoothness subskills were positively associated with 3-mo continence recovery (p < 0.05). Limitations include the size of the intermediate surgeon sample and clinical data limited to expert surgeons. CONCLUSIONS EASE can be used in VR to identify the skills trainee surgeons should improve, and technical skills that influence postoperative outcomes may be assessable in VR.
PATIENT SUMMARY This study provides insights into surgical skills that translate from virtual simulation to live surgery and that have an impact on urinary continence after robot-assisted removal of the prostate. We also highlight the usefulness of virtual reality in surgical education.
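Spearman's method, used above to correlate VR and live skill scores, is simply the Pearson correlation computed on rank vectors. A minimal self-contained sketch (illustrative toy data, not the study's) shows why it captures monotone rather than strictly linear association:

```python
def rank(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# A monotone but strongly nonlinear relation still yields rho == 1.
assert abs(spearman([1, 2, 3, 4], [10, 100, 1000, 10000]) - 1.0) < 1e-9
```

In practice a library routine (e.g. `scipy.stats.spearmanr`) would be used; the hand-rolled version is only to make the rank-transform step explicit.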
Affiliation(s)
- Timothy N Chu, Elyssa Y Wong, Runzhuo Ma, Cherine H Yang, Istabraq S Dalieh, Alvin Hui, Oscar Gomez, Mitchell G Goldenberg, Andrew J Hung: Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Steven Cen: Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Ahmed Ghazi: Department of Urology, University of Rochester, Rochester, NY, USA
- Brian J Miles: Department of Urology, Houston Methodist, Houston, TX, USA
- Clayton Lau: Department of Urology, City of Hope, Duarte, CA, USA
- John W Davis: Department of Urology, MD Anderson Cancer Center, Houston, TX, USA
16
Wong SW, Crowe P. Automated performance metrics, learning curve and robotic colorectal surgery. Int J Med Robot 2023:e2588. [PMID: 37855300] [DOI: 10.1002/rcs.2588]
Abstract
BACKGROUND The aim of this study was to evaluate the usefulness of automated performance metrics (APMs) in assessing the learning curve. METHODS A retrospective review of 85 consecutive patients who underwent total robotic colorectal surgery at a single institution between August 2020 and October 2022 was performed. Patient demographics, operation type, and APMs were collected and analysed. The cumulative summation (CUSUM) technique was used to construct learning curves for surgeon console time (SCT), fourth-arm use, clutch activation, instrument-off-screen events (number and duration), and cut electrocautery activation. RESULTS Two phases, of 50 and 35 cases, were identified from the CUSUM graph for SCT, and SCT differed significantly between the two phases (176 vs 251 min, p < 0.002). After adjustment for SCT, the APMs did not differ significantly between the phases. CONCLUSIONS Most APMs do not offer additional learning-curve information beyond SCT analysis alone.
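The CUSUM construction behind such learning curves is a running sum of deviations from a target value; a sustained change in the curve's slope marks a phase transition. A minimal sketch of the generic technique, with invented console times rather than the authors' data:

```python
def cusum(values, target):
    """Cumulative sum of deviations from a target value (e.g. the mean
    surgeon console time). A sustained slope change in the returned
    curve marks a transition between learning-curve phases."""
    s, curve = 0.0, []
    for v in values:
        s += v - target
        curve.append(s)
    return curve

# Invented console times (min): early cases are slow, later cases faster.
times = [200, 210, 190, 160, 150]
target = sum(times) / len(times)          # 182.0
curve = cusum(times, target)              # [18.0, 46.0, 54.0, 32.0, 0.0]

# The curve rises while cases run slower than target and falls once they
# speed up, so the peak index is a candidate boundary between phases.
peak_case = curve.index(max(curve)) + 1   # case 3
```

Formal change-point detection would add a decision threshold, but the peak-of-the-curve reading above is the common graphical interpretation.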
Affiliation(s)
- Shing Wai Wong, Philip Crowe: Department of General Surgery, Prince of Wales Hospital, Sydney, New South Wales, Australia; Randwick Campus, School of Clinical Medicine, The University of New South Wales, Sydney, New South Wales, Australia
17
Choksi S, Bitner DP, Carsky K, Addison P, Webman R, Andrews R, Kowalski R, Dawson M, Dronsky V, Yee A, Jarc A, Filicori F. Kinematic data profile and clinical outcomes in robotic inguinal hernia repairs: a pilot study. Surg Endosc 2023; 37:8035-8042. [PMID: 37474824] [DOI: 10.1007/s00464-023-10285-6]
Abstract
BACKGROUND Surgical training requires clinical knowledge and technical skills to operate safely and optimize clinical outcomes, but technical skills are hard to measure. The Intuitive Data Recorder (IDR; Sunnyvale, CA) allows the measurement of technical skills using objective performance indicators (OPIs) derived from kinematic event data. Our goal was to determine whether OPIs improve with surgeon experience and whether they correlate with clinical outcomes for robotic inguinal hernia repair (RIHR). METHODS The IDR was used to record RIHRs from six surgeons; data were obtained from 98 inguinal hernia repairs between February 2022 and February 2023. Patients were called on postoperative days 5-10 and asked to complete the Carolinas Comfort Scale (CCS) survey to evaluate acute clinical outcomes. A Pearson test was run to determine correlations of OPIs from the IDR with a surgeon's yearly RIHR experience and with CCS scores, and linear regression was then run for correlated OPIs. RESULTS Multiple OPIs correlated with surgeon experience. Specifically, for the task of peritoneal flap exploration, 23 OPIs were significantly correlated with surgeons' one-year RIHR case number. Total angular motion distance of the left-arm instrument had a correlation of -0.238 (95% CI -0.417, -0.042) with yearly RIHR case number, and total angular motion distance of the right-arm instrument was likewise negatively correlated (-0.242; 95% CI -0.420, -0.046). For clinical outcomes, wrist articulation at the surgeon's console correlated positively with acute sensation scores from the CCS (0.453; 95% CI 0.013, 0.746). CONCLUSIONS This study defines multiple OPIs that correlate with surgeon experience and with outcomes. Using this knowledge, surgical simulation platforms can be designed to teach surgical trainees patterns that are associated with greater surgical experience and improved postoperative outcomes.
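Correlations with confidence intervals of the form reported above (r with a 95% CI) are conventionally obtained from the Pearson coefficient plus a Fisher z-transform. The sketch below uses invented toy data, not the study's OPIs, and shows the standard textbook construction:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z=1.96):
    """Approximate 95% confidence interval for r via the Fisher
    z-transform (requires |r| < 1 and n > 3)."""
    zr = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# Toy data chosen so that r is exactly 0.8.
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)
lo, hi = fisher_ci(r, len(x))
assert abs(r - 0.8) < 1e-9 and lo < r < hi
```

With only five points the interval is very wide, which is exactly why the study's CIs (computed over ~98 repairs) are the quantity to report rather than r alone.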
Affiliation(s)
- Sarah Choksi, Daniel P Bitner, Katherine Carsky, Poppy Addison: Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA
- Rachel Webman, Robert Andrews, Rebecca Kowalski, Matthew Dawson, Valery Dronsky: Zucker School of Medicine at Hofstra/Northwell Health, 5000 Hofstra Blvd, Hempstead, NY, 11549, USA
- Filippo Filicori: Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, New York, NY, USA; Zucker School of Medicine at Hofstra/Northwell Health, Hempstead, NY, USA
|
18
|
Sinha A, West A, Vasdev N, Sooriakumaran P, Rane A, Dasgupta P, McKirdy M. Current practises and the future of robotic surgical training. Surgeon 2023; 21:314-322. [PMID: 36932015] [DOI: 10.1016/j.surge.2023.02.006]
Abstract
INTRODUCTION This study reviews the current state of robotic surgery training for surgeons, including the various curricula, training methods, and tools available, as well as the challenges and limitations of these. METHODS The authors carried out a literature search across PubMed, MEDLINE, and Google Scholar using keywords related to 'robotic surgery', 'computer-assisted surgery', 'simulation', 'virtual reality', 'surgical training', and 'surgical education'. Full text analysis was performed on 112 articles. TRAINING PROGRAMMES The training program for robotic surgery should focus on proficiency, deliberation, and distribution principles. The curricula can be broadly split up into pre-console and console-side training. Pre-Console and Console-Side Training: Simulation training is an important aspect of robotic surgery training to improve technical skill acquisition and reduce mental workload, which helps prepare trainees for live procedures. OPERATIVE PERFORMANCE ASSESSMENT The study also discusses the various validated assessment tools used for operative performance assessments. FUTURE ADVANCES Finally, the authors propose potential future directions for robotic surgery training, including the use of emerging technologies such as AI and machine learning for real-time feedback, remote mentoring, and augmented reality platforms like Proximie to reduce costs and overcome geographic limitations. CONCLUSION Standardisation in trainee performance assessment is needed. Each of the robotic curricula and platforms has strengths and weaknesses. The ERUS Robotic Curriculum represents an evidence-based example of how to implement training from novice to expert. Remote mentoring and augmented reality platforms can overcome the challenges of high equipment costs and limited access to experts. Emerging technologies offer promising advancements for real-time feedback and immersive training environments, improving patient outcomes.
Affiliation(s)
- Ankit Sinha
- Lister Hospital, Hertfordshire and Bedfordshire Urological Cancer Centre, Stevenage, Hertfordshire, UK
- Alexander West
- Lister Hospital, Hertfordshire and Bedfordshire Urological Cancer Centre, Stevenage, Hertfordshire, UK
- Nikhil Vasdev
- Lister Hospital, Hertfordshire and Bedfordshire Urological Cancer Centre, Stevenage, Hertfordshire, UK; University of Hertfordshire, School of Life and Medical Sciences, Hatfield, Hertfordshire, UK
- Abhay Rane
- East Surrey Hospital, Redhill, Surrey, UK
- Prokar Dasgupta
- MRC Centre for Transplantation, King's College London, King's Health Partners, Department of Urology, London, UK
- Michael McKirdy
- Royal College of Physicians and Surgeons of Glasgow, Glasgow, UK

19
Oh D, Brown K, Yousaf S, Nesbitt J, Feins R, Sancheti M, Lin J, Yang S, D'Souza D, Jarc A. Differences Between Attending and Trainee Surgeon Performance Using Objective Performance Indicators During Robot-Assisted Lobectomy. Innovations: Technology and Techniques in Cardiothoracic and Vascular Surgery 2023; 18:479-488. [PMID: 37830765] [DOI: 10.1177/15569845231204607]
Abstract
OBJECTIVE Existing approaches for assessing surgical performance are subjective and prone to bias. In contrast, utilizing digital kinematic and system data from the surgical robot allows the calculation of objective performance indicators (OPIs) that may differentiate technical skill and competency. This study compared OPIs of trainees and attending surgeons to assess differences during robotic lobectomy (RL). METHODS There were 50 cardiothoracic surgery residents and 7 attending surgeons who performed RL on a left upper lobectomy of an ex vivo perfused model. A novel recorder simultaneously captured video and data from the system and instruments. The lobectomy was annotated into discrete tasks, and OPIs were analyzed for both hands during 6 tasks: exposure of the superior pulmonary vein, upper division of the pulmonary artery and bronchus, and the stapling of these structures. RESULTS There were significant differences between attendings and trainees in all tasks. Among 20 OPIs during exposure tasks, significant differences were observed for the left hand in 31 of 60 (52%) of OPIs and for the right hand in 42 of 60 (70%). During stapling tasks, significant differences were observed for the stapling hand in 28 of 60 (47%) of OPIs and for the nonstapling hand in 14 of 60 (25%). CONCLUSIONS Use of a novel data and video recorder to generate OPIs for both hands revealed significant differences in the operative gestures performed by trainees compared to attendings during RL. This method of assessing performance has potential for establishing objective competency benchmarks and use for tracking progress.
Affiliation(s)
- Daniel Oh
- University of Southern California, Los Angeles, CA, USA
- Data and Analytics, Intuitive Surgical, Sunnyvale, CA, USA
- Kristen Brown
- Data and Analytics, Intuitive Surgical, Sunnyvale, CA, USA
- Sadia Yousaf
- Data and Analytics, Intuitive Surgical, Sunnyvale, CA, USA
- Richard Feins
- University of North Carolina at Chapel Hill, NC, USA
- Jules Lin
- University of Michigan, Ann Arbor, MI, USA
- Anthony Jarc
- Data and Analytics, Intuitive Surgical, Sunnyvale, CA, USA

20
Rodler S, Kidess MA, Westhofen T, Kowalewski KF, Belenchon IR, Taratkin M, Puliatti S, Gómez Rivas J, Veccia A, Piazza P, Checcucci E, Stief CG, Cacciamani GE. A Systematic Review of New Imaging Technologies for Robotic Prostatectomy: From Molecular Imaging to Augmented Reality. J Clin Med 2023; 12:5425. [PMID: 37629467] [PMCID: PMC10455161] [DOI: 10.3390/jcm12165425]
Abstract
New imaging technologies play a pivotal role in the current management of patients with prostate cancer. Robot-assisted radical prostatectomy (RARP) is a standard of care for localized disease, and because its console is already imaging-based, combinations of new imaging technologies with RARP, and their impact on surgical outcomes, are an active subject of research. We therefore aimed to provide a comprehensive analysis of the currently available literature on new imaging technologies for RARP. On 24 January 2023, we performed a systematic review of the literature in PubMed, Scopus and Web of Science according to the PRISMA guidelines and Oxford levels of evidence. A total of 46 studies were identified, of which 19 focus on imaging of the primary tumor, 12 on intraoperative tumor detection in lymph nodes and 15 on the training of surgeons. While the feasibility of combined approaches using new imaging technologies, including MRI, PSMA PET-CT and intraoperatively applied radioactive and fluorescent dyes, has been demonstrated, prospective confirmation of improvements in surgical outcomes is still ongoing.
Affiliation(s)
- Severin Rodler
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Marc Anwar Kidess
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Thilo Westhofen
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany
- Ines Rivero Belenchon
- Urology and Nephrology Department, Virgen del Rocío University Hospital, Manuel Siurot s/n, 41013 Seville, Spain
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 117418 Moscow, Russia
- Stefano Puliatti
- Department of Urology, University of Modena and Reggio Emilia, 42122 Modena, Italy
- Juan Gómez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Alessandro Veccia
- Urology Unit, Azienda Ospedaliera Universitaria Integrata Verona, 37126 Verona, Italy
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
- Department of Surgery, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, 10060 Turin, Italy
- Christian Georg Stief
- Department of Urology, University Hospital of Munich, 81377 Munich, Germany

21
Metchik A, Bhattacharyya K, Yousaf S, Jarc A, Oh D, Lazar JF. A novel approach to quantifying surgical workflow in robotic-assisted lobectomy. Int J Med Robot 2023:e2546. [PMID: 37466244] [DOI: 10.1002/rcs.2546]
Abstract
INTRODUCTION Understanding surgical workflow is critical for optimizing efficiencies and outcomes; however, most research evaluating workflow is impacted by observer subjectivity, limiting its reproducibility, scalability, and actionability. To address this, we developed a novel approach to quantitatively describe workflow within robotic-assisted lobectomy (RL). We demonstrate the utility of this approach by analysing features of surgical workflow that correlate with procedure duration. METHODS RL was deconstructed into 12 tasks by expert thoracic surgeons. Task start and stop times were annotated across videos of 10 upper RLs (5 right and 5 left). Markov Networks were used to estimate both the likelihood of transitioning from one task to another and each task-transition entropy (i.e. complexity). Associations between the frequency with which each task was revisited intraoperatively and procedure duration were assessed using Pearson's correlation coefficient. RESULTS Entropy calculations identified fissure dissection and hilar node dissection as tasks with especially complex transitions, while mediastinal lymph node dissection and division of pulmonary veins were less complex. The number of transitions to three tasks significantly correlated with case duration (fissure dissection (R = 0.69, p = 0.01), dissect arteries (R = 0.59, p = 0.03), and divide arteries (R = 0.63, p = 0.03)). CONCLUSION This pilot demonstrates the feasibility of objectively quantifying workflow between RL tasks and introduces entropy as a new metric of task-transition complexity. These innovative measures of surgical workflow enable detailed characterization of a given surgery and might indicate behaviour that impacts case progression. We discuss how these measures can serve as a foundation and be combined with relevant clinical information to better understand factors influencing surgical inefficiency.
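The task-transition model this abstract describes (Markov transition likelihoods plus a per-task entropy of outgoing transitions) can be estimated directly from an annotated task sequence. A minimal sketch on an invented sequence; the task names and counts are illustrative, not the study's data:

```python
import math
from collections import Counter, defaultdict

def transition_entropy(task_sequence):
    """Estimate transition probabilities between annotated tasks and the
    Shannon entropy (bits) of each task's outgoing transitions; higher
    entropy means more variable, i.e. more 'complex', transitions."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(task_sequence, task_sequence[1:]):
        counts[cur][nxt] += 1
    probs, entropy = {}, {}
    for task, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        probs[task] = {t: c / total for t, c in nxt_counts.items()}
        entropy[task] = -sum(p * math.log2(p) for p in probs[task].values())
    return probs, entropy

# Hypothetical annotated sequence of lobectomy tasks (illustrative only)
seq = ["fissure", "hilar", "fissure", "vein", "fissure", "hilar", "mediastinal"]
probs, H = transition_entropy(seq)
```

A task that is always followed by the same next task gets entropy 0, while a task with evenly split outgoing transitions gets the maximum entropy for its branching factor, which is the sense in which fissure and hilar node dissection were flagged as complex.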
Affiliation(s)
- Ariana Metchik
- Department of General Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA
- Sadia Yousaf
- Intuitive Surgical, Inc., Data and Analytics, Norcross, Georgia, USA
- Anthony Jarc
- Intuitive Surgical, Inc., Data and Analytics, Norcross, Georgia, USA
- Daniel Oh
- Division of Thoracic Surgery, University of Southern California, Los Angeles, California, USA
- John F Lazar
- Division of Thoracic Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia, USA

22
Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023. [PMID: 37389741] [PMCID: PMC10338401] [DOI: 10.1007/s00464-023-10214-7]
Abstract
BACKGROUND The increasing use of robot-assisted surgery (RAS) has created a need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons perform the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available to clinicians, which may be among the reasons AI has yet to be adopted in the clinical setting. METHOD We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons and prepared the data for use in AI following a structured guide with the steps 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS 15 participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.
Affiliation(s)
- Nasseh Hashemi
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
- ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark
- Department of Urology, Aalborg University Hospital, Aalborg, Denmark
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
- Sten Rasmussen
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Mikkel Lønborg Friis
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark

23
Kiyasseh D, Laca J, Haque TF, Otiato M, Miles BJ, Wagner C, Donoho DA, Trinh QD, Anandkumar A, Hung AJ. Human visual explanations mitigate bias in AI-based assessment of surgeon skills. NPJ Digit Med 2023; 6:54. [PMID: 36997642] [PMCID: PMC10063676] [DOI: 10.1038/s41746-023-00766-2]
Abstract
Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems (SAIS) deployed on videos of robotic surgeries from three geographically diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy, TWIX, which teaches an AI system to provide a visual explanation for its skill assessment that would otherwise have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students' skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.
Affiliation(s)
- Dani Kiyasseh
- Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA
- Jasper Laca
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Taseen F Haque
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Maxwell Otiato
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Brian J Miles
- Department of Urology, Houston Methodist Hospital, Texas, TX, USA
- Christian Wagner
- Department of Urology, Pediatric Urology and Uro-Oncology, Prostate Center Northwest, St. Antonius-Hospital, Gronau, Germany
- Daniel A Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington DC, WA, USA
- Quoc-Dien Trinh
- Center for Surgery & Public Health, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Animashree Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA
- Andrew J Hung
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA

24
Automated Capture of Intraoperative Adverse Events Using Artificial Intelligence: A Systematic Review and Meta-Analysis. J Clin Med 2023; 12:1687. [PMID: 36836223] [PMCID: PMC9963108] [DOI: 10.3390/jcm12041687]
Abstract
Intraoperative adverse events (iAEs) impact the outcomes of surgery, and yet are not routinely collected, graded, and reported. Advancements in artificial intelligence (AI) have the potential to power real-time, automatic detection of these events and disrupt the landscape of surgical safety through the prediction and mitigation of iAEs. We sought to understand the current implementation of AI in this space. A literature review was performed to PRISMA-DTA standards. Included articles were from all surgical specialties and reported the automatic identification of iAEs in real-time. Details on surgical specialty, adverse events, technology used for detecting iAEs, AI algorithm/validation, and reference standards/conventional parameters were extracted. A meta-analysis of algorithms with available data was conducted using a hierarchical summary receiver operating characteristic (SROC) curve. The QUADAS-2 tool was used to assess each article's risk of bias and clinical applicability. A total of 2982 studies were identified by searching PubMed, Scopus, Web of Science, and IEEE Xplore, with 13 articles included for data extraction. The AI algorithms detected bleeding (n = 7), vessel injury (n = 1), perfusion deficiencies (n = 1), thermal damage (n = 1), and EMG abnormalities (n = 1), among other iAEs. Nine of the thirteen articles described at least one validation method for the detection system; five explained using cross-validation and seven divided the dataset into training and validation cohorts. Meta-analysis showed the algorithms were both sensitive and specific across included iAEs (detection OR 14.74, CI 4.7-46.2). There was heterogeneity in reported outcome statistics and article bias risk. There is a need for standardization of iAE definitions, detection, and reporting to enhance surgical care for all patients. The heterogeneous applications of AI in the literature highlight the pluripotent nature of this technology. Applications of these algorithms across a breadth of urologic procedures should be investigated to assess the generalizability of these data.
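The pooled detection odds ratio this abstract reports can be illustrated with a much simpler fixed-effect inverse-variance pooling of per-study log odds ratios from 2x2 detection counts. This is a deliberate simplification of the hierarchical SROC model the review actually used, and the study counts below are invented:

```python
import math

def pooled_dor(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of per-study diagnostic odds
    ratios from (TP, FP, FN, TN) counts; returns (pooled DOR, CI low, CI high).
    Illustrative only; not the hierarchical SROC model from the review."""
    logs, weights = [], []
    for tp, fp, fn, tn in studies:
        # 0.5 continuity correction guards against zero cells
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
        log_or = math.log((tp * tn) / (fp * fn))
        var = 1 / tp + 1 / fp + 1 / fn + 1 / tn
        logs.append(log_or)
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return math.exp(pooled), math.exp(pooled - z * se), math.exp(pooled + z * se)

# Hypothetical per-study 2x2 iAE-detection counts (TP, FP, FN, TN)
studies = [(40, 5, 8, 60), (25, 4, 6, 45), (55, 10, 9, 80)]
dor, ci_lo, ci_hi = pooled_dor(studies)
```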
25
Cheikh Youssef S, Haram K, Noël J, Patel V, Porter J, Dasgupta P, Hachach-Haram N. Evolution of the digital operating room: the place of video technology in surgery. Langenbecks Arch Surg 2023; 408:95. [PMID: 36807211] [PMCID: PMC9939374] [DOI: 10.1007/s00423-023-02830-7]
Abstract
PURPOSE The aim of this review was to collate current evidence wherein digitalisation, through the incorporation of video technology and artificial intelligence (AI), is being applied to the practice of surgery. Applications are vast, and the literature investigating the utility of surgical video and its synergy with AI has steadily increased over the last 2 decades. This type of technology is widespread in other industries, such as autonomy in transportation and manufacturing. METHODS Articles were identified primarily using the PubMed and MEDLINE databases. The MeSH terms used were "surgical education", "surgical video", "video labelling", "surgery", "surgical workflow", "telementoring", "telemedicine", "machine learning", "deep learning" and "operating room". Given the breadth of the subject and the scarcity of high-level data in certain areas, a narrative synthesis was selected over a meta-analysis or systematic review to allow for a focussed discussion of the topic. RESULTS Three main themes were identified and analysed throughout this review, (1) the multifaceted utility of surgical video recording, (2) teleconferencing/telemedicine and (3) artificial intelligence in the operating room. CONCLUSIONS Evidence suggests the routine collection of intraoperative data will be beneficial in the advancement of surgery, by driving standardised, evidence-based surgical care and personalised training of future surgeons. However, many barriers stand in the way of widespread implementation, necessitating close collaboration between surgeons, data scientists, medicolegal personnel and hospital policy makers.
Affiliation(s)
- Jonathan Noël
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
- Vipul Patel
- Adventhealth Global Robotics Institute, 400 Celebration Place, Celebration, FL, USA
- James Porter
- Department of Urology, Swedish Urology Group, Seattle, WA, USA
- Prokar Dasgupta
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
- Nadine Hachach-Haram
- Department of Plastic Surgery, Guy's and St. Thomas' NHS Foundation Trust, King's Health Partners, London, UK

26
A novel assessment model for teaching robot-assisted living donor nephrectomy in abdominal transplant surgery fellowship. Am J Surg 2023; 225:420-424. [PMID: 36253318] [DOI: 10.1016/j.amjsurg.2022.09.058]
Abstract
BACKGROUND An increasing number of transplant centers have adopted robot-assisted living donor nephrectomy. Thus, a transplant fellow assessment tool is needed for promoting operative independence in an objective and safe manner. METHODS In this pilot study, data was prospectively collected on both fellow performance with focus on technique, efficiency, and communication ("overall RO-SCORE"), and operative steps ("operative steps RO-SCORE"). Robotic user performance metrics were analyzed from the da Vinci Xi system, including fellow percent active control time (ACT) and handoff counts. RESULTS From July 2020 to February 2021, twenty-one robot-assisted donor nephrectomies were performed. In regression analysis, fellow performance (based on both RO-SCOREs and robot % ACT) was significantly associated with both time and case number, with time-to-independence modelled at 8.4-14.2 months, and case number-to-independence estimated at 15-22 cases. Robot user metrics provided valid objective measures alongside RO-SCOREs. CONCLUSIONS This pilot study provides an effective assessment tool for promoting operative competency in robot-assisted donor nephrectomy among transplant fellows.
27
Azargoshasb S, Boekestijn I, Roestenberg M, KleinJan GH, van der Hage JA, van der Poel HG, Rietbergen DDD, van Oosterom MN, van Leeuwen FWB. Quantifying the Impact of Signal-to-background Ratios on Surgical Discrimination of Fluorescent Lesions. Mol Imaging Biol 2023; 25:180-189. [PMID: 35711014] [PMCID: PMC9971139] [DOI: 10.1007/s11307-022-01736-y]
Abstract
PURPOSE Surgical fluorescence guidance has gained popularity in various settings, e.g., minimally invasive robot-assisted laparoscopic surgery. In pursuit of novel receptor-targeted tracers, the field of fluorescence-guided surgery is currently moving toward increasingly lower signal intensities. This highlights the importance of understanding the impact of low fluorescence intensities on clinical decision making. This study uses kinematics to investigate the impact of signal-to-background ratios (SBR) on surgical performance. METHODS Using a custom grid exercise containing hidden fluorescent targets, a da Vinci Xi robot with Firefly fluorescence endoscope and ProGrasp and Maryland forceps instruments, we studied how the participants' (N = 16) actions were influenced by the fluorescent SBR. To monitor the surgeon's actions, the surgical instrument tip was tracked using a custom video-based tracking framework. The digitized instrument tracks were then subjected to multi-parametric kinematic analysis, allowing for the isolation of various metrics (e.g., velocity, jerkiness, tortuosity). These were incorporated in scores for dexterity (Dx), decision making (DM), overall performance (PS) and proficiency. All were related to the SBR values. RESULTS Multi-parametric analysis showed that task completion time, time spent in fluorescence-imaging mode and total pathlength are metrics that are directly related to the SBR. Below SBR 1.5, these values substantially increased, and handling errors became more frequent. The difference in Dx and DM between the targets that gave SBR < 1.50 and SBR > 1.50, indicates that the latter group generally yields a 2.5-fold higher Dx value and a threefold higher DM value. As these values provide the basis for the PS score, proficiency could only be achieved at SBR > 1.55. 
CONCLUSION By tracking the surgical instruments we were able to, for the first time, quantitatively and objectively assess how the instrument positioning is impacted by fluorescent SBR. Our findings suggest that in ideal situations a minimum SBR of 1.5 is required to discriminate fluorescent lesions, a substantially lower value than the SBR 2 often reported in literature.
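The kinematic metrics named in this abstract (path length, velocity, jerkiness) can be computed from a digitized instrument-tip track by finite differencing. A minimal sketch on synthetic 2D tracks; the sampling interval, the track shapes, and the jerk-RMS definition of jerkiness are assumptions, not the authors' exact formulation:

```python
import math

def kinematics(track, dt=1.0):
    """Path length, mean speed, and a jerk-based smoothness metric for a
    2D instrument-tip track sampled at fixed interval dt (seconds)."""
    def diff(pts):
        # finite difference: per-sample rate of change of a point sequence
        return [((b[0] - a[0]) / dt, (b[1] - a[1]) / dt) for a, b in zip(pts, pts[1:])]
    steps = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(track, track[1:])]
    path_length = sum(steps)
    vel = diff(track)          # velocity
    acc = diff(vel)            # acceleration
    jerk = diff(acc)           # jerk (3rd derivative of position)
    mean_speed = path_length / (dt * len(steps))
    jerkiness = math.sqrt(sum(jx * jx + jy * jy for jx, jy in jerk) / len(jerk)) if jerk else 0.0
    return path_length, mean_speed, jerkiness

straight = [(float(i), 0.0) for i in range(10)]        # smooth constant-velocity track
wiggly = [(float(i), (-1.0) ** i) for i in range(10)]  # oscillating track
pl_s, v_s, j_s = kinematics(straight)
pl_w, v_w, j_w = kinematics(wiggly)
```

A smooth constant-velocity track yields zero jerkiness, while the oscillating track yields both a longer path and a high jerk RMS, which is the behavioral signature that separated low-SBR from high-SBR target handling.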
Affiliation(s)
- Samaneh Azargoshasb
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Imke Boekestijn
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Meta Roestenberg
- Department of Parasitology, Leiden University Medical Center, Leiden, the Netherlands
- Department of Infectious Diseases, Leiden University Medical Center, Leiden, the Netherlands
- Gijs H KleinJan
- Department of Urology, Leiden University Medical Center, Leiden, the Netherlands
- Jos A van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Henk G van der Poel
- Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Daphne D D Rietbergen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Matthias N van Oosterom
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
- Fijs W B van Leeuwen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands

28
Ma R, Ramaswamy A, Xu J, Trinh L, Kiyasseh D, Chu TN, Wong EY, Lee RS, Rodriguez I, DeMeo G, Desai A, Otiato MX, Roberts SI, Nguyen JH, Laca J, Liu Y, Urbanova K, Wagner C, Anandkumar A, Hu JC, Hung AJ. Surgical gestures as a method to quantify surgical performance and predict patient outcomes. NPJ Digit Med 2022; 5:187. [PMID: 36550203] [PMCID: PMC9780308] [DOI: 10.1038/s41746-022-00738-y]
Abstract
How well a surgery is performed impacts a patient's outcomes; however, objective quantification of performance remains an unsolved challenge. Deconstructing a procedure into discrete instrument-tissue "gestures" is an emerging way to understand surgery. To establish this paradigm in a procedure where performance is the most important factor for patient outcomes, we identify 34,323 individual gestures performed in 80 nerve-sparing robot-assisted radical prostatectomies from two international medical centers. Gestures are classified into nine distinct dissection gestures (e.g., hot cut) and four supporting gestures (e.g., retraction). Our primary outcome is to identify factors impacting a patient's 1-year erectile function (EF) recovery after radical prostatectomy. We find that less use of hot cut and more use of peel/push are statistically associated with a better chance of 1-year EF recovery. Our results also show interactions between surgeon experience and gesture types: similar gesture selection resulted in different EF recovery rates depending on surgeon experience. To further validate this framework, two teams independently constructed distinct machine learning models using gesture sequences vs. traditional clinical features to predict 1-year EF. In both models, gesture sequences are able to better predict 1-year EF (Team 1: AUC 0.77, 95% CI 0.73-0.81; Team 2: AUC 0.68, 95% CI 0.66-0.70) than traditional clinical features (Team 1: AUC 0.69, 95% CI 0.65-0.73; Team 2: AUC 0.65, 95% CI 0.62-0.68). Our results suggest that gestures provide a granular method to objectively indicate surgical performance and outcomes. Application of this methodology to other surgeries may lead to discoveries on methods to improve surgery.
Collapse
Affiliation(s)
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Ashwin Ramaswamy
- Department of Urology, Weill Cornell Medicine, New York, NY, USA
- Jiashu Xu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Loc Trinh
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Dani Kiyasseh
- Department of Computing & Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Timothy N. Chu
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Elyssa Y. Wong
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Ryan S. Lee
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Ivan Rodriguez
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Gina DeMeo
- Department of Urology, Weill Cornell Medicine, New York, NY, USA
- Aditya Desai
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Maxwell X. Otiato
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Sidney I. Roberts
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Jasper Laca
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Yan Liu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Katarina Urbanova
- Department of Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
- Christian Wagner
- Department of Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
- Animashree Anandkumar
- Department of Computing & Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Jim C. Hu
- Department of Urology, Weill Cornell Medicine, New York, NY, USA
- Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
29
Lazar JF, Brown K, Yousaf S, Jarc A, Metchik A, Henderson H, Feins RH, Sancheti MS, Lin J, Yang S, Nesbitt J, D'Souza D, Oh DS. Objective performance indicators of cardiothoracic residents are associated with vascular injury during robotic-assisted lobectomy on porcine models. J Robot Surg 2022; 17:669-676. [PMID: 36306102] [DOI: 10.1007/s11701-022-01476-9]
Abstract
Surgical training relies on subjective feedback on resident technical performance from attending surgeons. A novel data recorder connected to a robotic-assisted surgical platform captures synchronized kinematic and video data during an operation to calculate quantitative, objective performance indicators (OPIs). The aim of this study was to determine whether OPIs during the initial task of a resident's robotic-assisted lobectomy (RL) correlated with bleeding during the procedure. Forty-six residents from the 2019 Thoracic Surgery Directors Association Resident Boot Camp completed RL on an ex vivo perfused porcine model while continuous video and kinematic data were recorded. For this pilot study, RL was segmented into 12 tasks and OPIs were calculated for the initial major task. Cases were reviewed for major bleeding events, and OPIs of bleeding cases were compared to those of cases without bleeding. Data from 42 residents were complete and included in the analysis. 10/42 residents (23.8%) encountered bleeding: 10/40 residents who started with superior pulmonary vein exposure and 0/2 residents who started with pulmonary artery exposure. Twenty OPIs for each hand were assessed during the initial task. Six OPIs related to instrument usage or smoothness of motion were significantly associated with bleeding, and the differences were statistically significant for both hands (p < 0.05). OPIs showing bimanual asymmetry indicated lower proficiency. This study demonstrates that kinematic and video analytics can establish a correlation between objective performance metrics and bleeding events in an ex vivo perfused lobectomy. Further study could assist in the development of focused exercises and simulation targeting objective domains to help improve overall performance and reduce complications during RL.
Affiliation(s)
- John F Lazar
- Department of Surgery, Division of Thoracic Surgery, MedStar Georgetown University Hospital, 110 Irving St, G-253, Washington, DC, 20010, USA
- Kristen Brown
- Data and Analytics, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Sadia Yousaf
- Data and Analytics, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Anthony Jarc
- Data and Analytics, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Ariana Metchik
- Department of General Surgery, MedStar Georgetown University Hospital, Washington, DC, USA
- Hayley Henderson
- Department of Surgery, Division of Thoracic Surgery, MedStar Georgetown University Hospital, 110 Irving St, G-253, Washington, DC, 20010, USA
- Richard H Feins
- Division of Thoracic Surgery, University of North Carolina, Chapel Hill, NC, USA
- Manu S Sancheti
- Division of Thoracic Surgery, Emory University, Atlanta, GA, USA
- Jules Lin
- Division of Thoracic Surgery, University of Michigan, Ann Arbor, MI, USA
- Stephen Yang
- Division of Thoracic Surgery, Johns Hopkins University, Baltimore, MD, USA
- Jonathan Nesbitt
- Department of Thoracic Surgery, Vanderbilt University, Nashville, TN, USA
- Desmond D'Souza
- Division of Thoracic Surgery, The Ohio State University, Columbus, OH, USA
- Daniel S Oh
- Data and Analytics, Intuitive Surgical, Inc., Sunnyvale, CA, USA; Division of Thoracic Surgery, University of Southern California, Los Angeles, CA, USA
30
Younes MM, Larkins K, To G, Burke G, Heriot A, Warrier S, Mohan H. What are clinically relevant performance metrics in robotic surgery? A systematic review of the literature. J Robot Surg 2022; 17:335-350. [PMID: 36190655] [PMCID: PMC10076398] [DOI: 10.1007/s11701-022-01457-y]
Abstract
A crucial element of any surgical training program is the ability to provide procedure-specific, objective, and reliable measures of performance. During robotic surgery, objective clinically relevant performance metrics (CRPMs) can provide tailored contextual feedback and correlate with clinical outcomes. This review aims to define CRPMs, assess their validity in robotic surgical training, and compare CRPMs to existing measures of robotic performance. A systematic search of the Medline and Embase databases was conducted in May 2022 following the PRISMA guidelines. The search terms included Clinically Relevant Performance Metrics (CRPMs) OR Clinically Relevant Outcome Measures (CROMs) AND robotic surgery. The study settings, speciality, operative context, study design, metric details, and validation status were extracted and analysed. The initial search yielded 116 citations, of which 6 were included. Citation searching identified 3 additional studies, resulting in 9 studies included in this review. Metrics were defined as CRPMs, CROMs, proficiency-based performance metrics, and reference-procedure metrics, which were developed using a modified Delphi methodology. All metrics underwent both content and construct validation. Two studies found a strong correlation with GEARS, but none correlated their metrics with patient outcome data. CRPMs are a validated and objective approach for assessing trainee proficiency. Evaluating CRPMs alongside other robotic-assessment tools will facilitate a multimodal metric evaluation approach to robotic surgery training. Further studies should assess the correlation with clinical outcomes. This review highlights that there is significant scope for the development and validation of CRPMs to establish proficiency-based progression curricula that can be translated from a simulation setting into clinical practice.
Affiliation(s)
- Melissa M Younes
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia
- Kirsten Larkins
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia; Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Gloria To
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia
- Grace Burke
- International Medical Robotics Academy, North Melbourne, VIC, Australia
- Alexander Heriot
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia; Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; International Medical Robotics Academy, North Melbourne, VIC, Australia
- Satish Warrier
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia; Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; International Medical Robotics Academy, North Melbourne, VIC, Australia; Monash University, Clayton, VIC, Australia
- Helen Mohan
- The University of Melbourne, 305 Grattan Street, Parkville, VIC, Australia; Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; Austin Health, Heidelberg, VIC, Australia
31
Sanford DI, Ma R, Ghoreifi A, Haque TF, Nguyen JH, Hung AJ. Association of Suturing Technical Skill Assessment Scores Between Virtual Reality Simulation and Live Surgery. J Endourol 2022; 36:1388-1394. [PMID: 35848509] [PMCID: PMC9587778] [DOI: 10.1089/end.2022.0158]
Abstract
Introduction: Robotic surgical performance, in particular suturing, has been linked to postoperative clinical outcomes. Before attempting live surgery, virtual reality (VR) simulators afford opportunities for training surgeons to learn fundamental technical skills. Herein, we evaluate the association of suturing technical skill assessments between VR simulation and live surgery, as well as their relationship to functional clinical outcomes. Materials and Methods: Twenty surgeons completed a VR suturing exercise on the Mimic™ Flex VR simulator and the anterior vesicourethral anastomosis during robot-assisted radical prostatectomy (RARP). Three independent and blinded graders provided technical skill scores using a validated assessment tool. Correlations between VR and live scores were assessed by Spearman's correlation coefficients (ρ). In addition, 117 historic RARP cases from participating surgeons were extracted, and the association between VR technical skill scores and urinary continence recovery was assessed by a multilevel mixed-effects model. Results: A total of 20 (6 training and 14 expert) surgeons participated. Statistically significant correlations between scores in VR simulation and live surgery were found for overall and needle driving scores (ρ = 0.555, p = 0.011; ρ = 0.570, p = 0.009, respectively). A subanalysis of training surgeons found significant correlations for overall scores between VR simulation and live surgery (ρ = 0.828, p = 0.042). Expert cases with high VR needle driving scores had significantly greater continence recovery rates at 24 months after RARP (98.5% vs 84.9%, p = 0.028). Conclusions: Our study found significant correlations in technical scores between VR and live surgery, especially among training surgeons. In addition, we found that VR needle driving scores were associated with continence recovery after RARP. Our data support the association of skill assessments between VR simulation and live surgery and the potential implications for clinical outcomes.
Affiliation(s)
- Daniel I. Sanford
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Runzhuo Ma
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Alireza Ghoreifi
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Taseen F. Haque
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jessica H. Nguyen
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Andrew J. Hung
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
32
Mohamadipanah H, Perumalla CA, Kearse LE, Yang S, Wise BJ, Goll CK, Witt AK, Korndorffer JR, Pugh CM. Do Individual Surgeon Preferences Affect Procedural Outcomes? Ann Surg 2022; 276:701-710. [PMID: 35861074] [PMCID: PMC10254571] [DOI: 10.1097/sla.0000000000005595]
Abstract
OBJECTIVES Surgeon preferences such as instrument and suture selection and idiosyncratic approaches to individual procedure steps have been largely viewed as minor differences in the surgical workflow. We hypothesized that idiosyncratic approaches could be quantified and shown to have measurable effects on procedural outcomes. METHODS At the American College of Surgeons (ACS) Clinical Congress, experienced surgeons volunteered to wear motion tracking sensors and be videotaped while evaluating a loop of porcine intestines to identify and repair 2 preconfigured, standardized enterotomies. Video annotation was used to identify individual surgeon preferences, and motion data were used to quantify surgical actions. χ2 analysis was used to determine whether surgical preferences were associated with procedure outcomes (bowel leak). RESULTS Surgeons' (N=255) preferences were categorized into 4 technical decisions. Three of the 4 technical decisions (repairing injuries together, double-layer closure, corner-stitches vs no corner-stitches) played a significant role in outcomes, P < 0.05. Running versus interrupted closure did not affect outcomes. Motion analysis revealed significant differences in average operative times (leak: 6.67 min vs no leak: 8.88 min, P = 0.0004) and work effort (leak: path length = 36.86 cm vs no leak: path length = 49.99 cm, P = 0.001). Surgeons who took the riskiest path but did not leak had better bimanual dexterity (leak = 0.21/1.0 vs no leak = 0.33/1.0, P = 0.047) and placed more sutures during the repair (leak = 4.69 sutures vs no leak = 6.09 sutures, P = 0.03). CONCLUSIONS Our results show that individual preferences affect technical decisions and play a significant role in procedural outcomes. Future analysis of more complex procedures may make major contributions to our understanding of contributors to procedure outcomes.
33
Inouye DA, Ma R, Nguyen JH, Laca J, Kocielnik R, Anandkumar A, Hung AJ. Assessing the efficacy of dissection gestures in robotic surgery. J Robot Surg 2022; 17:597-603. [PMID: 36149590] [DOI: 10.1007/s11701-022-01458-x]
Abstract
Our group previously defined a dissection gesture classification system that deconstructs robotic tissue dissection into its most elemental yet meaningful movements. The purpose of this study was to expand upon this framework by adding an assessment of gesture efficacy (ineffective, effective, or erroneous) and analyzing dissection patterns among groups of surgeons of varying experience. We defined three possible gesture efficacies: ineffective (no meaningful effect on the tissue), effective (intended effect on the tissue), and erroneous (unintended disruption of the tissue). Novices (0 prior robotic cases), intermediates (1-99 cases), and experts (≥ 100 cases) completed a robotic dissection task in a dry-lab training environment. Video recordings were reviewed to classify each gesture and determine its efficacy, and dissection patterns between groups were then analyzed. 23 participants completed the task: 9 novices, 8 intermediates with median caseload 60 (IQR 41-80), and 6 experts with median caseload 525 (IQR 413-900). For gesture selection, increasing experience was associated with an increasing proportion of overall dissection gestures (p = 0.009) and a decreasing proportion of retraction gestures (p = 0.009). For gesture efficacy, novices performed the greatest proportion of ineffective gestures (9.8%, p < 0.001), intermediates committed the greatest proportion of erroneous gestures (26.8%, p < 0.001), and the three groups performed similar proportions of overall effective gestures, though experts performed the greatest proportion of effective retraction gestures (85.6%, p < 0.001). Between groups of experience, we found significant differences in gesture selection and gesture efficacy. These relationships may provide insight into further improving surgical training.
Affiliation(s)
- Daniel A Inouye
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA, USA
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA, USA
- Jessica H Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA, USA
- Jasper Laca
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA, USA
- Rafal Kocielnik
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Anima Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
- Andrew J Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA, USA
34
Asimakopoulos AD, Annino F, Pastore AL, Carbone A, Fuschi A. Free-hand, transrectal ultrasound-guided hydrodissection of the retroprostatic space during robot-assisted radical prostatectomy: Impact on the learning curve. Urol Oncol 2022; 40:408.e1-408.e8. [DOI: 10.1016/j.urolonc.2022.06.012]
35
Lee RS, Ma R, Pham S, Maya-Silva J, Nguyen JH, Aron M, Cen S, Daneshmand S, Hung AJ. Machine Learning to Delineate Surgeon and Clinical Factors That Anticipate Positive Surgical Margins After Robot-Assisted Radical Prostatectomy. J Endourol 2022; 36:1192-1198. [PMID: 35414218] [PMCID: PMC9422786] [DOI: 10.1089/end.2021.0890]
Abstract
Purpose: Automated performance metrics (APMs), derived from instrument kinematic and systems events data during robotic surgery, are validated objective measures of surgeon performance. Our previous studies showed that APMs are strong outcome predictors of urinary continence after robot-assisted radical prostatectomy (RARP). We now use machine learning to investigate how surgeon performance (i.e., APMs) and clinical factors can predict positive surgical margins (PSMs) after RARP. Methods: We prospectively collected data of patients undergoing RARP at our institution from 2016 to 2019. A Random Forest model predicted PSMs based on 15 clinical factors and 38 APMs from 11 standardized RARP steps. The out-of-bag Gini impurity index determined the top 10 variables of importance (VOI). APMs in the top 10 VOI were assessed for confounding effects by extracapsular extension (ECE) and pathologic T (pT) stage through Poisson regression with a Generalized Estimating Equation. Results: 55/236 (23.3%) cases had PSMs. Of the 55 cases with PSMs, 9 (16.4%) were pT2 and 46 (83.6%) were pT3. The full model, including clinical factors and APMs, achieved an area under the curve (AUC) of 0.74. When assessing clinical factors or APMs alone, the model achieved AUC 0.72 and 0.64, respectively. The strongest PSM predictors were ECE and pT stage, followed by APMs in specific steps. After adjusting for ECE and pT stage, most APMs remained independent predictors of PSM. Conclusion: Using machine learning methods, we found that the strongest predictors of PSMs after RARP are nonmodifiable, disease-driven factors (ECE and pT). While APMs provide minimal additional insight into when PSMs may occur, they are nonetheless capable of independently predicting PSMs based on objective measures of surgeon performance.
Affiliation(s)
- Ryan S. Lee
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Runzhuo Ma
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Stephanie Pham
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Jacqueline Maya-Silva
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Jessica H. Nguyen
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Manju Aron
- Department of Pathology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Steven Cen
- Department of Radiology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Siamak Daneshmand
- Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Andrew J. Hung
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
36
Chen Z, An J, Wu S, Cheng K, You J, Liu J, Jiang J, Yang D, Peng B, Wang X. Surgesture: a novel instrument based on surgical actions for objective skill assessment. Surg Endosc 2022; 36:6113-6121. [PMID: 35737138] [DOI: 10.1007/s00464-022-09108-x]
Abstract
BACKGROUND Due to varied surgical skills and the lack of an efficient rating system, we developed the Surgesture, an instrument based on the elementary functional surgical gestures performed by surgeons, to serve as an objective metric for evaluating surgical performance in laparoscopic cholecystectomy (LC). METHODS We defined 14 basic LC Surgestures. Four surgeons annotated Surgestures in LC videos performed by experts and novices. The counts, durations, average action time, and dissection/exposure ratio (D/E ratio) of LC Surgestures were compared. The phase of mobilizing the hepatocystic triangle (MHT) was extracted for skill assessment by three professors using a modified Global Operative Assessment of Laparoscopic Skills (mGOALS). RESULTS The novice operation time was significantly longer than the expert operation time (58.12 ± 19.23 min vs. 26.66 ± 8.00 min, P < 0.001), particularly during the MHT phase. Novices performed significantly more Surgestures than experts with both hands (P < 0.05), and markedly more left-hand and inefficient Surgestures (P < 0.05). The experts demonstrated a significantly higher D/E ratio of duration than novices (0.79 ± 0.37 vs. 2.84 ± 1.98, P < 0.001). The counts and time pattern map of LC Surgestures during MHT demonstrated that novices tended to complete LC with more types of Surgestures and spent more time exposing the surgical scene. The performance metrics of LC Surgestures had significant but weak associations with each aspect of mGOALS. CONCLUSION The newly constructed Surgestures could serve as accessible and quantifiable metrics for demonstrating operative patterns and distinguishing surgeons of varying skill. The association between Surgestures and the Global Rating Scale lays the foundation for automated objective surgical skill evaluation.
Affiliation(s)
- Zixin Chen
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China; West China School of Medicine, Sichuan University, Chengdu, China
- Jingjing An
- Department of Operating Room, West China Hospital, Chengdu, China; West China School of Nursing, Sichuan University, Chengdu, China
- Shangdi Wu
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China; West China School of Medicine, Sichuan University, Chengdu, China
- Ke Cheng
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China; West China School of Medicine, Sichuan University, Chengdu, China
- Jiaying You
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China; West China School of Medicine, Sichuan University, Chengdu, China
- Jie Liu
- ChengDu Withai Innovations Technology Company, Chengdu, China
- Jingwen Jiang
- West China Biomedical Big Data Center of West China Hospital, Chengdu, China
- Dewei Yang
- West China Biomedical Big Data Center of West China Hospital, Chengdu, China; Chongqing University of Posts and Telecommunications, Chongqing, China
- Bing Peng
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China
- Xin Wang
- Department of Pancreatic Surgery, West China Hospital of Sichuan University, Chengdu, China
37
Artificial intelligence for renal cancer: From imaging to histology and beyond. Asian J Urol 2022; 9:243-252. [PMID: 36035341] [PMCID: PMC9399557] [DOI: 10.1016/j.ajur.2022.05.003]
Abstract
Artificial intelligence (AI) has made considerable progress within the last decade and is the subject of contemporary literature. This trend is driven by improved computational abilities and increasing amounts of complex data that allow for new approaches in analysis and interpretation. Renal cell carcinoma (RCC) has a rising incidence since most tumors are now detected at an earlier stage due to improved imaging. This creates considerable challenges, as approximately 10%-17% of kidney tumors are designated as benign in histopathological evaluation; however, certain co-morbid populations (the obese and elderly) have an increased peri-interventional risk. AI offers an alternative solution by helping to optimize precision and guidance for diagnostic and therapeutic decisions. This narrative review introduces basic principles and provides a comprehensive overview of current AI techniques for RCC. Currently, AI applications can be found in every aspect of RCC management, including diagnostics, perioperative care, pathology, and follow-up. The most commonly applied models include neural networks, random forest, support vector machines, and regression. However, for implementation in daily practice, health care providers need to develop a basic understanding and establish interdisciplinary collaborations in order to standardize datasets, define meaningful endpoints, and unify interpretation.

38
Kutana S, Bitner DP, Addison P, Chung PJ, Talamini MA, Filicori F. Objective assessment of robotic surgical skills: review of literature and future directions. Surg Endosc 2022; 36:3698-3707. [PMID: 35229215 DOI: 10.1007/s00464-022-09134-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 02/13/2022] [Indexed: 01/29/2023]
Abstract
BACKGROUND Evaluation of robotic surgical skill has become increasingly important as robotic approaches to common surgeries become more widely utilized. However, evaluation of these skills currently lacks standardization. In this paper, we aimed to review the literature on robotic surgical skill evaluation. METHODS A review of the literature on robotic surgical skill evaluation was performed, and representative literature from the past ten years is presented. RESULTS The study of reliability and validity in robotic surgical evaluation shows two main assessment categories: manual and automatic. Manual assessments have been shown to be valid but typically are time-consuming and costly. Automatic evaluation and simulation are similarly valid and simpler to implement. Initial reports on evaluation of skill using artificial intelligence platforms show validity. Few data on evaluation methods of surgical skill connect directly to patient outcomes. CONCLUSION As evaluation in surgery begins to incorporate robotic skills, a simultaneous shift from manual to automatic evaluation may occur given the ease of implementation of these technologies. Robotic platforms offer the unique benefit of providing more objective data streams, including kinematic data, which allows for precise instrument tracking in the operative field. Such data streams will likely be implemented incrementally in performance evaluations. Similarly, with advances in artificial intelligence, machine evaluation of human technical skill will likely form the next wave of surgical evaluation.
Affiliation(s)
- Saratu Kutana
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Poppy Addison
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Paul J Chung
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Mark A Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA

39
Hutchinson K, Li Z, Cantrell LA, Schenkman NS, Alemzadeh H. Analysis of executional and procedural errors in dry‐lab robotic surgery experiments. Int J Med Robot 2022; 18:e2375. [PMID: 35114732 PMCID: PMC9285717 DOI: 10.1002/rcs.2375] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 01/25/2022] [Accepted: 01/29/2022] [Indexed: 11/10/2022]
Abstract
Background Analysing kinematic and video data can help identify potentially erroneous motions that lead to sub‐optimal surgeon performance and safety‐critical events in robot‐assisted surgery. Methods We develop a rubric for identifying task and gesture‐specific executional and procedural errors and evaluate dry‐lab demonstrations of suturing and needle passing tasks from the JIGSAWS dataset. We characterise erroneous parts of demonstrations by labelling video data, and use distribution similarity analysis and trajectory averaging on kinematic data to identify parameters that distinguish erroneous gestures. Results Executional error frequency varies by task and gesture, and correlates with skill level. Some predominant error modes in each gesture are distinguishable by analysing error‐specific kinematic parameters. Procedural errors could lead to lower performance scores and increased demonstration times but also depend on surgical style. Conclusions This study provides insights into context‐dependent errors that can be used to design automated error detection mechanisms and improve training and skill assessment.
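The "distribution similarity analysis" on kinematic data described in this abstract can be illustrated with a minimal sketch. The two-sample Kolmogorov–Smirnov statistic below is one common way to compare a kinematic parameter between normal and erroneous gestures; the choice of statistic, the parameter name, and the simulated data are assumptions for illustration, not the authors' implementation.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(0)
# Hypothetical kinematic parameter (e.g., instrument tip speed) for
# normal vs. erroneous executions of one gesture.
normal = [random.gauss(0.8, 0.1) for _ in range(200)]
erroneous = [random.gauss(1.1, 0.2) for _ in range(200)]

# The statistic is near 0 for similar distributions and near 1 for
# well-separated ones, so erroneous gestures stand out.
print(ks_statistic(normal, erroneous))
```

A parameter whose distribution shifts sharply between labelled-normal and labelled-erroneous demonstrations is a candidate feature for automated error detection.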
Affiliation(s)
- Kay Hutchinson
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
- Zongyu Li
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
- Leigh A. Cantrell
- Department of Obstetrics and Gynecology, University of Virginia, Charlottesville, Virginia, USA
- Noah S. Schenkman
- Department of Urology, University of Virginia, Charlottesville, Virginia, USA
- Homa Alemzadeh
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA

40
Kato D, Namiki S, Ueda S, Takeuchi Y, Takeuchi S, Kawase M, Kawase K, Nakai C, Takai M, Iinuma K, Nakane K, Koie T. Validation of standardized training system for robot-assisted radical prostatectomy: comparison of perioperative and surgical outcomes between experienced surgeons and novice surgeons at a low-volume institute in Japan. MINIM INVASIV THER 2022; 31:1103-1111. [PMID: 35352619 DOI: 10.1080/13645706.2022.2056707] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
INTRODUCTION Although robot-assisted radical prostatectomy (RARP) has become a standard treatment modality in patients with prostate cancer (PCa), RARP is a complicated and difficult surgical procedure that carries a risk of serious surgery-related complications. This study aimed to validate a standardized training system for RARP in patients with PCa at a single institute. MATERIAL AND METHODS We retrospectively reviewed the clinical and pathological records of 155 patients with PCa who underwent RARP at Gifu University between August 2018 and April 2021. We developed an institutional program for new surgeons based on the separation of the RARP procedure into six checkpoints. The primary endpoints were surgical outcomes and perioperative complications, compared among three groups (expert, trainer, and novice surgeons). RESULTS The console time was significantly longer in the novice surgeon group than in the other groups. Regarding bladder neck dissection, ligation of lateral pedicles, and vesicourethral anastomosis, the operative time was significantly shorter in the expert group than in the other groups. Surgery-related complications occurred in 15 patients (9.7%). CONCLUSIONS Our training system for RARP might help reduce the influence of the learning curve on surgical outcomes and ensure that the surgeries performed at low-volume institutions are safe and effective.
Affiliation(s)
- Daiki Kato
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Sanae Namiki
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Shota Ueda
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Shinichi Takeuchi
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Makoto Kawase
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Kota Kawase
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Chie Nakai
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Manabu Takai
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Koji Iinuma
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Keita Nakane
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan
- Takuya Koie
- Department of Urology, Gifu Graduate School of Medicine, Gifu, Japan

41
Trinh L, Mingo S, Vanstrum EB, Sanford D, Aastha, Ma R, Nguyen JH, Liu Y, Hung AJ. Survival Analysis Using Surgeon Skill Metrics and Patient Factors to Predict Urinary Continence Recovery After Robot-assisted Radical Prostatectomy. Eur Urol Focus 2022; 8:623-630. [PMID: 33858811 PMCID: PMC8505550 DOI: 10.1016/j.euf.2021.04.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 03/11/2021] [Accepted: 04/04/2021] [Indexed: 12/16/2022]
Abstract
BACKGROUND It has been shown that metrics recorded for instrument kinematics during robotic surgery can predict urinary continence outcomes. OBJECTIVE To evaluate the contributions of patient and treatment factors, surgeon efficiency metrics, and surgeon technical skill scores, especially for vesicourethral anastomosis (VUA), to models predicting urinary continence recovery following robot-assisted radical prostatectomy (RARP). DESIGN, SETTING, AND PARTICIPANTS Automated performance metrics (APMs; instrument kinematics and system events) and patient data were collected for RARPs performed from July 2016 to December 2017. Robotic Anastomosis Competency Evaluation (RACE) scores during VUA were manually evaluated. Training datasets included: (1) patient factors; (2) summarized APMs (reported over RARP steps); (3) detailed APMs (reported over suturing phases of VUA); and (4) technical skills (RACE). Feature selection was used to compress the dimensionality of the inputs. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS The study outcome was urinary continence recovery, defined as use of 0 or 1 safety pads per day. Two predictive models (Cox proportional hazards [CoxPH] and deep learning survival analysis [DeepSurv]) were used. RESULTS AND LIMITATIONS Of 115 patients undergoing RARP, 89 (77.4%) recovered their urinary continence, and the median recovery time was 166 d (interquartile range [IQR] 82-337). VUAs were performed by 23 surgeons. The median RACE score was 28/30 (IQR 27-29). Among the individual datasets, technical skills (RACE) produced the best models (C index: CoxPH 0.695, DeepSurv 0.708). Among summary APMs, posterior/anterior VUA yielded superior model performance over other RARP steps (C index 0.543-0.592). Among detailed APMs, metrics for needle driving yielded the top-performing models (C index 0.614-0.655) over other suturing phases. DeepSurv models consistently outperformed CoxPH; both approaches performed best when provided with all the datasets. Limitations include feature selection, which may have excluded relevant information but prevented overfitting. CONCLUSIONS Technical skills and "needle driving" APMs during VUA were most contributory. The best-performing model used synergistic data from all datasets. PATIENT SUMMARY One of the steps in robot-assisted surgical removal of the prostate involves joining the bladder to the urethra. Detailed information on surgeon performance for this step improved the accuracy of predicting recovery of urinary continence among men undergoing this operation for prostate cancer.
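The concordance index (C index) used above to compare the CoxPH and DeepSurv models can be computed with a short, self-contained sketch. This is a generic reimplementation of Harrell's C index for right-censored data, not the authors' code; the toy inputs are illustrative.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C index: among comparable pairs, the fraction where the
    patient with the higher predicted risk actually reaches the event
    sooner. events[i] is 1 if the event (e.g., continence recovery)
    was observed at times[i], 0 if the follow-up was censored."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if i's event was observed and
            # occurred strictly before j's time (observed or censored).
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfectly ranked toy cohort: highest risk recovers first -> C = 1.0
print(concordance_index([1, 2, 3], [1, 1, 1], [3, 2, 1]))
```

A C index of 0.5 corresponds to random ranking, which is why model values such as 0.695-0.708 for the RACE dataset indicate meaningful, though imperfect, discrimination.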
Affiliation(s)
- Loc Trinh
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Samuel Mingo
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Erik B. Vanstrum
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Daniel Sanford
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Aastha
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Yan Liu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Corresponding author: University of Southern California Institute of Urology, 1441 Eastlake Avenue, Los Angeles, CA 90089, USA. Tel. +1 323 8653700; Fax: +1 323 8650120.

42
Sanford DI, Der B, Haque TF, Ma R, Hakim R, Nguyen JH, Cen S, Hung AJ. Technical Skill Impacts the Success of Sequential Robotic Suturing Substeps. J Endourol 2022; 36:273-278. [PMID: 34779231 PMCID: PMC8861914 DOI: 10.1089/end.2021.0417] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
Introduction: Robotic surgical performance, in particular suturing, has been associated with postoperative clinical outcomes. Suturing can be deconstructed into substep components (needle positioning, needle entry angle, needle driving, and needle withdrawal), allowing for the provision of more specific feedback while teaching suturing and more precision when evaluating suturing technical skill and predicting clinical outcomes. This study evaluates whether the technical skill demonstrated in particular substeps of the suturing process is associated with the execution of subsequent substeps in terms of technical skill, accuracy, and efficiency. Materials and Methods: Training and expert surgeons completed standardized sutures on the Mimic™ Flex virtual reality robotic simulator. Video recordings were deidentified, time annotated, and provided technical skill scores for each of the four suturing substeps. Hierarchical Poisson regression with generalized estimating equations was used to examine the association of technical skill rating categories between substeps. Results: Twenty-two surgeons completed 428 suturing attempts, with 1669 individual technical skill assessments made. Technical skill scores between substeps of the suturing process were found to be significantly associated. When needle positioning was ideal, needle entry angle was associated with a significantly greater chance of being ideal (risk ratio [RR] = 1.12, p = 0.05). In addition, ideal needle entry angle and needle driving technical skill scores were each significantly associated with ideal needle withdrawal technical skill scores (RR = 1.27, p = 0.03; RR = 1.3, p = 0.03, respectively). Our study determined that ideal technical skill was associated with increased accuracy and efficiency of select substeps. Conclusions: Our study found significant associations in the technical skill required for completing substeps of suturing, demonstrating inter-relationships within the suturing process. Together with the known association between technical skill and clinical outcomes, training surgeons should focus on mastering not just the overall suturing process, but also each substep involved. Future machine learning efforts can better evaluate suturing, knowing that these inter-relationships exist.
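The risk ratios (RR) reported above come from hierarchical Poisson regression with GEE, which adjusts for repeated attempts per surgeon. As a simpler illustration of what an unadjusted risk ratio measures, here is a minimal sketch; the 63/100 and 50/100 counts are invented for the example and are not taken from the study.

```python
def risk_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Risk ratio (relative risk): probability of the outcome in the
    exposed group divided by that in the unexposed group."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts: ideal needle withdrawal observed in 63 of 100
# attempts that followed an ideal needle entry angle, vs. 50 of 100
# attempts that did not.
print(round(risk_ratio(63, 100, 50, 100), 2))
```

An RR above 1 (such as the study's 1.27 for needle withdrawal following ideal needle entry angle) means the "exposed" condition makes the ideal outcome more likely; the GEE model additionally accounts for clustering of attempts within surgeons.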
Affiliation(s)
- Daniel I. Sanford
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Balint Der
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Taseen F. Haque
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Ryan Hakim
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Steven Cen
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA; Address correspondence to: Andrew J. Hung, MD, Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, 1441 Eastlake Ave, NOR 7416, Los Angeles, CA 90033-9178, USA

43
Abstract
Artificial intelligence (AI) is a fascinating new technology that incorporates machine learning and neural networks to improve existing technologies or create new ones. This review introduces potential applications of AI in the fight against colorectal cancer (CRC), including how AI will affect the epidemiology of CRC through new methods of mass information gathering such as GeoAI, digital epidemiology, and real-time information collection. It also examines how existing diagnostic tools, such as CT/MRI, endoscopy, genetics, and pathological assessment, have benefited greatly from the implementation of deep learning. Finally, it discusses how treatment and treatment approaches to CRC can be enhanced by applying AI. The power of AI in therapeutic recommendation for colorectal cancer shows much promise in the clinical and translational field of oncology, pointing toward better, more personalized treatments for those in need.
Affiliation(s)
- Chaoran Yu
- Department of General Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, People's Republic of China
- Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan, 430030, People's Republic of China

44
Berniker M, Bhattacharyya KD, Brown KC, Jarc A. A Probabilistic Approach To Surgical Tasks and Skill Metrics. IEEE Trans Biomed Eng 2021; 69:2212-2219. [PMID: 34971527 DOI: 10.1109/tbme.2021.3139538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Identifying and quantifying the activities that compose surgery is essential for effective interventions, computer-aided analyses and the advancement of surgical data science. For example, recent studies have shown that objective metrics (referred to as objective performance indicators, OPIs) computed during key surgical tasks correlate with surgeon skill and clinical outcomes. Unambiguous identification of these surgical tasks can be particularly challenging for both human annotators and algorithms. Each surgical procedure has multiple approaches, each surgeon has their own level of skill, and the initiation and termination of surgical tasks can be subject to interpretation. As such, human annotators and machine learning models face the same basic problem: accurately identifying the boundaries of surgical tasks despite variable and unstructured information. For use in surgeon feedback, OPIs should also be robust to the variability and diversity in these data. To mitigate this difficulty, we propose a probabilistic approach to surgical task identification and calculation of OPIs. Rather than relying on tasks that are identified by hard temporal boundaries, we demonstrate an approach that relies on distributions of start and stop times, for a probabilistic interpretation of when the task was performed. We first use hypothetical data to outline how this approach is superior to other conventional approaches. Then we present similar analyses on surgical data. We find that when surgical tasks are identified by their individual probabilities, the resulting OPIs are less sensitive to noise in the identification of the start and stop times. These results suggest that this probabilistic approach holds promise for the future of surgical data science.
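The probabilistic task identification described above can be sketched as follows, assuming (purely for illustration) Gaussian distributions over a task's start and stop times: each metric sample is weighted by the probability that the task is active at that moment, rather than clipped by hard boundaries. The function names and the Gaussian assumption are mine, not the paper's exact formulation.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def p_task_active(t, start_mu, start_sd, stop_mu, stop_sd):
    """Probability the task is underway at time t: the task has
    probably started by t, and probably has not yet stopped."""
    return norm_cdf(t, start_mu, start_sd) * (1 - norm_cdf(t, stop_mu, stop_sd))

def probabilistic_mean_metric(times, values, start_mu, start_sd, stop_mu, stop_sd):
    """Probability-weighted mean of a per-sample metric (e.g., instrument
    speed) over a task, instead of a hard-windowed mean."""
    weights = [p_task_active(t, start_mu, start_sd, stop_mu, stop_sd) for t in times]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

Samples well inside the believed task window get weight near 1, while samples near the fuzzy start/stop boundaries are discounted, which is why the resulting OPIs are less sensitive to annotation noise.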

45
Iqbal U, Jing Z, Ahmed Y, Elsayed AS, Rogers C, Boris R, Porter J, Allaf M, Badani K, Stifelman M, Kaouk J, Terakawa T, Hinata N, Aboumohamed AA, Kauffman E, Li Q, Abaza R, Guru KA, Hussein AA, Eun D. Development and Validation of an Objective Scoring Tool for Robot-Assisted Partial Nephrectomy: Scoring for Partial Nephrectomy. J Endourol 2021; 36:647-653. [PMID: 34809491 DOI: 10.1089/end.2021.0706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Objective: To develop a structured and objective scoring tool for assessment of robot-assisted partial nephrectomy (RAPN): Scoring for Partial Nephrectomy (SPaN). Materials and Methods: Content development: RAPN was deconstructed into 6 domains by a multi-institutional panel of 10 expert robotic surgeons. Performance on each domain was represented on a Likert scale of 1 to 5, with specific descriptions of anchors 1, 3, and 5. Content validation: The Delphi methodology was utilized to achieve consensus about the description of each anchor for each domain in terms of appropriateness of the skill assessed, objectiveness, clarity, and unambiguous wording. A content validity index (CVI) of ≥0.75 was set as the cutoff for consensus. Reliability: 15 de-identified videos of RAPN were utilized to determine the inter-rater reliability using linearly weighted percent agreement, and construct validation of SPaN was described in terms of median scores and odds ratios. Results: The expert panel reached consensus (CVI ≥0.75) after 2 rounds. Consensus was achieved for 36 (67%) statements in the first round and 18 (33%) after the second round. The final six-domain SPaN included exposure of the kidney; identification and dissection of the ureter and gonadal vessels; dissection of the hilum; tumor localization and exposure; clamping and tumor resection; and renorrhaphy. The linearly weighted percent agreement was >0.75 for all domains. There was no difference in median scores for any domain between attendings and trainees. Conclusion: Despite the lack of significant construct validity, SPaN is a structured, reliable, and procedure-specific tool that can objectively assess technical proficiency in RAPN.
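The CVI ≥0.75 consensus cutoff used in the Delphi rounds above can be illustrated with a short sketch. The item-level content validity index is simply the proportion of panelists who rate an item as relevant; the 4-point relevance scale assumed here is a common convention, not necessarily the exact scale used for SPaN, and the panel ratings are invented.

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: the proportion of expert
    panelists who rate the item as relevant (here, 3 or 4 on an
    assumed 4-point relevance scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

def consensus_reached(ratings, cutoff=0.75):
    """Consensus per the study's rule: CVI at or above the cutoff."""
    return item_cvi(ratings) >= cutoff

# Hypothetical ratings from a 10-expert panel for one SPaN anchor
panel = [4, 4, 3, 4, 2, 3, 4, 4, 3, 4]
print(item_cvi(panel))
print(consensus_reached(panel))
```

Statements that fail the cutoff are reworded and re-rated in the next Delphi round, which is how the remaining 33% of SPaN statements reached consensus in round two.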
Affiliation(s)
- Umar Iqbal
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Zhe Jing
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Youssef Ahmed
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Ahmed S Elsayed
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA; Cairo University, Cairo, Egypt
- Craig Rogers
- Henry Ford Health Systems, Detroit, Michigan, USA
- Ronald Boris
- Indiana University School of Medicine, Indianapolis, Indiana, USA
- James Porter
- Swedish Medical Center, Seattle, Washington, USA
- Mohammad Allaf
- Johns Hopkins University Hospital, Baltimore, Maryland, USA
- Ketan Badani
- Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Nobuyuki Hinata
- Hiroshima University Graduate School of Biomedical and Health Sciences, Hiroshima, Japan
- Eric Kauffman
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Qiang Li
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Khurshid A Guru
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
- Ahmed A Hussein
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA; Cairo University, Cairo, Egypt
- Daniel Eun
- Temple University Hospital, Philadelphia, Pennsylvania, USA

46
Roberts SI, Cen SY, Nguyen J, Perez LC, Medina LG, Ma R, Marshall S, Kocielnik R, Anandkumar A, Hung AJ. The Relationship of Technical Skills and Cognitive Workload to Errors During Robotic Surgical Exercises. J Endourol 2021; 36:712-720. [PMID: 34913734 PMCID: PMC9145254 DOI: 10.1089/end.2021.0790] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Purpose We attempt to understand the relationship between surgeon technical skills, cognitive workload, and errors during a simulated robotic dissection task. Materials and Methods Participant surgeons performed a robotic surgery dissection exercise. Participants were grouped based on surgical experience. Technical skills were evaluated utilizing the validated Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The dissection task was evaluated for errors during active dissection or passive retraction maneuvers. We quantified the cognitive workload of surgeon participants as an Index of Cognitive Activity (ICA), derived from Task-Evoked Pupillary Response metrics; ICA ranges from 0 to 1, with 1 representing maximum cognitive activity. Generalized Estimating Equations (GEE) were used for all modeling to establish relationships between surgeon technical skills, cognitive workload, and errors. Results We found a strong association between technical skills as measured by multiple GEARS domains (depth perception, force sensitivity, and robotic control) and passive errors, with higher GEARS scores associated with a lower relative risk of errors (all p < 0.01). For novice surgeons, as average GEARS scores increased, the average estimated ICA decreased. In contrast, as average GEARS scores increased for expert surgeons, the average estimated ICA increased. When exhibiting optimal technical skill (maximal GEARS scores), novices and experts reached a similar range of ICA scores (ICA 0.47 and 0.42, respectively). Conclusions This study found that there is an optimal cognitive workload level for surgeons of all experience levels during our robotic surgical exercise. Select technical skill domains were strong predictors of errors. Future research will explore whether an ideal cognitive workload range truly optimizes surgical training and reduces surgical errors.
Affiliation(s)
- Sidney I Roberts
- USC Keck School of Medicine, Department of Urology, Los Angeles, California, United States
- Steven Yong Cen
- University of Southern California, Los Angeles, California, United States
- Jessica Nguyen
- University of Southern California, Catherine & Joseph Aresty Department of Urology, Los Angeles, California, United States
- Laura C Perez
- University of Southern California, Catherine & Joseph Aresty Department of Urology, Los Angeles, California, United States
- Luis G Medina
- University of Southern California, Catherine & Joseph Aresty Department of Urology, Los Angeles, California, United States
- Runzhuo Ma
- University of Southern California, Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Los Angeles, California, United States
- Sandra Marshall
- Eyetracking, Inc., Solana Beach, California, United States
- Rafal Kocielnik
- California Institute of Technology, Pasadena, California, United States
- Anima Anandkumar
- California Institute of Technology, Pasadena, California, United States
- Andrew J Hung
- University of Southern California, Catherine and Joseph Aresty Department of Urology, 1516 San Pablo St, Los Angeles, CA 90033, United States

47
Abstract
PURPOSE OF REVIEW Residency training is a pivotal educational step on the road to becoming a urologist. It combines both clinical and surgical instruction with the goal of producing proficient and compassionate surgeons and clinicians. In this review, we employ a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) to investigate the current state of urologic residency training. RECENT FINDINGS Urology remains an attractive and competitive residency with varied and complex surgical and medical training. Areas for improvement include standardization of evaluation and feedback, improving resident wellness, and expanding the use of surgical simulation. Workforce issues such as the predicted urologist supply deficit and poor readiness to enter the business of medicine can be addressed at the residency level. Failure to attract and retain underrepresented minorities, increasing burden of student debt, and resident burnout are serious threats to our field. Using a SWOT analysis we identify key areas for expansion, underscore valuable strengths, and provide a working roadmap for improvement of these formative years.
Affiliation(s)
- Luke E Sebel
- Division of Urology, Lahey Hospital and Medical Center, Burlington, MA, USA
- Eric G Katz
- Division of Urology, Lahey Hospital and Medical Center, Burlington, MA, USA
- Lara S MacLachlan
- Division of Urology, Lahey Hospital and Medical Center, Burlington, MA, USA.

48
Review of automated performance metrics to assess surgical technical skills in robot-assisted laparoscopy. Surg Endosc 2021; 36:853-870. [PMID: 34750700 DOI: 10.1007/s00464-021-08792-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 10/17/2021] [Indexed: 10/19/2022]
Abstract
INTRODUCTION Robot-assisted laparoscopy is a safe surgical approach, with several studies suggesting correlations between complication rates and the surgeon's technical skills. Surgical skills are usually assessed by questionnaires completed by an expert observer. With the advent of surgical robots, automated performance metrics (APMs), objective measures related to instrument movements, can be computed. The aim of this systematic review was thus to assess the use of APMs in robot-assisted laparoscopic procedures. The primary outcome was the assessment of surgical skills by APMs; the secondary outcomes were the association between APMs and surgeon parameters and the prediction of clinical outcomes. METHODS A systematic review following the PRISMA guidelines was conducted. The PubMed and Scopus electronic databases were screened with the query "robot-assisted surgery OR robotic surgery AND performance metrics" for the period January 2010 to January 2021. The quality of the studies was assessed with the Medical Education Research Study Quality Instrument. Study settings, metrics, and applications were analysed. RESULTS The initial search yielded 341 citations, of which 16 studies were included. Study settings were either simulated virtual reality (VR) (4 studies) or the real clinical environment (12 studies). Data used to compute APMs were kinematics (motion tracking) and system and specific event data (actions from the robot console). APMs were used to differentiate expertise levels (and thus validate VR modules), to predict outcomes, and to integrate datasets for automatic recognition models. In some studies, APMs correlated with clinical outcomes. CONCLUSIONS APMs constitute an objective approach to assessing technical skills. Evidence of associations between APMs and clinical outcomes remains to be confirmed by further studies, particularly for non-urological procedures. Concurrent validation is also required.
|
49
|
Cowan A, Chen J, Mingo S, Reddy SS, Ma R, Marshall S, Nguyen J, Hung AJ. Virtual Reality vs. Dry-Lab Models: Comparing automated performance metrics and cognitive workload during robotic simulation training. J Endourol 2021; 35:1571-1576. [PMID: 34235970 DOI: 10.1089/end.2020.1037] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Background This study compares surgical performance during analogous vesico-urethral anastomosis (VUA) tasks in two robotic training environments, virtual reality (VR) and dry-lab (DL), in order to investigate transferability of skills assessment across the two platforms. Utilizing computer-generated performance metrics and pupillary data, we evaluated the two environments' ability to distinguish surgical expertise and, ultimately, whether performance in the VR simulation correlates with performance on the live robot in the dry-lab. Materials and Methods Experts (≥300 cases) and trainees (<300 cases) performed analogous VUAs during VR and dry-lab sessions on a da Vinci robotic console. Twenty-two metrics were generated in each environment (kinematic metrics, tissue metrics, biometrics). The dry-lab environment included 18 previously validated automated performance metrics (APMs) (kinematics, event metrics) captured by an Intuitive systems data recorder. In both settings, Tobii Pro Glasses 2 recorded task-evoked pupillary response (reported as the Index of Cognitive Activity [ICA]) to indicate cognitive workload, analyzed by EyeTracking Cognitive Workload Software. Pearson correlation, Mann-Whitney U, and independent t-tests were used for the comparative analyses. Results Our study included 6 experts (median caseload 1300 [interquartile range 400-3000]) and 11 trainees (25 [0-250]). Of the 9 metrics directly comparable between VR and DL, 8 showed significant positive correlation (r≥0.554, p≤0.032). Five of 22 VR metrics distinguished expertise, including task time (p=0.031), clutch usage (p=0.040), unnecessary needle piercings (p=0.026), and suspected injury to the endopelvic fascia (p=0.040). This contrasts with 14 of 22 APMs in the dry-lab (p≤0.038), including linear velocities of all three instruments (p≤0.038) and dominant-hand instrument wrist articulation (p=0.013). Trainees experienced higher cognitive workload (ICA) than experts in both environments (p<0.036).
Conclusions A majority of performance metrics exhibited moderate to strong correlations between VR and dry-lab, supporting transferability of skills assessment across the platforms. Comparing training environments, APMs from dry-lab tasks are better able to distinguish expertise than VR-generated metrics.
Affiliation(s)
- Andrew Cowan
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Jian Chen
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Samuel Mingo
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Sharath S Reddy
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Sandra Marshall
- EyeTracking, Inc., Solana Beach, California, United States
- Jessica Nguyen
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
- Andrew J Hung
- Center for Robotic Simulation & Education, Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, United States
|
50
|
Hung AJ, Ma R, Cen S, Nguyen JH, Lei X, Wagner C. Surgeon Automated Performance Metrics as Predictors of Early Urinary Continence Recovery After Robotic Radical Prostatectomy-A Prospective Bi-institutional Study. EUR UROL SUPPL 2021; 27:65-72. [PMID: 33959725 PMCID: PMC8095672 DOI: 10.1016/j.euros.2021.03.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
Background During robotic surgery, kinematic metrics objectively quantify surgeon performance. Objective To determine whether clinical factors confound the ability of surgeon performance metrics to predict urinary continence recovery after robot-assisted radical prostatectomy (RARP). Design, setting, and participants Clinical data (patient characteristics, continence recovery, and treatment factors) and surgeon data from RARPs performed between July 2016 and November 2018 were prospectively collected. Surgeon data included 40 automated performance metrics (APMs) derived from robot systems data (instrument kinematics and events) and summarized over each standardized RARP step. The data were collected from two high-volume robotic centers in the USA and Germany; surgeons from both institutions performed RARPs. The inclusion criterion was consecutive RARPs having both clinical and surgeon data. Intervention RARP with curative intent to treat prostate cancer. Outcome measurements and statistical analysis The outcomes were 3- and 6-mo urinary continence recovery status. Continence was defined as the use of zero or one safety pad per day. Random forest modeling (SAS HPFOREST) was used. Results and limitations A total of 193 RARPs performed by 20 surgeons were included. Of the patients, 56.7% (102/180) and 73.3% (129/176) achieved urinary continence by 3 and 6 mo after RARP, respectively. The model predicted continence recovery (area under the curve [AUC] = 0.74, 95% confidence interval [CI] 0.66–0.81 at 3 mo; AUC = 0.67, 95% CI 0.58–0.76 at 6 mo). Clinical factors, including pT stage, confounded APMs in the prediction of continence recovery at 3 mo after RARP (Δβ median –13.3%, interquartile range [–28.2% to –6.5%]). After adjustment for clinical factors, 11 of the 20 (55%) top-ranking APMs remained significant, independent predictors (ie, velocity and wrist articulation during the vesicourethral anastomosis).
Limitations included heterogeneity of surgeon and patient data between institutions, although this was accounted for in the multivariate analysis. Conclusions Clinical factors confound surgeon performance metrics in the prediction of urinary continence recovery after RARP. Nonetheless, many surgeon factors remain independent predictors of early continence recovery. Patient summary Both patient factors and surgeon kinematic metrics, recorded during robotic prostatectomy, impact early urinary continence recovery after robot-assisted radical prostatectomy.
Affiliation(s)
- Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Corresponding author. University of Southern California Institute of Urology, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA 90089, USA. Tel. +1 323-865-3700; Fax: +1 323-865-0120.
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Steven Cen
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Xiaomeng Lei
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Christian Wagner
- Department of Urology, Pediatric Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
|