1
Raison N, Dasgupta P, Knoll T. Insights into Training Standards for Robot-assisted Surgery and Endourology: A Perspective for Both Urologists and Trainees. Eur Urol 2024; 86:146-147. [PMID: 38749853] [DOI: 10.1016/j.eururo.2024.05.008]
Affiliation(s)
- Nicholas Raison
- Department of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Prokar Dasgupta
- King's Health Partners Academic Surgery, King's College London, London, UK
- Thomas Knoll
- Medizinische Fakultät Mannheim, Universitätsmedizin Mannheim, Mannheim, Germany.
2
Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024. [PMID: 39039296] [DOI: 10.1007/s00464-024-11049-6]
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) holds critical importance for both surgical education and patient safety. This study introduces machine learning (ML) techniques using features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHOD The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks, executed by surgeons while performing operations on pigs. Four ML models (logistic regression, random forest, gradient boosting, and extreme gradient boosting [XGB]) were used for multi-class classification. To develop the models, 20% of the data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by repeating the train-test split over 30 iterations, with average measurements reported. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should explore ways to optimize these models, particularly for the classes identified as challenging in this study. Ultimately, this study marks a significant step towards a more refined, objective, and standardized approach to RAS training and competency assessment.
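The evaluation protocol described here, an 80/20 split that preserves class balance, can be sketched in plain Python. This is an illustrative sketch, not the authors' code; the class sizes below are invented:

```python
import random
from collections import Counter, defaultdict

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split sample indices into train/test sets, preserving per-class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n_test = round(len(idxs) * test_frac)
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# Hypothetical dataset: nine classes, 90 samples each (the real class sizes differ).
labels = [c for c in range(9) for _ in range(90)]
train_idx, test_idx = stratified_split(labels, test_frac=0.2, seed=42)
test_counts = Counter(labels[i] for i in test_idx)  # 18 samples per class
```

Repeating this split over many seeds and averaging test accuracy, as the authors describe over 30 iterations, guards against a single lucky or unlucky partition.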
Affiliation(s)
- Somayeh B Shafiei
- The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA.
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden
- School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
3
Gillani M, Rupji M, Paul Olson TJ, Balch GC, Shields MC, Liu Y, Rosen SA. Objective performance indicators during specific steps of robotic right colectomy can differentiate surgeon expertise. Surgery 2024. [PMID: 39025692] [DOI: 10.1016/j.surg.2024.06.040]
Abstract
BACKGROUND Current surgical assessment tools are subjective and nonscalable. Objective performance indicators, calculated from robotic systems data, provide automated data regarding surgeon movements and robotic arm kinematics. We identified objective performance indicators that significantly differed between expert and trainee surgeons during specific steps of robotic right colectomy. METHODS Endoscopic videos were annotated to delineate surgical steps during robotic right colectomies. Objective performance indicators were compared during mesenteric dissection, ascending colon mobilization, hepatic flexure mobilization, and bowel preparation for transection. RESULTS Twenty-five robotic right colectomy procedures (461 total surgical steps) performed by 2 experts and 8 trainees were analyzed. Experts exhibited faster camera acceleration and jerk during all steps, as well as faster dominant and nondominant arm acceleration and dominant arm jerk during all steps except distal bowel preparation. During mesenteric dissection, experts used faster camera and dominant arm velocity. During medial-to-lateral ascending colon mobilization, experts used less dominant-wrist yaw and pitch, faster nondominant arm velocity, shorter dominant arm path length, and shorter moving times for the camera, dominant arm, and nondominant arm. During lateral-to-medial ascending colon mobilization, experts had faster dominant and nondominant arm velocity and third-arm acceleration. During hepatic flexure mobilization, experts exhibited more camera movements, greater velocity for the camera, dominant and nondominant arms, and faster third-arm acceleration. During distal bowel preparation, experts used greater dominant wrist articulation, faster camera velocity, and longer nondominant arm path length. During proximal bowel preparation, experts demonstrated faster nondominant arm velocity.
CONCLUSION Objective performance indicators can differentiate experts from trainees during distinct steps of robotic right colectomy. These automated, objective, and scalable metrics can provide personalized feedback for trainees.
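Kinematic indicators such as velocity, acceleration, and jerk are successive time derivatives of instrument position. A minimal finite-difference sketch of that relationship (illustrative only; the actual objective performance indicators come from the robot's multi-axis telemetry, not this code):

```python
def finite_diff(series, dt):
    """First-order finite difference: rate of change between consecutive samples."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

def kinematic_profiles(positions, dt):
    """Velocity, acceleration, and jerk profiles from 1-D position samples."""
    velocity = finite_diff(positions, dt)
    acceleration = finite_diff(velocity, dt)
    jerk = finite_diff(acceleration, dt)
    return velocity, acceleration, jerk

# Toy trace sampled at 10 Hz following x(t) = t^2 (constant acceleration of 2),
# so the recovered acceleration is flat and the jerk is zero.
dt = 0.1
positions = [(i * dt) ** 2 for i in range(6)]
velocity, acceleration, jerk = kinematic_profiles(positions, dt)
```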
Affiliation(s)
- Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Manali Rupji
- Winship Cancer Institute, Emory University, Atlanta, GA
- Glen C Balch
- Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Yuan Liu
- Rollins School of Public Health, Emory University, Atlanta, GA
- Seth Alan Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, GA.
4
Bakker AFHA, de Nijs JV, Jaspers TJM, de With PHN, Beulens AJW, van der Poel HG, van der Sommen F, Brinkman WM. Estimating Surgical Urethral Length on Intraoperative Robot-Assisted Prostatectomy Images Using Artificial Intelligence Anatomy Recognition. J Endourol 2024; 38:690-696. [PMID: 38613819] [DOI: 10.1089/end.2023.0697]
Abstract
Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL). Background: Urethral dissection during RARP impacts patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals. Surgeon experience and education are also critical for optimal outcomes. Therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute toward future AI-assisted RARP and surgeon guidance. Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center. Two hundred sixty-four frames were annotated for the prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test data set. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference. Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62. The model performed moderately on the ligated plexus and prostate. The predicted SUL showed a mean difference of 0.64 to 1.86 mm vs human annotators, but with substantial deviation (standard deviation = 3.28-3.56). Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. SUL estimation derived from it showed large deviations and outliers compared with human annotators, but with a small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
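The Dice similarity coefficient used to score segmentation here is twice the overlap between two masks divided by their combined size. A minimal sketch over flattened binary masks (illustrative, not the study's evaluation code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flattened binary masks."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(map(bool, pred)) + sum(map(bool, truth))
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 8-pixel masks: 3 overlapping foreground pixels out of 4 + 4.
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
dsc = dice_coefficient(pred, truth)  # 2 * 3 / (4 + 4) = 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 none, which is why values around 0.73-0.76 read as moderate to fair segmentation quality.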
Affiliation(s)
- Aron F H A Bakker
- Department of Urology, Catharina Hospital, Eindhoven, The Netherlands
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
- Joris V de Nijs
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tim J M Jaspers
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Alexander J W Beulens
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
- Henk G van der Poel
- Department of Oncological Urology, Antoni van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Willem M Brinkman
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, The Netherlands
5
Altıntaş E, Şahin A, Babayev H, Gül M, Batur AF, Kaynar M, Kılıç Ö, Göktaş S. Machine learning algorithm predicts urethral stricture following transurethral prostate resection. World J Urol 2024; 42:324. [PMID: 38748256] [PMCID: PMC11096196] [DOI: 10.1007/s00345-024-05017-x]
Abstract
PURPOSE To predict the probability of urethral stricture after transurethral prostate resection (TURP) by applying different machine learning algorithms to preoperative blood parameters. METHODS A retrospective analysis of data from patients who underwent bipolar TURP, encompassing patient characteristics, preoperative routine blood test outcomes, and post-surgery uroflowmetry, was used to develop and train machine learning models. Various metrics, such as F1 score, model accuracy, negative predictive value, positive predictive value, sensitivity, specificity, Youden index, ROC AUC value, and confidence interval for each model, were used to assess the predictive performance of the machine learning models for urethral stricture development. RESULTS A total of 109 patients' data (55 patients without urethral stricture and 54 patients with urethral stricture) were included in the study after implementing strict inclusion and exclusion criteria. The preoperative platelet distribution width, mean platelet volume, plateletcrit, activated partial thromboplastin time, and prothrombin time values differed significantly between the two cohorts. After applying the data to the machine learning systems, the accuracy prediction scores for the diverse algorithms were as follows: decision trees (0.82), logistic regression (0.82), random forests (0.91), support vector machines (0.86), K-nearest neighbors (0.82), and naïve Bayes (0.77). CONCLUSION Our machine learning models demonstrated significant success in predicting the probability of post-TURP urethral stricture. Prospective studies that integrate supplementary variables have the potential to enhance the precision and accuracy of machine learning models, consequently improving their ability to predict post-TURP urethral stricture risk.
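The reported metrics (sensitivity, specificity, predictive values, Youden index, accuracy) all derive from a binary classifier's confusion matrix. A minimal sketch with hypothetical labels, not the study's code:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a binary classifier (1 = stricture)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),      # positive predictive value
        "npv": tn / (tn + fn),      # negative predictive value
        "youden": sensitivity + specificity - 1,
    }

# Hypothetical outcomes and predictions for 8 patients.
metrics = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                         [1, 1, 1, 0, 0, 0, 1, 0])
```

The Youden index rewards a model only when it beats chance on both the stricture and no-stricture groups simultaneously.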
Affiliation(s)
- Emre Altıntaş
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey.
- Ali Şahin
- Faculty of Medicine, Selcuk University, Konya, Turkey
- Huseyn Babayev
- Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Murat Gül
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey
- Ali Furkan Batur
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey
- Mehmet Kaynar
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey
- Özcan Kılıç
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey
- Serdar Göktaş
- Faculty of Medicine, Department of Urology, Selcuk University, Tıp Fakültesi Alaeddin Keykubat Yerleşkesi Selçuklu, Konya, 42131, Turkey
6
Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727] [PMCID: PMC11083167] [DOI: 10.3390/cancers16091775]
Abstract
With the rapid increase in computer processing capacity over the past two decades, machine learning techniques have been applied in many sectors of daily life. Machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery. We searched PubMed/Medline and Google Scholar up to December 2023. Search terms included "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automatic preoperative imaging, intraoperative anatomy matching, and bleeding prediction have been major focuses. Early artificial intelligence (AI) therapeutic outcomes are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console with which to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension sensors on robotic arms and augmented reality can improve surgery by providing real-time organ motion monitoring, which improves precision and accuracy. As datasets grow and electronic health records become more widely used, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience; both seek precision to improve surgical care. AI in "master-slave" robotic surgery offers detailed, step-by-step examination of autonomous robotic treatments.
Affiliation(s)
- Themistoklis Bellos
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Manolitsis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Stamatios Katsimperis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
- Iraklis Mitsogiannis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Varkarakis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Bhaskar K. Somani
- Department of Urology, University of Southampton, Southampton SO16 6YD, UK
- Lazaros Tzelves
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
7
Pak S, Park SG, Park J, Cho ST, Lee YG, Ahn H. Applications of artificial intelligence in urologic oncology. Investig Clin Urol 2024; 65:202-216. [PMID: 38714511] [PMCID: PMC11076794] [DOI: 10.4111/icu.20230435]
Abstract
PURPOSE With the recent rising interest in artificial intelligence (AI) in medicine, many studies have explored the potential and usefulness of AI in urological diseases. This study aimed to comprehensively review recent applications of AI in urologic oncology. MATERIALS AND METHODS We searched the PubMed-MEDLINE databases for articles in English on machine learning (ML) and deep learning (DL) models related to general surgery and prostate, bladder, and kidney cancer. The search terms were a combination of keywords, including both "urology" and "artificial intelligence" with one of the following: "machine learning," "deep learning," "neural network," "renal cell carcinoma," "kidney cancer," "urothelial carcinoma," "bladder cancer," "prostate cancer," and "robotic surgery." RESULTS A total of 58 articles were included. The studies on prostate cancer were related to grade prediction, improved diagnosis, and predicting outcomes and recurrence. The studies on bladder cancer mainly used radiomics to identify aggressive tumors and predict treatment outcomes, recurrence, and survival rates. Most studies on the application of ML and DL in kidney cancer were focused on the differentiation of benign and malignant tumors as well as prediction of their grade and subtype. Most studies suggested that methods using AI may be better than or similar to existing traditional methods. CONCLUSIONS AI technology is actively being investigated in the field of urological cancers as a tool for diagnosis, prediction of prognosis, and decision-making and is expected to be applied in additional clinical areas soon. Despite technological, legal, and ethical concerns, AI will change the landscape of urological cancer management.
Affiliation(s)
- Sahyun Pak
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Sung Gon Park
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Sung Tae Cho
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Young Goo Lee
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Hanjong Ahn
- Department of Urology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
8
Gillani M, Rupji M, Devin CL, Purvis LA, Paul Olson TJ, Jarc A, Shields MC, Liu Y, Rosen SA. Quantification of surgical workflow during robotic proctectomy. Int J Med Robot 2024; 20:e2625. [PMID: 38439215] [DOI: 10.1002/rcs.2625]
Abstract
BACKGROUND Surgical workflow assessments offer insight regarding procedure variability. We utilised an objective method to evaluate workflow during robotic proctectomy (RP). METHODS We annotated 31 RPs and used Spearman's correlation to measure the correlation of step time and step visit frequency with console time (CT) and total operative time (TOT). RESULTS Strong correlations were seen with CT and step times for inferior mesenteric vein dissection and ligation (ρ = 0.60, ρ = 0.60), lateral-to-medial splenic flexure mobilisation (SFM) (ρ = 0.63), left rectal dissection (ρ = 0.64) and mesorectal division (ρ = 0.71). CT correlated strongly with medial-to-lateral (ρ = 0.75) and supracolic SFM visit frequency (ρ = 0.65). TOT correlated strongly with initial exposure time (ρ = 0.60), and medial-to-lateral (ρ = 0.67) and supracolic SFM visit frequency (ρ = 0.65). CONCLUSION This study correlates surgical steps with CT and TOT through standardised annotation, providing an objective approach to quantify workflow.
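Spearman's ρ, used above to relate step time and visit frequency to console time, is the Pearson correlation computed on ranks, which makes it sensitive to any monotone relationship, not just linear ones. A self-contained sketch with invented timings (not the study's data):

```python
def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented step times (min) vs console times (min) for five cases.
step_time = [12, 18, 9, 25, 16]
console_time = [140, 180, 120, 260, 170]
rho = spearman(step_time, console_time)  # perfectly monotone, so rho is ~1.0
```

Values of ρ around 0.6-0.75, as reported here, indicate a strong but imperfect monotone association.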
Affiliation(s)
- Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
- Manali Rupji
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Courtney L Devin
- Department of Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
- Lilia A Purvis
- Research Division, Intuitive Surgical, Norcross, Georgia, USA
- Terrah J Paul Olson
- Department of Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
- Anthony Jarc
- Research Division, Intuitive Surgical, Norcross, Georgia, USA
- Yuan Liu
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA
- Seth A Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
9
Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg 2024; 18:102. [PMID: 38427094] [PMCID: PMC10907451] [DOI: 10.1007/s11701-024-01867-0]
Abstract
Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling is giving surgeons access to advanced intraoperative metrics such as force and tactile measurements, enhancing the detection of positive surgical margins, and even allowing for the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI in robotic surgery show promise, they raise important, complex ethical questions; frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations in recent years.
Affiliation(s)
- J Everett Knudsen
- Keck School of Medicine, University of Southern California, Los Angeles, USA
- Runzhuo Ma
- Cedars-Sinai Medical Center, Los Angeles, USA
10
Goldenberg MG. Surgical Artificial Intelligence in Urology: Educational Applications. Urol Clin North Am 2024; 51:105-115. [PMID: 37945096] [DOI: 10.1016/j.ucl.2023.06.003]
Abstract
Surgical education has seen immense change recently. Increased demand for iterative evaluation of trainees from medical school to independent practice has led to the generation of an overwhelming amount of data related to an individual's competency. Artificial intelligence has been proposed as a solution to automate and standardize the ability of stakeholders to assess the technical and nontechnical abilities of a surgical trainee. In both the simulation and clinical environments, evidence supports the use of machine learning algorithms to both evaluate trainee skill and provide real-time and automated feedback, enabling a shortened learning curve for many key procedural skills and ensuring patient safety.
Affiliation(s)
- Mitchell G Goldenberg
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90033, USA.
11
Boal M, Di Girasole CG, Tesfai F, Morrison TEM, Higgs S, Ahmad J, Arezzo A, Francis N. Evaluation status of current and emerging minimally invasive robotic surgical platforms. Surg Endosc 2024; 38:554-585. [PMID: 38123746] [PMCID: PMC10830826] [DOI: 10.1007/s00464-023-10554-4]
Abstract
BACKGROUND The rapid adoption of robotics within minimally invasive surgical (MIS) specialties has been accompanied by an explosion of new technology, including multi- and single-port, natural orifice transluminal endoscopic surgery (NOTES), endoluminal, and "on-demand" platforms. This review aims to evaluate the validation status of current and emerging MIS robotic platforms using the IDEAL framework. METHODS A scoping review exploring robotic minimally invasive surgical devices, technology, and systems in use or in development was performed, covering general surgery, gynaecology, urology, and cardiothoracics. Systems operating purely outside the abdomen or thorax and endoluminal or natural orifice platforms were excluded. PubMed, Google Scholar, journal reports, and information from the public domain were collected. Each company was approached via email for a virtual interview to discover more about the systems and to quality-check data. The IDEAL framework is an internationally accepted tool to evaluate novel surgical technology, consisting of four stages: idea, development/exploration, assessment, and surveillance. An IDEAL stage, synonymous with validation status in this review, was assigned by reviewing the published literature. RESULTS 21 companies with 23 different robotic platforms were identified for data collection, 13 with national and/or international regulatory approval. Of the 17 multiport systems, 1 is fully evaluated at stage 4, 2 are at stage 3, 6 at stage 2b, 2 at stage 2a, 2 at stage 1, and 4 at the pre-IDEAL stage 0. Of the 6 single-port systems, none have been fully evaluated, with 1 at stage 3, 3 at stage 1, and 2 at stage 0. CONCLUSIONS The majority of existing robotic platforms are currently at the preclinical to developmental and exploratory stages of evaluation. Using the IDEAL framework will ensure that emerging robotic platforms are fully evaluated with long-term data, to inform the surgical workforce and ensure patient safety.
Affiliation(s)
- M Boal
- The Griffin Institute, Northwick Park and St Marks Hospital, London, UK
- Wellcome/EPSRC Centre for Intervention and Surgical Sciences, University College London, London, UK
- Association of Laparoscopic Surgeons of Great Britain and Ireland (ALSGBI) Academy, London, UK
- F Tesfai
- The Griffin Institute, Northwick Park and St Marks Hospital, London, UK
- Wellcome/EPSRC Centre for Intervention and Surgical Sciences, University College London, London, UK
- Association of Laparoscopic Surgeons of Great Britain and Ireland (ALSGBI) Academy, London, UK
- T E M Morrison
- Association of Laparoscopic Surgeons of Great Britain and Ireland (ALSGBI) Academy, London, UK
- S Higgs
- Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
- J Ahmad
- University Hospitals Coventry and Warwickshire, Coventry, UK
- A Arezzo
- Department of Surgical Sciences, University of Turin, Turin, Italy
- N Francis
- The Griffin Institute, Northwick Park and St Marks Hospital, London, UK.
- Yeovil District Hospital, Somerset NHS Foundation Trust, Yeovil, UK.
12
El-Sayed C, Yiu A, Burke J, Vaughan-Shaw P, Todd J, Lin P, Kasmani Z, Munsch C, Rooshenas L, Campbell M, Bach SP. Measures of performance and proficiency in robotic assisted surgery: a systematic review. J Robot Surg 2024; 18:16. [PMID: 38217749] [DOI: 10.1007/s11701-023-01756-y]
Abstract
Robotic assisted surgery (RAS) has seen a global rise in adoption. Despite this, there is neither a standardised training curriculum nor a standardised measure of performance. We performed a systematic review across the surgical specialties in RAS and evaluated the tools used to assess surgeons' technical performance. Following the PRISMA 2020 guidelines, PubMed, Embase, and the Cochrane Library were searched systematically for full texts published between January 2020 and January 2022. Observational studies and RCTs were included; review articles and systematic reviews were excluded. The quality and risk of bias of the papers were assessed using the Newcastle-Ottawa Scale for the observational studies and the Cochrane risk-of-bias tool for the RCTs. The initial search yielded 1189 papers, of which 72 met the eligibility criteria. 27 unique performance metrics were identified. Global assessments were the most common tool of assessment (n = 13); the most used was GEARS (Global Evaluative Assessment of Robotic Skills). 11 metrics (42%) were objective tools of performance, among which automated performance metrics (APMs) were the most widely used, whilst the remaining 15 (58%) were subjective. The results demonstrate variation in the tools used to assess technical performance in RAS. A large proportion of the metrics are subjective measures, which increases the risk of bias amongst users. A standardised objective metric that measures all domains of technical performance, from global to cognitive, is required. The metric should be applicable to all RAS procedures and easily implementable. APMs have demonstrated promise as widely applicable, accurate measures.
Collapse
Affiliation(s)
- Charlotte El-Sayed
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom.
| | - A Yiu
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - J Burke
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - P Vaughan-Shaw
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - J Todd
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - P Lin
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - Z Kasmani
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - C Munsch
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - L Rooshenas
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - M Campbell
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| | - S P Bach
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
| |
Collapse
|
13
|
Balu A, Kugener G, Pangal DJ, Lee H, Lasky S, Han J, Buchanan I, Liu J, Zada G, Donoho DA. Simulated outcomes for durotomy repair in minimally invasive spine surgery. Sci Data 2024; 11:62. [PMID: 38200013 PMCID: PMC10781746 DOI: 10.1038/s41597-023-02744-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 11/13/2023] [Indexed: 01/12/2024] Open
Abstract
Minimally invasive spine surgery (MISS) is increasingly performed using endoscopic and microscopic visualization, and the captured video can be used for surgical education and development of predictive artificial intelligence (AI) models. Video datasets depicting adverse event management are also valuable, as predictive models not exposed to adverse events may exhibit poor performance when these occur. Given that no dedicated spine surgery video datasets for AI model development are publicly available, we introduce Simulated Outcomes for Durotomy Repair in Minimally Invasive Spine Surgery (SOSpine). A validated MISS cadaveric dural repair simulator was used to educate neurosurgery residents, and surgical microscope video recordings were paired with outcome data. Objects including durotomy, needle, grasper, needle driver, and nerve hook were then annotated. Altogether, SOSpine contains 15,698 frames with 53,238 annotations and associated durotomy repair outcomes. For validation, an AI model was fine-tuned on SOSpine video and detected surgical instruments with a mean average precision of 0.77. In summary, SOSpine depicts spine surgeons managing a common complication, providing opportunities to develop surgical AI models.
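The reported instrument-detection result (mean average precision of 0.77) can be illustrated with a minimal single-class sketch. The greedy matching, box format, and 0.5 IoU threshold below are common conventions, not the paper's exact evaluation code; mAP is then the mean of this AP over object classes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def average_precision(detections, truths, iou_thr=0.5):
    """AP for one class: greedily match score-ranked detections to unmatched
    ground-truth boxes, then average precision over the true positives."""
    detections = sorted(detections, key=lambda d: -d[0])  # by confidence
    matched, hits = set(), []
    for score, box in detections:
        best_iou, best_idx = 0.0, None
        for idx, gt in enumerate(truths):
            if idx in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best_iou:
                best_iou, best_idx = overlap, idx
        if best_idx is not None and best_iou >= iou_thr:
            matched.add(best_idx)  # each ground-truth box matches at most once
            hits.append(1)
        else:
            hits.append(0)
    ap, tp = 0.0, 0
    for rank, hit in enumerate(hits, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at each recall step
    return ap / len(truths) if truths else 0.0
```

A detector that finds both annotated instruments scores 1.0; one that misses half scores 0.5.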
Collapse
Affiliation(s)
- Alan Balu
- Department of Neurosurgery, Georgetown University School of Medicine, 3900 Reservoir Rd NW, Washington, D.C., 20007, USA.
| | - Guillaume Kugener
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
| | - Dhiraj J Pangal
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
| | - Heewon Lee
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
| | - Sasha Lasky
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
| | - Jane Han
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
| | - Ian Buchanan
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
| | - John Liu
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
| | - Gabriel Zada
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
| | - Daniel A Donoho
- Department of Neurosurgery, Children's National Hospital, 111 Michigan Avenue NW, Washington, DC, 20010, USA
| |
Collapse
|
14
|
Mian AH, Tollefson MK, Shah P, Sharma V, Mian A, Thompson RH, Boorjian SA, Frank I, Khanna A. Navigating Now and Next: Recent Advances and Future Horizons in Robotic Radical Prostatectomy. J Clin Med 2024; 13:359. [PMID: 38256493 PMCID: PMC10815957 DOI: 10.3390/jcm13020359] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Revised: 01/01/2024] [Accepted: 01/03/2024] [Indexed: 01/24/2024] Open
Abstract
Robotic-assisted radical prostatectomy (RARP) has become the leading approach for radical prostatectomy, driven by innovations aimed at improving functional and oncological outcomes. The initial advancement in this field was transperitoneal multiport robotics, which has since undergone numerous technical modifications. These enhancements include the development of extraperitoneal, transperineal, and transvesical approaches to radical prostatectomy, greatly facilitated by the advent of the Single Port (SP) robot. This review offers a comprehensive analysis of these evolving techniques and their impact on RARP. Additionally, we explore the transformative role of artificial intelligence (AI) in digitizing robotic prostatectomy. AI advancements, particularly in automated surgical video analysis using computer vision technology, are unprecedented in their scope. These developments hold the potential to revolutionize surgeon feedback and assessment, transform surgical documentation, and lay the groundwork for real-time AI decision support during surgical procedures in the future. Furthermore, we discuss future robotic platforms and their potential to further enhance the field of RARP. Overall, the field of minimally invasive radical prostatectomy for prostate cancer has been an incubator of innovation over the last two decades. This review focuses on recent developments in robotic prostatectomy, provides an overview of the next frontier in AI innovation during prostate cancer surgery, and highlights novel robotic platforms that may play an increasing role in prostate cancer surgery in the future.
Collapse
Affiliation(s)
- Abrar H. Mian
- Department of Urology, Mayo Clinic, Rochester, MN 55905, USA
| | | | - Paras Shah
- Department of Urology, Mayo Clinic, Rochester, MN 55905, USA
| | - Vidit Sharma
- Department of Urology, Mayo Clinic, Rochester, MN 55905, USA
| | - Ahmed Mian
- Urology Associates of Green Bay, Green Bay, WI 54301, USA
| | | | | | - Igor Frank
- Department of Urology, Mayo Clinic, Rochester, MN 55905, USA
| | - Abhinav Khanna
- Department of Urology, Mayo Clinic, Rochester, MN 55905, USA
| |
Collapse
|
15
|
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600 PMCID: PMC10771126 DOI: 10.1093/bjs/znad331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 09/18/2023] [Accepted: 09/19/2023] [Indexed: 11/14/2023]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies, and the modified Downs and Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical) with superior accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with accuracies of 67 to 100 per cent in real surgery. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Collapse
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
| | - Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Freweini Tesfai
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
| | - Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
| | - Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
| | - Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
| | - Nader K Francis
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
| |
Collapse
|
16
|
Balu A, Pangal DJ, Kugener G, Donoho DA. Pilot Analysis of Surgeon Instrument Utilization Signatures Based on Shannon Entropy and Deep Learning for Surgeon Performance Assessment in a Cadaveric Carotid Artery Injury Control Simulation. Oper Neurosurg (Hagerstown) 2023; 25:e330-e337. [PMID: 37655892 DOI: 10.1227/ons.0000000000000888] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 06/27/2023] [Indexed: 09/02/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Assessment and feedback are critical to surgical education, but direct observational feedback by experts is rarely provided because of time constraints and is typically only qualitative. Automated, video-based, quantitative feedback on surgical performance could address this gap, improving surgical training. The authors aim to demonstrate the ability of Shannon entropy (ShEn), an information theory metric that quantifies series diversity, to predict surgical performance using instrument detections generated through deep learning. METHODS Annotated images from a publicly available video data set of surgeons managing endoscopic endonasal carotid artery lacerations in a perfused cadaveric simulator were collected. A deep learning model was implemented to detect surgical instruments across video frames. ShEn score for the instrument sequence was calculated from each surgical trial. Logistic regression using ShEn was used to predict hemorrhage control success. RESULTS ShEn scores and instrument usage patterns differed between successful and unsuccessful trials (ShEn: 0.452 vs 0.370, P < .001). Unsuccessful hemorrhage control trials displayed lower entropy and less varied instrument use patterns. By contrast, successful trials demonstrated higher entropy with more diverse instrument usage and consistent progression in instrument utilization. A logistic regression model using ShEn scores (78% accuracy and 97% average precision) was at least as accurate as surgeons' attending/resident status and years of experience for predicting trial success and had similar accuracy as expert human observers. CONCLUSION ShEn score offers a summative signal about surgeon performance and predicted success at controlling carotid hemorrhage in a simulated cadaveric setting. Future efforts to generalize ShEn to additional surgical scenarios can further validate this metric.
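The core metric is straightforward to compute. A dependency-free sketch of Shannon entropy over an instrument-label sequence follows; the instrument names, logarithm base, and any normalisation are illustrative assumptions, so the absolute values in the paper may differ:

```python
import math
from collections import Counter

def shannon_entropy(sequence, base=2):
    """Shannon entropy of a categorical series, e.g. per-frame instrument
    labels: -sum(p * log(p)) over observed label frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())
```

A sequence dominated by a single instrument scores near zero, while varied instrument usage scores higher, mirroring the unsuccessful-versus-successful contrast reported above.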
Collapse
Affiliation(s)
- Alan Balu
- Department of Neurosurgery, Georgetown University School of Medicine, Washington, District of Columbia, USA
| | - Dhiraj J Pangal
- Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
| | - Guillaume Kugener
- Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
| | - Daniel A Donoho
- Division of Neurosurgery, Children's National Hospital, Washington, District of Columbia, USA
| |
Collapse
|
17
|
Wang YD, Huang CP, Yang YR, Wu HC, Hsu YJ, Yeh YC, Yeh PC, Wu KC, Kao CH. Machine Learning and Radiomics of Bone Scintigraphy: Their Role in Predicting Recurrence of Localized or Locally Advanced Prostate Cancer. Diagnostics (Basel) 2023; 13:3380. [PMID: 37958276 PMCID: PMC10648785 DOI: 10.3390/diagnostics13213380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 10/26/2023] [Accepted: 11/01/2023] [Indexed: 11/15/2023] Open
Abstract
BACKGROUND Machine learning (ML) and radiomics features have been utilized for survival outcome analysis in various cancers. This study investigates the application of ML based on patients' clinical features and radiomics features derived from bone scintigraphy (BS) to evaluate recurrence-free survival in localized or locally advanced prostate cancer (PCa) patients after initial treatment. METHODS A total of 354 patients who met the eligibility criteria were analyzed and used to train the model. Clinical information and radiomics features of BS were obtained, and survival-related clinical and radiomics features were included in the ML model training. Using the pyradiomics software, 128 radiomics features were extracted from each BS image's region of interest, which was validated by experts. Four textural matrices were also calculated: GLCM, NGLDM, GLRLM, and GLSZM. Five training models (Logistic Regression, Naive Bayes, Random Forest, Support Vector Classification, and XGBoost) were applied using K-fold cross-validation. Recurrence was defined as a rise in PSA levels, radiographic progression, or death. To assess the classifiers' effectiveness, the area under the ROC curve and the confusion matrix were employed. RESULTS Of the 354 patients, 101 were categorized into the recurrence group, with more advanced disease status compared to the non-recurrence group. Key clinical features, including tumor stage, radical prostatectomy, initial PSA, Gleason Score primary pattern, and radiotherapy, were used for model training. Random Forest (RF) was the best-performing model, with a sensitivity of 0.81, specificity of 0.87, and accuracy of 0.85. ROC curve analysis showed that predictions from RF outperformed those from the other ML models, with a final AUC of 0.94 and a p-value of <0.001. The other models had accuracies ranging from 0.52 to 0.78 and AUCs ranging from 0.67 to 0.84. CONCLUSIONS The study showed that ML based on clinical features and radiomics features of BS improves the prediction of PCa recurrence after initial treatment. These findings highlight the added value of ML techniques for risk classification in PCa based on clinical features and radiomics features of BS.
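The K-fold cross-validation described in the methods can be sketched in plain Python. The splitter below is a simplified stand-in for library implementations such as scikit-learn's StratifiedKFold, shown only to illustrate how class proportions are preserved in each fold:

```python
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs with class proportions preserved
    in every fold, as in stratified k-fold cross-validation."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)  # randomise order within each class
        for j, idx in enumerate(indices):
            folds[j % k].append(idx)  # deal indices round-robin across folds
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(idx for g in range(k) if g != f for idx in folds[g])
        yield train, test
```

With 5 folds, a cohort imbalanced at 2:1 (as in the recurrence versus non-recurrence split here) keeps roughly the same 2:1 ratio in every test fold.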
Collapse
Affiliation(s)
- Yu-De Wang
- Graduate Institute of Biomedical Sciences, School of Medicine, College of Medicine, China Medical University, Taichung 404327, Taiwan;
- Department of Urology, China Medical University Hospital, Taichung 404327, Taiwan; (C.-P.H.); (Y.-R.Y.)
| | - Chi-Ping Huang
- Department of Urology, China Medical University Hospital, Taichung 404327, Taiwan; (C.-P.H.); (Y.-R.Y.)
- School of Medicine, China Medical University, Taichung 406040, Taiwan;
| | - You-Rong Yang
- Department of Urology, China Medical University Hospital, Taichung 404327, Taiwan; (C.-P.H.); (Y.-R.Y.)
| | - Hsi-Chin Wu
- School of Medicine, China Medical University, Taichung 406040, Taiwan;
- Department of Urology, China Medical University Beigang Hospital, Yunlin 651012, Taiwan
| | - Yu-Ju Hsu
- Artificial Intelligence Center, China Medical University Hospital, Taichung 404327, Taiwan; (Y.-J.H.); (Y.-C.Y.); (P.-C.Y.); (K.-C.W.)
| | - Yi-Chun Yeh
- Artificial Intelligence Center, China Medical University Hospital, Taichung 404327, Taiwan; (Y.-J.H.); (Y.-C.Y.); (P.-C.Y.); (K.-C.W.)
| | - Pei-Chun Yeh
- Artificial Intelligence Center, China Medical University Hospital, Taichung 404327, Taiwan; (Y.-J.H.); (Y.-C.Y.); (P.-C.Y.); (K.-C.W.)
| | - Kuo-Chen Wu
- Artificial Intelligence Center, China Medical University Hospital, Taichung 404327, Taiwan; (Y.-J.H.); (Y.-C.Y.); (P.-C.Y.); (K.-C.W.)
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei 106319, Taiwan
| | - Chia-Hung Kao
- Graduate Institute of Biomedical Sciences, School of Medicine, College of Medicine, China Medical University, Taichung 404327, Taiwan;
- Artificial Intelligence Center, China Medical University Hospital, Taichung 404327, Taiwan; (Y.-J.H.); (Y.-C.Y.); (P.-C.Y.); (K.-C.W.)
- Department of Nuclear Medicine and PET Center, China Medical University Hospital, Taichung 404327, Taiwan
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung 413305, Taiwan
| |
Collapse
|
18
|
Kaoukabani G, Gokcal F, Fanta A, Liu X, Shields M, Stricklin C, Friedman A, Kudsi OY. A multifactorial evaluation of objective performance indicators and video analysis in the context of case complexity and clinical outcomes in robotic-assisted cholecystectomy. Surg Endosc 2023; 37:8540-8551. [PMID: 37789179 DOI: 10.1007/s00464-023-10432-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Accepted: 08/31/2023] [Indexed: 10/05/2023]
Abstract
BACKGROUND The increased digitization in robotic surgical procedures today enables surgeons to quantify their movements through data captured directly from the robotic system. These calculations, called objective performance indicators (OPIs), offer unprecedented detail into surgical performance. In this study, we link case- and surgical step-specific OPIs to case complexity, surgical experience and console utilization, and post-operative clinical complications across 87 robotic cholecystectomy (RC) cases. METHODS Videos of RCs performed by a principal surgeon with and without fellows were segmented into eight surgical steps and linked to patients' clinical data. Data for OPI calculations were extracted from an Intuitive Data Recorder and the da Vinci® robotic system. RC cases were each assigned a Nassar and Parkland Grading score and categorized as standard or complex. OPIs were compared across complexity groups, console attributions, and post-surgical complication severities to determine objective relationships across variables. RESULTS Across cases, differences in camera control and head positioning metrics of the principal surgeon were observed when comparing standard and complex cases. Further, OPI differences between the principal surgeon and the fellow(s) were observed in standard cases, including differences in arm swapping, camera control, and clutching behaviors. Differences in monopolar coagulation energy usage were also observed. Differences in the duration of selected surgical steps were observed across complexities and console attributions, and additional surgical task analyses determined the adhesion removal and liver bed hemostasis steps to be the most impactful for case complexity and post-surgical complications, respectively. CONCLUSION This is the first study to establish the association between OPIs, case complexities, and clinical complications in RC. We identified OPI differences in intra-operative behaviors and post-surgical complications dependent on surgeon expertise and case complexity, opening the door for more standardized assessments of teaching cases, surgical behaviors, and case complexities.
Collapse
Affiliation(s)
| | - Fahri Gokcal
- Good Samaritan Medical Center, Brockton, MA, USA
| | - Abeselom Fanta
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
| | - Xi Liu
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
| | - Mallory Shields
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
| | | | | | | |
Collapse
|
19
|
Gillani M, Rupji M, Devin C, Purvis L, Olson TP, Jarc A, Shields M, Liu Y, Rosen S. Quantification of Surgical Workflow during Robotic Proctectomy. RESEARCH SQUARE 2023:rs.3.rs-3462719. [PMID: 37886442 PMCID: PMC10602135 DOI: 10.21203/rs.3.rs-3462719/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Abstract
Aim Assessments of surgical workflow offer insight regarding procedure variability, case complexity and surgeon proficiency. We utilize an objective method to evaluate step-by-step workflow and step transitions during robotic proctectomy (RP). Methods We annotated 31 RPs using a procedure-specific annotation card. Using Spearman's correlation, we measured the strength of association of step time and step visit frequency with console time (CT) and total operative time (TOT). Results Across 31 RPs, a mean (± standard deviation) of 49.0 (± 20.3) steps occurred per procedure. Mean CT and TOT were 213 (± 90) and 283 (± 108) minutes. Posterior mesorectal dissection required the most visits (8.7 ± 5.0), while anastomosis required the most time (18.0 [± 8.5] minutes). Inferior mesenteric vein (IMV) ligation required the fewest visits (1.0 ± 0.0) and the shortest duration (0.9 [± 0.5] minutes). Strong correlations were seen between CT and step times for IMV dissection and ligation (ρ = 0.60 for both), lateral-to-medial splenic flexure mobilization (SFM) (ρ = 0.63), left rectal dissection (ρ = 0.64) and mesorectal division (ρ = 0.71). CT correlated strongly with medial-to-lateral and supracolic SFM visit frequency (ρ = 0.75 and ρ = 0.65). There were strong correlations between TOT and initial exposure time (ρ = 0.60), as well as visit frequency for medial-to-lateral (ρ = 0.67) and supracolic SFM (ρ = 0.65). Descending colon mobilization was nodal, rectal mobilization convergent and rectal transection divergent. Conclusion This study correlates individual surgical steps with CT and TOT through standardized annotation. It provides an objective approach to quantify workflow.
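The workflow analysis rests on Spearman's rank correlation, which correlates the ranks of two variables rather than their raw values. A dependency-free sketch, equivalent in spirit to scipy.stats.spearmanr:

```python
def rankdata(values):
    """Ranks starting at 1, with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

For example, feeding in per-case step times and console times would reproduce the ρ values reported above, given the annotated data.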
Collapse
|
20
|
Kaur G, Garg M, Gupta S, Juneja S, Rashid J, Gupta D, Shah A, Shaikh A. Automatic Identification of Glomerular in Whole-Slide Images Using a Modified UNet Model. Diagnostics (Basel) 2023; 13:3152. [PMID: 37835895 PMCID: PMC10572820 DOI: 10.3390/diagnostics13193152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 09/23/2023] [Accepted: 09/25/2023] [Indexed: 10/15/2023] Open
Abstract
Glomeruli are interconnected capillaries in the renal cortex that are responsible for blood filtration. Damage to these glomeruli often signifies the presence of kidney disorders like glomerulonephritis and glomerulosclerosis, which can ultimately lead to chronic kidney disease and kidney failure. The timely detection of such conditions is essential for effective treatment. This paper proposes a modified UNet model to accurately detect glomeruli in whole-slide images of kidney tissue. The UNet model was modified by changing the number of filters and feature map dimensions from the first to the last layer to enhance the model's capacity for feature extraction. Moreover, the depth of the UNet model was also increased by adding one more convolution block to both the encoder and decoder sections. The dataset used in the study comprised 20 large whole-slide images. Due to their large size, the images were cropped into 512 × 512-pixel patches, resulting in a dataset comprising 50,486 images. The proposed model performed well, with 95.7% accuracy, 97.2% precision, 96.4% recall, and a 96.7% F1-score. These results demonstrate the proposed model's superior performance compared to the original UNet model, the UNet model with EfficientNetb3, and the current state of the art. Based on these experimental findings, it has been determined that the proposed model accurately identifies glomeruli in extracted kidney patches.
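The preprocessing step, cropping large whole-slide images into 512 × 512-pixel patches, can be sketched as follows; the non-overlapping stride and the dropping of edge remainders are assumptions for illustration:

```python
def patch_grid(width, height, patch=512, stride=512):
    """Top-left corners of patches tiling a slide; non-overlapping when
    stride == patch, and patches that would overrun the edge are dropped."""
    return [(x, y)
            for y in range(0, height - patch + 1, stride)
            for x in range(0, width - patch + 1, stride)]
```

Each corner then defines one crop, so a 1024 × 1536 slide yields six 512 × 512 patches.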
Collapse
Affiliation(s)
- Gurjinder Kaur
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India; (G.K.); (M.G.); (S.G.); (D.G.)
| | - Meenu Garg
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India; (G.K.); (M.G.); (S.G.); (D.G.)
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India; (G.K.); (M.G.); (S.G.); (D.G.)
| | - Sapna Juneja
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Kuala Lumpur 53100, Malaysia;
| | - Junaid Rashid
- Department of Data Science, Sejong University, Seoul 05006, Republic of Korea;
| | - Deepali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India; (G.K.); (M.G.); (S.G.); (D.G.)
| | - Asadullah Shah
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Kuala Lumpur 53100, Malaysia;
| | - Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 55461, Saudi Arabia;
| |
Collapse
|
21
|
Abstract
Data science has the potential to greatly enhance efforts to translate evidence into practice in critical care. The intensive care unit is a data-rich environment enabling insight into both patient-level care patterns and clinician-level treatment patterns. By applying artificial intelligence to these novel data sources, implementation strategies can be tailored to individual patients, individual clinicians, and individual situations, revealing when evidence-based practices are missed and facilitating context-sensitive clinical decision support. To achieve these goals, technology developers should work closely with clinicians to create unbiased applications that are integrated into the clinical workflow.
Collapse
Affiliation(s)
- Andrew J King
- Department of Critical Care Medicine, University of Pittsburgh School of Medicine, 3500 Terrace Street, Suite 600, Pittsburgh, PA 15261, USA
| | - Jeremy M Kahn
- Department of Critical Care Medicine, University of Pittsburgh School of Medicine, 3500 Terrace Street, Suite 600, Pittsburgh, PA 15261, USA; Department of Health Policy and Management, University of Pittsburgh School of Public Health, 130 De Soto Street, Pittsburgh, PA 15261, USA.
| |
Collapse
|
22
|
Clanahan JM, Yee A, Awad MM. Active control time: an objective performance metric for trainee participation in robotic surgery. J Robot Surg 2023; 17:2117-2123. [PMID: 37237112 DOI: 10.1007/s11701-023-01628-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Accepted: 05/21/2023] [Indexed: 05/28/2023]
Abstract
Trainee participation and progression in robotic general surgery remain poorly defined. Computer-assisted technology offers the potential to provide and track objective performance metrics. In this study, we aimed to validate the use of a novel metric, active control time (ACT), for assessing trainee participation in robotic-assisted cases. Performance data from da Vinci Surgical Systems were retrospectively analyzed for all robotic cases involving trainees with a single minimally invasive surgeon over 10 months. The primary outcome metric was percent ACT, the amount of trainee console time spent in active system manipulations over total active time from both consoles. Kruskal-Wallis and Mann-Whitney U statistical tests were applied in analyses. A total of 123 robotic cases with 18 general surgery residents and 1 fellow were included. Of these, 56 were categorized as complex. Median %ACT was statistically different between trainee levels for all case types taken in aggregate (PGY1s 3.0% [IQR 2-14%], PGY3s 32% [IQR 27-66%], PGY4s 42% [IQR 26-52%], PGY5s 50% [IQR 28-70%], and fellow 61% [IQR 41-85%]; p < 0.0001). When stratified by complexity, median %ACT was higher in standard versus complex cases for the PGY5 (60% vs. 36%, p = 0.0002) and fellow groups (74% vs. 47%, p = 0.0045). In this study, we demonstrated an increase in %ACT with trainee level and with standard versus complex robotic cases. These findings are consistent with our hypotheses, providing validity evidence for ACT as an objective measurement of trainee participation in robotic-assisted cases. Future studies will aim to define task-specific ACT to guide further robotic training and performance assessments.
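Percent ACT as defined above reduces to a simple ratio of active console times, which can then be summarised per trainee level; the function and field names below are illustrative, not the study's implementation:

```python
from statistics import median

def percent_act(trainee_active_s, total_active_s):
    """Trainee active manipulation time as a share of total active time
    from both consoles, expressed as a percentage."""
    return 100.0 * trainee_active_s / total_active_s if total_active_s else 0.0

def median_pct_act_by_level(cases):
    """cases: iterable of (level, trainee_active_s, total_active_s) records;
    returns the median %ACT per trainee level, as reported in the study."""
    by_level = {}
    for level, trainee_s, total_s in cases:
        by_level.setdefault(level, []).append(percent_act(trainee_s, total_s))
    return {level: median(values) for level, values in by_level.items()}
```

For instance, a PGY1 active for 3 of 100 active minutes scores 3% while a fellow active for 61 of 100 scores 61%, matching the gradient described above.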
Collapse
Affiliation(s)
- Julie M Clanahan
- Department of Surgery, Section of Minimally Invasive Surgery, Washington University School of Medicine, 660 South Euclid Avenue, Mailstop 8109-22-9905, Campus Box 8109, St. Louis, MO, 63110-1093, USA.
| | - Andrew Yee
- Data and Analytics, Intuitive Surgical, Inc., Peachtree Corners, GA, 30092, USA
| | - Michael M Awad
- Department of Surgery, Section of Minimally Invasive Surgery, Washington University School of Medicine, 660 South Euclid Avenue, Mailstop 8109-22-9905, Campus Box 8109, St. Louis, MO, 63110-1093, USA
| |
Collapse
|
23
|
Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023:10.1007/s00464-023-10214-7. [PMID: 37389741 PMCID: PMC10338401 DOI: 10.1007/s00464-023-10214-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2023] [Accepted: 06/12/2023] [Indexed: 07/01/2023]
Abstract
BACKGROUND The increasing use of robot-assisted surgery (RAS) has created a need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available for clinicians. This may be among the reasons AI has yet to reach the clinical setting. METHOD We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons and prepared the data for use in AI, following a structured guide with these steps: 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS 15 participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for use in AI.
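The four-step acquisition guide above might be organized as one record per recording; all field and class names below are illustrative, not from the published protocol:

```python
from dataclasses import dataclass, field

@dataclass
class RasRecording:
    """One captured robot-assisted surgery recording, staged for AI use."""
    procedure: str
    robot_video: str       # path to image data captured from the surgical robot
    movement_video: str    # path to video of the surgeon's arms and hands
    events: list = field(default_factory=list)  # (timestamp_s, event_label) pairs
    labels: list = field(default_factory=list)  # frame-level annotations

    def add_event(self, t_s, label):
        """Append one extracted event (e.g., an instrument swap)."""
        self.events.append((t_s, label))

rec = RasRecording("cholecystectomy", "robot_01.mp4", "surgeon_01.mp4")
rec.add_event(12.5, "instrument_swap")
print(len(rec.events))  # → 1
```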
Collapse
Affiliation(s)
- Nasseh Hashemi
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark.
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark.
- ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark.
- Department of Urology, Aalborg University Hospital, Aalborg, Denmark.
| | - Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
| | - Sten Rasmussen
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
| | - Martin G Tolsgaard
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
| | - Mikkel Lønborg Friis
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
| |
Collapse
|
24
|
Baghdadi A, Lama S, Singh R, Sutherland GR. Tool-tissue force segmentation and pattern recognition for evaluating neurosurgical performance. Sci Rep 2023; 13:9591. [PMID: 37311965 DOI: 10.1038/s41598-023-36702-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Accepted: 06/08/2023] [Indexed: 06/15/2023] Open
Abstract
Surgical data quantification and comprehension expose subtle patterns in tasks and performance. Enabling surgical devices with artificial intelligence provides surgeons with personalized and objective performance evaluation: a virtual surgical assist. Here we present machine learning models developed for analyzing surgical finesse using tool-tissue interaction force data in surgical dissection obtained from sensorized bipolar forceps. Data modeling was performed using 50 neurosurgery procedures that involved elective surgical treatment for various intracranial pathologies. The data collection was conducted by 13 surgeons of varying experience levels using sensorized bipolar forceps, the SmartForceps System. The machine learning work comprised the design and implementation of models for three primary purposes: force profile segmentation to obtain active periods of tool utilization, using T-U-Net; surgical skill classification into Expert and Novice; and surgical task recognition into two primary categories, Coagulation versus non-Coagulation, using FTFIT deep learning architectures. The final report to the surgeon was a dashboard containing recognized segments of force application categorized into skill and task classes, along with performance metric charts compared to expert-level surgeons. Operating room data recordings of > 161 h, containing approximately 3.6 K periods of tool operation, were utilized. The modeling resulted in a weighted F1-score = 0.95 and AUC = 0.99 for force profile segmentation using T-U-Net, weighted F1-score = 0.71 and AUC = 0.81 for surgical skill classification, and weighted F1-score = 0.82 and AUC = 0.89 for surgical task recognition using a subset of hand-crafted features augmented to the FTFIT neural network. This study delivers a novel cloud-based machine learning module, enabling an end-to-end platform for intraoperative surgical performance monitoring and evaluation. Accessed through a secure application for professional connectivity, a paradigm for data-driven learning is established.
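The weighted F1 and AUC metrics reported above can be computed with scikit-learn; the Expert/Novice labels and scores below are made up, standing in for a skill classifier's outputs:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

# Hypothetical Expert (1) vs Novice (0) labels and model scores
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3])
y_pred  = (y_score >= 0.5).astype(int)  # threshold scores into class labels

wf1 = f1_score(y_true, y_pred, average="weighted")  # weighted F1, as reported
auc = roc_auc_score(y_true, y_score)                # AUC, as reported
print(round(wf1, 2), round(auc, 2))  # → 1.0 1.0 (toy data is perfectly separable)
```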
Collapse
Affiliation(s)
- Amir Baghdadi
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
| | - Sanju Lama
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
| | - Rahul Singh
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
| | - Garnette R Sutherland
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada.
| |
Collapse
|
25
|
Tolsgaard MG, Pusic MV, Sebok-Syer SS, Gin B, Svendsen MB, Syer MD, Brydges R, Cuddy MM, Boscardin CK. The fundamentals of Artificial Intelligence in medical education research: AMEE Guide No. 156. MEDICAL TEACHER 2023; 45:565-573. [PMID: 36862064 DOI: 10.1080/0142159x.2023.2180340] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate the assessment of written responses or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required. Few conceptual or methodological guides exist for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: 1) describe practical considerations involved in reading and conducting studies in medical education using AI, 2) define basic terminology, and 3) identify which medical education problems and data are ideally suited for using AI.
Collapse
Affiliation(s)
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen, Denmark
- Department of Obstetrics, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
| | - Martin V Pusic
- Department of Pediatrics, Harvard University, Boston, MA, USA
| | | | - Brian Gin
- Department of Pediatrics, University of California San Francisco, San Francisco, USA
| | - Morten Bo Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen, Denmark
| | - Mark D Syer
- School of Computing, Queen's University, Kingston, Canada
| | - Ryan Brydges
- Allan Waters Family Simulation Centre, St. Michael's Hospital, Unity Health Toronto & Department of Medicine, University of Toronto, Toronto, Canada
| | | | - Christy K Boscardin
- Department of Medicine and Anesthesia, University of California San Francisco, San Francisco, CA, USA
| |
Collapse
|
26
|
Kiyasseh D, Ma R, Haque TF, Miles BJ, Wagner C, Donoho DA, Anandkumar A, Hung AJ. A vision transformer for decoding surgeon activity from surgical videos. Nat Biomed Eng 2023:10.1038/s41551-023-01010-8. [PMID: 36997732 DOI: 10.1038/s41551-023-01010-8] [Citation(s) in RCA: 25] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 02/15/2023] [Indexed: 04/01/2023]
Abstract
The intraoperative activity of a surgeon has a substantial impact on postoperative outcomes. However, for most surgical procedures, the details of intraoperative surgical actions, which can vary widely, are not well understood. Here we report a machine learning system leveraging a vision transformer and supervised contrastive learning for the decoding of elements of intraoperative surgical activity from videos commonly collected during robotic surgeries. The system accurately identified surgical steps, actions performed by the surgeon, the quality of these actions and the relative contribution of individual video frames to the decoding of the actions. Through extensive testing on data from three different hospitals located on two different continents, we show that the system generalizes across videos, surgeons, hospitals and surgical procedures, and that it can provide information on surgical gestures and skills from unannotated videos. Decoding intraoperative activity via accurate machine learning systems could be used to provide surgeons with feedback on their operating skills, and may allow for the identification of optimal surgical behaviour and for the study of relationships between intraoperative factors and postoperative outcomes.
Collapse
Affiliation(s)
- Dani Kiyasseh
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA.
| | - Runzhuo Ma
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, Los Angeles, CA, USA
| | - Taseen F Haque
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, Los Angeles, CA, USA
| | - Brian J Miles
- Department of Urology, Houston Methodist Hospital, Houston, TX, USA
| | - Christian Wagner
- Department of Urology, Pediatric Urology and Uro-Oncology, Prostate Center Northwest, St. Antonius-Hospital, Gronau, Germany
| | - Daniel A Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, USA
| | - Animashree Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
| | - Andrew J Hung
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, Los Angeles, CA, USA.
| |
Collapse
|
27
|
Bykanov A, Danilov G, Kostumov V, Pilipenko O, Nutfullin B, Rastvorova O, Pitskhelauri D. Artificial Intelligence Technologies in the Microsurgical Operating Room (Review). Sovrem Tekhnologii Med 2023; 15:86-94. [PMID: 37389018 PMCID: PMC10306972 DOI: 10.17691/stm2023.15.2.08] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Indexed: 07/01/2023] Open
Abstract
Surgery performed by a novice neurosurgeon under the constant supervision of a senior surgeon with the experience of thousands of operations, able to handle any intraoperative complications and predict them in advance, and never getting tired, is currently an elusive dream, but it can become a reality with the development of artificial intelligence methods. This paper presents a review of the literature on the use of artificial intelligence technologies in the microsurgical operating room. The search for sources was carried out in the PubMed text database of medical and biological publications. The key words used were "surgical procedures", "dexterity", "microsurgery" AND "artificial intelligence" OR "machine learning" OR "neural networks". Articles in English and Russian were considered, with no limitation on publication date. The main directions of research on the use of artificial intelligence technologies in the microsurgical operating room are highlighted. Although machine learning has been increasingly introduced into the medical field in recent years, only a small number of studies related to the problem of interest have been published, and their results have not yet proved to be of practical use. However, the social significance of this direction is an important argument for its development.
Collapse
Affiliation(s)
- A.E. Bykanov
- Neurosurgeon, 7 Department of Neurosurgery, Researcher; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
| | - G.V. Danilov
- Academic Secretary; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
| | - V.V. Kostumov
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
| | - O.G. Pilipenko
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
| | - B.M. Nutfullin
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
| | - O.A. Rastvorova
- Resident, 7 Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
| | - D.I. Pitskhelauri
- Professor, Head of the 7 Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
| |
Collapse
|
28
|
Devin CL, Gillani M, Shields MC, Eldredge K, Kucera W, Rupji M, Purvis LA, Paul Olson TJ, Liu Y, Jarc A, Rosen SA. Ratio of Economy of Motion: A New Objective Performance Indicator to Assign Consoles During Dual-Console Robotic Proctectomy. Am Surg 2023:31348231161767. [PMID: 36898676 DOI: 10.1177/00031348231161767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2023]
Abstract
BACKGROUND Our group investigates objective performance indicators (OPIs) to analyze robotic colorectal surgery. Analyses of OPI data are difficult in dual-console procedures (DCPs), as there is currently no reliable, efficient, or scalable technique to assign console-specific OPIs during a DCP. We developed and validated a novel metric to assign tasks to the appropriate surgeon during DCPs. METHODS A colorectal surgeon and a fellow reviewed 21 unedited, dual-console proctectomy videos with no information identifying the operating surgeons. The reviewers watched a small number of random tasks and assigned "attending" or "trainee" to each task. Based on this sampling, the remaining task assignments for each procedure were extrapolated. In parallel, we applied our newly developed OPI, the ratio of economy of motion (rEOM), to assign consoles. Results from the two methods were compared. RESULTS A total of 1811 individual surgical tasks were recorded across the 21 proctectomy videos. A median of 6.5 random tasks (137 total) were reviewed per video, and the remaining task assignments were extrapolated based on the 7.6% of tasks audited. Task assignment agreement was 91.2% for video review vs rEOM, with rEOM providing ground truth. It took 2.5 hours to manually review video and assign tasks; rEOM task assignment was immediately available based on OPI recordings and automated calculation. DISCUSSION We developed and validated rEOM as an accurate, efficient, and scalable OPI for assigning individual surgical tasks to the appropriate surgeon during DCPs. This new resource will be useful to everyone involved in OPI research across all surgical specialties.
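One plausible reading of a ratio-of-economy-of-motion rule (a sketch only, not the paper's exact definition of rEOM) assigns each task to whichever console accounted for most of the instrument travel during it:

```python
import numpy as np

def path_length(xyz):
    """Total instrument travel for one console during a task."""
    xyz = np.asarray(xyz, dtype=float)
    return float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum())

def assign_console(console1_xyz, console2_xyz, label1="attending", label2="trainee"):
    """Assign the task to the console with the larger share of motion.
    Returns the label and console 1's share of total travel."""
    l1, l2 = path_length(console1_xyz), path_length(console2_xyz)
    r = l1 / (l1 + l2) if (l1 + l2) else 0.5
    return (label1 if r >= 0.5 else label2), r

active = [(0, 0, 0), (1, 0, 0), (1, 2, 0)]   # 3.0 units of travel
idle   = [(0, 0, 0), (0.1, 0, 0)]            # 0.1 units of travel
who, ratio = assign_console(active, idle)
print(who, round(ratio, 2))  # → attending 0.97
```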
Collapse
Affiliation(s)
- Courtney L Devin
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | | | - Kyle Eldredge
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Walter Kucera
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Manali Rupji
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Lilia A Purvis
- Research Division, Intuitive Surgical, Norcross, GA, USA
| | | | - Yuan Liu
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA; Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, USA
| | - Anthony Jarc
- Research Division, Intuitive Surgical, Norcross, GA, USA
| | - Seth A Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| |
Collapse
|
29
|
Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hung AJ. Exploring the Use of Artificial Intelligence in the Management of Prostate Cancer. Curr Urol Rep 2023; 24:231-240. [PMID: 36808595 PMCID: PMC10090000 DOI: 10.1007/s11934-023-01149-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/30/2023] [Indexed: 02/21/2023]
Abstract
PURPOSE OF REVIEW This review aims to explore the current state of research on the use of artificial intelligence (AI) in the management of prostate cancer. We examine the various applications of AI in prostate cancer, including image analysis, prediction of treatment outcomes, and patient stratification. Additionally, the review evaluates the current limitations and challenges faced in the implementation of AI in prostate cancer management. RECENT FINDINGS Recent literature has focused particularly on the use of AI in radiomics, pathomics, the evaluation of surgical skills, and patient outcomes. AI has the potential to revolutionize the future of prostate cancer management by improving diagnostic accuracy, treatment planning, and patient outcomes. Studies have shown improved accuracy and efficiency of AI models in the detection and treatment of prostate cancer, but further research is needed to understand their full potential as well as their limitations.
Collapse
Affiliation(s)
- Timothy N Chu
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA
| | - Elyssa Y Wong
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA
| | - Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA
| | - Cherine H Yang
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA
| | - Istabraq S Dalieh
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA
| | - Andrew J Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue Suite 7416, Los Angeles, CA, 90089, USA.
| |
Collapse
|
30
|
Azargoshasb S, Boekestijn I, Roestenberg M, KleinJan GH, van der Hage JA, van der Poel HG, Rietbergen DDD, van Oosterom MN, van Leeuwen FWB. Quantifying the Impact of Signal-to-background Ratios on Surgical Discrimination of Fluorescent Lesions. Mol Imaging Biol 2023; 25:180-189. [PMID: 35711014 PMCID: PMC9971139 DOI: 10.1007/s11307-022-01736-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 03/28/2022] [Accepted: 04/21/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE Surgical fluorescence guidance has gained popularity in various settings, e.g., minimally invasive robot-assisted laparoscopic surgery. In pursuit of novel receptor-targeted tracers, the field of fluorescence-guided surgery is currently moving toward increasingly lower signal intensities. This highlights the importance of understanding the impact of low fluorescence intensities on clinical decision making. This study uses kinematics to investigate the impact of signal-to-background ratios (SBR) on surgical performance. METHODS Using a custom grid exercise containing hidden fluorescent targets, a da Vinci Xi robot with a Firefly fluorescence endoscope, and ProGrasp and Maryland forceps instruments, we studied how the participants' (N = 16) actions were influenced by the fluorescent SBR. To monitor the surgeon's actions, the surgical instrument tip was tracked using a custom video-based tracking framework. The digitized instrument tracks were then subjected to multi-parametric kinematic analysis, allowing for the isolation of various metrics (e.g., velocity, jerkiness, tortuosity). These were incorporated into scores for dexterity (Dx), decision making (DM), overall performance (PS) and proficiency. All were related to the SBR values. RESULTS Multi-parametric analysis showed that task completion time, time spent in fluorescence-imaging mode and total pathlength are metrics that are directly related to the SBR. Below SBR 1.5, these values substantially increased and handling errors became more frequent. The difference in Dx and DM between targets with SBR < 1.50 and those with SBR > 1.50 indicates that the latter group generally yields a 2.5-fold higher Dx value and a threefold higher DM value. As these values provide the basis for the PS score, proficiency could only be achieved at SBR > 1.55.
CONCLUSION By tracking the surgical instruments, we were able to, for the first time, quantitatively and objectively assess how instrument positioning is impacted by the fluorescent SBR. Our findings suggest that in ideal situations a minimum SBR of 1.5 is required to discriminate fluorescent lesions, a substantially lower value than the SBR of 2 often reported in the literature.
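The kinematic quantities the SBR analysis above relies on, such as the total pathlength of a tracked instrument tip, reduce to simple geometry; the intensity values and tracks below are hypothetical:

```python
import numpy as np

def sbr(signal_mean, background_mean):
    """Signal-to-background ratio of a fluorescent target."""
    return signal_mean / background_mean

def total_pathlength(track_xy):
    """Total travel of a tracked instrument tip (in pixels), one of the
    kinematic metrics related to SBR in the study."""
    track_xy = np.asarray(track_xy, dtype=float)
    return float(np.linalg.norm(np.diff(track_xy, axis=0), axis=1).sum())

# A lesion just at the reported 1.5 discrimination threshold
print(sbr(150.0, 100.0) >= 1.5)  # → True

hesitant = [(0, 0), (3, 4), (0, 0), (3, 4)]  # back-and-forth search pattern
direct   = [(0, 0), (3, 4)]                  # straight approach
print(total_pathlength(hesitant), total_pathlength(direct))  # → 15.0 5.0
```

A longer pathlength for the same target, as in the hesitant track, is the kind of signature that increased below SBR 1.5.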
Collapse
Affiliation(s)
- Samaneh Azargoshasb
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
| | - Imke Boekestijn
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Meta Roestenberg
- Department of Parasitology, Leiden University Medical Center, Leiden, the Netherlands; Department of Infectious Diseases, Leiden University Medical Center, Leiden, the Netherlands
| | - Gijs H KleinJan
- Department of Urology, Leiden University Medical Center, Leiden, The Netherlands
| | - Jos A van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
| | - Henk G van der Poel
- Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
| | - Daphne D D Rietbergen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Section of Nuclear Medicine, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Matthias N van Oosterom
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands
| | - Fijs W B van Leeuwen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Department of Urology, Netherlands Cancer Institute-Antoni Van Leeuwenhoek Hospital, Amsterdam, the Netherlands.
| |
Collapse
|
31
|
Leung T, Harjai B, Simpson S, Du AL, Tully JL, George O, Waterman R. An Ensemble Learning Approach to Improving Prediction of Case Duration for Spine Surgery: Algorithm Development and Validation. JMIR Perioper Med 2023; 6:e39650. [PMID: 36701181 PMCID: PMC9912154 DOI: 10.2196/39650] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 11/29/2022] [Accepted: 12/25/2022] [Indexed: 12/26/2022] Open
Abstract
BACKGROUND Estimating surgical case duration accurately is an important operating room efficiency metric. Current predictive techniques in spine surgery include less sophisticated approaches such as classical multivariable statistical models. Machine learning approaches have been used to predict outcomes such as length of stay and time to return to normal work, but have not focused on case duration. OBJECTIVE The primary objective of this 4-year, single-academic-center, retrospective study was to use an ensemble learning approach to improve the accuracy of scheduled case duration for spine surgery. The primary outcome measure was case duration. METHODS We compared machine learning models using surgical and patient features to our institutional method, which used historical averages with surgeon adjustments as needed. We implemented multivariable linear regression, random forest, bagging, and XGBoost (Extreme Gradient Boosting) and calculated the average R2, root-mean-square error (RMSE), explained variance, and mean absolute error (MAE) using k-fold cross-validation. We then used the SHAP (Shapley Additive Explanations) explainer model to determine feature importance. RESULTS A total of 3189 patients who underwent spine surgery were included. The institution's current method of predicting case times correlates poorly with actual times (R2=0.213). On k-fold cross-validation, the linear regression model had an explained variance score of 0.345, an R2 of 0.34, an RMSE of 162.84 minutes, and an MAE of 127.22 minutes. Among all models, the XGBoost regressor performed best, with an explained variance score of 0.778, an R2 of 0.770, an RMSE of 92.95 minutes, and an MAE of 44.31 minutes. Based on SHAP analysis of the XGBoost regression, body mass index, spinal fusions, surgical procedure, and the number of spine levels involved were the features with the most impact on the model. CONCLUSIONS Using ensemble learning-based predictive models, specifically XGBoost regression, can improve the accuracy of estimated spine surgery times.
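The cross-validated R2/RMSE/MAE protocol above can be sketched as follows; scikit-learn's GradientBoostingRegressor stands in for XGBoost here, and the features and case durations are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for surgical/patient features and case duration (minutes)
X = rng.normal(size=(300, 4))
y = 180 + 40 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(scale=10, size=300)

r2s, rmses, maes = [], [], []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor(random_state=0).fit(X[tr], y[tr])
    pred = model.predict(X[te])
    r2s.append(r2_score(y[te], pred))
    rmses.append(mean_squared_error(y[te], pred) ** 0.5)  # RMSE in minutes
    maes.append(mean_absolute_error(y[te], pred))         # MAE in minutes

# Average metrics across folds, as in the study's reporting
print(round(np.mean(r2s), 2), round(np.mean(rmses), 1), round(np.mean(maes), 1))
```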
Collapse
Affiliation(s)
| | - Bhavya Harjai
- Division of Perioperative Informatics, Department of Anesthesiology, University of California, San Diego, San Diego, CA, United States
| | - Sierra Simpson
- Division of Perioperative Informatics, Department of Anesthesiology, University of California, San Diego, San Diego, CA, United States
| | - Austin Liu Du
- School of Medicine, University of California, San Diego, San Diego, CA, United States
| | - Jeffrey Logan Tully
- Division of Perioperative Informatics, Department of Anesthesiology, University of California, San Diego, San Diego, CA, United States
| | - Olivier George
- Department of Psychiatry, University of California, San Diego, San Diego, CA, United States
| | - Ruth Waterman
- Department of Anesthesiology, University of California, San Diego, San Diego, CA, United States
| |
Collapse
|
32
|
Deep neural network architecture for automated soft surgical skills evaluation using objective structured assessment of technical skills criteria. Int J Comput Assist Radiol Surg 2023; 18:929-937. [PMID: 36694051 DOI: 10.1007/s11548-022-02827-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 12/22/2022] [Indexed: 01/26/2023]
Abstract
PURPOSE Classic methods of surgical skill evaluation tend to classify surgeon performance into multiple discrete classes. While this classification scheme has proven effective, it does not provide in-between evaluation levels. If such intermediate scoring levels were available, they would provide a more accurate evaluation of the surgical trainee. METHODS We propose a novel approach to assess surgical skills on a continuous scale ranging from 1 to 5. We show that the proposed approach is flexible enough to be used either for scores of global performance or for several sub-scores based on a surgical criteria set called Objective Structured Assessment of Technical Skills (OSATS). We established a combined CNN+BiLSTM architecture to take advantage of both the temporal and spatial features of kinematic data. Our experimental validation relies on real-world data obtained from the JIGSAWS database. The surgeons are evaluated on three tasks: Knot-Tying, Needle-Passing, and Suturing. The proposed framework of neural networks takes as input a sequence of 76 kinematic variables and produces a float score ranging from 1 to 5, reflecting the quality of the performed surgical task. RESULTS Our proposed model achieves high-quality OSATS score predictions, with mean Spearman correlation coefficients between the predicted outputs and the ground-truth outputs of 0.82, 0.60 and 0.65 for Knot-Tying, Needle-Passing and Suturing, respectively. To our knowledge, we are the first to achieve this regression performance using the OSATS criteria and the JIGSAWS kinematic data. CONCLUSION An effective deep learning tool was created for the purpose of surgical skills assessment. Our method could be a promising surgical skills evaluation tool for surgical training programs.
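The Spearman correlation used above to score agreement between predicted continuous scores and rater ground truth is a one-liner with SciPy; the score vectors below are invented for illustration:

```python
from scipy.stats import spearmanr

# Hypothetical predicted continuous OSATS scores vs. rater ground truth (1-5 scale)
predicted    = [2.1, 3.4, 4.8, 1.5, 3.9, 2.8]
ground_truth = [2.0, 3.5, 5.0, 1.0, 4.0, 3.0]

rho, p = spearmanr(predicted, ground_truth)
print(round(rho, 2))  # → 1.0 (the two rankings agree exactly on this toy data)
```

Because Spearman compares ranks rather than raw values, a regressor is rewarded for ordering trainees correctly even when its absolute scores are offset.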
Collapse
|
33
|
Ma R, Ramaswamy A, Xu J, Trinh L, Kiyasseh D, Chu TN, Wong EY, Lee RS, Rodriguez I, DeMeo G, Desai A, Otiato MX, Roberts SI, Nguyen JH, Laca J, Liu Y, Urbanova K, Wagner C, Anandkumar A, Hu JC, Hung AJ. Surgical gestures as a method to quantify surgical performance and predict patient outcomes. NPJ Digit Med 2022; 5:187. [PMID: 36550203 PMCID: PMC9780308 DOI: 10.1038/s41746-022-00738-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
How well a surgery is performed impacts a patient's outcomes; however, objective quantification of performance remains an unsolved challenge. Deconstructing a procedure into discrete instrument-tissue "gestures" is an emerging way to understand surgery. To establish this paradigm in a procedure where performance is the most important factor for patient outcomes, we identify 34,323 individual gestures performed in 80 nerve-sparing robot-assisted radical prostatectomies from two international medical centers. Gestures are classified into nine distinct dissection gestures (e.g., hot cut) and four supporting gestures (e.g., retraction). Our primary outcome is to identify factors impacting a patient's 1-year erectile function (EF) recovery after radical prostatectomy. We find that less use of hot cut and more use of peel/push are statistically associated with a better chance of 1-year EF recovery. Our results also show interactions between surgeon experience and gesture types: similar gesture selection resulted in different EF recovery rates depending on surgeon experience. To further validate this framework, two teams independently constructed distinct machine learning models using gesture sequences vs. traditional clinical features to predict 1-year EF. In both models, gesture sequences were better able to predict 1-year EF (Team 1: AUC 0.77, 95% CI 0.73-0.81; Team 2: AUC 0.68, 95% CI 0.66-0.70) than traditional clinical features (Team 1: AUC 0.69, 95% CI 0.65-0.73; Team 2: AUC 0.65, 95% CI 0.62-0.68). Our results suggest that gestures provide a granular method to objectively indicate surgical performance and outcomes. Application of this methodology to other surgeries may lead to discoveries on methods to improve surgery.
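The AUC figures quoted above have a direct probabilistic reading: the chance that a randomly chosen patient who recovered EF was scored higher than one who did not. A minimal sketch of that rank-based (Mann-Whitney) computation, on made-up scores rather than the study's data:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a randomly chosen
    negative case, with ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the scores carry no ranking information; 1.0 means every recovered patient outranks every non-recovered one.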
Collapse
Affiliation(s)
- Runzhuo Ma
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Ashwin Ramaswamy
- Department of Urology, Weill Cornell Medicine, New York, NY USA
| | - Jiashu Xu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA USA
| | - Loc Trinh
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA USA
| | - Dani Kiyasseh
- Department of Computing &amp; Mathematical Sciences, California Institute of Technology, Pasadena, CA USA
| | - Timothy N. Chu
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Elyssa Y. Wong
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Ryan S. Lee
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Ivan Rodriguez
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Gina DeMeo
- Department of Urology, Weill Cornell Medicine, New York, NY USA
| | - Aditya Desai
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Maxwell X. Otiato
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Sidney I. Roberts
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Jessica H. Nguyen
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Jasper Laca
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| | - Yan Liu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA USA
| | - Katarina Urbanova
- Department of Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
| | - Christian Wagner
- Department of Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
| | - Animashree Anandkumar
- Department of Computing &amp; Mathematical Sciences, California Institute of Technology, Pasadena, CA USA
| | - Jim C. Hu
- Department of Urology, Weill Cornell Medicine, New York, NY USA
| | - Andrew J. Hung
- Center for Robotic Simulation &amp; Education, Catherine &amp; Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA USA
| |
Collapse
|
34
|
van Leeuwen FWB, van der Hage JA. Where Robotic Surgery Meets the Metaverse. Cancers (Basel) 2022; 14:cancers14246161. [PMID: 36551645 PMCID: PMC9776294 DOI: 10.3390/cancers14246161] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 12/05/2022] [Indexed: 12/15/2022] Open
Abstract
With a focus on hepatobiliary surgery, the review by Giannone et al [...].
Collapse
Affiliation(s)
- Fijs W. B. van Leeuwen
- Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Center, 2300 RC Leiden, The Netherlands
- Correspondence:
| | - Jos A. van der Hage
- Department of Surgery, Leiden University Medical Center, 2300 RC Leiden, The Netherlands
| |
Collapse
|
35
|
Mohamadipanah H, Perumalla CA, Kearse LE, Yang S, Wise BJ, Goll CK, Witt AK, Korndorffer JR, Pugh CM. Do Individual Surgeon Preferences Affect Procedural Outcomes? Ann Surg 2022; 276:701-710. [PMID: 35861074 PMCID: PMC10254571 DOI: 10.1097/sla.0000000000005595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Surgeon preferences such as instrument and suture selection and idiosyncratic approaches to individual procedure steps have been largely viewed as minor differences in the surgical workflow. We hypothesized that idiosyncratic approaches could be quantified and shown to have measurable effects on procedural outcomes. METHODS At the American College of Surgeons (ACS) Clinical Congress, experienced surgeons volunteered to wear motion tracking sensors and be videotaped while evaluating a loop of porcine intestines to identify and repair 2 preconfigured, standardized enterotomies. Video annotation was used to identify individual surgeon preferences, and motion data were used to quantify surgical actions. χ2 analysis was used to determine whether surgical preferences were associated with procedure outcomes (bowel leak). RESULTS Surgeons' (N=255) preferences were categorized into 4 technical decisions. Three of the 4 technical decisions (repaired injuries together, double-layer closure, corner-stitches vs no corner-stitches) played a significant role in outcomes, P < 0.05. Running versus interrupted closure did not affect outcomes. Motion analysis revealed significant differences in average operative times (leak: 6.67 min vs no leak: 8.88 min, P = 0.0004) and work effort (leak: path length = 36.86 cm vs no leak: path length = 49.99 cm, P = 0.001). Surgeons who took the riskiest path but did not leak had better bimanual dexterity (leak = 0.21/1.0 vs no leak = 0.33/1.0, P = 0.047) and placed more sutures during the repair (leak = 4.69 sutures vs no leak = 6.09 sutures, P = 0.03). CONCLUSIONS Our results show that individual preferences affect technical decisions and play a significant role in procedural outcomes. Future analysis in more complex procedures may make major contributions to our understanding of contributors to procedure outcomes.
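The χ2 analysis linking a binary technical decision to a binary outcome (leak vs. no leak) operates on a 2×2 contingency table. A minimal sketch of the Pearson statistic, with illustrative counts only (not the study's data; no continuity correction):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (rows: technique choice, columns: leak / no leak)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    # Sum (observed - expected)^2 / expected over the four cells
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat
```

At one degree of freedom, a statistic above 3.84 corresponds to P < 0.05; `scipy.stats.chi2_contingency` performs the same test with the p-value included.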
Collapse
|
36
|
Medical Data Classification Assisted by Machine Learning Strategy. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9699612. [PMID: 36124172 PMCID: PMC9482495 DOI: 10.1155/2022/9699612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 07/25/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022]
Abstract
With the development of science and technology, data plays an increasingly important role in our daily life, and much attention has therefore been paid to the field of data mining. Data classification is the premise of data mining, and how well the data are classified directly affects the performance of subsequent models. In the medical field in particular, data classification can help accurately determine the location of patients' lesions and reduce the workload of doctors during treatment. However, medical data have the characteristics of high noise, strong correlation, and high dimensionality, which bring great challenges to traditional classification models. It is therefore very important to design an advanced model to improve the effect of medical data classification. In this context, this paper first introduces the structure and characteristics of the convolutional neural network (CNN) model and then demonstrates its unique advantages in medical data processing, especially in data classification. Secondly, we design a new kind of medical data classification model based on the CNN model. Finally, the simulation results show that the proposed method achieves higher classification accuracy, faster model convergence, and lower training error than conventional machine learning methods, demonstrating the effectiveness of the new method for medical data classification.
Collapse
|
37
|
Lee RS, Ma R, Pham S, Maya-Silva J, Nguyen JH, Aron M, Cen S, Daneshmand S, Hung AJ. Machine Learning to Delineate Surgeon and Clinical Factors That Anticipate Positive Surgical Margins After Robot-Assisted Radical Prostatectomy. J Endourol 2022; 36:1192-1198. [PMID: 35414218 PMCID: PMC9422786 DOI: 10.1089/end.2021.0890] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Purpose: Automated performance metrics (APMs), derived from instrument kinematic and systems events data during robotic surgery, are validated objective measures of surgeon performance. Our previous studies showed that APMs are strong outcome predictors of urinary continence after robot-assisted radical prostatectomy (RARP). We now use machine learning to investigate how surgeon performance (i.e., APMs) and clinical factors can predict positive surgical margins (PSMs) after RARP. Methods: We prospectively collected data of patients undergoing RARP at our institution from 2016 to 2019. Random Forest model predicted PSMs based on 15 clinical factors and 38 APMs from 11 standardized RARP steps. Out-of-bag Gini impurity index determined the top 10 variables of importance (VOI). APMs in the top 10 VOI were assessed for confounding effects by extracapsular extension (ECE) and pathologic T (pT) through Poisson regression with Generalized Estimating Equation. Results: 55/236 (23.3%) cases had PSMs. Of the 55 cases with PSMs, 9 (16.4%) were pT2 and 46 (83.6%), pT3. The full model, including clinical factors and APMs, achieved area under the curve (AUC) 0.74. When assessing clinical factors or APMs alone, the model achieved AUC 0.72 and 0.64, respectively. The strongest PSM predictors were ECE and pT stage, followed by APMs in specific steps. After adjusting for ECE and pT stage, most APMs remained as independent predictors of PSM. Conclusion: Using machine learning methods, we found that the strongest predictors of PSMs after RARP are nonmodifiable, disease-driven factors (ECE and pT). While APMs provide minimal additional insight into when PSMs may occur, they are nonetheless capable of independently predicting PSMs based on objective measures of surgeon performance.
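The variable-of-importance ranking described above can be sketched with scikit-learn's random forest. Two hedges: the data below are synthetic stand-ins (the study's clinical/APM dataset is not public), and scikit-learn's `feature_importances_` is the training-set mean decrease in Gini impurity, which only approximates the paper's out-of-bag Gini index:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's inputs: 15 "clinical" + 38 "APM" columns.
X = rng.normal(size=(200, 53))
# Outcome driven by the first two columns plus noise (illustrative only).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
model.fit(X, y)

# Rank features by Gini importance and keep the top 10 "variables of importance"
voi = np.argsort(model.feature_importances_)[::-1][:10]
```

In the study, APMs surviving this ranking were then re-checked for confounding by pathologic stage, since a feature can rank highly merely by proxying for disease severity.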
Collapse
Affiliation(s)
- Ryan S. Lee
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Runzhuo Ma
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Stephanie Pham
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Jacqueline Maya-Silva
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Jessica H. Nguyen
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Manju Aron
- Department of Pathology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Steven Cen
- Department of Radiology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Siamak Daneshmand
- Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Andrew J. Hung
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| |
Collapse
|
38
|
Artificial intelligence for renal cancer: From imaging to histology and beyond. Asian J Urol 2022; 9:243-252. [PMID: 36035341 PMCID: PMC9399557 DOI: 10.1016/j.ajur.2022.05.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 04/07/2022] [Accepted: 05/07/2022] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has made considerable progress within the last decade and is the subject of contemporary literature. This trend is driven by improved computational abilities and increasing amounts of complex data that allow for new approaches in analysis and interpretation. Renal cell carcinoma (RCC) has a rising incidence since most tumors are now detected at an earlier stage due to improved imaging. This creates considerable challenges, as approximately 10%–17% of kidney tumors are designated as benign in histopathological evaluation, yet certain co-morbid populations (the obese and elderly) have an increased peri-interventional risk. AI offers an alternative solution by helping to optimize precision and guidance for diagnostic and therapeutic decisions. This narrative review introduces basic principles and provides a comprehensive overview of current AI techniques for RCC. Currently, AI applications can be found in every aspect of RCC management, including diagnostics, perioperative care, pathology, and follow-up. The most commonly applied models include neural networks, random forest, support vector machines, and regression. However, for implementation in daily practice, health care providers need to develop a basic understanding and establish interdisciplinary collaborations in order to standardize datasets, define meaningful endpoints, and unify interpretation.
Collapse
|
39
|
Bejan V, Pîslaru M, Scripcariu V. Diagnosis of Peritoneal Carcinomatosis of Colorectal Origin Based on an Innovative Fuzzy Logic Approach. Diagnostics (Basel) 2022; 12:1285. [PMID: 35626439 PMCID: PMC9140813 DOI: 10.3390/diagnostics12051285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/08/2022] [Accepted: 05/16/2022] [Indexed: 02/04/2023] Open
Abstract
Colorectal cancer represents one of the most important causes of cancer-related morbidity and mortality worldwide. One of the complications that can occur during cancer progression is peritoneal carcinomatosis. In the majority of cases, it is diagnosed at a late stage due to the lack of diagnostic tools capable of revealing the early-stage peritoneal burden; it therefore still carries a poor prognosis and quality of life, despite recent therapeutic advances. The aim of the study was to develop a fuzzy logic approach to assess the probability of peritoneal carcinomatosis using routine blood test parameters as input data. Patient data were acquired retrospectively from patients diagnosed between 2010 and 2021. The developed model focuses on the specific quantitative alteration of these parameters in the presence of peritoneal carcinomatosis, an approach that is innovative relative to the existing literature and validates the feasibility of fuzzy logic for the noninvasive diagnosis of peritoneal carcinomatosis.
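A fuzzy-logic classifier of this kind maps continuous inputs through membership functions and combines them with rule-wise min/max operators. The sketch below is a toy two-rule system: the input parameters (albumin, CRP), thresholds, and rules are all invented for illustration and are not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a, peak 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def carcinomatosis_risk(albumin, crp):
    """Toy Mamdani-style inference (hypothetical rules and thresholds).
    Rule 1: low albumin AND high CRP  -> high risk
    Rule 2: normal albumin AND low CRP -> low risk"""
    low_alb = tri(albumin, 2.0, 3.0, 4.0)    # g/dL, invented ranges
    norm_alb = tri(albumin, 3.5, 4.5, 5.5)
    high_crp = tri(crp, 20.0, 80.0, 200.0)   # mg/L, invented ranges
    low_crp = tri(crp, -1.0, 2.0, 20.0)
    high_risk = min(low_alb, high_crp)       # AND = min
    low_risk = min(norm_alb, low_crp)
    if high_risk + low_risk == 0:
        return 0.5                           # no rule fires: undecided
    # Weighted defuzzification to a 0-1 probability-like score
    return (1.0 * high_risk + 0.0 * low_risk) / (high_risk + low_risk)
```

The appeal of this design for noninvasive diagnosis is that each rule remains clinically readable, unlike the weights of a black-box classifier.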
Collapse
Affiliation(s)
- Valentin Bejan
- Department of Surgery, Faculty of Medicine, “Gr. T. Popa” University of Medicine and Pharmacy of Iași, 700115 Iasi, Romania;
| | - Marius Pîslaru
- Department of Engineering and Management, Faculty of Industrial Design and Business Management, “Gheorghe Asachi” Technical University of Iași, 700050 Iasi, Romania;
| | - Viorel Scripcariu
- Department of Surgery, Faculty of Medicine, “Gr. T. Popa” University of Medicine and Pharmacy of Iași, 700115 Iasi, Romania;
| |
Collapse
|
40
|
Zhao H, Li W, Li J, Li L, Wang H, Guo J. Predicting the Stone-Free Status of Percutaneous Nephrolithotomy With the Machine Learning System: Comparative Analysis With Guy’s Stone Score and the S.T.O.N.E Score System. Front Mol Biosci 2022; 9:880291. [PMID: 35601833 PMCID: PMC9114350 DOI: 10.3389/fmolb.2022.880291] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Accepted: 04/07/2022] [Indexed: 11/17/2022] Open
Abstract
Purpose: The aim of the study was to use machine learning methods (MLMs) to predict stone-free status after percutaneous nephrolithotomy (PCNL). We compared the performance of this system with Guy’s stone score and the S.T.O.N.E score system. Materials and Methods: Data from 222 patients (90 females, 41%) who underwent PCNL at our center were used. Twenty-six parameters, including individual variables, renal and stone factors, and surgical factors, were used as input data for the MLMs. We evaluated the efficacy of four different techniques: Lasso-logistic (LL), random forest (RF), support vector machine (SVM), and Naive Bayes. Model performance was evaluated using the area under the curve (AUC) and compared with that of Guy’s stone score and the S.T.O.N.E score system. Results: The overall stone-free rate was 50% (111/222). In predicting stone-free status, all receiver operating characteristic curves of the four MLMs lay above the curve for Guy’s stone score. The AUCs of LL, RF, SVM, and Naive Bayes were 0.879, 0.803, 0.818, and 0.803, respectively. These values were higher than the AUC of Guy’s score system, 0.800. The accuracies of the MLMs (0.803 to 0.818) were also superior to that of the S.T.O.N.E score system (0.788). Among the MLMs, Lasso-logistic showed the most favorable AUC. Conclusion: Machine learning methods can predict the stone-free rate with AUCs not inferior to those of Guy’s stone score and the S.T.O.N.E score system.
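The best-performing model here, Lasso-logistic, is simply L1-penalised logistic regression. A scikit-learn sketch on synthetic stand-in data (the study's 26 preoperative parameters are not public, so the columns and outcome below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in for 222 patients x 26 preoperative parameters
X = rng.normal(size=(222, 26))
# Invented outcome: stone-free status driven by two columns plus noise
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=222) > 0).astype(int)

# Lasso-logistic: the L1 penalty shrinks uninformative coefficients toward zero,
# acting as built-in feature selection across the 26 parameters
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
lasso_logit.fit(X, y)

auc = roc_auc_score(y, lasso_logit.predict_proba(X)[:, 1])
```

Note this computes a training-set AUC for brevity; the study's comparison against Guy's and S.T.O.N.E scores would require held-out evaluation.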
Collapse
Affiliation(s)
- Hong Zhao
- Shanghai Xuhui Central Hospital, Shanghai, China
| | - Wanling Li
- Zhongshan Hospital, Fudan University, Shanghai, China
| | - Junsheng Li
- Shanghai Xuhui Central Hospital, Shanghai, China
| | - Li Li
- Shanghai Xuhui Central Hospital, Shanghai, China
| | - Hang Wang
- Zhongshan Hospital, Fudan University, Shanghai, China
| | - Jianming Guo
- Zhongshan Hospital, Fudan University, Shanghai, China
- *Correspondence: Jianming Guo,
| |
Collapse
|
41
|
Tousignant MR, Liu X, Ershad Langroodi M, Jarc AM. Identification of Main Influencers of Surgical Efficiency and Variability Using Task-Level Objective Metrics: A Five-Year Robotic Sleeve Gastrectomy Case Series. Front Surg 2022; 9:756522. [PMID: 35586509 PMCID: PMC9108208 DOI: 10.3389/fsurg.2022.756522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
Objective Surgical efficiency and variability are critical contributors to optimal outcomes, patient experience, care team experience, and total cost to treat per disease episode. Opportunities remain to develop scalable, objective methods to quantify surgical behaviors that maximize efficiency and reduce variability. Such objective measures can then be used to provide surgeons with timely, user-specific feedback to monitor performance and facilitate training and learning. In this study, we used objective task-level analysis to identify dominant contributors to surgical efficiency and variability across the procedural steps of robotic-assisted sleeve gastrectomy (RSG) over a five-year period for a single surgeon. These results enable actionable insights that can both complement those from population-level analyses and be tailored to an individual surgeon's practice and experience. Methods Intraoperative video recordings of 77 RSG procedures performed by a single surgeon from 2015 to 2019 were reviewed and segmented into surgical tasks. Surgeon-initiated events when controlling the robotic-assisted surgical system were used to compute objective metrics. A series of multi-stage regression analyses was used to determine: whether any specific tasks or patient body mass index (BMI) statistically impacted procedure duration; which objective metrics impacted critical task efficiency; and which task(s) statistically contributed to procedure variability. Results Stomach dissection was found to be the most significant contributor to procedure duration (β = 0.344, p < 0.001; R = 0.81, p < 0.001), followed by surgical inactivity and stomach stapling. Patient BMI was not statistically significantly correlated with procedure duration (R = −0.01, p = 0.90). Energy activation rate, a robotic system event-based metric, was identified as a dominant feature in predicting stomach dissection duration and differentiating earlier and later case groups. Reduction of procedure variability was observed between the earlier (2015-2016) and later (2017-2019) groups (IQR = 14.20 min vs. 6.79 min). Stomach dissection was found to contribute most to procedure variability (β = 0.74, p < 0.001). Conclusions A surgical task-based objective analysis was used to identify major contributors to surgical efficiency and variability. We believe this data-driven method will enable clinical teams to quantify surgeon-specific performance and identify actionable opportunities focused on the dominant surgical tasks impacting overall procedure efficiency and consistency.
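The regression step reports standardized β coefficients, which make task durations measured on different scales directly comparable. A sketch on synthetic durations (the distributions below are invented, not the study's video-derived data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-case task durations (minutes) for 77 procedures.
# Dissection is given the largest spread, mimicking a dominant contributor.
dissection = rng.normal(30, 8, size=77)
stapling = rng.normal(10, 2, size=77)
inactivity = rng.normal(5, 2, size=77)
procedure = dissection + stapling + inactivity + rng.normal(0, 3, size=77)

def standardized_betas(X, y):
    """OLS on z-scored variables, so each coefficient measures impact
    in units of standard deviations rather than raw minutes."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])  # intercept + predictors
    beta, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return beta[1:]  # drop the intercept

betas = standardized_betas(
    np.column_stack([dissection, stapling, inactivity]), procedure
)
```

Because all three tasks enter the total with the same raw weight here, the task with the largest variance (dissection) receives the largest standardized β, mirroring the paper's finding that the most variable step dominates procedure duration.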
Collapse
Affiliation(s)
- Mark R. Tousignant
- Medical Safety and Innovation, Intuitive Surgical Inc., Sunnyvale, CA, United States
| | - Xi Liu
- Applied Research, Intuitive Surgical Inc., Peachtree Corners, GA, United States
- *Correspondence: Xi Liu
| | | | - Anthony M. Jarc
- Applied Research, Intuitive Surgical Inc., Peachtree Corners, GA, United States
| |
Collapse
|
42
|
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time-consuming, labor-intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature to determine the ML techniques used for technical surgical skill assessment and to identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks, towards real-life surgery, and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
Collapse
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK.
| | - James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| |
Collapse
|
43
|
Lam K, Nazarian S, Gadi N, Hakky S, Moorthy K, Tsironis C, Ahmed A, Kinross JM, Purkayastha S. Patient perspectives on surgeon-specific outcome reports in bariatric surgery. Surg Obes Relat Dis 2022; 18:704-713. [DOI: 10.1016/j.soard.2022.02.020] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 02/03/2022] [Accepted: 02/27/2022] [Indexed: 02/06/2023]
|
44
|
Trinh L, Mingo S, Vanstrum EB, Sanford D, Aastha, Ma R, Nguyen JH, Liu Y, Hung AJ. Survival Analysis Using Surgeon Skill Metrics and Patient Factors to Predict Urinary Continence Recovery After Robot-assisted Radical Prostatectomy. Eur Urol Focus 2022; 8:623-630. [PMID: 33858811 PMCID: PMC8505550 DOI: 10.1016/j.euf.2021.04.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 03/11/2021] [Accepted: 04/04/2021] [Indexed: 12/16/2022]
Abstract
BACKGROUND It has been shown that metrics recorded for instrument kinematics during robotic surgery can predict urinary continence outcomes. OBJECTIVE To evaluate the contributions of patient and treatment factors, surgeon efficiency metrics, and surgeon technical skill scores, especially for vesicourethral anastomosis (VUA), to models predicting urinary continence recovery following robot-assisted radical prostatectomy (RARP). DESIGN, SETTING, AND PARTICIPANTS Automated performance metrics (APMs; instrument kinematics and system events) and patient data were collected for RARPs performed from July 2016 to December 2017. Robotic Anastomosis Competency Evaluation (RACE) scores during VUA were manually evaluated. Training datasets included: (1) patient factors; (2) summarized APMs (reported over RARP steps); (3) detailed APMs (reported over suturing phases of VUA); and (4) technical skills (RACE). Feature selection was used to compress the dimensionality of the inputs. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS The study outcome was urinary continence recovery, defined as use of 0 or 1 safety pads per day. Two predictive models (Cox proportional hazards [CoxPH] and deep learning survival analysis [DeepSurv]) were used. RESULTS AND LIMITATIONS Of 115 patients undergoing RARP, 89 (77.4%) recovered their urinary continence and the median recovery time was 166 d (interquartile range [IQR] 82-337). VUAs were performed by 23 surgeons. The median RACE score was 28/30 (IQR 27-29). Among the individual datasets, technical skills (RACE) produced the best models (C index: CoxPH 0.695, DeepSurv: 0.708). Among summary APMs, posterior/anterior VUA yielded superior model performance over other RARP steps (C index 0.543-0.592). Among detailed APMs, metrics for needle driving yielded top-performing models (C index 0.614-0.655) over other suturing phases. DeepSurv models consistently outperformed CoxPH; both approaches performed best when provided with all the datasets. 
Limitations include feature selection, which may have excluded relevant information but prevented overfitting. CONCLUSIONS Technical skills and "needle driving" APMs during VUA were most contributory. The best-performing model used synergistic data from all datasets. PATIENT SUMMARY One of the steps in robot-assisted surgical removal of the prostate involves joining the bladder to the urethra. Detailed information on surgeon performance for this step improved the accuracy of predicting recovery of urinary continence among men undergoing this operation for prostate cancer.
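The survival models above are compared by concordance index (C index), the probability that, for a comparable pair of patients, the model assigns the higher risk to the one who experienced the event first. A minimal, library-free sketch of Harrell's C-index follows; the function name and toy data are illustrative assumptions, not study data or the authors' code.

```python
from itertools import permutations

def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering agrees with the observed event-time ordering.
    Ties in predicted risk count as half-concordant."""
    concordant, comparable = 0.0, 0
    for i, j in permutations(range(len(times)), 2):
        # a pair is comparable if patient i had the event strictly before time j
        if events[i] == 1 and times[i] < times[j]:
            comparable += 1
            if risks[i] > risks[j]:
                concordant += 1.0
            elif risks[i] == risks[j]:
                concordant += 0.5
    return concordant / comparable

# toy data: event times (days), event indicator (1 = observed), model risk scores
times = [30, 60, 90, 120]
events = [1, 1, 0, 1]
risks = [0.9, 0.7, 0.2, 0.4]
print(concordance_index(times, events, risks))  # perfectly concordant -> 1.0
```

A C index of 0.5 corresponds to random ordering, which is why values such as 0.695-0.708 above indicate modest but real discriminative ability.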
Collapse
Affiliation(s)
- Loc Trinh
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Samuel Mingo
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
| | - Erik B. Vanstrum
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
| | - Daniel Sanford
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
| | - Aastha
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
| | - Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
| | - Yan Liu
- Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA. Corresponding author: University of Southern California Institute of Urology, 1441 Eastlake Avenue, Los Angeles, CA 90089, USA. Tel. +1 323 8653700; Fax: +1 323 8650120. (A.J. Hung)
| |
Collapse
|
45
|
Abstract
Artificial intelligence (AI) incorporates machine learning and neural networks to improve existing technologies or create new ones. This review introduces potential applications of AI in the fight against colorectal cancer (CRC), including its impact on CRC epidemiology and on new methods of mass information gathering such as GeoAI, digital epidemiology, and real-time data collection. It also examines how existing diagnostic tools, including CT/MRI, endoscopy, genetics, and pathological assessment, have benefited greatly from deep learning. Finally, it discusses how AI can enhance treatment approaches to CRC. The promise of AI for therapeutic recommendation in colorectal cancer, in both clinical and translational oncology, points toward better and more personalized treatment for those in need.
Collapse
Affiliation(s)
- Chaoran Yu
- Department of General Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, People’s Republic of China
| | - Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan 430030, People’s Republic of China
| |
Collapse
|
46
|
Davids J, Lam K, Nimer A, Gianarrou S, Ashrafian H. AIM in Medical Education. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_30] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
47
|
Artificial Intelligence in Urology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
48
|
Shankar PR. Artificial intelligence in health professions education. ARCHIVES OF MEDICINE AND HEALTH SCIENCES 2022. [DOI: 10.4103/amhs.amhs_234_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
|
49
|
Mourmouris P, Tzelves L, Feretzakis G, Kalles D, Manolitsis I, Berdempes M, Varkarakis I, Skolarikos A. The use and applicability of machine learning algorithms in predicting the surgical outcome for patients with benign prostatic enlargement. Which model to use? Arch Ital Urol Androl 2021; 93:418-424. [PMID: 34933537 DOI: 10.4081/aiua.2021.4.418] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 09/22/2021] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVES Artificial intelligence (AI) is increasingly used in medicine, but data on benign prostatic enlargement (BPE) management are lacking. This study aims to test the performance of several machine learning algorithms in predicting clinical outcomes during BPE surgical management. METHODS Clinical data were extracted from a prospectively collected database for 153 men with BPE, treated with transurethral resection (monopolar or bipolar) or vaporization of the prostate. Because of the small sample size, the dataset was enlarged with the Synthetic Minority Oversampling Technique (SMOTE), which added 453 synthetic instances to the original 153. The WEKA Data Mining Software was used to construct predictive models, and appropriate statistical measures, such as the correlation coefficient (R), mean absolute error (MAE), and root mean-squared error (RMSE), were calculated for several supervised regression algorithms (linear regression, multilayer perceptron, SMOreg, k-nearest neighbors, bagging, M5Rules, M5P pruned model tree, and random forest). RESULTS Baseline characteristics were extracted, with age, prostate volume, method of operation, baseline Qmax, and baseline IPSS used as independent variables. The random forest algorithm yielded values of R, MAE, and RMSE indicating that these models better predict % Qmax increase; it also demonstrated the best R, MAE, and RMSE for predicting % IPSS reduction. CONCLUSIONS Machine learning techniques can be used to predict clinical outcomes of surgical BPE management. Wider-scale validation studies are necessary to strengthen these results and to choose the best model.
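The SMOTE augmentation used above generates synthetic minority samples by interpolating between a real sample and one of its nearest neighbours. A minimal, library-free sketch of that interpolation idea follows; it is not the WEKA implementation the study used, and the function name and toy 2-D data are invented for illustration.

```python
import math
import random

def smote(minority, n_synthetic, k=3, seed=42):
    """Generate synthetic minority-class samples (SMOTE idea): pick a real
    sample, pick one of its k nearest neighbours, and emit a random point
    on the line segment between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

# toy 2-D minority class; grow it with 5 synthetic points
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (2.0, 2.0)]
new_points = smote(minority, n_synthetic=5)
print(len(minority) + len(new_points))  # 9 samples after augmentation
```

Because each synthetic point lies between two real minority samples, the augmented set stays inside the region the minority class already occupies, which is what makes SMOTE safer than naive duplication.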
Collapse
Affiliation(s)
- Panagiotis Mourmouris
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| | - Lazaros Tzelves
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| | - Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, Patras; Department of Quality Control, Research and Continuing Education, Sismanogleio General Hospital, Marousi.
| | - Dimitris Kalles
- School of Science and Technology, Hellenic Open University, Patras.
| | - Ioannis Manolitsis
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| | - Marinos Berdempes
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| | - Ioannis Varkarakis
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| | - Andreas Skolarikos
- 2nd Department of Urology, National and Kapodistrian University of Athens, Sismanogleio General Hospital, Athens.
| |
Collapse
|
50
|
Stenzl A, Sternberg CN, Ghith J, Serfass L, Schijvenaars BJA, Sboner A. Application of Artificial Intelligence to Overcome Clinical Information Overload in Urologic Cancer. BJU Int 2021; 130:291-300. [PMID: 34846775 DOI: 10.1111/bju.15662] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To describe the use of artificial intelligence (AI) in medical literature and trial data extraction, and its applications in uro-oncology. This bridging review, which consolidates information from the diverse applications of AI, highlights how AI users can investigate more sophisticated queries than with traditional methods, synthesizing raw data and complex outputs into more actionable and personalized results, particularly in uro-oncology. METHODS Literature and clinical trial searches were performed in PubMed, Dimensions, Embase and Google (1999-2020). The searches focused on the use of AI and its various forms to facilitate literature searches, clinical guideline development, and clinical trial data extraction in uro-oncology. To illustrate how AI can be applied to address questions about optimizing therapeutic decision making and individualizing treatment regimens, the Dimensions-linked information platform was searched for "prostate cancer" keywords (76 publications were identified; 48 were included). RESULTS AI offers the promise of transforming raw data and complex outputs into actionable insights. Literature and clinical trial searches can be automated, enabling clinicians to identify and analyze publications expeditiously on complex issues such as therapeutic sequencing and to obtain updates on documents that evolve with the pace and scope of the landscape. An AI-based platform inclusive of 12 trial databases and >100 scientific literature sources enabled the creation of an interactive visualization. CONCLUSION As the literature and clinical trial landscape continues to grow in complexity and speed, the ability to pull the right information at the right time from different search engines and resources, while excluding social media bias, becomes more challenging.
This review demonstrates that by applying natural language processing and machine learning algorithms, validated and optimized AI leads to a speedier, more personalized, efficient and focused search compared with traditional methods.
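Automated literature ranking of the kind described above is often built on term weighting such as TF-IDF, which scores a document higher when it contains query terms that are rare across the corpus. The following is a minimal toy sketch of that idea, not the platform's actual method; the function name and example documents are invented for illustration.

```python
import math
from collections import Counter

def rank_by_tfidf(docs, query):
    """Score each document against the query by summing TF-IDF weights of
    the query terms; return document indices sorted by descending score."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()  # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))

    def score(toks):
        counts = Counter(toks)
        return sum(
            (counts[t] / len(toks)) * math.log((1 + n) / (1 + df[t]))
            for t in query.lower().split()
        )

    scores = [score(toks) for toks in tokenized]
    return sorted(range(n), key=lambda i: -scores[i])

docs = [
    "prostate cancer screening outcomes",
    "bladder tumour imaging",
    "prostate cancer therapeutic sequencing trial",
]
print(rank_by_tfidf(docs, "prostate cancer"))  # -> [0, 2, 1]
```

Production search platforms layer query expansion, machine-learned ranking, and source filtering on top of such term statistics, but the core relevance signal is the same.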
Collapse
Affiliation(s)
- Arnulf Stenzl
- Department of Urology, University of Tübingen, Tübingen, Germany
| | - Cora N Sternberg
- Clinical Director, Englander Institute for Precision Medicine, Professor of Medicine, Weill Cornell Medicine Hematology/Oncology, Sandra and Edward Meyer Cancer Center, New York, NY, USA
| | | | | | | | - Andrea Sboner
- Director of Informatics and Computational Biology, Englander Institute for Precision Medicine; Assistant Professor at the Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, NY, USA
| |
Collapse
|