1
Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727] [PMCID: PMC11083167] [DOI: 10.3390/cancers16091775]
Abstract
With the rapid increase in computing power over the past two decades, machine learning techniques have been applied in many sectors of daily life, and machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery. We searched PubMed/Medline and Google Scholar up to December 2023, using the search terms "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automated preoperative imaging, intraoperative anatomy matching, and bleeding prediction have been major foci, and early therapeutic outcomes of artificial intelligence (AI) are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console with which to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension sensors on robotic arms and augmented reality can further improve surgery by providing real-time organ motion monitoring, improving precision and accuracy. As datasets grow and electronic health records see wider use, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience; both seek precision to improve surgical care. AI in "master-slave" robotic surgery also offers a detailed, step-by-step examination of autonomous robotic treatments.
Affiliation(s)
- Themistoklis Bellos
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Manolitsis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Stamatios Katsimperis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
- Iraklis Mitsogiannis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Varkarakis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Bhaskar K. Somani
- Department of Urology, University of Southampton, Southampton SO16 6YD, UK
- Lazaros Tzelves
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
2
Abid R, Hussein AA, Guru KA. Artificial Intelligence in Urology: Current Status and Future Perspectives. Urol Clin North Am 2024; 51:117-130. [PMID: 37945097] [DOI: 10.1016/j.ucl.2023.06.005]
Abstract
Surgical fields, especially urology, have shifted increasingly toward the use of artificial intelligence (AI). Advancements in AI have created massive improvements in diagnostics, outcome predictions, and robotic surgery. For robotic surgery to progress from assisting surgeons to eventually reaching autonomous procedures, there must be advancements in machine learning, natural language processing, and computer vision. Moreover, barriers such as data availability, interpretability of autonomous decision-making, Internet connection and security, and ethical concerns must be overcome.
Affiliation(s)
- Rayyan Abid
- Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
- Ahmed A Hussein
- Department of Urology, Roswell Park Comprehensive Cancer Center
- Khurshid A Guru
- Department of Urology, Roswell Park Comprehensive Cancer Center
3
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600] [PMCID: PMC10771126] [DOI: 10.1093/bjs/znad331]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies, and the Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated skill or clinical outcomes, with superior accuracy in the laboratory (60 per cent of methods reported accuracies over 90 per cent) compared with real surgery (67 to 100 per cent). CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
4
Balu A, Pangal DJ, Kugener G, Donoho DA. Pilot Analysis of Surgeon Instrument Utilization Signatures Based on Shannon Entropy and Deep Learning for Surgeon Performance Assessment in a Cadaveric Carotid Artery Injury Control Simulation. Oper Neurosurg (Hagerstown) 2023; 25:e330-e337. [PMID: 37655892] [DOI: 10.1227/ons.0000000000000888]
Abstract
BACKGROUND AND OBJECTIVES Assessment and feedback are critical to surgical education, but direct observational feedback by experts is rarely provided because of time constraints and is typically only qualitative. Automated, video-based, quantitative feedback on surgical performance could address this gap, improving surgical training. The authors aim to demonstrate the ability of Shannon entropy (ShEn), an information theory metric that quantifies series diversity, to predict surgical performance using instrument detections generated through deep learning. METHODS Annotated images from a publicly available video data set of surgeons managing endoscopic endonasal carotid artery lacerations in a perfused cadaveric simulator were collected. A deep learning model was implemented to detect surgical instruments across video frames. ShEn score for the instrument sequence was calculated from each surgical trial. Logistic regression using ShEn was used to predict hemorrhage control success. RESULTS ShEn scores and instrument usage patterns differed between successful and unsuccessful trials (ShEn: 0.452 vs 0.370, P < .001). Unsuccessful hemorrhage control trials displayed lower entropy and less varied instrument use patterns. By contrast, successful trials demonstrated higher entropy with more diverse instrument usage and consistent progression in instrument utilization. A logistic regression model using ShEn scores (78% accuracy and 97% average precision) was at least as accurate as surgeons' attending/resident status and years of experience for predicting trial success and had similar accuracy as expert human observers. CONCLUSION ShEn score offers a summative signal about surgeon performance and predicted success at controlling carotid hemorrhage in a simulated cadaveric setting. Future efforts to generalize ShEn to additional surgical scenarios can further validate this metric.
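The entropy computation at the heart of this study can be sketched in a few lines. A minimal illustration (the instrument labels and the use of unnormalized base-2 entropy are assumptions for the example, not details taken from the paper):

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """Shannon entropy (bits) of a categorical sequence, e.g. the
    per-frame instrument labels detected across one surgical trial."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical instrument sequences from two trials (labels illustrative):
varied = ["suction", "grasper", "cottonoid", "suction", "grasper", "muscle"]
limited = ["suction", "suction", "suction", "suction", "grasper", "suction"]

# More diverse, evolving instrument use yields higher entropy, which the
# study found to characterize successful hemorrhage-control trials.
print(shannon_entropy(varied) > shannon_entropy(limited))
```

A per-trial score of this kind could then serve as the predictor in a logistic regression of trial success, as the study describes.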
Affiliation(s)
- Alan Balu
- Department of Neurosurgery, Georgetown University School of Medicine, Washington, District of Columbia, USA
- Dhiraj J Pangal
- Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
- Guillaume Kugener
- Department of Neurosurgery, Keck School of Medicine of University of Southern California, Los Angeles, California, USA
- Daniel A Donoho
- Division of Neurosurgery, Children's National Hospital, Washington, District of Columbia, USA
5
Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988] [PMCID: PMC10668095] [DOI: 10.1016/j.bas.2023.102706]
Abstract
Introduction With the increasing use of robotic surgical adjuncts, artificial intelligence and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired during various procedures is a subject of growing interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery. Research question In this work, we present a systematic literature review focusing on CV methodologies applied to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery. Material and methods We conducted a systematic literature search in multiple databases until January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink. Results We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNNs) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN manifested the most robust performance across different modalities. Discussion and conclusion Our systematic review demonstrates that reported CV models can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, and critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
Affiliation(s)
- Félix Buyck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Jakub Ceranka
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Frederick Van Gestel
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius
- Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
6
Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. [PMID: 37584774] [PMCID: PMC10520175] [DOI: 10.1007/s00464-023-10335-z]
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified; 50 met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- IHU Strasbourg, Strasbourg, France
- University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland
7
Bykanov A, Danilov G, Kostumov V, Pilipenko O, Nutfullin B, Rastvorova O, Pitskhelauri D. Artificial Intelligence Technologies in the Microsurgical Operating Room (Review). Sovrem Tekhnologii Med 2023; 15:86-94. [PMID: 37389018] [PMCID: PMC10306972] [DOI: 10.17691/stm2023.15.2.08]
Abstract
A surgery performed by a novice neurosurgeon under the constant supervision of a senior surgeon who has the experience of thousands of operations, can handle any intraoperative complication or predict it in advance, and never tires is currently an elusive dream, but it could become reality with the development of artificial intelligence methods. This paper presents a review of the literature on the use of artificial intelligence technologies in the microsurgical operating room. Sources were retrieved from the PubMed database of medical and biological publications using the key words "surgical procedures", "dexterity", "microsurgery" AND "artificial intelligence" OR "machine learning" OR "neural networks". Articles in English and Russian were considered, with no limit on publication date. The main directions of research on artificial intelligence technologies in the microsurgical operating room are highlighted. Although machine learning has been increasingly introduced into medicine in recent years, few studies related to this problem have been published, and their results have not yet proved to be of practical use. However, the social significance of this direction is an important argument for its development.
Affiliation(s)
- A.E. Bykanov
- Neurosurgeon, 7th Department of Neurosurgery, Researcher; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- G.V. Danilov
- Academic Secretary; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- V.V. Kostumov
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- O.G. Pilipenko
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- B.M. Nutfullin
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- O.A. Rastvorova
- Resident, 7th Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- D.I. Pitskhelauri
- Professor, Head of the 7th Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
8
Capturing fine-grained details for video-based automation of suturing skills assessment. Int J Comput Assist Radiol Surg 2023; 18:545-552. [PMID: 36282465] [PMCID: PMC9975072] [DOI: 10.1007/s11548-022-02778-x]
Abstract
OBJECTIVES Manually collected suturing technical skill scores are strong predictors of continence recovery after robotic radical prostatectomy. Herein, we automate suturing technical skill scoring through computer vision (CV) methods as a scalable way to provide feedback. METHODS Twenty-two surgeons completed a suturing exercise three times on the Mimic™ Flex VR simulator. Instrument kinematic data (XYZ coordinates of each instrument and pose) were captured at 30 Hz. After standardized training, three human raters manually segmented the suturing task videos into four sub-stitch phases (Needle handling, Needle targeting, Needle driving, Needle withdrawal) and labeled the corresponding technical skill domains (Needle positioning, Needle entry, Needle driving, and Needle withdrawal). The CV framework extracted RGB features and optical flow frames using a pre-trained AlexNet. Additional CV strategies, including auxiliary supervision (using kinematic data during training only) and attention mechanisms, were implemented to improve performance. RESULTS This study included data from 15 expert surgeons (median caseload 300 [IQR 165-750]) and 7 training surgeons (0 [IQR 0-8]). In all, 226 virtual sutures were captured. Automated assessment of Needle positioning performed best with the simplest approach (1 s video; AUC 0.749). The remaining skill domains improved with auxiliary supervision and attention mechanisms when deployed separately (AUC 0.604-0.794). All techniques combined produced the best performance, particularly for Needle driving and Needle withdrawal (AUC 0.959 and 0.879, respectively). CONCLUSIONS This study demonstrated the best performance of automated suturing technical skill assessment to date using advanced CV techniques. Future work will determine whether a "human in the loop" is necessary to verify surgeon evaluations.
9
Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hung AJ. Exploring the Use of Artificial Intelligence in the Management of Prostate Cancer. Curr Urol Rep 2023; 24:231-240. [PMID: 36808595] [PMCID: PMC10090000] [DOI: 10.1007/s11934-023-01149-6]
Abstract
PURPOSE OF REVIEW This review aims to explore the current state of research on the use of artificial intelligence (AI) in the management of prostate cancer. We examine the various applications of AI in prostate cancer, including image analysis, prediction of treatment outcomes, and patient stratification. Additionally, the review will evaluate the current limitations and challenges faced in the implementation of AI in prostate cancer management. RECENT FINDINGS Recent literature has focused particularly on the use of AI in radiomics, pathomics, the evaluation of surgical skills, and patient outcomes. AI has the potential to revolutionize the future of prostate cancer management by improving diagnostic accuracy, treatment planning, and patient outcomes. Studies have shown improved accuracy and efficiency of AI models in the detection and treatment of prostate cancer, but further research is needed to understand its full potential as well as limitations.
Affiliation(s)
- Timothy N Chu
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
- Elyssa Y Wong
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
- Cherine H Yang
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
- Istabraq S Dalieh
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
- Andrew J Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90089, USA
10
Kutana S, Bitner DP, Addison P, Chung PJ, Talamini MA, Filicori F. Objective assessment of robotic surgical skills: review of literature and future directions. Surg Endosc 2022; 36:3698-3707. [PMID: 35229215] [DOI: 10.1007/s00464-022-09134-9]
Abstract
BACKGROUND Evaluation of robotic surgical skill has become increasingly important as robotic approaches to common surgeries become more widely utilized; however, evaluation currently lacks standardization. In this paper, we aimed to review the literature on robotic surgical skill evaluation. METHODS A review of the literature on robotic surgical skill evaluation was performed, and representative literature from the past ten years is presented. RESULTS Studies of reliability and validity in robotic surgical evaluation fall into two main assessment categories: manual and automatic. Manual assessments have been shown to be valid but are typically time-consuming and costly. Automatic evaluation and simulation are similarly valid and simpler to implement. Initial reports on evaluation of skill using artificial intelligence platforms show validity. Few data on evaluation methods of surgical skill connect directly to patient outcomes. CONCLUSION As evaluation in surgery begins to incorporate robotic skills, a simultaneous shift from manual to automatic evaluation may occur given the ease of implementing these technologies. Robotic platforms offer the unique benefit of providing more objective data streams, including kinematic data, which allow for precise instrument tracking in the operative field. Such data streams will likely be incrementally implemented in performance evaluations. Similarly, with advances in artificial intelligence, machine evaluation of human technical skill will likely form the next wave of surgical evaluation.
Affiliation(s)
- Saratu Kutana
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Poppy Addison
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Paul J Chung
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Mark A Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
11
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760] [PMCID: PMC8894462] [DOI: 10.1038/s41746-022-00566-0]
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071
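The review above catalogues HMM, SVM, and ANN classifiers applied to kinematic data. As a rough stand-in for those models, this stdlib-only sketch shows the shape of the pipeline the review describes — kinematic features in, skill label out — using a nearest-centroid rule and entirely synthetic feature values, not any study's classifier or data:

```python
import math

# Toy kinematic feature vectors: (path_length, completion_time_s).
# Experts tend to move less and finish faster; all values are invented.
train = {
    "expert": [(120.0, 95.0), (110.0, 100.0), (130.0, 90.0)],
    "novice": [(260.0, 210.0), (240.0, 190.0), (280.0, 230.0)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

centroids = {label: centroid(vs) for label, vs in train.items()}

def classify(features):
    """Assign the skill label whose training centroid is nearest in feature space."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], features))

print(classify((125.0, 98.0)))   # an expert-like trial
print(classify((255.0, 205.0)))  # a novice-like trial
```

An SVM or ANN replaces the nearest-centroid rule with a learned decision boundary, but the surrounding workflow (feature extraction, training, held-out classification) is the same.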
12
Xu H, Han T, Wang H, Liu S, Hou G, Sun L, Jiang G, Yang F, Wang J, Deng K, Zhou J. OUP accepted manuscript. Eur J Cardiothorac Surg 2022; 62:6555788. [PMID: 35352106 PMCID: PMC9615432 DOI: 10.1093/ejcts/ezac154]
13
Iqbal U, Jing Z, Ahmed Y, Elsayed AS, Rogers C, Boris R, Porter J, Allaf M, Badani K, Stifelman M, Kaouk J, Terakawa T, Hinata N, Aboumohamed AA, Kauffman E, Li Q, Abaza R, Guru KA, Hussein AA, Eun D. Development and Validation of an Objective Scoring Tool for Robot-Assisted Partial Nephrectomy: Scoring for Partial Nephrectomy. J Endourol 2021; 36:647-653. [PMID: 34809491 DOI: 10.1089/end.2021.0706]
Abstract
Objective: To develop a structured and objective scoring tool for assessment of robot-assisted partial nephrectomy (RAPN): Scoring for Partial Nephrectomy (SPaN). Materials and Methods: Content development: RAPN was deconstructed into 6 domains by a multi-institutional panel of 10 expert robotic surgeons. Performance on each domain was represented on a Likert scale of 1 to 5, with specific descriptions of anchors 1, 3, and 5. Content validation: The Delphi methodology was utilized to achieve consensus on the description of each anchor for each domain in terms of appropriateness of the skill assessed, objectiveness, clarity, and unambiguous wording. A content validity index (CVI) of ≥0.75 was set as the cutoff for consensus. Reliability: 15 de-identified videos of RAPN were utilized to determine inter-rater reliability using linearly weighted percent agreement, and construct validation of SPaN was described in terms of median scores and odds ratios. Results: The expert panel reached consensus (CVI ≥0.75) after 2 rounds. Consensus was achieved for 36 (67%) statements in the first round and 18 (33%) after the second round. The final six-domain SPaN included exposure of the kidney; identification and dissection of the ureter and gonadal vessels; dissection of the hilum; tumor localization and exposure; clamping and tumor resection; and renorrhaphy. The linearly weighted percent agreement was >0.75 for all domains. There was no difference in median scores for any domain between attendings and trainees. Conclusion: Despite the lack of significant construct validity, SPaN is a structured, reliable, and procedure-specific tool that can objectively assess technical proficiency in RAPN.
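The two consensus and reliability statistics used in this abstract, the content validity index with its 0.75 cutoff and linearly weighted percent agreement, have short standard definitions. A sketch with hypothetical panel ratings; note that counting 4-5 on a 5-point scale as "relevant" is a common CVI convention assumed here, not a detail taken from the study:

```python
def content_validity_index(ratings, relevant=(4, 5)):
    """Fraction of panelists rating an item as relevant.
    Treating 4-5 on a 5-point scale as 'relevant' is an assumed convention."""
    return sum(r in relevant for r in ratings) / len(ratings)

def linear_weighted_agreement(rater_a, rater_b, k=5):
    """Mean linearly weighted agreement on a k-point scale: identical scores
    count 1.0, maximally discordant scores count 0.0."""
    return sum(1 - abs(a - b) / (k - 1) for a, b in zip(rater_a, rater_b)) / len(rater_a)

panel = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]  # 10 hypothetical experts, one statement
print(content_validity_index(panel))     # 9/10 = 0.9, above the 0.75 cutoff

a = [3, 4, 5, 2, 4]                      # two raters scoring 5 videos
b = [3, 3, 5, 4, 4]
print(linear_weighted_agreement(a, b))   # (1 + 0.75 + 1 + 0.5 + 1) / 5 = 0.85
```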
14
Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. Int J Surg 2021; 95:106151. [PMID: 34695601 DOI: 10.1016/j.ijsu.2021.106151]
Abstract
BACKGROUND Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS, and to identify and discuss current limitations and challenges. MATERIALS AND METHODS A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were visually presented in tables using the SPIDER tool. RESULTS Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern: motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error on prediction of urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy on prediction of length of stay after RARP was 88.5%. Accuracy on recognition of the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION The reviewed studies were of low quality. The findings are limited by the small size of the datasets. Comparison between studies on the same topic was restricted due to heterogeneity of algorithms and datasets. There is no proof that AI can currently identify the critical tasks of RAS operations, which determine patient outcome. There is an urgent need for studies on large datasets and external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION Review Registry Unique Identifying Number: reviewregistry1225.
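The error metric the review quotes for continence prediction, mean absolute error in days, is a straightforward average of absolute residuals. A sketch with invented recovery times, not values from any reviewed study:

```python
def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and observed values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical continence-recovery times after RARP, in days.
predicted_days = [90, 120, 60, 150]
observed_days  = [100, 200, 45, 180]
print(mean_absolute_error(predicted_days, observed_days))  # (10+80+15+30)/4 = 33.75
```

An MAE of 85.9 to 134.7 days, as reported, means the models' recovery-time predictions were off by roughly three to four months on average.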
15
Cacciamani GE, Anvar A, Chen A, Gill I, Hung AJ. How the use of the artificial intelligence could improve surgical skills in urology: state of the art and future perspectives. Curr Opin Urol 2021; 31:378-384. [PMID: 33965984 DOI: 10.1097/mou.0000000000000890]
Abstract
PURPOSE OF REVIEW As technology advances, surgical training has evolved in parallel over the previous decade. Training is commonly seen as a way to prepare surgeons for their day-to-day work; more importantly, however, it allows for certification of skills to ensure maximum patient safety. This article reviews advances in the use of machine learning and artificial intelligence for improvement of surgical skills in urology. RECENT FINDINGS Six published studies met the inclusion criteria. All articles assessed the application of artificial intelligence to improving surgical training. Different approaches were taken, such as using machine learning to identify and classify suturing gestures, creating automated objective evaluation reports, and determining surgical technical skill levels to predict clinical outcomes. The articles illustrate the continuously growing role of artificial intelligence in addressing the difficulties currently present in evaluating urological surgical skills. SUMMARY Artificial intelligence allows us to efficiently analyze the mounting data related to surgical training and use it to reach conclusions that would normally require human intelligence. Although these metrics have been shown to predict surgeon expertise and surgical outcomes, evidence is still scarce regarding their ability to directly improve patient outcomes. Considering this, active research is growing on the topic of deep learning-based computer vision to provide the automated metrics needed for real-time surgeon feedback.
16
Abstract
PURPOSE OF REVIEW This review aims to summarize innovations in urologic surgical training in the past 5 years. RECENT FINDINGS Many assessment tools have been developed to objectively evaluate surgical skills and provide structured feedback to urologic trainees. A variety of simulation modalities (i.e., virtual/augmented reality, dry-lab, animal, and cadaver) have been utilized to facilitate the acquisition of surgical skills outside the high-stakes operating room environment. Three-dimensional printing has been used to create high-fidelity, immersive dry-lab models at a reasonable cost. Non-technical skills such as teamwork and decision-making have gained more attention. Structured surgical video review has been shown to improve surgical skills not only for trainees but also for qualified surgeons. Research and development in urologic surgical training has been active in the past 5 years. Despite these advances, there is still an unfulfilled need for a standardized surgical training program covering both technical and non-technical skills.
17
Patch-based classification of gallbladder wall vascularity from laparoscopic images using deep learning. Int J Comput Assist Radiol Surg 2020; 16:103-113. [PMID: 33146850 DOI: 10.1007/s11548-020-02285-x]
Abstract
PURPOSE In this study, we propose a deep learning approach for assessment of gallbladder (GB) wall vascularity from images of laparoscopic cholecystectomy (LC). Difficulty in the visualization of GB wall vessels may be the result of fatty infiltration or increased thickening of the GB wall, potentially as a result of cholecystitis or other diseases. METHODS The dataset included 800 patches and 181 region outlines of the GB wall extracted from 53 operations of the Cholec80 video collection. The GB regions and patches were annotated by two expert surgeons using two labeling schemes: 3 classes (low, medium and high vascularity) and 2 classes (low vs. high). Two convolutional neural network (CNN) architectures were investigated. Preprocessing (vessel enhancement) and post-processing (late fusion of CNN output) techniques were applied. RESULTS The best model yielded accuracy 94.48% and 83.77% for patch classification into 2 and 3 classes, respectively. For the GB wall regions, the best model yielded accuracy 91.16% (2 classes) and 80.66% (3 classes). The inter-observer agreement was 91.71% (2 classes) and 78.45% (3 classes). Late fusion analysis allowed the computation of spatial probability maps, which provided a visual representation of the probability for each vascularity class across the GB wall region. CONCLUSIONS This study is the first significant step forward to assess the vascularity of the GB wall from intraoperative images based on computer vision and deep learning techniques. The classification performance of the CNNs was comparable to the agreement of two expert surgeons. The approach may be used for various applications such as for classification of LC operations and context-aware assistance in surgical education and practice.
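The late fusion of CNN output described here amounts to averaging per-patch class probabilities into one region-level decision. A sketch with invented probability vectors over the study's 3-class scheme (low, medium, high vascularity); the numbers and the simple-mean fusion rule are illustrative, not the paper's exact method:

```python
# Each patch from one gallbladder-wall region gets a CNN probability
# vector over (low, medium, high) vascularity; values are made up.
patch_probs = [
    (0.70, 0.20, 0.10),
    (0.55, 0.35, 0.10),
    (0.60, 0.25, 0.15),
    (0.20, 0.50, 0.30),
]
CLASSES = ("low", "medium", "high")

def late_fusion(probs):
    """Average per-patch class probabilities into one region-level vector."""
    n = len(probs)
    return tuple(sum(p[i] for p in probs) / n for i in range(len(CLASSES)))

region = late_fusion(patch_probs)
label = CLASSES[max(range(len(CLASSES)), key=lambda i: region[i])]
print(region)  # fused probabilities: (0.5125, 0.325, 0.1625)
print(label)   # "low"
```

Evaluating the fused vector per spatial location rather than per region is what yields the spatial probability maps the abstract mentions.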
18
Ma R, Vanstrum EB, Lee R, Chen J, Hung AJ. Machine learning in the optimization of robotics in the operative field. Curr Opin Urol 2020; 30:808-816. [PMID: 32925312 PMCID: PMC7735438 DOI: 10.1097/mou.0000000000000816]
Abstract
PURPOSE OF REVIEW The increasing use of robotics in urologic surgery facilitates collection of 'big data'. Machine learning enables computers to infer patterns from large datasets. This review aims to highlight recent findings and applications of machine learning in robotic-assisted urologic surgery. RECENT FINDINGS Machine learning has been used in surgical performance assessment and skill training, surgical candidate selection, and autonomous surgery. Autonomous segmentation and classification of surgical data have been explored, which serves as the stepping-stone for providing real-time surgical assessment and ultimately, improve surgical safety and quality. Predictive machine learning models have been created to guide appropriate surgical candidate selection, whereas intraoperative machine learning algorithms have been designed to provide 3-D augmented reality and real-time surgical margin checks. Reinforcement-learning strategies have been utilized in autonomous robotic surgery, and the combination of expert demonstrations and trial-and-error learning by the robot itself is a promising approach towards autonomy. SUMMARY Robot-assisted urologic surgery coupled with machine learning is a burgeoning area of study that demonstrates exciting potential. However, further validation and clinical trials are required to ensure the safety and efficacy of incorporating machine learning into surgical practice.
19
Systematic Review of Intraoperative Assessment Tools in Minimally Invasive Gynecologic Surgery. J Minim Invasive Gynecol 2020; 28:692-697. [PMID: 33086146 PMCID: PMC7568765 DOI: 10.1016/j.jmig.2020.10.007]
Abstract
OBJECTIVE To collect, summarize, and evaluate the currently available intraoperative rating tools used in abdominal minimally invasive gynecologic surgery (MIGS). DATA SOURCES Medline, Embase, and Scopus databases from January 1, 2000, to May 12, 2020. METHODS OF STUDY SELECTION A systematic search strategy was designed and executed. Published studies evaluating an assessment tool in abdominal MIGS cases were included. Studies focused on simulation, reviews, and abstracts without a published manuscript were excluded. Risk of bias and methodological quality were assessed for each study. TABULATION, INTEGRATION, AND RESULTS Disparate study methods prevented quantitative synthesis of the data. Ten studies were included in the analysis. The tools were grouped into global (n = 4) and procedure-specific assessments (n = 6). Most studies evaluated small numbers of surgeons and lacked a comparison group to evaluate the effectiveness of the tool. All studies demonstrated content validity and at least 1 dimension of reliability, and 2 have external validity. The intraoperative procedure-specific tools have been more thoroughly evaluated than the global scales. CONCLUSION Procedure-specific intraoperative assessment tools for MIGS cases are more thoroughly evaluated than global tools; however, poor-quality studies and borderline reliability limit their use. Well-designed, controlled studies evaluating the effectiveness of intraoperative assessment tools in MIGS are needed.
20
Baghdadi A, Hoshyarmanesh H, de Lotbiniere-Bassett MP, Choi SK, Lama S, Sutherland GR. Data analytics interrogates robotic surgical performance using a microsurgery-specific haptic device. Expert Rev Med Devices 2020; 17:721-730. [PMID: 32536224 DOI: 10.1080/17434440.2020.1782736]
Abstract
OBJECTIVES With the increase in robot-assisted cases, recording the quantifiable dexterity of surgeons is essential for proficiency evaluations. The present study employs sensor-based kinematics and recorded surgeon experience for evaluating a new haptic device. METHODS Thirty surgeons performed a task simulating micromanipulation with neuroArmPLUSHD and two commercially available hand-controllers. The surgical performance was evaluated based on subjective measures obtained from survey and objective features derived from the sensors. Statistical analyses were performed to assess the hand-controllers and regression analysis was used to identify the key features and develop a machine learning model for surgical skill assessment. FINDINGS MANCOVA tests on objective features demonstrated significance (α = 0.05) for time (p = 0.02), errors (p = 0.01), distance (p = 0.03), clutch incidents (p = 0.03), and forces (p = 0.00). The majority of metrics were in favor of neuroArmPLUSHD. The surgeons found it smoother, more comfortable, less tiring, and easier to maneuver with more realistic force feedback. The ensemble machine learning model trained with 5-fold cross-validation showed an accuracy (SD) of 0.78 (0.15) in surgeon skill classification. CONCLUSIONS This study validates the importance of incorporating a superior haptic device in telerobotic surgery for standardization of surgical education and patient care.
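The 5-fold cross-validation used to evaluate the skill-classification model partitions trials so that each is held out exactly once. A stdlib sketch of the index split for the study's 30 surgeons; the strided fold assignment here is illustrative, not the authors' actual partitioning scheme:

```python
def k_fold_indices(n, k):
    """Partition sample indices 0..n-1 into k folds for cross-validation;
    each fold serves once as the held-out test set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        yield train, test

# 30 surgeons, 5 folds: 24 train / 6 test per split, each surgeon held out once.
splits = list(k_fold_indices(30, 5))
print(len(splits))                           # 5
print(len(splits[0][0]), len(splits[0][1]))  # 24 6
```

Averaging the classifier's accuracy over the five held-out folds is what produces a mean (SD) figure like the reported 0.78 (0.15).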
21
Baghdadi A, Aldhaam NA, Elsayed AS, Hussein AA, Cavuoto LA, Kauffman E, Guru KA. Automated differentiation of benign renal oncocytoma and chromophobe renal cell carcinoma on computed tomography using deep learning. BJU Int 2020; 125:553-560. [PMID: 31901213 DOI: 10.1111/bju.14985]
Abstract
OBJECTIVES To develop and evaluate the feasibility of an objective method using artificial intelligence (AI) and image processing in a semi-automated fashion for tumour-to-cortex peak early-phase enhancement ratio (PEER) in order to differentiate CD117(+) oncocytoma from the chromophobe subtype of renal cell carcinoma (ChRCC) using convolutional neural networks (CNNs) on computed tomography imaging. METHODS The CNN was trained and validated to identify the kidney + tumour areas in images from 192 patients. The tumour type was differentiated through automated measurement of PEER after manual segmentation of tumours. The performance of this diagnostic model was compared with that of manual expert identification and tumour pathology with regard to accuracy, sensitivity and specificity, along with the root-mean-square error (RMSE), for the remaining 20 patients with CD117(+) oncocytoma or ChRCC. RESULTS The mean ± sd Dice similarity score for segmentation was 0.66 ± 0.14 for the CNN model to identify the kidney + tumour areas. PEER evaluation achieved accuracy of 95% in tumour type classification (100% sensitivity and 89% specificity) compared with the final pathology results (RMSE of 0.15 for PEER ratio). CONCLUSIONS We have shown that deep learning could help to produce reliable discrimination of CD117(+) benign oncocytoma and malignant ChRCC through PEER measurements obtained by computer vision.
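Two quantities in this abstract, the Dice similarity score for segmentation overlap and the PEER ratio itself, have short standard definitions. A sketch with invented mask and enhancement values (the PEER helper simply takes the ratio of peak early-phase attenuations; its inputs are hypothetical):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks (flat 0/1 lists):
    twice the intersection over the sum of the two mask areas."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

def peer(tumour_peak, cortex_peak):
    """Tumour-to-cortex peak early-phase enhancement ratio."""
    return tumour_peak / cortex_peak

pred  = [1, 1, 1, 0, 0, 1]       # CNN segmentation, flattened
truth = [1, 1, 0, 0, 1, 1]       # manual segmentation, flattened
print(dice_score(pred, truth))   # 2*3 / (4+4) = 0.75
print(peer(180.0, 240.0))        # hypothetical peak attenuations -> 0.75
```

The study's mean Dice of 0.66 thus describes moderate overlap between the CNN's kidney + tumour masks and the reference masks, while the classification itself hinges on thresholding the PEER value.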
22
Ganni S, Botden SMBI, Chmarra M, Li M, Goossens RHM, Jakimowicz JJ. Validation of Motion Tracking Software for Evaluation of Surgical Performance in Laparoscopic Cholecystectomy. J Med Syst 2020; 44:56. [PMID: 31980955 PMCID: PMC6981315 DOI: 10.1007/s10916-020-1525-9]
Abstract
Motion tracking software for assessing laparoscopic surgical proficiency has been proven effective in differentiating between expert and novice performances. However, with several indices that can be generated from the software, there is no set threshold that can be used to benchmark performances. The aim of this study was to identify the best possible algorithm to benchmark expert, intermediate, and novice performances for objective evaluation of psychomotor skills. 12 video recordings of various surgeons were collected in a blinded fashion. Data from our previous study of 6 experts and 23 novices were also included in the analysis to determine thresholds for performance. Video recordings were analyzed both by the Kinovea 0.8.15 software and by a blinded expert observer using the CAT form. Multiple algorithms were tested to accurately identify expert and novice performances. ½ L + [Formula: see text] A + [Formula: see text] J scoring of path length, average movement, and jerk index, respectively, identified 23/24 performances. Comparing the algorithm to CAT assessment yielded a linear regression coefficient R2 of 0.844. The value of motion tracking software in providing objective clinical evaluation and retrospective analysis is evident. Given the prospective use of this tool, the algorithm developed in this study proves effective in benchmarking performances for psychomotor skills evaluation.
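The score combines path length (L), average movement (A), and jerk index (J). The abstract gives ½ as the weight on L but the other two coefficients are elided in the source, so the weights in the sketch below are placeholders, not the published ones; the three indices themselves follow their standard definitions:

```python
import math

def path_length(points):
    """Total distance travelled by the instrument tip over (x, y) samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def average_movement(points):
    """Mean displacement per sampling step."""
    return path_length(points) / (len(points) - 1)

def jerk_index(positions, dt):
    """Mean magnitude of the third derivative of position (1-D samples)."""
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(abs(j) for j in jerk) / len(jerk)

def composite_score(L, A, J, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the three indices. Only the 0.5 weight on L comes
    from the abstract; the other weights are illustrative placeholders."""
    wl, wa, wj = weights
    return wl * L + wa * A + wj * J

tip = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(average_movement(tip))             # 10.0 / 2 = 5.0
print(jerk_index([0, 1, 3, 6, 10], 1))   # constant acceleration -> 0.0
```

Thresholding such a composite against expert and novice reference values is how a single cut-off for benchmarking performances, as sought in the study, would be applied.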