1
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024. [PMID: 38872018] [DOI: 10.1007/s00464-024-10958-w]
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will have to perform surgery on more patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using a fivefold cross-validation approach. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top K = 5 accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-grained annotation of laparoscopic cholecystectomies in two surgical settings. A machine recognition algorithm trained on our own annotated dataset and CholecT45 achieved a higher performance than training on CholecT45 alone and can recognize frequently occurring activities well, but not infrequent activities. The analysis of an annotated dataset allowed for the quantification of the potential of collaborative surgical robots to address the workload of surgical staff. If collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
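The two headline metrics above, top K accuracy and mean average precision, can be illustrated with a short sketch. The code below is illustrative only, not the authors' pipeline; the dataset dimensions and random scores are invented stand-ins for per-frame activity probabilities.

```python
# Illustrative sketch: top-K accuracy and mAP for multi-class activity recognition.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_frames, n_classes = 1000, 30              # invented dataset size
scores = rng.random((n_frames, n_classes))  # stand-in for model probabilities
scores /= scores.sum(axis=1, keepdims=True)
labels = rng.integers(0, n_classes, n_frames)

# Top-K accuracy: the ground-truth class is among the K highest-scoring ones.
K = 5
topk = np.argsort(scores, axis=1)[:, -K:]
topk_acc = np.mean([labels[i] in topk[i] for i in range(n_frames)])

# mAP: one-vs-rest average precision per class, then the mean over classes.
onehot = np.eye(n_classes)[labels]
mAP = average_precision_score(onehot, scores, average="macro")
print(f"top-{K} accuracy: {topk_acc:.3f}, mAP: {mAP:.3f}")
```

With real model outputs in `scores` and frame-level ground truth in `labels`, the same two computations reproduce the kind of numbers reported above.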
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), Heidelberg, Germany.
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany.
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany.
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany.
2
Skinner G, Chen T, Jentis G, Liu Y, McCulloh C, Harzman A, Huang E, Kalady M, Kim P. Real-time near infrared artificial intelligence using scalable non-expert crowdsourcing in colorectal surgery. NPJ Digit Med 2024; 7:99. [PMID: 38649447] [PMCID: PMC11035672] [DOI: 10.1038/s41746-024-01095-8]
Abstract
Surgical artificial intelligence (AI) has the potential to improve patient safety and clinical outcomes. To date, training such AI models to identify tissue anatomy requires annotations by expensive and rate-limiting surgical domain experts. Herein, we demonstrate and validate a methodology for obtaining high-quality surgical tissue annotations through crowdsourcing of non-experts, and for real-time deployment of a multimodal surgical anatomy AI model in colorectal surgery.
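A minimal sketch of the crowdsourcing idea, under the assumption that several non-expert annotators each provide a binary mask for the same frame and that the masks are fused by pixel-wise majority vote; the study's actual aggregation scheme is not detailed in the abstract.

```python
# Illustrative sketch: fuse crowd annotations into one consensus mask.
import numpy as np

def consensus_mask(crowd_masks: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """crowd_masks: (n_annotators, H, W) binary masks for one tissue class."""
    agreement = crowd_masks.mean(axis=0)         # per-pixel fraction of annotators
    return (agreement >= threshold).astype(np.uint8)

masks = np.random.default_rng(1).integers(0, 2, (7, 64, 64))  # 7 invented annotators
print(consensus_mask(masks).sum(), "pixels kept by majority vote")
```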
Affiliation(s)
- Garrett Skinner
- Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY, USA
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Tina Chen
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Yao Liu
- Activ Surgical, University at Buffalo, Buffalo, NY, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Alan Harzman
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Emily Huang
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Matthew Kalady
- The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Peter Kim
- Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY, USA.
- Activ Surgical, University at Buffalo, Buffalo, NY, USA.
3
Zuluaga L, Rich JM, Gupta R, Pedraza A, Ucpinar B, Okhawere KE, Saini I, Dwivedi P, Patel D, Zaytoun O, Menon M, Tewari A, Badani KK. AI-powered real-time annotations during urologic surgery: The future of training and quality metrics. Urol Oncol 2024; 42:57-66. [PMID: 38142209] [DOI: 10.1016/j.urolonc.2023.11.002]
Abstract
INTRODUCTION AND OBJECTIVE Real-time artificial intelligence (AI) annotation of the surgical field has the potential to automatically extract information from surgical videos, helping to create a robust surgical atlas. This content can be used for surgical education and quality initiatives. We demonstrate the first use of AI in urologic robotic surgery to capture live surgical video and annotate key surgical steps and safety milestones in real time. SUMMARY BACKGROUND DATA While AI models possess the capability to generate automated annotations based on a collection of video images, the real-time implementation of such technology in urological robotic surgery to aid surgeons and training staff has yet to be studied. METHODS We conducted an educational symposium, which broadcast 2 live procedures, a robotic-assisted radical prostatectomy (RARP) and a robotic-assisted partial nephrectomy (RAPN). A surgical AI platform (Theator, Palo Alto, CA) generated real-time annotations and identified operative safety milestones. This was achieved through trained algorithms, conventional video recognition, and novel Video Transfer Network technology, which captures clips in full context, enabling automatic recognition and surgical mapping in real time. RESULTS Real-time AI annotations for procedure #1, RARP, are found in Table 1. The safety milestone annotations included the apical safety maneuver and deliberate views of structures such as the external iliac vessels and the obturator nerve. Real-time AI annotations for procedure #2, RAPN, are found in Table 1. Safety milestones included deliberate views of structures such as the gonadal vessels and the ureter. AI-annotated surgical events included intraoperative ultrasound, temporary clip application and removal, hemostatic powder application, and notable hemorrhage. CONCLUSIONS For the first time, surgical intelligence successfully showcased real-time AI annotations of 2 separate urologic robotic procedures during a live telecast. These annotations may provide the technological framework for sending automatic notifications to clinical or operational stakeholders. This technology is a first step in real-time intraoperative decision support, leveraging big data to improve the quality of surgical care, potentially improve surgical outcomes, and support training and education.
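The notification framework mentioned in the conclusions can be sketched as a simple watcher over a stream of once-per-second annotations. Everything below is hypothetical; the milestone names and the print-based notification are illustrative assumptions, not part of the commercial platform's API.

```python
# Hypothetical sketch: notify stakeholders when a safety milestone first appears.
from typing import Iterable

SAFETY_MILESTONES = {"apical_safety_maneuver", "view_obturator_nerve"}  # invented labels

def watch_annotations(stream: Iterable[str]) -> None:
    seen = set()
    for second, label in enumerate(stream):
        if label in SAFETY_MILESTONES and label not in seen:
            seen.add(label)
            # stand-in for a real notification to clinical/operational stakeholders
            print(f"[t={second}s] milestone reached: {label}")

watch_annotations(["dissection", "view_obturator_nerve", "dissection"])
```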
Affiliation(s)
- Laura Zuluaga
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY.
- Jordan Miller Rich
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Raghav Gupta
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Adriana Pedraza
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Burak Ucpinar
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Kennedy E Okhawere
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Indu Saini
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Priyanka Dwivedi
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Dhruti Patel
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Osama Zaytoun
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ashutosh Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
- Ketan K Badani
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York City, NY
4
Hashimoto DA, Sambasastry SK, Singh V, Kurada S, Altieri M, Yoshida T, Madani A, Jogan M. A foundation for evaluating the surgical artificial intelligence literature. Eur J Surg Oncol 2024:108014. [PMID: 38360498] [DOI: 10.1016/j.ejso.2024.108014]
Abstract
With increasing growth in applications of artificial intelligence (AI) in surgery, it has become essential for surgeons to gain a foundation of knowledge to critically appraise the scientific literature, commercial claims regarding products, and regulatory and legal frameworks that govern the development and use of AI. This guide offers surgeons a framework with which to evaluate manuscripts that incorporate the use of AI. It provides a glossary of common terms, an overview of prerequisite knowledge to maximize understanding of methodology, and recommendations on how to carefully consider each element of a manuscript to assess the quality of the data on which an algorithm was trained, the appropriateness of the methodological approach, the potential for reproducibility of the experiment, and the applicability to surgical practice, including considerations on generalizability and scalability.
Affiliation(s)
- Daniel A Hashimoto
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA; Global Surgical AI Collaborative, Toronto, ON, USA.
- Sai Koushik Sambasastry
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Vivek Singh
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sruthi Kurada
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Maria Altieri
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Global Surgical AI Collaborative, Toronto, ON, USA
- Takuto Yoshida
- Surgical AI Research Academy, Department of Surgery, University Health Network, Toronto, ON, USA
- Amin Madani
- Global Surgical AI Collaborative, Toronto, ON, USA; Surgical AI Research Academy, Department of Surgery, University Health Network, Toronto, ON, USA
- Matjaz Jogan
- Penn Computer Assisted Surgery and Outcomes Laboratory, Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
5
Abid R, Hussein AA, Guru KA. Artificial Intelligence in Urology: Current Status and Future Perspectives. Urol Clin North Am 2024; 51:117-130. [PMID: 37945097] [DOI: 10.1016/j.ucl.2023.06.005]
Abstract
Surgical fields, especially urology, have shifted increasingly toward the use of artificial intelligence (AI). Advancements in AI have created massive improvements in diagnostics, outcome predictions, and robotic surgery. For robotic surgery to progress from assisting surgeons to eventually reaching autonomous procedures, there must be advancements in machine learning, natural language processing, and computer vision. Moreover, barriers such as data availability, interpretability of autonomous decision-making, Internet connection and security, and ethical concerns must be overcome.
Affiliation(s)
- Rayyan Abid
- Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
- Ahmed A Hussein
- Department of Urology, Roswell Park Comprehensive Cancer Center
- Khurshid A Guru
- Department of Urology, Roswell Park Comprehensive Cancer Center.
6
Schenk M, Neumann J, Adler N, Trommer T, Theopold J, Neumuth T, Hepp P. A comparison between a maximum care university hospital and an outpatient clinic - potential for optimization in arthroscopic workflows? BMC Health Serv Res 2023; 23:1313. [PMID: 38017443] [PMCID: PMC10685488] [DOI: 10.1186/s12913-023-10259-3]
Abstract
BACKGROUND Due to growing economic pressure, there is an increasing interest in the optimization of operational processes within surgical operating rooms (ORs). Surgical departments frequently deal with limited resources, complex processes with unexpected events, and constantly changing conditions. In order to use available resources efficiently, existing workflows and processes have to be analyzed and optimized continuously. Structural and procedural changes without prior data-driven analyses may impair the performance of the OR team and the overall efficiency of the department. The aim of this study is to develop an adaptable software toolset for surgical workflow analysis and perioperative process optimization in arthroscopic surgery. METHODS In this study, the perioperative processes of arthroscopic interventions were recorded and subsequently analyzed. A total of 53 arthroscopic operations were recorded at a maximum care university hospital (UH) and 66 arthroscopic operations at a specialized outpatient clinic (OC). The recording includes regular perioperative processes (e.g. patient positioning, skin incision, application of wound dressing) and disruptive influences on these processes (e.g. telephone calls, missing or defective instruments). For this purpose, a software tool was developed ('s.w.an Suite Arthroscopic toolset'). Based on the data obtained, the processes of the maximum care provider and the specialized outpatient clinic were analyzed in terms of performance measures (e.g. Closure-To-Incision-Time), efficiency (e.g. activity duration, OR resource utilization), and intra-process disturbances, and then compared to one another. RESULTS Despite many similar processes, the results revealed considerable differences in performance indices. The OC required significantly less time than the UH for the surgical preoperative phase (UH: 30:47 min, OC: 26:01 min), the postoperative phase (UH: 15:04 min, OC: 9:56 min), and the changeover time (UH: 32:33 min, OC: 6:02 min). These phases add up to the Closure-To-Incision-Time, which lasted longer at the UH (UH: 80:01 min, OC: 41:12 min). CONCLUSION The perioperative process organization, team collaboration, and the avoidance of disruptive factors had a considerable influence on the progress of the surgeries. Furthermore, differences in terms of staffing and spatial capacities could be identified. Based on the acquired process data (such as the duration of different surgical steps or the number of interfering events) and the comparison of different arthroscopic departments, approaches for perioperative process optimization to decrease the duration of work steps and reduce disruptive influences were identified.
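As a sketch of how such performance measures can be derived from annotated process data, the snippet below computes a Closure-To-Incision-style interval from timestamped events and compares the two sites; the event names and table layout are invented assumptions, not the s.w.an Suite's data model.

```python
# Illustrative sketch: derive a perioperative interval from event timestamps.
import pandas as pd

events = pd.DataFrame({
    "case":    [1, 1, 2, 2],
    "site":    ["UH", "UH", "OC", "OC"],
    "event":   ["wound_closure_end", "next_incision"] * 2,
    "minutes": [0.0, 80.0, 0.0, 41.2],   # invented timestamps per case
})

wide = events.pivot(index=["case", "site"], columns="event", values="minutes")
wide["closure_to_incision"] = wide["next_incision"] - wide["wound_closure_end"]
print(wide.groupby("site")["closure_to_incision"].mean())
```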
Affiliation(s)
- Martin Schenk
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany.
- Department of Orthopaedic, Trauma and Plastic Surgery, Division of Arthroscopic Surgery and Sports Medicine, University of Leipzig Medical Center, Leipzig, Germany.
- Juliane Neumann
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
- Nadine Adler
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
- Department of Orthopaedic, Trauma and Plastic Surgery, Division of Arthroscopic Surgery and Sports Medicine, University of Leipzig Medical Center, Leipzig, Germany
- Jan Theopold
- Department of Orthopaedic, Trauma and Plastic Surgery, Division of Arthroscopic Surgery and Sports Medicine, University of Leipzig Medical Center, Leipzig, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Leipzig University, Semmelweisstr. 14, 04103, Leipzig, Germany
- Pierre Hepp
- Department of Orthopaedic, Trauma and Plastic Surgery, Division of Arthroscopic Surgery and Sports Medicine, University of Leipzig Medical Center, Leipzig, Germany
7
Buyck F, Vandemeulebroucke J, Ceranka J, Van Gestel F, Cornelius JF, Duerinck J, Bruneau M. Computer-vision based analysis of the neurosurgical scene - A systematic review. Brain Spine 2023; 3:102706. [PMID: 38020988] [PMCID: PMC10668095] [DOI: 10.1016/j.bas.2023.102706]
Abstract
Introduction With increasing use of robotic surgical adjuncts, artificial intelligence and augmented reality in neurosurgery, the automated analysis of digital images and videos acquired over various procedures becomes a subject of increased interest. While several computer vision (CV) methods have been developed and implemented for analyzing surgical scenes, few studies have been dedicated to neurosurgery. Research question In this work, we present a systematic literature review focusing on CV methodologies specifically applied to the analysis of neurosurgical procedures based on intra-operative images and videos. Additionally, we provide recommendations for the future development of CV models in neurosurgery. Material and methods We conducted a systematic literature search in multiple databases until January 17, 2023, including Web of Science, PubMed, IEEE Xplore, Embase, and SpringerLink. Results We identified 17 studies employing CV algorithms on neurosurgical videos/images. The most common applications of CV were tool and neuroanatomical structure detection or characterization, and, to a lesser extent, surgical workflow analysis. Convolutional neural networks (CNN) were the most frequently utilized architecture for CV models (65%), demonstrating superior performance in tool detection and segmentation. In particular, Mask R-CNN manifested the most robust performance outcomes across different modalities. Discussion and conclusion Our systematic review demonstrates that reported CV models can effectively detect and differentiate tools, surgical phases, neuroanatomical structures, as well as critical events in complex neurosurgical scenes with accuracies above 95%. Automated tool recognition contributes to objective characterization and assessment of surgical performance, with potential applications in neurosurgical training and intra-operative safety management.
Affiliation(s)
- Félix Buyck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jef Vandemeulebroucke
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- Department of Radiology, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Jakub Ceranka
- Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), 1050, Brussels, Belgium
- imec, 3001, Leuven, Belgium
- Frederick Van Gestel
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Jan Frederick Cornelius
- Department of Neurosurgery, Medical Faculty, Heinrich-Heine-University, 40225, Düsseldorf, Germany
- Johnny Duerinck
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
- Michaël Bruneau
- Department of Neurosurgery, Universitair Ziekenhuis Brussel (UZ Brussel), 1090, Brussels, Belgium
- Vrije Universiteit Brussel (VUB), Research group Center For Neurosciences (C4N-NEUR), 1090, Brussels, Belgium
8
Choksi S, Szot S, Zang C, Yarali K, Cao Y, Ahmad F, Xiang Z, Bitner DP, Kostic Z, Filicori F. Bringing Artificial Intelligence to the operating room: edge computing for real-time surgical phase recognition. Surg Endosc 2023; 37:8778-8784. [PMID: 37580578] [DOI: 10.1007/s00464-023-10322-4]
Abstract
BACKGROUND Automation of surgical phase recognition is a key effort toward the development of Computer Vision (CV) algorithms for workflow optimization and video-based assessment. CV is a form of Artificial Intelligence (AI) that allows interpretation of images through a deep learning (DL)-based algorithm. Improvements in Graphics Processing Unit (GPU) computing devices allow researchers to apply these algorithms for recognition of content in videos in real time. Edge computing, where data is collected, analyzed, and acted upon in close proximity to the collection source, is essential to meet the demands of workflow optimization by providing real-time algorithm application. We implemented a real-time phase recognition workflow and demonstrated its performance on 10 Robotic Inguinal Hernia Repairs (RIHR) to obtain phase predictions during the procedure. METHODS Our phase recognition algorithm was developed with 211 videos of RIHR originally annotated into 14 surgical phases. Using these videos, a DL model with a ResNet-50 backbone was trained and validated to automatically recognize surgical phases. The model was deployed to a GPU, the Nvidia® Jetson Xavier™ NX edge computing device. RESULTS This model was tested on 10 inguinal hernia repairs from four surgeons in real time. The model was improved using post-recording processing methods such as phase merging into seven final phases (peritoneal scoring, mesh placement, preperitoneal dissection, reduction of hernia, out of body, peritoneal closure, and transitionary idle) and averaging of frames. Predictions were made once per second with a processing latency of approximately 250 ms. The accuracy of the real-time predictions ranged from 59.8 to 78.2%, with an average accuracy of 68.7%. CONCLUSION Real-time phase prediction of RIHR using a CV deep learning model was successfully implemented. This real-time CV phase segmentation system can be useful for monitoring surgical progress and can be integrated into software to provide hospital workflow optimization.
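A minimal sketch of the real-time loop described above, assuming a ResNet-50 frame classifier, one prediction per second, and smoothing by averaging recent outputs; the seven phase names follow the abstract, while the input size and buffer length are assumptions.

```python
# Illustrative sketch: per-second phase prediction with rolling-average smoothing.
import collections
import torch, torchvision

PHASES = ["peritoneal_scoring", "mesh_placement", "preperitoneal_dissection",
          "reduction_of_hernia", "out_of_body", "peritoneal_closure", "idle"]

model = torchvision.models.resnet50(num_classes=len(PHASES)).eval()
recent = collections.deque(maxlen=5)                 # rolling buffer of logits

def predict(frame: torch.Tensor) -> str:
    with torch.no_grad():
        recent.append(model(frame.unsqueeze(0)))     # (1, n_phases) logits
    mean_logits = torch.stack(list(recent)).mean(dim=0)
    return PHASES[int(mean_logits.argmax())]

frame = torch.rand(3, 224, 224)                      # stand-in for a captured frame
print(predict(frame))
```

On an edge device, `predict` would be called once per second on the decoded video feed, which matches the cadence and smoothing strategy reported above.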
Affiliation(s)
- Sarah Choksi
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA.
- Skyler Szot
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Chengbo Zang
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Kaan Yarali
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Yuqing Cao
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Feroz Ahmad
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Zixuan Xiang
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA
- Zoran Kostic
- Department of Electrical Engineering, Columbia University, 500 W 120 Street, Mudd 1310, New York, NY, 10027, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of Surgery, Lenox Hill Hospital, Northwell Health, 186 E 76th Street, 1st Fl, New York, NY, 10021, USA
- Zucker School of Medicine at Hofstra/Northwell Health, 5000 Hofstra Blvd, Hempstead, NY, 11549, USA
9
Park JJ, Doiphode N, Zhang X, Pan L, Blue R, Shi J, Buch VP. Developing the surgeon-machine interface: using a novel instance-segmentation framework for intraoperative landmark labelling. Front Surg 2023; 10:1259756. [PMID: 37936949] [PMCID: PMC10626480] [DOI: 10.3389/fsurg.2023.1259756]
Abstract
Introduction The utilisation of artificial intelligence (AI) augments intraoperative safety, surgical training, and patient outcomes. We introduce the term Surgeon-Machine Interface (SMI) to describe this innovative intersection between surgeons and machine inference. A custom deep computer vision (CV) architecture within a sparse labelling paradigm was developed, specifically tailored to conceptualise the SMI. This platform demonstrates the ability to perform instance segmentation on anatomical landmarks and tools from a single open spinal dural arteriovenous fistula (dAVF) surgery video dataset. Methods Our custom deep convolutional neural network was based on the SOLOv2 architecture for precise, instance-level segmentation of surgical video data. The test video consisted of 8520 frames, with sparse labelling of only 133 frames annotated for training. Accuracy and inference time, assessed using F1-score and mean Average Precision (mAP), were compared against current state-of-the-art architectures on a separate test set of 85 additionally annotated frames. Results Our SMI demonstrated superior accuracy and computing speed compared to these frameworks. The F1-score and mAP achieved by our platform were 17% and 15.2% respectively, surpassing MaskRCNN (15.2%, 13.9%), YOLOv3 (5.4%, 11.9%), and SOLOv2 (3.1%, 10.4%). Considering detections that exceeded the Intersection over Union threshold of 50%, our platform achieved an F1-score of 44.2% and mAP of 46.3%, outperforming MaskRCNN (41.3%, 43.5%), YOLOv3 (15%, 34.1%), and SOLOv2 (9%, 32.3%). Our platform demonstrated the fastest inference time (88 ms), compared to MaskRCNN (90 ms), SOLOv2 (100 ms), and YOLOv3 (106 ms). Finally, the minimal training set demonstrated good generalisation performance - our architecture successfully identified objects in a frame that were not included in the training or validation frames, indicating its ability to handle out-of-domain scenarios. Discussion We present our development of an innovative intraoperative SMI to demonstrate the future promise of advanced CV in the surgical domain. Through successful implementation in a microscopic dAVF surgery, our framework demonstrates superior performance over current state-of-the-art segmentation architectures in intraoperative landmark guidance with high sample efficiency, representing the most advanced AI-enabled surgical inference platform to date. Our future goals include transfer learning paradigms for scaling to additional surgery types, addressing clinical and technical limitations for performing real-time decoding, and ultimate enablement of a real-time neurosurgical guidance platform.
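The F1-scores at an Intersection over Union threshold of 50% can be illustrated with a small matching routine. Greedy matching is an assumption here, since the abstract does not specify the matching strategy used.

```python
# Illustrative sketch: F1 for instance segmentation at an IoU threshold of 0.5.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def f1_at_iou(preds, gts, thr=0.5):
    unmatched = list(range(len(gts)))
    tp = 0
    for p in preds:                          # greedily match each prediction
        scores = [(iou(p, gts[j]), j) for j in unmatched]
        if scores:
            best, j = max(scores)
            if best >= thr:
                tp += 1
                unmatched.remove(j)
    fp, fn = len(preds) - tp, len(unmatched)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

m = np.zeros((32, 32), bool); m[4:16, 4:16] = True
print(f1_at_iou([m], [m]))                   # 1.0 for a perfect match
```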
Affiliation(s)
- Jay J. Park
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Centre for Global Health, Usher Institute, Edinburgh Medical School, The University of Edinburgh, Edinburgh, United Kingdom
- Nehal Doiphode
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Xiao Zhang
- Department of Computer Science, University of Chicago, Chicago, IL, United States
- Lishuo Pan
- Department of Computer Science, Brown University, Providence, RI, United States
- Rachel Blue
- Department of Neurosurgery, Perelman School of Medicine at The University of Pennsylvania, Philadelphia, PA, United States
- Jianbo Shi
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Vivek P. Buch
- Department of Neurosurgery, The Surgical Innovation and Machine Interfacing (SIMI) Lab, Stanford University School of Medicine, Stanford, CA, United States
10
Alkhamaiseh KN, Grantner JL, Shebrain S, Abdel-Qader I. Towards reliable hepatocytic anatomy segmentation in laparoscopic cholecystectomy using U-Net with Auto-Encoder. Surg Endosc 2023; 37:7358-7369. [PMID: 37491657] [DOI: 10.1007/s00464-023-10306-4]
Abstract
BACKGROUND Most bile duct injuries (BDI) during laparoscopic cholecystectomy (LC) occur due to visual misperception leading to the misinterpretation of anatomy. Deep learning (DL) models for surgical video analysis could, therefore, support visual tasks such as identifying the critical view of safety (CVS). This study aims to develop a prediction model of CVS during LC. This aim is accomplished using a deep neural network integrated with a segmentation model capable of highlighting hepatocystic anatomy. METHODS Still images from LC videos were annotated with four hepatocystic landmarks for anatomy segmentation. A deep autoencoder neural network with U-Net to investigate accurate medical image segmentation was trained and tested using fivefold cross-validation. Accuracy, loss, Intersection over Union (IoU), precision, recall, and Hausdorff distance were computed to evaluate model performance against the annotated ground truth. RESULTS A total of 1550 images from 200 LC videos were annotated. Mean IoU for segmentation was 74.65%. The proposed approach performed well for automatic identification of hepatocystic landmarks, with 92% accuracy and 93.9% precision, and can segment challenging cases. CONCLUSION DL can potentially provide an intraoperative model for surgical video analysis and can be trained to guide surgeons toward reliable hepatocystic anatomy segmentation and produce selective video documentation of this safety step of LC.
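Of the metrics listed, the Hausdorff distance is the least standard; below is a sketch using SciPy, assuming binary segmentation masks and a symmetric definition (the maximum of the two directed distances).

```python
# Illustrative sketch: symmetric Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    pts_a = np.argwhere(mask_a)              # (row, col) coordinates of mask pixels
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

a = np.zeros((64, 64), bool); a[10:30, 10:30] = True   # invented masks
b = np.zeros((64, 64), bool); b[12:32, 12:32] = True
print(f"Hausdorff distance: {hausdorff(a, b):.1f} px")
```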
Affiliation(s)
- Koloud N Alkhamaiseh
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA.
- Janos L Grantner
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA
- Saad Shebrain
- Western Michigan University Homer Stryker MD School of Medicine, Kalamazoo, MI, USA
- Ikhlas Abdel-Qader
- Department of Electrical and Computer Engineering, Western Michigan University, Kalamazoo, MI, USA
11
Kolbinger FR, Bodenstedt S, Carstens M, Leger S, Krell S, Rinner FM, Nielen TP, Kirchberg J, Fritzmann J, Weitz J, Distler M, Speidel S. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: An exploratory feasibility study. Eur J Surg Oncol 2023:106996. [PMID: 37591704] [DOI: 10.1016/j.ejso.2023.106996]
Abstract
INTRODUCTION Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and of autonomic nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning. MATERIALS AND METHODS A total of 57 RARR were recorded and subsets of these were annotated with respect to surgical phases and exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity. RESULTS The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (e.g. blood, smoke), and technical challenges (e.g. lack of depth perception) considerably impacted segmentation performance. CONCLUSION Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery.
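A minimal sketch of the segmentation setup named above: a torchvision DeepLabv3 model with its output head sized to a chosen set of target structures. The two-class configuration and input size are assumptions, not the authors' settings.

```python
# Illustrative sketch: DeepLabv3 sized for one target structure vs. background.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # background vs. one target structure; trained per structure
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

x = torch.rand(1, 3, 256, 256)       # stand-in for one RGB laparoscopy frame
out = model(x)["out"]                # (1, NUM_CLASSES, 256, 256) pixel logits
print(out.shape)
```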
Affiliation(s)
- Fiona R Kolbinger
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany.
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Matthias Carstens
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Stefan Leger
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Stefanie Krell
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Franziska M Rinner
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Thomas P Nielen
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany
- Johanna Kirchberg
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Johannes Fritzmann
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Jürgen Weitz
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany; Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Marius Distler
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307 Dresden, Germany; National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
- Stefanie Speidel
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
12
Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023. [PMID: 37389741] [PMCID: PMC10338401] [DOI: 10.1007/s00464-023-10214-7]
Abstract
BACKGROUND The increasing use of robot-assisted surgery (RAS) has led to the need for new methods of assessing whether new surgeons are qualified to perform RAS without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available for clinicians. This may be among the reasons why the use of AI in the clinical setting remains limited. METHODS We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons and prepared the data for use in AI following a structured guide with the steps 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS Fifteen participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method, we captured 188 videos (94 from the surgical robot and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.
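The 'Capturing image data from the surgical robot' step can be sketched with OpenCV, assuming frames are sampled at 1 fps from a recorded case; the file names and sampling rate are invented.

```python
# Illustrative sketch: sample frames from a recorded surgical video at ~1 fps.
import cv2

def extract_frames(video_path: str, out_pattern: str, fps: int = 1) -> int:
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata missing
    step = max(int(round(native_fps / fps)), 1)
    kept = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                          # keep every step-th frame
            cv2.imwrite(out_pattern.format(kept), frame)
            kept += 1
        idx += 1
    cap.release()
    return kept

n = extract_frames("case01_robot.mp4", "case01_frame{:05d}.png")  # invented paths
print(n, "frames written")
```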
Affiliation(s)
- Nasseh Hashemi
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark.
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark.
- ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark.
- Department of Urology, Aalborg University Hospital, Aalborg, Denmark.
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
- Sten Rasmussen
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Mikkel Lønborg Friis
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
13
Zang C, Turkcan MK, Narasimhan S, Cao Y, Yarali K, Xiang Z, Szot S, Ahmad F, Choksi S, Bitner DP, Filicori F, Kostic Z. Surgical Phase Recognition in Inguinal Hernia Repair-AI-Based Confirmatory Baseline and Exploration of Competitive Models. Bioengineering (Basel) 2023; 10:654. [PMID: 37370585] [DOI: 10.3390/bioengineering10060654]
Abstract
Video-recorded robotic-assisted surgeries allow the use of automated computer vision and artificial intelligence/deep learning methods for quality assessment and workflow analysis in surgical phase recognition. We considered a dataset of 209 videos of robotic-assisted laparoscopic inguinal hernia repair (RALIHR) collected from 8 surgeons, defined rigorous ground-truth annotation rules, then pre-processed and annotated the videos. We deployed seven deep learning models to establish the baseline accuracy for surgical phase recognition and explored four advanced architectures. For rapid execution of the studies, we initially engaged three dozen MS-level engineering students in a competitive classroom setting, followed by focused research. We unified the data processing pipeline in a confirmatory study, and explored a number of scenarios which differ in how the DL networks were trained and evaluated. For the scenario with 21 validation videos of all surgeons, the Video Swin Transformer model achieved ~0.85 validation accuracy, and the Perceiver IO model achieved ~0.84. Our studies affirm the necessity of close collaborative research between medical experts and engineers for developing automated surgical phase recognition models deployable in clinical settings.
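One design point worth illustrating is the validation split with videos from all surgeons. Below is a sketch using scikit-learn's stratified split with 21 held-out videos, mirroring the scenario above; the video and surgeon identifiers are invented.

```python
# Illustrative sketch: hold out validation videos stratified by surgeon.
from sklearn.model_selection import train_test_split

videos   = [f"video_{i:03d}" for i in range(209)]   # 209 RALIHR cases, as reported
surgeons = [i % 8 for i in range(209)]              # 8 surgeons; assignment invented

train, val = train_test_split(videos, test_size=21, stratify=surgeons,
                              random_state=0)
print(len(train), "training videos,", len(val), "validation videos")
```

Stratifying on the surgeon identifier keeps every surgeon represented in the 21 validation videos, which is the condition of the reported scenario.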
Affiliation(s)
- Chengbo Zang
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mehmet Kerem Turkcan
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Sanjeev Narasimhan
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Yuqing Cao
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Kaan Yarali
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Zixuan Xiang
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Skyler Szot
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Feroz Ahmad
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Sarah Choksi
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Lenox Hill Hospital, New York, NY 10021, USA
- Zucker School of Medicine at Hofstra/Northwell Health, Hempstead, NY 11549, USA
- Zoran Kostic
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
14
Eckhoff JA, Ban Y, Rosman G, Müller DT, Hashimoto DA, Witkowski E, Babic B, Rus D, Bruns C, Fuchs HF, Meireles O. TEsoNet: knowledge transfer in surgical phase recognition from laparoscopic sleeve gastrectomy to the laparoscopic part of Ivor-Lewis esophagectomy. Surg Endosc 2023; 37:4040-4053. [PMID: 36932188] [PMCID: PMC10156818] [DOI: 10.1007/s00464-023-09971-2]
Abstract
BACKGROUND Surgical phase recognition using computer vision presents an essential requirement for artificial intelligence-assisted analysis of surgical workflow. Its performance is heavily dependent on large amounts of annotated video data, which remain a limited resource, especially concerning highly specialized procedures. Knowledge transfer from common to more complex procedures can promote data efficiency. Phase recognition models trained on large, readily available datasets may be extrapolated and transferred to smaller datasets of different procedures to improve generalizability. The conditions under which transfer learning is appropriate and feasible remain to be established. METHODS We defined ten operative phases for the laparoscopic part of Ivor-Lewis Esophagectomy through expert consensus. A dataset of 40 videos was annotated accordingly. The knowledge transfer capability of an established model architecture for phase recognition (CNN + LSTM) was adapted to generate a "Transferal Esophagectomy Network" (TEsoNet) for co-training and transfer learning from laparoscopic Sleeve Gastrectomy to the laparoscopic part of Ivor-Lewis Esophagectomy, exploring different training set compositions and training weights. RESULTS The explored model architecture is capable of accurate phase detection in complex procedures, such as Esophagectomy, even with low quantities of training data. Knowledge transfer between two upper gastrointestinal procedures is feasible and achieves reasonable accuracy with respect to operative phases with high procedural overlap. CONCLUSION Robust phase recognition models can achieve reasonable yet phase-specific accuracy through transfer learning and co-training between two related procedures, even when exposed to small amounts of training data of the target procedure. Further exploration is required to determine appropriate data amounts, key characteristics of the training procedure and temporal annotation methods required for successful transferal phase recognition. Transfer learning across different procedures addressing small datasets may increase data efficiency. Finally, to enable the surgical application of AI for intraoperative risk mitigation, coverage of rare, specialized procedures needs to be explored.
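A minimal sketch, under stated assumptions, of the CNN + LSTM pattern named above and of transferring the trained visual backbone from a source to a target procedure; the layer sizes are invented, and only the count of ten target phases follows the abstract.

```python
# Illustrative sketch: CNN + LSTM phase recognition with backbone transfer.
import torch
import torch.nn as nn
import torchvision

class PhaseNet(nn.Module):
    def __init__(self, n_phases: int):
        super().__init__()
        self.cnn = torchvision.models.resnet50(num_classes=512)  # per-frame features
        self.lstm = nn.LSTM(512, 128, batch_first=True)          # temporal model
        self.head = nn.Linear(128, n_phases)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                                    # per-frame logits

source = PhaseNet(n_phases=7)    # source-procedure model; phase count assumed
target = PhaseNet(n_phases=10)   # ten esophagectomy phases, per the abstract
target.cnn.load_state_dict(source.cnn.state_dict())  # transfer the backbone
print(target(torch.rand(1, 4, 3, 224, 224)).shape)   # (1, 4, 10)
```

In an actual co-training setup, batches from both procedures would be mixed with the training-set compositions and weights the authors explore; only the backbone copy is shown here.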
Affiliation(s)
- J A Eckhoff: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA; Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- Y Ban: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- G Rosman: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- D T Müller: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- D A Hashimoto: Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH, 44106, USA; Department of Surgery, Case Western Reserve School of Medicine, Cleveland, OH, 44106, USA
- E Witkowski: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- B Babic: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- D Rus: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- C Bruns: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- H F Fuchs: Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany
- O Meireles: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
15
Lavanchy JL, Gonzalez C, Kassem H, Nett PC, Mutter D, Padoy N. Proposal and multicentric validation of a laparoscopic Roux-en-Y gastric bypass surgery ontology. Surg Endosc 2023; 37:2070-2077. [PMID: 36289088] [PMCID: PMC10017621] [DOI: 10.1007/s00464-022-09745-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/30/2022] [Accepted: 10/14/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND Phase and step annotation in surgical videos is a prerequisite for surgical scene understanding and for downstream tasks like intraoperative feedback or assistance. However, most ontologies are applied to small monocentric datasets and lack external validation. To overcome these limitations, an ontology for phases and steps of laparoscopic Roux-en-Y gastric bypass (LRYGB) is proposed and validated on a multicentric dataset in terms of inter- and intra-rater reliability (inter-/intra-RR). METHODS The proposed LRYGB ontology consists of 12 phase and 46 step definitions that are hierarchically structured. Two board-certified surgeons (raters) with > 10 years of clinical experience applied the proposed ontology to two datasets: (1) StraBypass40, consisting of 40 LRYGB videos from Nouvel Hôpital Civil, Strasbourg, France, and (2) BernBypass70, consisting of 70 LRYGB videos from Inselspital, Bern University Hospital, Bern, Switzerland. To assess inter-RR, the two raters' annotations of ten randomly chosen videos each from StraBypass40 and BernBypass70 were compared. To assess intra-RR, ten randomly chosen videos were annotated twice by the same rater and the annotations were compared. Inter-RR was calculated using Cohen's kappa. Additionally, accuracy, precision, recall, F1-score, and application-dependent metrics were applied for inter- and intra-RR. RESULTS The mean ± SD video duration was 108 ± 33 min in StraBypass40 and 75 ± 21 min in BernBypass70. The proposed ontology shows an inter-RR of 96.8 ± 2.7% for phases and 85.4 ± 6.0% for steps on StraBypass40, and 94.9 ± 5.8% for phases and 76.1 ± 13.9% for steps on BernBypass70. The overall Cohen's kappa of inter-RR was 95.9 ± 4.3% for phases and 80.8 ± 10.0% for steps. Intra-RR showed an accuracy of 98.4 ± 1.1% for phases and 88.1 ± 8.1% for steps. CONCLUSION The proposed ontology shows excellent inter- and intra-RR and should therefore be implemented routinely in phase and step annotation of LRYGB.
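Frame- or second-wise inter-rater reliability of the kind reported here is typically computed over the two raters' aligned label sequences; Cohen's kappa corrects the raw agreement for chance. A minimal sketch with scikit-learn, using invented labels:

```python
# Hedged sketch: inter-rater agreement on aligned per-second phase labels.
# The label sequences are invented; in practice each element is one timestamp.
from sklearn.metrics import accuracy_score, cohen_kappa_score

rater_a = ["preparation", "dissection", "dissection", "anastomosis", "closure"]
rater_b = ["preparation", "dissection", "anastomosis", "anastomosis", "closure"]

kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
raw = accuracy_score(rater_a, rater_b)       # raw percent agreement
print(f"Cohen's kappa: {kappa:.2f}, raw agreement: {raw:.0%}")
```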
Affiliation(s)
- Joël L Lavanchy: IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France; Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Cristians Gonzalez: IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France; University Hospital of Strasbourg, Strasbourg, France
- Hasan Kassem: ICube, CNRS, University of Strasbourg, Strasbourg, France
- Philipp C Nett: Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Didier Mutter: IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France; University Hospital of Strasbourg, Strasbourg, France
- Nicolas Padoy: IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France; ICube, CNRS, University of Strasbourg, Strasbourg, France
16
Fer D, Zhang B, Abukhalil R, Goel V, Goel B, Barker J, Kalesan B, Barragan I, Gaddis ML, Kilroy PG. An artificial intelligence model that automatically labels roux-en-Y gastric bypasses, a comparison to trained surgeon annotators. Surg Endosc 2023:10.1007/s00464-023-09870-6. [PMID: 36658282] [DOI: 10.1007/s00464-023-09870-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/21/2022] [Accepted: 01/04/2023] [Indexed: 01/21/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) can automate certain tasks to improve data collection. Models have been created to annotate the steps of Roux-en-Y gastric bypass (RYGB). However, model performance has not been compared with the performance of individual surgeon annotators. We developed a model that automatically labels RYGB steps and compared its performance to surgeons. METHODS AND PROCEDURES 545 videos (17 surgeons) of laparoscopic RYGB procedures were collected. An annotation guide (12 steps, 52 tasks) was developed. Steps were annotated by 11 surgeons. Each video was annotated by two surgeons, and a third reconciled the differences. A convolutional AI model was trained to identify steps and compared with manual annotation. For modeling, we used 390 videos for training, 95 for validation, and 60 for testing. The comparison between the AI model and manual annotation was performed using ANOVA (analysis of variance) in the subset of 60 testing videos. We assessed the performance of the model at each step, with poor performance defined as an F1-score < 80%. RESULTS The convolutional model identified the 12 steps of the RYGB architecture. Model performance varied by step (F1-score > 90% for 7 steps and > 80% for a further 2). The reconciled manual annotation data (F1-score > 80% for > 5 steps) performed better than the trainees' (F1-score > 80% for 2-5 steps for 4 annotators and for < 2 steps for the other 4). In the testing subset, certain steps had low performance, indicating potential ambiguities in surgical landmarks. Additionally, some videos were easier to annotate than others, suggesting variability. After controlling for this variability, the AI algorithm was comparable to manual annotation (p < 0.0001). CONCLUSION AI can be used to identify surgical landmarks in RYGB comparably to the manual process, and recognized some landmarks more accurately than surgeons. This technology has the potential to improve surgical training by assessing the learning curves of surgeons at scale.
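Per-step model performance of this kind is usually measured as a per-class F1-score of the model's frame labels against the reconciled reference annotation. A minimal sketch, with step names and labels invented for illustration:

```python
# Hedged sketch: per-step F1-scores of predicted frame labels against a
# reconciled reference annotation (step names and labels are invented).
from sklearn.metrics import f1_score

reference = ["pouch", "pouch", "gj_anastomosis", "jj_anastomosis", "closure"]
predicted = ["pouch", "gj_anastomosis", "gj_anastomosis", "jj_anastomosis", "closure"]

steps = sorted(set(reference))
for step, f1 in zip(steps, f1_score(reference, predicted, labels=steps, average=None)):
    flag = "  <- poor (F1 < 80%)" if f1 < 0.80 else ""
    print(f"{step}: {f1:.2f}{flag}")
```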
Affiliation(s)
- Danyal Fer: University of California, San Francisco-East Bay, General Surgery, Oakland, CA, USA; Johnson & Johnson MedTech, New Brunswick, NJ, USA
- Bokai Zhang: Johnson & Johnson MedTech, New Brunswick, NJ, USA
- Rami Abukhalil: Johnson & Johnson MedTech, New Brunswick, NJ, USA; 5490 Great America Parkway, Santa Clara, CA, 95054, USA
- Varun Goel: University of California, San Francisco-East Bay, General Surgery, Oakland, CA, USA; Johnson & Johnson MedTech, New Brunswick, NJ, USA
- Bharti Goel: Johnson & Johnson MedTech, New Brunswick, NJ, USA
17
Baldi PF, Abdelkarim S, Liu J, To JK, Ibarra MD, Browne AW. Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning. Transl Vis Sci Technol 2023; 12:20. [PMID: 36648414] [PMCID: PMC9851279] [DOI: 10.1167/tvst.12.1.20] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Indexed: 01/18/2023]
Abstract
Purpose To evaluate the potential for artificial intelligence-based video analysis to determine surgical instrument characteristics when moving in the three-dimensional vitreous space. Methods We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performances on a subset of images. Results The accuracy of the classification model on the training set is 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy of the classification model on the validation dataset is 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model performs at 67 frames per second, with precision for most instruments higher than 75%, achieving a mean average precision of 79.3%. Conclusions We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, instrument insertional laterality, and instrument type. Model performance is nearly instantaneous and justifies further investigation into application to real-world surgical videos. Translational Relevance Deep learning offers the potential for software-based safety feedback mechanisms during surgery or the ability to extract metrics of surgical technique that can direct research to optimize surgical outcomes.
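A classifier that predicts several instrument attributes at once (x-y region, depth, tool type, insertional laterality) maps naturally onto a shared backbone with one classification head per attribute. The sketch below is a hypothetical PyTorch illustration of that design, not the authors' model; all class counts are assumptions.

```python
# Hypothetical multi-head classifier for instrument attributes (class counts
# are assumptions, not the paper's configuration). Assumes PyTorch/torchvision.
import torch
import torch.nn as nn
import torchvision.models as models

class InstrumentAttributeNet(nn.Module):
    def __init__(self, n_regions=9, n_depths=3, n_types=5, n_sides=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # One linear head per attribute, all sharing the same frame features.
        self.heads = nn.ModuleDict({
            "region": nn.Linear(feat_dim, n_regions),
            "depth": nn.Linear(feat_dim, n_depths),
            "tool_type": nn.Linear(feat_dim, n_types),
            "laterality": nn.Linear(feat_dim, n_sides),
        })

    def forward(self, frames: torch.Tensor) -> dict:
        feats = self.backbone(frames)
        return {name: head(feats) for name, head in self.heads.items()}

outputs = InstrumentAttributeNet()(torch.randn(4, 3, 224, 224))
print({name: tuple(t.shape) for name, t in outputs.items()})
```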
Affiliation(s)
- Pierre F. Baldi: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA; Department of Biomedical Engineering, University of California, Irvine, CA, USA; Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA
- Sherif Abdelkarim: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Junze Liu: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Josiah K. To: Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA
- Andrew W. Browne: Department of Biomedical Engineering, University of California, Irvine, CA, USA; Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA; Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, CA, USA
18
De Backer P, Eckhoff JA, Simoens J, Müller DT, Allaeys C, Creemers H, Hallemeesch A, Mestdagh K, Van Praet C, Debbaut C, Decaestecker K, Bruns CJ, Meireles O, Mottrie A, Fuchs HF. Multicentric exploration of tool annotation in robotic surgery: lessons learned when starting a surgical artificial intelligence project. Surg Endosc 2022; 36:8533-8548. [PMID: 35941310] [DOI: 10.1007/s00464-022-09487-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/10/2022] [Accepted: 07/16/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) holds tremendous potential to reduce surgical risks and improve surgical assessment. Machine learning, a subfield of AI, can be used to analyze surgical video and imaging data. Manual annotations provide the ground truth for the desired target features. Yet, methodological explorations of annotation are limited to date. Here, we provide an exploratory analysis of the requirements and methods of instrument annotation in a multi-institutional team from two specialized AI centers and compile our lessons learned. METHODS We developed a bottom-up approach for team annotation of robotic instruments in robot-assisted partial nephrectomy (RAPN), which was subsequently validated in robot-assisted minimally invasive esophagectomy (RAMIE). Furthermore, instrument annotation methods were evaluated for their use in machine learning algorithms. Overall, we evaluated the efficiency and transferability of the proposed team approach and quantified performance metrics (e.g., time required per frame for each annotation modality) between RAPN and RAMIE. RESULTS We found a 0.05 Hz image sampling frequency to be adequate for instrument annotation. The bottom-up approach to annotation training and management resulted in accurate annotations and demonstrated efficiency in annotating large datasets. The proposed annotation methodology was transferable between RAPN and RAMIE. The average annotation time for RAPN pixel annotation ranged from 4.49 to 12.6 min per image; vector annotation took 2.92 min per image. Similar annotation times were found for RAMIE. Lastly, we elaborate on common pitfalls encountered throughout the annotation process. CONCLUSIONS We propose a successful bottom-up approach to annotator team composition, applicable to any surgical annotation project. Our results set the foundation for starting AI projects on instrument detection, segmentation, and pose estimation. Given the immense annotation burden resulting from spatial instrument annotation, further analysis of sampling frequency and annotation detail needs to be conducted.
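A 0.05 Hz sampling frequency corresponds to extracting one frame for annotation every 20 s of video. A minimal sketch of that subsampling step, assuming the opencv-python package and a placeholder file name:

```python
# Hedged sketch: sample frames at 0.05 Hz (one frame per 20 s) for annotation.
# "case01.mp4" is a placeholder; requires the opencv-python package.
import cv2

def sample_frames(video_path: str, rate_hz: float = 0.05):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if metadata is missing
    step = int(round(fps / rate_hz))          # e.g. 25 fps / 0.05 Hz = every 500th frame
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index, frame
        index += 1
    cap.release()

for index, frame in sample_frames("case01.mp4"):
    cv2.imwrite(f"frame_{index:07d}.png", frame)
```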
Affiliation(s)
- Pieter De Backer: ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium; Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium; IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium; Department of Urology, Ghent University Hospital, Ghent, Belgium
- Jennifer A Eckhoff: Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Cologne, Germany
- Jente Simoens: ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium
- Dolores T Müller: Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Cologne, Germany
- Charlotte Allaeys: Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Heleen Creemers: Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Amélie Hallemeesch: Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Kenzo Mestdagh: Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Charlotte Debbaut: IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium
- Christiane J Bruns: Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Cologne, Germany
- Ozanan Meireles: Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, USA
- Alexandre Mottrie: ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium; Department of Urology, OLV Hospital Aalst-Asse-Ninove, Aalst, Belgium
- Hans F Fuchs: Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Cologne, Germany
19
Moglia A, Georgiou K, Morelli L, Toutouzas K, Satava RM, Cuschieri A. Breaking down the silos of artificial intelligence in surgery: glossary of terms. Surg Endosc 2022; 36:7986-7997. [PMID: 35729406] [PMCID: PMC9613746] [DOI: 10.1007/s00464-022-09371-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 03/31/2022] [Accepted: 05/28/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND The literature on artificial intelligence (AI) in surgery has advanced rapidly during the past few years. However, the published studies on AI are mostly reported by computer scientists using their own jargon, which is unfamiliar to surgeons. METHODS A literature search was conducted using PubMed following the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. The primary outcome of this review is to provide a glossary with definitions of the AI terms commonly used in surgery, to improve their understanding by surgeons. RESULTS One hundred ninety-five studies were included in this review, and 38 AI terms related to surgery were retrieved. Convolutional neural network was the term most frequently identified by the search, accounting for 74 studies on AI in surgery, followed by classification task (n = 62), artificial neural network (n = 53), and regression (n = 49). The next most frequent expressions were supervised learning (reported in 24 articles), support vector machine (SVM) in 21, and logistic regression in 16. The remaining terms were seldom mentioned. CONCLUSIONS The proposed glossary can be used by several stakeholders: first and foremost, by residents and attending consultant surgeons, both of whom need to understand the fundamentals of AI when reading such articles; secondly, by junior researchers at the start of their career in surgical data science; and thirdly, by experts working in the regulatory sections of companies involved in AI-based Software as a Medical Device (SaMD), preparing documents for submission to the Food and Drug Administration (FDA) or other agencies for approval.
Affiliation(s)
- Andrea Moglia: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Konstantinos Georgiou: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Luca Morelli: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Department of General Surgery, University of Pisa, Pisa, Italy
- Konstantinos Toutouzas: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Richard M Satava: Department of Surgery, University of Washington Medical Center, Seattle, WA, USA
- Alfred Cuschieri: Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, UK
20
Cheikh Youssef S, Hachach-Haram N, Aydin A, Shah TT, Sapre N, Nair R, Rai S, Dasgupta P. Video labelling robot-assisted radical prostatectomy and the role of artificial intelligence (AI): training a novice. J Robot Surg 2022; 17:695-701. [PMID: 36309954] [PMCID: PMC9618152] [DOI: 10.1007/s11701-022-01465-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/29/2022] [Accepted: 10/10/2022] [Indexed: 10/31/2022]
Abstract
Video labelling is the assigning of meaningful information to raw videos. With the evolution of artificial intelligence and its intended incorporation into the operating room, video datasets can be invaluable tools for education and for the training of intelligent surgical workflow systems through computer vision. However, manual labelling of video datasets can prove costly and time-consuming for already busy practising surgeons. Twenty-five robot-assisted radical prostatectomy (RARP) procedures were recorded on Proximie, an augmented reality platform, anonymised, and made accessible to a novice, who was trained to develop the knowledge and skills needed to accurately segment a full-length RARP procedure on a video labelling platform. A labelled video was subsequently selected at random for assessment of accuracy by four practising urologists. Of the 25 videos allocated, 17 were deemed suitable for labelling and 8 were excluded on the basis of procedure length and video quality. The labelled video selected for assessment was graded for accuracy of temporal labelling, with an average score of 93.1% and a range of 85.6-100%. The self-training of a novice to segment a surgical video accurately, to the standard of a practising urologist, is feasible and practical for the RARP procedure. The assigning of temporal labels on a video labelling platform was also studied and proved feasible throughout the study period.
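Temporal labels of this kind can be scored by expanding each (start, end, label) segment into a per-second label track and measuring overlap against a reference segmentation. A minimal sketch, with all interval values invented:

```python
# Hedged sketch: per-second agreement between two temporal segmentations.
# Segments are (start_s, end_s, label) tuples; all values are invented.
def to_track(segments, duration_s):
    track = ["unlabelled"] * duration_s
    for start, end, label in segments:
        for t in range(start, min(end, duration_s)):
            track[t] = label
    return track

novice = [(0, 300, "docking"), (300, 900, "dissection"), (900, 1200, "anastomosis")]
expert = [(0, 280, "docking"), (280, 920, "dissection"), (920, 1200, "anastomosis")]

duration = 1200
a, b = to_track(novice, duration), to_track(expert, duration)
agreement = sum(x == y for x, y in zip(a, b)) / duration
print(f"temporal agreement: {agreement:.1%}")  # 96.7% for these invented values
```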
21
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Boškoski I, Padoy N, Hashimoto DA. Computer vision in surgery: from potential to clinical value. NPJ Digit Med 2022; 5:163. [PMID: 36307544] [PMCID: PMC9616906] [DOI: 10.1038/s41746-022-00707-5] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Received: 07/15/2022] [Accepted: 10/10/2022] [Indexed: 11/09/2022]
Abstract
Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
Affiliation(s)
- Pietro Mascagni: Gemelli Hospital, Catholic University of the Sacred Heart, Rome, Italy; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Deepak Alapatt: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Luca Sestini: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Maria S Altieri: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Amin Madani: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University Health Network, Toronto, ON, Canada
- Yusuke Watanabe: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Hokkaido, Hokkaido, Japan
- Adnan Alseidi: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of California San Francisco, San Francisco, CA, USA
- Jay A Redan: Department of Surgery, AdventHealth-Celebration Health, Celebration, FL, USA
- Sergio Alfieri: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Costamagna: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ivo Boškoski: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Nicolas Padoy: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Daniel A Hashimoto: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
22
Wu MJ, Knoll RM, Bouhadjer K, Remenschneider AK, Kozin ED. Educational Quality of YouTube Cholesteatoma Surgery Videos: Areas for Improvement. OTO Open 2022; 6:2473974X221120250. [PMID: 36274920] [PMCID: PMC9585570] [DOI: 10.1177/2473974x221120250] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 04/11/2022] [Accepted: 06/16/2022] [Indexed: 11/15/2022]
Abstract
Otolaryngology surgical education continues to evolve, with trainees increasingly using videos to learn technical skills. Trainees commonly use YouTube, but no study to date has evaluated the educational quality (EQ) of otologic surgical videos on YouTube. We aimed to assess the EQ of cholesteatoma surgical videos. Cholesteatoma surgical videos were queried using YouTube search terms, assessed using the LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS), a validated video assessment tool, and categorized into low (0-6), medium (7-12), and high (13-18) EQ groups. In total, 74 videos were identified (mean LAP-VEGaS score = 9.6 ± 4.0), and 44.6% had medium EQ. Videos commonly lacked graphic aids to highlight anatomy (71.6%) and postprocedural outcomes (68.9%). LAP-VEGaS scores were greater in videos originating from US surgeons than from non-US surgeons (12.4 ± 3.4 vs 8.0 ± 3.5; P < .001). Our study highlights that otolaryngology trainees may experience difficulty finding high-EQ cholesteatoma surgery videos on YouTube. Areas for improved EQ content are discussed. Level of evidence: IV.
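The LAP-VEGaS banding used above is a direct mapping from the 0-18 score to three EQ groups; a minimal sketch with invented scores:

```python
# Hedged sketch: bin LAP-VEGaS scores (0-18) into the EQ groups used above.
def eq_group(score: int) -> str:
    if not 0 <= score <= 18:
        raise ValueError("LAP-VEGaS scores range from 0 to 18")
    if score <= 6:
        return "low"
    if score <= 12:
        return "medium"
    return "high"

print([eq_group(s) for s in (4, 9, 15)])  # ['low', 'medium', 'high']
```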
Affiliation(s)
- Matthew J. Wu: Department of Otolaryngology, Massachusetts Eye and Ear, Boston, Massachusetts, USA; Loyola University Chicago Stritch School of Medicine, Maywood, Illinois, USA
- Renata M. Knoll: Department of Otolaryngology, Massachusetts Eye and Ear, Boston, Massachusetts, USA; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA
- Karim Bouhadjer: Department of Otolaryngology, Massachusetts Eye and Ear, Boston, Massachusetts, USA; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA
- Aaron K. Remenschneider: Department of Otolaryngology, Massachusetts Eye and Ear, Boston, Massachusetts, USA; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA; Department of Otolaryngology, University of Massachusetts Medical Center, Worcester, Massachusetts, USA
- Elliott D. Kozin: Department of Otolaryngology, Massachusetts Eye and Ear, Boston, Massachusetts, USA; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA
- Correspondence: Elliott D. Kozin, MD, Department of Otolaryngology, Massachusetts Eye and Ear, 243 Charles St, Boston, MA 02114, USA
23
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. [PMID: 35954466] [PMCID: PMC9367568] [DOI: 10.3390/cancers14153803] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 06/28/2022] [Revised: 07/29/2022] [Accepted: 08/03/2022] [Indexed: 02/05/2023]
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still at an early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates the fundamental steps in developing CV for surgical applications, and provides a comprehensive overview of the state of the art in AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy, even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models have been trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training, providing automatic surgical skill assessment in the operating room. While promising, these proofs of concept require further development, validation on multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
Affiliation(s)
- Giuseppe Quero: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Pietro Mascagni: Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy; Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- Fiona R. Kolbinger: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Claudio Fiorillo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy (Correspondence; Tel.: +39-333-8747996)
- Davide De Sio: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Fabio Longo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Alberto Schena: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vito Laterza: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Fausto Rosa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Roberta Menghi: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Valerio Papa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vincenzo Tondolo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Caterina Cina: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Marius Distler: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Juergen Weitz: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Stefanie Speidel: National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
- Nicolas Padoy: Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France; ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
- Sergio Alfieri: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
24
Surgical Tool Datasets for Machine Learning Research: A Survey. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01640-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/05/2022]
Abstract
This paper is a comprehensive survey of datasets for surgical tool detection and of the related surgical data science and machine learning techniques and algorithms. The survey offers a high-level perspective of current research in this area, analyses the taxonomy of approaches adopted by researchers using surgical tool datasets, and addresses key areas of research, such as the datasets used, the evaluation metrics applied, and the deep learning techniques utilised. Our presentation and taxonomy provide a framework that facilitates greater understanding of current work and highlights the challenges and opportunities for further innovative and useful research.
25
Ban Y, Rosman G, Eckhoff JA, Ward TM, Hashimoto DA, Kondo T, Iwaki H, Meireles OR, Rus D. SUPR-GAN: SUrgical PRediction GAN for Event Anticipation in Laparoscopic and Robotic Surgery. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3156856] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 12/27/2022]
Affiliation(s)
- Yutong Ban: Distributed Robotics Laboratory, CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman: Distributed Robotics Laboratory, CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
- Daniela Rus: Distributed Robotics Laboratory, CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA
26
Artificial Intelligence for Computer Vision in Surgery: A Call for Developing Reporting Guidelines. Ann Surg 2022; 275:e609-e611. [PMID: 35129482] [DOI: 10.1097/sla.0000000000005319] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 11/26/2022]
27
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287] [PMCID: PMC9135051] [DOI: 10.1016/j.media.2021.102306] [Citation(s) in RCA: 82] [Impact Index Per Article: 41.0] [Received: 11/10/2020] [Revised: 11/03/2021] [Accepted: 11/08/2021] [Indexed: 02/06/2023]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Matthias Eisenmann: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Duygu Sarikaya: Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Keno März: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Anand Malpani: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Hubertus Feussner: Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Stamatia Giannarou: The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Adrian Park: Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Carla Pugh: Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Swaroop S Vedula: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Cleary: The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier: L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud: LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov: University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume: Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis: Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab: Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor: Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth: Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins: Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel: Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke: Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto: University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux: My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam: Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff: Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani: Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus: National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles: Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber: Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert: Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin: LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
28
Surgical Therapy of Esophageal Adenocarcinoma-Current Standards and Future Perspectives. Cancers (Basel) 2021; 13:5834. [PMID: 34830988] [PMCID: PMC8616112] [DOI: 10.3390/cancers13225834] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/02/2021] [Revised: 11/15/2021] [Accepted: 11/18/2021] [Indexed: 12/18/2022]
Abstract
Simple Summary: Subtotal resection of the esophagus with resection of local lymph nodes is the oncological procedure of choice for advanced esophageal cancer. Reconstruction of the intestinal tract is predominantly performed with a gastric tube. Even in specialized centers, this surgical procedure is associated with a high complication rate but low mortality. Clinical research therefore aims to develop peri- and intraoperative strategies to improve patient-related outcomes.
Transthoracic esophagectomy is currently the predominant curative treatment option for resectable esophageal adenocarcinoma. The majority of carcinomas present as locally advanced tumors requiring multimodal strategies with either neoadjuvant chemoradiotherapy or perioperative chemotherapy alone. Minimally invasive, including robotic, techniques are increasingly applied, with a broad spectrum of technical variations existing for both the oncological resection and the gastric reconstruction. At present, intrathoracic esophagogastrostomy is the preferred technique of reconstruction (Ivor Lewis esophagectomy). With standardized surgical procedures, a complete resection of the primary tumor can be achieved in almost 95% of patients. Even in expert centers, postoperative morbidity remains high, with an overall complication rate of 50-60%, whereas 30- and 90-day mortality are reported to be <2% and <6%, respectively. Due to the complexity of transthoracic esophagectomy and its associated morbidity, esophageal surgery is recommended to be performed in specialized centers with an appropriate caseload, which is yet to be defined. In order to reduce postoperative morbidity, patient selection, preoperative rehabilitation and postoperative fast-track concepts are feasible strategies of perioperative management. Future directions aim to further centralize esophageal services, to individualize surgical treatment for high-risk patients and to implement intraoperative imaging modalities that modify the oncological extent of resection and facilitate surgical reconstruction.
29
Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. Int J Surg 2021; 95:106151. [PMID: 34695601] [DOI: 10.1016/j.ijsu.2021.106151] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Received: 07/29/2021] [Revised: 10/04/2021] [Accepted: 10/19/2021] [Indexed: 12/12/2022]
Abstract
BACKGROUND Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS and to identify and discuss current limitations and challenges. MATERIALS AND METHODS A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were visually presented in tables using the SPIDER tool. RESULTS Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error on prediction of urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy on prediction of length of stay after RARP was 88.5%. Accuracy on recognition of the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION The reviewed studies were of low quality. The findings are limited by the small size of the datasets. Comparison between studies on the same topic was restricted by the heterogeneity of algorithms and datasets. There is no proof that AI can currently identify the critical tasks of RAS operations, which determine patient outcome. There is an urgent need for studies on large datasets and for external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION Review Registry Unique Identifying Number: reviewregistry1225.
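The two headline metric types cited in these studies, mean absolute error for a continuous outcome and accuracy for a categorical one, are straightforward to reproduce; a minimal sketch with invented values:

```python
# Hedged sketch of the two metric types cited in this review (values invented).
from sklearn.metrics import accuracy_score, mean_absolute_error

# Predicted vs. observed days to urinary continence after RARP.
pred_days = [90, 120, 60, 200]
true_days = [100, 95, 80, 170]
print("MAE (days):", mean_absolute_error(true_days, pred_days))

# Predicted vs. observed length-of-stay class (binary here for illustration).
pred_los = [0, 1, 1, 0]
true_los = [0, 1, 0, 0]
print("LOS accuracy:", accuracy_score(true_los, pred_los))
```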
Affiliation(s)
- Andrea Moglia: EndoCAS, Center for Computer Assisted Surgery, University of Pisa, 56124, Pisa, Italy; 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Greece; MPLSC, Athens Medical School, National and Kapodistrian University of Athens, Greece; Department of Surgery, University of Washington Medical Center, Seattle, WA, United States; Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, United Kingdom
30
Meireles OR, Rosman G, Altieri MS, Carin L, Hager G, Madani A, Padoy N, Pugh CM, Sylla P, Ward TM, Hashimoto DA. SAGES consensus recommendations on an annotation framework for surgical video. Surg Endosc 2021; 35:4918-4929. [PMID: 34231065] [DOI: 10.1007/s00464-021-08578-9] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Received: 04/25/2021] [Accepted: 05/26/2021] [Indexed: 11/25/2022]
Abstract
BACKGROUND The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
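A hierarchical temporal annotation of the kind recommended here is naturally serialized as nested records in which phases contain steps (and, further down, actions). The structure below is a hypothetical illustration, not the SAGES-specified schema; all field names and labels are assumptions:

```python
# Hypothetical nested record illustrating a phase/step temporal hierarchy
# (field names and labels are assumptions, not the SAGES-specified schema).
import json

annotation = {
    "video_id": "case_0042",
    "phases": [
        {
            "label": "dissection",
            "start_s": 120.0,
            "end_s": 840.0,
            "steps": [
                {"label": "adhesiolysis", "start_s": 120.0, "end_s": 300.0},
                {"label": "vessel_division", "start_s": 300.0, "end_s": 840.0},
            ],
        }
    ],
}
print(json.dumps(annotation, indent=2))
```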
Affiliation(s)
- Ozanan R Meireles: Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Guy Rosman: Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA
- Maria S Altieri: Department of Surgery, East Carolina University, Greenville, USA
- Lawrence Carin: Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Gregory Hager: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Amin Madani: Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy: ICube, University of Strasbourg, Strasbourg, France; IHU Strasbourg, Strasbourg, France
- Carla M Pugh: Department of Surgery, Stanford University, Stanford, USA
- Patricia Sylla: Department of Surgery, Mount Sinai Medical Center, New York, USA
- Thomas M Ward: Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Daniel A Hashimoto: Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA