1
Bannone E, Collins T, Esposito A, Cinelli L, De Pastena M, Pessaux P, Felli E, Andreotti E, Okamoto N, Barberio M, Felli E, Montorsi RM, Ingaglio N, Rodríguez-Luna MR, Nkusi R, Marescaux J, Hostettler A, Salvia R, Diana M. Surgical optomics: hyperspectral imaging and deep learning towards precision intraoperative automatic tissue recognition-results from the EX-MACHYNA trial. Surg Endosc 2024; 38:3758-3772. [PMID: 38789623] [DOI: 10.1007/s00464-024-10880-1]
Abstract
BACKGROUND Hyperspectral imaging (HSI), combined with machine learning, can help to identify characteristic tissue signatures enabling automatic tissue recognition during surgery. This study aims to develop the first HSI-based automatic abdominal tissue recognition with human data in a prospective bi-center setting. METHODS Data were collected from patients undergoing elective open abdominal surgery at two international tertiary referral hospitals from September 2020 to June 2021. HS images were captured at various time points throughout the surgical procedure. The resulting RGB images were annotated with 13 distinct organ labels. Convolutional Neural Networks (CNNs) were employed for the analysis, with both external and internal validation settings utilized. RESULTS A total of 169 patients were included, 73 (43.2%) from Strasbourg and 96 (56.8%) from Verona. The internal validation setting combined patients from both centers into a single cohort, randomly allocated to the training (127 patients, 75.1%, 585 images) and test sets (42 patients, 24.9%, 181 images); this setting showed the best performance. The highest true positive rate was achieved for the skin (100%) and the liver (97%). Misclassifications involved tissues with a similar embryological origin (omentum and mesentery: 32%) or with overlapping boundaries (liver and hepatic ligament: 22%). The median Dice score for ten tissue classes exceeded 80%. CONCLUSION To improve automatic surgical scene segmentation and to drive clinical translation, accurate multicenter HSI datasets are essential, but further work is needed to quantify the clinical value of HSI. HSI might be included in a new omics science, namely surgical optomics, which uses light to extract quantifiable tissue features during surgery.
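As a schematic illustration only (not the trial's actual pipeline), the per-class Dice score used to summarize segmentation quality above can be computed from two label maps as follows; the toy arrays and the `dice_score` helper are assumptions for demonstration:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for a single tissue class in a label map:
    2 * |P ∩ T| / (|P| + |T|), where P and T are the predicted and
    ground-truth pixel sets for that label."""
    p = (pred == label)
    t = (target == label)
    intersection = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy label maps standing in for annotated organ masks (values = organ labels)
pred = np.array([[1, 1, 2], [2, 2, 3]])
target = np.array([[1, 2, 2], [2, 2, 3]])
print(round(dice_score(pred, target, 2), 3))
```

A per-class median across test images, as reported in the abstract, would simply take the median of these values over all images containing that class.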
Affiliation(s)
- Elisa Bannone
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France.
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy.
- Toby Collins
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Alessandro Esposito
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- Lorenzo Cinelli
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Department of Gastrointestinal Surgery, San Raffaele Hospital IRCCS, Milan, Italy
- Matteo De Pastena
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- Patrick Pessaux
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Department of General, Digestive, and Endocrine Surgery, University Hospital of Strasbourg, Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, Strasbourg, France
- Emanuele Felli
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Department of General, Digestive, and Endocrine Surgery, University Hospital of Strasbourg, Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, Strasbourg, France
- Elena Andreotti
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- Nariaki Okamoto
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Photonics Instrumentation for Health, iCube Laboratory, University of Strasbourg, Strasbourg, France
- Manuel Barberio
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- General Surgery Department, Ospedale Cardinale G. Panico, Tricase, Italy
- Eric Felli
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Roberto Maria Montorsi
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- Naomi Ingaglio
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- María Rita Rodríguez-Luna
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Photonics Instrumentation for Health, iCube Laboratory, University of Strasbourg, Strasbourg, France
- Richard Nkusi
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Jacques Marescaux
- Research Institute Against Digestive Cancer (IRCAD), 67000, Strasbourg, France
- Roberto Salvia
- Department of General and Pancreatic Surgery, The Pancreas Institute, University of Verona Hospital Trust, P.Le Scuro 10, 37134, Verona, Italy
- Michele Diana
- Photonics Instrumentation for Health, iCube Laboratory, University of Strasbourg, Strasbourg, France
- Department of Surgery, University Hospital of Geneva, Geneva, Switzerland
2
Pattilachan TM, Christodoulou M, Ross S. Diagnosis to dissection: AI's role in early detection and surgical intervention for gastric cancer. J Robot Surg 2024; 18:259. [PMID: 38900376] [DOI: 10.1007/s11701-024-02005-6]
Abstract
Gastric cancer remains a formidable health challenge worldwide; early detection and effective surgical intervention are critical for improving patient outcomes. This comprehensive review explores the evolving landscape of gastric cancer management, emphasizing the significant contributions of artificial intelligence (AI) in revolutionizing both diagnostic and therapeutic approaches. Despite advancements in the medical field, the subtle nature of early gastric cancer symptoms often leads to late-stage diagnoses, where survival rates are notably decreased. Historically, the treatment of gastric cancer has transitioned from palliative care to surgical resection, evolving further with the introduction of minimally invasive surgical (MIS) techniques. In the current era, AI has emerged as a transformative force, enhancing the precision of early gastric cancer detection through sophisticated image analysis, and supporting surgical decision-making with predictive modeling and real-time preop-, intraop-, and postoperative guidance. However, the deployment of AI in healthcare raises significant ethical, legal, and practical challenges, including the necessity for ongoing professional education and the development of standardized protocols to ensure patient safety and the effective use of AI technologies. Future directions point toward a synergistic integration of AI with clinical best practices, promising a new era of personalized, efficient, and safer gastric cancer management.
Affiliation(s)
- Tara Menon Pattilachan
- AdventHealth Tampa, Surgery College of Medicine, Digestive Health Institute, University of Central Florida (UCF), 3000 Medical Park Drive, Suite #500, Tampa, FL, 33613, USA
- Maria Christodoulou
- AdventHealth Tampa, Surgery College of Medicine, Digestive Health Institute, University of Central Florida (UCF), 3000 Medical Park Drive, Suite #500, Tampa, FL, 33613, USA
- Sharona Ross
- AdventHealth Tampa, Surgery College of Medicine, Digestive Health Institute, University of Central Florida (UCF), 3000 Medical Park Drive, Suite #500, Tampa, FL, 33613, USA.
3
Matsumoto S, Kawahira H, Fukata K, Doi Y, Kobayashi N, Hosoya Y, Sata N. Laparoscopic distal gastrectomy skill evaluation from video: a new artificial intelligence-based instrument identification system. Sci Rep 2024; 14:12432. [PMID: 38816459] [PMCID: PMC11139867] [DOI: 10.1038/s41598-024-63388-y]
Abstract
The advent of Artificial Intelligence (AI)-based object detection has made it possible to identify the position coordinates of surgical instruments from video. This study aimed to find kinematic differences by surgical skill level. An AI algorithm was developed to accurately identify the X and Y coordinates of surgical instrument tips from video. Kinematic analysis, including fluctuation analysis, was performed on 18 laparoscopic distal gastrectomy videos from three expert and three novice surgeons (3 videos/surgeon, 11.6 h, 1,254,010 frames). The analysis showed that the expert cohort moved more efficiently and regularly, with significantly shorter operation time and total travel distance. Instrument tip movement did not differ in velocity, acceleration, or jerk between skill levels. The evaluation index of fluctuation β was significantly higher in experts, and a ROC curve cutoff of 1.4 yielded a sensitivity and specificity of 77.8% for distinguishing experts from novices. Despite the small sample, this study suggests that AI-based object detection with fluctuation analysis is promising, because skill can be evaluated in real time with potential for peri-operational assessment.
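To make the kinematic quantities concrete, here is a minimal sketch (not the study's actual code) of how velocity, acceleration, jerk, and total travel distance can be derived by finite differences from a detected (X, Y) tip trajectory; the `kinematics` helper, the frame rate, and the toy trajectory are assumptions:

```python
import numpy as np

def kinematics(xy: np.ndarray, fps: float = 30.0):
    """Speed, acceleration, and jerk magnitudes plus total path length
    from an (N, 2) array of instrument-tip (X, Y) coordinates,
    using first differences between consecutive frames."""
    dt = 1.0 / fps
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
    vel = step / dt                                     # speed, units/s
    acc = np.abs(np.diff(vel)) / dt                     # |dv/dt|
    jerk = np.abs(np.diff(acc)) / dt                    # |da/dt|
    total_path = step.sum()                             # total travel distance
    return vel, acc, jerk, total_path

# Toy trajectory: tip moves 3 pixels per frame along X at 30 fps
xy = np.array([[0.0, 0.0], [3.0, 0.0], [6.0, 0.0], [9.0, 0.0]])
vel, acc, jerk, path = kinematics(xy, fps=30.0)
print(path)  # total travel distance in pixels
```

In practice the trajectory would come from the per-frame bounding boxes or keypoints of the instrument detector, and smoothing would typically be applied before differentiating.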
Affiliation(s)
- Shiro Matsumoto
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan.
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Yoshinori Hosoya
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Naohiro Sata
- Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
4
Rueckert T, Rueckert D, Palm C. Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art. Comput Biol Med 2024; 169:107929. [PMID: 38184862] [DOI: 10.1016/j.compbiomed.2024.107929]
Abstract
In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.
Affiliation(s)
- Tobias Rueckert
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany.
- Daniel Rueckert
- Artificial Intelligence in Healthcare and Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; Department of Computing, Imperial College London, UK
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany; Regensburg Center of Health Sciences and Technology (RCHST), OTH Regensburg, Germany
5
Daneshgar Rahbar M, Mousavi Mojab SZ. Enhanced U-Net with GridMask (EUGNet): A Novel Approach for Robotic Surgical Tool Segmentation. J Imaging 2023; 9:282. [PMID: 38132700] [PMCID: PMC10744415] [DOI: 10.3390/jimaging9120282]
Abstract
This study introduces EUGNet, an enhanced U-Net that incorporates GridMask image augmentation, a pixel-manipulation technique, to address U-Net's limitations. EUGNet features a deep contextual encoder, residual connections, class-balancing loss, adaptive feature fusion, a GridMask augmentation module, efficient implementation, and multi-modal fusion. These innovations enhance segmentation accuracy and robustness, making it well suited for medical image analysis. The GridMask algorithm is detailed, demonstrating its distinct approach to pixel elimination and enhancing model adaptability to occlusions and local features. A comprehensive dataset of robotic surgical scenarios and instruments is used for evaluation, showcasing the framework's robustness. Specifically, there are improvements of 1.6 percentage points in balanced accuracy for the foreground, 1.7 points in intersection over union (IoU), and 1.7 points in mean Dice similarity coefficient (DSC). Inference time, a critical factor in real-time applications, also improved, decreasing from 0.163 milliseconds for the U-Net without GridMask to 0.097 milliseconds for the U-Net with GridMask.
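The core GridMask idea, dropping a regular grid of square patches from the input image so the network learns to cope with occlusions, can be sketched as follows. This is an illustrative simplification of the published algorithm (which also randomizes the grid offset and rotation); the `gridmask` helper and its parameters are assumptions:

```python
import numpy as np

def gridmask(h: int, w: int, d: int = 8, ratio: float = 0.5) -> np.ndarray:
    """Binary GridMask of shape (h, w): square holes of side d*ratio,
    tiled with period d. 0 marks dropped pixels, 1 keeps them."""
    keep = np.ones((h, w), dtype=np.uint8)
    hole = int(d * ratio)
    for y in range(0, h, d):
        for x in range(0, w, d):
            keep[y:y + hole, x:x + hole] = 0  # punch one hole per grid cell
    return keep

mask = gridmask(16, 16, d=8, ratio=0.5)
# Applying it to an (H, W, C) image: img_aug = img * mask[..., None]
print(mask.mean())  # fraction of pixels kept
```

With `d=8` and `ratio=0.5`, each 8x8 cell loses a 4x4 patch, so 75% of the pixels survive; varying `d` and `ratio` trades off occlusion robustness against information loss.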
Affiliation(s)
- Mostafa Daneshgar Rahbar
- Department of Electrical and Computer Engineering, Lawrence Technological University, Southfield, MI 48075, USA