1
Laterza V, Marchegiani F, Aisoni F, Ammendola M, Schena CA, Lavazza L, Ravaioli C, Carra MC, Costa V, De Franceschi A, De Simone B, de’Angelis N. Smart Operating Room in Digestive Surgery: A Narrative Review. Healthcare (Basel) 2024; 12:1530. PMID: 39120233; PMCID: PMC11311806; DOI: 10.3390/healthcare12151530. Open access.
Abstract
The introduction of new technologies into current digestive surgical practice is progressively reshaping the operating room, defining the fourth surgical revolution. The implementation of black boxes and control towers aims to streamline workflow and reduce surgical error through early identification and analysis, while augmented reality and artificial intelligence augment surgeons' perceptual and technical skills by superimposing three-dimensional models onto real-time surgical images. Moreover, operating room architecture is transitioning toward an integrated digital environment to improve efficiency and, ultimately, patient outcomes. This narrative review describes the most recent evidence regarding the role of these technologies in transforming current digestive surgical practice, underlining their potential benefits and drawbacks in terms of efficiency and patient outcomes, in an attempt to foresee the digestive surgical practice of tomorrow.
Affiliation(s)
- Vito Laterza
- Department of Digestive Surgical Oncology and Liver Transplantation, University Hospital of Besançon, 3 Boulevard Alexandre Fleming, 25000 Besançon, France
- Francesco Marchegiani
- Unit of Colorectal and Digestive Surgery, DIGEST Department, Beaujon University Hospital, AP-HP, University of Paris Cité, Clichy, 92110 Paris, France
- Filippo Aisoni
- Unit of Emergency Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy
- Michele Ammendola
- Digestive Surgery Unit, Health of Science Department, University Hospital “R. Dulbecco”, 88100 Catanzaro, Italy
- Carlo Alberto Schena
- Unit of Robotic and Minimally Invasive Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy
- Luca Lavazza
- Hospital Network Coordinator of Azienda Ospedaliero-Universitaria and Azienda USL di Ferrara, 44121 Ferrara, Italy
- Cinzia Ravaioli
- Azienda Ospedaliero-Universitaria di Ferrara, 44121 Ferrara, Italy
- Maria Clotilde Carra
- Rothschild Hospital (AP-HP), 75012 Paris, France
- INSERM-Sorbonne Paris Cité, Epidemiology and Statistics Research Centre, 75004 Paris, France
- Vittore Costa
- Unit of Orthopedics, Humanitas Hospital, 24125 Bergamo, Italy
- Belinda De Simone
- Department of Emergency Surgery, Academic Hospital of Villeneuve St Georges, 91560 Villeneuve St. Georges, France
- Nicola de’Angelis
- Unit of Robotic and Minimally Invasive Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy
- Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
2
Oh MY, Yoon KC, Hyeon S, Jang T, Choi Y, Kim J, Kong HJ, Chai YJ. Navigating the Future of 3D Laparoscopic Liver Surgeries: Visualization of Internal Anatomy on Laparoscopic Images With Augmented Reality. Surg Laparosc Endosc Percutan Tech 2024 (online ahead of print). PMID: 38965779; DOI: 10.1097/sle.0000000000001307.
Abstract
INTRODUCTION Liver tumor resection requires precise localization of tumors and blood vessels. Despite advancements in 3-dimensional (3D) visualization for laparoscopic surgeries, challenges persist. We developed and evaluated an augmented reality (AR) system that overlays preoperative 3D models onto laparoscopic images, offering crucial support for 3D visualization during laparoscopic liver surgeries. METHODS Anatomic liver structures from preoperative computed tomography scans were segmented using software including the open-source 3D Slicer, with Maya 2022 for 3D model editing. A registration system was created with 3D visualization software utilizing a stereo registration input system to overlay the virtual liver onto laparoscopic images during surgical procedures. A controller was customized from a modified keyboard to facilitate manual alignment of the virtual liver with the laparoscopic image. The AR system was evaluated by 3 experienced surgeons who performed manual registration for a total of 27 images from 7 clinical cases. The evaluation criteria included registration time, measured in minutes, and accuracy, measured using the Dice similarity coefficient. RESULTS The overall mean registration time was 2.4±1.7 minutes (range: 0.3 to 9.5 min), and the overall mean registration accuracy was 93.8%±4.9% (range: 80.9% to 99.7%). CONCLUSION Our validated AR system has the potential to effectively enable the prediction of internal hepatic anatomic structures during 3D laparoscopic liver resection, and may enhance 3D visualization for select laparoscopic liver surgeries.
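The Dice similarity coefficient used above as the registration-accuracy metric can be computed from two binary masks as 2·|A∩B| / (|A| + |B|); a minimal NumPy sketch with toy masks (not the study's data):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: overlaid virtual-liver mask vs. a reference outline
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 pixels
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True    # 6 pixels
print(round(dice_coefficient(pred, ref), 3))  # 2*4 / (4+6) = 0.8
```

A Dice value of 1.0 means perfect overlap, so the study's mean of 93.8% indicates close but not exact alignment between the projected model and the reference.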
Affiliation(s)
- Moon Young Oh
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Kyung Chul Yoon
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Seulgi Hyeon
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Taesoo Jang
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Yeonjin Choi
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Junki Kim
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Hyoun-Joong Kong
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Young Jun Chai
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
3
Zeng X, Deng H, Dong Y, Hu H, Fang C, Xiang N. A pilot study of virtual liver segment projection technology in subsegment-oriented laparoscopic anatomical liver resection when indocyanine green staining fails (with video). Surg Endosc 2024; 38:4057-4066. PMID: 38806957; DOI: 10.1007/s00464-024-10912-w.
Abstract
BACKGROUND Precision surgery for liver tumors favors laparoscopic anatomical liver resection (LALR), involving the removal of specific liver segments or subsegments. Indocyanine green (ICG)-negative staining is a commonly used method for defining resection boundaries but is prone to failure, and staining cannot be repeated during surgery once it fails. In this study, we employed virtual liver segment projection (VLSP) technology as a salvage approach for precise boundary determination. Our aim was to assess the feasibility of using VLSP to determine liver resection boundaries in this situation. METHODS Between January 2021 and June 2023, 12 consecutive patients undergoing subsegment-oriented LALR were included in this pilot series. VLSP technology was used to define the resection boundaries when ICG-negative staining failed. Routine surgical parameters and short-term outcomes were evaluated to assess the safety of VLSP in this procedure. In addition, its feasibility was assessed by analyzing the agreement between the predicted resected liver volume (PRLV) and actual resected liver volume (ARLV). RESULTS Among the 12 enrolled patients, the mean operation time was 444.58 ± 101.70 min (range 290-570 min), with a mean blood loss of 125.00 ± 96.53 mL (range 50-400 mL). One patient (8.3%) was converted to laparotomy for subsequent parenchymal transection, four (33.3%) received blood transfusions, and four (33.3%) had postoperative complications. All patients received an R0 resection. The Pearson correlation coefficient (r) between PRLV and ARLV was 0.98 (R2 = 0.96, p < 0.05), and the relative error (RE) was 8.62 ± 6.66% across the 12 patients, indicating good agreement. CONCLUSION Failure of intraoperative ICG-negative staining during subsegment-oriented LALR is possible, and VLSP may be an alternative for defining resection boundaries in such cases.
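The agreement statistics reported above (Pearson r between PRLV and ARLV, and per-case relative error) can be reproduced on any paired volume measurements; a minimal NumPy sketch with hypothetical volumes, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def relative_error(predicted, actual):
    """Per-case relative error |PRLV - ARLV| / ARLV, as a percentage."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return 100.0 * np.abs(predicted - actual) / actual

# Hypothetical predicted vs. actual resected volumes in mL
prlv = [120.0, 250.0, 90.0, 310.0]
arlv = [130.0, 240.0, 100.0, 300.0]
r = pearson_r(prlv, arlv)
re = relative_error(prlv, arlv)
print(round(r, 3), round(re.mean(), 2))
```

High r alone does not guarantee small per-case deviations, which is why the study reports the relative error alongside the correlation.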
Affiliation(s)
- Xiaojun Zeng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Haowen Deng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Yanchen Dong
- School of Traditional Chinese Medicine, Southern Medical University, Guangzhou, 510515, China
- Haoyu Hu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Nan Xiang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
4
Deng H, Zeng X, Hu H, Zeng N, Huang D, Wu C, Fang C, Xiang N. Laparoscopic left hemihepatectomy using augmented reality navigation plus ICG fluorescence imaging for hepatolithiasis: a retrospective single-arm cohort study (with video). Surg Endosc 2024; 38:4048-4056. PMID: 38806956; DOI: 10.1007/s00464-024-10922-8.
Abstract
BACKGROUND Laparoscopic left hemihepatectomy (LLH) has been shown to be an effective and safe method for treating hepatolithiasis primarily affecting the left hemiliver. However, this procedure still presents challenges: due to pathological changes caused by intrahepatic duct stones, safely dissecting the hilar vessels and determining precise resection boundaries remain difficult, even with fluorescence imaging. Our team proposed a new method combining augmented reality navigation (ARN) with indocyanine green (ICG) fluorescence imaging for LLH in hepatolithiasis cases. This study aimed to investigate the feasibility of this combined approach. METHODS Between May 2021 and September 2023, 16 patients with hepatolithiasis who underwent LLH were included. All patients underwent preoperative 3D evaluation and were then guided by ARN and ICG fluorescence imaging during the procedure. Perioperative and short-term postoperative outcomes were assessed to evaluate the safety and efficacy of the method. RESULTS All 16 patients successfully underwent LLH. The mean operation time was 380.31 ± 92.17 min, with a mean estimated blood loss of 116.25 ± 64.49 mL. ARN successfully guided hilar vessel dissection in all patients. ICG fluorescence imaging identified the liver resection boundaries in 11 patients (68.8%); in the remaining 5 patients (31.3%) in whom fluorescence imaging failed, virtual liver segment projection (VLSP) successfully identified the resection boundaries. No major complications occurred. The immediate residual stone rate, stone recurrence rate, and rate of stone extraction through the T-tube sinus tract were 12.5%, 6.3%, and 6.3%, respectively. CONCLUSION The combination of ARN and ICG fluorescence imaging enhances the safety and precision of LLH for hepatolithiasis. Moreover, ARN may serve as a safe and effective tool for identifying precise resection boundaries when ICG fluorescence imaging fails.
Affiliation(s)
- Haowen Deng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Xiaojun Zeng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Haoyu Hu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Ning Zeng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Dongqing Huang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Chao Wu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
- Nan Xiang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, 510280, China
5
Yang Z, Dai J, Pan J. 3D reconstruction from endoscopy images: A survey. Comput Biol Med 2024; 175:108546. PMID: 38704902; DOI: 10.1016/j.compbiomed.2024.108546.
Abstract
Three-dimensional reconstruction of images acquired through endoscopes plays a vital role in a growing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular or binocular. We review the classification of depth-estimation methods according to endoscope type. Fundamentally, depth estimation relies on image feature matching and multi-view geometry theory, but these traditional techniques face many problems in the endoscopic environment. With the continuing development of deep learning, a growing number of learning-based works address challenges such as inconsistent illumination and texture sparsity. We reviewed over 170 papers published in the 10 years from 2013 to 2023. The commonly used public datasets and performance metrics are summarized. We also give a taxonomy of methods and analyze the advantages and drawbacks of the algorithms. Summary tables and a results atlas are provided to facilitate comparison of the qualitative and quantitative performance of different methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and speculate on the prospects of depth estimation research in medical applications. We also compare the robustness, processing time, and scene representation of the methods to help doctors and researchers select appropriate methods for their surgical applications.
Affiliation(s)
- Zhuoyue Yang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Ju Dai
- Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
6
Yang Z, Pan J, Dai J, Sun Z, Xiao Y. Self-Supervised Lightweight Depth Estimation in Endoscopy Combining CNN and Transformer. IEEE Trans Med Imaging 2024; 43:1934-1944. PMID: 38198275; DOI: 10.1109/tmi.2024.3352390.
Abstract
In recent years, an increasing number of medical engineering tasks, such as surgical navigation, pre-operative registration, and surgical robotics, rely on 3D reconstruction techniques. Self-supervised depth estimation has attracted interest in endoscopic scenarios because it does not require ground truth. Most existing methods depend on expanding the parameter count to improve performance; designing a lightweight self-supervised model that obtains competitive results is therefore a hot topic. We propose a lightweight network with a tight coupling of convolutional neural network (CNN) and Transformer for depth estimation. Unlike other methods that use a CNN and a Transformer to extract features separately and then fuse them at the deepest layer, we use CNN and Transformer modules to extract features at different scales in the encoder. This hierarchical structure leverages the advantages of the CNN in texture perception and the Transformer in shape extraction. At each feature-extraction scale, the CNN acquires local features while the Transformer encodes global information. Finally, we add multi-head attention modules to the pose network to improve the accuracy of predicted poses. Experiments demonstrate that our approach obtains comparable results while effectively compressing the model parameters on two datasets.
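The per-scale fusion idea described above (a CNN branch for local texture, a Transformer-style attention branch for global shape, combined at the same scale) can be illustrated schematically; the following NumPy sketch is a toy illustration of the principle, not the paper's architecture:

```python
import numpy as np

def conv3x3(x, w):
    """Naive 'same' 3x3 convolution over an (H, W) feature map: local texture cues."""
    H, W = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def self_attention(tokens):
    """Single-head self-attention over (N, d) tokens: global shape cues."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

# Toy 8x8 "feature map": the CNN branch captures local neighborhoods,
# the attention branch mixes information globally; fuse at the same scale.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8))
local_feat = conv3x3(fmap, rng.standard_normal((3, 3)))
tokens = fmap.reshape(-1, 1)  # 64 one-dimensional tokens
global_feat = self_attention(tokens).reshape(8, 8)
fused = local_feat + global_feat  # per-scale fusion, as in hybrid encoders
print(fused.shape)  # (8, 8)
```

In the actual network the two branches are learned layers fused at every encoder scale; here the fixed kernel and raw attention merely show how local and global pathways operate on the same feature map.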
7
Ribeiro M, Espinel Y, Rabbani N, Pereira B, Bartoli A, Buc E. Augmented Reality Guided Laparoscopic Liver Resection: A Phantom Study With Intraparenchymal Tumors. J Surg Res 2024; 296:612-620. PMID: 38354617; DOI: 10.1016/j.jss.2023.12.014.
Abstract
INTRODUCTION Augmented reality (AR) in laparoscopic liver resection (LLR) can improve intrahepatic navigation by creating a virtual liver transparency. Our team has recently developed Hepataug, an AR software that projects invisible intrahepatic tumors onto the laparoscopic images and allows the surgeon to localize them precisely. However, the accuracy of registration according to the location and size of the tumors, as well as the influence of the projection axis, have never been measured. The aim of this work was to measure the three-dimensional (3D) tumor prediction error of Hepataug. METHODS Eight 3D virtual livers were created from the computed tomography scan of a healthy human liver. Reference markers with known coordinates were virtually placed on the anterior surface. The virtual livers were then deformed and 3D printed, forming 3D liver phantoms. After placing each 3D phantom inside a pelvitrainer, registration allowed Hepataug to project virtual tumors along two axes: the laparoscope axis and the operator port axis. The surgeons had to point to the center of eight virtual tumors per liver with a pointing tool whose coordinates were precisely calculated. RESULTS We obtained 128 pointing experiments. The average pointing error was 29.4 ± 17.1 mm and 9.2 ± 5.1 mm for the laparoscope and operator port axes, respectively (P = 0.001). Pointing errors tended to increase with tumor depth (correlation coefficients greater than 0.5, P < 0.001). There was no significant dependency of the pointing error on tumor size for either projection axis. CONCLUSIONS Tumor visualization by projection toward the operator port improves the accuracy of AR guidance and partially solves the problem of the two-dimensional visual interface of monocular laparoscopy. Despite the lower precision of AR for tumors located in the posterior part of the liver, it could allow surgeons to access these lesions without completely mobilizing the liver, hence decreasing surgical trauma.
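The pointing error reported above is simply the 3D Euclidean distance between the position indicated with the pointing tool and the true tumor center; a minimal NumPy sketch with hypothetical coordinates, not the study's data:

```python
import numpy as np

def pointing_errors(pointed, targets):
    """Euclidean 3D error (mm) between pointed positions and true tumor centers."""
    pointed, targets = np.asarray(pointed, float), np.asarray(targets, float)
    return np.linalg.norm(pointed - targets, axis=1)

# Hypothetical coordinates in mm: two pointing attempts vs. two tumor centers
pointed = [[10.0, 20.0, 30.0], [12.0, 18.0, 33.0]]
targets = [[10.0, 20.0, 27.0], [12.0, 22.0, 33.0]]
err = pointing_errors(pointed, targets)
print(err.mean())  # mean of [3.0, 4.0] = 3.5
```

Aggregating such per-attempt distances as mean ± standard deviation per projection axis gives exactly the kind of summary (29.4 ± 17.1 mm vs. 9.2 ± 5.1 mm) reported in the study.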
Affiliation(s)
- Mathieu Ribeiro
- Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Yamid Espinel
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Navid Rabbani
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Bruno Pereira
- Biostatistics Unit (DRCI), University Hospital Clermont-Ferrand, Clermont-Ferrand, France
- Adrien Bartoli
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Emmanuel Buc
- Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
8
Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023; 90:102943. PMID: 37703675; PMCID: PMC10958137; DOI: 10.1016/j.media.2023.102943.
Abstract
Augmented Reality (AR) is considered a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on developing methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess quantitatively and qualitatively the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We designed a study in which participants were asked to physically localise pre-operative tumours in a liver phantom under three image-guidance conditions: a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred the AR overlay to a navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
9
Tao H, Wang Z, Zeng X, Hu H, Li J, Lin J, Lin W, Fang C, Yang J. Augmented Reality Navigation Plus Indocyanine Green Fluorescence Imaging Can Accurately Guide Laparoscopic Anatomical Segment 8 Resection. Ann Surg Oncol 2023; 30:7373-7383. PMID: 37606841; DOI: 10.1245/s10434-023-14126-7.
Abstract
BACKGROUND Laparoscopic anatomical Segment 8 (S8) resection is a highly challenging hepatectomy. Augmented reality navigation (ARN), which can be combined with indocyanine green (ICG) fluorescence imaging, has been applied in various complex liver resections and may also be applied to laparoscopic anatomical S8 resection. However, no study has explored how to apply ARN plus ICG fluorescence imaging (ARN-FI) in laparoscopic anatomical S8 resection, or assessed its accuracy. PATIENTS AND METHODS This study is a post hoc analysis of 31 patients undergoing laparoscopic anatomical S8 resection from the clinical NaLLRFI trial; the resected liver volume was measured in each patient. Perioperative parameters of safety and feasibility, as well as accuracy analysis outcomes, were compared. RESULTS There were 16 patients in the ARN-FI group and 15 patients who underwent conventional laparoscopic hepatectomy without ARN or fluorescence imaging (non-ARN-FI group). There was no significant difference in baseline characteristics between the two groups. Compared with the non-ARN-FI group, the ARN-FI group had less intraoperative bleeding (median 125 vs. 300 mL, P = 0.003). No significant difference was observed in other postoperative short-term outcomes. Accuracy analysis indicated that the actual resected liver volume (ARLV) in the ARN-FI group agreed more closely with the planned resection volume. CONCLUSIONS ARN-FI was associated with less intraoperative bleeding and a more accurate resection volume. These techniques may address existing challenges and provide rational guidance for laparoscopic anatomical S8 resection.
Affiliation(s)
- Haisu Tao
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Zhuangxiong Wang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Xiaojun Zeng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Haoyu Hu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Jiang Li
- The First Affiliated Hospital, College of Medicine, Shihezi University, Shihezi, China
- Jinyu Lin
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Wenjun Lin
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Pazhou Lab, Guangzhou, China
- Jian Yang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Pazhou Lab, Guangzhou, China
10
Long Z, Chi Y, Yu X, Jiang Z, Yang D. ArthroNavi framework: stereo endoscope-guided instrument localization for arthroscopic minimally invasive surgeries. J Biomed Opt 2023; 28:106002. PMID: 37841507; PMCID: PMC10576396; DOI: 10.1117/1.jbo.28.10.106002.
Abstract
Significance As an example of a minimally invasive arthroscopic procedure, arthroscopic osteochondral autograft transplantation (OAT) is a common option for repairing focal cartilage defects in the knee joint. Arthroscopic OAT offers considerable benefits to patients, such as less post-operative pain and shorter hospital stays. However, performing OAT arthroscopically is an extremely demanding task because the osteochondral graft harvester must remain perpendicular to the cartilage surface to avoid angulation mismatch. Aim We present a practical ArthroNavi framework for instrument pose localization that combines a self-developed stereo endoscope with electromagnetic computation, equipping surgeons with navigation assistance that eases the operational constraints of arthroscopic OAT surgery. Approach A prototype stereo endoscope specifically suited to texture-less scenes is described in detail. The proposed framework employs the semi-global matching algorithm integrated with the marching cubes method for real-time processing of the 3D point cloud. To address initialization and occlusion issues, a display method based on patient tracking coordinates is proposed for robust intra-operative navigation. A geometrical constraint method that utilizes the 3D point cloud is used to compute the instrument pose. Finally, a hemisphere tabulation method is presented for evaluating pose accuracy. Results Experimental results show that our endoscope achieves 3D shape measurement with an accuracy of <730 μm. The mean pose localization error is 15.4 deg (range 10.3 deg to 21.3 deg; standard deviation 3.08 deg) with our ArthroNavi method, which is within the same order of magnitude as that achieved by experienced surgeons using a freehand technique. Conclusions The effectiveness of the proposed ArthroNavi framework has been validated on a phantom femur. This framework may provide a new computer-aided option for arthroscopic OAT surgery.
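The perpendicularity requirement described above reduces to comparing the instrument axis with the local surface normal of the reconstructed point cloud. Below is a minimal Python sketch of that geometric check, assuming a PCA-based normal estimate; the function names and synthetic data are illustrative, not the ArthroNavi implementation:

```python
import numpy as np

def surface_normal(points):
    """Estimate the unit normal of a near-planar 3D patch via PCA:
    the singular vector with the smallest singular value of the
    centered points is orthogonal to the patch."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1] / np.linalg.norm(vt[-1])

def angulation_error_deg(instrument_axis, normal):
    """Angle (degrees) between the instrument axis and the surface normal."""
    cosine = abs(float(np.dot(instrument_axis, normal)))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Synthetic cartilage patch: points on the z = 0 plane with slight noise.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         rng.normal(0.0, 1e-3, 200)])
normal = surface_normal(patch)
# A harvester aligned with +z should be nearly perpendicular to this patch.
error = angulation_error_deg(np.array([0.0, 0.0, 1.0]), normal)
```

The angulation error returned here plays the role of the pose-accuracy figure reported in the abstract (mean 15.4 deg for the navigated instrument).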
Collapse
Affiliation(s)
- Zhongjie Long
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
| | - Yongting Chi
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
| | - Xiaotong Yu
- Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China
| | - Zhouxiang Jiang
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
| | - Dejin Yang
- Beijing Jishuitan Hospital, Capital Medical School, 4th Clinical College of Peking University, Department of Orthopedics, Beijing, China
| |
Collapse
|
11
|
Wang Z, Tao H, Wang J, Zhu Y, Lin J, Fang C, Yang J. Laparoscopic right hemi-hepatectomy plus total caudate lobectomy for perihilar cholangiocarcinoma via anterior approach with augmented reality navigation: a feasibility study. Surg Endosc 2023; 37:8156-8164. [PMID: 37653158 DOI: 10.1007/s00464-023-10397-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/13/2023] [Indexed: 09/02/2023]
Abstract
BACKGROUND Right hemi-hepatectomy plus total caudate lobectomy is the appropriate procedure for type IIIa or partial type II perihilar cholangiocarcinoma (pCCA). However, the laparoscopic implementation of this procedure remains technically challenging, especially hilar vascular dissection and en bloc resection of the total caudate lobe. Augmented reality navigation can enhance the intraoperative visualization of concealed hilar blood vessels and guide the parenchymal transection plane. METHODS Eleven patients who underwent laparoscopic right hemi-hepatectomy plus total caudate lobectomy from January 2021 to January 2023 were enrolled in this study. Augmented reality navigation technology and the anterior approach were used in all operations. Routine operative and short-term postoperative outcomes were assessed to evaluate the feasibility of the novel navigation method. RESULTS Right hemi-hepatectomy plus total caudate lobectomy was successfully performed in all 11 enrolled patients. The mean operation time was 454.5 ± 25.0 min and the mean estimated blood loss was 209.1 ± 56.1 ml. Negative surgical margins were achieved in all patients. The postoperative course of all patients was uneventful, and the mean length of postoperative hospital stay was 10.5 ± 1.2 days. CONCLUSION Laparoscopic right hemi-hepatectomy plus total caudate lobectomy via the anterior approach may be feasible and safe for pCCA with the assistance of augmented reality navigation.
Collapse
Affiliation(s)
- Zhuangxiong Wang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Haisu Tao
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Junfeng Wang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Yilin Zhu
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Jinyu Lin
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
| | - Chihua Fang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China.
| | - Jian Yang
- Department of Hepatobiliary Surgery I, General Surgery Center, Zhujiang Hospital, Southern Medical University, Guangzhou, China.
- Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China.
| |
Collapse
|
12
|
Chen Z, Marzullo A, Alberti D, Lievore E, Fontana M, De Cobelli O, Musi G, Ferrigno G, De Momi E. FRSR: Framework for real-time scene reconstruction in robot-assisted minimally invasive surgery. Comput Biol Med 2023; 163:107121. [PMID: 37311383 DOI: 10.1016/j.compbiomed.2023.107121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 05/12/2023] [Accepted: 05/30/2023] [Indexed: 06/15/2023]
Abstract
3D reconstruction of intra-operative scenes provides the precise position information that underpins various safety-related applications in robot-assisted surgery, such as augmented reality. Herein, a framework integrated into a known surgical system is proposed to enhance the safety of robotic surgery. In this paper, we present a scene reconstruction framework to recover the 3D information of the surgical site in real time. In particular, a lightweight encoder-decoder network is designed to perform disparity estimation, which is the key component of the scene reconstruction framework. The stereo endoscope of the da Vinci Research Kit (dVRK) is adopted to explore the feasibility of the proposed approach, and its low dependence on specific hardware makes migration to other Robot Operating System (ROS)-based robot platforms possible. The framework is evaluated in three different scenarios: a public dataset (3018 pairs of endoscopic images), the scene from the dVRK endoscope in our lab, and a self-made clinical dataset captured from an oncology hospital. Experimental results show that the proposed framework can reconstruct 3D surgical scenes in real time (25 FPS) with high accuracy (2.69 ± 1.48 mm MAE, 5.47 ± 1.34 mm RMSE, and 0.41 ± 0.23 SRE). This demonstrates that our framework can reconstruct intra-operative scenes with both high accuracy and high speed, and the validation on clinical data also shows its potential in surgery. This work advances the state of the art in 3D intra-operative scene reconstruction on medical robot platforms. The clinical dataset has been released to promote the development of scene reconstruction in the medical image community.
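For readers unfamiliar with the metrics above: the MAE/RMSE values are computed on depth maps, and depth follows from an estimated disparity map via the pinhole stereo relation Z = f·B/d. A minimal sketch (the focal length and baseline values below are made up for illustration, not the dVRK calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_mm, eps=1e-6):
    """Pinhole stereo relation: depth Z = f * B / d (mm), guarding d = 0."""
    return focal_px * baseline_mm / np.maximum(disparity, eps)

def depth_errors(pred, gt):
    """MAE and RMSE between predicted and ground-truth depth maps (mm)."""
    diff = pred - gt
    return float(np.abs(diff).mean()), float(np.sqrt((diff ** 2).mean()))

# Toy numbers: 1000 px focal length, 4 mm baseline, uniform 8 px disparity.
depth = disparity_to_depth(np.full((4, 4), 8.0), 1000.0, 4.0)
mae, rmse = depth_errors(depth, np.full((4, 4), 500.0))
```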
Collapse
Affiliation(s)
- Ziyang Chen
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, 20133, Italy.
| | - Aldo Marzullo
- Department of Mathematics and Computer Science, University of Calabria, Rende, 87036, Italy
| | - Davide Alberti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, 20133, Italy
| | - Elena Lievore
- Department of Urology, European Institute of Oncology, IRCCS, Milan, 20141, Italy
| | - Matteo Fontana
- Department of Urology, European Institute of Oncology, IRCCS, Milan, 20141, Italy
| | - Ottavio De Cobelli
- Department of Urology, European Institute of Oncology, IRCCS, Milan, 20141, Italy; Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, University of Milan, Milan, 20122, Italy
| | - Gennaro Musi
- Department of Urology, European Institute of Oncology, IRCCS, Milan, 20141, Italy; Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, University of Milan, Milan, 20122, Italy
| | - Giancarlo Ferrigno
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, 20133, Italy
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, 20133, Italy; Department of Urology, European Institute of Oncology, IRCCS, Milan, 20141, Italy
| |
Collapse
|
13
|
Liu Y, Zuo S. Self-supervised monocular depth estimation for gastrointestinal endoscopy. Comput Methods Programs Biomed 2023; 238:107619. [PMID: 37235969 DOI: 10.1016/j.cmpb.2023.107619] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 04/26/2023] [Accepted: 05/18/2023] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVE Gastrointestinal (GI) endoscopy represents a promising tool for GI cancer screening. However, the limited field of view and the uneven skills of endoscopists make it difficult to accurately identify polyps and follow up on precancerous lesions under endoscopy. Estimating depth from GI endoscopic sequences is essential for a series of AI-assisted surgical techniques. Nonetheless, depth estimation for GI endoscopy is a challenging task due to the particularity of the environment and the limitations of available datasets. In this paper, we propose a self-supervised monocular depth estimation method for GI endoscopy. METHODS A depth estimation network and a camera ego-motion estimation network are first constructed to obtain the depth and pose information of the sequence, respectively. The model is then trained in a self-supervised manner by computing the multi-scale structural similarity with L1 norm (MS-SSIM+L1) loss between the target frame and the reconstructed image as part of the training loss. The MS-SSIM+L1 loss preserves high-frequency information and maintains invariance to brightness and color. Our model consists of a U-shaped convolutional network with a dual-attention mechanism, which captures multi-scale contextual information and greatly improves the accuracy of depth estimation. We evaluated our method qualitatively and quantitatively against different state-of-the-art methods. RESULTS AND CONCLUSIONS The experimental results show that our method generalizes well, achieving lower error metrics and higher accuracy metrics on both the UCL and Endoslam datasets. The proposed method has also been validated on clinical GI endoscopy, demonstrating its potential clinical value.
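A simplified illustration of the SSIM+L1 photometric loss mentioned above (single-scale, using global image statistics rather than the windowed multi-scale version the paper uses; the alpha weighting is a common convention in photometric losses and is assumed here):

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM from global image statistics; keeps the structure
    of the SSIM formula without windowing or multi-scale pyramids."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_l1_loss(target, recon, alpha=0.85):
    """Blend structural dissimilarity with an L1 term, as SSIM+L1 losses do."""
    d_ssim = (1.0 - ssim_global(target, recon)) / 2.0
    return alpha * d_ssim + (1.0 - alpha) * np.abs(target - recon).mean()

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
perfect = ssim_l1_loss(img, img)         # exact reconstruction -> ~0
degraded = ssim_l1_loss(img, img + 0.1)  # brightness-shifted reconstruction
```

An exact reconstruction drives the loss to zero, while any photometric drift between target and reconstruction increases it, which is what supervises the depth and pose networks.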
Collapse
Affiliation(s)
- Yuying Liu
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, China
| | - Siyang Zuo
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, China.
| |
Collapse
|
14
|
Zhang X, Ji X, Wang J, Fan Y, Tao C. Renal surface reconstruction and segmentation for image-guided surgical navigation of laparoscopic partial nephrectomy. Biomed Eng Lett 2023; 13:165-174. [PMID: 37124114 PMCID: PMC10130295 DOI: 10.1007/s13534-023-00263-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 12/01/2022] [Accepted: 01/22/2023] [Indexed: 02/04/2023] Open
Abstract
An unpredictable, dynamic surgical environment makes it necessary to measure morphological information of the target tissue in real time for laparoscopic image-guided navigation. Among methods for intraoperative tissue 3D reconstruction, stereo vision has the most potential for clinical development, benefiting from its high reconstruction accuracy and laparoscopy compatibility. However, existing stereo vision methods have difficulty achieving high reconstruction accuracy in real time. Moreover, intraoperative reconstruction results often contain complex background and instrument information that hinders clinical development of image-guided systems. Taking laparoscopic partial nephrectomy (LPN) as the research object, this paper realizes real-time dense reconstruction and extraction of the kidney tissue surface. A center-symmetric Census-based semi-global block matching algorithm is proposed to generate a dense disparity map, and a GPU-based pixel-by-pixel connectivity segmentation mechanism is designed to segment the renal tissue area. Experiments on an in-vitro porcine heart, an in-vivo porcine kidney, and offline clinical LPN data were performed to evaluate the accuracy and effectiveness of our approach. The algorithm achieved a reconstruction accuracy of ± 2 mm at a real-time update rate of 21 fps for an HD image size of 960 × 540, and 91.0% target tissue segmentation accuracy even with surgical instrument occlusions. Experimental results demonstrate that the proposed method can accurately reconstruct and extract the renal surface in real time in LPN, and the measurement results can be used directly by image-guided systems. Our method provides a new way to measure geometric information of the target tissue intraoperatively in laparoscopic surgery. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-023-00263-1.
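The center-symmetric Census transform underlying this kind of matching cost can be sketched as follows. The window size and the Hamming-distance cost are standard choices; this is an illustrative re-implementation, not the paper's GPU code:

```python
import numpy as np

def cs_census(img, win=5):
    """Center-symmetric Census transform: for each pixel, compare window
    positions that are symmetric about the center and pack the results
    into a bit code (12 bits for a 5x5 window)."""
    h, w = img.shape
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy < 0 or (dy == 0 and dx < 0):  # one offset per symmetric pair
                a = pad[r + dy:r + dy + h, r + dx:r + dx + w]
                b = pad[r - dy:r - dy + h, r - dx:r - dx + w]
                code = (code << np.uint64(1)) | (a > b).astype(np.uint64)
    return code

def hamming_cost(c1, c2):
    """Stereo matching cost = Hamming distance between Census codes."""
    x = np.bitwise_xor(c1, c2)
    bits = np.unpackbits(x.view(np.uint8), axis=-1)
    return bits.reshape(*x.shape, -1).sum(-1)

img = np.add.outer(np.arange(6.0), np.arange(6.0))  # toy intensity ramp
codes = cs_census(img)
```

A semi-global matcher would then aggregate `hamming_cost` over candidate disparities along several scan directions; only the per-pixel cost is shown here.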
Collapse
Affiliation(s)
- Xiaohui Zhang
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
| | - Xuquan Ji
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
| | - Junchen Wang
- School of Mechanical Engineering and Automation, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
| | - Yubo Fan
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
| | - Chunjing Tao
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
| |
Collapse
|
15
|
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their vision of the operating field by means of digital imagery, particularly as regards tumors and anatomical structures. While this type of technology is routinely utilized in some specialties, its applications in liver surgery remain limited due to the complexity of modeling organ deformation in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome the difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize and analyze series reporting AR techniques tested in liver surgery, with the aims of establishing a state of the art and outlining perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 corresponding to the following keywords: augmented reality, hepatic surgery, liver and hepatectomy. RESULTS Initially, 102 titles, studies and summaries were preselected. Twenty-eight meeting the inclusion criteria were included, reporting on 183 patients operated with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by time of acquisition and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited due to insufficient anatomical precision. That said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term the application of AR in liver surgery will become more frequent and effective. Its clinical impact, notably in oncology, remains to be assessed.
Collapse
Affiliation(s)
- B Acidi
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
| | - M Ghallab
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France
| | - S Cotin
- Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
| | - E Vibert
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
| | - N Golse
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France.
| |
Collapse
|
16
|
Hou JX, Deng Z, Liu YY, Xu SK, Li ZX, Sun JC, Zhao MY. A Bibliometric Analysis of the Role of 3D Technology in Liver Cancer Resection. World J Surg 2023; 47:1548-1561. [PMID: 36882637 DOI: 10.1007/s00268-023-06950-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/22/2023] [Indexed: 03/09/2023]
Abstract
BACKGROUND Liver cancer resection is an effective but complex treatment for liver cancer, and complex anatomy is one of the reasons for the difficulty of surgery. 3D technology can help surgeons cope with this dilemma. This article presents a bibliometric analysis of the role of 3D technology in liver cancer resection. METHODS (TS = (3D) OR TS = (three-dimensional)) AND (TS = (((hepatic) OR (liver)) AND ((cancer) OR (tumor) OR (neoplasm)))) AND (TS = (excision) OR TS = (resection)) was used as the search strategy for data collection in the Web of Science (WoS) Core Collection. CiteSpace, Carrot2 and Microsoft Office Excel were used for data analysis. RESULTS Three hundred and eighty-eight relevant articles were obtained. Their annual and journal distribution maps were produced. Maps of country/region and institution collaboration, author collaboration, reference co-citations and their clusters, and keyword co-occurrences and their clusters were constructed, and a Carrot2 cluster analysis was performed. CONCLUSIONS There was an overall upward trend in the number of publications. China contributed the most publications, while the USA had the greatest influence. Southern Med Univ was the most influential institution; however, cooperation between institutions still needs to be strengthened. Surgical Endoscopy and Other Interventional Techniques was the journal with the most publications. Couinaud C and Soyer P were the authors with the highest citations and centrality, respectively. "Liver planning software accurately predicts postoperative liver volume and measures early regeneration" was the most influential article. 3D printing, 3D CT and 3D reconstruction may be the mainstream of current research, and augmented reality (AR) may be a future hot spot.
Collapse
Affiliation(s)
- Jia-Xing Hou
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Zhen Deng
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Yan-Yu Liu
- Changsha Central Hospital, University of South China, Changsha, China
| | - Shao-Kang Xu
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Zi-Xin Li
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Ji-Chun Sun
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China.
| | - Ming-Yi Zhao
- Department of Hepatopancreatobiliary Surgery, Department of Pediatrics, The Third Xiangya Hospital of Central South University, Changsha, China.
| |
Collapse
|
17
|
A skeleton context-aware 3D fully convolutional network for abdominal artery segmentation. Int J Comput Assist Radiol Surg 2023; 18:461-472. [PMID: 36273078 DOI: 10.1007/s11548-022-02767-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2022] [Accepted: 09/26/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE This paper proposes a deep learning-based method for abdominal artery segmentation. Blood vessel structure information is essential to diagnosis and treatment, and accurate blood vessel segmentation is critical to preoperative planning. Although deep learning-based methods perform well on large organs, segmenting small structures such as blood vessels is challenging due to their complicated branching structures and positions. We propose a 3D deep learning network designed from a skeleton context-aware perspective to improve segmentation accuracy, together with a novel 3D patch generation method that strengthens the structural diversity of the training data set. METHOD The proposed method segments abdominal arteries from an abdominal computed tomography (CT) volume using a 3D fully convolutional network (FCN). We add two auxiliary tasks to the network to extract the skeleton context of the abdominal arteries. In addition, our skeleton-based patch generation (SBPG) method further enables the FCN to segment small arteries: SBPG generates a 3D patch from a CT volume by leveraging artery skeleton information. These methods improve the segmentation accuracy for small arteries. RESULTS We used 20 abdominal CT volumes to evaluate the proposed method. The experimental results showed that our method outperformed previous segmentation accuracies, with an averaged precision rate of 95.5%, recall rate of 91.0%, and F-measure of 93.2%. Compared to a baseline method, our method improved the averaged recall rate by 1.5% and the averaged F-measure by 0.7%. CONCLUSIONS We present a skeleton context-aware 3D FCN that segments abdominal arteries from an abdominal CT volume, together with a 3D patch generation method. Our fully automated method segmented most of the abdominal artery regions and produced competitive segmentation performance compared to previous methods.
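The precision, recall and F-measure figures reported above can be computed voxel-wise from binary masks; a minimal sketch with toy data:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Voxel-wise precision, recall and F-measure for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()           # true-positive voxels
    precision = tp / max(pred.sum(), 1)           # TP / predicted positives
    recall = tp / max(gt.sum(), 1)                # TP / actual positives
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return float(precision), float(recall), float(f_measure)

# Toy masks: 3 predicted foreground voxels, 3 true ones, 2 overlapping.
pred = np.array([1, 1, 1, 0, 0, 0])
gt = np.array([1, 1, 0, 1, 0, 0])
p, r, f = segmentation_metrics(pred, gt)
```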
Collapse
|
18
|
Perioperative and Disease-Free Survival Outcomes after Hepatectomy for Centrally Located Hepatocellular Carcinoma Guided by Augmented Reality and Indocyanine Green Fluorescence Imaging: A Single-Center Experience. J Am Coll Surg 2023; 236:328-337. [PMID: 36648260 DOI: 10.1097/xcs.0000000000000472] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
BACKGROUND Laparoscopic hepatectomy for centrally located hepatocellular carcinoma is challenging to perform. Augmented reality navigation (ARN) and fluorescence imaging are currently safe and reproducible techniques for hepatectomy, but efficacy results for centrally located hepatocellular carcinoma have not been reported. This study aimed to evaluate the efficacy of an ARN system combined with fluorescence imaging (ARN-FI) in laparoscopic hepatectomy for centrally located hepatocellular carcinoma. STUDY DESIGN This was a post hoc analysis of an original nonrandomized clinical trial designed to evaluate the feasibility and efficacy of ARN-FI for laparoscopic liver resection. A total of 76 patients were consecutively enrolled from June 2018 to June 2021, of whom 42 underwent laparoscopic hepatectomy using ARN-FI (ARN-FI group) and the other 34 did not use ARN-FI guidance (non-ARN-FI group). Perioperative outcomes and disease-free survival were compared between the 2 groups. RESULTS Compared with the non-ARN-FI group, the ARN-FI group had less intraoperative blood loss (median 275 vs 300 mL, p = 0.013), a lower intraoperative transfusion rate (14.3% vs 64.7%, p < 0.01), a shorter postoperative hospital stay (median 8 vs 9 days, p = 0.005), and a lower postoperative complication rate (35.7% vs 61.8%, p = 0.024). There were no deaths during the perioperative or follow-up periods. There was no significant difference in overall disease-free survival between the 2 groups (p = 0.16). CONCLUSIONS The ARN system and fluorescence imaging may be of value in improving the success rate of surgery, reducing postoperative complications, accelerating postoperative recovery, and shortening postoperative hospital stay.
Collapse
|
19
|
State of the Art and Future Prospects of Virtual and Augmented Reality in Veterinary Medicine: A Systematic Review. Animals (Basel) 2022; 12:ani12243517. [PMID: 36552437 PMCID: PMC9774422 DOI: 10.3390/ani12243517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/06/2022] [Accepted: 12/12/2022] [Indexed: 12/15/2022] Open
Abstract
Virtual reality and augmented reality are new but rapidly expanding topics in medicine. In virtual reality, users are immersed in a three-dimensional environment, whereas in augmented reality, computer-generated images are superimposed on the real world. Despite advances in human medicine, the number of published articles in veterinary medicine is low. These cutting-edge technologies can be used in combination with existing methods in veterinary medicine to achieve diagnostic/therapeutic and educational goals. The purpose of our review was to evaluate studies for their use of virtual reality and augmented reality in veterinary medicine, as well as human medicine with animal trials, to report results and the state of the art. We collected all of the articles we included in our review by screening the Scopus, PubMed, and Web of Science databases. Of the 24 included studies, 11 and 13 articles belonged to virtual reality and augmented reality, respectively. Based on these articles, we determined that using these technologies has a positive impact on the scientific output of students and residents, can reduce training costs, and can be used in training/educational programs. Furthermore, using these tools can promote ethical standards. We reported the absence of standard operation protocols and equipment costs as study limitations.
Collapse
|
20
|
Minimally invasive and invasive liver surgery based on augmented reality training: a review of the literature. J Robot Surg 2022; 17:753-763. [DOI: 10.1007/s11701-022-01499-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 11/14/2022] [Indexed: 11/29/2022]
|
21
|
A Scoping Review of Deep Learning in Cancer Nursing Combined with Augmented Reality: the Era of Intelligent Nursing is Coming. Asia Pac J Oncol Nurs 2022; 9:100135. [PMID: 36276884 PMCID: PMC9579790 DOI: 10.1016/j.apjon.2022.100135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Accepted: 08/22/2022] [Indexed: 11/30/2022] Open
Abstract
Artificial intelligence has been advancing rapidly in the field of medicine. As a new research hotspot within artificial intelligence, deep learning (DL) has been widely applied to cancer risk assessment, symptom recognition, and cancer detection. Applying DL to the care of cancer patients can therefore address nursing issues of time and energy consumption, limited accuracy, and low efficiency. In addition, augmented reality (AR) has great navigation potential through combining computer-generated virtual elements with the real world. Thus, DL + AR may give patients with cancer a brand-new model of nursing care that is more intelligent, mobile, and adapted to the information age than traditional nursing. With the advent of the era of intelligent nursing, future nursing models can draw on the DL + AR model not only to meet the needs of patients with cancer but also to reduce nursing workload, save healthcare resources, and improve work efficiency, the quality of nursing care, and the quality of life of cancer patients.
Collapse
|
22
|
Lee JJ, Klepcha M, Wong M, Dang PN, Sadrameli SS, Britz GW. The First Pilot Study of an Interactive, 360° Augmented Reality Visualization Platform for Neurosurgical Patient Education: A Case Series. Oper Neurosurg (Hagerstown) 2022; 23:53-59. [DOI: 10.1227/ons.0000000000000186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 01/09/2022] [Indexed: 11/19/2022] Open
|
23
|
Li C, Zheng Y, Yuan Y, Li H. Augmented reality navigation-guided pulmonary nodule localization in a canine model. Transl Lung Cancer Res 2022; 10:4152-4160. [PMID: 35004246 PMCID: PMC8674612 DOI: 10.21037/tlcr-21-618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 10/20/2021] [Indexed: 11/25/2022]
Abstract
Background The current intraoperative pulmonary nodule localization techniques require specific medical equipment or skillful operators, which limits their widespread application. Here, we present an innovative nodule localization technique in a canine lung model using augmented reality (AR) navigation. Methods Peripheral pulmonary lesions were artificially created in a canine model. A preoperative chest computed tomography scan was performed for each animal. The acquired computed tomography images were analyzed, and the established intraoperative localization plan was uploaded into HoloLens (a head-mounted AR device). Under general anesthesia, lung localization markers were implanted in each canine, guided by the procedure plan displayed by HoloLens. All artificial lesions and markers were removed by video-assisted wedge resection or lobectomy in a single operation. Results Since June 2019, 12 peripheral pulmonary lesions were artificially created in 4 canine models. All lung localization markers were precisely implanted, with a median registration and implantation time of 6 minutes (range, 2–15 minutes). The average distance between pulmonary lesions and markers was 1.9±1.7 mm, based on computed tomography examination after localization. No severe pneumothorax was observed after marker implantation. After an average implantation period of 16.5 days, no marker displacement was observed. Conclusions The AR navigation-guided pulmonary nodule localization technique was safe and effective in a canine model. The validity and feasibility of using this technology in patients will be examined further (NCT04211051).
Affiliation(s)
- Chengqiang Li
- Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yuyan Zheng
- Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ye Yuan
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Hecheng Li
- Department of Thoracic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China

24
Bardozzo F, Collins T, Forgione A, Hostettler A, Tagliaferri R. StaSiS-Net: a stacked and siamese disparity estimation network for depth reconstruction in modern 3D laparoscopy. Med Image Anal 2022; 77:102380. [DOI: 10.1016/j.media.2022.102380]
25
Luo H, Wang C, Duan X, Liu H, Wang P, Hu Q, Jia F. Unsupervised learning of depth estimation from imperfect rectified stereo laparoscopic images. Comput Biol Med 2022; 140:105109. [PMID: 34891097] [DOI: 10.1016/j.compbiomed.2021.105109]
Abstract
BACKGROUND Learning-based methods have achieved remarkable performance on depth estimation. However, most self-supervised and unsupervised learning methods are premised on rigorous, geometrically aligned stereo rectification, and their performance degrades when the rectification is inaccurate. We therefore explore an approach for unsupervised depth estimation from stereo images that can handle imperfect camera parameters. METHODS We propose an unsupervised deep convolutional network that takes rectified stereo image pairs as input and outputs corresponding dense disparity maps. First, a new vertical correction module is designed to predict a correction map that compensates for the imperfect geometric alignment. Second, the left and right images, reconstructed from the input pair using the corresponding disparities and vertical correction maps, are treated as the outputs of the generative term of a generative adversarial network (GAN). The discriminator term of the GAN then distinguishes the reconstructed images from the original inputs, forcing the generator to output increasingly realistic images. In addition, a residual mask is introduced to exclude pixels that conflict with the appearance of the original image from the loss calculation. RESULTS The proposed model is validated on the publicly available Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED) dataset, with an average MAE of 3.054 mm. CONCLUSION Our model can effectively handle imperfectly rectified stereo images for depth estimation.
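As background for readers unfamiliar with this family of methods (entries 24–25): the core self-supervision signal in unsupervised stereo depth estimation is view reconstruction — warp one rectified view into the other using the predicted disparity and penalize the photometric difference. The sketch below is a generic, deliberately simplified illustration of that idea (nearest-neighbour sampling, numpy only), not the cited authors' code.

```python
import numpy as np

def warp_right_to_left(right, disparity):
    # For rectified stereo, a left-image pixel at column x corresponds to the
    # right-image pixel at column x - d(x, y). Nearest-neighbour sampling here;
    # real networks use differentiable bilinear sampling.
    h, w = right.shape
    src_cols = np.arange(w)[None, :] - np.round(disparity).astype(int)
    src_cols = np.clip(src_cols, 0, w - 1)  # clamp at image border
    return right[np.arange(h)[:, None], src_cols]

def photometric_loss(left, left_reconstructed):
    # L1 appearance difference: the training signal that replaces
    # ground-truth depth labels in unsupervised learning.
    return float(np.abs(left - left_reconstructed).mean())

# Toy example: a horizontal ramp image viewed at a constant disparity of 2 px
right = np.tile(np.arange(8.0), (4, 1))
disparity = np.full((4, 8), 2.0)
left = warp_right_to_left(right, disparity)  # synthetic "ground-truth" left view
print(photometric_loss(left, warp_right_to_left(right, disparity)))  # 0.0
```

With the correct disparity the reconstruction matches the target view and the loss vanishes; a wrong disparity map raises the loss, which is what drives training.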
Affiliation(s)
- Huoling Luo
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Congcong Wang
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Xingguang Duan
- Advanced Innovation Centre for Intelligent Robots & Systems, Beijing Institute of Technology, Beijing, China
- Hao Liu
- State Key Lab for Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Ping Wang
- Department of Hepatobiliary Surgery, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Qingmao Hu
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Fucang Jia
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China; Pazhou Lab, Guangzhou, China

26
Giannone F, Felli E, Cherkaoui Z, Mascagni P, Pessaux P. Augmented Reality and Image-Guided Robotic Liver Surgery. Cancers (Basel) 2021; 13:6268. [PMID: 34944887] [PMCID: PMC8699460] [DOI: 10.3390/cancers13246268]
Abstract
Artificial intelligence can make surgical resection easier and safer while also improving oncological results. Robotic systems integrate well with these technologies, and the benefit appears to be mutual. In liver surgery, such systems help surgeons localize tumors and improve surgical results through well-defined preoperative planning or enhanced intraoperative detection. Furthermore, they can compensate for the absence of tactile feedback and help in recognizing intrahepatic biliary or vascular structures during parenchymal transection. Some of these tools, such as indocyanine green fluorescence and ultrasound-guided resection, are well known and already widely used in open and laparoscopic hepatectomy, whereas others, such as augmented reality, remain far from standardized because of their high complexity and cost. In this paper, we review the literature on the use of artificial intelligence systems in robotic liver resection, describing their practical applications and their weaknesses.
Affiliation(s)
- Fabio Giannone
- Department of Visceral and Digestive Surgery, University Hospital of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- University Hospital Institute (IHU), Institute of Image-Guided Surgery, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Emanuele Felli
- Department of Visceral and Digestive Surgery, University Hospital of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- University Hospital Institute (IHU), Institute of Image-Guided Surgery, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Zineb Cherkaoui
- Department of Visceral and Digestive Surgery, University Hospital of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Pietro Mascagni
- University Hospital Institute (IHU), Institute of Image-Guided Surgery, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Patrick Pessaux
- Department of Visceral and Digestive Surgery, University Hospital of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Institute of Viral and Liver Disease, Inserm U1110, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- University Hospital Institute (IHU), Institute of Image-Guided Surgery, University of Strasbourg, 1 Place de l’Hôpital, 67100 Strasbourg, France
- Correspondence: Tel.: +33-369-550-552

27
Wendler T, van Leeuwen FWB, Navab N, van Oosterom MN. How molecular imaging will enable robotic precision surgery: The role of artificial intelligence, augmented reality, and navigation. Eur J Nucl Med Mol Imaging 2021; 48:4201-4224. [PMID: 34185136] [PMCID: PMC8566413] [DOI: 10.1007/s00259-021-05445-6]
Abstract
Molecular imaging is one of the pillars of precision surgery. Its applications range from early diagnostics to therapy planning, execution, and the accurate assessment of outcomes. In particular, molecular imaging solutions are in high demand in minimally invasive surgical strategies, such as the rapidly growing field of robotic surgery. This review aims to connect the molecular imaging and nuclear medicine community to the rapidly expanding armory of surgical medical devices. Such devices entail technologies ranging from artificial intelligence and computer-aided visualization (software) to innovative molecular imaging modalities and surgical navigation (hardware). We discuss these technologies based on their role at different steps of the surgical workflow: from surgical decision-making and planning, through target localization and excision guidance, to (back-table) surgical verification. This provides a glimpse of how innovations from these technology fields can realize an exciting future for the molecular imaging and surgery communities.
Affiliation(s)
- Thomas Wendler
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Fijs W. B. van Leeuwen
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands
- Orsi Academy, Melle, Belgium
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
- Chair for Computer Aided Medical Procedures, Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Matthias N. van Oosterom
- Department of Radiology, Interventional Molecular Imaging Laboratory, Leiden University Medical Center, Leiden, The Netherlands
- Department of Urology, The Netherlands Cancer Institute - Antonie van Leeuwenhoek Hospital, Amsterdam, The Netherlands

28
Adballah M, Espinel Y, Calvet L, Pereira B, Le Roy B, Bartoli A, Buc E. Augmented reality in laparoscopic liver resection evaluated on an ex-vivo animal model with pseudo-tumours. Surg Endosc 2022; 36:833-843. [PMID: 34734305] [DOI: 10.1007/s00464-021-08798-z]
Abstract
BACKGROUND The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and to compare it with standard ultrasonography (US). MATERIALS AND METHODS Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using a medical image segmentation software (MITK). The livers were placed in a pelvi-trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins, as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group), or both US and AR (n = 30, ARUS group). R0 resection and maximal, minimal and mean margins were assessed after histopathologic examination, adjusted for tumour depth and for a liver zone-wise difficulty level. RESULTS The minimal margins did not differ between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the AR + ARUS groups (50% vs. 21%, p = 0.019). CONCLUSION Laparoscopic liver resection using AR appears to provide more accurate resection margins with less variability than gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.
Affiliation(s)
- Mourad Adballah
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003, Clermont-Ferrand Cedex, France
- Yamid Espinel
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Lilian Calvet
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Bruno Pereira
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Bertrand Le Roy
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Oncologic Surgery, University Hospital Nord St-Etienne, Avenue Albert Raimond, 42270, Saint-Priest en Jarez, France
- Adrien Bartoli
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Emmanuel Buc
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003, Clermont-Ferrand Cedex, France

29
Augmented and virtual reality in spine surgery: current applications and future potentials. Spine J 2021; 21:1617-1625. [PMID: 33774210] [DOI: 10.1016/j.spinee.2021.03.018]
Abstract
BACKGROUND CONTEXT The field of artificial intelligence (AI) is rapidly advancing, especially with recent improvements in deep learning (DL) techniques. Augmented reality (AR) and virtual reality (VR) are finding their place in healthcare, and spine surgery is no exception. The unique capabilities and advantages of AR and VR devices include their low cost, flexible integration with other technologies, user-friendly features and application in navigation systems, which make them beneficial across different aspects of spine surgery. Despite the use of AR for pedicle screw placement, targeted cervical foraminotomy, bone biopsy, osteotomy planning, and percutaneous intervention, the current applications of AR and VR in spine surgery remain limited. PURPOSE The primary goal of this study was to provide spine surgeons and clinical researchers with general information about the current applications, future potential, and accessibility of AR and VR systems in spine surgery. STUDY DESIGN/SETTING We reviewed the titles of more than 250 journal papers from Google Scholar and PubMed using the search terms augmented reality, virtual reality, spine surgery, and orthopaedic, from which 89 related papers were selected for abstract review. Finally, the full texts of 67 papers were analyzed and reviewed. METHODS The papers were divided into four groups: technological papers, applications in surgery, applications in spine education and training, and general applications in orthopaedics. A team of two reviewers performed the paper reviews and a thorough web search to ensure that the most updated state of the art in each of the four groups was captured in the review. RESULTS In this review we discuss the current state of the art in AR and VR hardware, their preoperative applications and their surgical applications in spine surgery. Finally, we discuss the future potential of AR and VR and their integration with AI, robotic surgery, gaming, and wearables. CONCLUSIONS AR and VR are promising technologies that will soon become part of the standard of care in spine surgery.
30
Wang Y, Cao D, Chen SL, Li YM, Zheng YW, Ohkohchi N. Current trends in three-dimensional visualization and real-time navigation as well as robot-assisted technologies in hepatobiliary surgery. World J Gastrointest Surg 2021; 13:904-922. [PMID: 34621469] [PMCID: PMC8462083] [DOI: 10.4240/wjgs.v13.i9.904]
Abstract
With the continuous development of digital medicine, minimally invasive precision and safety have become the primary trends in hepatobiliary surgery. Because of the specificity and complexity of hepatobiliary surgery, traditional preoperative imaging techniques such as computed tomography and magnetic resonance imaging cannot meet the need to identify fine anatomical regions. Imaging-based three-dimensional (3D) reconstruction, virtual simulation of surgery and 3D printing optimize the surgical plan through preoperative assessment, improving the controllability and safety of intraoperative manipulation. In difficult-to-reach areas of the posterior and superior liver, assistive robots reproduce the surgeon's natural movements with stable cameras, reducing natural tremor. Electromagnetic navigation in abdominal surgery addresses the reliance of conventional surgery on direct visual observation or preoperative image assessment. We summarize and compare these recent trends in digital medical solutions for the future development and refinement of digital medicine in hepatobiliary surgery.
Affiliation(s)
- Yun Wang
- Institute of Regenerative Medicine, and Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Di Cao
- Institute of Regenerative Medicine, and Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Si-Lin Chen
- Institute of Regenerative Medicine, and Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Yu-Mei Li
- Institute of Regenerative Medicine, and Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Yun-Wen Zheng
- Institute of Regenerative Medicine, and Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang 212001, Jiangsu Province, China
- Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba 305-8575, Ibaraki, Japan
- Guangdong Provincial Key Laboratory of Large Animal Models for Biomedicine, and School of Biotechnology and Health Sciences, Wuyi University, Jiangmen 529020, Guangdong Province, China
- School of Medicine, Yokohama City University, Yokohama 234-0006, Kanagawa, Japan
- Nobuhiro Ohkohchi
- Department of Gastrointestinal and Hepato-Biliary-Pancreatic Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba 305-8575, Ibaraki, Japan

31
Ivashchenko OV, Kuhlmann KFD, van Veen R, Pouw B, Kok NFM, Hoetjes NJ, Smit JN, Klompenhouwer EG, Nijkamp J, Ruers TJM. CBCT-based navigation system for open liver surgery: Accurate guidance toward mobile and deformable targets with a semi-rigid organ approximation and electromagnetic tracking of the liver. Med Phys 2021; 48:2145-2159. [PMID: 33666243] [PMCID: PMC8251891] [DOI: 10.1002/mp.14825]
Abstract
Purpose A surgical navigation system that provides guidance throughout the surgery can facilitate safer and more radical liver resections, but such a system should also be able to handle organ motion. This work investigates the accuracy of intraoperative surgical guidance during open liver resection, with a semi-rigid organ approximation and electromagnetic tracking of the target area. Methods The suggested navigation technique incorporates a preoperative 3D liver model based on a diagnostic 4D MRI scan, intraoperative contrast-enhanced CBCT imaging, and electromagnetic (EM) tracking of the liver surface, as well as of the surgical instruments, by means of six degrees-of-freedom micro-EM sensors. Results The system was evaluated during surgeries on 35 patients and provided an accurate and intuitive real-time visualization of liver anatomy and tumor location, confirmed by intraoperative checks on visible anatomical landmarks. Based on accuracy measurements verified by intraoperative CBCT, the system's average accuracy was 4.0 ± 3.0 mm, while the total surgical delay due to navigation stayed below 20 min. Conclusions The electromagnetic navigation system for open liver surgery developed in this work allows accurate localization of liver lesions and of critical anatomical structures surrounding the resection area, even when the liver is manipulated. However, further clinical integration of the method requires shortening the guidance-related surgical delay, which can be achieved by shifting to faster intraoperative imaging such as ultrasound. Our approach is adaptable to navigation on other mobile and deformable organs and may therefore benefit various clinical applications.
Affiliation(s)
- Oleksandra V Ivashchenko
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Koert F D Kuhlmann
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Ruben van Veen
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Bas Pouw
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Niels F M Kok
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Nikie J Hoetjes
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Jasper N Smit
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Elisabeth G Klompenhouwer
- Department of Radiology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Jasper Nijkamp
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Theodoor J M Ruers
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; Faculty of Science and Technology (TNW), University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands

32
Application of Real-Time Augmented Reality Laparoscopic Navigation in Splenectomy for Massive Splenomegaly. World J Surg 2021; 45:2108-2115. [PMID: 33770240] [DOI: 10.1007/s00268-021-06082-8]
Abstract
OBJECTIVES To evaluate the clinical impact and technical feasibility of an augmented reality laparoscopic navigation (ARLN) system in laparoscopic splenectomy for massive splenomegaly. METHODS Seventeen consecutive patients who underwent laparoscopic splenectomy with ARLN guidance (ARLN group) and 26 patients without ARLN guidance (non-ARLN group) between January 2018 and April 2020 were enrolled. Propensity score matching (PSM) analysis was performed between the patients with and without ARLN guidance at a ratio of 1:1. RESULTS Mean intraoperative blood loss was significantly lower in the ARLN group than in the non-ARLN group (306.6 ml vs. 462.6 ml, p = 0.047). All patients in the ARLN group achieved successful splenic artery dissection, compared with 12 patients in the non-ARLN group (p = 0.044). Postoperative hospital stay was significantly shorter in the ARLN group (3.8 days vs. 4.5 days, p = 0.040). CONCLUSIONS ARLN can provide feasible and accurate intraoperative image guidance and could be helpful in laparoscopic splenectomy for massive splenomegaly.
33
Zhang W, Zhu W, Yang J, Xiang N, Zeng N, Hu H, Jia F, Fang C. Augmented Reality Navigation for Stereoscopic Laparoscopic Anatomical Hepatectomy of Primary Liver Cancer: Preliminary Experience. Front Oncol 2021; 11:663236. [PMID: 33842378] [PMCID: PMC8027474] [DOI: 10.3389/fonc.2021.663236]
Abstract
Background Accurate determination of intrahepatic anatomy remains challenging in laparoscopic anatomical hepatectomy (LAH). Laparoscopic augmented reality navigation (LARN) is expected to facilitate LAH of primary liver cancer (PLC) by identifying the exact location of tumors and vessels. This study evaluated the safety and effectiveness of our independently developed LARN system in LAH of PLC. Methods From May 2018 to July 2020, the study included 85 PLC patients who underwent three-dimensional (3D) LAH. According to whether LARN was performed during the operation, the patients were divided into the intraoperative navigation (IN) group and the non-intraoperative navigation (NIN) group. We compared preoperative data, perioperative results and postoperative complications between the two groups, and we describe our preliminary experience with this novel technology in LAH. Results There were 44 and 41 PLC patients in the IN and NIN groups, respectively. No significant differences were found in preoperative characteristics or in any of the resection-related complications between the two groups (all P > 0.05). Compared with the NIN group, the IN group had significantly less operative bleeding (P = 0.002), a lower delta Hb% (P = 0.039), a lower blood transfusion rate (P < 0.001), and a reduced postoperative hospital stay (P = 0.003). In the IN group, successful fusion of the simulated surgical plan with the operative scene helped to determine the extent of resection. Conclusions LARN contributed to the identification of important anatomical structures during LAH of PLC. It reduced vascular injury and accelerated postoperative recovery, showing promising prospects for application in liver surgery.
Affiliation(s)
- Weiqi Zhang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Wen Zhu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Jian Yang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Nan Xiang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Ning Zeng
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Haoyu Hu
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Fucang Jia
- Research Laboratory for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China

34
Pelanis E, Teatini A, Eigl B, Regensburger A, Alzaga A, Kumar RP, Rudolph T, Aghayan DL, Riediger C, Kvarnström N, Elle OJ, Edwin B. Evaluation of a novel navigation platform for laparoscopic liver surgery with organ deformation compensation using injected fiducials. Med Image Anal 2021; 69:101946. [PMID: 33454603] [DOI: 10.1016/j.media.2020.101946]
Abstract
In laparoscopic liver resection, surgeons conventionally rely on anatomical landmarks detected through a laparoscope, preoperative volumetric images and laparoscopic ultrasound to compensate for the challenges of minimally invasive access. Image guidance using optical tracking and registration procedures is a promising tool, although often undermined by its inaccuracy. This study evaluates a novel surgical navigation solution that can compensate for liver deformations using an accurate and effective registration method. The proposed solution relies on a robotic C-arm to perform registration to preoperative CT/MRI image data and allows for intraoperative updates during resection using fluoroscopic images. Navigation is offered both as a 3D liver model with real-time instrument visualization and as an augmented reality overlay on the laparoscope camera view. Testing was conducted in a pre-clinical trial that included four porcine models. Accuracy of the navigation system was measured with two evaluation methods: liver-surface fiducial reprojection and a comparison between planned and navigated resection margins. The target registration error in the fiducial evaluation shows that accuracy in the vicinity of the lesion was 3.78 ± 1.89 mm. Resection margin evaluations yielded an overall median accuracy of 4.44 mm, with a maximum error of 9.75 mm over the four subjects. The presented solution is accurate enough to be potentially clinically beneficial for surgical guidance in laparoscopic liver surgery.
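Several of the navigation studies in this list (entries 31 and 34) report accuracy as a fiducial-based target registration error (TRE): the point-wise Euclidean distance between positions reported by the navigation system and reference positions, summarized as mean ± SD in millimetres. A minimal illustrative computation (hypothetical fiducial coordinates, not data from any cited system):

```python
import numpy as np

def target_registration_error(navigated, ground_truth):
    # Point-wise Euclidean distances (mm) between navigated and reference
    # fiducial positions, reported as (mean, standard deviation).
    d = np.linalg.norm(np.asarray(navigated) - np.asarray(ground_truth), axis=1)
    return d.mean(), d.std()

# Hypothetical fiducials, each displaced by 3 mm along a single axis
navigated    = [[0.0, 0.0, 3.0], [10.0, 3.0, 0.0], [3.0, 20.0, 0.0]]
ground_truth = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 20.0, 0.0]]
mean_tre, sd_tre = target_registration_error(navigated, ground_truth)
print(f"TRE: {mean_tre:.2f} ± {sd_tre:.2f} mm")  # TRE: 3.00 ± 0.00 mm
```

Note that TRE is measured at clinically relevant targets (e.g., near the lesion), which is why it is reported separately from fiducial registration error at the registration points themselves.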
Affiliation(s)
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Department of Informatics, University of Oslo 1072, Oslo, Norway
- Rahul Prasanna Kumar
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway
- Davit L Aghayan
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University, 0025 Yerevan, Armenia
- Carina Riediger
- University Hospital Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Department of Informatics, University of Oslo 1072, Oslo, Norway
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway; Department of Hepato-Pancreatic-Biliary Surgery 0424, Oslo University Hospital, Oslo, Norway
35
Pan J, Liu W, Ge P, Li F, Shi W, Jia L, Qin H. Real-time segmentation and tracking of excised corneal contour by deep neural networks for DALK surgical navigation. Comput Methods Programs Biomed 2020; 197:105679. PMID: 32814253; DOI: 10.1016/j.cmpb.2020.105679.
Abstract
OBJECTIVE Corneal disease is one of the leading causes of blindness worldwide, and deep anterior lamellar keratoplasty (DALK) is a widely applied technique for corneal transplantation. However, the position of the stitch points strongly influences the success rate of this surgery, requiring accurate control and manipulation of the surgical instruments. METHODS In this paper, we present a deep learning framework for augmented reality (AR) based surgical navigation to guide suturing in DALK. It robustly tracks the excised corneal contour through semantic segmentation and reconstruction of occluded regions. We propose a novel optical flow inpainting network to recover the motion missing due to occlusion. Occluded regions are detected by weakly supervised segmentation of the surgical instruments and reconstructed by key-frame warping along the completed optical flow. We then introduce two loss functions to adapt the inpainting network to the optical flow space. RESULTS Our techniques are tested and evaluated on real surgery videos from Shandong Eye Hospital in China. We compare our approaches with other typical methods for corneal contour segmentation, optical flow inpainting, and occlusion reconstruction. Tracking accuracy reaches 99.2% on average, and PSNR reaches 25.52 for the reconstruction of occluded frames. CONCLUSION In both the experimental evaluations and a user study, the qualitative and quantitative results indicate that our techniques achieve accurate detection and tracking of the corneal contour under complex disturbance in real-time surgical scenes. Our prototype AR navigation system could be highly useful in clinical practice.
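The PSNR figure quoted above scores the reconstructed occluded frames against reference frames. As a minimal sketch of the metric itself (operating on flat 8-bit pixel lists; the function name and inputs are illustrative, not taken from the paper's code):

```python
import math

def psnr(reconstructed, reference, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reconstructed frame
    and its reference, both given as flat lists of pixel intensities.
    Higher values indicate a more faithful reconstruction."""
    mse = sum((a - b) ** 2 for a, b in zip(reconstructed, reference)) / len(reference)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A PSNR of 25.52, as reported, corresponds to a mean squared pixel error of roughly 182 on the 0-255 scale.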
Affiliation(s)
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; Peng Cheng Lab, Shenzhen, China; Faculty of Media and Communication, Bournemouth University, Bournemouth, UK
- Weimin Liu
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Pu Ge
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Fanghong Li
- Shenzhen Kechuang GuangTai Technology Co., Ltd., Shenzhen, China
- Weiyun Shi
- Shandong Eye Institute, Shandong Eye Hospital, Jinan, China
- Liyun Jia
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Hong Qin
- Department of Computer Science, Stony Brook University, New York, US
36
Wisotzky EL, Rosenthal JC, Wege U, Hilsmann A, Eisert P, Uecker FC. Surgical Guidance for Removal of Cholesteatoma Using a Multispectral 3D-Endoscope. Sensors (Basel) 2020; 20:E5334. PMID: 32957675; PMCID: PMC7570528; DOI: 10.3390/s20185334.
Abstract
We develop a stereo-multispectral endoscopic prototype with a filter wheel for surgical guidance in the removal of cholesteatoma tissue from the middle ear. Cholesteatoma is a destructive, proliferating tissue whose only treatment is surgery. Removal is a very demanding task, even for experienced surgeons, because it is difficult to distinguish between bone and cholesteatoma. Moreover, cholesteatoma can recur if any tissue particles are left behind, leading to undesirable follow-up operations. We therefore propose an image-based method that combines multispectral tissue classification and 3D reconstruction to identify all parts of the removed tissue and determine their metric dimensions intraoperatively. The designed multispectral filter-wheel 3D-endoscope prototype can switch between narrow-band spectral and broad-band white illumination, and is technically evaluated in terms of its optical system properties. It is further tested and evaluated on three patients. The wavelengths 400 nm and 420 nm are identified as most suitable for the differentiation task. Stereoscopic image acquisition allows accurate 3D surface reconstruction of the enhanced image information. The first results are promising: the cholesteatoma can be easily highlighted, correctly identified, and visualized as a true-to-scale 3D model showing the patient-specific anatomy.
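The abstract identifies 400 nm and 420 nm as the most discriminative bands but does not specify the classification rule. As a purely illustrative sketch of per-pixel multispectral classification, a hypothetical two-band ratio threshold might look like this (the function, ratio rule, and threshold are assumptions, not the paper's classifier):

```python
def flag_candidate_tissue(band_400, band_420, ratio_threshold=1.1):
    """Toy per-pixel rule: flag a pixel as candidate cholesteatoma when
    its 400 nm / 420 nm reflectance ratio exceeds a threshold.
    Inputs are flat lists of reflectance values in [0, 1]; pixels with
    zero 420 nm reflectance are never flagged. Entirely hypothetical."""
    return [b > 0 and (a / b) > ratio_threshold
            for a, b in zip(band_400, band_420)]
```

A real classifier of this kind would be trained on labelled spectra rather than using a fixed hand-picked threshold.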
Affiliation(s)
- Eric L. Wisotzky
- Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany; Department of Visual Computing, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Jean-Claude Rosenthal
- Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany
- Ulla Wege
- Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany
- Anna Hilsmann
- Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany
- Peter Eisert
- Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany; Department of Visual Computing, Humboldt Universität zu Berlin, 10117 Berlin, Germany
- Florian C. Uecker
- Department of Otorhinolaryngology, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany
37
38
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. PMID: 32780240; PMCID: PMC7524854; DOI: 10.1007/s00464-020-07807-x.
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was successful registration as determined by the operating surgeon. Secondary endpoints were system usability, assessed by a surgeon questionnaire, and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures; implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive regarding IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation in a larger patient cohort is required to confirm these findings.
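The semi-automatic registration described above matches a reconstructed stereo surface to the preoperative model via iterative closest point (ICP) matching. A translation-only toy version conveys the core loop; real rigid ICP also estimates a rotation, typically via an SVD step, and runs on thousands of 3D surface points:

```python
import math

def icp_translation(source, target, iterations=10):
    """Translation-only iterative closest point: repeatedly pair each
    source point with its nearest target point, then shift all source
    points by the mean residual of those pairs. Rotation is omitted
    for brevity; this is a sketch, not a clinical implementation."""
    src = [list(p) for p in source]
    dim = len(src[0])
    for _ in range(iterations):
        # 1. Closest-point correspondences
        matches = [min(target, key=lambda q, p=p: math.dist(p, q)) for p in src]
        # 2. Mean residual gives the translation update
        shift = [sum(q[d] - p[d] for p, q in zip(src, matches)) / len(src)
                 for d in range(dim)]
        src = [[p[d] + shift[d] for d in range(dim)] for p in src]
    return src
```

With a small initial misalignment the correspondences are correct from the first pass and the loop converges immediately; larger misalignments are exactly the case where, as the study found, good initial surface segmentation (here supplied by a deep learning algorithm) matters.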
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- S. Thompson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- J. Totz
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- Y. Song
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- A. E. Desjardins
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. Barratt
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- S. Ourselin
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
- D. Stoyanov
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Computer Science, University College London, London, UK
- M. J. Clarkson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. J. Hawkes
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK