1. Azad TD, Alfonzo Horowitz M, Tracz JA, Khalifeh JM, Liu CJ, Hughes LP, Judy BF, Khan M, Bydon A, Witham TF. Augmented Reality Versus Freehand Spinopelvic Fixation in Spinal Deformity: A Case-Control Study. Surg Innov 2025;32:36-45. [PMID: 39516001] [DOI: 10.1177/15533506241299887]
Abstract
PURPOSE: This study sought to compare screw placement accuracy and outcomes between freehand (FH) and augmented reality (AR)-guided pelvic fixation. While pelvic fixation is a critical technique in spinal deformity surgery, S2-alar iliac (S2AI) screw placement poses challenges.
METHODS: We conducted a case-control study of 50 consecutive patients who underwent spinopelvic fixation at a single institution. AR guidance was performed using a head-mounted display (Xvision, Augmedics). Patient demographics, surgical characteristics, spinopelvic parameters, and screw breach grade were compared using univariate and multivariate statistics.
RESULTS: Pelvic fixation was performed FH in 21 patients (median age, 64; female, 38.1%; median BMI, 32.3 kg/m2) and AR-guided in 29 patients (median age, 66; female, 51.7%; median BMI, 28.4 kg/m2). Mean follow-up was longer in the FH group (28 vs 11 months, P < 0.001). Pelvic fixation in the FH group used either S2AI (90.5%) or dual S2AI (9.5%) screws. There were no significant differences in length of surgery (FH, 439 minutes; AR, 490 minutes; P = 0.1) or estimated blood loss (FH, 2.1 L; AR, 1.9 L; P = 0.7). Accuracy was 95.6% (43/45 screws) for FH pelvic fixation and 96.5% (55/57 screws) for AR pelvic fixation. Multivariable logistic regression for screw breach revealed no significant association with AR guidance when controlling for age, BMI, osteoporosis, and smoking.
CONCLUSIONS: We present the first case-control study of AR-guided spinopelvic fixation. The findings suggest parity between FH and AR guidance and serve as a foundation for prospective controlled studies with longitudinal follow-up to interrogate the benefits of AR guidance in spinal deformity surgery.
Affiliation(s)
- Tej D Azad
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Jovanna A Tracz
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Jawad M Khalifeh
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Connor J Liu
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Liam P Hughes
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Brendan F Judy
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Majid Khan
- Department of Radiology, Johns Hopkins Hospital, Baltimore, MD, USA
- Ali Bydon
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, USA
2. Semash K, Dzhanbekov T. Laparoscopic donor hepatectomy: Are there obstacles on the path to global widespread? Laparoscopic, Endoscopic and Robotic Surgery 2024. [DOI: 10.1016/j.lers.2024.12.002]
3. Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024;29:2357164. [PMID: 39253945] [DOI: 10.1080/24699322.2024.2357164]
Abstract
Augmented reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body, achieved by superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformation of organs during surgery, which makes preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling intraoperative deformation to accurately align the preoperative organ model with the intraoperative anatomy is indispensable. Although various methods have been proposed to model intraoperative organ deformation, few literature reviews systematically categorize and summarize these approaches. This review aims to fill that gap by providing a comprehensive, technically oriented overview of methods for modeling intraoperative organ deformation in AR-guided surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance understanding of organ deformation modeling in AR-guided surgery and to discuss potential topics for future advancement.
Affiliation(s)
- Zheng Han
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
4. Asadi Z, Asadi M, Kazemipour N, Léger É, Kersten-Oertel M. A decade of progress: bringing mixed reality image-guided surgery systems in the operating room. Comput Assist Surg (Abingdon) 2024;29:2355897. [PMID: 38794834] [DOI: 10.1080/24699322.2024.2355897]
Abstract
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.
Affiliation(s)
- Zahra Asadi
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Mehrdad Asadi
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Negar Kazemipour
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Étienne Léger
- Montréal Neurological Institute & Hospital (MNI/H), Montréal, Canada
- McGill University, Montréal, Canada
- Marta Kersten-Oertel
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
5. Gerats BGA, Wolterink JM, Mol SP, Broeders IAMJ. Neural fields for 3D tracking of anatomy and surgical instruments in monocular laparoscopic video clips. Healthc Technol Lett 2024;11:411-417. [PMID: 39720756] [PMCID: PMC11665779] [DOI: 10.1049/htl2.12113]
Abstract
Laparoscopic video tracking primarily focuses on two target types: surgical instruments and anatomy. The former could be used for skill assessment, while the latter is necessary for the projection of virtual overlays. Where instrument and anatomy tracking have often been considered two separate problems, in this article, a method is proposed for joint tracking of all structures simultaneously. Based on a single 2D monocular video clip, a neural field is trained to represent a continuous spatiotemporal scene, used to create 3D tracks of all surfaces visible in at least one frame. Due to the small size of instruments, they generally cover a small part of the image only, resulting in decreased tracking accuracy. Therefore, enhanced class weighting is proposed to improve the instrument tracks. The authors evaluate tracking on video clips from laparoscopic cholecystectomies, where they find mean tracking accuracies of 92.4% for anatomical structures and 87.4% for instruments. Additionally, the quality of depth maps obtained from the method's scene reconstructions is assessed. It is shown that these pseudo-depths have comparable quality to a state-of-the-art pre-trained depth estimator. On laparoscopic videos in the SCARED dataset, the method predicts depth with an MAE of 2.9 mm and a relative error of 9.2%. These results show the feasibility of using neural fields for monocular 3D reconstruction of laparoscopic scenes. Code is available via GitHub: https://github.com/Beerend/Surgical-OmniMotion.
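The depth metrics reported above (MAE in millimetres and relative error) have standard definitions that can be reproduced generically. The following is an illustrative sketch of those formulas only, not the authors' evaluation code; the function name and toy depth values are hypothetical:

```python
def depth_errors(pred_mm, gt_mm):
    """Mean absolute error (mm) and mean relative error between
    predicted and ground-truth depth samples."""
    assert len(pred_mm) == len(gt_mm) > 0
    abs_errs = [abs(p - g) for p, g in zip(pred_mm, gt_mm)]
    mae = sum(abs_errs) / len(abs_errs)
    # Relative error normalizes each absolute error by the true depth
    rel = sum(e / g for e, g in zip(abs_errs, gt_mm)) / len(gt_mm)
    return mae, rel

# Toy depth samples in millimetres
mae, rel = depth_errors([101.0, 98.0, 105.0], [100.0, 100.0, 100.0])
print(round(mae, 2), round(rel, 3))  # 2.67 0.027
```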
Affiliation(s)
- Beerend G. A. Gerats
- AI & Data Science Center, Meander Medical Center, Amersfoort, The Netherlands
- Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
- Jelmer M. Wolterink
- Department of Applied Mathematics, University of Twente, Enschede, The Netherlands
- Technical Medical Center, University of Twente, Enschede, The Netherlands
- Seb P. Mol
- AI & Data Science Center, Meander Medical Center, Amersfoort, The Netherlands
- Technical Medical Center, University of Twente, Enschede, The Netherlands
- Ivo A. M. J. Broeders
- AI & Data Science Center, Meander Medical Center, Amersfoort, The Netherlands
- Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
6. Oh MY, Yoon KC, Hyeon S, Jang T, Choi Y, Kim J, Kong HJ, Chai YJ. Navigating the Future of 3D Laparoscopic Liver Surgeries: Visualization of Internal Anatomy on Laparoscopic Images With Augmented Reality. Surg Laparosc Endosc Percutan Tech 2024;34:459-465. [PMID: 38965779] [DOI: 10.1097/sle.0000000000001307]
Abstract
INTRODUCTION: Liver tumor resection requires precise localization of tumors and blood vessels. Despite advancements in 3-dimensional (3D) visualization for laparoscopic surgeries, challenges persist. We developed and evaluated an augmented reality (AR) system that overlays preoperative 3D models onto laparoscopic images, offering crucial support for 3D visualization during laparoscopic liver surgeries.
METHODS: Anatomic liver structures were segmented from preoperative computed tomography scans using open-source software, including 3D Slicer, with Maya 2022 used for 3D model editing. A registration system was created with 3D visualization software utilizing a stereo registration input system to overlay the virtual liver onto laparoscopic images during surgical procedures. A controller was customized from a modified keyboard to facilitate manual alignment of the virtual liver with the laparoscopic image. The AR system was evaluated by 3 experienced surgeons who performed manual registration for a total of 27 images from 7 clinical cases. The evaluation criteria were registration time, measured in minutes, and registration accuracy, measured using the Dice similarity coefficient.
RESULTS: The overall mean registration time was 2.4±1.7 minutes (range: 0.3 to 9.5 min), and the overall mean registration accuracy was 93.8%±4.9% (range: 80.9% to 99.7%).
CONCLUSION: Our validated AR system has the potential to effectively enable prediction of internal hepatic anatomic structures during 3D laparoscopic liver resection, and may enhance 3D visualization for select laparoscopic liver surgeries.
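The Dice similarity coefficient used as the accuracy measure above has the standard definition 2|A∩B| / (|A| + |B|). A minimal sketch on flat binary masks, for illustration only (not the study's implementation; the toy masks are invented):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks given as flat 0/1 lists."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # define empty-vs-empty as 1.0

a = [1, 1, 1, 0, 0]  # e.g. pixels covered by the registered virtual liver
b = [0, 1, 1, 1, 0]  # e.g. liver region in the laparoscopic image
print(round(dice(a, b), 3))  # 0.667
```

In practice the masks would be 2D image overlaps flattened to 1D; the coefficient is 1.0 for perfect overlap and 0.0 for none.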
Affiliation(s)
- Moon Young Oh
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Kyung Chul Yoon
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Seulgi Hyeon
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Taesoo Jang
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Yeonjin Choi
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Junki Kim
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Hyoun-Joong Kong
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
- Young Jun Chai
- Department of Surgery, Seoul National University College of Medicine, Seoul National University Boramae Medical Center
- Department of Transdisciplinary Medicine, Seoul National University Hospital, Seoul, Korea
7. Furnari G, Minelli M, Puliatti S, Micali S, Secchi C, Ferraguti F. Selective Clamping for Robot-Assisted Surgical Procedures. Annu Int Conf IEEE Eng Med Biol Soc 2024;2024:1-7. [PMID: 40039535] [DOI: 10.1109/embc53108.2024.10782151]
Abstract
Partial nephrectomy, the gold-standard treatment for renal tumors, is performed with clamping of the renal arteries in order to interrupt blood flow toward the tumor. However, the temporary interruption of arterial flow may lead to ischemia of the renal parenchyma. The interruption should therefore be as short as possible, and clamping should be localized to only the arteries feeding the tumor, implementing so-called selective clamping. In this paper, we propose a system that automatically provides the surgeon with optimal clamping points, computed by our method to minimize the ischemic percentage and thus preserve the health of the remaining renal parenchyma. Moreover, we exploit the algorithm as a planner for a robotic system that, starting from the automatically computed clamping points, emulates the clamping procedure. The overall architecture is validated on different patients' anatomies using a robotic setup.
8. Ribeiro M, Espinel Y, Rabbani N, Pereira B, Bartoli A, Buc E. Augmented Reality Guided Laparoscopic Liver Resection: A Phantom Study With Intraparenchymal Tumors. J Surg Res 2024;296:612-620. [PMID: 38354617] [DOI: 10.1016/j.jss.2023.12.014]
Abstract
INTRODUCTION: Augmented reality (AR) in laparoscopic liver resection (LLR) can improve intrahepatic navigation by creating a virtual liver transparency. Our team has recently developed Hepataug, an AR software that projects otherwise invisible intrahepatic tumors onto the laparoscopic images, allowing the surgeon to localize them precisely. However, registration accuracy as a function of tumor location and size, and the influence of the projection axis, have never been measured. The aim of this work was to measure the three-dimensional (3D) tumor prediction error of Hepataug.
METHODS: Eight 3D virtual livers were created from the computed tomography scan of a healthy human liver. Reference markers with known coordinates were virtually placed on the anterior surface. The virtual livers were then deformed and 3D printed, forming 3D liver phantoms. After placing each phantom inside a pelvitrainer, registration allowed Hepataug to project virtual tumors along two axes: the laparoscope axis and the operator port axis. The surgeons had to point to the center of eight virtual tumors per liver with a pointing tool whose coordinates were precisely calculated.
RESULTS: We obtained 128 pointing experiments. The average pointing error was 29.4 ± 17.1 mm for the laparoscope axis and 9.2 ± 5.1 mm for the operator port axis (P = 0.001). Pointing errors tended to increase with tumor depth (correlation coefficients greater than 0.5, P < 0.001). There was no significant dependence of the pointing error on tumor size for either projection axis.
CONCLUSIONS: Tumor visualization by projection toward the operator port improves the accuracy of AR guidance and partially solves the problem of the two-dimensional visual interface of monocular laparoscopy. Despite the lower precision of AR for tumors located in the posterior part of the liver, it could allow surgeons to access these lesions without completely mobilizing the liver, hence decreasing surgical trauma.
Affiliation(s)
- Mathieu Ribeiro
- Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Yamid Espinel
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Navid Rabbani
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Bruno Pereira
- Biostatistics Unit (DRCI), University Hospital Clermont-Ferrand, Clermont-Ferrand, France
- Adrien Bartoli
- UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Emmanuel Buc
- Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
9. Cruz J, Gonçalves SB, Neves MC, Silva HP, Silva MT. Intraoperative Angle Measurement of Anatomical Structures: A Systematic Review. Sensors (Basel) 2024;24:1613. [PMID: 38475148] [PMCID: PMC10934548] [DOI: 10.3390/s24051613]
Abstract
Ensuring precise angle measurement during surgical correction of orientation-related deformities is crucial for optimal postoperative outcomes, yet there is a lack of an ideal commercial solution. Current measurement sensors and instrumentation have limitations that make their use context-specific, demanding a methodical evaluation of the field. A systematic review was carried out in March 2023. Studies reporting technologies and validation methods for intraoperative angular measurement of anatomical structures were analyzed. A total of 32 studies were included, 17 focused on image-based technologies (6 fluoroscopy, 4 camera-based tracking, and 7 CT-based), while 15 explored non-image-based technologies (6 manual instruments and 9 inertial sensor-based instruments). Image-based technologies offer better accuracy and 3D capabilities but pose challenges like additional equipment, increased radiation exposure, time, and cost. Non-image-based technologies are cost-effective but may be influenced by the surgeon's perception and require careful calibration. Nevertheless, the choice of the proper technology should take into consideration the influence of the expected error in the surgery, surgery type, and radiation dose limit. This comprehensive review serves as a valuable guide for surgeons seeking precise angle measurements intraoperatively. It not only explores the performance and application of existing technologies but also aids in the future development of innovative solutions.
Affiliation(s)
- João Cruz
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
- Sérgio B. Gonçalves
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
- Hugo Plácido Silva
- IT—Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
- Miguel Tavares Silva
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
10. Chen L, Ma L, Zhang F, Zhan W, Yang X, Sun L. A method of three-dimensional non-rigid localization of liver tumors based on structured light. Optics and Lasers in Engineering 2024;174:107962. [DOI: 10.1016/j.optlaseng.2023.107962]
11. Smit JN, Kuhlmann KFD, Thomson BR, Kok NFM, Ruers TJM, Fusaglia M. Ultrasound guidance in navigated liver surgery: toward deep-learning enhanced compensation of deformation and organ motion. Int J Comput Assist Radiol Surg 2024;19:1-9. [PMID: 37249749] [DOI: 10.1007/s11548-023-02942-x]
Abstract
PURPOSE: Accuracy of image-guided liver surgery is challenged by deformation of the liver during the procedure. This study aims to improve navigation accuracy by using intraoperative deep learning segmentation and nonrigid registration of hepatic vasculature from ultrasound (US) images to compensate for changes in liver position and deformation.
METHODS: This was a single-center prospective study of patients with liver metastases of any origin. Electromagnetic tracking was used to follow US and liver movement. A preoperative 3D model of the liver, including liver lesions and hepatic and portal vasculature, was registered with the intraoperative organ position. Hepatic vasculature was segmented using a reduced 3D U-Net and registered to preoperative imaging by initial alignment followed by nonrigid registration. Accuracy was assessed as the Euclidean distance between the tumor center imaged in the intraoperative US and in the registered preoperative image.
RESULTS: Median target registration error (TRE) after initial alignment was 11.6 mm in 25 procedures and improved to 6.9 mm after nonrigid registration (P = 0.0076). The number of TREs above 10 mm halved from 16 to 8 after nonrigid registration. In 9 cases, registration was performed twice after failure of the first attempt. A first registration cycle was completed in a median of 11 min (8:00-18:45 min) and a second in 5 min (2:30-10:20 min).
CONCLUSION: This novel registration workflow using automatic vascular detection and nonrigid registration allows liver lesions to be localized accurately. Further automation of initial alignment and improved classification accuracy are required.
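Target registration error as defined above is simply the Euclidean distance between the tumor center seen in the intraoperative US and its position in the registered preoperative image. A minimal sketch with made-up coordinates (the function name and points are illustrative, not the study's code):

```python
import math

def target_registration_error(p_intraop_mm, p_preop_mm):
    """Euclidean distance between two corresponding 3D points, in mm."""
    return math.dist(p_intraop_mm, p_preop_mm)

# Hypothetical tumor centers in millimetres
tre = target_registration_error((10.0, 20.0, 30.0), (13.0, 24.0, 30.0))
print(tre)  # 5.0
```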
Affiliation(s)
- Jasper N Smit
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Koert F D Kuhlmann
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Bart R Thomson
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Niels F M Kok
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Theo J M Ruers
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Nanobiophysics Group (NBP), Faculty of Science and Technology (TNW), University of Twente, Enschede, The Netherlands
- Matteo Fusaglia
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
12. Deng Z, Xiang N, Pan J. State of the Art in Immersive Interactive Technologies for Surgery Simulation: A Review and Prospective. Bioengineering (Basel) 2023;10:1346. [PMID: 38135937] [PMCID: PMC10740891] [DOI: 10.3390/bioengineering10121346]
Abstract
Immersive technologies have thrived on a strong foundation of software and hardware, injecting vitality into medical training. Numerous efforts have incorporated immersive technologies into surgery simulation for surgical skills training, and a growing number of researchers are entering this domain. A timely summary of the relevant experience and patterns is needed so that researchers can establish a comprehensive understanding of the field and promote its continued growth. This study provides a forward-looking perspective by reviewing the latest developments in immersive interactive technologies for surgery simulation. The investigation begins from a technological standpoint, covering the core aspects of virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies, namely haptic rendering and tracking. We then summarize recent work, categorized into minimally invasive surgery (MIS) and open surgery simulations. Finally, the study showcases the impressive performance and expansive potential of immersive technologies in surgical simulation while also discussing current limitations. We find that the design of interaction and the choice of immersive technology in virtual surgery development should be closely tied to the corresponding interactive operations in the real surgical specialty. This alignment facilitates targeted technological adaptations toward greater applicability and fidelity of simulation.
Affiliation(s)
- Zihan Deng
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Nan Xiang
- Department of Computing, School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
13. Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery. IEEE Journal of Translational Engineering in Health and Medicine 2023;12:258-267. [PMID: 38410181] [PMCID: PMC10896424] [DOI: 10.1109/jtehm.2023.3332088]
Abstract
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on visual inspection of virtual 3D models of landmarks. We analyze the sensitivity and specificity of AR-RSC by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyzes the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience with AR technologies. Results show that: 1) the sensitivity and specificity of AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth); 2) some error components are more difficult to identify visually; and 3) the level of user experience does not affect the method. In conclusion, the proposed AR-RSC, tested also in the operating room, could represent an efficient method to monitor and optimize registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for visual inspection of the registration accuracy.
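The true positive and true negative rates quoted above are the usual confusion-matrix ratios (sensitivity and specificity). A generic sketch, with hypothetical counts chosen only to reproduce rates of the same magnitude as those quoted, not the study's actual data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for misaligned (positive) vs. aligned (negative) AR images
sens, spec = sens_spec(tp=19, fn=5, tn=18, fp=10)
print(round(sens, 3), round(spec, 3))  # 0.792 0.643
```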
Affiliation(s)
- Sara Condino
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Marina Carbone
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
- EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
14. Kim M, Zhang Y, Jin S. Soft tissue surgical robot for minimally invasive surgery: a review. Biomed Eng Lett 2023;13:561-569. [PMID: 37872994] [PMCID: PMC10590359] [DOI: 10.1007/s13534-023-00326-3]
Abstract
PURPOSE: The current state of soft tissue surgery robots is surveyed, and the key technologies underlying their success are analyzed. State-of-the-art technologies are introduced, and future directions are discussed.
METHODS: Relevant literature is explored, analyzed, and summarized.
RESULTS: Soft tissue surgical robots have rapidly spread in the field of laparoscopic surgery, based on multi-degree-of-freedom movement of intra-abdominal surgical tools and stereoscopic imaging that are not possible in conventional surgery. The three key technologies that have made surgical robots successful are wire-driven mechanisms for multi-degree-of-freedom movement, master devices for intuitive remote control, and stereoscopic imaging technology. Recently, human-robot interaction technologies have been applied to develop user interfaces such as vision assistance and haptic feedback, and research on autonomous surgery has begun.
CONCLUSION: Robotic surgery not only replaces conventional laparoscopic surgery but also allows complex surgeries that are not possible laparoscopically. On the other hand, it is criticized for its high cost and lack of clinical superiority or patient benefit compared with conventional laparoscopic surgery. As various robots compete in the market, the cost of surgical robots is expected to decrease. Surgical robots are expected to continue to evolve, driven by the need to reduce the workload of medical staff and to improve the level of care demanded by patients.
Affiliation(s)
- Minhyo Kim
- School of Mechanical Engineering, Pusan National University, 2, Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan, 46241 Republic of Korea
- Youqiang Zhang
- School of Mechanical Engineering, Pusan National University, 2, Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan, 46241 Republic of Korea
- Sangrok Jin
- School of Mechanical Engineering, Pusan National University, 2, Busandaehak-ro 63beon-gil, Geumjeong-gu, Busan, 46241 Republic of Korea
15
Vicente E, Quijano Y, Duran H, Diaz E, Fabra I, Malave L, Ruiz P, Pizzuti G, Naldini C, De Nobili G, Caruso R, Ferri V. Can 3D imaging modeling recognize functional tissue and predict liver failure? A retrospective study based on 3D modelling of the major hepatectomies after hepatic modulation. BMC Surg 2023; 23:316. [PMID: 37853412 PMCID: PMC10583474 DOI: 10.1186/s12893-023-02196-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 09/13/2023] [Indexed: 10/20/2023] Open
Abstract
BACKGROUND With the introduction of radiomics, 3D reconstruction may be able to analyse tissues and distinguish true hypertrophy from non-functioning tissue in patients treated with major hepatectomy after hepatic modulation. The aim of this study was to evaluate the performance of 3D imaging modelling in predicting liver failure. METHODS Patients submitted to major hepatectomies after hepatic modulation at Sanchinarro University Hospital from May 2015 to October 2019 were analysed. Three-dimensional reconstruction was performed before and after surgical treatment. The volumetry of the future liver remnant was calculated, distinguishing between the functional future liver remnant (FRFx), i.e. truly hypertrophic tissue, and the anatomic future liver remnant (FRL), i.e. hypertrophy plus non-functional tissue (oedema/congestion). These volumes were analysed in patients with and without post-hepatectomy liver failure (PHLF). RESULTS Twenty-four procedures were performed (11 ALPPS and 13 PVE followed by major hepatectomy). PHLF grade B or C occurred in 6 patients. The ROC curve showed a better AUC for FRFxV (74%) than for FRLV (54%) in predicting PHLF of grade B or worse. The increase of anatomical FRL (iFRL) was greater in the ALPPS group (120%) than in the PVE group (73%) (p = 0.041), while the increase of functional FRFx (iFRFx) was 35% in the ALPPS group and 46% in the PVE group (p > 0.05), showing no difference between the two groups. CONCLUSION The 3D reconstruction model allows optimal surgical planning and, through the use of specific algorithms, can help differentiate functioning liver parenchyma within the FLR.
Affiliation(s)
- Emilio Vicente
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Yolanda Quijano
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Hipolito Duran
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Eduardo Diaz
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Isabel Fabra
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Luis Malave
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Pablo Ruiz
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Giovanni De Nobili
- Università Degli Studi Gabriele d'Annunzio Chieti Pescara, Pescara, Italy
- Riccardo Caruso
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain
- Valentina Ferri
- Division of General Surgery, Sanchinarro Hospital, San Pablo University, Calle Oñaa 10, 28050, Madrid, Spain.
16
Taleb A, Guigou C, Leclerc S, Lalande A, Bozorg Grayeli A. Image-to-Patient Registration in Computer-Assisted Surgery of Head and Neck: State-of-the-Art, Perspectives, and Challenges. J Clin Med 2023; 12:5398. [PMID: 37629441 PMCID: PMC10455300 DOI: 10.3390/jcm12165398] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 08/08/2023] [Accepted: 08/14/2023] [Indexed: 08/27/2023] Open
Abstract
Today, image-guided systems play a significant role in improving the outcome of diagnostic and therapeutic interventions. They provide crucial anatomical information during the procedure to decrease the size and extent of the approach, to reduce intraoperative complications, and to increase accuracy, repeatability, and safety. Image-to-patient registration is the first step in image-guided procedures; it establishes a correspondence between the patient's preoperative imaging and the intraoperative data. In the head-and-neck region, the presence of many sensitive structures, such as the central nervous system and the neurosensory organs, requires millimetric precision. This review evaluates the characteristics and performance of the different registration methods used in the operating room for the head-and-neck region from the perspectives of accuracy, invasiveness, and processing time. Our work led to the conclusion that invasive marker-based methods are still considered the gold standard of image-to-patient registration. Surface-based methods are recommended for faster procedures and are applied to surface tissues, especially around the eyes. In the near future, computer vision technology is expected to enhance these systems by reducing human errors and cognitive load in the operating room.
Affiliation(s)
- Ali Taleb
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Caroline Guigou
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
- Sarah Leclerc
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Alain Lalande
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Medical Imaging Department, University Hospital of Dijon, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
17
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their view of the operating field with digital imagery, particularly as regards tumors and anatomical structures. While in some specialties this type of technology is routinely utilized, in liver surgery its applications remain limited, owing to the complexity of modeling organ deformation in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome the difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize, and analyze series reporting AR techniques tested in liver surgery, in order to establish a state of the art and to indicate perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase, and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 corresponding to the following keywords: augmented reality, hepatic surgery, liver, and hepatectomy. RESULTS Initially, 102 titles, studies, and summaries were preselected. Twenty-eight meeting the inclusion criteria were included, reporting on 183 patients operated on with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by acquisition time and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited owing to insufficient anatomical precision. That said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term the application of AR in liver surgery will become more frequent and effective. As for its clinical impact, notably in oncology, it remains to be assessed.
Affiliation(s)
- B Acidi
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- M Ghallab
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France
- S Cotin
- Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- E Vibert
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
- N Golse
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France.
18
Douglas MJ, Callcut R, Celi LA, Merchant N. Interpretation and Use of Applied/Operational Machine Learning and Artificial Intelligence in Surgery. Surg Clin North Am 2023; 103:317-333. [PMID: 36948721 DOI: 10.1016/j.suc.2022.11.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2023]
Abstract
Applications for artificial intelligence (AI) and machine learning in surgery include image interpretation, data summarization, automated narrative construction, trajectory and risk prediction, and operative navigation and robotics. The pace of development has been exponential, and some AI applications are working well. However, demonstrations of clinical utility, validity, and equity have lagged algorithm development and limited widespread adoption of AI into clinical practice. Outdated computing infrastructure and regulatory challenges which promote data silos are key barriers. Multidisciplinary teams will be needed to address these challenges and to build AI systems that are relevant, equitable, and dynamic.
Affiliation(s)
- Molly J Douglas
- Department of Surgery, University of Arizona, 1501 N Campbell Avenue, Tucson, AZ 85724, USA.
- Rachel Callcut
- Trauma, Acute Care Surgery and Surgical Critical Care, University of California, Davis, 2335 Stockton Boulevard, Sacramento, CA 95817, USA. https://twitter.com/callcura
- Leo Anthony Celi
- Laboratory of Computational Physiology, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA; Beth Israel Deaconess Medical Center. https://twitter.com/MITCriticalData
- Nirav Merchant
- Data Science Institute, University of Arizona, 1230 North Cherry Avenue, Tucson, AZ 85721, USA
19
Takamoto T, Nara S, Ban D, Mizui T, Murase Y, Esaki M, Shimada K. Enhanced Recognition Confidence of Millimeter-Sized Intrahepatic Targets by Real-Time Virtual Sonography. J Ultrasound Med 2023. [PMID: 36814362 DOI: 10.1002/jum.16199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 01/17/2023] [Accepted: 02/06/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVES Real-time virtual sonography (RVS) is an artificial-intelligence-assisted ultrasonographic navigation system that displays synchronized preoperative computed tomography (CT) images corresponding to real-time intraoperative ultrasonograms (IOUS). This study aimed to investigate whether RVS can enhance IOUS identification of small intrahepatic targets found on preoperative CT. METHODS Patients with small intrahepatic targets detected by preoperative thin-slice dynamic CT before liver resection were included. The targets, millimeter-sized liver tumors or third-order or more distal portal branches, were marked on the CT images using 3D simulation software. After laparotomy, the targets were searched for using fundamental IOUS, and the participating liver surgeons subjectively scored their confidence in identifying each target on a scale of 1-5 (5 points for detection with the highest confidence and 1 point for undetectable). The search procedure was then repeated using RVS, and the scores were compared. RESULTS In total, 55 patients with 117 small targets were investigated. The median target size was 6.0 mm, and the median registration time was 3.6 seconds. The target identification confidence score increased significantly from 2.78 to 4.52 points with RVS. Seventeen targets (14.5%) were undetectable on fundamental IOUS, and 14 of them were identified by RVS. The detectability of small liver tumors (identification confidence of 2-5 points) was 81.1% by IOUS and 96.7% by RVS. CONCLUSION RVS enhanced surgeons' confidence in identifying millimeter-sized intrahepatic targets found on preoperative CT.
Affiliation(s)
- Takeshi Takamoto
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Satoshi Nara
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Daisuke Ban
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Takahiro Mizui
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Yoshiki Murase
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Minoru Esaki
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
- Kazuaki Shimada
- Department of Hepatobiliary and Pancreatic Surgery, National Cancer Center Hospital, Tokyo, Japan
20
Shahbaz M, Miao H, Farhaj Z, Gong X, Weikai S, Dong W, Jun N, Shuwei L, Yu D. Mixed reality navigation training system for liver surgery based on a high-definition human cross-sectional anatomy data set. Cancer Med 2023; 12:7992-8004. [PMID: 36607128 PMCID: PMC10134360 DOI: 10.1002/cam4.5583] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/24/2022] [Accepted: 12/17/2022] [Indexed: 01/07/2023] Open
Abstract
OBJECTIVES This study aims to use a three-dimensional (3D) mixed-reality model of the liver, entailing its complex intrahepatic systems, to study the anatomical structures in depth and to support the training, diagnosis, and treatment of liver diseases. METHODS Vascular-perfused human specimens were used for thin-layer frozen milling to obtain liver cross-sections. A 104-megapixel high-definition cross-sectional data set was established and registered to achieve structure identification and manual segmentation. The digital model was reconstructed, and the data were used to print a 3D hepatic model. The model was combined with HoloLens mixed-reality technology to reflect the complex relationships of the intrahepatic systems. We simulated 3D patient-specific anatomy for identification and preoperative planning, conducted a questionnaire survey, and evaluated the results. RESULTS The 3D digital model and the 1:1 transparent, colored liver model faithfully reflected the intrahepatic vessels and their complex relationships. The reconstructed model imported into HoloLens could be accurately matched with the 3D model. Only 7.7% of participants could identify accessory hepatic veins, while the depth and spatial relationships of intrahepatic structures were better understood by 92%. Respectively, 100%, 84.6%, 69%, and 84% believed the 3D models were useful for planning, safer surgical paths, reducing intraoperative complications, and training young surgeons. CONCLUSIONS A detailed 3D model can be reconstructed using this high-quality cross-sectional anatomical data set. Combined with 3D printing and HoloLens technology, it creates a novel mixed-reality navigation and training system for liver surgery. Based on the questionnaire evaluation, mixed-reality training is a worthy alternative for providing 3D information to clinicians, with possible application in surgery. Surgeons with extensive operative experience indicated in the questionnaire that this technology might be useful in liver surgery and would help in precise preoperative planning, accurate intraoperative identification, and reduction of hepatic injury.
Affiliation(s)
- Muhammad Shahbaz
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Research Center for Sectional and Imaging Anatomy, Digital Human Institute, School of Basic Medical Science, Shandong University, Jinan, Shandong, China
- Department of General Surgery, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Huachun Miao
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Zeeshan Farhaj
- Department of Cardiovascular Surgery, Shandong Qianfoshan Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Xin Gong
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Sun Weikai
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Wenqing Dong
- Department of Anatomy, Wannan Medical College, Wuhu, Anhui, China
- Niu Jun
- Department of General Surgery, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Liu Shuwei
- Research Center for Sectional and Imaging Anatomy, Digital Human Institute, School of Basic Medical Science, Shandong University, Jinan, Shandong, China
- Dexin Yu
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, Shandong, China
21
Chen X, Sakai D, Fukuoka H, Shirai R, Ebina K, Shibuya S, Sase K, Tsujita T, Abe T, Oka K, Konno A. Basic Experiments Toward Mixed Reality Dynamic Navigation for Laparoscopic Surgery. J Robot Mechatron 2022. [DOI: 10.20965/jrm.2022.p1253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Laparoscopic surgery is a minimally invasive procedure that is performed by viewing endoscopic camera images. However, the limited field of view of endoscopic cameras makes laparoscopic surgery difficult. To provide more visual information during laparoscopic surgeries, augmented reality (AR) surgical navigation systems have been developed that visualize the positional relationship between the surgical field and the organs based on preoperative medical images of the patient. However, because earlier studies relied on preoperative images, the navigation became inaccurate as the surgery progressed and the organs were displaced and deformed. To solve this problem, we propose a mixed reality (MR) surgical navigation system in which the surgical instruments are tracked by a motion capture (Mocap) system; the system evaluates contact between the instruments and organs and simulates and visualizes the organ deformation caused by the contact. This paper describes a method for the numerical calculation of the deformation of a soft body. Then, the basic technology of MR and projection mapping is presented for MR surgical navigation. The accuracy of the simulated and visualized deformations is evaluated through basic experiments using a soft rectangular cuboid object.
22
Minimally invasive and invasive liver surgery based on augmented reality training: a review of the literature. J Robot Surg 2022; 17:753-763. [DOI: 10.1007/s11701-022-01499-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 11/14/2022] [Indexed: 11/29/2022]
23
Huber T, Huettl F, Hanke LI, Vradelis L, Heinrich S, Hansen C, Boedecker C, Lang H. Leberchirurgie 4.0 - OP-Planung, Volumetrie, Navigation und Virtuelle Realität. Zentralbl Chir 2022; 147:361-368. [DOI: 10.1055/a-1844-0549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Summary: Optimized conservative treatment, improved imaging, and advances in operative technique have markedly changed the operative spectrum and the standards for resectability in liver surgery over recent decades. Thanks to numerous technical developments, in particular 3-dimensional segmentation, preoperative planning and intraoperative orientation, especially during complex procedures, can now be facilitated while taking the patient's specific anatomy into account. New technologies such as 3D printing and virtual and augmented reality offer additional ways of displaying individual anatomy. Various intraoperative navigation options aim to make the preoperative plan available in the operating room and thereby increase patient safety. This review article provides an overview of the currently available technologies and an outlook on the operating room of the future.
Affiliation(s)
- Tobias Huber
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Florentine Huettl
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Laura Isabel Hanke
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Lukas Vradelis
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Stefan Heinrich
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Christian Hansen
- Fakultät für Informatik, Otto von Guericke Universität Magdeburg, Magdeburg, Deutschland
- Christian Boedecker
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
- Hauke Lang
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Deutschland
24
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has been traditionally and mainly used in orthopedics in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens Optical Viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgery domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
25
Gavriilidis P, Edwin B, Pelanis E, Hidalgo E, de'Angelis N, Memeo R, Aldrighetti L, Sutcliffe RP. Navigated liver surgery: State of the art and future perspectives. Hepatobiliary Pancreat Dis Int 2022; 21:226-233. [PMID: 34544668 DOI: 10.1016/j.hbpd.2021.09.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 08/27/2021] [Indexed: 02/08/2023]
Abstract
BACKGROUND In recent years, the development of digital imaging technology has had a significant influence on liver surgery. The ability to obtain a 3-dimensional (3D) visualization of the liver anatomy has provided surgeons with virtual-reality simulation on 3D computer models, 3D-printed models, and more recently holograms and augmented reality (in which virtual-reality knowledge is superimposed onto reality). In addition, real-time fluorescence imaging based on indocyanine green (ICG) uptake allows clinicians to precisely delineate the liver anatomy and/or tumors within the parenchyma, applying the knowledge obtained preoperatively through digital imaging. The combination of the two has transformed what was until now abstract thinking based on 2D imaging into a 3D preoperative conception (virtual reality), enhanced with real-time visualization of fluorescent liver structures, effectively enabling intraoperatively navigated liver surgery (augmented reality). DATA SOURCES A literature search was performed from inception until January 2021 in MEDLINE (PubMed), Embase, the Cochrane Library and Database of Systematic Reviews (CDSR), Google Scholar, and National Institute for Health and Clinical Excellence (NICE) databases. RESULTS Fifty-one pertinent articles were retrieved and included. The different types of digital imaging technologies and real-time navigated liver surgery were assessed and compared. CONCLUSIONS ICG fluorescence imaging can contribute substantially to the real-time definition of liver segments, so that precise hepatic resection can be guided by the presence of fluorescence. Furthermore, 3D models can substantially advance precision in hepatic surgery by permitting estimation of liver volume and the functional liver remnant, delineation of resection lines along the liver segments, and evaluation of tumor margins. In liver transplantation, and especially in living donor liver transplantation (LDLT), 3D-printed models of the donor's liver and models of the recipient's hilar anatomy can further improve results. In particular, in pediatric LDLT, abdominal cavity models can help to manage the greatest challenge of this procedure, namely large-for-size syndrome.
Affiliation(s)
- Paschalis Gavriilidis
- Department of Hepato-Pancreato-Biliary and Liver Transplant Surgery, Queen Elizabeth University Hospitals Birmingham NHS Foundation Trust, B15 2TH, UK.
- Bjørn Edwin
- The Intervention Centre and Department of HPB Surgery, Oslo University Hospital and Faculty of Medicine, University of Oslo, Oslo, Norway
- Egidijus Pelanis
- The Intervention Centre and Department of HPB Surgery, Oslo University Hospital and Faculty of Medicine, University of Oslo, Oslo, Norway
- Ernest Hidalgo
- Department of Hepato-Pancreatico-Biliary Surgery and Transplantation, Hospital Universitari Vall d'Hebron, Barcelona, Spain
- Nicola de'Angelis
- Department of Digestive Surgery, University Hospital Henri Mondor (AP-HP), 94010 Créteil and University of Paris Est, Créteil, France
- Riccardo Memeo
- Department of Hepatobiliary and Pancreatic Surgery, Miulli Hospital, Acquaviva delle Fonti, Bari 70021, Italy
- Luca Aldrighetti
- Division of Hepatobiliary Surgery, San Raffaele Hospital, Via Olgettina 60, Milan 20132, Italy
- Robert P Sutcliffe
- Department of Hepato-Pancreato-Biliary and Liver Transplant Surgery, Queen Elizabeth University Hospitals Birmingham NHS Foundation Trust, B15 2TH, UK
26
Ultrasound-based navigation for open liver surgery using active liver tracking. Int J Comput Assist Radiol Surg 2022; 17:1765-1773. [PMID: 35622201 DOI: 10.1007/s11548-022-02659-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 04/25/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Despite extensive preoperative imaging, intraoperative localization of liver lesions after systemic treatment can be challenging. Therefore, an image-guided navigation setup is explored that links preoperative diagnostic scans and 3D models to intraoperative ultrasound (US), enabling overlay of detailed diagnostic images on intraoperative US. The aim of this study is to assess the workflow and accuracy of such a navigation system, which compensates for liver motion. METHODS Electromagnetic (EM) tracking was used to follow both the organ and the movement of the transducer. After laparotomy, a sensor was attached to the liver surface, while the EM-tracked US transducer enabled image acquisition and landmark digitization. Landmarks surrounding the lesion were selected during patient-specific preoperative 3D planning and identified for registration during surgery. Endpoints were accuracy and the additional time required for each investigative step. Accuracy was computed at the center of the target lesion. RESULTS In total, 22 navigated procedures were performed. Navigation provided useful visualization of the preoperative 3D models and their overlay on US imaging. Landmark-based registration resulted in a mean fiducial registration error of 10.3 ± 4.3 mm and a mean target registration error of 8.5 ± 4.2 mm. Navigation was available after an average of 12.7 minutes. CONCLUSION We developed a navigation method combining ultrasound with active liver tracking for organ motion compensation, with an accuracy below 10 mm. Fixation of the liver sensor near the target lesion compensates for local movement and contributes to improved reliability during navigation. This represents an important step forward in providing surgical navigation throughout the procedure. TRIAL REGISTRATION This study is registered in the Netherlands Trial Register (number NL7951).
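The fiducial and target registration errors reported in this abstract come from landmark-based rigid registration. As an illustrative sketch only (this is not the authors' implementation, and the landmark coordinates below are synthetic), the standard Kabsch/SVD solution and the FRE computation can be written as:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src landmarks onto dst landmarks via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS distance between the mapped
    source landmarks and their destination counterparts."""
    mapped = src @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - dst) ** 2, axis=1))))

# Hypothetical landmarks (mm): preoperative CT space vs intraoperative US space.
rng = np.random.default_rng(0)
ct = rng.uniform(0, 100, (6, 3))
theta = 0.3  # known ground-truth liver motion, plus ~2 mm localization noise
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0.0, 0.0, 1.0]])
us = ct @ R_true.T + np.array([5.0, -3.0, 10.0]) + rng.normal(0, 2, (6, 3))
R, t = rigid_register(ct, us)
print(f"FRE = {fre(ct, us, R, t):.1f} mm")
```

With landmark noise of a few millimeters, the resulting FRE lands in the same order of magnitude as the ~10 mm reported in the study; the TRE would be evaluated the same way but at the lesion center rather than at the fiducials.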
|
27
|
Saito Y, Shimada M, Morine Y, Yamada S, Sugimoto M. Essential updates 2020/2021: Current topics of simulation and navigation in hepatectomy. Ann Gastroenterol Surg 2022; 6:190-196. [PMID: 35261944 PMCID: PMC8889864 DOI: 10.1002/ags3.12542] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 11/26/2021] [Accepted: 12/14/2021] [Indexed: 01/01/2023] Open
Abstract
With the development of three-dimensional (3D) simulation software, preoperative simulation technology is now largely established. The remaining issue is how to recognize anatomy three-dimensionally. Extended reality is a newly developed technology with several merits for surgical application: no requirement for a sterilized display monitor, better spatial awareness, and the ability to share 3D images among all surgeons. Various technologies and devices for intraoperative navigation have also been developed to support the safety and certainty of liver surgery. Consensus recommendations regarding indocyanine green fluorescence were established in 2021. Extended reality has also been applied to intraoperative navigation, and artificial intelligence (AI) is one of the topics in real-time navigation; AI might overcome the problem of liver deformity through automatic registration. Covering the issues described above, this article focuses on recent advances in simulation and navigation in liver surgery from 2020 to 2021.
Affiliation(s)
- Yu Saito
- Department of Surgery, Tokushima University, Tokushima, Japan
- Yuji Morine
- Department of Surgery, Tokushima University, Tokushima, Japan
- Maki Sugimoto
- Department of Surgery, Tokushima University, Tokushima, Japan
- Okinaga Research Institute, Teikyo University, Chiyoda-ku, Japan
|
28
|
Architecture of a Hybrid Video/Optical See-through Head-Mounted Display-Based Augmented Reality Surgical Navigation Platform. INFORMATION 2022. [DOI: 10.3390/info13020081] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
In the context of image-guided surgery, augmented reality (AR) represents a ground-breaking and enticing improvement, especially when paired with wearability in the case of open surgery. Commercially available AR head-mounted displays (HMDs), designed for general purposes, are increasingly used outside their indications to develop surgical guidance applications, with the ambition to demonstrate the potential of AR in surgery. The applications proposed in the literature underline the appetite for AR guidance in the operating room, together with the limitations that prevent commercial HMDs from meeting that need. The medical domain demands specifically developed devices that address, together with ergonomics, the achievement of surgical accuracy objectives and compliance with medical device regulations. In the framework of an EU Horizon 2020 project, a hybrid video and optical see-through augmented reality headset, paired with a software architecture, both specifically designed to be seamlessly integrated into the surgical workflow, has been developed. In this paper, the overall architecture of the system is described. The developed AR HMD surgical navigation platform was positively tested on seven patients to aid the surgeon while performing Le Fort 1 osteotomy in cranio-maxillofacial surgery, demonstrating the value of the hybrid approach and the safety and usability of the navigation platform.
|
29
|
Guérinot C, Marcon V, Godard C, Blanc T, Verdier H, Planchon G, Raimondi F, Boddaert N, Alonso M, Sailor K, Lledo PM, Hajj B, El Beheiry M, Masson JB. New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. FRONTIERS IN BIOINFORMATICS 2022; 1:777101. [PMID: 36303792 PMCID: PMC9580868 DOI: 10.3389/fbinf.2021.777101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 12/15/2021] [Indexed: 01/02/2023] Open
Abstract
Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to utilize VR for annotation and analysis of data. Annotating data is often a required step for training machine learning algorithms, and enhancing the ability to annotate complex three-dimensional data is particularly valuable in biological research, where newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact, annotate and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate the data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescent microscopy images of mouse neurons and tumor or organ annotations in medical images.
Affiliation(s)
- Corentin Guérinot
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Sorbonne Université, Collège Doctoral, Paris, France
- Valentin Marcon
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Charlotte Godard
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Thomas Blanc
- Sorbonne Université, Collège Doctoral, Paris, France
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Hippolyte Verdier
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Histopathology and Bio-Imaging Group, Sanofi R&D, Vitry-Sur-Seine, France
- Université de Paris, UFR de Physique, Paris, France
- Guillaume Planchon
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Francesca Raimondi
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Unité Médicochirurgicale de Cardiologie Congénitale et Pédiatrique, Centre de Référence des Malformations Cardiaques Congénitales Complexes M3C, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Nathalie Boddaert
- Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Mariana Alonso
- Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Kurt Sailor
- Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Pierre-Marie Lledo
- Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Bassam Hajj
- Sorbonne Université, Collège Doctoral, Paris, France
- École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Mohamed El Beheiry
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Jean-Baptiste Masson
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
|
30
|
Kasai M, Aihara T, Ikuta S, Nakajima T, Yamanaka N. A Percutaneous Portal Vein Puncture Under Artificial Ascites for Intraoperative Hepatic Segmentation Using Indocyanine Green Fluorescence: A Technical Report of Laparoscopic Anatomic Liver Resection. Surg Laparosc Endosc Percutan Tech 2021; 32:281-284. [PMID: 34882613 PMCID: PMC8969844 DOI: 10.1097/sle.0000000000001022] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 09/15/2021] [Indexed: 11/25/2022]
Abstract
BACKGROUND Laparoscopic liver resection has developed into a widely adopted standard procedure; however, laparoscopic anatomic liver resection remains challenging, especially for posterosuperior lesions, because of difficulties in segmental mapping and surgical technique. Recently, positive- and negative-staining methods using fluorescence imaging have been reported by experienced Asian centers, allowing identification of the tumor-bearing portal territory to be resected, including the posterosuperior segments, in laparoscopy. These techniques are applicable only in some cases; hence, there remains room for improvement before they are established as a feasible approach. Herein, we describe a percutaneous tumor-bearing portal vein puncture method under artificial ascites after pneumoperitoneum for laparoscopic segmentectomy of segment 8. CASE PRESENTATION AND SURGICAL PROCEDURE A male patient in his 60s was admitted for an incidentally diagnosed hepatic mass in segment 8. Computed tomography findings showed a 2.5-cm hepatocellular carcinoma lesion, and laparoscopic anatomic liver resection of segment 8 was planned. Segmentation of segment 8 was performed through percutaneous puncture of the tumor-bearing portal vein and indocyanine green injection with extracorporeal ultrasound guidance under artificial ascites. Guided by indocyanine green fluorescence navigation, the anatomic liver resection was completed. Operative time was 375 minutes. Estimated intraoperative blood loss was 50 mL, without the need for intraoperative transfusion. The planned resection was successful, with histologically negative surgical margins. The patient was discharged on the 19th postoperative day with normal liver function test results. There were no operation-related complications during hospitalization.
CONCLUSION The intraoperative percutaneous portal vein puncture method under artificial ascites was useful for identification of the posterosuperior segment in laparoscopic anatomic segmentectomy.
Affiliation(s)
- Meidai Kasai
- Department of Surgery, Meiwa Hospital, Hyogo, Japan
|
31
|
Adballah M, Espinel Y, Calvet L, Pereira B, Le Roy B, Bartoli A, Buc E. Augmented reality in laparoscopic liver resection evaluated on an ex-vivo animal model with pseudo-tumours. Surg Endosc 2021; 36:833-843. [PMID: 34734305 DOI: 10.1007/s00464-021-08798-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 10/17/2021] [Indexed: 02/01/2023]
Abstract
BACKGROUND The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and compare it to standard ultrasonography (US). MATERIALS AND METHODS Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using a medical image segmentation software (MITK). The livers were placed in a pelvi-trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group), and both US and AR (n = 30, ARUS group). R0 resection, maximal margins, minimal margins and mean margins were assessed after histopathologic examination, adjusted for tumour depth and a liver zone-wise difficulty level. RESULTS The minimal margins were not different between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the AR + ARUS group (50% vs. 21%, p = 0.019). CONCLUSION Laparoscopic liver resection using AR seems to provide more accurate resection margins with less variability than the gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.
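The R0/R1/R2 endpoint used above follows the standard residual-tumour classification from histopathology. As a hedged sketch (the thresholds below are the usual pathology convention, not taken from this study), the classification from a measured minimal margin can be expressed as:

```python
def resection_status(min_margin_mm: float, macroscopic_residual: bool = False) -> str:
    """Standard residual-tumour (R) classification from histopathology:
    R2 = macroscopic residual tumour left behind,
    R1 = microscopically involved margin (tumour cells at the cut surface, 0 mm margin),
    R0 = microscopically clear margin."""
    if macroscopic_residual:
        return "R2"
    return "R1" if min_margin_mm <= 0.0 else "R0"
```

Under this convention, the reported minimal margins of 8.8, 8.0 and 6.9 mm all correspond to R0 resections.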
Affiliation(s)
- Mourad Adballah
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003, Clermont-Ferrand Cedex, France
- Yamid Espinel
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Lilian Calvet
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Bruno Pereira
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Bertrand Le Roy
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Oncologic Surgery, University Hospital Nord St-Etienne, Avenue Albert Raimond, 42270, Saint-Priest en Jarez, France
- Adrien Bartoli
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000, Clermont-Ferrand, France
- Emmanuel Buc
- Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000, Clermont-Ferrand, France
- Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003, Clermont-Ferrand Cedex, France
|
32
|
Sarmadi H, Muñoz-Salinas R, Álvaro Berbís M, Luna A, Medina-Carnicer R. Joint scene and object tracking for cost-effective augmented reality guided patient positioning in radiation therapy. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 209:106296. [PMID: 34380076 DOI: 10.1016/j.cmpb.2021.106296] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 07/17/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Research in the field of Augmented Reality (AR) for patient positioning in radiation therapy is scarce. We propose an efficient and cost-effective algorithm for tracking the scene and the patient to interactively assist the patient-positioning process by providing visual feedback to the operator. To our knowledge, this is the first framework that can be employed for mobile interactive AR to guide patient positioning. METHODS We propose a point-cloud processing method that, combined with a fiducial marker-mapper algorithm and the generalized ICP algorithm, tracks the patient and the camera precisely and efficiently using only the CPU. The alignment between the 3D reference model and the body marker map is calculated using an efficient body reconstruction algorithm. RESULTS Our quantitative evaluation shows that the proposed method achieves a translational and rotational error of 4.17 mm/0.82° at 9 fps. Furthermore, the qualitative results demonstrate the usefulness of our algorithm for patient positioning on different human subjects. CONCLUSION Since our algorithm achieves a relatively high frame rate and accuracy on a regular laptop (without a dedicated GPU), it is a very cost-effective AR-based patient-positioning method. It also opens the way for other researchers by introducing a framework that could be improved upon for better mobile interactive AR patient-positioning solutions in the future.
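The reported 4.17 mm/0.82° figure is a standard rigid-pose error metric. A small sketch (my own illustration of the usual convention, not the authors' code) of how the translational and rotational error between an estimated pose and a ground-truth pose are typically computed:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translational error (same units as t, e.g. mm) and rotational error
    (degrees) between an estimated rigid pose (R_est, t_est) and the
    ground truth (R_gt, t_gt)."""
    t_err = float(np.linalg.norm(t_est - t_gt))
    R_rel = R_est.T @ R_gt                                 # residual rotation
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, float(np.degrees(np.arccos(cos_a)))      # geodesic angle
```

The rotational term is the geodesic angle of the residual rotation, recovered from its trace; clipping guards against floating-point values just outside [-1, 1].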
Affiliation(s)
- Hamid Sarmadi
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain
- Rafael Muñoz-Salinas
- Computing and Numerical Analysis Department, Edificio Einstein, Campus de Rabanales, Córdoba University, Córdoba, 14071, Spain
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain
- M Álvaro Berbís
- HT Médica, Hospital San Juan de Dios, Avda Brillante 106, Córdoba, 14012, Spain
- Antonio Luna
- HT Médica, Clínica las Nieves, Carmelo Torres 2, Jaén, 23007, Spain
- R Medina-Carnicer
- Computing and Numerical Analysis Department, Edificio Einstein, Campus de Rabanales, Córdoba University, Córdoba, 14071, Spain
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain
|
33
|
Wahba R, Thomas MN, Bunck AC, Bruns CJ, Stippel DL. Clinical use of augmented reality, mixed reality, three-dimensional-navigation and artificial intelligence in liver surgery. Artif Intell Gastroenterol 2021; 2:94-104. [DOI: 10.35712/aig.v2.i4.94] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 07/10/2021] [Accepted: 08/27/2021] [Indexed: 02/06/2023] Open
Abstract
A precise knowledge of the intraparenchymal vascular and biliary architecture, and of the location of lesions in relation to this complex anatomy, is indispensable for liver surgery. Therefore, virtual three-dimensional (3D) reconstruction models from computed tomography/magnetic resonance imaging scans of the liver can be helpful for visualization. Augmented reality, mixed reality and 3D navigation can transfer such 3D image data directly into the operating theater to support the surgeon. This review examines the literature on the clinical and intraoperative use of these image guidance techniques in liver surgery and provides the reader with the opportunity to learn about them. Augmented reality and mixed reality have been shown to be feasible in open and minimally invasive liver surgery. 3D navigation has facilitated targeting of intraparenchymal lesions. The existing data are limited to small cohorts and descriptions of technical details, e.g., the accordance between the virtual 3D model and the real liver anatomy. Randomized controlled trials regarding clinical data or oncological outcome are not available. To date, there is no intraoperative application of artificial intelligence in liver surgery. The usability of all these sophisticated image guidance tools has still not reached the degree of immersion that would be necessary for widespread use in the daily surgical routine. Although many challenges remain, augmented reality, mixed reality, 3D navigation and artificial intelligence are emerging fields in hepatobiliary surgery.
Affiliation(s)
- Roger Wahba
- Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Michael N Thomas
- Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Alexander C Bunck
- Department of Diagnostic and Interventional Radiology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Christiane J Bruns
- Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Dirk L Stippel
- Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
|
34
|
Tarutani K, Takaki H, Igeta M, Fujiwara M, Okamura A, Horio F, Toudou Y, Nakajima S, Kagawa K, Tanooka M, Yamakado K. Development and Accuracy Evaluation of Augmented Reality-based Patient Positioning System in Radiotherapy: A Phantom Study. In Vivo 2021; 35:2081-2087. [PMID: 34182483 DOI: 10.21873/invivo.12477] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 05/27/2021] [Accepted: 05/28/2021] [Indexed: 01/04/2023]
Abstract
BACKGROUND/AIM To develop an augmented reality (AR)-based patient positioning system for radiotherapy and to evaluate its accuracy. MATERIALS AND METHODS AR head-mounted displays (AR-HMDs), which virtually superimpose a three-dimensional (3D) image generated from Digital Imaging and Communications in Medicine (DICOM) data, were developed. The feasibility of AR-based positioning was evaluated. Then, the setup errors in the three translational axis directions and rotation angles were compared between AR-based and conventional laser-based positioning. RESULTS AR-based positioning of the pelvic phantom was feasible. The setup errors of AR-based positioning were comparable to those of laser-based positioning in all translational axis directions and rotation angles. The time necessary for AR-based positioning was significantly longer than that for laser-based positioning (171.0 s vs. 47.5 s, p<0.001). CONCLUSION AR-based positioning for radiotherapy was feasible and showed positioning errors comparable to those of conventional laser-based positioning; however, a markedly longer setup time was necessary.
Affiliation(s)
- Kazuo Tarutani
- Department of Radiology, Hyogo College of Medicine, Hyogo, Japan
- Japan Organization of Occupational Health and Safety Kansai Rousai Hospital, Hyogo, Japan
- Haruyuki Takaki
- Department of Radiology, Hyogo College of Medicine, Hyogo, Japan
- Masataka Igeta
- Department of Biostatistics, Hyogo College of Medicine, Hyogo, Japan
- Ayako Okamura
- Japan Organization of Occupational Health and Safety Kansai Rousai Hospital, Hyogo, Japan
- Futo Horio
- Kobe Digital Labo Incorporated, Hyogo, Japan
- Yuki Toudou
- Japan Organization of Occupational Health and Safety Kansai Rousai Hospital, Hyogo, Japan
- Satoshi Nakajima
- Japan Organization of Occupational Health and Safety Kansai Rousai Hospital, Hyogo, Japan
- Kazufumi Kagawa
- Japan Organization of Occupational Health and Safety Kansai Rousai Hospital, Hyogo, Japan
- Masao Tanooka
- Department of Radiotherapy, Takarazuka City Hospital, Hyogo, Japan
|
35
|
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563 DOI: 10.1080/17434440.2021.1860750] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Background: Research indicates that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases their fidelity, level of immersion and overall experience. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
|
36
|
Felli E, Urade T, Al-Taher M, Felli E, Barberio M, Goffin L, Ettorre GM, Marescaux J, Pessaux P, Swanstrom L, Diana M. Demarcation Line Assessment in Anatomical Liver Resection: An Overview. Surg Innov 2020; 27:424-430. [DOI: 10.1177/1553350620953651] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
Abstract
Anatomical liver resection (ALR) is the preferred oncological approach for the treatment of primary liver malignancies, such as hepatocellular carcinoma and intrahepatic cholangiocarcinoma. The demarcation line (DL) is formed by means of selective vascular occlusion and is used by surgeons to guide ALR. Emerging intraoperative technologies are playing a major role in enhancing the surgeon's vision and ensuring precise oncologic surgery. In this article, a brief overview of modalities for assessing the DL during ALR is presented, from established conventional techniques to future perspectives.
Affiliation(s)
- Eric Felli
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Institute of Physiology, EA3072 Mitochondria Respiration and Oxidative Stress, University of Strasbourg, France
- Takeshi Urade
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Mahdi Al-Taher
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Emanuele Felli
- Department of General, Digestive, and Endocrine Surgery, University Hospital of Strasbourg, France
- INSERM U1110, Institute of Viral and Liver Disease, University of Strasbourg, France
- Manuel Barberio
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Institute of Physiology, EA3072 Mitochondria Respiration and Oxidative Stress, University of Strasbourg, France
- Giuseppe M. Ettorre
- Department of Transplantation and General Surgery, San Camillo Hospital, Italy
- Jacques Marescaux
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- IRCAD, Research Institute against Digestive Cancer, France
- Patrick Pessaux
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Department of General, Digestive, and Endocrine Surgery, University Hospital of Strasbourg, France
- INSERM U1110, Institute of Viral and Liver Disease, University of Strasbourg, France
- Lee Swanstrom
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Michele Diana
- IHU-Strasbourg, Institute of Image-Guided Surgery, France
- Institute of Physiology, EA3072 Mitochondria Respiration and Oxidative Stress, University of Strasbourg, France
- Department of General, Digestive, and Endocrine Surgery, University Hospital of Strasbourg, France
- IRCAD, Research Institute against Digestive Cancer, France
- ICUBE Laboratory, Photonic Instrumentation for Health, France
|