1
Göbel B, Reiterer A, Möller K. Image-Based 3D Reconstruction in Laparoscopy: A Review Focusing on the Quantitative Evaluation by Applying the Reconstruction Error. J Imaging 2024; 10:180. [PMID: 39194969 DOI: 10.3390/jimaging10080180]
Abstract
Image-based 3D reconstruction enables laparoscopic applications such as image-guided navigation and (autonomous) robot-assisted interventions, which require high accuracy. The purpose of this review is to present the accuracy of different techniques and to identify the most promising ones. A systematic literature search of PubMed and Google Scholar from 2015 to 2023 was conducted, following the framework of "Review articles: purpose, process, and structure". Articles were considered when they presented a quantitative evaluation (root mean squared error and mean absolute error) of the reconstruction error (Euclidean distance between the real and the reconstructed surface). The search yielded 995 articles, which were reduced to 48 after applying exclusion criteria. From these, a reconstruction error data set could be generated for stereo vision, Shape-from-Motion, Simultaneous Localization and Mapping, deep learning, and structured light. The reconstruction error varies from below one millimeter to more than ten millimeters, with deep learning and Simultaneous Localization and Mapping delivering the best results under intraoperative conditions. The high variance stems from differing experimental conditions. In conclusion, submillimeter accuracy remains challenging, but promising image-based 3D reconstruction techniques could be identified. For future research, we recommend computing the reconstruction error for comparison purposes and using ex/in vivo organs as reference objects for realistic experiments.
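The reconstruction error this review uses as its common yardstick reduces to per-point Euclidean distances between a reconstructed surface and a reference surface, aggregated as RMSE or MAE. A minimal sketch of one plausible computation (the point arrays and nearest-reference-point correspondence are illustrative assumptions, not the reviewed papers' exact protocols):

```python
import numpy as np

def reconstruction_errors(reconstructed, reference):
    """Per-point Euclidean distances from each reconstructed point to its
    nearest reference point (a simple correspondence choice), aggregated
    as RMSE and MAE."""
    # Pairwise (N, M) Euclidean distance matrix, then nearest reference point.
    diffs = reconstructed[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)
    rmse = float(np.sqrt(np.mean(dists ** 2)))
    mae = float(np.mean(np.abs(dists)))
    return rmse, mae

# Toy example: a reconstruction offset by 1 mm along z from a flat reference patch.
ref = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=float)
rec = ref + np.array([0.0, 0.0, 1.0])
rmse, mae = reconstruction_errors(rec, ref)
# Both equal 1.0 here because every point-to-surface distance is exactly 1 mm.
```

With a uniform offset the two aggregates coincide; on real surfaces RMSE weights outliers more heavily than MAE, which is why the review reports both.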
Affiliation(s)
- Birthe Göbel
- Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
- KARL STORZ SE & Co. KG, Dr.-Karl-Storz-Street 34, 78532 Tuttlingen, Germany
- Alexander Reiterer
- Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
- Fraunhofer Institute for Physical Measurement Techniques IPM, 79110 Freiburg im Breisgau, Germany
- Knut Möller
- Institute of Technical Medicine-ITeM, Furtwangen University (HFU), 78054 Villingen-Schwenningen, Germany
- Mechanical Engineering, University of Canterbury, Christchurch 8140, New Zealand
2
Yang Z, Dai J, Pan J. 3D reconstruction from endoscopy images: A survey. Comput Biol Med 2024; 175:108546. [PMID: 38704902 DOI: 10.1016/j.compbiomed.2024.108546]
Abstract
Three-dimensional reconstruction of images acquired through endoscopes plays a vital role in an increasing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular or binocular. We review the classification of depth estimation methods according to the type of endoscope. Fundamentally, depth estimation relies on image feature matching and multi-view geometry theory, but these traditional techniques face many problems in the endoscopic environment. With the continuing development of deep learning, a growing number of works use learning-based methods to address challenges such as inconsistent illumination and texture sparsity. We reviewed over 170 papers published in the ten years from 2013 to 2023. The commonly used public datasets and performance metrics are summarized. We also give a taxonomy of methods and analyze the advantages and drawbacks of the algorithms. Summary tables and a results atlas are provided to facilitate comparison of the qualitative and quantitative performance of methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and speculate on the prospects of depth estimation research in medical applications. We also compare the robustness, processing time, and scene representation of the methods to help doctors and researchers select appropriate methods for their surgical applications.
Affiliation(s)
- Zhuoyue Yang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Ju Dai
- Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
3
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. [PMID: 34358880 DOI: 10.1016/j.suronc.2021.101637]
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened against inclusion criteria. Due to the heterogeneity of the retrieved data, it was not possible to conduct a meta-analysis; therefore, results are presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods, it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS, it is crucial to reach a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
4
Collins T, Pizarro D, Gasparini S, Bourdel N, Chauvet P, Canis M, Calvet L, Bartoli A. Augmented Reality Guided Laparoscopic Surgery of the Uterus. IEEE Trans Med Imaging 2021; 40:371-380. [PMID: 32986548 DOI: 10.1109/tmi.2020.3027442]
Abstract
A major research area in Computer Assisted Intervention (CAI) is to aid laparoscopic surgery teams with Augmented Reality (AR) guidance. This involves registering data from other modalities such as MR and fusing it with the laparoscopic video in real-time, to reveal the location of hidden critical structures. We present the first system for AR guided laparoscopic surgery of the uterus. This works with pre-operative MR or CT data and monocular laparoscopes, without requiring any additional interventional hardware such as optical trackers. We present novel and robust solutions to two main sub-problems: the initial registration, which is solved using a short exploratory video, and update registration, which is solved with real-time tracking-by-detection. These problems are challenging for the uterus because it is a weakly-textured, highly mobile organ that moves independently of surrounding structures. In the broader context, our system is the first that has successfully performed markerless real-time registration and AR of a mobile human organ with monocular laparoscopes in the OR.
5
Singh T, Alsadoon A, Prasad P, Alsadoon OH, Venkata HS, Alrubaie A. A novel enhanced hybrid recursive algorithm: Image processing based augmented reality for gallbladder and uterus visualisation. Egyptian Informatics Journal 2020. [DOI: 10.1016/j.eij.2019.11.003]
6
Heiselman JS, Jarnagin WR, Miga MI. Intraoperative Correction of Liver Deformation Using Sparse Surface and Vascular Features via Linearized Iterative Boundary Reconstruction. IEEE Trans Med Imaging 2020; 39:2223-2234. [PMID: 31976882 PMCID: PMC7314378 DOI: 10.1109/tmi.2020.2967322]
Abstract
During image guided liver surgery, soft tissue deformation can cause considerable error when attempting to achieve accurate localization of the surgical anatomy through image-to-physical registration. In this paper, a linearized iterative boundary reconstruction technique is proposed to account for these deformations. The approach leverages a superposed formulation of boundary conditions to rapidly and accurately estimate the deformation applied to a preoperative model of the organ given sparse intraoperative data of surface and subsurface features. With this method, tracked intraoperative ultrasound (iUS) is investigated as a potential data source for augmenting registration accuracy beyond the capacity of conventional organ surface registration. In an expansive simulated dataset, features including vessel contours, vessel centerlines, and the posterior liver surface are extracted from iUS planes. Registration accuracy is compared across increasing data density to establish how iUS can be best employed to improve target registration error (TRE). From a baseline average TRE of 11.4 ± 2.2 mm using sparse surface data only, incorporating additional sparse features from three iUS planes improved average TRE to 6.4 ± 1.0 mm. Furthermore, increasing the sparse coverage to 16 tracked iUS planes improved average TRE to 3.9 ± 0.7 mm, exceeding the accuracy of registration based on complete surface data available with more cumbersome intraoperative CT without contrast. Additionally, the approach was applied to three clinical cases where on average error improved 67% over rigid registration and 56% over deformable surface registration when incorporating additional features from one independent tracked iUS plane.
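The target registration error (TRE) reported above is the Euclidean distance between each registered target and its known true position, summarized as mean ± standard deviation. A minimal sketch of that summary (the target coordinates below are hypothetical, not the paper's data):

```python
import numpy as np

def target_registration_error(registered_targets, true_targets):
    """TRE: Euclidean distance between each registered target and its
    known true position, summarized as mean and standard deviation."""
    d = np.linalg.norm(registered_targets - true_targets, axis=1)
    return float(d.mean()), float(d.std())

# Toy example: three subsurface targets where registration leaves a
# uniform residual offset of 3 mm along x.
truth = np.array([[10.0, 0.0, 5.0], [20.0, 5.0, 8.0], [15.0, -3.0, 2.0]])
registered = truth + np.array([3.0, 0.0, 0.0])
mean_tre, std_tre = target_registration_error(registered, truth)
# mean_tre == 3.0 and std_tre == 0.0 for this uniform offset.
```

Figures such as "11.4 ± 2.2 mm" in the abstract are exactly this mean-and-spread summary taken over many targets (and, in simulation studies, many trials).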
Affiliation(s)
- William R. Jarnagin
- Department of Surgery at Memorial Sloan Kettering Cancer Center, New York, NY 10065 USA
- Michael I. Miga
- Department of Biomedical Engineering at Vanderbilt University, Nashville, TN 37235 USA
7
Singh P, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S, Alrubaie A. A novel augmented reality to visualize the hidden organs and internal structure in surgeries. Int J Med Robot 2020; 16:e2055. [DOI: 10.1002/rcs.2055]
Affiliation(s)
- P. Singh
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, New South Wales, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie
- Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
8
A case study: impact of target surface mesh size and mesh quality on volume-to-surface registration performance in hepatic soft tissue navigation. Int J Comput Assist Radiol Surg 2020; 15:1235-1245. [PMID: 32221798 PMCID: PMC7351822 DOI: 10.1007/s11548-020-02123-0]
Abstract
Purpose Soft tissue deformation severely impacts the registration of pre- and intra-operative image data during computer-assisted navigation in laparoscopic liver surgery. However, quantifying the impact of target surface size, surface orientation, and mesh quality on non-rigid registration performance remains an open research question. This paper aims to uncover how these affect volume-to-surface registration performance. Methods To find such evidence, we design three experiments that are evaluated using a three-step pipeline: (1) volume-to-surface registration using the physics-based shape matching method (PBSM), (2) voxelization of the deformed surface to a 1024³ voxel grid, and (3) computation of similarity (e.g., mutual information), distance (i.e., Hausdorff distance), and classical metrics (i.e., mean squared error or MSE). Results Using the Hausdorff distance, we report a statistical significance for the different partial surfaces. We found that removing non-manifold geometry and noise improved registration performance, and a target surface size of only 16.5% was necessary. Conclusion By investigating three different factors and improving registration results, we defined a generalizable evaluation pipeline and automatic post-processing strategies that were deemed helpful. All source code, reference data, models, and evaluation results are openly available for download: https://github.com/ghattab/EvalPBSM/.
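The Hausdorff distance used as an evaluation metric in this pipeline is the largest nearest-neighbor gap between two surfaces, taken symmetrically. A minimal sketch over sampled point sets (the toy 2D arrays are illustrative; real use would sample the voxelized surfaces):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets: the worst-case
    nearest-neighbor distance, taken in both directions."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy example: the sets match except one point in b sits 5 units away
# from everything in a, which dominates the symmetric distance.
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 6.0]])
hd = hausdorff_distance(a, b)
# hd == 5.0: driven entirely by the single outlier point (0, 6).
```

Because it takes a maximum rather than an average, the Hausdorff distance is sensitive to single outliers, which is one reason pipelines like this pair it with averaged metrics such as MSE.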
9
Chen F, Cui X, Liu J, Han B, Zhang X, Zhang D, Liao H. Tissue Structure Updating for In Situ Augmented Reality Navigation Using Calibrated Ultrasound and Two-Level Surface Warping. IEEE Trans Biomed Eng 2020; 67:3211-3222. [PMID: 32175853 DOI: 10.1109/tbme.2020.2979535]
Abstract
OBJECTIVE In minimally invasive surgery (MIS), in situ augmented reality (AR) navigation systems are usually implemented using a glasses-free 3D display to represent the preoperative tissue structure and can provide intuitive see-through guidance information. However, because the tissue changes intraoperatively, the preoperative tissue structure no longer corresponds exactly to reality, which limits the precision of in situ AR navigation. To solve this problem, we propose a method to update the tissue structure for in situ AR navigation in such a way as to reflect intraoperative tissue changes. METHODS The proposed method is based on calibrated ultrasound and two-level surface warping technologies. First, particle filter-based calibration is performed to calibrate the ultrasound probe and obtain the intraoperative positions of anatomical points. Second, these intraoperative positions are input to the two-level surface warping method to update the preoperative tissue structure. Finally, the updated tissue structure is rendered on the glasses-free 3D display and superimposed onto the patient by a translucent mirror for in situ AR navigation. RESULTS We validated the proposed method by simulating liver tissue intervention and achieved a tissue-updating accuracy of 92.86%. Furthermore, the targeting error of AR navigation based on the proposed method was evaluated in minimally invasive liver surgery, and the mean targeting error was 1.92 mm. CONCLUSION The results demonstrate that the proposed AR navigation method is effective. SIGNIFICANCE The proposed method can facilitate MIS, as it provides accurate 3D navigation.
10
Pfeiffer M, Riediger C, Weitz J, Speidel S. Learning soft tissue behavior of organs for surgical navigation with convolutional neural networks. Int J Comput Assist Radiol Surg 2019; 14:1147-1155. [PMID: 30993520 DOI: 10.1007/s11548-019-01965-7]
Abstract
PURPOSE In surgical navigation, pre-operative organ models are presented to surgeons during the intervention to help them efficiently find their target. In the case of soft tissue, these models need to be deformed and adapted to the current situation using intra-operative sensor data. A promising way to realize this is with real-time capable biomechanical models. METHODS We train a fully convolutional neural network to estimate a displacement field for all points inside an organ when given only the displacement of a part of the organ's surface. The network trains entirely on synthetic data of random organ-like meshes, which allows us to use much more data than is otherwise available. The input and output data are discretized into a regular grid, allowing us to fully utilize the capabilities of convolutional operators and to train and infer in a highly parallelized manner. RESULTS The system is evaluated on in-silico liver models, phantom liver data, and human in-vivo breathing data. We test the performance with varying material parameters, organ shapes, and amounts of visible surface. Even though the network is trained only on synthetic data, it adapts well to the various cases and gives a good estimate of the internal organ displacement. Inference runs at over 50 frames per second. CONCLUSION We present a novel method for training a data-driven, real-time capable deformation model. Its accuracy is comparable to other registration methods, it adapts very well to previously unseen organs, and it does not need to be re-trained for every patient. The high inference speed makes this method useful for many applications such as surgical navigation and real-time simulation.
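The key input representation in this abstract is sparse surface displacements discretized onto a regular grid so that convolutional operators can process them. The sketch below shows one plausible discretization (the grid size, bounds, and cell-averaging rule are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def rasterize_displacements(points, displacements, grid_size=8, bounds=(0.0, 1.0)):
    """Scatter sparse surface-point displacement vectors into a regular
    (grid_size, grid_size, grid_size, 3) grid, averaging points that fall
    into the same cell. Empty cells stay zero."""
    lo, hi = bounds
    grid = np.zeros((grid_size, grid_size, grid_size, 3))
    counts = np.zeros((grid_size, grid_size, grid_size, 1))
    # Map each point's coordinates to an integer cell index.
    idx = np.clip(((points - lo) / (hi - lo) * grid_size).astype(int),
                  0, grid_size - 1)
    for (i, j, k), d in zip(idx, displacements):
        grid[i, j, k] += d
        counts[i, j, k] += 1
    # Average per cell; leave untouched cells at zero.
    return np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)

# Toy example: two surface points with known displacement vectors.
pts = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]])
disp = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
grid = rasterize_displacements(pts, disp)
```

A network like the one described would then take this dense grid as input and emit a grid of the same shape holding the estimated displacement of every interior point.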
Affiliation(s)
- Micha Pfeiffer
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
- Carina Riediger
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital, Technical University Dresden, Dresden, Germany
- Jürgen Weitz
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital, Technical University Dresden, Dresden, Germany
- Stefanie Speidel
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
11
Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018; 57:995-1013. [PMID: 30511205 DOI: 10.1007/s11517-018-1929-6]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages in surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have only a restricted view of the operative field, accessed indirectly through the 2D images provided by a camera inserted into the body. Augmented reality provides an "X-ray vision" of the patient's anatomy by visualizing the internal organs, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues that allow the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. The test scenarios demonstrated the efficacy and accuracy of the system. Moreover, tests in the operating room suggested some modifications to the tracking system to make it more robust with respect to occlusions. Graphical Abstract: Augmented visualization in minimally invasive surgery.
12
In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg 2018; 13:865-874. [PMID: 29663273 PMCID: PMC5973973 DOI: 10.1007/s11548-018-1761-3]
Abstract
PURPOSE Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
13
Speidel S, Bodenstedt S, Maier-Hein L, Kenngott H. Kognitive Chirurgie/Chirurgie 4.0 [Cognitive surgery/Surgery 4.0]. Coloproctology 2018. [DOI: 10.1007/s00053-018-0236-x]
14
Heiselman JS, Clements LW, Collins JA, Weis JA, Simpson AL, Geevarghese SK, Kingham TP, Jarnagin WR, Miga MI. Characterization and correction of intraoperative soft tissue deformation in image-guided laparoscopic liver surgery. J Med Imaging (Bellingham) 2017; 5:021203. [PMID: 29285519 DOI: 10.1117/1.jmi.5.2.021203]
Abstract
Laparoscopic liver surgery is challenging to perform due to a compromised ability of the surgeon to localize subsurface anatomy in the constrained environment. While image guidance has the potential to address this barrier, intraoperative factors, such as insufflation and variable degrees of organ mobilization from supporting ligaments, may generate substantial deformation. The severity of laparoscopic deformation in humans has not been characterized, and current laparoscopic correction methods do not account for the mechanics of how intraoperative deformation is applied to the liver. We first measure the degree of laparoscopic deformation at two insufflation pressures over the course of laparoscopic-to-open conversion in 25 patients. With this clinical data alongside a mock laparoscopic phantom setup, we report a biomechanical correction approach that leverages anatomically load-bearing support surfaces from ligament attachments to iteratively reconstruct and account for intraoperative deformations. Laparoscopic deformations were significantly larger than deformations associated with open surgery, and our correction approach yielded subsurface target error of [Formula: see text] and surface error of [Formula: see text] using only sparse surface data with realistic surgical extent. Laparoscopic surface data extents were examined and found to impact registration accuracy. Finally, we demonstrate viability of the correction method with clinical data.
Affiliation(s)
- Jon S Heiselman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Logan W Clements
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jarrod A Collins
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jared A Weis
- Wake Forest School of Medicine, Department of Biomedical Engineering, Winston-Salem, North Carolina, United States
- Amber L Simpson
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Sunil K Geevarghese
- Vanderbilt University Medical Center, Division of Hepatobiliary Surgery and Liver Transplantation, Nashville, Tennessee, United States
- T Peter Kingham
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- William R Jarnagin
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Michael I Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States