1
Han Z, Dou Q. A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality. Comput Assist Surg (Abingdon) 2024; 29:2357164. PMID: 39253945. DOI: 10.1080/24699322.2024.2357164. Open access.
Abstract
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved through superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling of intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the existence of various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technical-oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery, and discuss the potential topics for future advancements.
Affiliation(s)
- Zheng Han
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2
Oya T, Kadomatsu Y, Chen-Yoshikawa TF, Nakao M. 2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation. Comput Med Imaging Graph 2024; 116:102418. PMID: 39079410. DOI: 10.1016/j.compmedimag.2024.102418.
Abstract
Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of applications of machine learning methods have been considered. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the influence on estimation caused by the difference between synthetic images and real scenes is a problem. In this study, we propose a self-supervised offline learning framework for model-based registration using image features commonly obtained from synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use a synthetic image generated from the nonlinear deformation model that represents possible intraoperative pneumothorax deformations. In order to solve the difficulty in estimating deformed shapes and viewpoints from the common image features obtained from synthetic and real images, we attempted to improve the registration error by adding the shading and distance information that can be obtained as prior knowledge in the synthetic image. Shape registration with real camera images is performed by learning the task of predicting the differential model parameters between two synthetic images. The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in a thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.
Affiliation(s)
- Tomoki Oya
- Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto, 606-8501, Japan
- Yuka Kadomatsu
- Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Megumi Nakao
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-cho, Sakyo, Kyoto, 606-8507, Japan
3
Smit JN, Kuhlmann KFD, Thomson BR, Kok NFM, Ruers TJM, Fusaglia M. Ultrasound guidance in navigated liver surgery: toward deep-learning enhanced compensation of deformation and organ motion. Int J Comput Assist Radiol Surg 2024; 19:1-9. PMID: 37249749. DOI: 10.1007/s11548-023-02942-x.
Abstract
PURPOSE Accuracy of image-guided liver surgery is challenged by deformation of the liver during the procedure. This study aims at improving navigation accuracy by using intraoperative deep learning segmentation and nonrigid registration of hepatic vasculature from ultrasound (US) images to compensate for changes in liver position and deformation. METHODS This was a single-center prospective study of patients with liver metastases from any origin. Electromagnetic tracking was used to follow US and liver movement. A preoperative 3D model of the liver, including liver lesions, and hepatic and portal vasculature, was registered with the intraoperative organ position. Hepatic vasculature was segmented using a reduced 3D U-Net and registered to preoperative imaging after initial alignment followed by nonrigid registration. Accuracy was assessed as Euclidean distance between the tumor center imaged in the intraoperative US and the registered preoperative image. RESULTS Median target registration error (TRE) after initial alignment was 11.6 mm in 25 procedures and improved to 6.9 mm after nonrigid registration (p = 0.0076). The number of TREs above 10 mm halved from 16 to 8 after nonrigid registration. In 9 cases, registration was performed twice after failure of the first attempt. The first registration cycle was completed in median 11 min (8:00-18:45 min) and a second in 5 min (2:30-10:20 min). CONCLUSION This novel registration workflow using automatic vascular detection and nonrigid registration allows to accurately localize liver lesions. Further automation in the workflow is required in initial alignment and classification accuracy.
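The accuracy metric in this study, target registration error (TRE) as the Euclidean distance between the tumor center in the intraoperative US and in the registered preoperative image, can be sketched as follows. This is a minimal illustration with invented coordinates, not the study's code:

```python
import math
import statistics

def tre(p, q):
    """Euclidean distance (mm) between a tumor center located in the
    intraoperative US and the same center in the registered preoperative image."""
    return math.dist(p, q)

# Hypothetical tumor-center pairs (mm) from three procedures.
pairs = [
    ((10.0, 22.0, 5.0), (14.0, 25.0, 5.0)),
    ((-3.0, 8.0, 40.0), (-1.0, 8.0, 46.0)),
    ((60.0, -12.0, 18.0), (62.0, -9.0, 18.0)),
]
tres = [tre(p, q) for p, q in pairs]
print(statistics.median(tres))  # median TRE over procedures, as reported in the paper
```

The paper reports this median before and after nonrigid registration (11.6 mm vs. 6.9 mm); the same computation applies to both stages.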
Affiliation(s)
- Jasper N Smit
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands.
- Koert F D Kuhlmann
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Bart R Thomson
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Niels F M Kok
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Theo J M Ruers
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
- Nanobiophysics Group (NBP), Faculty of Science and Technology (TNW), University of Twente, Enschede, The Netherlands
- Matteo Fusaglia
- Department of Surgical Oncology, The Netherlands Cancer Institute-Antoni van Leeuwenhoek, Plesmanlaan 121, 1066CX, Amsterdam, The Netherlands
4
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their vision of the operating field by means of digital imagery, particularly as regards tumors and anatomical structures. While this type of technology is routinely utilized in some specialties, its applications in liver surgery remain limited due to the complexity of modeling organ deformation in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize and analyze series reporting AR techniques tested in liver surgery, the objectives being to establish a state of the art and to outline perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 corresponding to the following keywords: augmented reality, hepatic surgery, liver and hepatectomy. RESULTS Initially, 102 titles, studies and summaries were preselected. Twenty-eight corresponding to the inclusion criteria were included, reporting on 183 patients operated with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by time of acquisition and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited due to insufficient anatomical precision. That much said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term, the application of AR in liver surgery will become more frequent and effective. As for its clinical impact, notably in oncology, it remains to be assessed.
Affiliation(s)
- B Acidi
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- M Ghallab
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France
- S Cotin
- Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- E Vibert
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
- N Golse
- Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
5
Shahkoo AA, Abin AA. Deep reinforcement learning in continuous action space for autonomous robotic surgery. Int J Comput Assist Radiol Surg 2023; 18:423-431. PMID: 36383302. DOI: 10.1007/s11548-022-02789-8.
Abstract
PURPOSE Reinforcement learning methods have shown promising results for the automation of sub-tasks in robotic surgery systems. With the development of these methods, surgical robots have been able to achieve good performances, so that they can be used in complex and high-risk environments such as surgical pattern cutting to reduce stress and pressure on the surgeon and increase surgical accuracy. This study has aimed at providing a deep reinforcement learning-based approach to control the gripper arm when cutting soft tissue in a continuous action space. METHODS Surgical soft tissue cutting in this study is performed by controlling the gripper arm in a continuous action space and a grid observation space. In the proposed method using deep reinforcement learning, we find an optimal tensioning policy in the continuous action space that increases the cutting accuracy of the predetermined pattern. RESULTS The simulation results demonstrated that in the cutting of many complex patterns, the proposed method works better than the methods in which the tensioning was performed in a discrete action space and the observation space was modeled as a partial and random representation. CONCLUSION We introduced a deep reinforcement learning-based method for obtaining the optimal tensioning policy in a continuous action space when cutting a predetermined pattern. We showed that the proposed approach outperforms the state-of-the-art method in the soft pattern cutting task with respect to accuracy.
Affiliation(s)
- Amin Abbasi Shahkoo
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Daneshjou Blvd., Tehran, Tehran, 1983969411, Iran
- Ahmad Ali Abin
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Daneshjou Blvd., Tehran, Tehran, 1983969411, Iran
6
Soltani-Sarvestani MA, Cotin S, Saccomandi P. Unscented Kalman Filtering for Real Time Thermometry During Laser Ablation Interventions. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3485-3488. PMID: 36085919. DOI: 10.1109/embc48229.2022.9871282.
Abstract
We present a data-assimilation Bayesian framework in the context of laser ablation for the treatment of cancer. For solving the nonlinear estimation of the tissue temperature evolving during the therapy, the Unscented Kalman Filter (UKF) predicts the next thermal status and controls the ablation process, based on sparse temperature information. The purpose of this paper is to study the outcome of the prediction model based on UKF and to assess the influence of different model settings on the framework performances. In particular, we analyze the effects of the time resolution of the filter and the number and the location of the observations. Clinical Relevance - The application of a data-assimilation approach based on limited temperature information allows to monitor and predict in real-time the thermal effects induced by thermal therapy for tumors.
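The filtering scheme described, predicting the next thermal state from sparse temperature readings, can be sketched for a scalar state. The following is a generic textbook UKF step with an invented first-order cooling/heating model; the function name, the model, and all parameter values are illustrative assumptions, not the authors' implementation:

```python
import math

def ukf_step(x, P, z, f, h, Q, R, alpha=1e-3, beta=2.0, kappa=0.0):
    """One predict/update cycle of a scalar unscented Kalman filter
    (Van der Merwe sigma-point weights, n = 1)."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    c = n + lam
    wm = [lam / c, 1 / (2 * c), 1 / (2 * c)]
    wc = [lam / c + (1 - alpha ** 2 + beta), 1 / (2 * c), 1 / (2 * c)]
    # Predict: propagate sigma points through the process model f.
    s = math.sqrt(c * P)
    sig = [f(x), f(x + s), f(x - s)]
    xp = sum(w * si for w, si in zip(wm, sig))
    Pp = sum(w * (si - xp) ** 2 for w, si in zip(wc, sig)) + Q
    # Update: propagate predicted sigma points through the measurement model h.
    s = math.sqrt(c * Pp)
    sigx = [xp, xp + s, xp - s]
    sigz = [h(si) for si in sigx]
    zp = sum(w * zi for w, zi in zip(wm, sigz))
    S = sum(w * (zi - zp) ** 2 for w, zi in zip(wc, sigz)) + R
    C = sum(w * (xi - xp) * (zi - zp) for w, xi, zi in zip(wc, sigx, sigz))
    K = C / S
    return xp + K * (z - zp), Pp - K * K * S

# Toy process model: laser heating q minus Newtonian cooling toward ambient.
dt, k, q, T_amb = 1.0, 0.1, 2.0, 37.0
f = lambda T: T + dt * (q - k * (T - T_amb))
h = lambda T: T  # a temperature sensor observing the state directly
x, P = 37.0, 4.0
for z in [39.1, 40.8, 42.3]:  # sparse readings (degC)
    x, P = ukf_step(x, P, z, f, h, Q=0.05, R=0.5)
```

With a linear process and measurement model, this step reduces exactly to the ordinary Kalman filter, which is a convenient sanity check for the sigma-point arithmetic.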
7
A 3D Image Registration Method for Laparoscopic Liver Surgery Navigation. Electronics 2022. DOI: 10.3390/electronics11111670.
Abstract
At present, laparoscopic augmented reality (AR) navigation has been applied to minimally invasive abdominal surgery, which can help doctors to see the location of blood vessels and tumors in organs, so as to perform precise surgery operations. Image registration is the process of optimally mapping one or more images to the target image, and it is also the core of laparoscopic AR navigation. The key is how to shorten the registration time and optimize the registration accuracy. We have studied the three-dimensional (3D) image registration technology in laparoscopic liver surgery navigation and proposed a new registration method combining rough registration and fine registration. First, the adaptive fireworks algorithm (AFWA) is applied to rough registration, and then the optimized iterative closest point (ICP) algorithm is applied to fine registration. We proposed a method that is validated by the computed tomography (CT) dataset 3D-IRCADb-01. Experimental results show that our method is superior to other registration methods based on stochastic optimization algorithms in terms of registration time and accuracy.
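The fine-registration stage above relies on ICP, whose core loop (match each source point to its nearest target point, then solve a closed-form rigid fit) can be sketched in 2D with only the standard library. This is a generic illustration of the ICP idea, not the paper's optimized 3D variant or its AFWA rough-registration stage:

```python
import math

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP: brute-force nearest neighbours,
    then a closed-form rigid (rotation + translation) fit per iteration.
    Returns the source points moved onto the target cloud."""
    pts = list(src)
    for _ in range(iters):
        # 1. Correspondences: nearest dst point for every current src point.
        pairs = [(p, min(dst, key=lambda q: math.dist(p, q))) for p in pts]
        # 2. Closed-form rigid fit (2D Kabsch) on the matched, centered pairs.
        m = len(pairs)
        cxp = sum(p[0] for p, _ in pairs) / m
        cyp = sum(p[1] for p, _ in pairs) / m
        cxq = sum(q[0] for _, q in pairs) / m
        cyq = sum(q[1] for _, q in pairs) / m
        sxx = sum((p[0] - cxp) * (q[0] - cxq) + (p[1] - cyp) * (q[1] - cyq)
                  for p, q in pairs)
        sxy = sum((p[0] - cxp) * (q[1] - cyq) - (p[1] - cyp) * (q[0] - cxq)
                  for p, q in pairs)
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        tx = cxq - (c * cxp - s * cyp)
        ty = cyq - (s * cxp + c * cyp)
        # 3. Apply the incremental transform and iterate.
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts
```

The rough-registration stage matters precisely because this loop only converges to the correct alignment when the initial pose is already close enough for the nearest-neighbour correspondences to be mostly right.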
8
Zhang F, Zhang S, Sun L, Zhan W, Sun L. Research on registration and navigation technology of augmented reality for ex-vivo hepatectomy. Int J Comput Assist Radiol Surg 2021; 17:147-155. PMID: 34800225. DOI: 10.1007/s11548-021-02531-w.
Abstract
PURPOSE The application of augmented reality technology to the partial hepatectomy procedure has high practical significance, but the existing augmented reality navigation system has major drawbacks in the display and registration methods, which result in low precision. The augmented reality surgical navigation system proposed in this study has been improved in the above two aspects, which can significantly improve the surgical accuracy. METHODS The use of optical see-through head-mounted displays for imaging displays can prevent doctors from reconstructing the patient's two-dimensional image information in their minds and reduce the psychological burden of doctors. In the registration process, the biomechanical properties of the liver are introduced, and a non-rigid registration method based on biomechanics is proposed and realized by a meshless algorithm. In addition, this study uses the moving grid algorithm to carry out clinical experiments on ex-vivo pig liver for experimental verification. RESULTS The mark-based interactive registration error is 4.21 ± 1.6 mm, and the registration error is reduced after taking the biomechanical properties of the liver into account, which is - 0.153 ± 0.398 mm. The cutting error of the liver model is 0.159 ± 0.292 mm. In addition, with the aid of the navigation system proposed in this paper, the experiment of ex-vivo pig liver cutting was completed with an error of - 1.164 ± 0.576 mm. CONCLUSIONS As a proof-of-concept study, the augmented reality navigation system proposed in this study improves the traditional image-guided surgery in terms of display and registration methods, and the feasibility of the system is verified by ex-vivo pig liver experiments. Therefore, the navigation system has a certain guiding significance for clinical surgery.
Affiliation(s)
- Fengfeng Zhang
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
- Shi Zhang
- College of Mechanical and Engineering, Harbin Engineering University, Harbin, 150001, China
- Long Sun
- College of Mechanical and Engineering, Harbin Engineering University, Harbin, 150001, China
- Wei Zhan
- The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
9
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. PMID: 34358880. DOI: 10.1016/j.suronc.2021.101637.
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short term outcomes. It is however technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. Therefore results are presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met inclusion criteria. Data from 24 articles that reported on accuracy indicates that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state of the art IGS may be useful as a supplementary navigation tool, especially in small liver lesions that are difficult to locate. They are however not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK.
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
10
Nitta J, Nakao M, Imanishi K, Matsuda T. Deep Learning Based Lung Region Segmentation with Data Preprocessing by Generative Adversarial Nets. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1278-1281. PMID: 33018221. DOI: 10.1109/embc44109.2020.9176214.
Abstract
In endoscopic surgery, it is necessary to understand the three-dimensional structure of the target region to improve safety. For organs that do not deform much during surgery, preoperative computed tomography (CT) images can be used to understand their three-dimensional structure, however, deformation estimation is necessary for organs that deform substantially. Even though the intraoperative deformation estimation of organs has been widely studied, two-dimensional organ region segmentations from camera images are necessary to perform this estimation. In this paper, we propose a region segmentation method using U-net for the lung, which is an organ that deforms substantially during surgery. Because the accuracy of the results for smoker lungs is lower than that for non-smoker lungs, we improved the accuracy by translating the texture of the lung surface using a CycleGAN.
11
Singh T, Alsadoon A, Prasad P, Alsadoon OH, Venkata HS, Alrubaie A. A novel enhanced hybrid recursive algorithm: Image processing based augmented reality for gallbladder and uterus visualisation. Egyptian Informatics Journal 2020. DOI: 10.1016/j.eij.2019.11.003.
12
Singh P, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S, Alrubaie A. A novel augmented reality to visualize the hidden organs and internal structure in surgeries. Int J Med Robot 2020; 16:e2055. DOI: 10.1002/rcs.2055.
Affiliation(s)
- P. Singh
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, New South Wales, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie
- Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
13
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. PMID: 32780240. PMCID: PMC7524854. DOI: 10.1007/s00464-020-07807-x.
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound. CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
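The manual vs. semi-automatic accuracy comparison above is a two-sample test on per-patient registration errors. As a generic illustration of how such a comparison can be made without distributional assumptions (with made-up error values; this is not the study's data, and the study does not state that a permutation test was used), a permutation test on the difference in means looks like:

```python
import random
import statistics

def perm_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means between two
    samples of registration errors (mm). Returns an estimated p-value."""
    rng = random.Random(seed)
    obs = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel errors at random under the null
        d = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(d) >= abs(obs):
            hits += 1
    return hits / n_perm

# Hypothetical per-patient errors (mm) for the two registration modes.
manual = [10.1, 8.4, 15.2, 9.8, 12.3, 11.0]
semi_auto = [13.5, 14.1, 12.2, 16.0, 13.0, 14.8]
p = perm_test(manual, semi_auto)
```

A non-significant p-value here, as in the study's p = 0.158, means the observed mean difference is compatible with chance at the given sample size, which is why the authors call for a larger cohort.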
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK
| | - S. Thompson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - J. Totz
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - Y. Song
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK
| | - M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
| | - A. E. Desjardins
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - D. Barratt
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - S. Ourselin
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK ,Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
| | - D. Stoyanov
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Computer Science, University College London, London, UK
| | - M. J. Clarkson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - D. J. Hawkes
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK ,Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
| |
|
14
|
Chen L, Tang W, John NW, Wan TR, Zhang JJ. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 158:135-146. [PMID: 29544779 DOI: 10.1016/j.cmpb.2018.02.006] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Revised: 01/03/2018] [Accepted: 02/02/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also poses significant challenges to a surgeon's performance due to well-known restrictions on the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS scenes have mainly focused on information overlay without addressing correct spatial calibration, which can lead to incorrect localization of annotations and labels, and to inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from monocular MIS videos alone for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drift and inaccurate planar mapping. METHODS A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm from robotics has been extended to monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point cloud data.
Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration. RESULTS We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluation of the proposed framework consists of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the tracking results with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) between the surface vertices of the reconstructed mesh and those of the ground-truth 3D models. An error of 1.24 mm was obtained for the camera trajectories, and the RMSD for surface reconstruction is 2.54 mm, which compares favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of AR-based annotation and measurement, and the creation of depth cues. These results show the promise of our geometry-aware AR technology for use in MIS surgical scenes. CONCLUSIONS The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscope camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
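The RMSD evaluation described in this abstract can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' implementation: the toy vertex arrays are hypothetical, and correspondence is established by a brute-force nearest-neighbour search.

```python
import numpy as np

def surface_rmsd(recon_vertices, gt_vertices):
    """Root Mean Square Distance from each reconstructed vertex to its
    nearest ground-truth vertex (brute-force closest-point search)."""
    recon = np.asarray(recon_vertices, dtype=float)
    gt = np.asarray(gt_vertices, dtype=float)
    # Pairwise distances, shape (N_recon, N_gt)
    dists = np.linalg.norm(recon[:, None, :] - gt[None, :, :], axis=2)
    nearest = dists.min(axis=1)  # closest-point distance per vertex
    return float(np.sqrt(np.mean(nearest ** 2)))

# Toy check: a unit square of vertices shifted 0.1 along z
gt = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
recon = gt + np.array([0.0, 0.0, 0.1])
print(surface_rmsd(recon, gt))  # 0.1 for a uniform offset
```

For large meshes a k-d tree (e.g., `scipy.spatial.cKDTree`) would replace the quadratic-cost distance matrix.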
|
15
|
Luo X, Mori K, Peters TM. Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications. Annu Rev Biomed Eng 2018; 20:221-251. [PMID: 29505729 DOI: 10.1146/annurev-bioeng-062117-120917] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
Affiliation(s)
- Xiongbiao Luo
- Department of Computer Science, Fujian Key Laboratory of Computing and Sensing for Smart City, Xiamen University, Xiamen 361005, China;
| | - Kensaku Mori
- Department of Intelligent Systems, Graduate School of Informatics, Nagoya University, Nagoya 464-8601, Japan;
| | - Terry M Peters
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada;
| |
|
16
|
Kobayashi L, Zhang XC, Collins SA, Karim N, Merck DL. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training. West J Emerg Med 2017; 19:158-164. [PMID: 29383074 PMCID: PMC5785186 DOI: 10.5811/westjem.2017.10.35026] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Revised: 10/17/2017] [Accepted: 10/29/2017] [Indexed: 11/11/2022] Open
Abstract
Introduction Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use cases in acute care clinical and instructional settings. Methods Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients’ de-identified computed tomography and magnetic resonance image sets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based “blind insertion” invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner’s AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance.
Conclusion Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular holoimages during exploratory pilot stage applications for invasive procedure training that featured innovative AR/MR techniques on off-the-shelf headset devices.
Affiliation(s)
- Leo Kobayashi
- Alpert Medical School of Brown University, Department of Emergency Medicine, Providence, Rhode Island
| | - Xiao Chi Zhang
- Alpert Medical School of Brown University, Department of Emergency Medicine, Providence, Rhode Island
| | - Scott A Collins
- Rhode Island Hospital, CT Scan Department, Providence, Rhode Island
| | - Naz Karim
- Alpert Medical School of Brown University, Department of Emergency Medicine, Providence, Rhode Island
| | - Derek L Merck
- Alpert Medical School of Brown University, Department of Diagnostic Imaging, Providence, Rhode Island
| |
|
17
|
Chen L, Tang W, John NW. Real-time geometry-aware augmented reality in minimally invasive surgery. Healthc Technol Lett 2017; 4:163-167. [PMID: 29184658 PMCID: PMC5683199 DOI: 10.1049/htl.2017.0068] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 07/31/2017] [Indexed: 11/25/2022] Open
Abstract
The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero-mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localise tumours and vessels, and label measurements with greater precision and accuracy than state-of-the-art approaches.
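The zero-mean normalised cross-correlation (ZNCC) score at the heart of the stereo-matching step described above can be illustrated on a single patch pair. The patch values below are made up for illustration; a real matcher would evaluate this score along epipolar lines between the left and right views.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-12):
    """Zero-mean normalised cross-correlation of two equally sized patches.
    Returns a score in [-1, 1]; 1 means identical up to gain and offset,
    which makes the score robust to illumination changes between views."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()  # remove mean (offset invariance)
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)  # normalise (gain invariance)
    return float(a @ b / (denom + eps))

left = np.array([[10, 20], [30, 40]])
right_same = left * 2 + 5                     # affine intensity change only
right_diff = np.array([[40, 30], [20, 10]])   # reversed pattern
print(zncc(left, right_same))  # ≈ 1.0 (perfect match despite gain/offset)
print(zncc(left, right_diff))  # ≈ -1.0 (anti-correlated)
```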
Affiliation(s)
- Long Chen
- Department of Creative Technology, Bournemouth University, Poole, UK
| | - Wen Tang
- Department of Creative Technology, Bournemouth University, Poole, UK
| | - Nigel W. John
- Department of Computer Science, University of Chester, Chester, UK
| |
|
18
|
Schoob A, Kundrat D, Kahrs LA, Ortmaier T. Stereo vision-based tracking of soft tissue motion with application to online ablation control in laser microsurgery. Med Image Anal 2017. [PMID: 28624755 DOI: 10.1016/j.media.2017.06.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Recent research has revealed that image-based methods can enhance accuracy and safety in laser microsurgery. In this study, non-rigid tracking using surgical stereo imaging and its application to laser ablation is discussed. A recently developed motion estimation framework based on piecewise affine deformation modeling is extended with a mesh refinement step and the consideration of texture information. This compensates for tracking inaccuracies potentially caused by inconsistent feature matches or drift. To facilitate online application of the method, computational load is reduced by concurrent processing and affine-invariant fusion of tracking and refinement results. The residual latency-dependent tracking error is further minimized by Kalman filter-based upsampling, considering a motion model in disparity space. Accuracy is assessed in laparoscopic, beating heart, and laryngeal sequences with challenging conditions, such as partial occlusions and significant deformation. Performance is compared with that of state-of-the-art methods. In addition, the online capability of the method is evaluated by tracking two motion patterns performed by a high-precision parallel-kinematic platform. Related experiments are discussed for a tissue substitute and porcine soft tissue in order to compare performance in an ideal scenario and in a setup mimicking clinical conditions. Regarding the soft tissue trial, the tracking error can be significantly reduced from 0.72 mm to below 0.05 mm with mesh refinement. To demonstrate online laser path adaptation during ablation, the non-rigid tracking framework is integrated into a setup consisting of a surgical Er:YAG laser, a three-axis scanning unit, and a low-noise stereo camera. Regardless of the error source, such as laser-to-camera registration, camera calibration, image-based tracking, and scanning latency, the ablation root mean square error is kept below 0.21 mm when the sample moves according to the aforementioned patterns.
Final experiments regarding motion-compensated laser ablation of structurally deforming tissue highlight the potential of the method for vision-guided laser surgery.
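The Kalman filter-based upsampling mentioned in this abstract rests on a standard predict/update loop with a constant-velocity motion model: running the predict step alone between measurements yields intermediate estimates for a latency-limited tracker. A minimal one-dimensional sketch follows; the noise parameters q and r and the toy track are assumptions, and the paper's filter operates on a motion model in disparity space rather than in 1D.

```python
import numpy as np

def kalman_cv_1d(measurements, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over a 1D position track.
    Returns the filtered position after each measurement update."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])  # init: first fix, zero velocity
    P = np.eye(2)                             # initial state covariance
    out = []
    for z in measurements:
        x = F @ x                             # predict state
        P = F @ P @ F.T + Q                   # predict covariance
        y = np.array([[z]]) - H @ x           # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y                         # update state
        P = (np.eye(2) - K @ H) @ P           # update covariance
        out.append(float(x[0, 0]))
    return out

# A target moving at constant velocity is recovered almost exactly
track = kalman_cv_1d([0.0, 1.0, 2.0, 3.0, 4.0])
```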
Affiliation(s)
- Andreas Schoob
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany.
| | - Dennis Kundrat
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
| | - Lüder A Kahrs
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
| | - Tobias Ortmaier
- Leibniz Universität Hannover, Institute of Mechatronic Systems, Appelstr. 11a, 30167 Hanover, Germany
| |
|
19
|
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007] [Citation(s) in RCA: 183] [Impact Index Per Article: 26.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/16/2017] [Accepted: 01/23/2017] [Indexed: 12/27/2022]
|
20
|
Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials. Surg Endosc 2016; 31:2863-2871. [PMID: 27796600 DOI: 10.1007/s00464-016-5297-8] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Accepted: 10/14/2016] [Indexed: 12/11/2022]
Abstract
BACKGROUND Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed onto real-time images, enabling transparent visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not account for deformations of inner structures. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. METHODS Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from the CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed state (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. RESULTS Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the tumor location estimated by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images.
CONCLUSIONS Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of a solid organ's surface to its inner structures, including tumors, with good accuracy and automated, robust tracking.
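The paper propagates deformation through an FEM model driven by tracked fiducials. As a much simpler stand-in, the rigid component of such a fiducial-based registration can be sketched with the Kabsch algorithm; all point data below are synthetic, and the non-rigid FEM step is deliberately omitted.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping fiducials src -> dst
    via the Kabsch algorithm. Both arrays hold (N, 3) corresponding points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: six fiducials rotated 90 degrees about z and translated
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
fids = np.random.default_rng(0).normal(size=(6, 3))
moved = fids @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(fids, moved)
residual = np.linalg.norm(fids @ R.T + t - moved)  # ~0 for noise-free data
```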
|
21
|
Augmented Endoscopic Images Overlaying Shape Changes in Bone Cutting Procedures. PLoS One 2016; 11:e0161815. [PMID: 27584732 PMCID: PMC5008631 DOI: 10.1371/journal.pone.0161815] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2015] [Accepted: 08/12/2016] [Indexed: 11/19/2022] Open
Abstract
In microendoscopic discectomy for spinal disorders, bone cutting procedures are performed in tight spaces while observing a small portion of the target structures. Although optical tracking systems are able to measure the tip of the surgical tool during surgery, the poor shape information available during surgery makes accurate cutting difficult, even if preoperative computed tomography and magnetic resonance images are used for reference. Shape estimation and visualization of the target structures are essential for accurate cutting. However, time-varying shape changes during cutting procedures remain a challenging issue for intraoperative navigation. This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support bone cutting procedures. The framework records the history of measured drill-tip locations as a volume label and visualizes the remaining region to be cut, overlaid on the endoscopic image in real time. A cutting experiment was performed with volunteers, and the feasibility of this concept was examined using a clinical navigation system. The efficacy of the cutting aid was evaluated with respect to shape similarity, the total distance moved by the cutting tool, and the required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.
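The volume-label idea, recording tracked drill-tip positions and marking the corresponding voxels as removed, can be sketched as follows. The grid size, burr radius, and straight drill path below are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

def mark_cut(volume, tip_positions, radius, voxel_size=1.0):
    """Label voxels within `radius` of each recorded tip position as cut.
    `volume` is a 3D boolean array (True = bone remaining) and is
    modified in place, acting as the cumulative volume label."""
    zz, yy, xx = np.indices(volume.shape)
    coords = np.stack([xx, yy, zz], axis=-1) * voxel_size  # voxel centres (x, y, z)
    for p in tip_positions:
        dist = np.linalg.norm(coords - np.asarray(p, dtype=float), axis=-1)
        volume[dist <= radius] = False  # voxels removed by the burr
    return volume

bone = np.ones((20, 20, 20), dtype=bool)
path = [(10.0, 10.0, z) for z in np.linspace(2, 18, 9)]  # straight drill pass
bone = mark_cut(bone, path, radius=2.0)
remaining = int(bone.sum())  # voxels left; the overlay would render these
```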
|
22
|
Abstract
Minimally invasive surgery is slowly taking over as the preferred operative approach for colorectal diseases. However, many of the procedures remain technically difficult. This article will give an overview of the state of minimally invasive surgery and the many advances that have been made over the last two decades. Specifically, we discuss the introduction of the robotic platform and some of its benefits and limitations. We also describe some newer techniques related to robotics.
Affiliation(s)
- Matthew Whealon
- Department of Surgery, University of California, Irvine, Orange, California
| | - Alessio Vinci
- Department of Surgery, University of California, Irvine, Orange, California
| | - Alessio Pigazzi
- Department of Surgery, University of California, Irvine, Orange, California
| |
|
23
|
Haouchine N, Dequidt J, Berger MO, Cotin S. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2015; 21:1363-1376. [PMID: 26529459 DOI: 10.1109/tvcg.2015.2452905] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper focuses on 3D shape recovery and augmented reality for elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques in experiments conducted on computer-generated and real data, which show its effectiveness in recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.
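The energy-minimization formulation, an internal elastic term plus external image-point constraints, can be illustrated with a toy model. The 2D spring chain below stands in for the paper's non-linear elastic model, and all stiffness, weight, and step-size parameters are assumptions chosen only to make the sketch converge.

```python
import numpy as np

def fit_elastic_chain(rest, observed_idx, observed_pos, k=1.0, w=10.0,
                      iters=2000, lr=0.01):
    """Minimise E = k/2 * sum (|x[i+1]-x[i]| - L[i])^2          (elastic term)
                  + w/2 * sum |x[j] - obs[j]|^2                 (image term)
    over 2D node positions x, by gradient descent. The soft point
    constraints play the role of tracked image points."""
    x = np.asarray(rest, dtype=float).copy()
    L = np.linalg.norm(np.diff(rest, axis=0), axis=1)  # rest lengths
    for _ in range(iters):
        grad = np.zeros_like(x)
        d = np.diff(x, axis=0)
        ln = np.linalg.norm(d, axis=1, keepdims=True)
        f = k * (ln - L[:, None]) * d / np.maximum(ln, 1e-12)  # spring gradients
        grad[:-1] -= f
        grad[1:] += f
        for j, p in zip(observed_idx, observed_pos):            # data term
            grad[j] += w * (x[j] - np.asarray(p, dtype=float))
        x -= lr * grad
    return x

rest = np.stack([np.linspace(0, 4, 5), np.zeros(5)], axis=1)  # straight chain
# Pull the two endpoints toward observed points; interior nodes relax elastically
fitted = fit_elastic_chain(rest, [0, 4], [(0.0, 0.0), (4.0, 1.0)])
```

The constrained endpoints settle near the observations while the elastic term keeps the interior smooth, mirroring how the mechanical model also predicts the shape of unobserved (self-occluded) regions.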
|
24
|
Patient-Specific Biomechanical Modeling for Guidance During Minimally-Invasive Hepatic Surgery. Ann Biomed Eng 2015; 44:139-53. [DOI: 10.1007/s10439-015-1419-z] [Citation(s) in RCA: 79] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2015] [Accepted: 08/05/2015] [Indexed: 11/26/2022]
|