1
Zhang X, Ji X, Wang J, Fan Y, Tao C. Renal surface reconstruction and segmentation for image-guided surgical navigation of laparoscopic partial nephrectomy. Biomed Eng Lett 2023; 13:165-174. PMID: 37124114; PMCID: PMC10130295; DOI: 10.1007/s13534-023-00263-1.
Abstract
An unpredictable, dynamic surgical environment makes it necessary to measure morphological information of the target tissue in real time for laparoscopic image-guided navigation. Among intraoperative tissue 3D reconstruction methods, stereo vision has the greatest potential for clinical development, benefiting from its high reconstruction accuracy and compatibility with laparoscopy. However, existing stereo vision methods have difficulty achieving high reconstruction accuracy in real time. Moreover, intraoperative reconstruction results often contain complex background and instrument information that prevents their use in image-guided systems. Taking laparoscopic partial nephrectomy (LPN) as the research object, this paper realizes real-time dense reconstruction and extraction of the kidney tissue surface. A center-symmetric Census-based semi-global block stereo matching algorithm is proposed to generate a dense disparity map, and a GPU-based pixel-by-pixel connectivity segmentation mechanism is designed to segment the renal tissue area. Experiments on an in vitro porcine heart, an in vivo porcine kidney and offline clinical LPN data were performed to evaluate the accuracy and effectiveness of the approach. The algorithm achieved a reconstruction accuracy of ±2 mm at a real-time update rate of 21 fps for an HD image size of 960 × 540, and 91.0% target-tissue segmentation accuracy even with surgical instrument occlusions. The experimental results demonstrate that the proposed method can accurately reconstruct and extract the renal surface in real time during LPN, and the measurements can be used directly by image-guided systems. The method provides a new way to measure geometric information of target tissue intraoperatively in laparoscopic surgery. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-023-00263-1.
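The core of the matching step above, the Census transform, can be illustrated in a few lines. The sketch below shows a center-symmetric Census code and its Hamming matching cost on a toy 3 × 3 window; the window size, image values and exact variant are illustrative assumptions, not the paper's implementation details.

```python
# Sketch of a center-symmetric Census transform, the descriptor that
# underlies Census-based stereo matching (toy parameters, not the
# paper's exact configuration).

def cs_census(img, x, y, radius=1):
    """Encode pixel (x, y) as a bit string: one bit per symmetric
    pixel pair in the window, set when the first pixel of the pair
    is brighter than its mirror across the center."""
    bits = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy, dx) >= (0, 0):   # visit each symmetric pair once
                continue
            bits = (bits << 1) | (img[y + dy][x + dx] > img[y - dy][x - dx])
    return bits

def hamming(a, b):
    """Matching cost between two Census codes: number of differing bits."""
    return bin(a ^ b).count("1")

left  = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
right = [[10, 20, 30], [40, 50, 60], [5, 80, 90]]  # bottom-left pixel darker

c_l = cs_census(left, 1, 1)
c_r = cs_census(right, 1, 1)
cost = hamming(c_l, c_r)
```

Because the code compares pixels only with each other, the descriptor is robust to radiometric differences between the two laparoscopic views, which is why Census-style costs are popular for tissue imagery.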
Affiliation(s)
- Xiaohui Zhang
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Xuquan Ji
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Yubo Fan
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Chunjing Tao
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
2
Hayashi Y, Misawa K, Mori K. Database-driven patient-specific registration error compensation method for image-guided laparoscopic surgery. Int J Comput Assist Radiol Surg 2023; 18:63-69. PMID: 36534226; DOI: 10.1007/s11548-022-02804-y.
Abstract
PURPOSE A surgical navigation system helps surgeons understand anatomical structures in the operative field during surgery. Patient-to-image registration, which aligns the coordinate systems of the CT volume and a positional tracker, is vital for accurate surgical navigation. Although a point-based rigid registration method using fiducials on the body surface is often used for laparoscopic surgery navigation, precise registration is difficult due to factors such as soft tissue deformation. We propose a method that compensates the transformation matrix computed from fiducials on the body surface, based on an analysis of positional information stored in a database. METHODS We built the database by measuring the positions of the fiducials and the guidance targets in both the CT volume and positional tracker coordinate systems during previous surgeries. For each case in the database, we computed two transformation matrices, one using only the fiducials and one using only the guidance targets, and calculated the difference between them. The compensation transformation matrix was computed by averaging these difference matrices, selecting cases from the database based on the similarity of the fiducials and the configuration of the guidance targets. RESULTS We evaluated the proposed method using 20 datasets acquired during laparoscopic gastrectomy for gastric cancer. The locations of blood vessels were used as guidance targets for computing target registration error. The mean target registration error decreased significantly from 33.0 mm before compensation to 17.1 mm after. CONCLUSION This paper describes a registration error compensation method using a database for image-guided laparoscopic surgery. Since the proposed method reduces registration error without additional intraoperative measurements during surgery, it increases the accuracy of surgical navigation for laparoscopic surgery.
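The compensation step can be sketched as follows. This is a naive illustration with made-up, translation-only transforms: element-wise averaging of difference matrices is only reasonable for small differences, and the authors' actual case selection and averaging procedure is more involved.

```python
# Naive sketch of database-driven compensation: for each prior case,
# compare the fiducial-based transform with the target-based one,
# average the difference matrices, and apply that average to a new
# case. Transforms here are made-up pure translations; a full
# implementation would average rotations properly (e.g. quaternions).

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(t):
    """Invert a rigid 4x4 transform: R -> R^T, t -> -R^T t."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(r[i][k] * t[k][3] for k in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0, 0, 0, 1]]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Database: (fiducial-based, target-based) transform pairs from prior cases.
database = [(translation(0, 0, 0), translation(4, 0, 0)),
            (translation(1, 0, 0), translation(3, 0, 0))]

# Difference matrix per case: D = T_target * inverse(T_fiducial).
diffs = [matmul4(t_tgt, rigid_inverse(t_fid)) for t_fid, t_tgt in database]

# Compensation matrix: element-wise average of the differences.
comp = [[sum(d[i][j] for d in diffs) / len(diffs) for j in range(4)]
        for i in range(4)]

# Apply the averaged compensation to a new fiducial-based registration.
t_new = matmul4(comp, translation(5, 0, 0))
```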
Affiliation(s)
- Yuichiro Hayashi
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Kazunari Misawa
- Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya, 464-8681, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Research Center for Medical Bigdata, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, 101-8430, Japan
3
Ramalhinho J, Koo B, Montaña-Brown N, Saeed SU, Bonmati E, Gurusamy K, Pereira SP, Davidson B, Hu Y, Clarkson MJ. Deep hashing for global registration of untracked 2D laparoscopic ultrasound to CT. Int J Comput Assist Radiol Surg 2022; 17:1461-1468. PMID: 35366130; PMCID: PMC9307559; DOI: 10.1007/s11548-022-02605-3.
Abstract
PURPOSE The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by giving the surgeon awareness of the relative positions of critical vessels and a tumour. To provide a translatable solution for this poorly constrained problem, Content-based Image Retrieval (CBIR) based on vessel information has been suggested as a method for obtaining a global coarse registration without using tracking information. However, the performance of these frameworks is limited by the use of non-generalisable handcrafted vessel features. METHODS We propose the use of a Deep Hashing (DH) network to directly convert vessel images from both LUS and CT into fixed-size hash codes. During training, these codes are learnt from a patient-specific CT scan by supplying the network with triplets of vessel images that include both a registered and a mis-registered pair. Once the hash codes have been learnt, they can be used to perform registration with CBIR methods. RESULTS We test a CBIR pipeline on 11 sequences of untracked LUS distributed across 5 clinical cases. Compared to a handcrafted-feature approach, our model improves the registration success rate significantly, from 48% to 61%, taking a 20 mm error as the threshold for a successful coarse registration. CONCLUSIONS We present the first DH framework for interventional multi-modal registration tasks. The presented approach is easily generalisable to other registration problems, does not require annotated data for training, and may promote the translation of these techniques.
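Once images are reduced to fixed-size binary hash codes, the retrieval step of such a CBIR pipeline amounts to nearest-neighbour search in Hamming space. The plane names and codes below are made up for illustration; the network that produces the codes is out of scope here.

```python
# Retrieval step of a hashing-based CBIR registration: find the
# simulated CT plane whose hash code is closest (in Hamming distance)
# to the code of an intraoperative LUS frame. Codes are invented.

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

# Hypothetical database: simulated CT plane id -> 8-bit hash code.
ct_codes = {"plane_A": 0b10110010,
            "plane_B": 0b01101100,
            "plane_C": 0b10110110}

lus_code = 0b10110011  # code produced for an intraoperative LUS frame

best = min(ct_codes, key=lambda k: hamming(ct_codes[k], lus_code))
```

Because Hamming distance on integers is a few machine instructions, this search stays fast even over the large set of pre-operatively simulated planes, which is what makes hashing attractive for intraoperative use.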
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Nina Montaña-Brown
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Shaheer U Saeed
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Ester Bonmati
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Brian Davidson
- Division of Surgery and Interventional Science, UCL, London, UK
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
4
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. PMID: 34358880; DOI: 10.1016/j.suronc.2021.101637.
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened against the inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis; results are therefore presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8-15 mm. Discrepancies in evaluation methods make it difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate, but they are not yet able to reliably localise all relevant anatomical structures. Only one article investigated the impact of IGS on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS, it is crucial to reach a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
5
Montaña-Brown N, Ramalhinho J, Allam M, Davidson B, Hu Y, Clarkson MJ. Vessel segmentation for automatic registration of untracked laparoscopic ultrasound to CT of the liver. Int J Comput Assist Radiol Surg 2021; 16:1151-1160. PMID: 34046826; PMCID: PMC8260404; DOI: 10.1007/s11548-021-02400-6.
Abstract
Purpose: Registration of Laparoscopic Ultrasound (LUS) to a pre-operative scan such as Computed Tomography (CT) using blood vessel information has been proposed as a method to enable image guidance for laparoscopic liver resection. Existing solutions can potentially enable clinical translation by bypassing the need for manual initialisation and tracking information, but no reliable framework for the segmentation of vessels in 2D untracked LUS images has been presented. Methods: We propose the use of a 2D UNet for the segmentation of liver vessels in 2D LUS images. We integrate these results into a previously developed registration method and show the feasibility of a fully automatic initialisation of the LUS-to-CT registration problem without a tracking device. Results: We validate our segmentation using LUS data from 6 patients. We test multiple models by assigning patient datasets to different combinations of training, testing and hold-out sets, obtaining mean Dice scores ranging from 0.543 to 0.706. Using these segmentations, we obtain registration accuracies between 6.3 and 16.6 mm in 50% of cases. Conclusions: We demonstrate the first use of deep learning (DL) for the segmentation of liver vessels in LUS. Our results show the feasibility of UNet for detecting multiple vessel instances in 2D LUS images, and for potentially automating a LUS-to-CT registration pipeline.
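The Dice score used above to evaluate the segmentations is twice the overlap of two masks divided by their total size. A minimal sketch, with toy 1-D masks standing in for 2-D label images:

```python
# Dice similarity coefficient between a predicted and a ground-truth
# binary mask: 2 * |intersection| / (|prediction| + |truth|).

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty -> perfect

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
score = dice(pred, truth)  # 2 * 2 / (3 + 3)
```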
Affiliation(s)
- Nina Montaña-Brown
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Moustafa Allam
- Division of Surgery and Interventional Science, University College London, London, UK
- Brian Davidson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Science, University College London, London, UK
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
6
Ramalhinho J, Tregidgo HFJ, Gurusamy K, Hawkes DJ, Davidson B, Clarkson MJ. Registration of Untracked 2D Laparoscopic Ultrasound to CT Images of the Liver Using Multi-Labelled Content-Based Image Retrieval. IEEE Trans Med Imaging 2021; 40:1042-1054. PMID: 33326379; DOI: 10.1109/tmi.2020.3045348.
Abstract
Laparoscopic Ultrasound (LUS) is recommended as standard-of-care when performing laparoscopic liver resections, as it images sub-surface structures such as tumours and major vessels. Given that LUS probes are difficult to handle and some tumours are iso-echoic, registration of LUS images to a pre-operative CT has been proposed as an image-guidance method. This registration problem is particularly challenging due to the small field of view of LUS, and usually depends on both a manual initialisation and tracking to compose a volume, hindering clinical translation. In this paper, we extend a previously proposed registration approach using Content-Based Image Retrieval (CBIR), removing the requirement for tracking or manual initialisation. Pre-operatively, a set of possible LUS planes is simulated from CT and a descriptor generated for each image. A Bayesian framework is then employed to estimate the most likely sequence of CT simulations that matches a series of LUS images. We extend our CBIR formulation to use multiple labelled objects and constrain the registration by separating liver vessels into portal vein and hepatic vein branches. The value of this new labelled approach is demonstrated on retrospective data from 5 patients. Results show that, by including a series of 5 untracked images in time, a single LUS image can be registered with accuracies ranging from 5.7 to 16.4 mm with a success rate of 78%. Initialisation of the LUS-to-CT registration with the proposed framework could potentially enable the clinical translation of these image fusion techniques.
7
Liu X, Plishker W, Shekhar R. Hybrid electromagnetic-ArUco tracking of laparoscopic ultrasound transducer in laparoscopic video. J Med Imaging (Bellingham) 2021; 8:015001. PMID: 33585664; PMCID: PMC7857492; DOI: 10.1117/1.jmi.8.1.015001.
Abstract
Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining hardware-based [e.g., electromagnetic (EM)] and computer-vision-based (e.g., ArUco) tracking methods. Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames in which ArUco succeeds in detecting the pattern) and corrected EM tracking for ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result; the correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result with the original EM tracking result. Results: We performed phantom and animal studies to evaluate the performance of the hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% of frames were ArUco-success frames. For the ArUco-failure frames, the mean reprojection errors of the original and corrected EM tracking methods were 30.8 and 10.3 pixels, respectively. Conclusions: The new hybrid method is more reliable than ArUco tracking alone and more accurate and practical than EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
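The correction idea can be shown in miniature. In the sketch below, poses are reduced to translations for brevity (the actual method works with full transformation matrices): in ArUco-success frames the EM-to-ArUco offset is recorded, and in ArUco-failure frames the last recorded offset is applied to the EM pose.

```python
# Hybrid tracking in miniature: trust ArUco when the pattern is
# visible, and otherwise apply the last observed EM-vs-ArUco offset
# to the drift-prone EM pose. Pose values are made up.

def track(frames):
    """frames: list of (em_pose, aruco_pose_or_None) translations.
    Returns the hybrid pose chosen for each frame."""
    correction = (0.0, 0.0, 0.0)
    out = []
    for em, aruco in frames:
        if aruco is not None:
            # ArUco-success frame: use ArUco and refresh the correction.
            correction = tuple(a - e for a, e in zip(aruco, em))
            out.append(aruco)
        else:
            # ArUco-failure frame: correct the raw EM pose.
            out.append(tuple(e + c for e, c in zip(em, correction)))
    return out

frames = [((0, 0, 0), (1, 0, 0)),   # EM is off by +1 mm in x
          ((2, 0, 0), None)]        # pattern occluded: correct EM
poses = track(frames)
```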
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States
- Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States; IGI Technologies, Inc., Silver Spring, Maryland, United States
8
Ma L, Wang J, Kiyomatsu H, Tsukihara H, Sakuma I, Kobayashi E. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging. Surg Endosc 2020; 35:6556-6567. PMID: 33185764; DOI: 10.1007/s00464-020-08153-8.
Abstract
BACKGROUND Laparoscopic lateral pelvic lymph node dissection (LPLND) in rectal cancer surgery requires considerable skill because the pelvic arteries, which need to be located to guide the dissection, are covered by other tissues and cannot be observed in laparoscopic views. Surgeons therefore need to localize the pelvic arteries accurately before dissection to prevent injury to them. METHODS This report proposes a surgical navigation system that facilitates artery localization in laparoscopic LPLND by combining ultrasonic imaging and laparoscopy. Free-hand laparoscopic ultrasound (LUS) is employed to capture the arteries intraoperatively, and a laparoscopic vision-based tracking system is used to track the LUS probe. To extract artery contours from the two-dimensional ultrasound image sequences efficiently, an artery extraction framework based on local phase-based snakes was developed. After reconstructing the three-dimensional intraoperative artery model from ultrasound images, a high-resolution artery model segmented from preoperative computed tomography (CT) images was rigidly registered to the intraoperative artery model and overlaid onto the laparoscopic view to guide laparoscopic LPLND. RESULTS Experiments were conducted to evaluate the vision-based tracking system; its average reconstruction error was 2.4 mm. The navigation system was then evaluated quantitatively on an artery phantom: the reconstruction time and average navigation error were 8 min and 2.3 mm, respectively. A navigation system was also successfully constructed to localize the pelvic arteries in laparoscopic and open surgeries in a swine, demonstrating the feasibility of the proposed system in vivo; the construction times in the laparoscopic and open surgeries were 14 and 12 min, respectively.
CONCLUSIONS The experimental results showed that the proposed navigation system can guide laparoscopic LPLND and requires a significantly shorter setup time than state-of-the-art navigation systems.
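The rigid registration of the preoperative artery model to the intraoperative reconstruction can be illustrated with least-squares point-set alignment. The sketch below uses the 2-D closed form for clarity; the paper's registration is 3-D, and the point values are made up.

```python
# Least-squares rigid alignment of corresponding point sets in 2-D:
# center both sets, recover the rotation from the summed cross and
# dot terms, then solve for the translation.
import math

def register_2d(src, dst):
    """Return (theta, tx, ty) of the rigid transform mapping src -> dst."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx      # cross terms -> sin(theta)
        den += ax * bx + ay * by      # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Points rotated by 90 degrees and shifted by (2, 3) should be recovered.
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 3), (2, 4), (1, 3)]
theta, tx, ty = register_2d(src, dst)
```

In 3-D the rotation no longer has this scalar closed form and is usually obtained via an SVD (the Kabsch solution), but the centering-then-rotation structure is identical.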
Affiliation(s)
- Lei Ma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- School of Mechanical Engineering, Beihang University, Beijing, China
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
9
Heiselman JS, Jarnagin WR, Miga MI. Intraoperative Correction of Liver Deformation Using Sparse Surface and Vascular Features via Linearized Iterative Boundary Reconstruction. IEEE Trans Med Imaging 2020; 39:2223-2234. PMID: 31976882; PMCID: PMC7314378; DOI: 10.1109/tmi.2020.2967322.
Abstract
During image-guided liver surgery, soft tissue deformation can cause considerable error when attempting to localize the surgical anatomy accurately through image-to-physical registration. In this paper, a linearized iterative boundary reconstruction technique is proposed to account for these deformations. The approach leverages a superposed formulation of boundary conditions to rapidly and accurately estimate the deformation applied to a preoperative model of the organ, given sparse intraoperative data of surface and subsurface features. With this method, tracked intraoperative ultrasound (iUS) is investigated as a data source for augmenting registration accuracy beyond the capacity of conventional organ surface registration. In an expansive simulated dataset, features including vessel contours, vessel centerlines, and the posterior liver surface are extracted from iUS planes. Registration accuracy is compared across increasing data density to establish how iUS can best be employed to improve target registration error (TRE). From a baseline average TRE of 11.4 ± 2.2 mm using sparse surface data only, incorporating additional sparse features from three iUS planes improved average TRE to 6.4 ± 1.0 mm. Increasing the sparse coverage to 16 tracked iUS planes improved average TRE to 3.9 ± 0.7 mm, exceeding the accuracy of registration based on the complete surface data available from more cumbersome intraoperative CT without contrast. The approach was also applied to three clinical cases, where error improved on average by 67% over rigid registration and by 56% over deformable surface registration when incorporating additional features from one independent tracked iUS plane.
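Target registration error (TRE), the accuracy metric reported throughout, is simply the distance between a registered target point and its ground-truth position. A minimal sketch with made-up coordinates:

```python
# TRE for a single target: Euclidean distance between the target's
# position after registration and its true intraoperative position.
import math

def tre(registered, truth):
    return math.dist(registered, truth)

err = tre((10.0, 4.0, 0.0), (10.0, 0.0, 3.0))  # sqrt(16 + 9) mm
```

In practice the reported figures above are means over many such targets (here, vessel-based landmarks), which is why they come with standard deviations.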
Affiliation(s)
- William R. Jarnagin
- Department of Surgery at Memorial Sloan Kettering Cancer Center, New York, NY 10065 USA
- Michael I. Miga
- Department of Biomedical Engineering at Vanderbilt University, Nashville, TN 37235 USA
10
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. PMID: 32780240; PMCID: PMC7524854; DOI: 10.1007/s00464-020-07807-x.
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems (IGS) may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures, but current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was successful registration as determined by the operating surgeon. Secondary endpoints were system usability, assessed by a surgeon questionnaire, and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures; implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation, but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation in a larger patient cohort is required to confirm these findings.
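The iterative closest point matching used for semi-automatic registration repeatedly pairs each reconstructed surface point with its nearest preoperative-model point before estimating a rigid transform. The correspondence step is sketched below with made-up points; the transform estimation and iteration loop are omitted.

```python
# One correspondence step of iterative closest point (ICP): pair each
# intraoperatively reconstructed surface point with its nearest point
# on the preoperative model. Point values are illustrative only.
import math

def closest_point_pairs(surface, model):
    """Return (surface_point, nearest_model_point) pairs."""
    return [(p, min(model, key=lambda q: math.dist(p, q))) for p in surface]

surface = [(0.0, 0.0, 0.1), (5.0, 0.0, 0.0)]
model = [(0.0, 0.0, 0.0), (4.8, 0.1, 0.0), (9.0, 9.0, 9.0)]
pairs = closest_point_pairs(surface, model)
```

A full ICP loop would estimate the best rigid transform from these pairs, apply it to the surface points, and repeat until the pairings stop changing; the brute-force nearest-neighbour search here is typically replaced by a k-d tree for realistic point counts.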
Collapse
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK
| | - S. Thompson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK ,Centre for Medical Image Computing (CMIC), University College London, London, UK ,Department of Medical Physics and Bioengineering, University College London, London, UK
| | - J. Totz
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- Y. Song
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK
- M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- A. E. Desjardins
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. Barratt
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- S. Ourselin
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
- D. Stoyanov
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Computer Science, University College London, London, UK
- M. J. Clarkson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. J. Hawkes
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
11
Ma L, Nakamae K, Wang J, Kiyomatsu H, Tsukihara H, Kobayashi E, Sakuma I. Image-guided laparoscopic pelvic lymph node dissection using stereo visual tracking free-hand laparoscopic ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2017:3240-3243. [PMID: 29060588] [DOI: 10.1109/embc.2017.8037547]
Abstract
Laparoscopic pelvic lymph node dissection is a delicate operation because pelvic arteries, which should be located first to guide the dissection, are often concealed by tissue and cannot be identified in the endoscopic view. Consequently, arteries can be damaged if they are not located accurately. To improve dissection safety and efficiency, we have developed an image-guided navigation system that provides pelvic artery position information by registering a 3D artery model extracted from CT images to a 3D model reconstructed from free-hand laparoscopic ultrasound images. The ultrasound probe is tracked using a proposed stereo vision-based tracking strategy that simplifies the system and reduces setup time. The artery is segmented from 2D ultrasound images using a local phase-based snakes framework. The accuracy of the proposed navigation system was estimated in a phantom experiment (target registration error of 1.58 ± 0.70 mm), and its feasibility was confirmed in an animal experiment.
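Target registration error (TRE), the accuracy figure quoted above, is simply the Euclidean distance between corresponding target points after registration, usually summarized as mean ± standard deviation. A minimal sketch (not the authors' code; the point sets are hypothetical):

```python
import numpy as np

def tre(registered_pts, ground_truth_pts):
    """Target registration error: per-point Euclidean distance between
    registered target points and their ground-truth positions."""
    d = np.linalg.norm(registered_pts - ground_truth_pts, axis=1)
    return d.mean(), d.std()

# Hypothetical example: three targets, each 1 mm off along one axis.
reg = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
gt = np.zeros((3, 3))
mean_tre, std_tre = tre(reg, gt)
print(mean_tre)  # 1.0
```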
12
Ramalhinho J, Robu MR, Thompson S, Gurusamy K, Davidson B, Hawkes D, Barratt D, Clarkson MJ. A pre-operative planning framework for global registration of laparoscopic ultrasound to CT images. Int J Comput Assist Radiol Surg 2018; 13:1177-1186. [PMID: 29860550] [PMCID: PMC6096745] [DOI: 10.1007/s11548-018-1799-2]
Abstract
PURPOSE Laparoscopic ultrasound (LUS) enhances the safety of laparoscopic liver resection by enabling real-time imaging of internal structures such as vessels. However, LUS probes can be difficult to use, and many tumours are iso-echoic and hence not visible. Registration of LUS to a pre-operative CT or MR scan has been proposed as a method of image guidance. However, the field of view of the probe is very small compared to the whole liver, making the registration task challenging and dependent on a very accurate initialisation. METHODS We propose a subject-specific planning framework that identifies the anatomical liver regions from which it is possible to acquire vascular data unique enough for a globally optimal initial registration. Vessel-based rigid registration on different areas of the pre-operative CT vascular tree is used to evaluate predicted accuracy and reliability. RESULTS The planning framework is tested on one porcine subject from which we acquired 5 independent sweeps of LUS data over different sections of the liver. Target registration error of vessel branching points was used to measure accuracy. Global registration based on vessel centrelines was applied to the 5 datasets. In 3 out of 5 cases registration was successful and in agreement with the planning. Further tests with a CT scan under abdominal insufflation show that the framework can provide valuable information in all 5 cases. CONCLUSIONS We have introduced a planning framework that can guide the surgeon on how much LUS data to collect in order to obtain a reliable, globally unique registration without an initial manual alignment. This could potentially improve the usability of these methods in the clinic.
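Vessel-based rigid registration of the kind used here ultimately amounts to finding the least-squares rotation and translation between corresponding 3D points, such as vessel branching points. Assuming correspondences are already known (the paper's global search for them is omitted here), the closed-form Kabsch/Procrustes solution can be sketched as:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t with R @ src_i + t ≈ dst_i (Kabsch algorithm),
    assuming row-wise point correspondences between src and dst."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical branch points, rotated 90 degrees about z and translated.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([5.0, 0.0, 2.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, Rz))  # True
```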
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Maria R Robu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
- Brian Davidson
- Division of Surgery and Interventional Science, University College London, London, UK
- David Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Dean Barratt
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
13
Gruijthuijsen C, Colchester R, Devreker A, Javaux A, Maneas E, Noimark S, Xia W, Stoyanov D, Reynaerts D, Deprest J, Ourselin S, Desjardins A, Vercauteren T, Vander Poorten E. Haptic guidance based on all-optical ultrasound distance sensing for safer minimally invasive fetal surgery. J Med Robot Res 2018; 3. [PMID: 30820482] [PMCID: PMC6390942] [DOI: 10.1142/s2424905x18410015]
Abstract
By intervening during the early stages of gestation, fetal surgeons aim to correct or minimize the effects of congenital disorders. Compared to postnatal treatment of these disorders, such early interventions can often save the life of the fetus and improve the quality of life of the newborn. However, fetal surgery is considered one of the most challenging disciplines within Minimally Invasive Surgery (MIS), owing to factors such as the fragility of the anatomic features, poor visibility, limited manoeuvrability, and extreme requirements in terms of instrument handling and precise positioning. This work centres on a fetal laser surgery procedure treating placental disorders. It proposes the use of haptic guidance to enhance the overall safety of the procedure and to simplify instrument handling. A method is described that provides effective guidance by installing a forbidden-region virtual fixture over the placenta, thereby safeguarding adequate clearance between the instrument tip and the placenta. With a novel application of all-optical ultrasound distance sensing, in which transmission and reception are performed with fibre optics, this method relies solely on intraoperatively acquired data. The added value of the guidance approach, in terms of safety and performance, is demonstrated in a series of experiments on a robotic platform.
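A forbidden-region virtual fixture of this kind can be rendered as a repulsive force that is zero outside a safety clearance and grows as the sensed tip-to-placenta distance falls below it. The clearance and stiffness values below are illustrative, not taken from the paper:

```python
def fixture_force(distance_mm, clearance_mm=5.0, stiffness=0.8):
    """Repulsive guidance force opposing motion toward the surface:
    zero outside the safety clearance, spring-like inside it."""
    penetration = clearance_mm - distance_mm
    return stiffness * penetration if penetration > 0 else 0.0

print(fixture_force(7.0))  # 0.0 (outside the forbidden region)
print(fixture_force(3.0))  # 0.8 * (5.0 - 3.0) = 1.6
```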
Affiliation(s)
- Richard Colchester
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Alain Devreker
- Department of Mechanical Engineering, KU Leuven, Belgium
- Allan Javaux
- Department of Mechanical Engineering, KU Leuven, Belgium
- Efthymios Maneas
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Sacha Noimark
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Wenfeng Xia
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Danail Stoyanov
- Centre for Medical Image Computing, University College London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, UK
- Jan Deprest
- Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, KU Leuven, Belgium
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, UK
- Sebastien Ourselin
- Centre for Medical Image Computing, University College London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, UK
- Adrien Desjardins
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, UK
- Tom Vercauteren
- Department of Medical Physics & Biomedical Engineering, University College London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, London, UK
14
Kim D, Kim N, Lee S, Seo JB. A fast and robust level set motion-assisted deformable registration method for volumetric CT guided lung intervention. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.04.002]
15
Heiselman JS, Clements LW, Collins JA, Weis JA, Simpson AL, Geevarghese SK, Kingham TP, Jarnagin WR, Miga MI. Characterization and correction of intraoperative soft tissue deformation in image-guided laparoscopic liver surgery. J Med Imaging (Bellingham) 2017; 5:021203. [PMID: 29285519] [DOI: 10.1117/1.jmi.5.2.021203]
Abstract
Laparoscopic liver surgery is challenging to perform due to a compromised ability of the surgeon to localize subsurface anatomy in the constrained environment. While image guidance has the potential to address this barrier, intraoperative factors, such as insufflation and variable degrees of organ mobilization from supporting ligaments, may generate substantial deformation. The severity of laparoscopic deformation in humans has not been characterized, and current laparoscopic correction methods do not account for the mechanics of how intraoperative deformation is applied to the liver. We first measure the degree of laparoscopic deformation at two insufflation pressures over the course of laparoscopic-to-open conversion in 25 patients. With this clinical data alongside a mock laparoscopic phantom setup, we report a biomechanical correction approach that leverages anatomically load-bearing support surfaces from ligament attachments to iteratively reconstruct and account for intraoperative deformations. Laparoscopic deformations were significantly larger than deformations associated with open surgery, and our correction approach yielded subsurface target error of [Formula: see text] and surface error of [Formula: see text] using only sparse surface data with realistic surgical extent. Laparoscopic surface data extents were examined and found to impact registration accuracy. Finally, we demonstrate viability of the correction method with clinical data.
Affiliation(s)
- Jon S Heiselman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Logan W Clements
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jarrod A Collins
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jared A Weis
- Wake Forest School of Medicine, Department of Biomedical Engineering, Winston-Salem, North Carolina, United States
- Amber L Simpson
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Sunil K Geevarghese
- Vanderbilt University Medical Center, Division of Hepatobiliary Surgery and Liver Transplantation, Nashville, Tennessee, United States
- T Peter Kingham
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- William R Jarnagin
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Michael I Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
16
Collins JA, Weis JA, Heiselman JS, Clements LW, Simpson AL, Jarnagin WR, Miga MI. Improving registration robustness for image-guided liver surgery in a novel human-to-phantom data framework. IEEE Trans Med Imaging 2017; 36:1502-1510. [PMID: 28212080] [PMCID: PMC5757161] [DOI: 10.1109/tmi.2017.2668842]
Abstract
In open image-guided liver surgery (IGLS), a sparse representation of the intraoperative organ surface can be acquired to drive image-to-physical registration. We hypothesize that uncharacterized error induced by variation in the collection patterns of organ surface data limits the accuracy and robustness of an IGLS registration. Clinical validation of such registration methods is challenged by the difficulty of obtaining data representative of the true state of organ deformation. We propose a novel human-to-phantom validation framework that transforms surface collection patterns from in vivo IGLS procedures (n = 13) onto a well-characterized hepatic deformation phantom for the purpose of validating surface-driven, volumetric nonrigid registration methods. An important feature of the approach is that it combines workflow-realistic data acquisition with surgical deformations that are appropriate in behavior and magnitude. Using the approach, we investigate volumetric target registration error (TRE) with both current rigid IGLS and our improved nonrigid registration methods. Additionally, we introduce a spatial data resampling approach to mitigate the workflow-sensitive sampling problem. Using our human-to-phantom approach, TRE after routine rigid registration was 10.9 ± 0.6 mm with a signed closest point distance associated with residual surface fit in the range of ±10 mm, highly representative of open liver resections. After applying our novel resampling strategy and improved deformation correction method, TRE was reduced by 51%, i.e., a TRE of 5.3 ± 0.5 mm. The work reported herein realizes a novel, tractable approach for the validation of image-to-physical registration methods and demonstrates promising results for our correction method.
Affiliation(s)
- Jared A. Weis
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Jon S. Heiselman
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Logan W. Clements
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Michael I. Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
17
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
18
Hawkes DJ. From clinical imaging and computational models to personalised medicine and image guided interventions. Med Image Anal 2016; 33:50-55. [PMID: 27407003] [DOI: 10.1016/j.media.2016.06.022]
Abstract
This short paper describes the development of the UCL Centre for Medical Image Computing (CMIC) from 2006 to 2016, together with reference to historical developments of the Computational Imaging Sciences Group (CISG) at Guy's Hospital. Key early work in automated image registration led to developments in image guided surgery and improved cancer diagnosis and therapy. The work is illustrated with examples from neurosurgery, laparoscopic liver and gastric surgery, diagnosis and treatment of prostate cancer and breast cancer, and image guided radiotherapy for lung cancer.
Affiliation(s)
- David J Hawkes
- Centre for Medical Image Computing, UCL, London WC1E 6BT, UK
19
Time-of-flight camera, optical tracker and computed tomography in pairwise data registration. PLoS One 2016; 11:e0159493. [PMID: 27434396] [PMCID: PMC4951045] [DOI: 10.1371/journal.pone.0159493]
Abstract
PURPOSE A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, is crucial for the development of computer-aided diagnosis and therapy. Surface tracking based on optical trackers already plays an important role in surgical procedure planning. However, newer modalities such as time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become part of a computer-aided surgery set-up. Combining different acquisition systems promises valuable support for operating room procedures; a detailed analysis of the accuracy of such multi-sensor positioning systems is therefore needed. METHODS We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology comprises optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and a registration technique. The pre-processing yields a surface in the case of CT, and point clouds for the ToF-sensor and the marker-driven optical tracker representations of an object of interest. The registration technique is based on the Iterative Closest Point algorithm. RESULTS The experiments validate the registration of each pair of modalities/sensors on phantoms of four human organs in terms of the Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, and the worst for experiments involving the ToF-camera. CONCLUSION The obtained accuracies encourage further development of multi-sensor systems. The discussion of system limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers.
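The two alignment metrics reported above, Hausdorff distance and mean absolute distance, can both be derived from nearest-neighbour distances between the registered point clouds. A brute-force numpy sketch (suitable only for small clouds; the example points are made up):

```python
import numpy as np

def cloud_metrics(A, B):
    """Symmetric Hausdorff distance and mean absolute distance between two
    point clouds, from nearest-neighbour distances in both directions."""
    d_ab = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).min(axis=1)
    d_ba = np.linalg.norm(B[:, None, :] - A[None, :, :], axis=2).min(axis=1)
    hausdorff = max(d_ab.max(), d_ba.max())
    mad = np.concatenate([d_ab, d_ba]).mean()
    return hausdorff, mad

# Hypothetical clouds: B's second point sits 2 units above A's second point.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0]])
h, m = cloud_metrics(A, B)
print(h, m)  # 2.0 0.75
```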
20
Thompson S, Stoyanov D, Schneider C, Gurusamy K, Ourselin S, Davidson B, Hawkes D, Clarkson MJ. Hand-eye calibration for rigid laparoscopes using an invariant point. Int J Comput Assist Radiol Surg 2016; 11:1071-1080. [PMID: 26995597] [PMCID: PMC4893361] [DOI: 10.1007/s11548-016-1364-9]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution, but one current challenge is accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. METHODS In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be set up by theatre staff during surgical preparation, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. RESULTS We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, with an RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 mm and 1.00 mm respectively using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. CONCLUSION We have proposed a new method of hand-eye calibration based on a single invariant point. Initial experience has shown that the method provides visual feedback and satisfactory accuracy, and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
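The RMS projected error quoted above measures, after hand-eye calibration, how far projected 3D points land from their observed image positions. A hedged sketch of the metric under a simple pinhole camera model (the intrinsics, points, and pixel units here are illustrative; the paper reports the error in mm):

```python
import numpy as np

def rms_projected_error(pts_cam, observed_px, fx=1000.0, fy=1000.0,
                        cx=480.0, cy=270.0):
    """RMS distance between pinhole projections of 3D points expressed in
    the camera frame and their observed image positions."""
    proj = np.column_stack([fx * pts_cam[:, 0] / pts_cam[:, 2] + cx,
                            fy * pts_cam[:, 1] / pts_cam[:, 2] + cy])
    err = np.linalg.norm(proj - observed_px, axis=1)
    return np.sqrt((err ** 2).mean())

# Hypothetical data: the second point's projection is off by 3 px.
pts = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 100.0]])
obs = np.array([[480.0, 270.0], [583.0, 270.0]])
print(rms_projected_error(pts, obs))  # sqrt((0 + 3**2) / 2) ≈ 2.12
```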
Affiliation(s)
- Stephen Thompson
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Danail Stoyanov
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Brian Davidson
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- David Hawkes
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK