1. Ribeiro M, Espinel Y, Rabbani N, Pereira B, Bartoli A, Buc E. Augmented Reality Guided Laparoscopic Liver Resection: A Phantom Study With Intraparenchymal Tumors. J Surg Res 2024; 296:612-620. [PMID: 38354617] [DOI: 10.1016/j.jss.2023.12.014]
Abstract
INTRODUCTION Augmented reality (AR) in laparoscopic liver resection (LLR) can improve intrahepatic navigation by creating a virtual liver transparency. Our team has recently developed Hepataug, an AR software that projects the invisible intrahepatic tumors onto the laparoscopic images and allows the surgeon to localize them precisely. However, the accuracy of registration as a function of tumor location and size, as well as the influence of the projection axis, have never been measured. The aim of this work was to measure the three-dimensional (3D) tumor prediction error of Hepataug. METHODS Eight 3D virtual livers were created from the computed tomography scan of a healthy human liver. Reference markers with known coordinates were virtually placed on the anterior surface. The virtual livers were then deformed and 3D printed, forming 3D liver phantoms. After placing each 3D phantom inside a pelvitrainer, registration allowed Hepataug to project virtual tumors along two axes: the laparoscope axis and the operator port axis. The surgeons had to point to the center of eight virtual tumors per liver with a pointing tool whose coordinates were precisely calculated. RESULTS We obtained 128 pointing experiments. The average pointing error was 29.4 ± 17.1 mm and 9.2 ± 5.1 mm for the laparoscope and operator port axes, respectively (P = 0.001). The pointing errors tended to increase with tumor depth (correlation coefficients greater than 0.5 with P < 0.001). There was no significant dependency of the pointing error on tumor size for either projection axis. CONCLUSIONS Tumor visualization by projection toward the operator port improves the accuracy of AR guidance and partially solves the problem of the two-dimensional visual interface of monocular laparoscopy. Despite a lower precision of AR for tumors located in the posterior part of the liver, it could allow surgeons to access these lesions without completely mobilizing the liver, hence decreasing surgical trauma.
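The 3D pointing error reported in this study reduces to the Euclidean distance between the pointed position and the known tumor center, summarized as mean ± SD over all attempts. A minimal sketch with hypothetical coordinates (not the study's data; `pointing_error` and `mean_sd` are illustrative names):

```python
import math

def pointing_error(pointed, truth):
    """Euclidean distance (mm) between a pointed position and the true tumor center."""
    return math.dist(pointed, truth)

def mean_sd(values):
    """Mean and population standard deviation of a list of errors."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return m, sd

# Hypothetical pointing attempts (mm): (pointed xyz, true tumor center xyz)
attempts = [
    ((12.0, 5.0, 3.0), (10.0, 5.0, 3.0)),  # 2 mm off along x
    ((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)),    # 5 mm off in the xy-plane
]
errors = [pointing_error(p, t) for p, t in attempts]
mean, sd = mean_sd(errors)
```

The study aggregates 128 such measurements per projection axis before comparing the two axes statistically.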
Affiliation(s)
- Mathieu Ribeiro: Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Yamid Espinel: UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Navid Rabbani: UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Bruno Pereira: Biostatistics Unit (DRCI), University Hospital Clermont-Ferrand, Clermont-Ferrand, France
- Adrien Bartoli: UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
- Emmanuel Buc: Department of Digestive and Hepatobiliary Surgery, Hospital Estaing, CHU de Clermont-Ferrand, Clermont-Ferrand, France; UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Institut Pascal, Clermont-Ferrand, France
2. Begagić E, Bečulić H, Pugonja R, Memić Z, Balogun S, Džidić-Krivić A, Milanović E, Salković N, Nuhović A, Skomorac R, Sefo H, Pojskić M. Augmented Reality Integration in Skull Base Neurosurgery: A Systematic Review. Medicina (Kaunas) 2024; 60:335. [PMID: 38399622] [PMCID: PMC10889940] [DOI: 10.3390/medicina60020335]
Abstract
Background and Objectives: To investigate the role of augmented reality (AR) in skull base (SB) neurosurgery. Materials and Methods: Following PRISMA methodology, the PubMed and Scopus databases were searched for data on AR integration in SB surgery. Results: The largest share of the 19 included studies (42.1%) were conducted in the United States, and most were published within the last five years (77.8%). The studies used phantom skull models (31.2%, n = 6), human cadavers (15.8%, n = 3), or human patients (52.6%, n = 10). The surgical modality was specified in 18 of the 19 studies, with microscopic surgery predominant (n = 10; 52.6%). Most studies used only CT as the data source (n = 9; 47.4%), and optical tracking was the most common tracking modality (n = 9; 47.3%). The target registration error (TRE) ranged from 0.55 to 10.62 mm. Conclusions: Despite variations in TRE values, the studies reported successful outcomes and minimal complications. Challenges such as device practicality and data security were acknowledged, but the use of low-cost AR devices suggests broader feasibility.
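The target registration error (TRE) quoted across these studies is conventionally the distance between a preoperative target mapped through the estimated registration and its true intraoperative position. A minimal rigid-transform sketch with hypothetical numbers (not drawn from any of the reviewed studies):

```python
import math

def apply_rigid(R, t, p):
    """Apply a 3x3 rotation matrix R and translation t to a 3D point p."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def target_registration_error(R, t, targets_pre, targets_intra):
    """Per-target TRE (mm): distance between each preoperative target mapped
    by the estimated registration and its true intraoperative position."""
    return [math.dist(apply_rigid(R, t, a), b)
            for a, b in zip(targets_pre, targets_intra)]

# Hypothetical case: the registration recovered a pure 1 mm x-shift,
# but the true mapping also includes a 0.5 mm y-shift (residual error).
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = (1.0, 0.0, 0.0)
pre = [(0.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
intra = [(1.0, 0.5, 0.0), (3.0, 1.5, 0.0)]  # true intraoperative positions
tre = target_registration_error(R, t, pre, intra)
```

Because TRE is measured at anatomical targets rather than at the fiducials used for alignment, it is the clinically meaningful figure the review compares across systems.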
Affiliation(s)
- Emir Begagić: Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Hakija Bečulić: Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina; Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Ragib Pugonja: Department of Anatomy, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Zlatan Memić: Department of General Medicine, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Simon Balogun: Division of Neurosurgery, Department of Surgery, Obafemi Awolowo University Teaching Hospitals Complex, Ilesa Road PMB 5538, Ile-Ife 220282, Nigeria
- Amina Džidić-Krivić: Department of Neurology, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina
- Elma Milanović: Neurology Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Naida Salković: Department of General Medicine, School of Medicine, University of Tuzla, Univerzitetska 1, 75000 Tuzla, Bosnia and Herzegovina
- Adem Nuhović: Department of General Medicine, School of Medicine, University of Sarajevo, Univerzitetska 1, 71000 Sarajevo, Bosnia and Herzegovina
- Rasim Skomorac: Department of Neurosurgery, Cantonal Hospital Zenica, Crkvice 67, 72000 Zenica, Bosnia and Herzegovina; Department of Surgery, School of Medicine, University of Zenica, Travnička 1, 72000 Zenica, Bosnia and Herzegovina
- Haso Sefo: Neurosurgery Clinic, Clinical Center University of Sarajevo, Bolnička 25, 71000 Sarajevo, Bosnia and Herzegovina
- Mirza Pojskić: Department of Neurosurgery, University Hospital Marburg, Baldingerstr., 35033 Marburg, Germany
3. Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023; 90:102943. [PMID: 37703675] [PMCID: PMC10958137] [DOI: 10.1016/j.media.2023.102943]
Abstract
Augmented Reality (AR) is considered to be a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on the development of methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess quantitatively and qualitatively the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We design a study where participants are asked to physically localise pre-operative tumours in a liver phantom using three image guidance conditions: a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred the AR overlay over the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision-making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram: Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy: Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson: Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
4. Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery. IEEE J Transl Eng Health Med 2023; 12:258-267. [PMID: 38410181] [PMCID: PMC10896424] [DOI: 10.1109/jtehm.2023.3332088]
Abstract
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyze the AR-RSC sensitivity and specificity by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyses the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the level of user experience with AR technologies. Results show that: 1) the sensitivity and specificity of the AR-RSC depend on the virtual models used (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth); 2) some error components are more difficult to identify visually; 3) the level of user experience does not affect the performance of the method. In conclusion, the proposed AR-RSC, which was also tested in the operating room, could represent an efficient method to monitor and optimize registration accuracy during the intervention, but special attention should be paid to the selection of the AR data chosen for the visual inspection of the registration accuracy.
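The sensitivity and specificity reported for AR-RSC are the usual true positive and true negative rates over rater judgements of misalignment. A small sketch with hypothetical ratings (not the study's 36-subject data; the tuple layout is illustrative):

```python
def sensitivity_specificity(ratings):
    """ratings: list of (rater_says_misaligned, truly_misaligned) booleans.
    Sensitivity = TP / (TP + FN): truly misaligned images flagged as such.
    Specificity = TN / (TN + FP): well-aligned images accepted as such."""
    tp = sum(1 for said, truth in ratings if said and truth)
    fn = sum(1 for said, truth in ratings if not said and truth)
    tn = sum(1 for said, truth in ratings if not said and not truth)
    fp = sum(1 for said, truth in ratings if said and not truth)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical judgements for 8 AR images: 4 truly misaligned, 4 aligned
ratings = [(True, True), (True, True), (False, True), (True, True),
           (False, False), (False, False), (True, False), (False, False)]
sens, spec = sensitivity_specificity(ratings)
```

In the study these rates are computed per virtual model (brackets, incisor teeth, gingival margins), which is why the choice of model inspected drives the method's performance.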
Affiliation(s)
- Sara Condino: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Marina Carbone: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli: EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Nicola Montemurro: Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
- Vincenzo Ferrari: Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
5.
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their vision of the operating field with digital imagery, particularly as regards tumors and anatomical structures. While this type of technology is routinely utilized in some specialties, its applications in liver surgery remain limited due to the complexity of modeling organ deformations in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome the difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize and analyze series reporting AR techniques tested in liver surgery, the objectives being to establish a state of the art and to indicate perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 matching the following keywords: augmented reality, hepatic surgery, liver and hepatectomy. RESULTS Initially, 102 titles, studies and summaries were preselected. Twenty-eight meeting the inclusion criteria were included, reporting on 183 patients operated on with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by time of acquisition and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited due to insufficient anatomical precision. That said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term the application of AR in liver surgery will become more frequent and effective. As for its clinical impact, notably in oncology, it remains to be assessed.
Affiliation(s)
- B Acidi: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- M Ghallab: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France
- S Cotin: Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- E Vibert: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
- N Golse: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
6. Davis C, Yoo S, Reissis A, Clarkson MJ, Thompson S. Enhanced Surgeons: Understanding the Design of Augmented Reality Instructions for Keyhole Surgery. Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces 2023; 2023:123-127. [PMID: 37525696] [PMCID: PMC7614851] [DOI: 10.1109/vrw58643.2023.00031]
Abstract
It is important to understand how to design AR content for surgical contexts to mitigate the risk of distracting the surgeon. In this work, we test information overlays for AR guidance during keyhole surgery. We performed a preliminary evaluation of a prototype, focusing on the effects of colour, opacity, and information representation. Our work contributes insights into the design of AR guidance in surgical settings and a foundation for future research on visualisation design for surgical AR.
Affiliation(s)
- Christoph Davis: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London (UCL), United Kingdom
- Soojeong Yoo: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London (UCL), United Kingdom
- Athena Reissis: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London (UCL), United Kingdom
- Matthew J. Clarkson: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London (UCL), United Kingdom
- Stephen Thompson: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London (UCL), United Kingdom
7. Guan P, Luo H, Guo J, Zhang Y, Jia F. Intraoperative laparoscopic liver surface registration with preoperative CT using mixing features and overlapping region masks. Int J Comput Assist Radiol Surg 2023. [PMID: 36787037] [DOI: 10.1007/s11548-023-02846-w]
Abstract
PURPOSE Laparoscopic liver resection is a minimally invasive surgery. Augmented reality can map preoperative anatomical information extracted from computed tomography onto the intraoperative liver surface reconstructed from stereo 3D laparoscopy. However, liver surface registration is particularly challenging because the intraoperative surface is only partially visible and undergoes large deformations due to pneumoperitoneum. This study proposes a robust deep learning-based point cloud registration network. METHODS We propose a low-overlap liver surface registration algorithm combining local mixed features and global features of point clouds. A learned overlap mask filters out the non-overlapping region of the point cloud, and a network predicts the overlapping-region threshold to regulate the training process. RESULTS We validated the algorithm on the DePoLL (Deformable Porcine Laparoscopic Liver) dataset. Compared with the baseline method and other state-of-the-art registration methods, our method achieves a minimum target registration error (TRE) of 19.9 ± 2.7 mm. CONCLUSION The proposed method uses the learned overlap mask to filter out the non-overlapping areas of the point cloud, then registers the extracted overlapping regions according to their mixed and global features; it is robust and efficient in low-overlap liver surface registration.
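The core idea of the overlap mask, discarding non-overlapping points before estimating the transform, can be illustrated independently of the learned network. Below is a toy, translation-only stand-in for the fitting step (the paper's method learns the mask and uses mixed/global features with a full rigid-plus-deformable model; here the mask is simply given and rotation is omitted):

```python
def masked_translation_fit(source, target, overlap_mask):
    """Toy stand-in for overlap-aware registration: keep only point pairs
    flagged as overlapping, then estimate the translation as the
    difference of the two centroids (rotation deliberately omitted)."""
    pairs = [(s, t) for s, t, keep in zip(source, target, overlap_mask) if keep]
    n = len(pairs)
    cs = [sum(s[i] for s, _ in pairs) / n for i in range(3)]  # source centroid
    ct = [sum(t[i] for _, t in pairs) / n for i in range(3)]  # target centroid
    return tuple(ct[i] - cs[i] for i in range(3))

# Hypothetical partial-overlap example: the last pair is a non-overlapping
# outlier that the mask removes before fitting.
src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (9.0, 9.0, 9.0)]
dst = [(2.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
mask = [True, True, False]
t = masked_translation_fit(src, dst, mask)
```

Without the mask, the outlier pair would pull the estimated translation away from the true shift; this is the failure mode that overlap prediction is designed to prevent when only a fraction of the liver surface is visible.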
Affiliation(s)
- Peidong Guan: Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Huoling Luo: Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jianxi Guo: Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
- Yanfang Zhang: Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
- Fucang Jia: Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China; Pazhou Lab, Guangzhou, China
8. Chen X, Sakai D, Fukuoka H, Shirai R, Ebina K, Shibuya S, Sase K, Tsujita T, Abe T, Oka K, Konno A. Basic Experiments Toward Mixed Reality Dynamic Navigation for Laparoscopic Surgery. J Robot Mechatron 2022. [DOI: 10.20965/jrm.2022.p1253]
Abstract
Laparoscopic surgery is a minimally invasive procedure performed by viewing endoscopic camera images. However, the limited field of view of endoscopic cameras makes laparoscopic surgery difficult. To provide more visual information during laparoscopic surgeries, augmented reality (AR) surgical navigation systems have been developed that visualize the positional relationship between the surgical field and organs based on preoperative medical images of the patient. However, because earlier studies relied on preoperative images, navigation became inaccurate as surgery progressed and the organs were displaced and deformed. To solve this problem, we propose a mixed reality (MR) surgical navigation system in which surgical instruments are tracked by a motion capture (Mocap) system, contact between the instruments and organs is evaluated, and the resulting organ deformation is simulated and visualized. This paper describes a method for the numerical calculation of soft-body deformation. The basic technologies of MR and projection mapping are then presented for MR surgical navigation. The accuracy of the simulated and visualized deformations is evaluated through basic experiments using a soft rectangular cuboid object.
9. Bierbrier J, Gueziri HE, Collins DL. Estimating medical image registration error and confidence: A taxonomy and scoping review. Med Image Anal 2022; 81:102531. [PMID: 35858506] [DOI: 10.1016/j.media.2022.102531]
Abstract
Given that image registration is a fundamental and ubiquitous task in both clinical and research domains of the medical field, errors in registration can have serious consequences. Since such errors can mislead clinicians during image-guided therapies or bias the results of a downstream analysis, methods to estimate registration error are becoming more popular. To give structure to this new heterogeneous field, we developed a taxonomy and performed a scoping review of methods that quantitatively and automatically provide a dense estimation of registration error. The taxonomy breaks down error estimation methods into Approach (Image- or Transformation-based), Framework (Machine Learning or Direct) and Measurement (error or confidence) components. Following the PRISMA guidelines for scoping reviews, the 570 records found were reduced to twenty studies that met the inclusion criteria, which were then reviewed according to the proposed taxonomy. Trends in the field, advantages and disadvantages of the methods, and potential sources of bias are also discussed. We provide suggestions for best practices and identify areas of future research.
Affiliation(s)
- Joshua Bierbrier: Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- Houssem-Eddine Gueziri: McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada
- D Louis Collins: Department of Biomedical Engineering, McGill University, Montreal, QC, Canada; McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, Montreal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
10. Edwards PJE, Psychogyios D, Speidel S, Maier-Hein L, Stoyanov D. SERV-CT: A disparity dataset from cone-beam CT for validation of endoscopic 3D reconstruction. Med Image Anal 2021; 76:102302. [PMID: 34906918] [PMCID: PMC8961000] [DOI: 10.1016/j.media.2021.102302]
Abstract
Highlights:
- Full torso porcine CT model for stereo-endoscopic reconstruction validation
- CT of endoscope and anatomy with constrained manual alignment provides a reference
- Accuracy analysis of repeated alignments and performance of existing algorithms presented
- Open-sourced dataset for stereo reconstruction validation
In computer vision, reference datasets from simulation and real outdoor scenes have been highly successful in promoting algorithmic development in stereo reconstruction. Endoscopic stereo reconstruction for surgical scenes gives rise to specific problems, including the lack of clear corner features, highly specular surface properties and the presence of blood and smoke. These issues present difficulties both for stereo reconstruction itself and for standardised dataset production. Previous datasets have been produced using computed tomography (CT) or structured light reconstruction on phantom or ex vivo models. We present a stereo-endoscopic reconstruction validation dataset based on cone-beam CT (SERV-CT). Two ex vivo small porcine full torso cadavers were placed within the view of the endoscope, with both the endoscope and target anatomy visible in the CT scan. The orientation of the endoscope was then manually aligned to match the stereoscopic view, and benchmark disparities, depths and occlusions were calculated. The requirement of a CT scan limited the number of stereo pairs to 8 from each ex vivo sample. For the second sample an RGB surface was acquired to aid alignment of smooth, featureless surfaces. Repeated manual alignments showed an RMS disparity accuracy of around 2 pixels and a depth accuracy of about 2 mm. A simplified reference dataset is provided, consisting of endoscope image pairs with corresponding calibration, disparities, depths and occlusions covering the majority of the endoscopic image and a range of tissue types, including smooth specular surfaces, as well as significant variation of depth. We assessed the performance of various stereo algorithms from online available repositories. There is significant variation between algorithms, highlighting some of the challenges of surgical endoscopic images. The SERV-CT dataset provides an easy-to-use stereoscopic validation for surgical applications, with smooth reference disparities and depths covering the majority of the endoscopic image. This complements existing resources well, and we hope it will aid the development of surgical endoscopic anatomical reconstruction algorithms.
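The disparity and depth accuracies quoted above are linked by the rectified-stereo relation depth = f·B/disparity, which also explains why a fixed disparity error grows into a larger depth error at greater depths. A sketch with hypothetical camera parameters (not SERV-CT's actual calibration; focal length and baseline values are illustrative):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Rectified stereo: depth z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

def depth_to_disparity(depth_mm, focal_px, baseline_mm):
    """Inverse relation: d = f * B / z."""
    return focal_px * baseline_mm / depth_mm

# Hypothetical endoscope: 500 px focal length, 4 mm stereo baseline
f, B = 500.0, 4.0
depth = disparity_to_depth(20.0, f, B)  # 20 px disparity -> depth in mm

# A fixed disparity error dd maps to a depth error that grows
# quadratically with depth: dz ~= z**2 / (f * B) * dd
dz = (depth ** 2) / (f * B) * 2.0  # depth error for a 2 px disparity error
```

This quadratic growth is why a ~2 px disparity accuracy can still correspond to only millimetre-level depth accuracy at the short working distances typical of endoscopy.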
Affiliation(s)
- P J Eddie Edwards: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Dimitris Psychogyios: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT) Dresden, Dresden, 01307, Germany
- Lena Maier-Hein: Division of Medical and Biological Informatics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
|
11
|
Automatic, global registration in laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2021; 17:167-176. [PMID: 34697757 PMCID: PMC8739294 DOI: 10.1007/s11548-021-02518-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 10/04/2021] [Indexed: 11/13/2022]
Abstract
Purpose The initial registration of a 3D pre-operative CT model to a 2D laparoscopic video image in augmented reality systems for liver surgery needs to be fast, intuitive to perform, and minimally disruptive to the surgical intervention. Several recent methods have focussed on using easily recognisable landmarks across modalities. However, these methods still need manual annotation or manual alignment. We propose a novel, fully automatic pipeline for 3D–2D global registration in laparoscopic liver interventions. Methods Firstly, we train a fully convolutional network for the semantic detection of liver contours in laparoscopic images. Secondly, we propose a novel contour-based global registration algorithm to estimate the camera pose without any manual input during surgery. The contours used are the anterior ridge and the silhouette of the liver. Results We show excellent generalisation of the semantic contour detection on test data from 8 clinical cases. In quantitative experiments, the proposed contour-based registration can successfully estimate a global alignment with as little as 30% of the liver surface, a visibility ratio which is characteristic of laparoscopic interventions. Moreover, the proposed pipeline showed very promising results in clinical data from 5 laparoscopic interventions. Conclusions Our proposed automatic global registration could make augmented reality systems more intuitive and usable for surgeons and easier to translate to operating rooms. Yet, as the liver is deformed significantly during surgery, it will be very beneficial to incorporate deformation into our method for more accurate registration.
12
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. [PMID: 34358880 DOI: 10.1016/j.suronc.2021.101637]
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short term outcomes. It is however technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. Therefore results are presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met inclusion criteria. Data from 24 articles that reported on accuracy indicates that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state of the art IGS may be useful as a supplementary navigation tool, especially in small liver lesions that are difficult to locate. They are however not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK.
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
13
Humm G, Harries RL, Stoyanov D, Lovat LB. Supporting laparoscopic general surgery training with digital technology: The United Kingdom and Ireland paradigm. BMC Surg 2021; 21:123. [PMID: 33685437 PMCID: PMC7941971 DOI: 10.1186/s12893-021-01123-4]
Abstract
Surgical training in the UK and Ireland has faced challenges following the implementation of the European Working Time Directive and postgraduate training reform. The health services are undergoing a digital transformation; digital technology is remodelling the delivery of surgical care and surgical training. This review aims to critically evaluate key issues in laparoscopic general surgery training and digital technologies such as virtual and augmented reality, telementoring, automated workflow analysis, and surgical skills assessment. We include pre-clinical, proof-of-concept research and commercial systems that are being developed to provide solutions. Digital surgical technology is evolving through interdisciplinary collaboration to provide widespread access to high-quality laparoscopic general surgery training and assessment. In the future this could lead to integrated, context-aware systems that support surgical teams in providing safer surgical care.
Affiliation(s)
- Gemma Humm
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK.
- Division of Surgery and Interventional Science, University College London, London, UK.
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK
- Department of Computer Science, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK
- Division of Surgery and Interventional Science, University College London, London, UK
14
Ramalhinho J, Tregidgo HFJ, Gurusamy K, Hawkes DJ, Davidson B, Clarkson MJ. Registration of Untracked 2D Laparoscopic Ultrasound to CT Images of the Liver Using Multi-Labelled Content-Based Image Retrieval. IEEE Trans Med Imaging 2021; 40:1042-1054. [PMID: 33326379 DOI: 10.1109/tmi.2020.3045348]
Abstract
Laparoscopic Ultrasound (LUS) is recommended as a standard-of-care when performing laparoscopic liver resections as it images sub-surface structures such as tumours and major vessels. Given that LUS probes are difficult to handle and some tumours are iso-echoic, registration of LUS images to a pre-operative CT has been proposed as an image-guidance method. This registration problem is particularly challenging due to the small field of view of LUS, and usually depends on both a manual initialisation and tracking to compose a volume, hindering clinical translation. In this paper, we extend a previously proposed registration approach using Content-Based Image Retrieval (CBIR), removing the requirement for tracking or manual initialisation. Pre-operatively, a set of possible LUS planes is simulated from CT and a descriptor generated for each image. Then, a Bayesian framework is employed to estimate the most likely sequence of CT simulations that matches a series of LUS images. We extend our CBIR formulation to use multiple labelled objects and constrain the registration by separating liver vessels into portal vein and hepatic vein branches. The value of this new labelled approach is demonstrated in retrospective data from 5 patients. Results show that, by including a series of 5 untracked images in time, a single LUS image can be registered with accuracies ranging from 5.7 to 16.4 mm with a success rate of 78%. Initialisation of the LUS to CT registration with the proposed framework could potentially enable the clinical translation of these image fusion techniques.
15
Thompson S, Dowrick T, Ahmad M, Opie J, Clarkson MJ. Are fiducial registration error and target registration error correlated? SciKit-SurgeryFRED for teaching and research. Proc SPIE Int Soc Opt Eng 2021; 11598:115980U. [PMID: 34840671 PMCID: PMC7612039 DOI: 10.1117/12.2580159]
Abstract
Understanding the relationship between fiducial registration error (FRE) and target registration error (TRE) is important for the correct use of interventional guidance systems. Whilst it is well established that TRE is statistically independent of FRE, system users still struggle against the intuitive assumption that a low FRE indicates a low TRE. We present the SciKit-Surgery Fiducial Registration Educational Demonstrator and describe its use. SciKit-SurgeryFRED was developed to enable remote teaching of key concepts in image registration. SciKit-SurgeryFRED also supports research into user interface design for image registration systems. SciKit-SurgeryFRED can be used to enable remote tutorials covering the statistics relevant to image-guided interventions. Students are able to place fiducial markers on pre- and intra-operative images and observe the effects of changes in marker geometry, marker count, and fiducial localisation error on TRE and FRE. SciKit-SurgeryFRED also calculates statistical measures for the expected values of TRE and FRE. Because many registrations can be performed quickly, the students can then explore potential correlations between the different statistics. SciKit-SurgeryFRED also implements a registration-based game, where participants are rewarded for complete treatment of a clinical target, whilst minimising the treatment margin. We used this game to perform a remote study on registration and simulated ablation, measuring how user performance changes depending on what error statistics are made available. The results support the assumption that knowing the exact value of target registration error leads to better treatment. Display of other statistics did not have a significant impact on the treatment performance.
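The FRE/TRE distinction this entry teaches can be reproduced numerically. The sketch below is not SciKit-SurgeryFRED code; the marker geometry, noise level, and helper names are illustrative assumptions. It fits a rigid transform to noisy fiducials with the Kabsch method, then reports the fiducial residual (FRE) alongside the error at a target placed away from the markers (TRE):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def fre(R, t, src, dst):
    """Root-mean-square fiducial registration error."""
    return float(np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1))))

def tre(R, t, p_src, p_dst):
    """Target registration error at a point not used in the fit."""
    return float(np.linalg.norm(R @ p_src + t - p_dst))

rng = np.random.default_rng(0)
fiducials = rng.uniform(-50.0, 50.0, (6, 3))      # hypothetical marker positions (mm)
target = np.array([0.0, 0.0, 120.0])              # target far from the marker cloud
theta = np.deg2rad(10.0)                          # ground-truth pose: 10 deg about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 8.0])
measured = fiducials @ R_true.T + t_true + rng.normal(0.0, 0.5, fiducials.shape)

R, t = rigid_register(fiducials, measured)
print("FRE (mm):", fre(R, t, fiducials, measured))
print("TRE (mm):", tre(R, t, target, R_true @ target + t_true))
```

Re-running the trial with fresh noise illustrates the teaching point: over repeated registrations FRE and TRE fluctuate independently, so a low FRE on one registration does not certify a low TRE.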
Affiliation(s)
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Science, University College London, United Kingdom
- Tom Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Science, University College London, United Kingdom
- Mian Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Science, University College London, United Kingdom
- Jeremy Opie
- Wellcome/EPSRC Centre for Interventional and Surgical Science, University College London, United Kingdom
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Science, University College London, United Kingdom
16
Teatini A, Kumar RP, Elle OJ, Wiig O. Mixed reality as a novel tool for diagnostic and surgical navigation in orthopaedics. Int J Comput Assist Radiol Surg 2021; 16:407-414. [PMID: 33555563 PMCID: PMC7946663 DOI: 10.1007/s11548-020-02302-z]
Abstract
Purpose This study presents a novel surgical navigation tool developed in a mixed reality environment for orthopaedic surgery. Joint and skeletal deformities affect all age groups and greatly reduce the range of motion of the joints. These deformities are notoriously difficult to diagnose and to correct through surgery. Method We have developed a surgical tool which integrates surgical instrument tracking and augmented reality through a head mounted display. This allows the surgeon to visualise bones with the illusion of possessing “X-ray” vision. The studies presented below aim to assess the accuracy of the surgical navigation tool in tracking a location at the tip of the surgical instrument in holographic space. Results Results show that the average accuracy provided by the navigation tool is around 8 mm, and qualitative assessment by the orthopaedic surgeons provided positive feedback in terms of the capabilities for diagnostic use. Conclusions More improvements are necessary for the navigation tool to be accurate enough for surgical applications; however, this new tool has the potential to improve diagnostic accuracy and allow for safer and more precise surgeries, as well as provide for better learning conditions for orthopaedic surgeons in training.
Affiliation(s)
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital, Oslo, Norway.
- Department of Informatics, University of Oslo, Oslo, Norway.
- Rahul P Kumar
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Ola Wiig
- Department of Orthopaedic Surgery, Oslo University Hospital, Oslo, Norway
17
Pelanis E, Teatini A, Eigl B, Regensburger A, Alzaga A, Kumar RP, Rudolph T, Aghayan DL, Riediger C, Kvarnström N, Elle OJ, Edwin B. Evaluation of a novel navigation platform for laparoscopic liver surgery with organ deformation compensation using injected fiducials. Med Image Anal 2020; 69:101946. [PMID: 33454603 DOI: 10.1016/j.media.2020.101946]
Abstract
In laparoscopic liver resection, surgeons conventionally rely on anatomical landmarks detected through a laparoscope, preoperative volumetric images and laparoscopic ultrasound to compensate for the challenges of minimally invasive access. Image guidance using optical tracking and registration procedures is a promising tool, although often undermined by its inaccuracy. This study evaluates a novel surgical navigation solution that can compensate for liver deformations using an accurate and effective registration method. The proposed solution relies on a robotic C-arm to perform registration to preoperative CT/MRI image data and allows for intraoperative updates during resection using fluoroscopic images. Navigation is offered both as a 3D liver model with real-time instrument visualization, as well as an augmented reality overlay on the laparoscope camera view. Testing was conducted through a pre-clinical trial which included four porcine models. Accuracy of the navigation system was measured through two evaluation methods: liver surface fiducials reprojection and a comparison between planned and navigated resection margins. Target Registration Error with the fiducials evaluation shows that the accuracy in the vicinity of the lesion was 3.78±1.89 mm. Resection margin evaluations resulted in an overall median accuracy of 4.44 mm with a maximum error of 9.75 mm over the four subjects. The presented solution is accurate enough to be potentially clinically beneficial for surgical guidance in laparoscopic liver surgery.
Affiliation(s)
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway.
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Department of Informatics, University of Oslo 1072, Oslo, Norway
- Rahul Prasanna Kumar
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway
- Davit L Aghayan
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University, 0025 Yerevan, Armenia
- Carina Riediger
- University Hospital Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Department of Informatics, University of Oslo 1072, Oslo, Norway
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital Rikshospitalet 0424, Oslo, Norway; Institute of Clinical Medicine, University of Oslo 1072, Oslo, Norway; Department of Hepato-Pancreatic-Biliary surgery 0424, Oslo University Hospital, Oslo, Norway
18
An Augmented Reality-Based Mobile Application Facilitates the Learning about the Spinal Cord. Educ Sci 2020. [DOI: 10.3390/educsci10120376]
Abstract
Health education is one of the knowledge areas in which augmented reality (AR) technology is widespread, and it has been considered as a facilitator of the learning process. In the literature, there are still few studies detailing the role of mobile AR in neuroanatomy. Specifically, for the spinal cord, the teaching–learning process may be hindered due to its abstract nature and the absence of three-dimensional models. In this sense, we implemented a mobile application with AR technology named NitLabEduca for studying the spinal cord with an interactive exploration of 3D rotating models in the macroscopic scale, theoretical content of its specificities, animations, and simulations regarding its physiology. To investigate NitLabEduca’s effects, eighty individuals with and without previous neuroanatomy knowledge were selected and grouped into control and experimental groups. Divided into these groups, they performed learning tasks through a questionnaire. We used the System Usability Scale (SUS) to evaluate the usability level of the mobile application and a complementary survey to verify the adherence level to the use of mobile applications in higher education. As a result, we observed that participants of both groups who started the task with the application and finished with text had more correct results in the test (p < 0.001). SUS results were promising in terms of usability and learning factor. We concluded that studying the spinal cord through NitLabEduca seems to favor learning when used as a complement to the printed material.
19
General first-order target registration error model considering a coordinate reference frame in an image-guided surgical system. Med Biol Eng Comput 2020; 58:2989-3002. [PMID: 33029759 DOI: 10.1007/s11517-020-02265-y]
Abstract
Point-based rigid registration (PBRR) techniques are widely used in many aspects of image-guided surgery (IGS). Accurately estimating target registration error (TRE) statistics is of essential value for medical applications such as optical surgical tool-tip tracking and image registration. For example, knowing the TRE distribution statistics of the surgical tool tip can help the surgeon make the right decisions during surgery. In the meantime, the pose of a surgical tool is usually reported relative to a second rigid body whose local frame is called the coordinate reference frame (CRF). In an n-ocular tracking system, fiducial localization error (FLE) should be considered inhomogeneous, meaning that it differs between fiducials, and anisotropic, meaning that it differs across directions. In this paper, we extend the TRE estimation algorithm relative to a CRF from homogeneous and anisotropic to heterogeneous FLE cases. Arbitrary weightings can be assumed in solving the registration problems in the proposed TRE estimation algorithm. Monte Carlo simulation results demonstrate the proposed algorithm's effectiveness for both homogeneous and inhomogeneous FLE distributions. The results are further compared with those of two other algorithms. When the FLE distribution is anisotropic and homogeneous, the proposed TRE estimation algorithm's performance is comparable with that of the first one. When the FLE distribution is heterogeneous, the proposed TRE estimation algorithm outperforms the other two classical algorithms in all test cases when the ideal weighting scheme is adopted in solving the two registrations. Possible clinical applications include the online estimation of surgical tool-tip tracking error with respect to a CRF in IGS.
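The quantity this entry models in closed form can also be estimated by brute force, as in the paper's Monte Carlo validation. The sketch below uses a hypothetical marker geometry and plain unweighted Kabsch registration (not the paper's weighted algorithm or its CRF chain): each fiducial is perturbed with its own per-axis standard deviation, so the FLE is heterogeneous (different per fiducial) and anisotropic (different per axis), and the RMS displacement of a target point is accumulated over many trials:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

rng = np.random.default_rng(1)
fiducials = rng.uniform(-40.0, 40.0, (5, 3))   # hypothetical marker geometry (mm)
target = np.array([60.0, 0.0, 0.0])            # target outside the marker cloud
# Heterogeneous FLE: a different std per fiducial; anisotropic: a different std per axis.
fle_std = rng.uniform(0.1, 0.8, (5, 3))

displacements = []
for _ in range(2000):
    noisy = fiducials + rng.normal(0.0, fle_std)      # perturb each marker independently
    R, t = rigid_register(fiducials, noisy)           # ground truth is the identity pose
    displacements.append(np.linalg.norm(R @ target + t - target))
tre_rms = float(np.sqrt(np.mean(np.square(displacements))))
print("Monte Carlo TRE RMS (mm):", tre_rms)
```

Increasing the lever arm between the target and the fiducial centroid, or the FLE along one axis, visibly inflates the estimated TRE RMS, which is the behaviour the first-order model predicts analytically.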
20
Prevost GA, Eigl B, Paolucci I, Rudolph T, Peterhans M, Weber S, Beldi G, Candinas D, Lachenmayer A. Efficiency, Accuracy and Clinical Applicability of a New Image-Guided Surgery System in 3D Laparoscopic Liver Surgery. J Gastrointest Surg 2020; 24:2251-2258. [PMID: 31621024 DOI: 10.1007/s11605-019-04395-7]
Abstract
BACKGROUND To investigate efficiency, accuracy and clinical benefit of a new augmented reality system for 3D laparoscopic liver surgery. METHODS All patients who received laparoscopic liver resection by a new image-guided surgery system with augmented 3D-imaging in a university hospital were included for analysis. Digitally processed preoperative cross-sectional imaging was merged with the laparoscopic image. Intraoperative efficiency of the procedure was measured as time needed to achieve sufficient registration accuracy. Technical accuracy was reported as fiducial registration error (FRE). Clinical benefit was assessed through a questionnaire, reporting measures in a 5-point Likert scale format ranging from 1 (high) to 5 (low). RESULTS From January to March 2018, ten laparoscopic liver resections of a total of 18 lesions were performed using the novel augmented reality system. Median time for registration was 8:50 min (range 1:31-23:56). The mean FRE was reduced from 14.0 mm (SD 5.0) in the first registration attempt to 9.2 mm (SD 2.8) in the last attempt. The questionnaire revealed the ease of use of the system (1.2, SD 0.4) and the benefit for resection of vanishing lesions (1.0, SD 0.0) as convincing positive aspects, whereas image registration accuracy for resection guidance was consistently judged as too inaccurate. CONCLUSIONS Augmented reality in 3D laparoscopic liver surgery with landmark-based registration technique is feasible with only little impact on the intraoperative workflow. The benefit for detecting particularly vanishing lesions is high. For an additional benefit during the resection process, registration accuracy has to be improved and non-rigid registration algorithms will be required to address intraoperative anatomical deformation.
Affiliation(s)
- Gian Andrea Prevost
- Department of Visceral Surgery and Medicine, Inselspital, University Hospital Bern, University of Bern, 3010, Bern, Switzerland.
- Benjamin Eigl
- ARTORG Center for Biomedical Engineering Research, University of Bern, 3010, Bern, Switzerland
- CAScination AG, 3008, Bern, Switzerland
- Iwan Paolucci
- ARTORG Center for Biomedical Engineering Research, University of Bern, 3010, Bern, Switzerland
- Stefan Weber
- ARTORG Center for Biomedical Engineering Research, University of Bern, 3010, Bern, Switzerland
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, University Hospital Bern, University of Bern, 3010, Bern, Switzerland
- Daniel Candinas
- Department of Visceral Surgery and Medicine, Inselspital, University Hospital Bern, University of Bern, 3010, Bern, Switzerland
- Anja Lachenmayer
- Department of Visceral Surgery and Medicine, Inselspital, University Hospital Bern, University of Bern, 3010, Bern, Switzerland
21
Thompson S, Dowrick T, Ahmad M, Xiao G, Koo B, Bonmati E, Kahl K, Clarkson MJ. SciKit-Surgery: compact libraries for surgical navigation. Int J Comput Assist Radiol Surg 2020; 15:1075-1084. [PMID: 32436132 PMCID: PMC7316849 DOI: 10.1007/s11548-020-02180-5]
Abstract
Purpose This paper introduces the SciKit-Surgery libraries, designed to enable rapid development of clinical applications for image-guided interventions. SciKit-Surgery implements a family of compact, orthogonal libraries accompanied by robust testing, documentation, and quality control. SciKit-Surgery libraries can be rapidly assembled into testable clinical applications and subsequently translated to production software without the need for software reimplementation. The aim is to support translation from single surgeon trials to multicentre trials in under 2 years. Methods At the time of publication, there were 13 SciKit-Surgery libraries, providing functionality for visualisation and augmented reality in surgery, together with hardware interfaces for video, tracking, and ultrasound sources. The libraries are stand-alone, open source, and provide Python interfaces. This design approach enables fast development of robust applications and subsequent translation. The paper compares the libraries with existing platforms and uses two example applications to show how SciKit-Surgery libraries can be used in practice. Results Using the number of lines of code and the occurrence of cross-dependencies as proxy measurements of code complexity, two example applications using SciKit-Surgery libraries are analysed. The SciKit-Surgery libraries demonstrate ability to support rapid development of testable clinical applications. By maintaining stricter orthogonality between libraries, the number and complexity of dependencies can be reduced. The SciKit-Surgery libraries also demonstrate the potential to support wider dissemination of novel research. Conclusion The SciKit-Surgery libraries utilise the modularity of the Python language and the standard data types of the NumPy package to provide an easy-to-use, well-tested, and extensible set of tools for the development of applications for image-guided interventions. The example application built on SciKit-Surgery has a simpler dependency structure than the same application built using a monolithic platform, making ongoing clinical translation more feasible.
Affiliation(s)
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK.
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Mian Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Goufang Xiao
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ester Bonmati
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Kim Kahl
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
22
Luo H, Yin D, Zhang S, Xiao D, He B, Meng F, Zhang Y, Cai W, He S, Zhang W, Hu Q, Guo H, Liang S, Zhou S, Liu S, Sun L, Guo X, Fang C, Liu L, Jia F. Augmented reality navigation for liver resection with a stereoscopic laparoscope. Comput Methods Programs Biomed 2020; 187:105099. [PMID: 31601442 DOI: 10.1016/j.cmpb.2019.105099]
Abstract
OBJECTIVE Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using the V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; then, the preoperative 3D models are superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins and vessels. RESULTS The proposed navigation system is tested on four laboratory ex vivo porcine livers and five operating theatre in vivo porcine experiments to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) are 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
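The Go-ICP step referred to above wraps a branch-and-bound search around ordinary point-to-point ICP; the inner loop it globally optimises can be sketched in a few lines. The example below is a minimal local ICP on synthetic point clouds (brute-force nearest neighbours, no branch-and-bound, all data hypothetical), not the authors' implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - c_src).T @ (dst - c_dst))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch fits."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]            # closest model point for each point
        R, t = rigid_register(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t     # compose the incremental transforms
    return R_tot, t_tot, cur

rng = np.random.default_rng(2)
model = rng.uniform(-1.0, 1.0, (200, 3))            # stand-in for the preoperative surface
theta = np.deg2rad(3.0)                             # small initial misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
scan = model @ R_true.T + np.array([0.03, -0.02, 0.03])   # "intraoperative" cloud
R, t, aligned = icp(model, scan)
residual = float(np.sqrt(((aligned - scan) ** 2).sum(axis=1).mean()))
print("RMS residual after ICP:", residual)
```

Plain ICP like this only converges from a nearby starting pose; Go-ICP's contribution is a branch-and-bound search over SE(3) that certifies the globally optimal alignment regardless of initialisation.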
Affiliation(s)
- Huoling Luo
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Dalong Yin
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Shugeng Zhang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Deqiang Xiao
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Baochun He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fanzheng Meng
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yanfang Zhang
- Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
- Wei Cai
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Shenghao He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenyu Zhang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Qingmao Hu
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Hongrui Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuhang Liang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuo Zhou
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuxun Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Linmao Sun
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiao Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Lianxin Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China.
- Fucang Jia
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China.
23
Teatini A, Pérez de Frutos J, Eigl B, Pelanis E, Aghayan DL, Lai M, Kumar RP, Palomar R, Edwin B, Elle OJ. Influence of sampling accuracy on augmented reality for laparoscopic image-guided surgery. Minim Invasiv Ther 2020; 30:229-238. [DOI: 10.1080/13645706.2020.1727524]
Affiliation(s)
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Javier Pérez de Frutos
- SINTEF Digital, SINTEF A.S, Trondheim, Norway
- Department of Computer Science, NTNU, Trondheim, Norway
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Davit L. Aghayan
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Surgery N1, Yerevan State Medical University, Yerevan, Armenia
- Marco Lai
- Philips Research, High Tech, Eindhoven, The Netherlands
- Rafael Palomar
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of Computer Science, NTNU, Trondheim, Norway
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Hepato-Pancreatic-Biliary Surgery, Oslo University Hospital, Oslo, Norway
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- SINTEF Digital, SINTEF A.S, Trondheim, Norway
24
Non-linear-Optimization Using SQP for 3D Deformable Prostate Model Pose Estimation in Minimally Invasive Surgery. Advances in Intelligent Systems and Computing 2020. [DOI: 10.1007/978-3-030-17795-9_35]
25
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. [PMID: 32780240] [PMCID: PMC7524854] [DOI: 10.1007/s00464-020-07807-x]
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems (IGS) may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- S. Thompson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- J. Totz
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- Y. Song
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- A. E. Desjardins
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. Barratt
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- S. Ourselin
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
- D. Stoyanov
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Computer Science, University College London, London, UK
- M. J. Clarkson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. J. Hawkes
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
26
An in vivo porcine dataset and evaluation methodology to measure soft-body laparoscopic liver registration accuracy with an extended algorithm that handles collisions. Int J Comput Assist Radiol Surg 2019; 14:1237-1245. [PMID: 31147817] [DOI: 10.1007/s11548-019-02001-4]
Abstract
PURPOSE The registration of preoperative 3D images to intra-operative laparoscopic 2D images is one of the main concerns for augmented reality in computer-assisted surgery. For laparoscopic liver surgery, while several algorithms have been proposed, there is neither a public dataset nor a systematic evaluation methodology to quantitatively evaluate registration accuracy. METHOD Our main contribution is to provide such a dataset with an in vivo porcine model. It is used to evaluate a state-of-the-art registration algorithm that is capable of simultaneous registration and soft-body collision reasoning. RESULTS The dataset consists of 13 deformed liver states, with corresponding exploration videos and interventional CT acquisitions. Sixty small artificial fiducials are located on the surface of the liver and distributed within the parenchyma, regions where precise registration is crucial for augmented reality. This dataset will be made public. Using this dataset, we show that collision reasoning improves registration performance under strong deformation and independent lobe motion. CONCLUSION This dataset addresses the lack of public datasets in this field. As an example of use, we present and evaluate a state-of-the-art energy-based approach and a novel extension that handles self-collisions.
27
Xiao G, Bonmati E, Thompson S, Evans J, Hipwell J, Nikitichev D, Gurusamy K, Ourselin S, Hawkes DJ, Davidson B, Clarkson MJ. Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system. Med Phys 2018; 45:5094-5104. [PMID: 30247765] [PMCID: PMC6282846] [DOI: 10.1002/mp.13210]
Abstract
PURPOSE In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system comprised of an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. Also, the optical trackers demonstrate better consistency of orientation measurement within the test volume. However, their accuracy of measuring relative positions decreases significantly with longer distances whereas the EM tracker's performance is stable; at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, and it is 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, and it is 1.117 mm for the EM tracker. 
Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
Affiliation(s)
- Guofang Xiao
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Stephen Thompson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Joe Evans
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- John Hipwell
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniil Nikitichev
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- David J. Hawkes
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Brian Davidson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Science, University College London, London, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK