1
Dowrick T, Xiao G, Nikitichev D, Dursun E, van Berkel N, Allam M, Koo B, Ramalhinho J, Thompson S, Gurusamy K, Blandford A, Stoyanov D, Davidson BR, Clarkson MJ. Evaluation of a calibration rig for stereo laparoscopes. Med Phys 2023; 50:2695-2704. [PMID: 36779419] [PMCID: PMC10614700] [DOI: 10.1002/mp.16310]
Abstract
BACKGROUND Accurate camera and hand-eye calibration are essential to ensure high-quality results in image-guided surgery applications. The process must also be simple enough for a nonexpert user to undertake in a surgical setting. PURPOSE This work seeks to identify a suitable method for tracked stereo laparoscope calibration within theater. METHODS A custom calibration rig was designed to enable rapid calibration in a surgical setting and was compared against freehand calibration. Stereo reprojection, stereo reconstruction, tracked stereo reprojection, and tracked stereo reconstruction error metrics were used to evaluate calibration quality. RESULTS Use of the calibration rig reduced mean errors: reprojection (1.47 px [SD 0.13] vs. 3.14 px [SD 2.11], p-value 1e-8), reconstruction (1.37 mm [SD 0.10] vs. 10.10 mm [SD 4.54], p-value 6e-7), and tracked reconstruction (1.38 mm [SD 0.10] vs. 12.64 mm [SD 4.34], p-value 1e-6) compared with freehand calibration. A ChArUco pattern yielded slightly lower reprojection errors, while a dot grid produced lower reconstruction errors and was more robust under strong global illumination. CONCLUSION The calibration rig yields a statistically significant decrease in calibration error metrics versus freehand calibration and represents the preferred approach for use in the operating theater.
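As a hedged illustration of how these two metrics are typically computed (variable names and data shapes are assumptions, not the authors' code), reprojection error is measured in pixels in the image plane, while reconstruction error is measured in millimetres after stereo triangulation:

```python
# Illustrative sketch of the two error metrics; all names and shapes are assumed.
import cv2
import numpy as np

def reprojection_error_px(obj_pts, img_pts, K, dist, rvec, tvec):
    """RMS pixel distance between detected pattern points img_pts (N,2)
    and the reprojection of their known 3D positions obj_pts (N,3)."""
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    return np.sqrt(np.mean(np.sum((proj.reshape(-1, 2) - img_pts) ** 2, axis=1)))

def reconstruction_error_mm(pts_l, pts_r, P_l, P_r, gt_pts):
    """RMS 3D distance (mm) between points triangulated from the stereo pair
    (pts_l, pts_r: (N,2) pixel coords; P_l, P_r: 3x4 projection matrices)
    and their known positions gt_pts (N,3) on the calibration pattern."""
    X = cv2.triangulatePoints(P_l, P_r, pts_l.T, pts_r.T)  # 4xN homogeneous
    X = (X[:3] / X[3]).T
    return np.sqrt(np.mean(np.sum((X - gt_pts) ** 2, axis=1)))
```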
Affiliation(s)
- Thomas Dowrick
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Guofang Xiao
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Daniil Nikitichev
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Eren Dursun
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Niels van Berkel
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Moustafa Allam
- Royal Free Campus, UCL Medical School, Royal Free Hospital, London, UK
- Bongjin Koo
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Joao Ramalhinho
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Stephen Thompson
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ann Blandford
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Danail Stoyanov
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
2
Jackson P, Simon R, Linte C. Integrating Real-time Video View with Pre-operative Models for Image-guided Renal Navigation: An in vitro Evaluation Study. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1366-1371. [PMID: 34891539] [PMCID: PMC9137973] [DOI: 10.1109/embc46164.2021.9629683]
Abstract
To provide a complete picture of a scene sufficient to conduct a minimally invasive, image-guided renal intervention, real-time laparoscopic video needs to be integrated with underlying anatomy information typically available from pre- or intra-operative images. Here we present a simple and efficient hand-eye calibration method for an optically tracked camera, which only requires the acquisition of several poses of a Polaris stylus featuring four markers automatically localized by both the camera and the optical tracker. We evaluate the calibration, using both the Polaris stylus and a patient-specific 3D-printed kidney phantom, in terms of the number of poses acquired and the depth of the imaged scene within the camera's field of view, by projecting landmarks on the imaged object, at known locations in the 3D world, onto the camera image. The RMS projection error decreases with increasing distance from the camera to the imaged object, from 7 pixels at 15-18 mm to under 2 pixels at 28-30 mm, corresponding to errors of 2 mm and 1 mm, respectively, in 3D space.
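The evaluation described above amounts to mapping known 3D landmarks through the tracked pose and the hand-eye transform, then projecting them into the image; a minimal sketch under assumed transform conventions (not the authors' code):

```python
# Hedged sketch: landmark positions known in the tracker frame are carried
# through the tracked camera pose and the hand-eye transform, projected with
# the intrinsics, and compared against their detected pixel locations.
import cv2
import numpy as np

def rms_projection_error(pts_tracker, px_detected, T_marker2tracker, X_cam2marker, K, dist):
    """pts_tracker: (N,3) landmark positions in the optical tracker frame;
    px_detected: (N,2) their detected pixel locations in the image;
    T_marker2tracker: 4x4 pose of the camera's rigid body from the tracker;
    X_cam2marker: 4x4 hand-eye transform (camera frame -> marker frame)."""
    T_tracker2cam = np.linalg.inv(T_marker2tracker @ X_cam2marker)
    pts_h = np.c_[pts_tracker, np.ones(len(pts_tracker))]
    pts_cam = (T_tracker2cam @ pts_h.T).T[:, :3]
    # Points are already in the camera frame, so project with zero pose.
    proj, _ = cv2.projectPoints(pts_cam, np.zeros((3, 1)), np.zeros((3, 1)), K, dist)
    err = proj.reshape(-1, 2) - px_detected
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
```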
3
Li W, Fan J, Li S, Tian Z, Ai D, Song H, Yang J. Homography-based robust pose compensation and fusion imaging for augmented reality based endoscopic navigation system. Comput Biol Med 2021; 138:104864. [PMID: 34634638] [DOI: 10.1016/j.compbiomed.2021.104864]
Abstract
BACKGROUND Augmented reality (AR) based fusion imaging in endoscopic surgery relies on the quality of image-to-patient registration and camera calibration, and these two offline steps are usually performed independently to obtain the target transformations separately. Each solution may be optimal under its own conditions yet not globally optimal, so residual errors accumulate and eventually lead to inaccurate AR fusion. METHODS After a careful analysis of the principle of AR imaging, a robust online calibration framework was proposed for an endoscopic camera to enable accurate AR fusion. A 2D checkerboard-based homography estimation algorithm was proposed to estimate the local pose of the endoscopic camera, and the least-squares method was used to calculate the compensation matrix in combination with the optical tracking system. RESULTS In comparison with conventional methods, the proposed compensation method improved the performance of AR fusion, reducing physical error by up to 82%, reducing pixel error by up to 83%, and improving target coverage by up to 6%. Experiments simulating mechanical noise showed that the proposed compensation method effectively corrected the fusion errors caused by rotation of the endoscopic tube without recalibrating the camera. Furthermore, the simulation results demonstrated the robustness of the proposed compensation method to noise. CONCLUSIONS Overall, the experimental results confirm the effectiveness of the proposed compensation method and online calibration framework, and reveal considerable potential for clinical practice.
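For context, camera pose recovery from a plane-induced homography follows Zhang's classic decomposition; a hedged sketch of that textbook form (not necessarily the paper's exact algorithm):

```python
# Sketch: decompose a checkerboard homography H ~ K [r1 r2 t] (planar target
# at z=0) into a rotation and translation. H can come from cv2.findHomography
# between board coordinates (mm) and detected corner pixels.
import numpy as np

def pose_from_homography(H, K):
    """H: 3x3 homography mapping board plane coordinates to pixels; K: 3x3 intrinsics."""
    A = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(A[:, 0])      # scale so that ||r1|| = 1
    r1, r2, t = s * A[:, 0], s * A[:, 1], s * A[:, 2]
    r3 = np.cross(r1, r2)                  # complete the right-handed frame
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)            # re-orthogonalise the rotation
    return U @ Vt, t
```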
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
4
Sun Y, Pan B, Guo Y, Fu Y, Niu G. Vision-based hand-eye calibration for robot-assisted minimally invasive surgery. Int J Comput Assist Radiol Surg 2020; 15:2061-2069. [PMID: 32808149] [DOI: 10.1007/s11548-020-02245-5]
Abstract
PURPOSE Knowledge of the laparoscope's view can greatly improve operating room (OR) efficiency. For vision-based computer-assisted surgery, hand-eye calibration establishes the coordinate relationship between the laparoscope and the robot slave arm. While significant advances have been made in hand-eye calibration in recent years, an efficient algorithm for minimally invasive surgical robots remains a major challenge; in particular, estimating the hand-eye transformation without an external calibration object in the abdominal environment is still a critical problem. METHODS We propose a novel hand-eye calibration algorithm for robot-assisted minimally invasive surgery (RMIS) that relies purely on the surgical instrument already present in the operating scenario. Our model is formed from the geometry of the surgical instrument and the remote center-of-motion (RCM) constraint. We also extend the algorithm to a stereo laparoscope model. RESULTS Validation on synthetic simulations and an experimental surgical robot system was conducted to evaluate the proposed method. The results show that the proposed method can perform hand-eye calibration without a calibration object. CONCLUSION A vision-based hand-eye calibration is developed. We demonstrate the feasibility of performing hand-eye calibration using only the components of the surgical robot system, improving the efficiency of the surgical OR.
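For contrast, the conventional calibration-object-based procedure that this work seeks to eliminate solves the classic AX = XB problem from paired robot and camera poses; a minimal sketch using OpenCV's solver (illustrative baseline, not the paper's method):

```python
# Conventional hand-eye calibration from paired poses; names are illustrative.
import cv2
import numpy as np

def hand_eye_axxb(R_g2b, t_g2b, R_t2c, t_t2c):
    """R_g2b, t_g2b: lists of gripper->base rotations/translations from robot
    kinematics; R_t2c, t_t2c: lists of target->camera poses, e.g. from
    cv2.solvePnP on a calibration board. Returns the 4x4 camera->gripper transform."""
    R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t.ravel()
    return X
```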
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Bo Pan
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yongchen Guo
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Guojun Niu
- School of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, China
5
Kalia M, Mathur P, Tsang K, Black P, Navab N, Salcudean S. Evaluation of a marker-less, intra-operative, augmented reality guidance system for robot-assisted laparoscopic radical prostatectomy. Int J Comput Assist Radiol Surg 2020; 15:1225-1233. [PMID: 32500450] [DOI: 10.1007/s11548-020-02181-4]
Abstract
PURPOSE Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot is a common treatment for organ-confined prostate cancer. Augmented reality (AR) can help during RALRP by showing the surgeon the location of anatomical structures and tumors from preoperative imaging. Previously, we proposed hand-eye and camera intrinsic matrix estimation procedures that can be carried out with conventional instruments within the patient during surgery, take < 3 min to perform, and fit seamlessly into the existing surgical workflow. In this paper, we describe and evaluate a complete AR guidance system for RALRP and quantify its accuracy. METHODS Our AR system requires three transformations: the transrectal ultrasound (TRUS) to da Vinci transformation, the camera intrinsic matrix, and the hand-eye transformation. For evaluation, a 3D-printed cross-wire was visualized in TRUS and in the stereo endoscope in a water bath. Manually triangulated cross-wire points from the stereo images served as ground truth for the overall target registration error (TRE) between these points and points transformed from TRUS to camera. RESULTS After transforming the ground-truth points from the TRUS to the camera coordinate frame, the mean TRE (SD) was [Formula: see text] mm. The mean TREs (SD) in the x-, y-, and z-directions were [Formula: see text] mm, [Formula: see text] mm, and [Formula: see text] mm, respectively. CONCLUSIONS We describe and evaluate a complete AR guidance system for RALRP that can augment preoperative data onto the endoscope camera image after a deformable magnetic resonance image to TRUS registration step. The procedures' streamlined fit with the current surgical workflow and the low TRE demonstrate the system's compatibility and readiness for clinical translation. A detailed sensitivity study remains part of future work.
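The TRE computation itself reduces to mapping points through the composed TRUS-to-camera transform and comparing against the triangulated ground truth; a small sketch with assumed names and conventions:

```python
# Illustrative TRE evaluation; transform naming is an assumption.
import numpy as np

def target_registration_error(pts_trus, pts_cam_gt, T_trus2cam):
    """pts_trus: (N,3) cross-wire points in TRUS coordinates;
    pts_cam_gt: (N,3) the same points triangulated in the camera frame;
    T_trus2cam: 4x4 composed TRUS -> camera transform."""
    pts_h = np.c_[pts_trus, np.ones(len(pts_trus))]
    mapped = (T_trus2cam @ pts_h.T).T[:, :3]
    d = mapped - pts_cam_gt
    return np.linalg.norm(d, axis=1).mean(), np.abs(d).mean(axis=0)  # mean TRE, per-axis
```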
Affiliation(s)
- Megha Kalia
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748, Garching bei München, Germany
- Prateek Mathur
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Keith Tsang
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Peter Black
- Vancouver Prostate Centre, Department of Urologic Sciences, University of British Columbia, Vancouver, BC, V5Z 1M9, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748, Garching bei München, Germany
- Septimiu Salcudean
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
6
Teatini A, Pérez de Frutos J, Eigl B, Pelanis E, Aghayan DL, Lai M, Kumar RP, Palomar R, Edwin B, Elle OJ. Influence of sampling accuracy on augmented reality for laparoscopic image-guided surgery. Minim Invasive Ther Allied Technol 2020; 30:229-238. [DOI: 10.1080/13645706.2020.1727524]
Affiliation(s)
- Andrea Teatini
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of Informatics, University of Oslo, Oslo, Norway
- Javier Pérez de Frutos
- SINTEF Digital, SINTEF A.S, Trondheim, Norway
- Department of Computer Science, NTNU, Trondheim, Norway
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Davit L. Aghayan
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Surgery N1, Yerevan State Medical University, Yerevan, Armenia
- Marco Lai
- Philips Research, High Tech Campus, Eindhoven, The Netherlands
- Rafael Palomar
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Department of Computer Science, NTNU, Trondheim, Norway
- Bjørn Edwin
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Hepato-Pancreatic-Biliary Surgery, Oslo University Hospital, Oslo, Norway
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway
- SINTEF Digital, SINTEF A.S, Trondheim, Norway
7
Lee S, Shim S, Ha HG, Lee H, Hong J. Simultaneous Optimization of Patient-Image Registration and Hand-Eye Calibration for Accurate Augmented Reality in Surgery. IEEE Trans Biomed Eng 2020; 67:2669-2682. [PMID: 31976878] [DOI: 10.1109/tbme.2020.2967802]
Abstract
OBJECTIVE Augmented reality (AR) navigation using a position sensor in endoscopic surgeries relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data to compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that during preoperative processes. Although conventional methods can identify optimal solutions under initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications, and to tracking errors of the position sensor. METHODS We propose the simultaneous optimization of patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to overlay is first calculated using an endoscopic image, which also functions as a reference during optimization. After including the tracking information from the position sensor, patient-image registration and hand-eye calibration are optimized in a least-squares sense. RESULTS Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor is also conducted. The accuracy of the proposed method is significantly higher than that of the conventional method. CONCLUSION The AR accuracy of the proposed method is compared with that of conventional methods, and the superiority of the proposed method is verified. SIGNIFICANCE This study demonstrates that the proposed method has substantial potential for improving AR navigation accuracy.
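Conceptually, the joint optimisation stacks reprojection residuals over both unknown transforms into one least-squares problem; a hedged sketch of a plausible formulation under assumed transform conventions (not the authors' implementation):

```python
# Sketch: jointly refine patient-image registration T_reg and hand-eye X so
# that model points, mapped through each tracked endoscope pose, land on
# their observed image projections. All names are illustrative.
import cv2
import numpy as np
from scipy.optimize import least_squares

def to_mat(p):
    """6-vector (rotation vector, translation) -> 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(p[:3].reshape(3, 1))[0]
    T[:3, 3] = p[3:]
    return T

def residuals(params, pts_model, obs_px, sensor_poses, K):
    """params: 12-vector stacking T_reg (model -> tracker) and X (camera -> sensor)."""
    T_reg, X = to_mat(params[:6]), to_mat(params[6:])
    res = []
    for obs, T_sensor in zip(obs_px, sensor_poses):
        # model -> tracker (T_reg), tracker -> sensor (inv T_sensor), sensor -> camera (inv X)
        T = np.linalg.inv(X) @ np.linalg.inv(T_sensor) @ T_reg
        pc = (T @ np.c_[pts_model, np.ones(len(pts_model))].T).T[:, :3]
        proj, _ = cv2.projectPoints(pc, np.zeros((3, 1)), np.zeros((3, 1)), K, None)
        res.append((proj.reshape(-1, 2) - obs).ravel())
    return np.concatenate(res)

# sol = least_squares(residuals, x0, args=(pts_model, obs_px, sensor_poses, K))
```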
8
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. [PMID: 32780240] [PMCID: PMC7524854] [DOI: 10.1007/s00464-020-07807-x]
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the image guidance system (IGS) could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference -3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound. CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
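The semi-automatic registration pairs a stereo surface reconstruction with iterative closest point matching; a compact, generic ICP sketch (illustrative only, not the SmartLiver code):

```python
# Minimal point-to-point ICP with a Kabsch (SVD) rigid solve per iteration.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=50):
    """Rigidly align point cloud src (N,3) to dst (M,3); returns a 4x4 transform."""
    T, cur, tree = np.eye(4), src.copy(), cKDTree(dst)
    for _ in range(iters):
        nn = dst[tree.query(cur)[1]]              # closest-point correspondences
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (nn - mu_d))
        R = Vt.T @ U.T                            # Kabsch rotation
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```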
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- S. Thompson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- J. Totz
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- Y. Song
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- A. E. Desjardins
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- D. Barratt
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- S. Ourselin
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
- D. Stoyanov
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Computer Science, University College London, London, UK
- M. J. Clarkson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- D. J. Hawkes
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Department of Medical Physics and Bioengineering, University College London, London, UK
- B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG, UK
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
9
Kalia M, Mathur P, Navab N, Salcudean SE. Marker-less real-time intra-operative camera and hand-eye calibration procedure for surgical augmented reality. Healthc Technol Lett 2019; 6:255-260. [PMID: 32038867] [PMCID: PMC6952262] [DOI: 10.1049/htl.2019.0094]
Abstract
Accurate medical Augmented Reality (AR) rendering requires two calibrations: a camera intrinsic matrix estimation and a hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both of these transformations during surgery. For camera calibration, we perform calibrations at multiple distances from the endoscope, pre-operatively, to parametrize the camera intrinsic matrix as a function of distance from the endoscope. We then retrieve the camera parameters intra-operatively by estimating the distance of the surgical site from the endoscope in less than 1 s. Unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require the identification of a marker, we make use of a rendered tool-tip in 3D. As the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool-tip. To evaluate the hand-eye calibration, five subjects carried out the calibration procedure on a da Vinci robot. An average target registration error of approximately 7 mm was achieved with just three data points.
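The distance-parametrised intrinsics can be realised with simple interpolation over the pre-operative calibrations; a tiny sketch with assumed data layout (illustrative, not the authors' code):

```python
# Interpolate pre-operative calibrations to get intrinsics at a working distance.
import numpy as np

def intrinsics_at(d, cal_dists, cal_params):
    """cal_dists: distances (mm) at which the camera was pre-calibrated;
    cal_params: (len(cal_dists), 4) array of [fx, fy, cx, cy] per distance.
    Returns the interpolated 3x3 intrinsic matrix at working distance d."""
    fx, fy, cx, cy = (np.interp(d, cal_dists, cal_params[:, i]) for i in range(4))
    return np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
```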
Affiliation(s)
- Megha Kalia
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748, Garching bei München, Germany
- Prateek Mathur
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748, Garching bei München, Germany
- Septimiu E Salcudean
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC, V6T 1Z4, Canada
10
Xiao G, Bonmati E, Thompson S, Evans J, Hipwell J, Nikitichev D, Gurusamy K, Ourselin S, Hawkes DJ, Davidson B, Clarkson MJ. Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system. Med Phys 2018; 45:5094-5104. [PMID: 30247765] [PMCID: PMC6282846] [DOI: 10.1002/mp.13210]
Abstract
PURPOSE In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system comprising an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. Also, the optical trackers demonstrate better consistency of orientation measurement within the test volume. However, their accuracy of measuring relative positions decreases significantly with longer distances, whereas the EM tracker's performance is stable; at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, and it is 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, and it is 1.117 mm for the EM tracker. Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to the triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
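The relative-position accuracy reported above reduces to comparing tracker-measured separations against the evaluation plate's known separations; a minimal sketch of that metric (names assumed, not the authors' code):

```python
# RMS error of distance measurement between two plate positions.
import numpy as np

def rms_distance_error(p, q, true_dist):
    """p, q: (N,3) repeated tracker measurements of two plate positions;
    true_dist: the plate's known separation (mm)."""
    measured = np.linalg.norm(p - q, axis=1)
    return np.sqrt(np.mean((measured - true_dist) ** 2))
```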
Affiliation(s)
- Guofang Xiao
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Joe Evans
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- John Hipwell
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniil Nikitichev
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- David J. Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Brian Davidson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Science, University College London, London, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
11
Zhou M, Hamad M, Weiss J, Eslami A, Huang K, Maier M, Lohmann CP, Navab N, Knoll A, Nasseri MA. Towards Robotic Eye Surgery: Marker-Free, Online Hand-Eye Calibration Using Optical Coherence Tomography Images. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2858744]
12
In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg 2018; 13:865-874. [PMID: 29663273] [PMCID: PMC5973973] [DOI: 10.1007/s11548-018-1761-3]
Abstract
PURPOSE Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible both in a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS The phantom results show that projected errors of surface features provide a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
13
Chen ECS, Morgan I, Jayarathne U, Ma B, Peters TM. Hand-eye calibration using a target registration error model. Healthc Technol Lett 2017; 4:157-162. [PMID: 29184657] [PMCID: PMC5683221] [DOI: 10.1049/htl.2017.0072]
Abstract
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
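At the heart of the point-line formulation is the perpendicular distance from each transformed stylus tip to its back-projected viewing ray; a hedged sketch of that residual under assumed conventions (not the authors' code):

```python
# Point-to-line residuals for homologous point-line pairs: after the candidate
# hand-eye transform X, each stylus tip should lie on the image ray through
# the camera origin corresponding to its detected projection.
import numpy as np

def point_line_residuals(X, tips_marker, ray_dirs):
    """X: candidate 4x4 hand-eye transform (marker frame -> camera frame);
    tips_marker: (N,3) stylus-tip positions in the camera-marker frame;
    ray_dirs: (N,3) unit directions of the image rays through the camera origin."""
    pts_h = np.c_[tips_marker, np.ones(len(tips_marker))]
    pc = (X @ pts_h.T).T[:, :3]                                  # tips in camera frame
    proj = (pc * ray_dirs).sum(axis=1, keepdims=True) * ray_dirs  # foot of perpendicular
    return np.linalg.norm(pc - proj, axis=1)                     # point-to-line distances
```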
Affiliation(s)
- Isabella Morgan
- Biomedical Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Burton Ma
- Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada
14
Sánchez-Ferrer ML, Grima-Murcia MD, Sánchez-Ferrer F, Hernández-Peñalver AI, Fernández-Jover E, Sánchez Del Campo F. Use of Eye Tracking as an Innovative Instructional Method in Surgical Human Anatomy. J Surg Educ 2017; 74:668-673. [PMID: 28126379] [DOI: 10.1016/j.jsurg.2016.12.012]
Abstract
OBJECTIVE Tobii glasses record corneal infrared light reflection to track pupil position and map gaze focus onto the video recording. Eye tracking has been proposed for use in training and coaching as a visually guided control interface. The aim of our study was to test the potential use of these glasses in various situations: explanations of anatomical structures on tablet-type electronic devices, explanations of anatomical models and dissected cadavers, and during the prosection thereof. An additional aim was to test the glasses during laparoscopies performed on Thiel-embalmed cadavers (which allow pneumoinsufflation and exact reproduction of the laparoscopic surgical technique). The device was also tried out in actual surgery (both laparoscopy and open surgery). DESIGN We performed a pilot study using the Tobii glasses. SETTING Dissection room at our School of Medicine and operating room at our Hospital. PARTICIPANTS To evaluate usefulness, a survey was designed for use among students, instructors, and practicing physicians. RESULTS The results were satisfactory, with the usefulness of this tool supported by more than 80% positive responses to most questions. There was no inconvenience for surgeons, and patient safety was ensured during the real laparoscopy. CONCLUSION To our knowledge, this is the first publication to demonstrate the usefulness of eye tracking in practical instruction of human anatomy, as well as in teaching clinical anatomy and surgical techniques in the dissection and operating rooms.
Affiliation(s)
- María Luísa Sánchez-Ferrer
- Department of Obstetrics and Gynecology, "Virgen de la Arrixaca" University Clinical Hospital and Institute for Biomedical Research of Murcia, IMIB-Arrixaca, Murcia, Spain
- Francisco Sánchez-Ferrer
- Department of Pediatrics, "San Juan" University Clinical Hospital, University Miguel Hernández, Alicante, Spain
- Ana Isabel Hernández-Peñalver
- Department of Obstetrics and Gynecology, "Virgen de la Arrixaca" University Clinical Hospital and Institute for Biomedical Research of Murcia, IMIB-Arrixaca, Murcia, Spain
- Eduardo Fernández-Jover
- Department of Histology and Anatomy, Bioengineering Institute, Miguel Hernández University, Alicante, Spain
15
Morgan I, Jayarathne U, Rankin A, Peters TM, Chen ECS. Hand-eye calibration for surgical cameras: a Procrustean Perspective-n-Point solution. Int J Comput Assist Radiol Surg 2017; 12:1141-1149. [PMID: 28425030] [DOI: 10.1007/s11548-017-1590-9]
Abstract
PURPOSE Surgical cameras are prevalent in modern operating theatres and are often used as surrogates for direct vision. A surgical navigation system is a useful adjunct but requires an accurate "hand-eye" calibration to determine the geometrical relationship between the surgical camera and the tracking markers. METHODS Using a tracked ball-tip stylus, we formulated hand-eye calibration as a Perspective-n-Point (PnP) problem, which can be solved efficiently and accurately using as few as 15 measurements. RESULTS The proposed hand-eye calibration algorithm was applied to three types of camera and validated against five other widely used methods. Using projection error as the accuracy metric, our proposed algorithm compared favourably with existing methods. CONCLUSION We present a fully automated hand-eye calibration technique, based on Procrustean point-to-line registration, which provides superior results for calibrating surgical cameras when compared to existing methods.
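Seen this way, the 3D stylus-tip positions (expressed in the camera-marker frame) and their 2D image detections form exactly the input of a PnP solver, and the recovered pose is the hand-eye transform; a minimal sketch using OpenCV's generic solver (illustrative names, not the paper's Procrustean implementation):

```python
# PnP view of hand-eye calibration: solve for the marker->camera pose directly.
import cv2
import numpy as np

def hand_eye_pnp(tips_marker, tips_px, K, dist):
    """tips_marker: (N,3) ball-tip positions in the camera-marker frame;
    tips_px: (N,2) detected tip locations in the image; K, dist: intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(tips_marker.astype(np.float64),
                                  tips_px.astype(np.float64), K, dist)
    X = np.eye(4)
    X[:3, :3] = cv2.Rodrigues(rvec)[0]
    X[:3, 3] = tvec.ravel()
    return X  # marker frame -> camera frame: the hand-eye calibration
```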
Affiliation(s)
- Adam Rankin
- Robarts Research Institute, Western University, London, ON, Canada
- Terry M Peters
- Robarts Research Institute, Western University, London, ON, Canada
- Elvis C S Chen
- Robarts Research Institute, Western University, London, ON, Canada
16
Shao J, Luo H, Xiao D, Hu Q, Jia F. Progressive Hand-Eye Calibration for Laparoscopic Surgery Navigation. Lect Notes Comput Sci 2017. [DOI: 10.1007/978-3-319-67543-5_4]