1
Dowrick T, Xiao G, Nikitichev D, Dursun E, van Berkel N, Allam M, Koo B, Ramalhinho J, Thompson S, Gurusamy K, Blandford A, Stoyanov D, Davidson BR, Clarkson MJ. Evaluation of a calibration rig for stereo laparoscopes. Med Phys 2023; 50:2695-2704. PMID: 36779419; PMCID: PMC10614700; DOI: 10.1002/mp.16310.
Abstract
BACKGROUND: Accurate camera and hand-eye calibration are essential for high-quality results in image-guided surgery applications. The process must also be one that a non-expert user can carry out in a surgical setting. PURPOSE: This work seeks to identify a suitable method for tracked stereo laparoscope calibration within theater. METHODS: A custom calibration rig was designed to enable rapid calibration in a surgical setting, and was compared against freehand calibration. Stereo reprojection, stereo reconstruction, tracked stereo reprojection, and tracked stereo reconstruction error metrics were used to evaluate calibration quality. RESULTS: Use of the calibration rig reduced mean errors compared with freehand calibration: reprojection (1.47 px [SD 0.13] vs. 3.14 px [SD 2.11], p-value 1e-8), reconstruction (1.37 mm [SD 0.10] vs. 10.10 mm [SD 4.54], p-value 6e-7), and tracked reconstruction (1.38 mm [SD 0.10] vs. 12.64 mm [SD 4.34], p-value 1e-6). A ChArUco pattern yielded slightly lower reprojection errors, while a dot grid produced lower reconstruction errors and was more robust under strong global illumination. CONCLUSION: The calibration rig yields a statistically significant decrease in calibration error metrics versus freehand calibration and represents the preferred approach for use in the operating theater.
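The two basic metrics (reprojection error in pixels, reconstruction error in millimetres) can be illustrated with a minimal numpy sketch, assuming an ideal pinhole camera; the function names, the toy 3x3 dot grid, and all numeric values below are illustrative, not the paper's rig or data:

```python
import numpy as np

def reprojection_error_px(K, R, t, pts3d_mm, detected_px):
    """RMS distance (px) between detected pattern points and the
    projections of the known 3-D pattern geometry."""
    cam = R @ pts3d_mm.T + t[:, None]        # pattern -> camera frame
    proj = K @ cam                           # pinhole projection
    proj = (proj[:2] / proj[2]).T            # homogeneous -> pixel coords
    return float(np.sqrt(np.mean(np.sum((proj - detected_px) ** 2, axis=1))))

def reconstruction_error_mm(triangulated_mm, reference_mm):
    """RMS distance (mm) between stereo-triangulated points and the
    known pattern geometry."""
    d = triangulated_mm - reference_mm
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# Toy setup: a 3x3 planar dot grid with 10 mm spacing, 100 mm in front
# of an ideal camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 100.0])
grid = np.array([[x, y, 0.0] for x in range(0, 30, 10) for y in range(0, 30, 10)])
hom = K @ (R @ grid.T + t[:, None])
ideal_px = (hom[:2] / hom[2]).T

err_px = reprojection_error_px(K, R, t, grid, ideal_px + 0.5)  # 0.5 px shift
err_mm = reconstruction_error_mm(grid + np.array([0.1, 0.0, 0.0]), grid)
```

With a uniform 0.5 px shift in both image axes the RMS reprojection error is sqrt(0.5) px, and a 0.1 mm offset gives a 0.1 mm reconstruction error, which is how the mm/px figures in the abstract are to be read.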
Affiliation(s)
- Thomas Dowrick
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Guofang Xiao
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Daniil Nikitichev
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Eren Dursun
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Niels van Berkel
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Moustafa Allam
- Royal Free Campus, UCL Medical School, Royal Free Hospital, London, UK
- Bongjin Koo
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Joao Ramalhinho
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Stephen Thompson
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ann Blandford
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Danail Stoyanov
- Wellcome EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
2
Xu P, Kim K, Koh J, Wu D, Lee YR, Park SY, Tak WY, Liu H, Li Q. Efficient knowledge distillation for liver CT segmentation using growing assistant network. Phys Med Biol 2021; 66. PMID: 34768246; DOI: 10.1088/1361-6560/ac3935.
Abstract
Segmentation is widely used in diagnosis, lesion detection, and surgery planning. Although deep learning (DL)-based segmentation methods currently outperform traditional methods, most DL-based segmentation models are computationally expensive and memory inefficient, which makes them unsuitable for interventional use in liver surgery. A simple remedy is to make the segmentation model very small to obtain a fast inference time; however, there is a trade-off between model size and performance. In this paper, we propose a DL-based real-time 3-D liver CT segmentation method in which knowledge distillation (KD), i.e., knowledge transfer from a teacher model to a student model, is used to compress the model while preserving its performance. Because knowledge transfer is known to be inefficient when the disparity between teacher and student model sizes is large, we propose a growing teacher assistant network (GTAN) that gradually learns the knowledge without extra computational cost and can transfer knowledge efficiently even across a large gap in model size. In our results, the Dice similarity coefficient of the student model with KD improved by 1.2% (85.9% to 87.1%) over the student model without KD, matching the teacher's performance with only 8% (100k) of its parameters. Furthermore, with a student model of 2% (30k) of the parameters, the proposed GTAN improved the Dice coefficient by about 2% compared with the student model without KD, with an inference time of 13 ms per 3-D image. The proposed method therefore has great potential for intervention in liver surgery as well as for many other real-time applications.
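The distillation step builds on the standard temperature-softened KD loss (a weighted sum of a hard cross-entropy term and a soft teacher-matching term). The following is a minimal numpy sketch of that generic loss, not of the paper's GTAN itself; all names, temperatures, and logits are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * KL(teacher_T || student_T) * T^2  +  (1 - alpha) * CE(labels).
    The T^2 factor keeps the soft-target gradients comparable in scale."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)) * T * T
    hard = np.mean(-np.log(softmax(student_logits)[np.arange(len(labels)), labels]))
    return alpha * soft + (1 - alpha) * hard

logits_t = np.array([[4.0, 1.0, 0.0]])        # teacher output (toy)
logits_s = np.array([[4.0, 1.0, 0.0]])        # identical student -> soft term 0
loss = distillation_loss(logits_s, logits_t, labels=np.array([0]))
```

When the student exactly matches the teacher, only the hard cross-entropy term remains; the GTAN idea in the paper is to chain this transfer through intermediate-size assistants rather than jump directly from teacher to a very small student.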
Affiliation(s)
- Pengcheng Xu
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Kyungsang Kim
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Jeongwan Koh
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Dufan Wu
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Yu Rim Lee
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Soo Young Park
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Won Young Tak
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Huafeng Liu
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- Quanzheng Li
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
3
General first-order target registration error model considering a coordinate reference frame in an image-guided surgical system. Med Biol Eng Comput 2020; 58:2989-3002. PMID: 33029759; DOI: 10.1007/s11517-020-02265-y.
Abstract
Point-based rigid registration (PBRR) techniques are widely used in many aspects of image-guided surgery (IGS). Accurately estimating target registration error (TRE) statistics is of essential value for medical applications such as optical surgical tool-tip tracking and image registration; for example, knowing the TRE distribution of the surgical tool tip can help the surgeon make the right decisions during surgery. Meanwhile, the pose of a surgical tool is usually reported relative to a second rigid body whose local frame is called the coordinate reference frame (CRF). In an n-ocular tracking system, fiducial localization error (FLE) should be considered inhomogeneous (differing between fiducials) and anisotropic (differing between directions). In this paper, we extend the TRE estimation algorithm relative to a CRF from homogeneous, anisotropic FLE to heterogeneous FLE cases; arbitrary weightings can be assumed in solving the registration problems. Monte Carlo simulations demonstrate the proposed algorithm's effectiveness for both homogeneous and inhomogeneous FLE distributions, and the results are compared with those of two other algorithms. When the FLE distribution is anisotropic and homogeneous, the proposed algorithm's performance is comparable with that of the first; when the FLE distribution is heterogeneous, it outperforms both classical algorithms in all test cases when an ideal weighting scheme is adopted in solving the two registrations. Possible clinical applications include the online estimation of surgical tool-tip tracking error with respect to a CRF in IGS. Graphical abstract: this paper provides a target registration error model considering a coordinate reference frame in surgical navigation.
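The Monte Carlo evaluation of TRE under inhomogeneous, anisotropic FLE can be sketched as follows, assuming zero-mean Gaussian FLE with a per-fiducial, per-axis standard deviation; the fiducial layout, target, and noise levels are illustrative, and the registration is the standard unweighted SVD solution rather than the paper's weighted one:

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_register(src, dst):
    """Least-squares rigid (R, t) mapping src -> dst (Arun's SVD method)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def monte_carlo_tre(fiducials, target, fle_sigma, n=2000):
    """RMS displacement of a target point under per-fiducial, per-axis
    (inhomogeneous, anisotropic) Gaussian fiducial localization error."""
    errs = []
    for _ in range(n):
        noisy = fiducials + rng.normal(0.0, fle_sigma)  # fle_sigma: (k, 3)
        R, t = rigid_register(fiducials, noisy)
        errs.append(np.linalg.norm(R @ target + t - target))
    return float(np.sqrt(np.mean(np.square(errs))))

fids = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
sigma = np.tile([0.1, 0.1, 0.3], (4, 1))  # anisotropic: worse along z (mm)
tre = monte_carlo_tre(fids, np.array([50.0, 50.0, 50.0]), sigma)
```

The same simulation, run twice and composed through a CRF registration, is essentially how tool-tip tracking error relative to a reference body is evaluated numerically.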
4
Kalia M, Mathur P, Tsang K, Black P, Navab N, Salcudean S. Evaluation of a marker-less, intra-operative, augmented reality guidance system for robot-assisted laparoscopic radical prostatectomy. Int J Comput Assist Radiol Surg 2020; 15:1225-1233. PMID: 32500450; DOI: 10.1007/s11548-020-02181-4.
Abstract
PURPOSE: Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot is a common treatment for organ-confined prostate cancer. Augmented reality (AR) can help during RALRP by showing the surgeon the location of anatomical structures and tumors from preoperative imaging. Previously, we proposed hand-eye and camera intrinsic matrix estimation procedures that can be carried out with conventional instruments, within the patient, during surgery; take under 3 min to perform; and fit seamlessly into the existing surgical workflow. In this paper, we describe and evaluate a complete AR guidance system for RALRP and quantify its accuracy. METHODS: Our AR system requires three transformations: the transrectal ultrasound (TRUS) to da Vinci transformation, the camera intrinsic matrix, and the hand-eye transformation. For evaluation, a 3D-printed cross-wire was visualized in TRUS and in the stereo endoscope in a water bath. Cross-wire points manually triangulated from the stereo images served as ground truth for evaluating the overall target registration error (TRE) between these points and points transformed from TRUS to camera coordinates. RESULTS: After transforming the ground-truth points from the TRUS to the camera coordinate frame, the mean TRE (SD) was [Formula: see text] mm; the mean TREs (SD) in the x-, y-, and z-directions were [Formula: see text] mm, [Formula: see text] mm, and [Formula: see text] mm, respectively. CONCLUSIONS: We describe and evaluate a complete AR guidance system for RALRP that can augment preoperative data onto the endoscope camera image after a deformable magnetic resonance image to TRUS registration step. The streamlined fit with the current surgical workflow and the low TRE demonstrate the system's compatibility and readiness for clinical translation. A detailed sensitivity study remains future work.
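The manual ground-truth step can be sketched as a linear (DLT) two-view triangulation, assuming known 3x4 projection matrices for the stereo pair; the intrinsics, the 5 mm baseline, and the test point below are illustrative, not the system's calibration:

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one 3-D point from a calibrated
    stereo pair, given 3x4 projection matrices and pixel coordinates."""
    A = np.vstack([
        uv_left[0]  * P_left[2]  - P_left[0],
        uv_left[1]  * P_left[2]  - P_left[1],
        uv_right[0] * P_right[2] - P_right[0],
        uv_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # left camera at origin
P_r = K @ np.hstack([np.eye(3), [[-5.0], [0.0], [0.0]]]) # 5 mm stereo baseline
X_true = np.array([10.0, -4.0, 80.0])                    # cross-wire point (mm)

X_hat = triangulate(P_l, P_r, project(P_l, X_true), project(P_r, X_true))
tre = float(np.linalg.norm(X_hat - X_true))              # mm
```

With noise-free pixel observations the triangulated point recovers the true one to machine precision; in the water-bath evaluation, the TRE is this same norm computed against points mapped in from TRUS.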
Affiliation(s)
- Megha Kalia
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Prateek Mathur
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Keith Tsang
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Peter Black
- Vancouver Prostate Centre, Department of Urologic Sciences, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Septimiu Salcudean
- Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
5
Kalia M, Mathur P, Navab N, Salcudean SE. Marker-less real-time intra-operative camera and hand-eye calibration procedure for surgical augmented reality. Healthc Technol Lett 2019; 6:255-260. PMID: 32038867; PMCID: PMC6952262; DOI: 10.1049/htl.2019.0094.
Abstract
Accurate medical augmented reality (AR) rendering requires two calibrations: camera intrinsic matrix estimation and a hand-eye transformation. We present a unified, practical, marker-less, real-time system to estimate both during surgery. For camera calibration, we calibrate pre-operatively at multiple distances from the endoscope to parametrize the camera intrinsic matrix as a function of distance; intra-operatively, we retrieve the camera parameters by estimating the distance of the surgical site from the endoscope in under 1 s. Unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require identifying a marker, we make use of a tool-tip rendered in 3D: as the surgeon moves the instrument and observes the offset between the actual and rendered tool-tips, they can select points of high visual error and manually bring the instrument tip to match the virtual rendered tool tip. To evaluate the hand-eye calibration, five subjects carried out the procedure on a da Vinci robot; an average target registration error of approximately 7 mm was achieved with just three data points.
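The distance-parametrized intrinsics can be sketched as a per-distance lookup with interpolation between the pre-operative calibrations; the calibration distances, focal lengths, and principal point below are invented placeholders, not the paper's values:

```python
import numpy as np

# Pre-operative calibrations at several working distances (mm) mapped to
# focal lengths (px). All numbers here are illustrative placeholders.
cal_dist = np.array([40.0, 60.0, 80.0, 100.0])
cal_fx   = np.array([920.0, 900.0, 885.0, 875.0])
cal_fy   = np.array([918.0, 899.0, 884.0, 874.0])

def intrinsics_at(distance_mm, cx=320.0, cy=240.0):
    """Retrieve K for the current endoscope-to-tissue distance by
    linearly interpolating the pre-operative per-distance calibrations."""
    fx = float(np.interp(distance_mm, cal_dist, cal_fx))
    fy = float(np.interp(distance_mm, cal_dist, cal_fy))
    return np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

# Intra-operative query halfway between the 60 mm and 80 mm calibrations.
K = intrinsics_at(70.0)
```

This is what makes the intra-operative step fast: once the site distance is estimated, the intrinsic matrix is a table lookup rather than a full recalibration.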
Affiliation(s)
- Megha Kalia
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Prateek Mathur
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Boltzmannstraße 15, 85748 Garching bei München, Germany
- Septimiu E Salcudean
- Robotics and Control Lab, Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4, Canada