1
Baptista T, Marques M, Raposo C, Ribeiro L, Antunes M, Barreto JP. Structured light for touchless 3D registration in video-based surgical navigation. Int J Comput Assist Radiol Surg 2024; 19:1429-1437. PMID: 38816650; PMCID: PMC11230986; DOI: 10.1007/s11548-024-03180-5.
Abstract
PURPOSE Arthroscopic surgery, with its inherent limitations on visibility and maneuverability inside the joint, poses significant challenges to surgeons. Video-based surgical navigation (VBSN) has proven clinical benefits in arthroscopy but relies on a time-consuming and challenging surface digitization with a touch probe to register intraoperative data with preoperative anatomical models. This paper presents an off-the-shelf laser scanner for noninvasive registration that enlarges the reachable region. METHODS Our solution uses a standard arthroscope and a light projector with visual markers for real-time extrinsic calibration. However, the shift from a touch probe to a laser scanner introduces a new challenge: a significant number of outliers arising from the reconstruction of nonrigid structures. To address this issue, we propose to identify the structures of interest prior to reconstruction using deep learning-based semantic segmentation. RESULTS Experimental validation using knee and hip phantoms, as well as ex vivo data, assesses the laser scanner's effectiveness. Integrating the segmentation model improves results in the ex vivo experiments by mitigating outliers. Specifically, the laser scanner with the segmentation model achieves registration errors below 2.2 mm, with the intercondylar region exhibiting errors below 1 mm. In the phantom experiments, errors are always below 1 mm. CONCLUSION The results show the viability of integrating the laser scanner with VBSN as a noninvasive alternative to traditional methods, overcoming surface digitization challenges and expanding the reachable region. Future efforts aim to improve the hardware to further optimize performance and applicability in complex procedures.
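The registration step described in this abstract, aligning an intraoperatively digitized surface with a preoperative model, is conventionally posed as rigid point-cloud alignment. The paper's own pipeline is not reproduced here; the following is a minimal, generic sketch (naive iterative closest point with an SVD-based Kabsch fit; all names and data are illustrative):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=30):
    """Naive ICP: match each source point to its nearest model point, re-fit,
    and accumulate the total rigid transform so that src @ R.T + t ~ dst."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; fine for small clouds
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = kabsch(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

In practice, outliers from nonrigid structures (the problem the paper attacks with semantic segmentation) would be filtered before or during the matching step.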
Affiliation(s)
- Tânia Baptista
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Perceive3D, Coimbra, Portugal
- Carolina Raposo
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Perceive3D, Coimbra, Portugal
- Michel Antunes
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Perceive3D, Coimbra, Portugal
- Joao P Barreto
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Perceive3D, Coimbra, Portugal
2
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. PMID: 38139718; PMCID: PMC10748263; DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could substantially enhance their performance in IGS. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented, mixed, and virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope that this survey will shed some light on the future of IGS in the face of the challenges and opportunities ahead for research in medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
3
Long Z, Chi Y, Yu X, Jiang Z, Yang D. ArthroNavi framework: stereo endoscope-guided instrument localization for arthroscopic minimally invasive surgeries. J Biomed Opt 2023; 28:106002. PMID: 37841507; PMCID: PMC10576396; DOI: 10.1117/1.JBO.28.10.106002.
Abstract
Significance As an example of a minimally invasive arthroscopic surgical procedure, arthroscopic osteochondral autograft transplantation (OAT) is a common option for repairing focal cartilage defects in the knee joints. Arthroscopic OAT offers considerable benefits to patients, such as less post-operative pain and shorter hospital stays. However, performing OAT arthroscopically is an extremely demanding task because the osteochondral graft harvester must remain perpendicular to the cartilage surface to avoid differences in angulation. Aim We present a practical ArthroNavi framework for instrument pose localization that combines a self-developed stereo endoscope with electromagnetic computation, equipping surgeons with navigation assistance that eases the operational constraints of arthroscopic OAT surgery. Approach A prototype stereo endoscope specifically suited to texture-less scenes is introduced in detail. The proposed framework employs the semi-global matching algorithm, integrated with the marching cubes method, for real-time processing of the 3D point cloud. To address initialization and occlusion issues, a display method based on patient tracking coordinates is proposed for robust intra-operative navigation. A geometrical constraint method that utilizes the 3D point cloud computes the pose of the instrument. Finally, a hemisphere tabulation method is presented for evaluating pose accuracy. Results Experimental results show that our endoscope achieves 3D shape measurement with an accuracy of <730 μm. The mean pose localization error is 15.4 deg (range 10.3-21.3 deg; standard deviation 3.08 deg) with our ArthroNavi method, which is within the same order of magnitude as that achieved by experienced surgeons using a freehand technique. Conclusions The effectiveness of the proposed ArthroNavi has been validated on a phantom femur. This framework may provide a new computer-aided option for arthroscopic OAT surgery.
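The geometrical-constraint idea of keeping the harvester perpendicular to the cartilage surface depends on estimating a local surface normal from the 3D point cloud. The authors' exact formulation is not given here; a common sketch fits a plane to neighbouring points by PCA (function names are illustrative):

```python
import numpy as np

def surface_normal(neighbors):
    """Normal of a local surface patch: the eigenvector of the point
    covariance with the smallest eigenvalue (classic PCA plane fit)."""
    pts = np.asarray(neighbors, dtype=float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return v[:, 0]                      # direction of least variance

def angle_to_normal(tool_axis, normal):
    """Angle in degrees between the instrument axis and the surface normal;
    0 means the harvester is exactly perpendicular to the surface."""
    a = tool_axis / np.linalg.norm(tool_axis)
    n = normal / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(abs(a @ n), 0.0, 1.0)))
```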
Affiliation(s)
- Zhongjie Long
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Yongting Chi
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Xiaotong Yu
- Guang’anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China
- Zhouxiang Jiang
- Beijing Information Science & Technology University, School of Electromechanical Engineering, Beijing, China
- Dejin Yang
- Beijing Jishuitan Hospital, Capital Medical School, 4th Clinical College of Peking University, Department of Orthopedics, Beijing, China
4
Yang B, Xu S, Chen H, Zheng W, Liu C. Reconstruct Dynamic Soft-Tissue With Stereo Endoscope Based on a Single-Layer Network. IEEE Trans Image Process 2022; 31:5828-5840. PMID: 36054398; DOI: 10.1109/TIP.2022.3202367.
Abstract
In dynamic minimally invasive surgery environments, 3D reconstruction of deformable soft-tissue surfaces with stereo endoscopic images is very challenging. A simple self-supervised stereo reconstruction framework is proposed to address this issue, which bridges the traditional geometric deformable models and the newly revived neural networks. The equivalence between the classical thin plate spline (TPS) model and a single-layer fully-connected or convolutional network is studied. By alternating training of two TPS equivalent networks within the self-supervised framework, disparity priors are learnt from the past stereo frames of target tissues to form an optimized disparity basis, on which disparity maps of subsequent frames can be estimated more accurately without sacrificing computational efficiency and robustness. The proposed method was verified on stereo-endoscopic videos recorded by the da Vinci® surgical robots.
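The equivalence claimed here rests on the fact that a thin plate spline is a weighted sum of fixed radial basis functions plus an affine term, i.e. a single linear map over precomputed features. A minimal illustrative sketch (2-D TPS kernel, hypothetical control points; not the paper's actual network or training scheme):

```python
import numpy as np

def tps_features(x, ctrl):
    """Fixed TPS feature map: radial kernel U(r) = r^2 log r around each
    control point, plus affine terms. A TPS surface is then one
    matrix-vector product over these features, i.e. a single linear layer."""
    r = np.linalg.norm(x[:, None, :] - ctrl[None, :, :], axis=-1)
    U = np.zeros_like(r)
    m = r > 0
    U[m] = r[m] ** 2 * np.log(r[m])
    return np.hstack([U, np.ones((len(x), 1)), x])   # shape (N, K + 3)

def fit_tps(ctrl, values):
    """Fit the 'layer weights' so the surface interpolates sampled values
    (minimum-norm least squares; bending-energy regularization omitted)."""
    w, *_ = np.linalg.lstsq(tps_features(ctrl, ctrl), values, rcond=None)
    return w
```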
5
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. PMID: 34358880; DOI: 10.1016/j.suronc.2021.101637.
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data, it was not possible to conduct a meta-analysis; results are therefore presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods, it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated the impact of IGS on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS, it is crucial to reach a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
6
Yang L, Kobayashi E. Review on vision-based tracking in surgical navigation. IET Cyber-Systems and Robotics 2020. DOI: 10.1049/iet-csr.2020.0013.
Affiliation(s)
- Liangjing Yang
- Zhejiang University/University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, People's Republic of China
- School of Mechanical Engineering, Zhejiang University, Hangzhou, People's Republic of China
- Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
- Kobayashi Etsuko
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
7
Sui C, Wu J, Wang Z, Ma G, Liu YH. A Real-Time 3D Laparoscopic Imaging System: Design, Method, and Validation. IEEE Trans Biomed Eng 2020; 67:2683-2695. PMID: 31985404; DOI: 10.1109/TBME.2020.2968488.
Abstract
OBJECTIVE This paper aims to propose a 3D laparoscopic imaging system that can realize dense 3D reconstruction in real time. METHODS Based on the active stereo technique which yields high-density, accurate and robust 3D reconstruction by combining structured light and stereo vision, we design a laparoscopic system consisting of two image feedback channels and one pattern projection channel. Remote high-speed image acquisition and pattern generation lay the foundation for the real-time dense 3D surface reconstruction and enable the miniaturization of the laparoscopic probe. To enhance the reconstruction efficiency and accuracy, we propose a novel active stereo method by which the dense 3D point cloud is obtained using only five patterns, while most existing multiple-shot structured light techniques require [Formula: see text] patterns. In our method, dual-frequency phase-shifting fringes are utilized to uniquely encode the pixels of the measured targets, and a dual-codeword matching scheme is developed to simplify the matching procedure and achieve high-precision reconstruction. RESULTS Compared with the existing structured light techniques, the proposed method shows better real-time efficiency and accuracy in both quantitative and qualitative ways. Ex-vivo experiments demonstrate the robustness of the proposed method to different biological organs and the effectiveness to lesions and deformations of the organs. Feasibility of the proposed system for real-time dense 3D reconstruction is verified in dynamic experiments. According to the experimental results, the system acquires 3D point clouds with a speed of 12 frames per second. Each frame contains more than 40,000 points, and the average errors tested on standard objects are less than 0.2 mm. SIGNIFICANCE This paper provides a new real-time dense 3D reconstruction method for 3D laparoscopic imaging. The established prototype system has shown good performance in reconstructing surface of biological tissues.
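Dual-frequency phase-shifting of the kind described combines two textbook ingredients: a phase-shift formula for the wrapped phase and a two-wavelength (heterodyne) combination to resolve the 2π ambiguity. The paper's five-pattern encoding and dual-codeword matching are not reproduced; this generic sketch uses four shots per frequency and assumed fringe frequencies:

```python
import numpy as np

def wrapped_phase(shots):
    """Four-step phase shifting, I_k = A + B*cos(phi + k*pi/2), k = 0..3:
    recover the wrapped phase phi from the four fringe images."""
    I0, I1, I2, I3 = shots
    return np.arctan2(I3 - I1, I0 - I2)

def unwrap_dual(phi_hi, phi_lo, f_hi, f_lo):
    """Two-wavelength unwrapping: the beat of the two wrapped phases has a
    single fringe over the field (when f_hi - f_lo = 1), fixing the fringe
    order of the high-frequency phase."""
    beat = np.mod(phi_hi - phi_lo, 2 * np.pi)       # phase at f_hi - f_lo
    coarse = beat * f_hi / (f_hi - f_lo)            # coarse absolute phase
    k = np.round((coarse - phi_hi) / (2 * np.pi))   # integer fringe order
    return phi_hi + 2 * np.pi * k
```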
8
Kuntz A, Fu M, Alterovitz R. Planning High-Quality Motions for Concentric Tube Robots in Point Clouds via Parallel Sampling and Optimization. Proc IEEE/RSJ Int Conf Intell Robots Syst (IROS) 2019:2205-2212. PMID: 32355572; DOI: 10.1109/IROS40897.2019.8968172.
Abstract
We present a method that plans motions for a concentric tube robot to automatically reach surgical targets inside the body while avoiding obstacles, where the patient's anatomy is represented by point clouds. Point clouds can be generated intra-operatively via endoscopic instruments, enabling the system to update obstacle representations over time as the patient anatomy changes during surgery. Our new motion planning method uses a combination of sampling-based motion planning methods and local optimization to efficiently handle point cloud data and quickly compute high quality plans. The local optimization step uses an interior point optimization method, ensuring that the computed plan is feasible and avoids obstacles at every iteration. This enables the motion planner to run in an anytime fashion, i.e., the method can be stopped at any time and the best solution found up until that point is returned. We demonstrate the method's efficacy in three anatomical scenarios, including two generated from endoscopic videos of real patient anatomy.
Affiliation(s)
- Alan Kuntz
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mengyu Fu
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ron Alterovitz
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
9
Yang B, Liu C, Zheng W, Liu S, Huang K. Reconstructing a 3D heart surface with stereo-endoscope by learning eigen-shapes. Biomed Opt Express 2018; 9:6222-6236. PMID: 31065424; PMCID: PMC6490979; DOI: 10.1364/BOE.9.006222.
Abstract
An efficient approach to dynamically reconstruct a region of interest (ROI) on a beating heart from stereo-endoscopic video is developed. An ROI is first pre-reconstructed with a decoupled high-rank thin plate spline model. Eigen-shapes are learned from the pre-reconstructed data using principal component analysis (PCA) to build a low-rank statistical deformable model for reconstructing subsequent frames. The linear transferability of PCA is proved, which allows fast eigen-shape learning. A general dynamic reconstruction framework is developed that formulates ROI reconstruction as an optimization problem over model parameters, and an efficient second-order minimization algorithm is derived to solve it iteratively. The performance of the proposed method is validated on stereo-endoscopic videos recorded by da Vinci robots.
Affiliation(s)
- Bo Yang
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Chao Liu
- LIRMM, CNRS-UM, Montpellier, France
- Wenfeng Zheng
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shan Liu
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Keli Huang
- Cardiac Surgery Center, Sichuan Provincial People’s Hospital, Chengdu, China
10
Chalasani P, Wang L, Yasin R, Simaan N, Taylor RH. Preliminary Evaluation of an Online Estimation Method for Organ Geometry and Tissue Stiffness. IEEE Robot Autom Lett 2018. DOI: 10.1109/LRA.2018.2801481.
11
Sugawara M, Kiyomitsu K, Namae T, Nakaguchi T, Tsumura N. An optical projection system with mirrors for laparoscopy. Artif Life Robotics 2017. DOI: 10.1007/s10015-016-0311-8.
12
Van der Jeught S, Dirckx JJJ. Real-time structured light-based otoscopy for quantitative measurement of eardrum deformation. J Biomed Opt 2017; 22:16008. PMID: 28301636; DOI: 10.1117/1.JBO.22.1.016008.
Abstract
An otological profilometry device based on real-time structured light triangulation is presented. A clinical otoscope head is mounted onto a custom handheld unit containing both a small digital light projector and a high-speed digital camera. Digital fringe patterns are projected onto the eardrum surface and recorded at a rate of 120 unique frames per second. The relative angle between the projection and camera axes causes the projected patterns to appear deformed by the eardrum shape, allowing its full-field three-dimensional (3-D) surface map to be reconstructed. By combining hardware triggering of projector and camera with a dedicated parallel processing pipeline, the proposed system acquires a live stream of point clouds of over 300,000 data points per frame at a rate of 40 Hz. Real-time eardrum profilometry adds a dimension of depth to the standard two-dimensional otoscopy image and provides a noninvasive tool that augments the qualitative depth perception of the clinical operator with quantitative 3-D data. Visualization of the eardrum from different perspectives can improve the diagnosis of existing, and the detection of impending, middle ear pathology. The capability of the device to detect small middle ear pressure changes by monitoring eardrum deformation in real time is demonstrated.
Affiliation(s)
- Sam Van der Jeught
- University of Antwerp, Department of Physics, Laboratory of Biomedical Physics, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
- Joris J J Dirckx
- University of Antwerp, Department of Physics, Laboratory of Biomedical Physics, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
13
Yang B, Liu C, Poignet P, Zheng W, Liu S. Motion prediction using dual Kalman filter for robust beating heart tracking. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:4875-4878. PMID: 26737385; DOI: 10.1109/EMBC.2015.7319485.
Abstract
A novel prediction method for robust beating heart tracking is proposed. The dual time-varying Fourier series is used to model the heart motion. The frequency parameters and Fourier coefficients in the model are estimated respectively by using a dual Kalman filter scheme. The instantaneous frequencies of breathing and heartbeat motion are measured online from the 3D trajectory of the point of interest using an orthogonal decomposition algorithm. The proposed method is evaluated based on both the simulated signals and the real motion signals, which are measured from the videos recorded using the da Vinci surgical system.
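The core idea, a time-varying Fourier-series measurement model inside a Kalman filter, can be sketched in simplified form. Here the breathing and heartbeat frequencies are assumed known and fixed (the paper estimates them online with a second filter, hence "dual"); the single filter below only tracks the Fourier coefficients, and all parameter values are illustrative:

```python
import numpy as np

def kf_fourier_track(z, ts, freqs, q=1e-4, r=1e-2):
    """Kalman filter over the Fourier coefficients of a quasi-periodic
    signal. State = [a1, b1, a2, b2, ..., dc] modeled as a random walk;
    the measurement matrix is the time-varying Fourier basis at time t."""
    n = 2 * len(freqs) + 1
    x, P = np.zeros(n), np.eye(n)
    preds = []
    for t, zt in zip(ts, z):
        P = P + q * np.eye(n)                       # predict (identity dynamics)
        H = np.array([f(2 * np.pi * fr * t)
                      for fr in freqs for f in (np.cos, np.sin)] + [1.0])
        S = H @ P @ H + r                           # innovation variance
        K = P @ H / S                               # Kalman gain
        x = x + K * (zt - H @ x)                    # update coefficients
        P = (np.eye(n) - np.outer(K, H)) @ P
        preds.append(H @ x)                         # filtered estimate
    return np.array(preds)
```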
14
A clinically applicable laser-based image-guided system for laparoscopic liver procedures. Int J Comput Assist Radiol Surg 2015; 11:1499-1513. PMID: 26476640; DOI: 10.1007/s11548-015-1309-8.
Abstract
PURPOSE Laser range scanners (LRS) allow performing a surface scan without physical contact with the organ, yielding higher registration accuracy for image-guided surgery (IGS) systems. However, the use of LRS-based registration in laparoscopic liver surgery is still limited because current solutions are composed of expensive and bulky equipment which can hardly be integrated in a surgical scenario. METHODS In this work, we present a novel LRS-based IGS system for laparoscopic liver procedures. A triangulation process is formulated to compute the 3D coordinates of laser points by using the existing IGS system tracking devices. This allows the use of a compact and cost-effective LRS and therefore facilitates the integration into the laparoscopic setup. The 3D laser points are then reconstructed into a surface to register to the preoperative liver model using a multi-level registration process. RESULTS Experimental results show that the proposed system provides submillimeter scanning precision and accuracy comparable to those reported in the literature. Further quantitative analysis shows that the proposed system is able to achieve a patient-to-image registration accuracy, described as target registration error, of [Formula: see text]. CONCLUSIONS We believe that the presented approach will lead to a faster integration of LRS-based registration techniques in the surgical environment. Further studies will focus on optimizing scanning time and on the respiratory motion compensation.
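The triangulation formulated in this paper computes 3D laser-point coordinates using the tracked devices; geometrically, this reduces to intersecting two nearly crossing rays, one from the laser emitter and one from the camera. The paper's exact formulation is not reproduced; a standard midpoint-of-common-perpendicular sketch:

```python
import numpy as np

def midpoint_triangulate(o1, d1, o2, d2):
    """Closest point between two 3D rays (origins o*, directions d*):
    solve the two-parameter least-squares problem in closed form and
    return the midpoint of the common perpendicular."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b                  # rays must not be parallel
    s = (b * (d2 @ w) - d1 @ w) / denom
    t = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```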
15
Edgcumbe P, Pratt P, Yang GZ, Nguan C, Rohling R. Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector. Med Image Anal 2015; 25:95-102. PMID: 26024818; DOI: 10.1016/j.media.2015.04.008.
Abstract
The Pico Lantern is a miniature projector developed for structured light surface reconstruction, augmented reality and guidance in laparoscopic surgery. During surgery it will be dropped into the patient and picked up by a laparoscopic tool. While inside the patient it projects a known coded pattern and images onto the surface of the tissue. The Pico Lantern is visually tracked in the laparoscope's field of view for the purpose of stereo triangulation between it and the laparoscope. In this paper, the first application is surface reconstruction. Using a stereo laparoscope and an untracked Pico Lantern, the absolute error for surface reconstruction for a plane, cylinder and ex vivo kidney, is 2.0 mm, 3.0 mm and 5.6 mm, respectively. Using a mono laparoscope and a tracked Pico Lantern for the same plane, cylinder and kidney the absolute error is 1.4 mm, 1.5 mm and 1.5 mm, respectively. These results confirm the benefit of the wider baseline produced by tracking the Pico Lantern. Virtual viewpoint images are generated from the kidney surface data and an in vivo proof-of-concept porcine trial is reported. Surface reconstruction of the neck of a volunteer shows that the pulsatile motion of the tissue overlying a major blood vessel can be detected and displayed in vivo. Future work will integrate the Pico Lantern into standard and robot-assisted laparoscopic surgery.
Affiliation(s)
- Philip Edgcumbe
- MD/PhD Program, Biomedical Engineering Program, University of British Columbia, Vancouver, BC, Canada
- Philip Pratt
- Hamlyn Centre for Robotic Surgery, Imperial College of Science, Technology and Medicine, London, UK
- Guang-Zhong Yang
- Hamlyn Centre for Robotic Surgery, Imperial College of Science, Technology and Medicine, London, UK
- Christopher Nguan
- Department of Urologic Sciences, University of British Columbia, Vancouver, BC, Canada
- Robert Rohling
- Department of Electrical Engineering and Computer Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
16
Lin B, Sun Y, Qian X, Goldgof D, Gitlin R, You Y. Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: a survey. Int J Med Robot 2015; 12:158-178. DOI: 10.1002/rcs.1661.
Affiliation(s)
- Bingxiong Lin
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Yu Sun
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Xiaoning Qian
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Richard Gitlin
- Department of Electrical Engineering, University of South Florida, Tampa, FL, USA
- Yuncheng You
- Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA
17
Jang J, Kim HW, So BR, Kim YS. Experimental study on restricting the robotic end-effector inside a lesion for safe telesurgery. Minim Invasive Ther Allied Technol 2015; 24:317-325. PMID: 25921599; DOI: 10.3109/13645706.2015.1033636.
Abstract
INTRODUCTION Using an endoscopic telesurgical robot system (ETSRS), the authors propose a strategy for improving the safety of telesurgery by restricting the movement of an end-effector within a lesion. The strategy is validated in phantom model experiments. MATERIAL AND METHODS The method centers on generating force feedback and restricting robotic end-effector movement of the ETSRS based on a virtual wall. Collision detection and case classification procedures were used to determine whether to continue generating force feedback or restricting the end-effector's movement. The method was implemented in the ETSRS and tested using a brain and tofu phantom. RESULTS Force feedback was generated in proportion to a linear combination of the insertion depth and the velocity of the ETSRS end-effector relative to the surface of the predefined virtual wall. The end-effector's movement was effectively confined within the virtual wall. The virtual wall was updated quickly enough to reflect the current surgical situation, and the control rate of the entire system was >30 fps, so the method showed acceptable performance in the phantom experiments. CONCLUSION The results show that the strategy keeps the robotic end-effector well controlled inside a predefined virtual wall, both by the robot itself and by an operator through signal and force feedback.
Affiliation(s)
- Jongseong Jang
- Institute of Innovative Surgical Technology, Hanyang University, Seoul, Republic of Korea
- Hyung Wook Kim
- Institute of Innovative Surgical Technology, Hanyang University, Seoul, Republic of Korea
- Byung-Rok So
- Robotics R/BD Group, Korea Institute of Industrial Technology, Republic of Korea
- Young Soo Kim
- Department of Neurosurgery and Department of Biomedical Engineering, College of Medicine, Hanyang University, Seoul, Republic of Korea
18
Malti A, Bartoli A. Combining conformal deformation and Cook-Torrance shading for 3-D reconstruction in laparoscopy. IEEE Trans Biomed Eng 2015; 61:1684-92. [PMID: 24845278 DOI: 10.1109/tbme.2014.2300237]
Abstract
We propose a new monocular 3-D reconstruction method adapted for reconstructing organs in the abdominal cavity. It combines both motion and shading cues: the former uses a conformal deformation prior and the latter the Cook-Torrance reflectance model. Our method runs in two phases. First, a 3-D geometric and photometric template of the organ at rest is reconstructed in vivo. The geometric shape is reconstructed using rigid shape-from-motion while the surgeon is exploring - but not deforming - structures in the abdominal cavity. This geometric template is then used to retrieve the photometric properties: a nonparametric model of the laparoscope's light direction and the Cook-Torrance reflectance model of the organ's tissue are estimated. Second, the surgeon manipulates and deforms the environment. Here, the 3-D template is conformally deformed to globally match a sparse set of correspondences between the 2-D image data provided by the monocular laparoscope and the 3-D template. Then, the coarse 3-D shape is refined using shading cues to obtain a final 3-D deformed shape. This second phase relies only on a single image; therefore, it copes with both sequential processing and self-recovery from tracking failure. The proposed approach has been validated using 1) ex vivo and in vivo data with ground truth, and 2) in vivo laparoscopic videos of a patient's uterus. Our experimental results illustrate the ability of our method to reconstruct natural 3-D deformations typical of real surgical procedures.
19
Endoscopy-MR Image Fusion for Image Guided Procedures. Int J Biomed Imaging 2013; 2013:472971. [PMID: 24298281 PMCID: PMC3835800 DOI: 10.1155/2013/472971]
Abstract
Minimally invasive endoscope-based abdominal procedures provide potential advantages over conventional open surgery, such as reduced trauma, shorter hospital stay, and quicker recovery. One major limitation of this technique is the narrow view of the endoscope and the lack of proper 3D context of the surgical site. In this paper, we propose a rapid and accurate method to align intraoperative stereo endoscopic images of the surgical site with preoperative Magnetic Resonance (MR) images. A gridline light pattern is projected onto the surgical site to facilitate the registration. The purpose of this surface-based registration is to provide 3D context of the surgical site to the endoscopic view. We have validated the proposed method on a liver phantom and achieved a surface registration error of 0.76 ± 0.11 mm.
20
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery. Med Image Anal 2013; 17:974-96. [DOI: 10.1016/j.media.2013.04.003]
21
3D surface reconstruction of stereo endoscopic images for minimally invasive surgery. Biomed Eng Lett 2013. [DOI: 10.1007/s13534-013-0098-7]
22
Mishima K, Nakano A, Shiraishi R, Ueyama Y. Range image of the velopharynx produced using a 3-D endoscope with pattern projection. Laryngoscope 2013; 123:E122-6. [PMID: 23775257 DOI: 10.1002/lary.24253]
Abstract
OBJECTIVES/HYPOTHESIS To measure movements of the velopharynx in detail, a novel method of producing range images of the velopharynx was developed using a 3-D endoscope. The purpose of this paper is to introduce this system and to clarify its accuracy. STUDY DESIGN The standard errors of repeated measurements and the intraclass correlation coefficients (ICC), ICC(1,1) for intrarater reliability and ICC(2,1) for interrater reliability, were measured using a phantom of the nasopharynx. METHODS An endoscopic measuring system was developed in which a pattern projection system was incorporated into a commercially available 3-D endoscope. The right and left images of the endoscope were integrated into one video file and digitized at a resolution of 640 × 480 pixels, as odd and even scanning lines, respectively. After separating the video file into right and left images by interlace interpolation, correcting the distortion of the camera images, rectifying, and stereo matching, range images of the velopharynx were produced. The distances between two points, marked at an interval of 5 mm on the uvula and the pharynx of the phantom, were measured. RESULTS The standard errors of repeated measurements were 0.02 horizontally and 0.01 vertically. The ICC(1,1) and ICC(2,1) were 0.83 and 0.94, respectively, and both correlation coefficients were considered "almost perfect." CONCLUSION The present endoscopic measuring system provided relatively accurate and reliable range images of the velopharynx and enabled quantitative analysis of movements of the velopharynx.
Affiliation(s)
- Katsuaki Mishima
- Department of Oral and Maxillofacial Surgery, Graduate School of Medicine, Yamaguchi University, Ube, Yamaguchi, Japan
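The two reliability statistics this entry reports, ICC(1,1) (one-way random effects) and ICC(2,1) (two-way random effects, absolute agreement), can be computed from a targets-by-raters score matrix. The sketch below is an illustrative implementation of the standard ANOVA-based formulas, not code from the cited paper:

```python
import numpy as np

def icc_1_1(x):
    """ICC(1,1): one-way random effects, single rater (intrarater reliability).

    x is an (n targets) x (k raters/repetitions) score matrix.
    """
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between targets
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows (targets)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns (raters)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))             # residual (interaction) MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Values near 1, such as the 0.83 and 0.94 quoted above, indicate "almost perfect" agreement on the usual interpretation scales.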
23
Burgner J, Simpson AL, Fitzpatrick JM, Lathrop RA, Herrell SD, Miga MI, Webster RJ. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery. Int J Med Robot 2013; 9:190-203. [PMID: 22761086 PMCID: PMC3819208 DOI: 10.1002/rcs.1446]
Abstract
BACKGROUND Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. METHODS We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. RESULTS Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with a mean closest-point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. CONCLUSIONS Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery.
Affiliation(s)
- J Burgner
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA.
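The registration errors this entry reports rest on standard least-squares rigid point-set alignment. A minimal sketch of that computation (the Kabsch/Umeyama SVD method; illustrative code under our own variable names, not the authors' implementation):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid alignment (Kabsch/Umeyama).

    Finds rotation R and translation t minimizing ||(P @ R.T + t) - Q||
    for corresponding (N, 3) point sets P and Q.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def rms_error(P, Q, R, t):
    """RMS point-to-point error after applying (R, t) to P."""
    diff = (P @ R.T + t) - Q
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```

With noise-free correspondences the recovered transform is exact up to numerical precision; with tracked distance measurements, the residual RMS after alignment is the kind of figure the abstract quotes.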
24
Takeshita T, Kim M, Nakajima Y. 3-D shape measurement endoscope using a single-lens system. Int J Comput Assist Radiol Surg 2012; 8:451-9. [PMID: 23070835 DOI: 10.1007/s11548-012-0794-2]
Abstract
PURPOSE A three-dimensional (3-D) shape measurement endoscopic technique is proposed to provide depth information, which is lacking in current endoscopes, in addition to the conventional surface texture information. The integration of surface texture and 3-D shapes offers effective analytical data and can be used to detect unusual tissues. We constructed a prototype endoscope to validate our method. METHODS A 3-D measurement endoscope using shape from focus is proposed in this paper. It employs a focusing part to measure both texture and 3-D shapes of objects. Image focusing is achieved with a single-lens system. RESULTS A prototype was made in consideration of proper endoscope sizes. We validated the method by experimenting on artificial objects and a biological object with the prototype. First, the accuracy was evaluated using artificial objects: the RMS errors were 0.87 mm for a plate and 0.64 mm for a cylinder. Next, the inner wall of a pig stomach was measured in vitro to evaluate the feasibility of the proposed method. CONCLUSION The proposed method proved effective for 3-D measurement with endoscopes in the experiments and is suitable for downsizing because it uses a single-lens system.
Affiliation(s)
- Takaaki Takeshita
- School of Engineering, The University of Tokyo, Intelligent Modeling Laboratory Room # 602, Yayoi 2-11-16, Bunkyo, Tokyo, 113-8656, Japan.
25
Maurice X, Albitar C, Doignon C, de Mathelin M. A structured light-based laparoscope with real-time organs' surface reconstruction for minimally invasive surgery. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2012:5769-72. [PMID: 23367240 DOI: 10.1109/embc.2012.6347305]
Affiliation(s)
- Xavier Maurice
- Laboratoire des Sciences de l’Image, de l’Informatique et de la Télédétection (UMR CNRS), Equipe Automatique Vision et Robotique, Université de Strasbourg, France
26
Schmalz C, Forster F, Schick A, Angelopoulou E. An endoscopic 3D scanner based on structured light. Med Image Anal 2012; 16:1063-72. [PMID: 22542326 DOI: 10.1016/j.media.2012.04.001]
Abstract
We present a new endoscopic 3D scanning system based on Single Shot Structured Light. The proposed design makes it possible to build an extremely small scanner. The sensor head contains a catadioptric camera and a pattern projection unit. The paper describes the working principle and calibration procedure of the sensor. The prototype sensor head has a diameter of only 3.6 mm and a length of 14 mm. It is mounted on a flexible shaft. The scanner is designed for tubular cavities and has a cylindrical working volume of about 30 mm length and 30 mm diameter. It acquires 3D video at 30 frames per second and typically generates approximately 5000 3D points per frame. By design, the resolution varies over the working volume, but is generally better than 200 μm. A prototype scanner has been built and is evaluated in experiments with phantoms and biological samples. The recorded average error on a known test object was 92 μm.
Affiliation(s)
- Christoph Schmalz
- University of Erlangen-Nuremberg, Pattern Recognition Lab, Martensstrasse 3, Erlangen, Germany.
27
Wang D, Tewfik AH. Real time 3D visualization of intraoperative organ deformations using structured dictionary. IEEE Trans Med Imaging 2012; 31:924-937. [PMID: 22127996 DOI: 10.1109/tmi.2011.2177470]
Abstract
Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
Affiliation(s)
- Dan Wang
- Department of Electrical and Computer Engineering, The University of Texas at Austin, 78712, USA.
28
Karargyris A, Bourbakis N. Three-dimensional reconstruction of the digestive wall in capsule endoscopy videos using elastic video interpolation. IEEE Trans Med Imaging 2011; 30:957-971. [PMID: 21147593 DOI: 10.1109/tmi.2010.2098882]
Abstract
Wireless capsule endoscopy is a revolutionary technology that allows physicians to examine the digestive tract of a human body in a minimally invasive way. Physicians can detect diseases such as blood-based abnormalities, polyps, ulcers, and Crohn's disease. Although this technology is a marvel of modern times, it currently suffers from two serious drawbacks: 1) the frame rate is low (3 frames/s), and 2) no 3-D representation of the observed objects is captured by the camera of the capsule. In this paper we offer solutions (methodologies) that address each of the above issues, improving the current technology without requiring hardware upgrades. These methodologies work synergistically to create smooth and visually friendly interpolated images from consecutive frames, while preserving the structure of the observed objects. They also extract and represent the texture of the surface of the digestive tract in 3-D. The purpose of our methodology is not to reduce the time that gastroenterologists need to spend examining the video; on the contrary, the purpose is to enhance the video and thereby improve the viewing of the digestive tract, leading to a more qualitative and efficient examination. The proposed work introduces 3-D textured capsule endoscopy results that have been welcomed by Digestive Specialists, Inc., Dayton, OH. Finally, illustrative results are given at the end of the paper.
Affiliation(s)
- Alexandros Karargyris
- College of Engineering, Assistive Technologies Research Center, Wright State University, Dayton, OH 45435, USA.
29
Richa R, Poignet P, Liu C. Three-dimensional Motion Tracking for Beating Heart Surgery Using a Thin-plate Spline Deformable Model. Int J Rob Res 2009. [DOI: 10.1177/0278364909356600]
Abstract
Minimally invasive cardiac surgery offers important benefits for the patient but it also imposes several challenges for the surgeon. Robotic assistance has been proposed to overcome many of the difficulties inherent to the minimally invasive procedure, but so far no solutions for compensating physiological motion are present in the existing surgical robotic platforms. In beating heart surgery, cardiac and respiratory motions are important sources of disturbance, hindering the surgeon’s gestures and limiting the types of procedures that can be performed in a minimally invasive fashion. In this context, computer vision techniques can be used for retrieving the heart motion for active motion stabilization, which improves the precision and repeatability of the surgical gestures. However, efficient tracking of the heart surface is a challenging problem due to the heart surface characteristics, large deformations and the complex illumination conditions. In this article, we present an efficient method for active cancellation of cardiac motion where we combine an efficient algorithm for 3D tracking of the heart surface based on a thin-plate spline deformable model and an illumination compensation algorithm able to cope with arbitrary illumination changes. The proposed method has two novelties: the thin-plate spline model for representing the heart surface deformations and an efficient parametrization for 3D tracking of the beating heart using stereo images from a calibrated stereo endoscope. The proposed tracking method has been evaluated offline on in vivo images acquired by a DaVinci surgical robotic platform.
Affiliation(s)
- Rogério Richa
- LIRMM, UMR 5506 CNRS, UM 2, 161, rue Ada, 34392 Montpellier Cedex 5, France
- Philippe Poignet
- LIRMM, UMR 5506 CNRS, UM 2, 161, rue Ada, 34392 Montpellier Cedex 5, France
- Chao Liu
- LIRMM, UMR 5506 CNRS, UM 2, 161, rue Ada, 34392 Montpellier Cedex 5, France
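The thin-plate spline (TPS) deformable model named in this entry is an interpolating spline driven by a set of control-point correspondences. As a rough 2-D illustration of the underlying interpolant (our own sketch, not the authors' 3-D stereo tracking formulation), a TPS can be fitted and evaluated as:

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src control points onto dst.

    src, dst: (n, 2) arrays. Returns an (n + 3, 2) parameter matrix
    (n kernel weights plus an affine part) per output coordinate.
    """
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # TPS kernel U(r) = r^2 log r^2, with the r = 0 diagonal set to 0
    K = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(L, b)

def tps_apply(params, src, pts):
    """Evaluate the fitted spline at query points pts (m, 2)."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, d2 * np.log(np.maximum(d2, 1e-300)), 0.0)
    return U @ params[:-3] + np.hstack([np.ones((len(pts), 1)), pts]) @ params[-3:]
```

The spline interpolates the correspondences exactly while minimizing a bending energy, which is what makes it a natural smooth model for surface deformations such as those of the beating heart.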
30
Penne J, Höller K, Stürmer M, Schrauder T, Schneider A, Engelbrecht R, Feussner H, Schmauss B, Hornegger J. Time-of-Flight 3-D endoscopy. Med Image Comput Comput Assist Interv 2009; 12:467-74. [PMID: 20426021 DOI: 10.1007/978-3-642-04268-3_58]
Abstract
This paper describes the first accomplishment of the Time-of-Flight (ToF) measurement principle via endoscope optics. The applicability of the approach is verified by in-vitro experiments. Off-the-shelf ToF camera sensors enable per-pixel, on-chip, real-time, marker-less acquisition of distance information. The transfer of the emerging ToF measurement technique to endoscope optics is the basis for a new generation of rigid or flexible 3-D ToF endoscopes. No modification of the endoscope optics themselves is necessary; only the illumination unit and image sensors need to be enhanced. The major contribution of this paper is threefold: first, the accomplishment of the ToF measurement principle via endoscope optics; second, the development and validation of a complete calibration and post-processing routine; and third, extensive in-vitro experiments. Currently, a depth measurement precision of 0.89 mm at 20 fps with 3072 3-D points is achieved.
Affiliation(s)
- Jochen Penne
- Chair of Pattern Recognition and Erlangen Graduate School in Advanced Optical Technologies, Friedrich-Alexander-University Erlangen-Nuremberg, Germany.