1. Yang Z, Dai J, Pan J. 3D reconstruction from endoscopy images: A survey. Comput Biol Med 2024;175:108546. [PMID: 38704902] [DOI: 10.1016/j.compbiomed.2024.108546]
Abstract
Three-dimensional reconstruction of images acquired through endoscopes plays a vital role in a growing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular or binocular, and we review depth estimation methods according to the type of endoscope. Fundamentally, depth estimation relies on image feature matching and multi-view geometry; however, these traditional techniques face many problems in the endoscopic environment. With the rapid development of deep learning, a growing number of works apply learning-based methods to address challenges such as inconsistent illumination and texture sparsity. We review over 170 papers published in the ten years from 2013 to 2023, summarize the commonly used public datasets and performance metrics, give a taxonomy of methods, and analyze the advantages and drawbacks of the algorithms. Summary tables and a result atlas facilitate qualitative and quantitative comparison of the methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and discuss the prospects of depth estimation research in medical applications. We also compare the robustness, processing time, and scene representation of the methods to help doctors and researchers select appropriate methods for surgical applications.
Affiliation(s)
- Zhuoyue Yang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Ju Dai
- Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China.
2. Kim J, Lee H, Oh SR, Yang S. Real-Time Endomicroscopic Image Mosaicking with an EKF-based Sensor Fusion Approach. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-5. [PMID: 38083648] [DOI: 10.1109/embc40787.2023.10340903]
Abstract
This study presents a real-time sensor fusion framework based on the extended Kalman filter (EKF) for accurate and robust endomicroscopic image mosaicking. The framework combines an optical tracking system, which tracks the 6-DOF pose of the imaging probe with high accuracy in real time, with 2D local image registration obtained from feature matching between consecutive frames. We evaluated the real-time image mosaicking based on sensor fusion against image-only and tracker-only approaches. The fused approach retains the microscopic level of image detail of the image-based approach while also achieving drift-free mosaics thanks to the accurate optical tracking system.
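The EKF-based fusion described in this abstract can be sketched minimally: the optical tracker supplies the motion prior in the predict step and image registration supplies the measurement in the update step. The sketch below is a hypothetical illustration only (identity state-transition and measurement models, made-up noise values), not the authors' implementation.

```python
import numpy as np

class MosaicEKF:
    """Minimal EKF fusing a tracker-predicted 2D probe shift (predict step)
    with an image-registration shift (update step).
    Hypothetical sketch; noise values q and r are illustrative."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)     # estimated frame-to-frame 2D shift
        self.P = np.eye(2)       # state covariance
        self.Q = q * np.eye(2)   # process noise (tracker jitter)
        self.R = r * np.eye(2)   # measurement noise (registration error)

    def predict(self, tracker_delta):
        # Optical tracker gives a drift-free, high-rate motion prior.
        self.x = self.x + np.asarray(tracker_delta, float)
        self.P = self.P + self.Q

    def update(self, registration_delta):
        # Image registration refines the shift at microscopic precision.
        z = np.asarray(registration_delta, float)
        S = self.P + self.R               # innovation covariance (H = I)
        K = self.P @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x
```

With a tracker prediction of (1.0, 0.0) and a registration measurement of (1.2, 0.0), the fused estimate falls between the two, weighted by the noise settings.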
3. Li L, Mazomenos E, Chandler JH, Obstein KL, Valdastri P, Stoyanov D, Vasconcelos F. Robust endoscopic image mosaicking via fusion of multimodal estimation. Med Image Anal 2023;84:102709. [PMID: 36549045] [PMCID: PMC10636739] [DOI: 10.1016/j.media.2022.102709]
Abstract
We propose an endoscopic image mosaicking algorithm that is robust to lighting changes, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows us to combine landmarks, dense intensity registration, and learning-based approaches in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take as input other sources of motion estimation, including other sensor modalities. We validate the performance of our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of the proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use, and in which proportion, to obtain consistent mosaics more robustly.
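The uncertainty-weighted fusion at the heart of this approach can be illustrated with plain inverse-covariance (information) weighting of per-pair motion estimates. The paper embeds this idea in a full affine pose graph optimisation; the function below is only a simplified sketch with hypothetical inputs.

```python
import numpy as np

def fuse_estimates(estimates, covariances):
    """Fuse frame-to-frame motion estimates from several registration
    methods by inverse-covariance (information) weighting.
    Simplified sketch of uncertainty-based fusion; the paper's method
    solves an affine pose graph over many frames instead."""
    info_sum = np.zeros_like(np.asarray(covariances[0], float))
    weighted = np.zeros_like(np.asarray(estimates[0], float))
    for est, cov in zip(estimates, covariances):
        info = np.linalg.inv(np.asarray(cov, float))  # information matrix
        info_sum += info
        weighted += info @ np.asarray(est, float)
    fused_cov = np.linalg.inv(info_sum)
    return fused_cov @ weighted, fused_cov
```

A confident estimate (small covariance) dominates the fused result, so a registration method that fails on a given scene is automatically down-weighted if its uncertainty is reported honestly.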
Affiliation(s)
- Liang Li
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK; College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China.
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
- James H Chandler
- STORM Lab UK, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
- Keith L Obstein
- Division of Gastroenterology, Hepatology, and Nutrition, Vanderbilt University Medical Center, Nashville, TN 37232, USA; STORM Lab, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN 37235, USA.
- Pietro Valdastri
- STORM Lab UK, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK.
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, London, UK.
4. Intensity-based nonrigid endomicroscopic image mosaicking incorporating texture relevance for compensation of tissue deformation. Comput Biol Med 2021;142:105169. [PMID: 34974384] [DOI: 10.1016/j.compbiomed.2021.105169]
Abstract
Image mosaicking has emerged as a universal technique to broaden the field of view of probe-based confocal laser endomicroscopy (pCLE) imaging systems. However, owing to the influence of probe-tissue contact forces and optical components on imaging quality, existing mosaicking methods remain insufficient for practical challenges. In this paper, we present the texture-encoded sum of conditional variance (TESCV) as a novel similarity metric and incorporate it into a sequential mosaicking scheme to simultaneously correct rigid probe shift and nonrigid tissue deformation. TESCV combines intensity dependency and texture relevance to quantify the differences between pCLE image frames, where a discriminative binary descriptor named fully cross-detected local derivative pattern (FCLDP) is designed to extract more detailed structural textures. Furthermore, we analytically derive the closed-form gradient of TESCV with respect to the transformation variables. Experiments on the circular dataset highlighted the advantage of the TESCV metric in improving mosaicking performance compared with four recently published metrics. Comparison with four state-of-the-art mosaicking methods on the spiral and manual datasets indicated that the proposed TESCV-based method not only works stably under different contact forces but is also suitable for both low- and high-resolution imaging systems. With more accurate and delicate mosaics, the proposed method holds promise to meet clinical demands for intraoperative optical biopsy.
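The sum of conditional variance underlying TESCV can be sketched in its plain intensity-only form: bin the reference image's intensities, then sum the variance of the target image's intensities within each bin (lower means more similar). The paper's metric additionally encodes texture via FCLDP, which this sketch does not reproduce.

```python
import numpy as np

def sum_of_conditional_variance(ref, tgt, n_bins=32):
    """Plain sum-of-conditional-variance (SCV) between two images of the
    same shape; lower is more similar. Intensity-only sketch, without
    the texture encoding (FCLDP) that TESCV adds on top."""
    ref = np.asarray(ref, float).ravel()
    tgt = np.asarray(tgt, float).ravel()
    # Assign each reference pixel to an intensity bin.
    span = ref.max() - ref.min() + 1e-12
    bins = np.minimum(((ref - ref.min()) / span * n_bins).astype(int),
                      n_bins - 1)
    scv = 0.0
    for b in range(n_bins):
        vals = tgt[bins == b]
        if vals.size > 1:
            # Conditional variance of target intensities given the bin.
            scv += vals.size * vals.var()
    return scv
```

Because SCV conditions on intensity bins rather than raw values, it stays low under any functional intensity mapping (e.g. a gain/offset change between frames), which is what makes it attractive for registration.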
5. Zhou H, Jayender J. Real-Time Nonrigid Mosaicking of Laparoscopy Images. IEEE Trans Med Imaging 2021;40:1726-1736. [PMID: 33690113] [PMCID: PMC8169627] [DOI: 10.1109/tmi.2021.3065030]
Abstract
The ability to extend the field of view of laparoscopy images can help surgeons obtain a better understanding of the anatomical context. However, due to tissue deformation, complex camera motion, and significant three-dimensional (3D) anatomical surfaces, image pixels may undergo non-rigid deformation, and traditional mosaicking methods cannot work robustly on laparoscopy images in real time. To solve this problem, a novel two-dimensional (2D) non-rigid simultaneous localization and mapping (SLAM) system is proposed in this paper, which is able to compensate for the deformation of pixels and perform image mosaicking in real time. The key algorithm of this 2D non-rigid SLAM system is the expectation maximization and dual quaternion (EMDQ) algorithm, which can generate a smooth and dense deformation field from sparse and noisy image feature matches in real time. An uncertainty-based loop closing method is proposed to reduce accumulated errors. To achieve real-time performance, both CPU and GPU parallel computation are used for dense mosaicking of all pixels. Experimental results on in vivo and synthetic data demonstrate the feasibility and accuracy of our non-rigid mosaicking method.
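As a rough illustration of turning sparse feature matches into a dense per-pixel deformation field, the sketch below uses simple Gaussian-weighted interpolation of match displacements. EMDQ itself is substantially more involved: it blends dual quaternions and rejects mismatches via expectation maximization, neither of which is reproduced here.

```python
import numpy as np

def dense_deformation(match_src, match_dst, grid_shape, sigma=20.0):
    """Interpolate sparse match displacements into a dense deformation
    field by Gaussian-weighted averaging. Simplified stand-in for EMDQ
    (no dual-quaternion blending, no mismatch rejection)."""
    src = np.asarray(match_src, float)            # (N, 2) match (x, y)
    disp = np.asarray(match_dst, float) - src     # (N, 2) displacements
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (h*w, 2) pixels
    # Gaussian weight of every pixel to every match point.
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    wgt = np.exp(-d2 / (2 * sigma ** 2)) + 1e-12
    field = (wgt @ disp) / wgt.sum(axis=1, keepdims=True)
    return field.reshape(h, w, 2)                 # per-pixel (dx, dy)
```

Weighted averaging of this kind yields a smooth field but, unlike EMDQ, it is corrupted by outlier matches, which is precisely why the paper pairs interpolation with expectation-maximization-based mismatch removal.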
6. Ping Z, Zhang T, Gong L, Zhang C, Zuo S. Miniature Flexible Instrument with Fibre Bragg Grating-Based Triaxial Force Sensing for Intraoperative Gastric Endomicroscopy. Ann Biomed Eng 2021;49:2323-2336. [PMID: 33880633] [DOI: 10.1007/s10439-021-02781-4]
Abstract
Optical biopsy methods, such as probe-based endomicroscopy, can be used to identify early-stage gastric cancer in vivo. However, it is difficult to scan a large area of the gastric mucosa for mosaicking during endoscopy. In this work, we propose a miniaturised flexible instrument based on contact-aided compliant mechanisms and fibre Bragg grating (FBG) sensing for intraoperative gastric endomicroscopy. The instrument has a compact design with an outer diameter of 2.7 mm, incorporating a central channel with a diameter of 1.9 mm for the endomicroscopic probe to pass through. Experimental results show that the instrument can achieve raster-trajectory scanning over a large tissue surface with a positioning accuracy of 0.5 mm. The tip force sensor provides a resolution of 4.6 mN for the axial force and 2.8 mN for transverse forces. Validation with random samples shows that the force sensor provides consistent and accurate three-axis force detection. In endomicroscopic imaging experiments, the flexible instrument achieved gap-free scanning (mosaicking area greater than 3 mm²) with contact-force monitoring during scanning, demonstrating the potential of the system for clinical applications.
Affiliation(s)
- Zhongyuan Ping
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Tianci Zhang
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Lun Gong
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Chi Zhang
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Siyang Zuo
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China.