1
Wang K, Ma S, Chen J, Ren F, Lu J. Approaches, Challenges, and Applications for Deep Visual Odometry: Toward Complicated and Emerging Areas. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2020.3038898
2
Chu Y, Li H, Li X, Ding Y, Yang X, Ai D, Chen X, Wang Y, Yang J. Endoscopic image feature matching via motion consensus and global bilateral regression. Comput Methods Programs Biomed 2020; 190:105370. PMID: 32036206. DOI: 10.1016/j.cmpb.2020.105370
Abstract
BACKGROUND AND OBJECTIVE: Feature matching of endoscopic images is of crucial importance in many clinical applications, such as object tracking and surface reconstruction. However, in the presence of low texture, specular reflections, and deformation, feature matching methods designed for natural scenes face great challenges in minimally invasive surgery (MIS) scenarios. We propose a novel motion consensus-based method for endoscopic image feature matching to address these problems.
METHODS: Our method starts by correcting radial distortion with a spherical projection model and removing specular reflection regions with an adaptive detection method, which eliminates image distortion and reduces the number of outliers. We solve the matching problem with a two-stage strategy that progressively estimates a consensus of inliers, yielding a precisely smoothed motion field. First, we construct a spatial motion field from candidate feature matches and estimate its maximum posterior with the expectation-maximization (EM) algorithm, which is computationally efficient and quickly obtains a smoothed motion field. Second, we extend the smoothed motion field to the affine domain and refine it with bilateral regression to preserve locally subtle motions. True matches are identified by checking the difference of each feature's motion against the estimated field.
RESULTS: Evaluations were performed on two simulated deformation datasets (218 images) and four different types of endoscopic datasets (1032 images). Our method is compared with three other state-of-the-art methods and achieves the best performance on affine-transformation and nonrigid-deformation simulations, with inlier ratios of 86.7% and 94.3%, sensitivity of 90.0% and 96.2%, precision of 88.2% and 93.9%, and F1-scores of 89.1% and 95.0%, respectively. On clinical datasets, the proposed method achieves an average reprojection error of 3.7 pixels and consistent performance in multi-image correspondence across sequential images. Furthermore, we present a surface reconstruction from rhinoscopic images to validate the reliability of our method, which shows high-quality feature matching results.
CONCLUSIONS: The proposed motion consensus-based feature matching method proves effective and robust for endoscopic image correspondence, demonstrating its capability to generate reliable feature matches for surface reconstruction and other meaningful applications in MIS scenarios.
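The two-stage consensus strategy described above (fit a smooth motion field to the candidate matches, then keep only the matches that agree with it) can be illustrated with a short sketch. The snippet below is not the authors' implementation: it stands in for their EM estimation and bilateral regression with simple Gaussian-weighted averaging of match displacements, and the function names, bandwidth, and tolerance are illustrative assumptions.

```python
import numpy as np

def smooth_motion_field(src, dst, bandwidth=40.0):
    """Estimate a smooth displacement at each match location by
    Gaussian-weighted averaging of all candidate displacements
    (a stand-in for the paper's EM / bilateral-regression stages)."""
    disp = dst - src                                   # raw match displacements
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))           # spatial affinity weights
    return (w @ disp) / w.sum(axis=1, keepdims=True)   # smoothed field at each match

def consensus_inliers(src, dst, tol=5.0):
    """Keep matches whose motion deviates from the smoothed field by < tol px."""
    field = smooth_motion_field(src, dst)
    residual = np.linalg.norm((dst - src) - field, axis=1)
    return residual < tol

# Toy usage: 200 matches following a global shift, plus 20 gross outliers.
rng = np.random.default_rng(0)
src = rng.uniform(0, 640, size=(220, 2))
dst = src + np.array([12.0, -7.0]) + rng.normal(0, 1.0, size=(220, 2))
dst[200:] += rng.uniform(-80, 80, size=(20, 2))        # inject outliers
mask = consensus_inliers(src, dst)
print(f"kept {mask.sum()} of {len(mask)} matches")
```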
Affiliation(s)
- Yakui Chu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Heng Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xu Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yuan Ding
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xilin Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xiaohong Chen
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Beijing 100730, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
3
Vasilakakis M, Koulaouzidis A, Yung DE, Plevris JN, Toth E, Iakovidis DK. Follow-up on: optimizing lesion detection in small bowel capsule endoscopy and beyond: from present problems to future solutions. Expert Rev Gastroenterol Hepatol 2019; 13:129-141. PMID: 30791780. DOI: 10.1080/17474124.2019.1553616
Abstract
This review presents noteworthy advances in clinical and experimental capsule endoscopy (CE), focusing on the progress reported in the 5 years since our previous review on the subject.
Areas covered: This study presents the commercially available CE platforms, as well as the advances made in optimizing the diagnostic capabilities of CE. The latter include recent concept and prototype capsule endoscopes, medical approaches to improving diagnostic yield, and progress in software for enhancing visualization, abnormality detection, and lesion localization.
Expert commentary: As CE moves through its second decade of evolution, several open issues and remarkable challenges remain to be overcome.
Affiliation(s)
- Michael Vasilakakis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Anastasios Koulaouzidis
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland; Department of Clinical Sciences, Lund University, Malmö, Sweden
- Diana E Yung
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- John N Plevris
- Endoscopy Unit, The Royal Infirmary of Edinburgh, Edinburgh, Scotland
- Ervin Toth
- Department of Clinical Sciences, Lund University, Malmö, Sweden; Section of Gastroenterology, Department of Clinical Sciences, Skåne University Hospital Malmö, Malmö, Sweden
- Dimitris K Iakovidis
- Department of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
4
Iakovidis DK, Dimas G, Karargyris A, Bianchi F, Ciuti G, Koulaouzidis A. Deep Endoscopic Visual Measurements. IEEE J Biomed Health Inform 2018; 23:2211-2219. PMID: 29994623. DOI: 10.1109/jbhi.2018.2853987
Abstract
Robotic endoscopic systems offer a minimally invasive approach to the examination of internal body structures, and their application is rapidly extending to cover the increasing need for accurate therapeutic interventions. In this context, it is essential for such systems to be able to perform measurements, such as measuring the distance traveled by a wireless capsule endoscope so as to determine the location of a lesion in the gastrointestinal tract, or measuring the size of lesions for diagnostic purposes. In this paper, we investigate the feasibility of performing contactless measurements using a computer vision approach based on neural networks. The proposed system integrates a deep convolutional image registration approach and a multilayer feed-forward neural network into a novel architecture. The main advantage of this system over state-of-the-art ones is that it is more generic: it is 1) unconstrained by specific models, 2) more robust to nonrigid deformations, and 3) adaptable to most endoscopic systems and environments, while enabling measurements of enhanced accuracy. The performance of the system is evaluated under ex vivo conditions using a phantom experimental model and a robotically assisted test bench. The results obtained promise wider applicability and impact in endoscopy in the era of big data.
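The architecture described couples a deep registration front end with a multilayer feed-forward network that regresses a measurement from registration-derived features. As a hedged sketch only (the paper's actual features, layer sizes, and training procedure are not reproduced here, and MeasurementMLP is a hypothetical name), a minimal PyTorch version of the feed-forward stage could look like this:

```python
import torch
import torch.nn as nn

class MeasurementMLP(nn.Module):
    """Feed-forward regressor: registration-derived features -> scalar
    measurement. Layer sizes are illustrative, not taken from the paper."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),            # estimated distance (e.g., in mm)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical usage: the features might summarize the deep registration's
# output (displacement statistics, intensity statistics, and so on).
model = MeasurementMLP(n_features=8)
features = torch.randn(4, 8)             # a batch of 4 feature vectors
print(model(features).shape)             # torch.Size([4, 1])
```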
5
Koulaouzidis A, Iakovidis DK, Yung DE, Mazomenos E, Bianchi F, Karargyris A, Dimas G, Stoyanov D, Thorlacius H, Toth E, Ciuti G. Novel experimental and software methods for image reconstruction and localization in capsule endoscopy. Endosc Int Open 2018; 6:E205-E210. PMID: 29399619. PMCID: PMC5794451. DOI: 10.1055/s-0043-121882
Abstract
BACKGROUND AND STUDY AIMS: Capsule endoscopy (CE) is invaluable for minimally invasive endoscopy of the gastrointestinal tract; however, several technological limitations remain, including the lack of reliable lesion localization. We present an approach to 3D reconstruction and localization using visual information from 2D CE images.
PATIENTS AND METHODS: Colored thumbtacks were secured in rows to the internal wall of a LifeLike bowel model. A PillCam SB3 was calibrated and navigated linearly through the lumen by a high-precision robotic arm. The motion estimation algorithm used data from the 2D CE images in the video sequence (light falling on the object, the fraction of reflected light, and surface geometry) to achieve 3D reconstruction of the bowel model at various frames. The ORB-SLAM technique, which compares pairs of points between images, was used for 3D reconstruction and CE localization within the reconstructed model.
RESULTS: As the capsule moved through the model bowel, 42 to 66 video frames were obtained per pass. The mean absolute error in the estimated distance travelled by the CE was 4.1 ± 3.9 cm. Our algorithm was able to reconstruct the cylindrical shape of the model bowel with details of the attached thumbtacks. ORB-SLAM successfully reconstructed the bowel wall from simultaneous frames of the CE video. The "track" in the reconstruction corresponded well with the linear forwards-backwards movement of the capsule through the model lumen.
CONCLUSION: The reconstruction methods detailed above achieved good-quality reconstruction of the bowel model and localization of the capsule trajectory using information from the CE video and images alone.
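The point-pair comparison that ORB-SLAM builds on reduces, at its core, to ORB feature matching between frames followed by relative-pose estimation. The OpenCV sketch below shows only that matching and pose-recovery step, not the full ORB-SLAM system (which adds keyframe mapping, bundle adjustment, and loop closing); the intrinsic matrix K is a placeholder for the capsule-camera calibration described in the study.

```python
import cv2
import numpy as np

def relative_pose(frame1, frame2, K):
    """Estimate relative camera rotation/translation between two grayscale
    capsule frames from ORB feature matches (the matching step underlying
    ORB-SLAM tracking; the full system adds mapping and loop closure)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is recovered only up to scale

# Placeholder intrinsics; real values would come from camera calibration.
K = np.array([[300.0, 0.0, 160.0],
              [0.0, 300.0, 160.0],
              [0.0, 0.0, 1.0]])
```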
Affiliation(s)
- Anastasios Koulaouzidis
- Centre for Liver & Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK. Corresponding author: Dr Anastasios Koulaouzidis, Endoscopy Unit, The Royal Infirmary of Edinburgh, 51 Little France Crescent, Edinburgh EH16 4SA, United Kingdom
- Dimitris K. Iakovidis
- University of Thessaly, Department of Computer Science and Biomedical Informatics, Lamia, Greece
- Diana E. Yung
- Centre for Liver & Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
- Evangelos Mazomenos
- Centre of Medical Image Computing and Department of Computer Science, University College London, London, UK
- Federico Bianchi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- George Dimas
- University of Thessaly, Department of Computer Science and Biomedical Informatics, Lamia, Greece
- Danail Stoyanov
- Centre of Medical Image Computing and Department of Computer Science, University College London, London, UK
- Ervin Toth
- Department of Gastroenterology, Skåne University Hospital, Malmö, Sweden
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
6
Dimas G, Spyrou E, Iakovidis DK, Koulaouzidis A. Intelligent visual localization of wireless capsule endoscopes enhanced by color information. Comput Biol Med 2017; 89:429-440. PMID: 28886480. DOI: 10.1016/j.compbiomed.2017.08.029
Affiliation(s)
- George Dimas
- Dept. of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
- Evaggelos Spyrou
- Dept. of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece; Institute of Informatics and Telecommunications, National Center for Scientific Research "Demokritos", Athens, Greece
- Dimitris K Iakovidis
- Dept. of Computer Science and Biomedical Informatics, University of Thessaly, Lamia, Greece
7
Dimas G, Iakovidis DK, Karargyris A, Ciuti G, Koulaouzidis A. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy. Meas Sci Technol 2017; 28:094005. DOI: 10.1088/1361-6501/aa7ebf
8
Dimas G, Iakovidis DK, Ciuti G, Karargyris A, Koulaouzidis A. Visual Localization of Wireless Capsule Endoscopes Aided by Artificial Neural Networks. In: Proc 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), 2017. DOI: 10.1109/cbms.2017.67