1. Ochoa-Astorga JE, Wang L, Du W, Peng Y. A Straightforward Bifurcation Pattern-Based Fundus Image Registration Method. Sensors (Basel) 2023; 23:7809. [PMID: 37765866] [PMCID: PMC10534639] [DOI: 10.3390/s23187809]
Abstract
Fundus image registration is crucial in eye disease examination, as it enables the alignment of overlapping fundus images, facilitating a comprehensive assessment of conditions like diabetic retinopathy, where a single image's limited field of view might be insufficient. By combining multiple images, the field of view for retinal analysis is extended, and resolution is enhanced through super-resolution imaging. Moreover, registration facilitates patient follow-up through longitudinal studies. This paper proposes a straightforward method for fundus image registration based on bifurcations, which serve as prominent landmarks. The approach aims to establish a baseline for fundus image registration using these landmarks as feature points, addressing the current challenge of validation in this field. A robust vascular tree segmentation method detects feature points within a specified range: coarse vessel segmentation is followed by analysis of patterns in the skeleton of the segmentation foreground, feature description based on a histogram of oriented gradients, and determination of the image relation through a transformation matrix. Image blending then produces a seamless registered image. Evaluation on the FIRE dataset, using registration error as the key accuracy measure, demonstrates the method's effectiveness: it outperforms other techniques that use vessel-based feature extraction or rely partially on SURF, achieving an area under the curve of 0.526 for the entire FIRE dataset.
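The bifurcation detection step described above is commonly implemented as a crossing-number test on the vessel skeleton. A hedged sketch of that heuristic (not the authors' code; the function name and the ≥3-neighbour rule are the generic convention, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

def bifurcation_points(skeleton):
    """Candidate vessel bifurcations: skeleton pixels having three or more
    skeleton neighbours in their 8-neighbourhood (crossing-number heuristic).
    Returns an (N, 2) array of (row, col) coordinates."""
    sk = (np.asarray(skeleton) > 0).astype(int)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],   # centre excluded: count neighbours only
                       [1, 1, 1]])
    n_neighbours = convolve(sk, kernel, mode='constant')
    return np.argwhere((sk == 1) & (n_neighbours >= 3))
```

On a one-pixel-wide skeleton, ordinary vessel pixels have two neighbours, endpoints have one, and branch points have three or more, which is what the mask at the end selects.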
Affiliation(s)
- Linni Wang
- Retina & Neuro-Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin 300084, China
- Weiwei Du
- Information and Human Science, Kyoto Institute of Technology, Kyoto 6068585, Japan
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2. Thuma TBT, Bogovic JA, Gunton KB, Jimenez H, Negreiros B, Pulido JS. The big warp: Registration of disparate retinal imaging modalities and an example overlay of ultrawide-field photos and en-face OCTA images. PLoS One 2023; 18:e0284905. [PMID: 37098039] [PMCID: PMC10129009] [DOI: 10.1371/journal.pone.0284905]
Abstract
PURPOSE To develop an algorithm and scripts to combine disparate multimodal imaging modalities, demonstrated by overlaying en-face optical coherence tomography angiography (OCTA) images and Optos ultra-widefield (UWF) retinal images using the Fiji (ImageJ) plugin BigWarp. METHODS Optos UWF images and Heidelberg en-face OCTA images were collected from various patients as part of their routine care. En-face OCTA images were generated, and ten images at varying retinal depths were exported. The Fiji plugin BigWarp was used to transform the Optos UWF image onto the en-face OCTA image using matching reference points in the retinal vasculature surrounding the macula. The images were then overlaid and stacked to create a series of ten combined Optos UWF and en-face OCTA images of increasing retinal depths. The first algorithm was modified to include two scripts that automatically aligned all the en-face OCTA images. RESULTS The Optos UWF image could easily be transformed to the en-face OCTA images using BigWarp with common vessel branch point landmarks in the vasculature. The resulting warped Optos image was then successfully superimposed onto the ten en-face OCTA images. The scripts allowed the overlays to be produced automatically. CONCLUSIONS Optos UWF images can be successfully superimposed onto en-face OCTA images using freely available software that has been applied to ocular use. This synthesis of multimodal imaging may increase their potential diagnostic value. Script A is publicly available at https://doi.org/10.6084/m9.figshare.16879591.v1 and Script B is available at https://doi.org/10.6084/m9.figshare.17330048.
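BigWarp itself applies a thin-plate-spline warp inside Fiji; the core idea of landmark-driven alignment can nonetheless be sketched with a plain least-squares affine fit in NumPy. This is a simplified stand-in under stated assumptions, not the plugin's transform, and both function names are ours:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst.
    Returns a 2x3 matrix A with dst ~= [x, y, 1] @ A.T for each landmark."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # (3, 2) solution
    return M.T

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) point array."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T
```

With three or more non-collinear landmark pairs this recovers scale, rotation, shear, and translation; a thin-plate spline additionally bends the residuals smoothly to zero at every landmark.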
Affiliation(s)
- Tobin B T Thuma
- Department of Pediatric Ophthalmology and Strabismus, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- John A Bogovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Kammi B Gunton
- Department of Pediatric Ophthalmology and Strabismus, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- Hiram Jimenez
- Vickie and Jack Farber Vision Research Center, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- Jose S Pulido
- Vickie and Jack Farber Vision Research Center, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- Retina Service, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
3. Robust Detection Model of Vascular Landmarks for Retinal Image Registration: A Two-Stage Convolutional Neural Network. Biomed Res Int 2022; 2022:1705338. [PMID: 35941970] [PMCID: PMC9356876] [DOI: 10.1155/2022/1705338]
Abstract
Registration is useful for image processing in computer vision. Applied to retinal images, it supports ophthalmologists in tracking disease progression and monitoring therapeutic responses. This study proposed a robust detection model of vascular landmarks to improve the performance of retinal image registration. The proposed model consists of a two-stage convolutional neural network, in which one stage segments the retinal vessels on a pair of images and the other detects junction points from the vessel segmentation image. Information obtained from the model was utilized for the registration: keypoints were extracted from the acquired vascular landmark points, and orientation features were calculated as descriptors. The reference and sensed images were then registered by matching keypoints using a homography matrix and the random sample consensus (RANSAC) algorithm. The proposed method was evaluated on five databases with seven evaluation metrics to verify both clinical effectiveness and robustness. The results established that the proposed method showed outstanding registration performance compared with other state-of-the-art methods. In particular, significantly improved registration results were obtained on the FIRE database, with areas under the curve (AUC) of 0.988, 0.511, and 0.803 in the S, P, and A classes, respectively. Furthermore, the proposed method worked well on poor-quality and multimodal datasets, achieving AUCs above 0.8.
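The robust matching stage described above can be sketched as a minimal RANSAC loop. For brevity this sketch fits an affine model rather than the full homography the abstract mentions; the function name, iteration count, and inlier threshold are all illustrative assumptions:

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, tol=2.0, seed=0):
    """Minimal RANSAC: repeatedly fit a 2-D affine map to 3 random
    correspondences, score by inlier count, then refit on the best
    inlier set with least squares."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        try:
            M = np.linalg.solve(src_h[idx], dst[idx])  # exact 3-point fit
        except np.linalg.LinAlgError:
            continue                                   # collinear sample
        err = np.linalg.norm(src_h @ M - dst, axis=1)  # residual per match
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    M, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return M.T, best_inliers
```

A homography version follows the same loop but draws 4-point samples and solves the direct linear transform instead of the 3x3 linear system.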
4. State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [PMCID: PMC9007957] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging, where we again achieve results well aligned with the state of the art, at a fraction of the model complexity available in recent literature. Code to reproduce the results in this paper is released.
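The "orders of magnitude fewer parameters" claim can be made concrete by counting convolution weights for a plain two-convs-per-level U-Net. The channel widths below are assumptions for illustration, not the paper's exact configurations:

```python
def conv_params(c_in, c_out, k=3):
    """Weights + biases of one k x k convolution layer."""
    return k * k * c_in * c_out + c_out

def unet_params(widths, c_in=1, c_out=1):
    """Rough parameter count for a plain U-Net: two 3x3 convs per encoder
    level (the last level acts as the bottleneck), 2x2 transposed-conv
    upsampling, two 3x3 convs per decoder level on the concatenated skip,
    and a final 1x1 output conv."""
    total, prev = 0, c_in
    for w in widths:                        # encoder path
        total += conv_params(prev, w) + conv_params(w, w)
        prev = w
    for w in reversed(widths[:-1]):         # decoder path
        total += 2 * 2 * prev * w + w       # transposed conv
        total += conv_params(2 * w, w) + conv_params(w, w)
        prev = w
    return total + conv_params(prev, c_out, k=1)
```

Under these assumptions a tiny three-level net (widths 8/16/32) lands around 3 x 10^4 parameters, while a classic five-level net (64 through 1024) exceeds 3 x 10^7, a gap of roughly three orders of magnitude.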
5. Jiang Z, Lei Y, Zhang L, Ni W, Gao C, Gao X, Yang H, Su J, Xiao W, Yu J, Gu Y. Automated Quantitative Analysis of Blood Flow in Extracranial-Intracranial Arterial Bypass Based on Indocyanine Green Angiography. Front Surg 2021; 8:649719. [PMID: 34179066] [PMCID: PMC8225942] [DOI: 10.3389/fsurg.2021.649719]
Abstract
Microvascular imaging based on indocyanine green is an important tool for surgeons who carry out extracranial–intracranial arterial bypass surgery. In terms of blood perfusion, indocyanine green images contain abundant information, which cannot be effectively interpreted by humans or currently available commercial software. In this paper, an automatic processing framework for perfusion assessments based on indocyanine green videos is proposed and consists of three stages, namely, vessel segmentation based on the UNet deep neural network, preoperative and postoperative image registrations based on scale-invariant transform features, and blood flow evaluation based on the Horn–Schunck optical flow method. This automatic processing flow can reveal the blood flow direction and intensity curve of any vessel, as well as the blood perfusion changes before and after an operation. Commercial software embedded in a microscope is used as a reference to evaluate the effectiveness of the algorithm in this study. A total of 120 patients from multiple centers were sampled for the study. For blood vessel segmentation, a Dice coefficient of 0.80 and a Jaccard coefficient of 0.73 were obtained. For image registration, the success rate was 81%. In preoperative and postoperative video processing, the coincidence rates between the automatic processing method and commercial software were 89 and 87%, respectively. The proposed framework not only achieves blood perfusion analysis similar to that of commercial software but also automatically detects and matches blood vessels before and after an operation, thus quantifying the flow direction and enabling surgeons to intuitively evaluate the perfusion changes caused by bypass surgery.
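The Horn–Schunck step in the pipeline above is a classical dense optical-flow method. A minimal NumPy version with Jacobi-style updates and simple finite differences (a generic sketch; the parameters are illustrative, not those used in the paper):

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense Horn-Schunck optical flow (u, v) between two grayscale frames.
    alpha weights the smoothness term; larger alpha -> smoother flow."""
    im1 = np.asarray(im1, dtype=float)
    im2 = np.asarray(im2, dtype=float)
    Ix = np.gradient(im1, axis=1)          # spatial derivatives
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                         # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def local_avg(f):
        """4-neighbour average with edge replication."""
        p = np.pad(f, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

    for _ in range(n_iter):
        ub, vb = local_avg(u), local_avg(v)
        num = Ix * ub + Iy * vb + It       # brightness-constancy residual
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

The smoothness term propagates flow into textureless regions, which is why the method can assign motion to the uniform background between vessels.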
Affiliation(s)
- Zhuoyun Jiang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yu Lei
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Liqiong Zhang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Wei Ni
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Chao Gao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Xinjie Gao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Heng Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jiabin Su
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Weiping Xiao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yuxiang Gu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
6. Retinal image registration using log-polar transform and robust description of bifurcation points. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102424]
7. Golkar E, Rabbani H, Dehghani A. Hybrid registration of retinal fluorescein angiography and optical coherence tomography images of patients with diabetic retinopathy. Biomed Opt Express 2021; 12:1707-1724. [PMID: 33796382] [PMCID: PMC7984788] [DOI: 10.1364/boe.415939]
Abstract
Diabetic retinopathy (DR) is a common ophthalmic disease among diabetic patients. It is essential to diagnose DR in the early stages of treatment. Various imaging systems have been proposed to detect and visualize retinal diseases. The fluorescein angiography (FA) imaging technique is now widely used as a gold standard to evaluate the clinical manifestations of DR. Optical coherence tomography (OCT) imaging is another technique that provides 3D information on the retinal structure. FA and OCT images are captured in different phases and fields of view, and image fusion of these modalities is of interest to clinicians. This paper proposes a hybrid registration framework based on the extraction and refinement of segmented major blood vessels of retinal images. The newly extracted features significantly improve the success rate of global registration in the complex blood vessel network of retinal images. Afterward, intensity-based and deformable transformations are utilized to further compensate for the motion between the FA and OCT images. Experimental results on 26 images from patients at various stages of DR indicate that the algorithm yields promising registration and fusion results for clinical routine.
Affiliation(s)
- Ehsan Golkar
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Alireza Dehghani
- Eye Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Didavaran Eye Clinic, Isfahan, Iran
8. Hernandez-Matas C, Zabulis X, Argyros AA. REMPE: Registration of Retinal Images Through Eye Modelling and Pose Estimation. IEEE J Biomed Health Inform 2020; 24:3362-3373. [DOI: 10.1109/jbhi.2020.2984483]
9. Laha S, LaLonde R, Carmack AE, Foroosh H, Olson JC, Shaikh S, Bagci U. Analysis of Video Retinal Angiography With Deep Learning and Eulerian Magnification. Front Comput Sci 2020. [DOI: 10.3389/fcomp.2020.00024]
10. Jalili J, Hejazi SM, Riazi-Esfahani M, Eliasi A, Ebrahimi M, Seydi M, Fard MA, Ahmadian A. Retinal image mosaicking using scale-invariant feature transformation feature descriptors and Voronoi diagram. J Med Imaging (Bellingham) 2020; 7:044001. [PMID: 32715023] [DOI: 10.1117/1.jmi.7.4.044001]
Abstract
Purpose: Peripheral retinal lesions substantially increase the risk of diabetic retinopathy and retinopathy of prematurity. The peripheral changes can be visualized in wide-field imaging, which is obtained by combining multiple images with an overlapping field of view using mosaicking methods. However, robust and accurate registration in mosaicking techniques for normal-angle fundus cameras remains a challenge due to the random selection of matching points and the execution time. We propose a retinal image mosaicking method based on scale-invariant feature transform (SIFT) descriptors and the Voronoi diagram. Approach: In our method, the SIFT algorithm is used to describe local features in the input images. The input images are then subdivided into regions based on the Voronoi method. Each pair of Voronoi regions is matched using zero-mean normalized cross-correlation (ZNCC). After matching, the retinal images are mapped into the same coordinate system to form a mosaic image. The success rate and the mean registration error (RE) of our method were compared with those of other state-of-the-art methods on the P category of the Fundus Image Registration (FIRE) database. Results: Experimental results show that the proposed method accurately registered 42% of retinal image pairs with a mean RE of 3.040 pixels, while lower success rates were observed for four other state-of-the-art retinal image registration methods: GDB-ICP (33%), Harris-PIIFD (0%), HM-2016 (0%), and HM-2017 (2%). Conclusions: The proposed method outperforms state-of-the-art methods in terms of quality and running time and reduces the computational complexity.
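The region-matching criterion, zero-mean normalized cross-correlation, is compact enough to state directly (a generic implementation, not the authors' code):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-size patches.
    +1 means identical up to a positive affine intensity change,
    0 uncorrelated, -1 contrast-inverted."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

The mean subtraction and norm division are what make the score invariant to brightness and contrast differences between the two fundus images being mosaicked.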
Affiliation(s)
- Jalil Jalili
- Tehran University of Medical Sciences, School of Medicine, Medical Physics and Biomedical Engineering Department, Tehran, Iran
- Sedigheh M Hejazi
- Tehran University of Medical Sciences, School of Medicine, Medical Physics and Biomedical Engineering Department, Tehran, Iran
- Tehran University of Medical Sciences, Imam Khomeini Hospital, Advanced Medical Technologies and Equipment Institute, Research Center for Molecular and Cellular Imaging, Bio-optical Imaging Group, Tehran, Iran
- Mohammad Riazi-Esfahani
- University of California Irvine, Gavin Herbert Eye Institute, Department of Ophthalmology, Irvine, California, United States
- Arash Eliasi
- Tehran University of Medical Sciences, Imam Khomeini Hospital, Advanced Medical Technologies and Equipment Institute, Research Center for Molecular and Cellular Imaging, Bio-optical Imaging Group, Tehran, Iran
- Mohsen Ebrahimi
- Tehran University of Medical Sciences, Imam Khomeini Hospital, Advanced Medical Technologies and Equipment Institute, Research Center for Molecular and Cellular Imaging, Bio-optical Imaging Group, Tehran, Iran
- Mojtaba Seydi
- Tehran University of Medical Sciences, Imam Khomeini Hospital, Advanced Medical Technologies and Equipment Institute, Research Center for Molecular and Cellular Imaging, Bio-optical Imaging Group, Tehran, Iran
- Masoud Aghsaei Fard
- Tehran University of Medical Sciences, Farabi Eye Hospital, Eye Research Center, Tehran, Iran
- Alireza Ahmadian
- Tehran University of Medical Sciences, School of Medicine, Medical Physics and Biomedical Engineering Department, Tehran, Iran
11. Yu L, Qin Z, Zhuang T, Ding Y, Qin Z, Raymond Choo KK. A framework for hierarchical division of retinal vascular networks. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.113]
12. Strisciuglio N, Azzopardi G, Petkov N. Robust Inhibition-Augmented Operator for Delineation of Curvilinear Structures. IEEE Trans Image Process 2019; 28:5852-5866. [PMID: 31247549] [DOI: 10.1109/tip.2019.2922096]
Abstract
Delineation of curvilinear structures in images is an important basic step of several image processing applications, such as segmentation of roads or rivers in aerial images, vessels or staining membranes in medical images, and cracks in pavements and roads, among others. Existing methods suffer from insufficient robustness to noise. In this paper, we propose a novel operator for the detection of curvilinear structures in images, which we demonstrate to be robust to various types of noise and effective in several applications. We call it RUSTICO, which stands for RobUST Inhibition-augmented Curvilinear Operator. It is inspired by the push-pull inhibition in visual cortex and takes as input the responses of two trainable B-COSFIRE filters of opposite polarity. The output of RUSTICO consists of a magnitude map and an orientation map. We carried out experiments on a data set of synthetic stimuli with noise drawn from different distributions, as well as on several benchmark data sets of retinal fundus images, pavement cracks, and aerial images, and a new data set of rose bushes used for automatic gardening. We evaluated the performance of RUSTICO by a metric that considers the structural properties of line networks (connectivity, area, and length) and demonstrated that RUSTICO outperforms many existing methods with high statistical significance. RUSTICO exhibits high robustness to noise and texture.
13. A robust non-local total-variation based image registration method under illumination changes in medical applications. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.11.001]
14. Saha SK, Xiao D, Bhuiyan A, Wong TY, Kanagasingam Y. Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: A review. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.08.034]
15. Islam ST, Saha S, Rahaman GMA, Dutta D, Kanagasingam Y. An Efficient Binary Descriptor to Describe Retinal Bifurcation Point for Image Registration. Pattern Recognit Image Anal 2019. [DOI: 10.1007/978-3-030-31332-6_47]
16. A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.06.002]
17. Feature-Based Retinal Image Registration Using D-Saddle Feature. J Healthc Eng 2017; 2017:1489524. [PMID: 29204257] [PMCID: PMC5674727] [DOI: 10.1155/2017/1489524]
Abstract
Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on low-quality regions that consist of vessels of varying contrast and size. A recent feature detector known as Saddle yields feature points that are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference-of-Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on low-quality regions that consist of vessels with varying contrast and size. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates are observed for four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest (Spearman) correlation with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.
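The multiresolution difference-of-Gaussian pyramid underlying D-Saddle can be sketched in a few lines (single octave only, SciPy Gaussian blur; the sigma schedule is an assumption, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, n_scales=4, sigma0=1.0, k=np.sqrt(2)):
    """Single-octave difference-of-Gaussian stack: blur at geometrically
    increasing sigmas and subtract consecutive levels. Band-pass layers
    respond to blob/vessel structure near their scale."""
    image = np.asarray(image, dtype=float)
    blurred = [gaussian_filter(image, sigma0 * k ** i)
               for i in range(n_scales + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(n_scales)]
```

A full multiresolution detector repeats this per octave on downsampled copies, then runs the keypoint test (Saddle, in the paper) on every layer so that both thin low-contrast vessels and thick high-contrast ones produce responses.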
18. Braun D, Yang S, Martel JN, Riviere CN, Becker BC. EyeSLAM: Real-time simultaneous localization and mapping of retinal vessels during intraocular microsurgery. Int J Med Robot 2017; 14. [PMID: 28719002] [DOI: 10.1002/rcs.1848]
Abstract
BACKGROUND Fast and accurate mapping and localization of the retinal vasculature is critical to increasing the effectiveness and clinical utility of robot-assisted intraocular microsurgery such as laser photocoagulation and retinal vessel cannulation. METHODS The proposed EyeSLAM algorithm delivers 30 Hz real-time simultaneous localization and mapping of the human retina and vasculature during intraocular surgery, combining fast vessel detection with 2D scan-matching techniques to build and localize a probabilistic map of the vasculature. RESULTS In the harsh imaging environment of retinal surgery with high magnification, quick shaky motions, textureless retina background, variable lighting and tool occlusion, EyeSLAM can map 75% of the vessels within two seconds of initialization and localize the retina in real time with a root mean squared (RMS) error of under 5.0 pixels (translation) and 1° (rotation). CONCLUSIONS EyeSLAM robustly provides retinal maps and registration that enable intelligent surgical micromanipulators to aid surgeons in simulated retinal vessel tracing and photocoagulation tasks.
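The 2D scan-matching at the core of such localization ultimately reduces to rigid alignment of matched point sets, which has a closed-form SVD (Kabsch/Procrustes) solution. A generic sketch, not the EyeSLAM implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched
    2-D point sets: returns R, t with dst_i ~= R @ src_i + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In an ICP-style scan matcher this solve runs inside a loop that re-pairs each detected vessel point with its nearest map point until the estimate stops moving.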
Affiliation(s)
- Daniel Braun
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Sungwook Yang
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Joseph N Martel
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Cameron N Riviere
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Brian C Becker
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
19. Rodrigues LC, Marengoni M. Segmentation of optic disc and blood vessels in retinal images using wavelets, mathematical morphology and Hessian-based multi-scale filtering. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2017.03.014]
20. Guo F, Zhao X, Zou B, Liang Y. Automatic Retinal Image Registration Using Blood Vessel Segmentation and SIFT Feature. Int J Pattern Recogn 2017. [DOI: 10.1142/s0218001417570063]
Abstract
Automatic retinal image registration is still a great challenge in computer-aided diagnosis and screening systems. In this paper, a new retinal image registration method is proposed based on the combination of blood vessel segmentation and the scale-invariant feature transform (SIFT) feature. The algorithm includes two stages: retinal image segmentation and registration. In the segmentation stage, the blood vessels are segmented using a guided filter to enhance the vessel structure and a bottom-hat transformation to extract the blood vessels. In the registration stage, the SIFT algorithm is adopted to detect features in the vessel segmentation image, complemented by a random sample consensus (RANSAC) algorithm to eliminate incorrect matches. We evaluate our method from both the segmentation and registration aspects. For segmentation evaluation, we test our method on the DRIVE database, which provides manually labeled images from two specialists. The experimental results show that our method achieves 0.9562 in accuracy (Acc), which is competitive with other existing segmentation methods. For registration evaluation, we test our method on the STARE database, and the experimental results demonstrate the superior performance of the proposed method, which makes the algorithm a suitable tool for automated retinal image analysis.
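The bottom-hat transformation used for vessel extraction is a one-liner on top of grayscale morphology (a generic sketch; the structuring-element size is an assumption, not the paper's setting):

```python
import numpy as np
from scipy.ndimage import grey_closing

def bottom_hat(image, size=7):
    """Morphological bottom-hat: closing(image) - image. Responds to dark,
    thin structures (vessels) narrower than the structuring element that
    sit on a brighter background."""
    image = np.asarray(image, dtype=float)
    return grey_closing(image, size=size) - image
```

The closing fills in dark structures narrower than the structuring element, so subtracting the original leaves exactly those structures, which is why the result lights up vessels while suppressing the smooth background.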
Affiliation(s)
- Fan Guo
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Xin Zhao
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Beiji Zou
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Yixiong Liang
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
21. A Two-Step Approach for Longitudinal Registration of Retinal Images. J Med Syst 2016; 40:277. [DOI: 10.1007/s10916-016-0640-0]
22. Aghajani K, Manzuri MT, Yousefpour R. A robust image registration method based on total variation regularization under complex illumination changes. Comput Methods Programs Biomed 2016; 134:89-107. [PMID: 27480735] [DOI: 10.1016/j.cmpb.2016.06.004]
Abstract
BACKGROUND AND OBJECTIVE Image registration is one of the fundamental and essential tasks in medical imaging and remote sensing applications. One of the most common challenges in this area is the presence of complex, spatially varying intensity distortion in the images. Widely used similarity metrics, such as MI (Mutual Information), CC (Correlation Coefficient), SSD (Sum of Square Difference), SAD (Sum of Absolute Difference) and CR (Correlation Ratio), are not robust against this kind of distortion, because the stationarity and pixel-wise independence assumptions they rely on no longer hold. METHODS In this paper, we propose a new intensity-based method for simultaneous image registration and intensity correction. We assume that the registered moving image can be reconstructed from the reference image through a linear function consisting of multiplicative and additive coefficients. We also assume that the illumination changes in the images are spatially smooth in each region, so we use weighted Total Variation as a regularization term to estimate the aforesaid multiplicative and additive coefficients. Using weighted Total Variation reduces the smoothing effect on the coefficients across edges and yields a low-level segmentation of the coefficients. For minimizing the reconstruction error, as a dissimilarity term, we use the l1 norm, which is more robust against illumination change and non-Gaussian noise than the l2 norm. The Primal-Dual method is used for solving the optimization problem. RESULTS The proposed method is applied to simulated as well as real-world data consisting of clinical 4-D Computed Tomography, retina, Digital Subtraction Angiography (DSA), and iris image pairs. Comparisons are then made to MI, CC, SSD, SAD and CR, qualitatively and in some cases quantitatively. CONCLUSIONS The experimental results demonstrate that the proposed method produces more accurate registration results than conventional methods.
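A schematic form of the energy this abstract describes, with assumed symbols (the paper's exact notation may differ): $I_f$ the fixed/reference image, $I_m$ the moving image, $T$ the transformation, $a$ and $b$ the multiplicative and additive illumination fields, and $\mathrm{TV}_w$ the weighted total variation:

```latex
E(T, a, b) \;=\; \bigl\| I_m \circ T - \bigl( a \odot I_f + b \bigr) \bigr\|_{1}
\;+\; \lambda_a \, \mathrm{TV}_w(a) \;+\; \lambda_b \, \mathrm{TV}_w(b)
```

The $\ell_1$ data term provides the robustness to illumination change and non-Gaussian noise described above, while the weighted TV terms keep $a$ and $b$ piecewise smooth, permitting jumps across image edges.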
Affiliation(s)
- Khadijeh Aghajani
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Mohammad T Manzuri
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Rohollah Yousefpour
- Department of Mathematical and Computer Sciences, University of Mazandaran, Babolsar, Iran
23. Aghajani K, Yousefpour R, Shirpour M, Manzuri MT. Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes. Biomed Signal Process Control 2016. [DOI: 10.1016/j.bspc.2015.10.009]