1. An C, Wang Y, Zhang J, Nguyen TQ. Self-Supervised Rigid Registration for Multimodal Retinal Images. IEEE Trans Image Process 2022; 31:5733-5747. [PMID: 36040946] [PMCID: PMC11211857] [DOI: 10.1109/tip.2022.3201476]
Abstract
The ability to accurately overlay one retinal image modality onto another is critical in ophthalmology. Our previous framework achieved state-of-the-art results for multimodal retinal image registration, but its supervised approach requires human-annotated labels. In this paper, we propose a self-supervised multimodal retinal registration method that alleviates the time and expense of preparing training data by automatically registering multimodal retinal images without any human annotations. Specifically, we focus on registering color fundus images with infrared reflectance and fluorescein angiography images, and compare registration results with several conventional methods as well as supervised and unsupervised deep learning methods. The experimental results show that the proposed self-supervised framework is comparable to the state-of-the-art supervised learning method in terms of registration accuracy and Dice coefficient.
2. Jiang H, Gao M, Yang K, Zhang D, Ma H, Qian W. Neonatal Fundus Image Registration and Mosaic Using Improved Speeded Up Robust Features Based on Shannon Entropy. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3004-3007. [PMID: 34891876] [DOI: 10.1109/embc46164.2021.9630593]
Abstract
Fundus examination of newborns is important and must be performed in a timely manner to avoid irreversible blindness. Ophthalmologists have to review at least five images of each eye during one examination, which is time-consuming. To improve diagnostic efficiency, this paper proposes a stable and robust fundus image mosaic method based on improved Speeded Up Robust Features (SURF) with Shannon entropy, and assesses it in real clinical use by ophthalmologists. Our method avoids needless detection and extraction of feature points in the non-overlapping region of paired images during registration. In experiments, the proposed method successfully registered 90.91% of 110 field-of-view (FOV) image pairs from 22 eyes of 13 screened newborns, and achieved a normalized correlation coefficient of 93.51% and normalized mutual information of 1.2557. The total fusion success rate reached 86.36%, and a subjective visual assessment of fusion performance by three experts yielded an 84.85% acceptance rate. These results demonstrate the accuracy and effectiveness of our method in clinical application, where it can substantially assist ophthalmologists during diagnosis.
3. Wang Y, Zhang J, Cavichini M, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Robust Content-Adaptive Global Registration for Multimodal Retinal Images Using Weakly Supervised Deep-Learning Framework. IEEE Trans Image Process 2021; 30:3167-3178. [PMID: 33600314] [DOI: 10.1109/tip.2021.3058570]
Abstract
Multimodal retinal imaging plays an important role in ophthalmology. In this paper, we propose a content-adaptive multimodal retinal image registration method that focuses on globally coarse alignment and includes three weakly supervised neural networks for vessel segmentation, feature detection and description, and outlier rejection. We apply the proposed framework to register color fundus images with infrared reflectance and fluorescein angiography images, and compare it with several conventional and deep learning methods. Our framework demonstrates a significant improvement in robustness and accuracy, reflected by a higher success rate and Dice coefficient than other methods.
4. Zou B, He Z, Zhao R, Zhu C, Liao W, Li S. Non-rigid retinal image registration using an unsupervised structure-driven regression network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.122]
5. Motta D, Casaca W, Paiva A. Vessel Optimal Transport for Automated Alignment of Retinal Fundus Images. IEEE Trans Image Process 2019; 28:6154-6168. [PMID: 31283507] [DOI: 10.1109/tip.2019.2925287]
Abstract
Optimal transport has emerged as a promising and useful tool for supporting modern image processing applications such as medical imaging and scientific visualization. Indeed, optimal transport theory enables great flexibility in modeling problems related to image registration, as different optimization resources can be successfully used, as can suitable matching models to align the images. In this paper, we introduce an automated framework for fundus image registration that unifies optimal transport theory, image processing tools, and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs which embed in their structures spatial and topological information from the eye's blood vessels. The graphs produced are then used as input by our optimal transport model in order to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images so as to accomplish the registration task properly. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem, and is also robust to outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the present framework through a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
6. Saha SK, Xiao D, Bhuiyan A, Wong TY, Kanagasingam Y. Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: A review. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.08.034]
7. A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.06.002]
8. Adal KM, van Etten PG, Martinez JP, Rouwen KW, Vermeer KA, van Vliet LJ. An Automated System for the Detection and Classification of Retinal Changes Due to Red Lesions in Longitudinal Fundus Images. IEEE Trans Biomed Eng 2018; 65:1382-1390. [DOI: 10.1109/tbme.2017.2752701]
9. Li Z, Huang F, Zhang J, Dashtbozorg B, Abbasi-Sureshjani S, Sun Y, Long X, Yu Q, Romeny BTH, Tan T. Multi-modal and multi-vendor retina image registration. Biomed Opt Express 2018; 9:410-422. [PMID: 29552382] [PMCID: PMC5854047] [DOI: 10.1364/boe.9.000410]
Abstract
Multi-modal retinal image registration is often required to utilize the complementary information from different retinal imaging modalities. However, robust and accurate registration remains a challenge because resolution, contrast, and luminosity vary across modalities. In this paper, a two-step registration method is proposed to address this problem. In the first step, descriptor matching on mean phase images is used to globally register the images. In the second step, deformable registration based on the modality independent neighbourhood descriptor (MIND) locally refines the result. The proposed method is extensively evaluated on color fundus images and scanning laser ophthalmoscope (SLO) images. Both qualitative and quantitative tests demonstrate improved registration compared to the state-of-the-art, with significantly and substantially larger mean Dice coefficients than other methods (p < 0.001). The method may facilitate the measurement of corresponding features across retinal images, which can aid in assessing certain retinal diseases.
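Several entries in this list report vessel-mask overlap after alignment as a Dice coefficient, as in the abstract above. A minimal sketch of that standard measure (our own illustration, not code from the paper):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0   # convention: two empty masks agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / total)

# Demo: two 8-pixel "vessel" bands that overlap in 4 pixels.
a = np.zeros((4, 4), dtype=bool); a[1:3, :] = True
b = np.zeros((4, 4), dtype=bool); b[2:4, :] = True
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

A Dice value of 1.0 means the warped vessel masks coincide exactly; values near 0 indicate misregistration.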
Affiliation(s)
- Zhang Li: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China; Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation, Changsha 410073, China
- Fan Huang: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Jiong Zhang: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Behdad Dashtbozorg: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Samaneh Abbasi-Sureshjani: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Yue Sun: Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Xi Long: Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Qifeng Yu: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China; Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation, Changsha 410073, China
- Bart ter Haar Romeny: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands; Department of Biomedical and Information Technology, Northeastern University, Shenyang 110000, China
- Tao Tan: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands; Research and Development, ScreenPoint Medical, Nijmegen 6512 AB, The Netherlands
10. Noyel G, Thomas R, Bhakta G, Crowder A, Owens D, Boyle P. Superimposition of eye fundus images for longitudinal analysis from large public health databases. Biomed Phys Eng Express 2017. [DOI: 10.1088/2057-1976/aa7d16]
11. Guo F, Zhao X, Zou B, Liang Y. Automatic Retinal Image Registration Using Blood Vessel Segmentation and SIFT Feature. Int J Pattern Recogn 2017. [DOI: 10.1142/s0218001417570063]
Abstract
Automatic retinal image registration remains a great challenge in computer-aided diagnosis and screening systems. In this paper, a new retinal image registration method is proposed based on the combination of blood vessel segmentation and scale invariant feature transform (SIFT) features. The algorithm includes two stages: retinal image segmentation and registration. In the segmentation stage, blood vessels are segmented by using a guided filter to enhance the vessel structure and a bottom-hat transformation to extract the vessels. In the registration stage, the SIFT algorithm detects features in the vessel segmentation image, complemented by a random sample consensus (RANSAC) algorithm to eliminate incorrect matches. We evaluate our method on both segmentation and registration. For segmentation, we test on the DRIVE database, which provides manually labeled images from two specialists; our method achieves an accuracy (Acc) of 0.9562, competitive with other existing segmentation methods. For registration, we test on the STARE database, and the experimental results demonstrate the superior performance of the proposed method, making the algorithm a suitable tool for automated retinal image analysis.
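The abstract above pairs SIFT matching with RANSAC outlier rejection. The SIFT stage needs an image library, but the RANSAC stage can be sketched in plain NumPy on putative point correspondences. All function names below are our own illustration, not the paper's implementation, and we fit a similarity (translation-rotation-scale) model as an assumed motion model:

```python
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares similarity transform (rotation, uniform scale, translation).

    Solves dst ≈ [[a, -b], [b, a]] @ src + [tx, ty] for p = (a, b, tx, ty)."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return p

def apply_similarity(p: np.ndarray, pts: np.ndarray) -> np.ndarray:
    a, b, tx, ty = p
    return pts @ np.array([[a, b], [-b, a]]) + np.array([tx, ty])

def ransac_similarity(src, dst, iters=500, thresh=2.0, seed=0):
    """Classic RANSAC: fit on minimal 2-point samples, keep the model with
    the largest inlier set, then refit on all of its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)
        p = fit_similarity(src[idx], dst[idx])
        err = np.linalg.norm(apply_similarity(p, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_similarity(src[best], dst[best]), best

# Demo: recover a known transform despite 5 gross outliers among 30 matches.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 2))
p_true = np.array([0.8, 0.6, 5.0, -3.0])   # (a, b, tx, ty)
dst = apply_similarity(p_true, src)
dst[:5] += 40.0                            # corrupt 5 correspondences
p_est, inliers = ransac_similarity(src, dst)
print(inliers.sum())      # 25 inliers kept
print(np.round(p_est, 3))
```

The same pattern underlies the adaptive A-RANSAC variant cited in entry 7; only the sampling/threshold schedule differs.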
Affiliation(s)
- Fan Guo: School of Information Science and Engineering, Central South University, Changsha, P. R. China; Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China; Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Xin Zhao: School of Information Science and Engineering, Central South University, Changsha, P. R. China; Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China; Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Beiji Zou: School of Information Science and Engineering, Central South University, Changsha, P. R. China; Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China; Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Yixiong Liang: School of Information Science and Engineering, Central South University, Changsha, P. R. China; Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China; Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
12. Lee JA, Wong DWK. An automatic quantitative measurement method for performance assessment of retina image registration algorithms. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:3252-3255. [PMID: 28269001] [DOI: 10.1109/embc.2016.7591422]
Abstract
This paper presents a novel automatic quantitative measurement method for assessing the performance of algorithms that register retinal fundus images. To achieve automatic quantitative measurement, we propose using edges and an edge dissimilarity measure to determine the performance of retina image registration algorithms. Our input is a registered pair of retina fundus images obtained using any existing retina image registration algorithm in the literature. To compute the edge dissimilarity score, we propose an edge dissimilarity measure that we call the "robustified Hausdorff distance". We show that our approach is feasible by comparing against visual evaluation results when tested on images from the DRIVERA and G9 datasets.
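The abstract does not define its "robustified Hausdorff distance". One common way to robustify the classical Hausdorff distance between edge point sets is to replace the max of nearest-neighbour distances with a quantile; the sketch below shows that generic construction as an assumption, not necessarily the authors' definition:

```python
import numpy as np

def directed_hausdorff(A: np.ndarray, B: np.ndarray, q: float = 1.0) -> float:
    """Directed Hausdorff distance from edge point set A to B.

    q=1.0 gives the classical max over nearest-neighbour distances; q<1
    replaces the max with a quantile, which dampens the influence of a few
    stray edge points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    nearest = d.min(axis=1)   # for each point of A, distance to its nearest in B
    return float(np.quantile(nearest, q))

def symmetric_hausdorff(A: np.ndarray, B: np.ndarray, q: float = 1.0) -> float:
    return max(directed_hausdorff(A, B, q), directed_hausdorff(B, A, q))

# Demo: one stray point dominates the classical measure but not the q=0.5 one.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
print(symmetric_hausdorff(A, B))         # 9.0, driven by the stray point
print(symmetric_hausdorff(A, B, q=0.5))  # 0.0
```

Applied to edge maps of a registered image pair, a low score indicates well-aligned edges even when a few spurious edge pixels survive detection.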
13. Kolar R, Tornow RP, Odstrcilik J, Liberdova I. Registration of retinal sequences from new video-ophthalmoscopic camera. Biomed Eng Online 2016; 15:57. [PMID: 27206477] [PMCID: PMC4875736] [DOI: 10.1186/s12938-016-0191-0]
Abstract
Background: Analysis of fast temporal changes on retinas has become an important part of diagnostic video-ophthalmology. It enables investigation of hemodynamic processes in retinal tissue, e.g. blood-vessel diameter changes as a result of blood-pressure variation, spontaneous venous pulsation influenced by the intracranial-intraocular pressure difference, blood-volume changes as a result of changes in light reflection from retinal tissue, and blood flow using laser speckle contrast imaging. For such applications, image registration of the recorded sequence must be performed.
Methods: Here we use a new non-mydriatic video-ophthalmoscope for simple and fast acquisition of low-SNR retinal sequences. We introduce a novel two-step approach for fast image registration. Phase correlation in the first stage removes large eye movements; Lucas-Kanade tracking in the second stage removes small eye movements. We propose robust adaptive selection of the tracking points, which is the most important part of tracking-based approaches. We also describe a method for quantitative evaluation of the registration results, based on vascular tree intensity profiles.
Results: The achieved registration error, evaluated on 23 sequences (5840 frames), is 0.78 ± 0.67 pixels inside the optic disc and 1.39 ± 0.63 pixels outside the optic disc. We compared the results with commonly used approaches based on Lucas-Kanade tracking and the scale-invariant feature transform, which achieved worse results.
Conclusion: The proposed method can efficiently correct particular frames of retinal sequences for shift and rotation. The registration results for each frame (shift in the X and Y directions and eye rotation) can also be used for eye-movement evaluation during single-spot fixation tasks.
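The first registration stage above, phase correlation, recovers a global translation from the normalized cross-power spectrum of two frames. A minimal NumPy sketch of that standard technique (our illustration, not the paper's code):

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, mov: np.ndarray) -> tuple:
    """Estimate the integer (dy, dx) translation of `mov` relative to `ref`.

    By the Fourier shift theorem a translation only changes the phase of the
    spectrum, so the inverse FFT of the normalized cross-power spectrum peaks
    at the shift."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12      # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the image size correspond to negative shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Demo: a frame circularly shifted by (5, -7) pixels is recovered exactly.
rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 64))
shifted = np.roll(frame, shift=(5, -7), axis=(0, 1))
print(phase_correlation_shift(frame, shifted))  # (5, -7)
```

Because it uses the whole spectrum, this stage tolerates the low SNR of video-ophthalmoscope frames better than sparse feature matching; the residual small motions are then left to the tracking stage.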
Affiliation(s)
- Radim Kolar: Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 12, 616 00 Brno, Czech Republic
- Ralf P Tornow: Department of Ophthalmology, Friedrich-Alexander-University Erlangen-Nürnberg, Schwabachanlage 6, 91054 Erlangen, Germany
- Jan Odstrcilik: Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 12, 616 00 Brno, Czech Republic
- Ivana Liberdova: Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 12, 616 00 Brno, Czech Republic
14. Ghassabi Z, Shanbehzadeh J, Mohammadzadeh A. A structure-based region detector for high-resolution retinal fundus image registration. Biomed Signal Process Control 2016. [DOI: 10.1016/j.bspc.2015.08.005]
15. Ruppert GC, Chiachia G, Bergo FP, Favretto FO, Yasuda CL, Rocha A, Falcão AX. Medical image registration based on watershed transform from greyscale marker and multi-scale parameter search. Comput Methods Biomech Biomed Eng Imaging Vis 2015. [DOI: 10.1080/21681163.2015.1029643]
16. Patankar SS, Kulkarni JV. Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration. Comput Methods Programs Biomed 2015; 119:121-141. [PMID: 25837489] [DOI: 10.1016/j.cmpb.2015.02.009]
Abstract
Retinal image registration is a necessary step in the diagnosis and monitoring of Diabetic Retinopathy (DR), one of the leading causes of blindness. Long-term diabetes affects the retinal blood vessels and capillaries, eventually causing blindness; this progressive damage and subsequent blindness can be prevented by periodic retinal screening. The extent of damage caused by DR can be assessed by comparing retinal images captured during periodic screenings. During image acquisition at periodic screenings, translation, rotation and scale (TRS) differences are introduced into the retinal images, so retinal image registration is an essential step in automated systems for screening, diagnosis, treatment and evaluation of DR. This paper presents an algorithm for registration of retinal images using orthogonal moment invariants as features for determining the correspondence between dominant points (vessel bifurcations) in the reference and test retinal images. As orthogonal moments are invariant to TRS, moment-invariant features around a vessel bifurcation are unaltered by TRS and can be used to determine the correspondence between reference and test retinal images. The vessel bifurcation points are located in segmented, thinned (mono-pixel vessel width) retinal images and labeled in the corresponding grayscale retinal images. The correspondence between vessel bifurcations in the reference and test retinal images is established based on moment-invariant features. Further, the TRS of the test retinal image with respect to the reference retinal image is estimated using a similarity transformation, and the test image is aligned with the reference image using the estimated registration parameters. The accuracy of registration is evaluated in terms of the mean error and standard deviation of the labeled vessel bifurcation points in the aligned images. The experimentation is carried out on the DRIVE, STARE and VARIA databases and a database provided by a local government hospital in Pune, India. The experimental results demonstrate the effectiveness of the proposed algorithm for registration of retinal images.
17. Liu S, Datta A, Ho D, Dwelle J, Wang D, Milner TE, Rylander HG, Markey MK. Effect of image registration on longitudinal analysis of retinal nerve fiber layer thickness of non-human primates using Optical Coherence Tomography (OCT). Eye Vis (Lond) 2015; 2:3. [PMID: 26605359] [PMCID: PMC4657366] [DOI: 10.1186/s40662-015-0013-7]
Abstract
Background: In this paper we determine the benefits of image registration for estimating longitudinal retinal nerve fiber layer thickness (RNFLT) changes.
Methods: RNFLT maps around the optic nerve head (ONH) of healthy primate eyes were measured using Optical Coherence Tomography (OCT) weekly for 30 weeks. Two algorithms were used to register the retinal maps longitudinally: an automatic algorithm based on mutual information (MI), and a semi-automatic algorithm based on log-polar transform cross-correlation using manually segmented blood vessels (LPCC_MSBV). We compared the precision and recall between manually segmented image pairs for the two algorithms using a linear mixed effects model.
Results: The precision calculated between manually segmented image pairs following registration by the LPCC_MSBV algorithm is significantly better than that following registration by the MI algorithm (p < 0.0001). Trends over time of the all-rings average and the temporal, superior, nasal and inferior (TSNI) quadrant averages of RNFLT in healthy primate eyes are not affected by registration. RNFLT at clock hours 1, 2, and 10 showed significant change over 30 weeks without registration (p = 0.0058, 0.0054, and 0.0298, respectively), but stayed constant over time with registration.
Conclusions: LPCC_MSBV provides better registration of RNFLT maps recorded on different dates than the automatic MI algorithm. Registration of RNFLT maps can improve clinical analysis of glaucoma progression.
Affiliation(s)
- Shuang Liu: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA; Present address: Clinical Neuroscience Imaging Center (CNIC), Department of Neurology, Yale School of Medicine, New Haven, CT 06510, USA
- Anjali Datta: Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Derek Ho: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Jordan Dwelle: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Daifeng Wang: Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Thomas E Milner: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Henry Grady Rylander: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Mia K Markey: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
18. Retinal image registration using topological vascular tree segmentation and bifurcation structures. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.10.009]
19. Zhang K, Zhang XL, Xu X, Fu XW. Mutual information optimization based dynamic log-polar image registration. 2015. [DOI: 10.1007/s12204-015-1589-8]
20. Adal KM, Ensing RM, Couvert R, van Etten P, Martinez JP, Vermeer KA, van Vliet LJ. A Hierarchical Coarse-to-Fine Approach for Fundus Image Registration. Biomedical Image Registration 2014. [DOI: 10.1007/978-3-319-08554-8_10]
21. Kolar R, Harabis V, Odstrcilik J. Hybrid retinal image registration using phase correlation. Imaging Sci J 2013. [DOI: 10.1179/1743131x11y.0000000065]
22. Lan S, Luo S, Huh BK, Chandra A, Altman AR, Qin L, Liu XS. 3D image registration is critical to ensure accurate detection of longitudinal changes in trabecular bone density, microstructure, and stiffness measurements in rat tibiae by in vivo microcomputed tomography (μCT). Bone 2013; 56:83-90. [PMID: 23727434] [PMCID: PMC3715966] [DOI: 10.1016/j.bone.2013.05.014]
Abstract
In the past decade, in vivo μCT scanners have become available to monitor temporal changes in rodent bone in response to diseases and treatments. We investigated the short-term and long-term precision of in vivo μCT measurements of trabecular bone density, microstructure and stiffness of rat tibiae, and tested whether they can be improved by 3D image registration. Rats in the short-term precision group underwent baseline and follow-up scans within the same day (n = 15), and those in the long-term precision group were scanned at day 0 and day 14 (n = 16), at 10.5 μm voxel size. A 3D image-registration scheme was applied to register the trabecular bone compartments of the baseline and follow-up scans. Prior to image registration, short-term precision ranged between 0.85% and 2.65% in bone volume fraction (BV/TV), trabecular number, thickness, and spacing (Tb.N*, Tb.Th*, Tb.Sp*), and trabecular bone mineral density and tissue mineral density (Tb.BMD and Tb.TMD), and was particularly high in structure model index (SMI), connectivity density (Conn.D), and stiffness (4.29%-8.83%). Image registration tended to improve short-term precision, but the only statistically significant improvements were in Tb.N*, Tb.TMD, and stiffness. On the other hand, unregistered comparisons between day-0 and day-14 scans suggested significant increases in BV/TV, Tb.N*, Tb.Th*, Conn.D, and Tb.BMD, and decreases in Tb.Sp* and SMI. However, the percent change in each parameter from registered comparisons was significantly different from that of unregistered comparisons. Registered results suggested a significant increase in BV/TV, Tb.BMD, and stiffness over 14 days, primarily caused by increased Tb.Th* and Tb.TMD. Due to the continuous growth of rodents, direct comparisons between unregistered baseline and follow-up scans were driven by changes due to global bone modeling instead of local remodeling. Our results suggest that 3D image registration is critical for detecting changes due to bone remodeling activities in rodent trabecular bone by in vivo μCT imaging.
Collapse
Affiliation(s)
- Shenghui Lan
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Orthopaedic Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Hubei Province, People’s Republic of China
- Department of Orthopaedic Surgery, Wuhan General Hospital of Guangzhou Military Command, Hubei Province, People’s Republic of China
| | - Shiming Luo
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Beom Kang Huh
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Abhishek Chandra
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Allison R. Altman
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Ling Qin
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- X. Sherry Liu
- McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- To whom correspondence should be addressed: X. Sherry Liu, McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, University of Pennsylvania, 426C Stemmler Hall, 36th Street and Hamilton Walk, Philadelphia, PA 19104, USA, Phone: 1-215-746-4668
23
Legg PA, Rosin PL, Marshall D, Morgan JE. Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation. Comput Med Imaging Graph 2013; 37:597-606. [PMID: 24054309 DOI: 10.1016/j.compmedimag.2013.08.004] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2013] [Revised: 08/02/2013] [Accepted: 08/08/2013] [Indexed: 11/17/2022]
Abstract
Mutual information (MI) is a popular similarity measure for performing image registration between different modalities. MI makes a statistical comparison between two images by computing the entropy from the probability distribution of the data. Therefore, to obtain an accurate registration it is important to have an accurate estimation of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, with the aim of improving the estimation by means of optimal histogram bin size selection. This raises the common question of how many bins should actually be used when constructing a histogram, to which there is no definitive answer. The question has received little attention in the MI literature, yet it is critical to the effectiveness of the algorithm. The purpose of this paper is to highlight this fundamental element of the MI algorithm. We present a comprehensive study that introduces methods from the statistics literature and incorporates them into image registration. We demonstrate this work for the registration of multi-modal retinal images: colour fundus photographs and scanning laser ophthalmoscope images. The registration of these modalities offers significant enhancement to early glaucoma detection; however, traditional registration techniques fail to perform sufficiently well. We find that adaptive probability density estimation heavily impacts registration accuracy and runtime, improving on traditional binning techniques.
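To make the bin-count question concrete, a minimal histogram-based MI estimator can be paired with two of the classical bin-selection rules from the statistics literature, Sturges' rule and the Freedman-Diaconis rule (an illustrative sketch; the function names are ours, not the paper's):

```python
import numpy as np

def mutual_information(a, b, bins):
    """Histogram-based MI estimate (in nats) between two same-size images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0); empty cells contribute nothing
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def sturges_bins(n):
    """Sturges' rule: k = ceil(log2(n)) + 1."""
    return int(np.ceil(np.log2(n))) + 1

def freedman_diaconis_bins(x):
    """Freedman-Diaconis rule: bin width h = 2 * IQR(x) * n^(-1/3)."""
    q75, q25 = np.percentile(x, [75, 25])
    h = 2.0 * (q75 - q25) * len(x) ** (-1.0 / 3.0)
    return max(1, int(np.ceil((x.max() - x.min()) / h)))
```

The plug-in MI estimate is biased upward as the bin count grows, which is exactly why an adaptive rule matters: too few bins blur the joint density, too many overfit sampling noise.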
Affiliation(s)
- P A Legg
- School of Computer Science, Cardiff University, UK; Department of Computer Science, University of Oxford, UK.
24
Dame A, Marchand E. Second-order optimization of mutual information for real-time image registration. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2012; 21:4190-4203. [PMID: 22588592 DOI: 10.1109/tip.2012.2199124] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
In this paper, we present a direct image registration approach that uses mutual information (MI) as a metric for alignment. The proposed approach is robust and gives an accurate estimation of a set of 2-D motion parameters in real time. MI is a measure of the quantity of information shared by signals. Although it has the ability to perform robust alignment with illumination changes, multimodality, and partial occlusions, few works have proposed MI-based applications related to spatiotemporal image registration or object tracking in image sequences because of some optimization problems, which we will explain. In this paper, we propose a new optimization method that is adapted to the MI cost function and gives a practical solution for real-time tracking. We show that by refining the computation of the Hessian matrix and using a specific optimization approach, the registration results are far more robust and accurate than the existing solutions, with the computation also being cheaper. A new approach is also proposed to speed up the computation of the derivatives and keep the same optimization efficiency. To validate the advantages of the proposed approach, several experiments are performed.
Affiliation(s)
- Amaury Dame
- Institut de Recherche en Informatique et Systèmes Aléatoires, Centre National de la Recherche Scientifique, Rennes 35042, France.
25
FOOKES C, BENNAMOUN M. RIGID MEDICAL IMAGE REGISTRATION AND ITS ASSOCIATION WITH MUTUAL INFORMATION. INT J PATTERN RECOGN 2011. [DOI: 10.1142/s0218001403002800] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Image registration plays a crucial role in the computer vision and medical imaging field where it is used to develop a spatial mapping between different sets of data. These transformations can range from simple rigid registrations to complex nonrigid deformations. Mutual information (MI) is a popular entropy-based similarity measure which has recently experienced a prolific expansion in a number of image registration applications. Stemming from information theory, this measure generally outperforms most other intensity-based measures in multimodal applications as it only assumes a statistical dependence between images. This paper provides a thorough introduction to the MI measure and its use in rigid medical image registration. A look at the extensions proposed to the original measure will also be provided. These were developed to improve the robustness of the measure and to avoid certain cases when maximizing MI does not lead to the correct spatial alignment.
Affiliation(s)
- C. FOOKES
- School of Electrical & Electronic Systems Engineering, Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia
- M. BENNAMOUN
- Department of Computer Science and Software Engineering, The University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
26
DRÉO JOHANN, AUMASSON JEANPHILIPPE, TFAILI WALID, SIARRY PATRICK. ADAPTIVE LEARNING SEARCH, A NEW TOOL TO HELP COMPREHENDING METAHEURISTICS. INT J ARTIF INTELL T 2011. [DOI: 10.1142/s0218213007003370] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The majority of the algorithms used to solve hard optimization problems today are population metaheuristics. These methods are often presented from a purely algorithmic angle, with emphasis on the metaphors that led to their design. In this article we propose to regard population metaheuristics as methods that evolve a probabilistic sampling of the objective function, whether explicitly, implicitly, or directly, via processes of learning, diversification, and intensification. We present a synthesis of several metaheuristics and their functioning seen from this angle, called Adaptive Learning Search. We discuss how to design metaheuristics following this approach and propose an implementation with our Open Metaheuristics framework, along with concrete examples.
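The learning / diversification / intensification loop can be illustrated with a toy Gaussian estimation-of-distribution algorithm, one of the explicit-sampling metaheuristics this view covers (our sketch, not the Open Metaheuristics framework; all names and parameter values are illustrative): each generation samples from a model, keeps the elites, and refits the model.

```python
import numpy as np

def gaussian_eda(f, dim=2, pop=60, elite=15, iters=40, seed=0):
    """Minimize f by repeatedly sampling a Gaussian model and refitting it to the elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(pop, dim))         # diversification: sample the model
        scores = np.array([f(row) for row in x])
        elites = x[np.argsort(scores)[:elite]]             # intensification: keep the best
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-9  # learning: refit the model
    return mu
```

On a shifted sphere function f(v) = ||v - v*||², the sampling distribution contracts onto the optimum within a few dozen generations.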
Affiliation(s)
- JOHANN DRÉO
- Université Paris XII Val-de-Marne, Laboratoire Images, Signaux et Systèmes Intelligents (LISSI, EA 3956), 61, avenue du Général de Gaulle, 94010 Créteil, France
- WALID TFAILI
- Université Paris XII Val-de-Marne, Laboratoire Images, Signaux et Systèmes Intelligents (LISSI, EA 3956), 61, avenue du Général de Gaulle, 94010 Créteil, France
- PATRICK SIARRY
- Université Paris XII Val-de-Marne, Laboratoire Images, Signaux et Systèmes Intelligents (LISSI, EA 3956), 61, avenue du Général de Gaulle, 94010 Créteil, France
27
Xing C, Qiu P. Intensity-Based Image Registration by Nonparametric Local Smoothing. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2011; 33:2081-2092. [PMID: 21321367 DOI: 10.1109/tpami.2011.26] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Image registration is used widely in applications for mapping one image to another. Existing image registration methods are either feature-based or intensity-based. Feature-based methods first extract relevant image features and then find the geometrical transformation that best matches the two corresponding sets of features extracted from the two images. Because identification and extraction of image features is often a challenging and time-consuming process, intensity-based image registration, by which the mapping transformation is estimated directly from the observed image intensities of the two images, has received much attention recently. In the literature, most existing intensity-based image registration methods estimate the mapping transformation globally by solving a minimization/maximization problem defined by the two entire images to register. To this end, it needs to be assumed that the mapping transformation has a certain type of parametric form or it is a continuous bivariate function satisfying certain regularity conditions. In this paper, we propose a novel intensity-based image registration method using nonparametric local smoothing. By this method, the mapping transformation at a given pixel is estimated locally in a neighborhood after certain image features are accommodated in the estimation. Due to the flexibility of local smoothing, this method does not require any parametric form for the mapping transformation. It even allows the transformation to be a discontinuous function. Numerical examples show that it is effective in various applications.
28
Mariño C, Ortega M, Barreira N, Penedo MG, Carreira MJ, González F. Algorithm for registration of full Scanning Laser Ophthalmoscope video sequences. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2011; 102:1-16. [PMID: 21269727 DOI: 10.1016/j.cmpb.2010.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2009] [Revised: 11/04/2010] [Accepted: 12/01/2010] [Indexed: 05/30/2023]
Abstract
Fluorescein angiography is an established technique for examining the functional integrity of the retinal microcirculation for early detection of changes due to retinopathy. This paper describes a new method for the registration of large Scanning Laser Ophthalmoscope (SLO) sequences, where the patient has been injected with a fluorescent dye. This allows the measurement of parameters such as the arteriovenous passage time. Due to the long time needed to acquire these sequences, there will inevitably be eye movement, which must be corrected prior to the application of quantitative analysis. The algorithm described here combines mutual information-based registration and landmark-based registration. The former allows the alignment of the darkest frames of the sequence, where the dye has not yet arrived at the retina, because of its ability to work with images without preprocessing or segmentation, while the latter uses relevant features (the vessels), extracted by means of a robust creaseness operator, to achieve a very fast and accurate registration. The algorithm only detects rigid transformations but proves to be robust against the slight alterations arising from the eye's position and perspective during acquisition. Results were validated by expert clinicians.
Affiliation(s)
- C Mariño
- Dep. Computación, Universidade da Coruña, Spain.
29
Li Y, Gregori G, Knighton RW, Lujan BJ, Rosenfeld PJ. Registration of OCT fundus images with color fundus photographs based on blood vessel ridges. OPTICS EXPRESS 2011; 19:7-16. [PMID: 21263537 PMCID: PMC3368356 DOI: 10.1364/oe.19.000007] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2010] [Revised: 12/16/2010] [Accepted: 12/16/2010] [Indexed: 05/20/2023]
Abstract
This paper proposes an algorithm to register OCT fundus images (OFIs) with color fundus photographs (CFPs). This makes it possible to correlate retinal features across the different imaging modalities. Blood vessel ridges are taken as features for registration. A specially defined distance, incorporating the normal direction of blood vessel ridge pixels, is designed to measure the distance between each pair of pixels to be matched across the image pair. Based on this distance, a similarity function between the image pair is defined. Brute-force search is used for a coarse registration and then an Iterative Closest Point (ICP) algorithm for a more accurate registration. The registration algorithm was tested on a sample set containing images of both normal eyes and eyes with pathologies. Three transformation models (similarity, affine, and quadratic) were tested on all image pairs. The experimental results showed that the registration algorithm worked well. The average root mean square errors for the affine model are 31 µm (normal eyes) and 59 µm (eyes with disease). The proposed algorithm can be easily adapted to the registration of other retinal image modalities.
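The coarse-search-plus-ICP pipeline can be sketched on plain 2-D point sets. This is a simplified stand-in for the paper's method: ordinary Euclidean nearest neighbours replace the specially defined ridge-normal distance, and a least-squares similarity transform (the first of the three models tested) is assumed.

```python
import numpy as np

def best_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s0, d0 = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(s0.T @ d0)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    scale = np.trace(np.diag(S) @ D) / (s0 ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def icp(src, dst, iters=20):
    """Iterative Closest Point: alternate nearest-neighbour matching and transform fitting."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        match = dst[d2.argmin(axis=1)]        # closest dst point for each current point
        s, R, t = best_similarity(src, match)
        cur = src @ (s * R).T + t             # reapply the refined transform to the originals
    return cur
```

As in the paper, ICP only refines: it converges to the right alignment when the initial pose is already close, which is why a coarse brute-force search comes first.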
Affiliation(s)
- Ying Li
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida 33136, USA.
30
Zheng J, Tian J, Deng K, Dai X, Zhang X, Xu M. Salient feature region: a new method for retinal image registration. ACTA ACUST UNITED AC 2010; 15:221-32. [PMID: 21138808 DOI: 10.1109/titb.2010.2091145] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Retinal image registration is crucial for the diagnosis and treatment of various eye diseases. A great number of methods have been developed to solve this problem; however, fast and accurate registration of low-quality retinal images remains challenging owing to low content contrast, large intensity variance, and the deterioration of unhealthy retinas caused by various pathologies. This paper provides a new retinal image registration method based on salient feature regions (SFR). We first propose a well-defined region saliency measure, consisting of both local adaptive variance and gradient field entropy, to extract the SFRs in each image. An innovative local feature descriptor that combines the gradient field distribution with corresponding geometric information is then computed to match the SFRs accurately. After that, normalized cross-correlation-based local rigid registration is performed on the matched SFRs to refine the local alignment. Finally, the two images are registered by adopting a high-order global transformation model with the locally well-aligned region centers as control points. Experimental results show that our method is quite effective for retinal image registration.
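A toy version of the saliency idea, combining patch variance with an entropy of the gradient field, might look like this (our illustration; gradient-orientation entropy stands in for the paper's gradient field entropy, and plain variance for local adaptive variance):

```python
import numpy as np

def region_saliency(patch, bins=16):
    """Toy saliency score: patch intensity variance plus gradient-orientation entropy."""
    gy, gx = np.gradient(patch.astype(float))
    theta = np.arctan2(gy, gx)                      # gradient orientation per pixel
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    nz = p > 0
    entropy = -np.sum(p[nz] * np.log(p[nz]))
    return float(patch.var() + entropy)
```

A flat patch scores zero on both terms, while a textured, structure-rich patch scores high on both, which is the property the SFR extraction step relies on.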
Affiliation(s)
- Jian Zheng
- Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China
31
Retinal Fundus Image Registration via Vascular Structure Graph Matching. Int J Biomed Imaging 2010; 2010. [PMID: 20871853 PMCID: PMC2943092 DOI: 10.1155/2010/906067] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 07/07/2010] [Indexed: 11/18/2022] Open
Abstract
Motivated by the observation that a retinal fundus image may contain unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. To eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients.
32
Broehan AM, Tappeiner C, Rothenbuehler SP, Rudolph T, Amstutz CA, Kowal JH. Multimodal registration procedure for the initial spatial alignment of a retinal video sequence to a retinal composite image. IEEE Trans Biomed Eng 2010; 57:1991-2000. [PMID: 20460204 DOI: 10.1109/tbme.2010.2048710] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Accurate placement of lesions is crucial for the effectiveness and safety of a retinal laser photocoagulation treatment. Computer assistance provides the capability for improvements to treatment accuracy and execution time. The idea is to use video frames acquired from a scanning digital ophthalmoscope (SDO) to compensate for retinal motion during laser treatment. This paper presents a method for the multimodal registration of the initial frame from an SDO retinal video sequence to a retinal composite image, which may contain a treatment plan. The retinal registration procedure comprises the following steps: 1) detection of vessel centerline points and identification of the optic disc; 2) prealignment of the video frame and the composite image based on optic disc parameters; and 3) iterative matching of the detected vessel centerline points in expanding matching regions. This registration algorithm was designed for the initialization of a real-time registration procedure that registers the subsequent video frames to the composite image. The algorithm demonstrated its capability to register various pairs of SDO video frames and composite images acquired from patients.
Affiliation(s)
- A Martina Broehan
- artificial organ (ARTORG) Center for Biomedical Engineering Research, University of Bern, Bern 3014, Switzerland
33
Affine-based registration of CT and MR modality images of human brain using multiresolution approaches: comparative study on genetic algorithm and particle swarm optimization. Neural Comput Appl 2010. [DOI: 10.1007/s00521-010-0374-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
34
Zheng J, Tian J, Dai Y, Deng K, Chen J. Retinal image registration based on salient feature regions. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2010; 2009:102-5. [PMID: 19964922 DOI: 10.1109/iembs.2009.5334778] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Retinal image registration is essential for ophthalmologists to diagnose various diseases. A great number of methods have been developed to solve this problem; however, fast and accurate retinal image registration is still challenging owing to the great content complexity and low image quality of the unhealthy retina. This paper provides a new retinal image registration method based on salient feature regions (SFR). We first extract the SFR in each image based on a well-defined region saliency metric. Next, SFR are matched using an innovative local feature descriptor. We then register those matched SFR using local rigid transformations. Finally, we register the two images using a global second-order polynomial transformation with the locally rigidly registered region centers as control points. Experimental results show that our method is fast and accurate, and especially effective for registering low-quality retinal images.
Affiliation(s)
- Jian Zheng
- Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation Chinese Academy of Sciences
35
Tsai CL, Li CY, Yang G, Lin KS. The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence. IEEE TRANSACTIONS ON MEDICAL IMAGING 2010; 29:636-649. [PMID: 19709965 DOI: 10.1109/tmi.2009.2030324] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Motivated by the need for multimodal image registration in ophthalmology, this paper introduces an algorithm tailored to jointly align, in a common reference space, all the images in a complete fluorescein angiogram (FA) sequence, which contains both red-free (RF) and FA images. Our work is inspired by Generalized Dual-Bootstrap Iterative Closest Point (GDB-ICP), which rank-orders Lowe keypoint matches and refines the transformation, going from a local, low-order model to a global, higher-order one, computed from each keypoint match in succession. Although GDB-ICP has been shown to be robust in registering images taken under different lighting conditions, its performance is not satisfactory for image pairs with substantial, nonlinear intensity differences. Our algorithm, named Edge-Driven DB-ICP, targets the least reliable component of GDB-ICP by modifying the generation of keypoint matches for initialization: it extracts the Lowe keypoints from the gradient magnitude image and enriches the keypoint descriptor with global-shape context using the edge points. Our dataset consists of 60 randomly selected pathological sequences, each on average having up to two RF and 13 FA images. Edge-Driven DB-ICP successfully registered 92.4% of all pairs and 81.1% of multimodal pairs, whereas GDB-ICP registered 80.1% and 40.1%, respectively. For the joint registration of all images in a sequence, Edge-Driven DB-ICP succeeded in 59 sequences, a 23% improvement over GDB-ICP.
Affiliation(s)
- Chia-Ling Tsai
- Department of Computer Science, Iona College, New Rochelle, NY 10801, USA.
36
Chen J, Tian J, Lee N, Zheng J, Smith RT, Laine AF. A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Trans Biomed Eng 2010; 57:1707-18. [PMID: 20176538 DOI: 10.1109/tbme.2010.2042169] [Citation(s) in RCA: 191] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Detection of vascular bifurcations is a challenging task in multimodal retinal image registration. Existing algorithms based on bifurcations usually fail in correctly aligning poor quality retinal image pairs. To solve this problem, we propose a novel highly distinctive local feature descriptor named partial intensity invariant feature descriptor (PIIFD) and describe a robust automatic retinal image registration framework named Harris-PIIFD. PIIFD is invariant to image rotation, partially invariant to image intensity, affine transformation, and viewpoint/perspective change. Our Harris-PIIFD framework consists of four steps. First, corner points are used as control point candidates instead of bifurcations since corner points are sufficient and uniformly distributed across the image domain. Second, PIIFDs are extracted for all corner points, and a bilateral matching technique is applied to identify corresponding PIIFDs matches between image pairs. Third, incorrect matches are removed and inaccurate matches are refined. Finally, an adaptive transformation is used to register the image pairs. PIIFD is so distinctive that it can be correctly identified even in nonvascular areas. When tested on 168 pairs of multimodal retinal images, the Harris-PIIFD far outperforms existing algorithms in terms of robustness, accuracy, and computational efficiency.
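The first step of the Harris-PIIFD framework, detecting corner control-point candidates, rests on the standard Harris response det(M) - k·tr(M)² of the local structure tensor M. A minimal sketch (our code, with a box window standing in for the usual Gaussian weighting):

```python
import numpy as np

def box_sum(a, r=1):
    """Sum over a (2r+1)x(2r+1) window; wraps at the borders, which is fine for a sketch."""
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

def harris_response(img, k=0.04, r=1):
    """Harris corner response from the windowed structure tensor [[Sxx, Sxy], [Sxy, Syy]]."""
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box_sum(gx * gx, r), box_sum(gy * gy, r), box_sum(gx * gy, r)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
```

The response is positive at corners, negative along straight edges, and near zero in flat regions, so thresholding its local maxima yields the uniformly distributed control-point candidates the framework wants.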
Affiliation(s)
- Jian Chen
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
37
Tsai CL, Madore B, Leotta M, Sofka M, Yang G, Majerovics A, Tanenbaum H, Stewart C, Roysam B. Automated Retinal Image Analysis Over the Internet. ACTA ACUST UNITED AC 2008; 12:480-7. [DOI: 10.1109/titb.2007.908790] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
38
Baumgarten D, Doering A. [Registration of fundus images for generating wide field composite images of the retina]. BIOMED ENG-BIOMED TE 2008; 52:365-74. [PMID: 18047401 DOI: 10.1515/bmt.2007.061] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The composition of retinal images places high demands on the applied methods. Substantially different lighting conditions between images, glare and fade-outs within a single image, large textureless regions, and nonlinear distortions are the main challenges. We present a fully automatic algorithm for the registration of images of the human retina and their overlay into wide-field montage images, combining an area-based and a point-based approach for determining similarities between images. Various similarity measures were investigated; the normalized correlation coefficient proved superior to the usual definitions of mutual information (transinformation). The transformation of the images was based on a quadratic model that can be derived from the spherical surface of the retina. This model was compared to four other parameterized transformations and performed best both visually and quantitatively in terms of measured misregistration. Problems may occur if the images are extremely defocused or contain very little relevant structural information.
Affiliation(s)
- Daniel Baumgarten
- Institut für Biomedizinische Technik und Informatik, Technische Universität Ilmenau, Ilmenau, Deutschland.
39
Higgins WE, Helferty JP, Lu K, Merritt SA, Rai L, Yu KC. 3D CT-video fusion for image-guided bronchoscopy. Comput Med Imaging Graph 2007; 32:159-73. [PMID: 18096365 DOI: 10.1016/j.compmedimag.2007.11.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2006] [Revised: 10/01/2007] [Accepted: 11/01/2007] [Indexed: 12/18/2022]
Abstract
Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.
Affiliation(s)
- William E Higgins
- Department of Electrical Engineering, Penn State University, University Park, PA 16802, United States.
40
Markaki VE, Asvestas PA, Matsopoulos GK, Uzunoglu NK. Application of the Kohonen network for automatic point correspondence in retinal images. ACTA ACUST UNITED AC 2007; 2007:6468-71. [PMID: 18003506 DOI: 10.1109/iembs.2007.4353840] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this paper, an algorithm for automatic point correspondence is proposed for retinal image registration. Given a pair of corresponding retinal images and a set of bifurcations or other salient points in one of the images, the algorithm effectively detects the set of corresponding points in the second image by exploiting the properties of Kohonen's Self-Organizing Maps and embedding them in a stochastic optimization procedure. The proposed algorithm was tested on 20 unimodal retinal pairs, and the obtained results show enhanced performance in terms of accuracy and robustness compared to the existing algorithm on which it is based.
Affiliation(s)
- V E Markaki
- Institute of Communications and Computer Systems, National Technical University of Athens, 15780 Zografos, Greece
41
Asvestas PA, Matsopoulos GK, Delibasis KK, Mouravliansky NA. Registration of retinal angiograms using self organizing maps. CONFERENCE PROCEEDINGS : ... ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL CONFERENCE 2007; 2006:4722-5. [PMID: 17946259 DOI: 10.1109/iembs.2006.260567] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: vessel centerline detection and extraction of bifurcation points in the reference image only; automatic correspondence of bifurcation points between the two images using a novel implementation of Self-Organizing Maps (SOMs); and extraction of the parameters of the affine transform from the previously obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs, and the obtained results show advantageous performance in terms of accuracy with respect to manual registration.
42
Sarkar I, Bansal M. A Wavelet-Based Multiresolution Approach to Solve the Stereo Correspondence Problem Using Mutual Information. ACTA ACUST UNITED AC 2007; 37:1009-14. [PMID: 17702296 DOI: 10.1109/tsmcb.2007.890584] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
In this correspondence, we propose a wavelet-based hierarchical approach using mutual information (MI) to solve the correspondence problem in stereo vision. The correspondence problem involves identifying corresponding pixels between images of a given stereo pair. This results in a disparity map, which is required to extract depth information of the relevant scene. Until recently, mostly correlation-based methods have been used to solve the correspondence problem. However, the performance of correlation-based methods degrades significantly when there is a change in illumination between the two images of the stereo pair. Recent studies indicate that MI is a more robust stereo matching metric for images affected by such radiometric distortions. We compare the performances of MI and correlation-based metrics for different types of illumination changes between stereo images. MI, as a statistical metric, is computationally more expensive, so we propose a wavelet-based hierarchical technique to counter the increase in computational cost and show its effectiveness in stereo matching.
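The MI matching metric discussed above can be sketched from a joint histogram of two patches (a minimal illustration; the bin count and patch sizes are arbitrary choices, not the paper's settings):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two equally sized grayscale patches via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI tolerates monotone intensity remapping: a patch matched against a
# linearly re-lit copy of itself still scores far above an unrelated patch.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, (64, 64)).astype(float)
same = mutual_information(patch, patch)
relit = mutual_information(patch, patch * 0.5 + 100)  # radiometric distortion
other = mutual_information(patch, rng.integers(0, 256, (64, 64)).astype(float))
```

This invariance to intensity remapping is exactly why MI outperforms correlation under illumination changes.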
|
43
|
Adjeroh DA, Kandaswamy U, Odom JV. Texton-based segmentation of retinal vessels. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2007; 24:1384-93. [PMID: 17429484 DOI: 10.1364/josaa.24.001384] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
With improvements in fundus imaging technology and the increasing use of digital images in screening and diagnosis, the issue of automated analysis of retinal images is gaining more serious attention. We consider the problem of retinal vessel segmentation, a key issue in automated analysis of digital fundus images. We propose a texture-based vessel segmentation algorithm based on the notion of textons. Using a weak statistical learning approach, we construct textons for retinal vasculature by designing filters that are specifically tuned to the structural and photometric properties of retinal vessels. We evaluate the performance of the proposed approach using a standard database of retinal images. On the DRIVE data set, the proposed method produced an average performance of 0.9568 specificity at 0.7346 sensitivity. This compares well with the best published results on the data set, 0.9773 specificity at 0.7194 sensitivity [Proc. SPIE 5370, 648 (2004)].
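Filters "tuned to the structural properties of vessels" are typically oriented line detectors; a generic sketch of such a filter bank (illustrative code, not the cited paper's texton filters):

```python
import numpy as np

def line_filter(length=9, sigma=1.5, angle=0.0):
    """Oriented second-derivative-of-Gaussian profile: a dark line on a
    bright background yields a strong positive response (vessels are dark)."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Coordinate measured across the line direction `angle`
    d = xs * np.sin(angle) - ys * np.cos(angle)
    g = (d ** 2 / sigma ** 2 - 1) * np.exp(-d ** 2 / (2 * sigma ** 2))
    return g - g.mean()  # zero mean: no response to flat background

def filter_bank(n_orient=8):
    return [line_filter(angle=np.pi * k / n_orient) for k in range(n_orient)]

# A dark vertical line should excite the pi/2 (vertical) filter the most
img = np.ones((9, 9))
img[:, 4] = 0.0
responses = [float(np.sum(img * f)) for f in filter_bank()]
best = int(np.argmax(responses))
```

In a texton pipeline, the per-pixel vector of such responses would then be clustered (e.g. with k-means) to form the texton dictionary.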
Affiliation(s)
- Donald A Adjeroh
- Lane Department of Computer Science and Electrical Engineering, Video and Image Processing Laboratory, West Virginia University, Morgantown 26506, USA.
|
44
|
Zhu YM. A Java program for stereo retinal image visualization. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2007; 85:214-9. [PMID: 17257706 DOI: 10.1016/j.cmpb.2006.11.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2005] [Revised: 10/30/2006] [Accepted: 11/24/2006] [Indexed: 05/13/2023]
Abstract
Stereo imaging of the optic disc is a gold-standard examination for glaucoma, and progression of glaucoma can be detected from temporal stereo images. A Java-based software system is reported here that automatically aligns the left and right stereo retinal images and presents the aligned images side by side, along with the anaglyph computed from the aligned images. Moreover, the disparity between the two aligned images is computed and used as the depth cue to render the optic-disc images, which can be interactively edited, panned, zoomed, rotated, and animated, allowing one to examine the surface of the optic nerve head from different view angles. Measurements including length, area, and volume of regions of interest can also be performed interactively.
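The anaglyph step mentioned above is simple channel mixing once the pair is aligned; a minimal sketch (generic code, not the reported Java system):

```python
import numpy as np

def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left view, green and blue
    from the right view. left, right: (H, W) aligned grayscale arrays."""
    h, w = left.shape
    out = np.zeros((h, w, 3), dtype=left.dtype)
    out[..., 0] = left    # red   <- left eye
    out[..., 1] = right   # green <- right eye
    out[..., 2] = right   # blue  <- right eye
    return out

left = np.arange(16, dtype=np.uint8).reshape(4, 4)
right = left[:, ::-1]  # stand-in for the aligned right view
ana = anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then sees only its own view, which is what produces the depth impression.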
|
45
|
Kubecka L, Jan J. Registration of bimodal retinal images - improving modifications. CONFERENCE PROCEEDINGS : ... ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL CONFERENCE 2007; 2004:1695-8. [PMID: 17272030 DOI: 10.1109/iembs.2004.1403510] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Proper optic disc segmentation in images provided by a confocal laser scanning ophthalmoscope and by a color fundus camera is a necessary step in early glaucoma or arteriosclerosis detection. Fusing information from both modalities into a vector-valued image is expected to improve segmentation reliability. The paper describes the registration of these images using optimization of a mutual information criterion function extended with gradient-image mutual information. Controlled random search (CRS) was found to be a more robust optimization routine than simulated annealing (SA) when tested on a set of 174 image pairs. Finally, a multi-resolution algorithm for bimodal retinal image registration achieving a success rate of 94% is proposed.
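The CRS optimizer favored above can be sketched in a few lines (a minimal population-based variant applied to a toy criterion, not the paper's MI-based registration pipeline; all names are illustrative):

```python
import numpy as np

def controlled_random_search(f, bounds, n_pop=25, iters=3000, seed=0):
    """Minimal controlled random search (CRS) sketch: keep a population of
    candidate parameter vectors, propose the reflection of a random point
    through the centroid of a random simplex, and replace the current worst
    point whenever the trial improves on it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (n_pop, dim))
    vals = np.array([f(p) for p in pop])
    for _ in range(iters):
        idx = rng.choice(n_pop, dim + 1, replace=False)
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = np.clip(2.0 * centroid - pop[idx[-1]], lo, hi)
        fv = f(trial)
        worst = int(np.argmax(vals))
        if fv < vals[worst]:
            pop[worst], vals[worst] = trial, fv
    best = int(np.argmin(vals))
    return pop[best], vals[best]

# Toy criterion standing in for the (negated) similarity measure:
# a 2-D quadratic with its optimum at (1, -2).
crit = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
x, v = controlled_random_search(crit, [(-5.0, 5.0), (-5.0, 5.0)])
```

Unlike simulated annealing, CRS carries a whole population forward, which is one plausible source of the robustness reported on the 174 image pairs.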
Affiliation(s)
- L Kubecka
- Department of Biomedical Engineering, Brno University of Technology, Czech Republic.
|
46
|
Dréo J, Nunes JC, Siarry P. Robust rigid registration of retinal angiograms through optimization. Comput Med Imaging Graph 2006; 30:453-63. [PMID: 17034991 DOI: 10.1016/j.compmedimag.2006.07.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2005] [Revised: 07/16/2006] [Accepted: 07/19/2006] [Indexed: 11/17/2022]
Abstract
Retinal fundus photographs are employed as standard diagnostic tools in ophthalmology. Serial photographs of the flow of fluorescein and indocyanine green (ICG) dye are used to determine the areas of retinal lesions. For objective measurements of features, registration of the images is a necessity. In this paper, we employ optimization techniques for registration with a 2-parameter translational motion model of retinal angiograms, based on non-linear pre-processing (Wiener filtering and morphological gradient) and computation of a similarity criterion for the alignment of the two gradient images under any given rigid transformation. The optimization methods are employed to minimize this similarity criterion. The presence of noise, variations in the background, and the temporal variation of the fluorescence level pose serious problems in obtaining a robust registration of the retinal images. Moreover, local search strategies are not robust in the case of ICG angiograms, even if one uses a multiresolution approach. The present work makes a systematic comparison of different optimization techniques, namely the minimization method derived from the optical flow formulation, the Nelder-Mead local search, and the HCIAC ant colony metaheuristic, each optimizing a similarity criterion for the gradient images. The impact of the resolution and of median filtering of the gradient image is studied, and the robustness of the approaches is tested through experimental studies performed on macular fluorescein and ICG angiograms. The proposed optimization techniques have shown promising results, especially for high-resolution, difficult registration problems. Moreover, this approach seems promising for affine (6-parameter motion model) or elastic registration.
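The 2-parameter translational model is small enough that its behavior can be illustrated with an exhaustive search over integer shifts (a toy SSD criterion with circular shifts; the paper instead optimizes a criterion on preprocessed gradient images with the methods listed above):

```python
import numpy as np

def best_translation(ref, mov, max_shift=5):
    """Exhaustive search over the 2-parameter translational model: return
    the (dy, dx) shift of `mov` that minimizes an SSD criterion vs `ref`."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = float(np.mean((ref - shifted) ** 2))
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic pair: `mov` is `ref` circularly shifted by (-3, +2)
rng = np.random.default_rng(2)
ref = rng.uniform(size=(32, 32))
mov = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)
recovered = best_translation(ref, mov)
```

Exhaustive search is only feasible because the parameter space is 2-D and small; the optimizers compared in the paper exist precisely to avoid this cost on larger search ranges and models.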
Affiliation(s)
- Johann Dréo
- Laboratoire Images Signaux et Systèmes Intelligents (LiSSi, EA 3956), Université Paris XII-Val de Marne, 94010 Créteil, France
|
47
|
Xu J, Chutatape O. Auto-adjusted 3-D optic disk viewing from low-resolution stereo fundus image. Comput Biol Med 2006; 36:921-40. [PMID: 16023095 DOI: 10.1016/j.compbiomed.2005.05.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2004] [Revised: 05/11/2005] [Accepted: 05/11/2005] [Indexed: 10/25/2022]
Abstract
Three-dimensional (3-D) visualization of the optic nerve head (optic disk) is very useful for clinical applications. It allows clinicians to measure disk parameters more accurately and thus makes pathological diagnosis and progression monitoring easier. This paper describes an automatic, precise, 3-D optic nerve head reconstruction method from a pair of stereo images, using efficient steps including sparse-image registration and dense-depth recovery. A combination of two registration methods is used to detect sub-pixel correspondences: the proposed method takes advantage of both the correlation-based method, which is robust to noise, and the feature-based method, which is accurate. The search range in image registration is auto-adjusted based on the result of the previous iteration. Only sparse matched points are computed to speed up the processing, and sub-pixel matching is used to overcome the problem of low resolution in the image. This is followed by piecewise cubic interpolation to obtain dense disparities and depths. Multiple windowing is applied by first using a large window to obtain basic disparities, followed by a small window and the previous basic disparities to recover details. The result is then smoothed and displayed as the final 3-D shape.
Affiliation(s)
- Juan Xu
- Biomedical Engineering Research Centre, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
|
48
|
Narasimha-Iyer H, Can A, Roysam B, Stewart CV, Tanenbaum HL, Majerovics A, Singh H. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Trans Biomed Eng 2006; 53:1084-98. [PMID: 16761836 DOI: 10.1109/tbme.2005.863971] [Citation(s) in RCA: 114] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A fully automated approach is presented for robust detection and classification of changes in longitudinal time-series of color retinal fundus images of diabetic retinopathy. The method is robust to: 1) spatial variations in illumination resulting from instrument limitations and changes both within and between patient visits; 2) imaging artifacts such as dust particles; 3) outliers in the training data; 4) segmentation and alignment errors. Robustness to illumination variation is achieved by a novel iterative algorithm that estimates the reflectance of the retina by exploiting automatically extracted segmentations of the retinal vasculature, optic disk, fovea, and pathologies. Robustness to dust artifacts is achieved by exploiting their spectral characteristics, enabling application to film-based as well as digital imaging systems. False changes from alignment errors are minimized by subpixel-accuracy registration using a 12-parameter transformation that accounts for unknown retinal curvature and camera parameters. Bayesian detection and classification algorithms are used to generate a color-coded output that is readily inspected. A multiobserver validation on 43 image pairs from 22 eyes involving nonproliferative and proliferative diabetic retinopathies showed a 97% change detection rate, a 3% miss rate, and a 10% false alarm rate. The performance in correctly classifying the changes was 99.3%. A self-consistency metric and an error factor were developed to measure performance over more than two periods. The average self-consistency was 94% and the error factor was 0.06%. Although this study focuses on diabetic changes, the proposed techniques have broader applicability in ophthalmology.
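A 12-parameter transformation of the kind mentioned above maps each output coordinate through a full second-order polynomial in (x, y), i.e. 2 x 6 coefficients; a generic sketch of applying and least-squares fitting such a model (illustrative code, not the paper's implementation):

```python
import numpy as np

def quad_basis(pts):
    """Second-order polynomial basis [x^2, xy, y^2, x, y, 1] per point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)], axis=1)

def fit_quadratic(src, dst):
    """Least-squares 12-parameter quadratic transform (2 x 6 coefficients);
    needs at least 6 well-spread correspondences."""
    theta, *_ = np.linalg.lstsq(quad_basis(src), dst, rcond=None)
    return theta.T  # shape (2, 6)

def apply_quadratic(pts, theta):
    return quad_basis(pts) @ theta.T

# Recover a known quadratic warp from noiseless correspondences
rng = np.random.default_rng(5)
src = rng.uniform(-1, 1, (20, 2))
true_theta = np.array([[0.02, 0.0, -0.01, 1.0, 0.03, 0.1],
                       [0.0, 0.015, 0.02, -0.02, 1.0, -0.2]])
dst = apply_quadratic(src, true_theta)
theta = fit_quadratic(src, dst)
```

The quadratic terms are what let such a model absorb the curvature of the retina that a plain affine transform cannot.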
Affiliation(s)
- Harihar Narasimha-Iyer
- Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
|
49
|
Fang B, Tang YY. Elastic registration for retinal images based on reconstructed vascular trees. IEEE Trans Biomed Eng 2006; 53:1183-7. [PMID: 16761845 DOI: 10.1109/tbme.2005.863927] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The vascular tree of the retina is likely the most representative and stable feature of eye fundus images for registration. Based on the reconstructed vascular tree, we propose an elastic matching algorithm to register pairs of fundus images. The identified vessels are thinned and approximated using short line segments of equal length, resulting in a set of elements. The set of elements corresponding to one vascular tree is elastically deformed to optimally match the set of elements of another vascular tree, guided by an energy function, to finally establish a pixel relationship between both vascular trees. The mapped positions of pixels in the transformed retinal image are computed as the sum of their original locations and the corresponding displacement vectors. For the purpose of performance comparison, a fast chamfer matching technique based on a weak affine model is proposed and implemented. Experimental results validated the effectiveness of the elastic matching algorithm and its advantage over the weak affine model for registration of retinal fundus images.
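Chamfer matching, the baseline above, scores a transformed model against a distance transform of the target edge map; a minimal two-pass 3-4 chamfer sketch (generic code, not the paper's implementation):

```python
import numpy as np

def chamfer_distance_transform(edges):
    """Two-pass 3-4 chamfer approximation of the distance from every pixel
    to the nearest edge pixel (edges: boolean array)."""
    big = 1e9
    d = np.where(edges, 0.0, big)
    h, w = d.shape
    for i in range(h):                      # forward raster pass
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 3)
    for i in range(h - 1, -1, -1):          # backward raster pass
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 3)
    return d / 3.0                          # roughly in pixel units

def chamfer_score(points, dist):
    """Mean distance from (row, col) model points to the nearest target
    edge; lower means a better match under the candidate transform."""
    ij = np.round(points).astype(int)
    return float(dist[ij[:, 0], ij[:, 1]].mean())

edges = np.zeros((11, 11), dtype=bool)
edges[5, 5] = True
dmap = chamfer_distance_transform(edges)
```

Matching then amounts to minimizing `chamfer_score` over the (weak affine) transform parameters, with the distance map computed once per target image.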
Affiliation(s)
- Bin Fang
- Department of Computer Science, Chongqing University, PR China
|
50
|
Abstract
This work studies retinal image registration in the context of the National Institutes of Health (NIH) Early Treatment Diabetic Retinopathy Study (ETDRS) standard. The ETDRS imaging protocol specifies seven fields of each retina and presents three major challenges for the image registration task. First, small overlaps between adjacent fields lead to inadequate landmark points for feature-based methods. Second, the non-uniform contrast/intensity distributions due to imperfect data acquisition deteriorate the performance of area-based techniques. Third, high-resolution images contain large homogeneous nonvascular/textureless regions that weaken the capabilities of both feature-based and area-based techniques. In this work, we propose a hybrid retinal image registration approach for ETDRS images that effectively combines both area-based and feature-based methods. Four major steps are involved. First, the vascular tree is extracted by using an efficient local entropy-based thresholding technique. Next, zeroth-order translation is estimated by maximizing mutual information based on the binary image pair (area-based). Then image quality assessment regarding the ETDRS field definition is performed based on the translation model. If the image pair is accepted, higher-order transformations are involved. Specifically, we use two types of features, landmark points and sampling points, for affine/quadratic model estimation. Three empirical conditions are derived experimentally to control the algorithm's progress, so that we can achieve the lowest registration error and the highest success rate. Simulation results on 504 pairs of ETDRS images show the effectiveness and robustness of the proposed algorithm.
Affiliation(s)
- Thitiporn Chanwimaluang
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater 74078, USA.
|