1

2
Hernandez-Matas C, Zabulis X, Argyros AA. REMPE: Registration of Retinal Images Through Eye Modelling and Pose Estimation. IEEE J Biomed Health Inform 2020;24:3362-3373. DOI: 10.1109/jbhi.2020.2984483.
3
Deng M, Li S, Zhang Z, Kang I, Fang NX, Barbastathis G. On the interplay between physical and content priors in deep learning for computational imaging. Opt Express 2020;28:24152-24170. PMID: 32752400. DOI: 10.1364/oe.395204.
Abstract
Deep learning (DL) has been applied extensively in many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered. First, how well can a trained neural network generalize to objects very different from those seen in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often unavailable during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also find that a weaker regularization effect leads to better learning of the underlying propagation model, i.e. the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization is achieved when the DNN is trained on a higher-entropy database, e.g. ImageNet, than when the same DNN is trained on a lower-entropy database, e.g. MNIST, as the former allows the underlying physics model to be learned better than the latter.
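[Editor's note] The entropy criterion in this abstract can be made concrete. For 8-bit images, the Shannon entropy of the gray-level histogram is a common estimator (the paper's exact estimator may differ); it cleanly separates a low-entropy image from a high-entropy one, as in this minimal NumPy sketch:

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)                  # constant image
natural_like = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # broad histogram

print(image_entropy(flat))          # 0.0: one gray level carries no information
print(image_entropy(natural_like))  # close to 8 bits for a full 8-bit range
```

Under this proxy, MNIST-style images (mostly black background, few gray levels) score far below natural-image datasets such as ImageNet, which is the ordering the abstract relies on.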
4
Laha S, LaLonde R, Carmack AE, Foroosh H, Olson JC, Shaikh S, Bagci U. Analysis of Video Retinal Angiography With Deep Learning and Eulerian Magnification. Front Comput Sci 2020. DOI: 10.3389/fcomp.2020.00024.
5
Kang I, Zhang F, Barbastathis G. Phase extraction neural network (PhENN) with coherent modulation imaging (CMI) for phase retrieval at low photon counts. Opt Express 2020;28:21578-21600. PMID: 32752433. DOI: 10.1364/oe.397430.
Abstract
Imaging with low-dose light is important in various fields, especially when minimizing radiation-induced damage to samples is desirable. The raw image captured at the detector plane is then predominantly a Poisson random process, with Gaussian noise added due to the quantum nature of photo-electric conversion. Under such noisy conditions, highly ill-posed problems such as phase retrieval from raw intensity measurements become prone to strong artifacts in the reconstructions, a situation that deep neural networks (DNNs) have already been shown to improve. Here, we demonstrate that random phase modulation of the optical field, also known as coherent modulation imaging (CMI), in conjunction with the phase extraction neural network (PhENN) and a Gerchberg-Saxton-Fienup (GSF) approximant, further improves the noise resilience of the phase-from-intensity imaging problem. We offer design guidelines for implementing the CMI hardware with the proposed computational reconstruction scheme and quantify the reconstruction improvement as a function of photon count.
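[Editor's note] The low-count measurement model this abstract assumes (Poisson photon arrivals plus additive Gaussian read noise) is easy to simulate. A NumPy sketch, with the photon budget and read-noise level as purely illustrative parameters:

```python
import numpy as np

def measure(intensity: np.ndarray, photons_per_pixel: float,
            read_noise_std: float, rng: np.random.Generator) -> np.ndarray:
    """Simulate a photon-limited detector: Poisson shot noise + Gaussian read noise."""
    scale = photons_per_pixel / intensity.mean()   # set the mean photon budget
    counts = rng.poisson(intensity * scale)        # quantum (shot) noise
    return counts + rng.normal(0.0, read_noise_std, intensity.shape)

rng = np.random.default_rng(1)
clean = np.ones((128, 128)) * 0.5                  # uniform test intensity
noisy = measure(clean, photons_per_pixel=1.0, read_noise_std=0.5, rng=rng)
# At ~1 photon/pixel the per-pixel SNR is below 1: the raw frame is noise-dominated,
# which is the regime where phase retrieval needs the CMI + DNN machinery.
print(noisy.mean(), noisy.var())
```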
6
Morphological Band Registration of Multispectral Cameras for Water Quality Analysis with Unmanned Aerial Vehicle. Remote Sens 2020. DOI: 10.3390/rs12122024.
Abstract
Multispectral imagery contains abundant spectral information on terrestrial and oceanic targets, and retrieval of the geophysical variables of the targets is possible when the radiometric integrity of the data is secured. Multispectral cameras typically require registration of the individual band images because the lenses for the individual bands are displaced from each other, generating images with different viewing angles. Although this type of displacement can be corrected through a geometric transformation of the image coordinates, a mismatch or misregistration between the bands still remains, owing to acquisition times that differ between bands. Even a short time difference is critical for the image quality of fast-moving targets, such as water surfaces, and this type of deformation cannot be compensated for by a geometric transformation between the bands. This study proposes a novel morphological band registration technique based on quantile matching, in which the correspondence between pixels of different bands is sought not through their geometric relationship but through the radiometric distributions constructed in the vicinity of each pixel. In this study, a MicaSense RedEdge-M camera was operated on an unmanned aerial vehicle, and multispectral images of coastal areas were acquired at various altitudes to examine the performance of the proposed method at different spatial scales. To assess the impact of the correction on a geophysical variable, the performance of the proposed method was evaluated for chlorophyll-a concentration estimation. The results showed that the proposed method successfully removed the noisy spatial pattern caused by misregistration while maintaining the original spatial resolution, for both homogeneous scenes and an episodic scene with a red tide outbreak.
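[Editor's note] In its simplest global form, the quantile-matching idea above (map a pixel through the radiometric distributions of two bands rather than through a geometric transform) reduces to classic histogram matching; the paper applies it locally, so this NumPy sketch only illustrates the core mapping:

```python
import numpy as np

def quantile_match(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Map src values so their empirical distribution matches ref's.

    Each src pixel is replaced by the ref value at the same quantile.
    """
    src_flat = src.ravel()
    order = np.argsort(src_flat)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(src_flat.size)        # rank of each src pixel
    quantiles = ranks / (src_flat.size - 1)        # its empirical quantile in [0, 1]
    matched = np.quantile(ref, quantiles)          # same quantile in the ref band
    return matched.reshape(src.shape)

rng = np.random.default_rng(2)
ref = rng.normal(10.0, 2.0, (32, 32))              # "reference" band
src = rng.normal(50.0, 9.0, (32, 32))              # displaced / misexposed band
out = quantile_match(src, ref)
print(out.mean(), out.std())   # now on the reference band's radiometric scale
```

The mapping is monotone in the source values, so spatial structure within the band is preserved while the radiometry is transferred from the reference.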
7
Motta D, Casaca W, Paiva A. Vessel Optimal Transport for Automated Alignment of Retinal Fundus Images. IEEE Trans Image Process 2019;28:6154-6168. PMID: 31283507. DOI: 10.1109/tip.2019.2925287.
Abstract
Optimal transport has emerged as a promising tool for modern image processing applications such as medical imaging and scientific visualization. Indeed, optimal transport theory offers great flexibility in modeling image registration problems, as different optimization resources and suitable matching models can be used to align the images. In this paper, we introduce an automated framework for fundus image registration that unifies optimal transport theory, image processing tools, and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs that embed spatial and topological information from the eye's blood vessels. The resulting graphs are then used as input to our optimal transport model in order to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images to accomplish the registration task. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem and is robust when dealing with outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the framework through a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
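[Editor's note] The node-correspondence step can be illustrated with entropic-regularized optimal transport (Sinkhorn iterations) between two small point sets; the paper's actual OT model and vessel-graph construction are more elaborate, and all parameters below are illustrative:

```python
import numpy as np

def sinkhorn(cost: np.ndarray, reg: float = 0.05, iters: int = 500) -> np.ndarray:
    """Entropic-regularized OT between uniform marginals via Sinkhorn scaling."""
    n, m = cost.shape
    K = np.exp(-cost / reg)                # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m  # uniform node masses
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]     # transport plan

# Toy "graph nodes": same six points in both images, shuffled and slightly shifted.
nodes_a = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [1.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
perm = np.array([3, 0, 4, 1, 5, 2])
nodes_b = nodes_a[perm] + 0.01
cost = np.linalg.norm(nodes_a[:, None, :] - nodes_b[None, :, :], axis=-1)
plan = sinkhorn(cost)
recovered = plan.argmax(axis=1)    # hardened correspondence
print(recovered)                   # recovers the inverse of the applied permutation
```

Because the plan's marginals are constrained, every node spends its mass somewhere, which is what gives OT its robustness to individual bad matches relative to greedy nearest-neighbor assignment.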
8
Yao Z, Feng H, Song Y, Li S, Yang Y, Liu L, Liu C. A supervised network for fast image-guided radiotherapy (IGRT) registration. J Med Syst 2019;43:194. PMID: 31114956. DOI: 10.1007/s10916-019-1256-y.
Abstract
3D/3D image registration in IGRT, which aligns the planning computed tomography (CT) image set with the on-board cone-beam CT (CBCT) image set in a short time and with high accuracy, remains a challenge due to its high computational cost and the complex anatomical structures in medical images. To overcome these difficulties, a new method is proposed that consists of a coarse registration and a fine registration. For the coarse registration, a supervised regression convolutional neural network (CNN) is used to optimize the spatial variation by minimizing the loss when combining the CT images with the CBCT images. For the fine registration, intensity-based image registration is used to calculate the accurate spatial difference between the input image pairs. The coarse registration obtains a rough result with a wide capture range in less than 0.5 s; the fine registration then obtains accurate results in a reasonably short time. An RSD-111T chest phantom was used to test the new method. The set-up error was calculated in less than 10 s and reduced to the sub-millimeter level. The average residual errors in translation and rotation are within ±0.5 mm and ±0.2°, respectively.
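[Editor's note] The intensity-based fine stage can be illustrated with a brute-force translation search minimizing the sum of squared differences (SSD); the paper's setting is 3D CT/CBCT with rotations, so this 2D integer-shift toy only shows the principle:

```python
import numpy as np

def ssd_translation(fixed: np.ndarray, moving: np.ndarray, search: int = 5):
    """Find the integer (dy, dx) shift of `moving` that best matches `fixed` (min SSD)."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

rng = np.random.default_rng(4)
fixed = rng.random((48, 48))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)   # known misalignment
print(ssd_translation(fixed, moving))  # (3, -2): the shift that undoes it
```

The exhaustive search makes the capture-range trade-off visible: widening `search` grows cost quadratically (cubically in 3D), which is exactly why the paper front-loads a fast CNN-based coarse stage.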
Affiliation(s)
- Zhixin Yao
- Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- University of Science and Technology of China, Hefei, 230026, China
- Hansheng Feng
- Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- University of Science and Technology of China, Hefei, 230026, China
- Yuntao Song
- Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- University of Science and Technology of China, Hefei, 230026, China
- Shi Li
- Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- University of Science and Technology of China, Hefei, 230026, China
- Yang Yang
- Institute of Plasma Physics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Lingling Liu
- Cancer Hospital, Chinese Academy of Science, Hefei, 230031, China
- Anhui Province Key Laboratory of Medical Physics and Technology, Center of Medical Physics and Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Chunbo Liu
- University of Science and Technology of China, Hefei, 230026, China
9
Saha SK, Xiao D, Bhuiyan A, Wong TY, Kanagasingam Y. Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: a review. Biomed Signal Process Control 2019. DOI: 10.1016/j.bspc.2018.08.034.
10
Ishikawa S, Yoshinaga Y, Kantake D, Nakamura D, Yoshida N, Hisatomi T, Ikeda Y, Ishibashi T, Enaida H. Development of a novel noninvasive system for measurement and imaging of the arterial phase oxygen density ratio in the retinal microcirculation. Graefes Arch Clin Exp Ophthalmol 2018;257:557-565. PMID: 30569321. DOI: 10.1007/s00417-018-04211-z.
Abstract
PURPOSE: This study was conducted to develop a novel noninvasive system for measurement and imaging of the arterial oxygen density ratio (ODR) in the retinal microcirculation.
METHODS: We developed a system composed of two digital cameras with two different filters, attached to a fundus camera capable of simultaneously obtaining two images. Measurements were performed on healthy volunteer eyes (n = 61). A new algorithm for ODR measurement and pixel-level imaging of erythrocytes was constructed from these data, based on the morphological closing operation and the line convergence index filter. For system calibration, we compared and verified the ODR values in arterioles and venules specified in advance for 56 eyes, with reproducibility. In 10 additional volunteers, ODR measurement and imaging of the arterial phase in the retinal microcirculation were performed, tracking changes in peripheral arterial oxygen saturation during normal breathing and breath holding.
RESULTS: Estimation of the light incident on erythrocytes and pixel-level ODR calculation were achieved using the algorithm. The mean ODR values of arterioles and venules were 0.77 ± 0.060 and 1.02 ± 0.067, respectively. It was possible to separate these regions, calibrate at the pixel level, and estimate the arterial phase. In each of the 10 volunteers, changes in the arterial phase ODR corresponding to changes in peripheral arterial oxygen saturation were observed on ODR images before and after breath holding. The mean ODR in the 10 volunteers was increased by breath holding (p < 0.05).
CONCLUSIONS: We developed a basic system for arterial phase ODR measurement and imaging of the retinal microcirculation. With further validation and development, it may provide a useful tool for evaluating oxygen metabolism in the retinal microcirculation.
Affiliation(s)
- Shinichiro Ishikawa
- Department of Ophthalmology, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
- Yukiyasu Yoshinaga
- Graduate School of Design, Kyushu University, 4-9-1 Shiobaru, Minami-ku, Fukuoka, 815-8540, Japan
- Daichi Kantake
- Department of Ophthalmology, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
- Daisuke Nakamura
- Graduate School of Information Science and Electrical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan
- Noriko Yoshida
- Section of Ophthalmology, Department of Medicine, Fukuoka Dental College, 2-15-1 Tamura, Sawara-ku, Fukuoka, 814-0193, Japan
- Toshio Hisatomi
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yasuhiro Ikeda
- Department of Ophthalmology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Hiroshi Enaida
- Department of Ophthalmology, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
11
A-RANSAC: Adaptive random sample consensus method in multimodal retinal image registration. Biomed Signal Process Control 2018. DOI: 10.1016/j.bspc.2018.06.002.
12
Adal KM, van Etten PG, Martinez JP, Rouwen KW, Vermeer KA, van Vliet LJ. An Automated System for the Detection and Classification of Retinal Changes Due to Red Lesions in Longitudinal Fundus Images. IEEE Trans Biomed Eng 2018;65:1382-1390. DOI: 10.1109/tbme.2017.2752701.
13
Hernandez-Matas C, Zabulis X, Argyros AA. Retinal image registration based on keypoint correspondences, spherical eye modeling and camera pose estimation. Annu Int Conf IEEE Eng Med Biol Soc 2015;2015:5650-5654. PMID: 26737574. DOI: 10.1109/embc.2015.7319674.
Abstract
In this work, an image registration method for two retinal images is proposed. The proposed method utilizes keypoint correspondences and assumes a spherical model of the eye. Image registration is treated as a pose estimation problem, which requires estimation of the rigid transformation that relates the two images. Using this estimate, one image can be warped so that it is registered to the coordinate frame of the other. Experimental evaluation shows improved accuracy over state-of-the-art approaches as well as robustness to noise and spurious keypoint correspondences. Experiments also indicate the method's applicability to diagnostic image enhancement and comparative analysis of images from different examinations.
14
Noyel G, Thomas R, Bhakta G, Crowder A, Owens D, Boyle P. Superimposition of eye fundus images for longitudinal analysis from large public health databases. Biomed Phys Eng Express 2017. DOI: 10.1088/2057-1976/aa7d16.
15
Hernandez-Matas C, Zabulis X, Argyros AA. An experimental evaluation of the accuracy of keypoints-based retinal image registration. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:377-381. PMID: 29059889. DOI: 10.1109/embc.2017.8036841.
Abstract
This work investigates the accuracy of a state-of-the-art, keypoint-based retinal image registration approach with respect to the type of keypoint features used to guide the registration process. The employed registration approach is a local method that incorporates the notion of a 3D retinal surface imaged from different viewpoints and has been shown, experimentally, to be more accurate than competing approaches. The correspondences obtained from SIFT, SURF, Harris-PIIFD and vessel bifurcations are studied, either individually or in combination. The combination of SIFT features with vessel bifurcations was found to perform better than other combinations or any individual feature type alone. The registration approach is also comparatively evaluated against representative state-of-the-art methods in retinal image registration, using a benchmark dataset that covers a broad range of cases regarding the overlap of the acquired images and the anatomical characteristics of the imaged retinas.
16
Hernandez-Matas C, Zabulis X, Argyros AA. Retinal image registration through simultaneous camera pose and eye shape estimation. Annu Int Conf IEEE Eng Med Biol Soc 2016;2016:3247-3251. PMID: 28269000. DOI: 10.1109/embc.2016.7591421.
Abstract
In this paper, a retinal image registration method is proposed. The approach utilizes keypoint correspondences and assumes that the human eye has a spherical or ellipsoidal shape. The image registration problem amounts to simultaneously solving a camera 3D pose estimation problem and an eye 3D shape estimation problem. The camera pose estimation problem is solved by estimating the relative pose between the views from which the images were acquired. The eye shape estimation problem parameterizes the shape and orientation of an ellipsoidal model of the eye. Experimental evaluation shows a 17.91% reduction in registration error and a 47.52% reduction in the error's standard deviation over state-of-the-art methods.
17
Soomro TA, Gao J, Khan T, Hani AFM, Khan MAU, Paul M. Computerised approaches for the detection of diabetic retinopathy using retinal fundus images: a survey. Pattern Anal Appl 2017. DOI: 10.1007/s10044-017-0630-y.
18
Guo F, Zhao X, Zou B, Liang Y. Automatic Retinal Image Registration Using Blood Vessel Segmentation and SIFT Feature. Int J Pattern Recogn 2017. DOI: 10.1142/s0218001417570063.
Abstract
Automatic retinal image registration remains a great challenge for computer-aided diagnosis and screening systems. In this paper, a new retinal image registration method is proposed based on the combination of blood vessel segmentation and scale-invariant feature transform (SIFT) features. The algorithm includes two stages: retinal image segmentation and registration. In the segmentation stage, the blood vessels are segmented by using a guided filter to enhance the vessel structure and the bottom-hat transformation to extract the vessels. In the registration stage, the SIFT algorithm is adopted to detect features in the vessel segmentation image, complemented by a random sample consensus (RANSAC) algorithm to eliminate incorrect matches. We evaluate our method on both segmentation and registration. For segmentation, we test our method on the DRIVE database, which provides images manually labeled by two specialists. The experimental results show that our method achieves an accuracy (Acc) of 0.9562, which is competitive with other existing segmentation methods. For registration, we test our method on the STARE database, and the experimental results demonstrate the superior performance of the proposed method, making the algorithm a suitable tool for automated retinal image analysis.
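[Editor's note] The RANSAC stage that prunes incorrect SIFT matches can be sketched in plain NumPy. For simplicity the model below is a pure translation estimated from already-given point correspondences (the paper estimates a richer transform; OpenCV's `cv2.findHomography(..., cv2.RANSAC)` is the usual off-the-shelf route):

```python
import numpy as np

def ransac_translation(src: np.ndarray, dst: np.ndarray,
                       iters: int = 200, tol: float = 1.0, rng=None):
    """RANSAC for a pure-translation model from noisy correspondences.

    Each iteration hypothesizes the shift from one random pair, counts
    inliers, and keeps the hypothesis with the largest consensus set.
    """
    rng = rng or np.random.default_rng()
    best_shift, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                         # minimal sample: 1 pair
        residuals = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = int(np.sum(residuals < tol))
        if inliers > best_inliers:
            best_inliers, best_shift = inliers, shift
    mask = np.linalg.norm(dst - (src + best_shift), axis=1) < tol
    return dst[mask].mean(axis=0) - src[mask].mean(axis=0), mask  # refit on inliers

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (40, 2))                      # keypoints in image 1
dst = src + np.array([7.0, -4.0])                       # true shift
dst[:10] = rng.uniform(0, 100, (10, 2))                 # 25% wrong matches
shift, mask = ransac_translation(src, dst, rng=rng)
print(shift, mask.sum())   # ~ [7, -4] with ~30 surviving inliers
```

The key property, which carries over to the homography case, is that a single uncontaminated minimal sample suffices: the wrong matches never outvote the consensus set.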
Affiliation(s)
- Fan Guo
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Xin Zhao
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Beiji Zou
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
- Yixiong Liang
- School of Information Science and Engineering, Central South University, Changsha, P. R. China
- Joint Laboratory of Mobile Health, Ministry of Education and China Mobile, Changsha, P. R. China
- Center for Ophthalmic Imaging Research, Central South University, Changsha, P. R. China
19
Accurate Joint-Alignment of Indocyanine Green and Fluorescein Angiograph Sequences for Treatment of Subretinal Lesions. IEEE J Biomed Health Inform 2017;21:785-793. PMID: 28113480. DOI: 10.1109/jbhi.2016.2538265.
Abstract
In ophthalmology, aligning images in indocyanine green and fluorescein angiograph sequences is important for the treatment of subretinal lesions. This paper introduces an algorithm tailored to jointly align, in a common reference space, all the images in an angiogram sequence containing both modalities. To overcome the low image contrast and low signal-to-noise ratio of late-phase images, the structural similarity between two images is enhanced using the Gabor wavelet transform. Image pairs are registered pairwise, and the transformations are simultaneously and globally adjusted for a mutually consistent joint alignment. The joint registration process is incremental, and its success depends on the correctness of the matches from the pairwise registration. To safeguard the joint process, our system performs a consistency test to automatically exclude incorrect pairwise results, ensuring correct matches as more images are jointly aligned. Our dataset consists of 60 sequences of polypoidal choroidal vasculopathy collected by the EVEREST Study Group; on average, each sequence contains 20 images. Our algorithm successfully registered 95.04% of all image pairs pairwise and jointly registered 98.7% of all images, with an average alignment error of 1.58 pixels.
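[Editor's note] The Gabor enhancement step can be illustrated by constructing a single Gabor kernel; the paper uses a full wavelet transform over scales and orientations, so the size, wavelength and bandwidth below are purely illustrative:

```python
import numpy as np

def gabor_kernel(size: int, wavelength: float, theta: float,
                 sigma: float) -> np.ndarray:
    """Even-symmetric (cosine) Gabor kernel: Gaussian envelope x oriented carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)         # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()                                # zero mean: DC-insensitive

k = gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0)
# Convolving a low-contrast angiogram with a bank of such kernels (several theta
# values) emphasizes oriented, vessel-like structure before matching.
print(k.shape, abs(k.sum()))
```

Zero-mean kernels respond to oriented structure rather than absolute brightness, which is what makes the enhanced images comparable across early- and late-phase frames.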
20
Hernandez-Matas C, Zabulis X, Triantafyllou A, Anyfanti P, Argyros AA. Retinal image registration under the assumption of a spherical eye. Comput Med Imaging Graph 2016;55:95-105. PMID: 27370900. DOI: 10.1016/j.compmedimag.2016.06.006.
Abstract
We propose a method for registering a pair of retinal images. The approach employs point correspondences and assumes that the human eye has a spherical shape. The image registration problem is formulated as a 3D pose estimation problem, solved by estimating the rigid transformation that relates the views from which the two images were acquired. Given this estimate, each image can be warped onto the other so that pixels with the same coordinates image the same retinal point. Extensive experimental evaluation shows improved accuracy over state-of-the-art methods, as well as robustness to noise and spurious keypoint matches. Experiments also indicate the method's applicability to diagnostic support and to the comparative analysis of images from different examinations that may exhibit changes.
Affiliation(s)
- Carlos Hernandez-Matas
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Computer Science Department, University of Crete, Heraklion, Greece
- Xenophon Zabulis
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Areti Triantafyllou
- Department of Internal Medicine, Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiota Anyfanti
- Department of Internal Medicine, Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Antonis A Argyros
- Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Computer Science Department, University of Crete, Heraklion, Greece
21
Patankar SS, Kulkarni JV. Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration. Comput Methods Programs Biomed 2015;119:121-141. PMID: 25837489. DOI: 10.1016/j.cmpb.2015.02.009.
Abstract
Retinal image registration is a necessary step in the diagnosis and monitoring of diabetic retinopathy (DR), one of the leading causes of blindness. Long-term diabetes affects the retinal blood vessels and capillaries, eventually causing blindness; this progressive damage, and the subsequent blindness, can be prevented by periodic retinal screening. The extent of damage caused by DR can be assessed by comparing retinal images captured during periodic screenings. Image acquisition at different screenings introduces translation, rotation and scale (TRS) differences between the retinal images, so registration is an essential step in an automated system for the screening, diagnosis, treatment and evaluation of DR. This paper presents an algorithm for registering retinal images using orthogonal moment invariants as features for determining the correspondence between dominant points (vessel bifurcations) in the reference and test images. Because orthogonal moments are invariant to TRS, the moment-invariant features around a vessel bifurcation are unaltered by TRS and can be used to establish correspondence between the reference and test images. The vessel bifurcation points are located in segmented, thinned (one-pixel vessel width) retinal images and labeled in the corresponding grayscale images. The correspondence between vessel bifurcations is established based on the moment-invariant features, and the TRS of the test image with respect to the reference image is then estimated using a similarity transformation. The test image is aligned with the reference image using the estimated registration parameters, and the accuracy of registration is evaluated in terms of the mean error and standard deviation of the labeled vessel bifurcation points in the aligned images. The experimentation is carried out on the DRIVE, STARE and VARIA databases and on a database provided by a local government hospital in Pune, India. The experimental results demonstrate the effectiveness of the proposed algorithm for retinal image registration.
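[Editor's note] The TRS-invariance idea behind moment features can be shown with the classic (non-orthogonal) Hu invariants; the paper uses orthogonal moments, which share the same invariance properties but better numerical conditioning. A minimal sketch of the first Hu invariant on a patch around a "bifurcation":

```python
import numpy as np

def hu1(patch: np.ndarray) -> float:
    """First Hu moment invariant (eta20 + eta02): invariant to translation,
    rotation and (in the continuous limit) scale of the patch content."""
    total = patch.sum()
    y, x = np.mgrid[:patch.shape[0], :patch.shape[1]]
    cx, cy = (x * patch).sum() / total, (y * patch).sum() / total  # centroid
    mu20 = ((x - cx) ** 2 * patch).sum()       # central moments
    mu02 = ((y - cy) ** 2 * patch).sum()
    eta20 = mu20 / total ** 2                  # normalized central moments
    eta02 = mu02 / total ** 2
    return eta20 + eta02

patch = np.zeros((32, 32))
patch[10:16, 8:20] = 1.0                       # a blob standing in for a bifurcation
shifted = np.roll(np.roll(patch, 5, axis=0), -3, axis=1)
print(hu1(patch), hu1(shifted))   # equal: translation does not change the feature
```

Central moments remove translation, the eta normalization removes scale, and hu1 sums the two second-order etas symmetrically, which removes rotation; this is the same mechanism the paper exploits so that a bifurcation's feature vector survives TRS between screenings.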
22
Retinal image registration using topological vascular tree segmentation and bifurcation structures. Biomed Signal Process Control 2015. DOI: 10.1016/j.bspc.2014.10.009.
23
Zheng Y, Daniel E, Hunter AA, Xiao R, Gao J, Li H, Maguire MG, Brainard DH, Gee JC. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix. Med Image Anal 2014;18:903-913. PMID: 24238743. PMCID: PMC4141885. DOI: 10.1016/j.media.2013.09.009.
Abstract
Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark-matching-based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors that better characterize the local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results on both fundus color images and angiogram images show the superior performance of our algorithms over several state-of-the-art techniques.
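The softassign strategy referenced above can be sketched with Sinkhorn-style alternating row/column normalization, which drives a cost-derived correspondence matrix toward a sparse, near-permutation form. This toy NumPy version illustrates the general idea only, not the paper's linear-programming formulation:

```python
import numpy as np

def softassign(cost, beta=50.0, iters=200):
    """Soft correspondence matrix from a cost matrix: exponentiate the negative
    cost, then alternately normalize rows and columns (Sinkhorn) so the matrix
    approaches a doubly stochastic, near-permutation form for large beta."""
    M = np.exp(-beta * (cost - cost.min()))
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)
        M /= M.sum(axis=0, keepdims=True)
    return M

# Toy landmark sets: the second is a shuffled copy of the first.
rng = np.random.default_rng(1)
a = rng.uniform(0, 1, (5, 2))
perm = np.array([2, 0, 4, 1, 3])
b = a[perm]
cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
M = softassign(cost)
matches = M.argmax(axis=1)   # row i of `a` matches column matches[i] of `b`
```

On this shuffled copy, the argmax of each row of the normalized matrix recovers the permutation.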
Affiliation(s)
- Yuanjie Zheng, Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Ebenezer Daniel, Department of Ophthalmology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Allan A Hunter, Department of Ophthalmology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Rui Xiao, Department of Biostatistics and Epidemiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Jianbin Gao, University of Electronic Science and Technology, Chengdu, Sichuan, China
- Hongsheng Li, University of Electronic Science and Technology, Chengdu, Sichuan, China
- Maureen G Maguire, Department of Ophthalmology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- David H Brainard, Department of Psychology, School of Arts and Sciences at the University of Pennsylvania, Philadelphia, PA, USA
- James C Gee, Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
|
24
|
Adal KM, Ensing RM, Couvert R, van Etten P, Martinez JP, Vermeer KA, van Vliet LJ. A Hierarchical Coarse-to-Fine Approach for Fundus Image Registration. BIOMEDICAL IMAGE REGISTRATION 2014. [DOI: 10.1007/978-3-319-08554-8_10] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
25
|
Ayatollahi F, Shokouhi SB, Ayatollahi A. A new hybrid particle swarm optimization for multimodal brain image registration. ACTA ACUST UNITED AC 2012. [DOI: 10.4236/jbise.2012.54020] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
26
|
Bhattacharya M, Das A. Multimodality Medical Image Registration and Fusion Techniques Using Mutual Information and Genetic Algorithm-Based Approaches. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2011; 696:441-9. [DOI: 10.1007/978-1-4419-7046-6_44] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2023]
|
27
|
Zheng J, Tian J, Deng K, Dai X, Zhang X, Xu M. Salient feature region: a new method for retinal image registration. ACTA ACUST UNITED AC 2010; 15:221-32. [PMID: 21138808 DOI: 10.1109/titb.2010.2091145] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Retinal image registration is crucial for the diagnosis and treatment of various eye diseases. A great number of methods have been developed to solve this problem; however, fast and accurate registration of low-quality retinal images remains challenging owing to low content contrast, large intensity variance, and the deterioration of unhealthy retinas caused by various pathologies. This paper provides a new retinal image registration method based on salient feature regions (SFR). We first propose a well-defined region saliency measure, consisting of both local adaptive variance and gradient-field entropy, to extract the SFRs in each image. An innovative local feature descriptor that combines the gradient-field distribution with corresponding geometric information is then computed to match the SFRs accurately. Normalized cross-correlation-based local rigid registration is next performed on the matched SFRs to refine the accuracy of local alignment. Finally, the two images are registered by adopting a high-order global transformation model with the locally well-aligned region centers as control points. Experimental results show that our method is quite effective for retinal image registration.
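A toy version of a region-saliency score combining the two ingredients named above, local variance and gradient-field entropy, might look as follows; the paper's adaptive variance and exact weighting are not reproduced:

```python
import numpy as np

def region_saliency(patch, nbins=16):
    """Toy region-saliency score in the spirit of SFR: local intensity
    variance plus the entropy of the magnitude-weighted gradient-orientation
    histogram (no adaptive weighting, unlike the paper)."""
    p = np.asarray(patch, float)
    gy, gx = np.gradient(p)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=nbins, range=(-np.pi, np.pi), weights=mag)
    if hist.sum() > 0:
        q = hist / hist.sum()
        entropy = -np.sum(q[q > 0] * np.log(q[q > 0]))
    else:
        entropy = 0.0   # no gradient structure at all
    return p.var() + entropy

# A textured patch should score higher than a flat one.
rng = np.random.default_rng(2)
flat = np.zeros((21, 21))
textured = rng.normal(0.0, 1.0, (21, 21))
```

A flat patch scores zero (no variance, no orientation structure), while a textured patch scores high on both terms, which is what lets such a measure pick informative regions.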
Affiliation(s)
- Jian Zheng, Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China
|
28
|
Lin WC, Wu CC, Zhang G, Wu TH, Lin YH, Huang TC, Liu RS, Lin KP. An approach to automatic blood vessel image registration of microcirculation for blood flow analysis on nude mice. Comput Methods Biomech Biomed Engin 2010; 14:319-30. [PMID: 21082459 DOI: 10.1080/10255842.2010.497489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Image registration is often a required and time-consuming step in blood flow analysis of large microscopic video sequences in vivo. In order to obtain stable images for blood flow analysis, frame-to-frame image matching as a preprocessing step is a solution to the problem of movement during image acquisition. In this paper, microscopic system analysis without fluorescent labelling is performed to provide precise and continuous quantitative data on the blood flow rate in individual microvessels of nude mice. The performance properties of several matching metrics are evaluated through simulated image registrations. An automatic image registration programme based on Powell's optimisation search method with low calculation redundancy was implemented. The variance-of-ratio matching method is computationally efficient and improves registration robustness and accuracy in the practical application of microcirculation registration. The presented registration method shows acceptable results when used to analyse red blood cell velocities, confirming the scientific potential of the system for blood flow analysis.
Affiliation(s)
- Wen-Chen Lin, Department of Electrical Engineering, Chung Yuan Christian University, Chungli, Taiwan
|
29
|
Delibasis KK, Kechriniotis AI, Tsonos C, Assimakis N. Automatic model-based tracing algorithm for vessel segmentation and diameter estimation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2010; 100:108-22. [PMID: 20363522 DOI: 10.1016/j.cmpb.2010.03.004] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2009] [Accepted: 03/01/2010] [Indexed: 05/16/2023]
Abstract
An automatic algorithm capable of segmenting the whole vessel tree and calculating vessel diameter and orientation in a digital ophthalmologic image is presented in this work. The algorithm is based on a parametric model of a vessel that can assume an arbitrarily complex shape, together with a simple measure of match that quantifies how well the vessel model matches a given angiographic image. An automatic vessel tracing algorithm is described that exploits the geometric model and actively seeks vessel bifurcations without user intervention. The proposed algorithm uses the geometric vessel model to determine the vessel diameter at each detected central-axis pixel. To this end, the algorithm is fine-tuned on a subset of the publicly available DRIVE database by maximizing vessel segmentation accuracy, and is then applied to the remaining ophthalmological images of the DRIVE database. The segmentation results compare favorably in terms of accuracy with six other well-established vessel detection techniques, outperforming three of them on the majority of the available ophthalmologic images. The proposed algorithm achieves a subpixel root-mean-square central-axis positioning error that outperforms the non-expert-based vessel segmentation, whereas the accuracy of vessel diameter estimation is comparable to that of the non-expert-based vessel segmentation.
Affiliation(s)
- Konstantinos K Delibasis, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
|
30
|
Retinal Fundus Image Registration via Vascular Structure Graph Matching. Int J Biomed Imaging 2010; 2010. [PMID: 20871853 PMCID: PMC2943092 DOI: 10.1155/2010/906067] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 07/07/2010] [Indexed: 11/18/2022] Open
Abstract
Motivated by the observation that a retinal fundus image may contain unique geometric structures within its vascular tree that can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. To eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients.
|
31
|
Chen J, Tian J, Lee N, Zheng J, Smith RT, Laine AF. A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Trans Biomed Eng 2010; 57:1707-18. [PMID: 20176538 PMCID: PMC3030813 DOI: 10.1109/tbme.2010.2042169] [Citation(s) in RCA: 191] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Detection of vascular bifurcations is a challenging task in multimodal retinal image registration. Existing algorithms based on bifurcations usually fail to correctly align poor-quality retinal image pairs. To solve this problem, we propose a novel, highly distinctive local feature descriptor named the partial intensity invariant feature descriptor (PIIFD) and describe a robust automatic retinal image registration framework named Harris-PIIFD. PIIFD is invariant to image rotation and partially invariant to image intensity, affine transformation, and viewpoint/perspective change. Our Harris-PIIFD framework consists of four steps. First, corner points are used as control-point candidates instead of bifurcations, since corner points are sufficient and uniformly distributed across the image domain. Second, PIIFDs are extracted for all corner points, and a bilateral matching technique is applied to identify corresponding PIIFD matches between image pairs. Third, incorrect matches are removed and inaccurate matches are refined. Finally, an adaptive transformation is used to register the image pairs. PIIFD is so distinctive that it can be correctly identified even in nonvascular areas. When tested on 168 pairs of multimodal retinal images, Harris-PIIFD far outperforms existing algorithms in terms of robustness, accuracy, and computational efficiency.
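The bilateral matching step can be illustrated as mutual-nearest-neighbour filtering of descriptor matches. A minimal NumPy sketch of that general technique (not the authors' code):

```python
import numpy as np

def bilateral_match(desc1, desc2):
    """Mutual-nearest-neighbour matching: keep (i, j) only if descriptor j is
    i's nearest neighbour in desc2 AND i is j's nearest neighbour in desc1."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    fwd = d.argmin(axis=1)   # best match in desc2 for each row of desc1
    bwd = d.argmin(axis=0)   # best match in desc1 for each row of desc2
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Same keypoints seen twice with slight descriptor noise.
rng = np.random.default_rng(3)
base = rng.normal(0, 1, (6, 8))
noisy = base + rng.normal(0, 0.01, (6, 8))
matches = bilateral_match(base, noisy)
```

Requiring agreement in both directions discards one-sided matches, which is what makes this simple check a useful outlier filter before transformation estimation.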
Affiliation(s)
- Jian Chen, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
|
32
|
Lupascu CA, Tegolo D, Trucco E. FABC: retinal vessel segmentation using AdaBoost. ACTA ACUST UNITED AC 2010; 14:1267-74. [PMID: 20529750 DOI: 10.1109/titb.2010.2052282] [Citation(s) in RCA: 138] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a method for automated vessel segmentation in retinal images. For each pixel in the field of view of the image, a 41-D feature vector is constructed, encoding information on the local intensity structure, spatial properties, and geometry at multiple scales. An AdaBoost classifier is trained on 789,914 gold-standard examples of vessel and nonvessel pixels, then used for classifying previously unseen images. The algorithm was tested on the public digital retinal images for vessel extraction (DRIVE) set, frequently used in the literature and consisting of 40 manually labeled images with gold standard. Results were compared experimentally with those of eight algorithms as well as the additional manual segmentation provided with DRIVE. Training was confined to the dedicated training set of the DRIVE database, and the feature-based AdaBoost classifier (FABC) was tested on the 20 images of the test set. FABC achieved an area under the receiver operating characteristic (ROC) curve of 0.9561, in line with state-of-the-art approaches, and outperformed the nearest competitor in accuracy (0.9597 versus 0.9473).
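For readers unfamiliar with the classifier, AdaBoost with axis-aligned decision stumps can be written in a few dozen lines of NumPy. This toy binary example stands in for the 41-D FABC pipeline; the features and data here are entirely synthetic:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Minimal AdaBoost with decision stumps, labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # example weights
    model = []
    for _ in range(rounds):
        best = None
        for f in range(d):           # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(X[:, f] < thr, -sign, sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(X[:, f] < thr, -sign, sign)
        w *= np.exp(-alpha * y * pred)          # reweight toward mistakes
        w /= w.sum()
        model.append((alpha, f, thr, sign))
    return model

def predict(model, X):
    score = sum(a * np.where(X[:, f] < thr, -s, s) for a, f, thr, s in model)
    return np.sign(score)

# Toy "vessel vs background" features: classes separated on feature 0.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.r_[-np.ones(50), np.ones(50)]
model = train_adaboost(X, y, rounds=10)
acc = (predict(model, X) == y).mean()
```

Each round fits the best single-feature threshold on the current weights and upweights the examples it got wrong, so the weighted vote of weak stumps becomes a strong classifier.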
Affiliation(s)
- Carmen Alina Lupascu, Dipartimento di Matematica e Informatica, Università degli Studi di Palermo, 90123 Palermo, Italy
|
33
|
Affine-based registration of CT and MR modality images of human brain using multiresolution approaches: comparative study on genetic algorithm and particle swarm optimization. Neural Comput Appl 2010. [DOI: 10.1007/s00521-010-0374-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
34
|
Zheng J, Tian J, Dai Y, Deng K, Chen J. Retinal image registration based on salient feature regions. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2010; 2009:102-5. [PMID: 19964922 DOI: 10.1109/iembs.2009.5334778] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Retinal image registration is essential for ophthalmologists to diagnose various diseases. A great number of methods have been developed to solve this problem; however, fast and accurate retinal image registration remains challenging owing to the great content complexity and low image quality of the unhealthy retina. This paper provides a new retinal image registration method based on salient feature regions (SFR). We first extract the SFR in each image based on a well-defined region saliency metric. Next, SFR are matched using an innovative local feature descriptor. We then register the matched SFR using local rigid transformations. Finally, we register the two images by adopting a global second-order polynomial transformation with the locally rigidly registered region centers as control points. Experimental results show that our method is fast and accurate, and especially effective for registering low-quality retinal images.
Affiliation(s)
- Jian Zheng, Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences
|
35
|
Tsai CL, Li CY, Yang G, Lin KS. The edge-driven dual-bootstrap iterative closest point algorithm for registration of multimodal fluorescein angiogram sequence. IEEE TRANSACTIONS ON MEDICAL IMAGING 2010; 29:636-649. [PMID: 19709965 DOI: 10.1109/tmi.2009.2030324] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Motivated by the need for multimodal image registration in ophthalmology, this paper introduces an algorithm tailored to jointly align, in a common reference space, all the images in a complete fluorescein angiogram (FA) sequence, which contains both red-free (RF) and FA images. Our work is inspired by the Generalized Dual-Bootstrap Iterative Closest Point (GDB-ICP) algorithm, which rank-orders Lowe keypoint matches and refines the transformation, going from a local, low-order model to a global, higher-order model, computed from each keypoint match in succession. Although GDB-ICP has been shown to be robust in registering images taken under different lighting conditions, its performance is not satisfactory for image pairs with substantial nonlinear intensity differences. Our algorithm, named Edge-Driven DB-ICP, targets the least reliable component of GDB-ICP: it modifies the generation of keypoint matches for initialization by extracting the Lowe keypoints from the gradient-magnitude image and enriching the keypoint descriptor with global shape context using the edge points. Our dataset consists of 60 randomly selected pathological sequences, each on average having up to two RF and 13 FA images. Edge-Driven DB-ICP successfully registered 92.4% of all pairs and 81.1% of multimodal pairs, whereas GDB-ICP registered 80.1% and 40.1%, respectively. For the joint registration of all images in a sequence, Edge-Driven DB-ICP succeeded in 59 sequences, a 23% improvement over GDB-ICP.
Affiliation(s)
- Chia-Ling Tsai, Department of Computer Science, Iona College, New Rochelle, NY 10801, USA
|
36
|
Pandey B, Mishra R. Knowledge and intelligent computing system in medicine. Comput Biol Med 2009; 39:215-30. [PMID: 19201398 DOI: 10.1016/j.compbiomed.2008.12.008] [Citation(s) in RCA: 80] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2008] [Revised: 11/24/2008] [Accepted: 12/17/2008] [Indexed: 01/04/2023]
|
37
|
Troglio G, Benediktsson JA, Serpico SB, Moser G, Karlsson RA, Halldorsson GH, Stefansson E. Automatic registration of retina images based on genetic techniques. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2009; 2008:5419-24. [PMID: 19163943 DOI: 10.1109/iembs.2008.4650440] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The aim of this paper is to develop an automatic method for the registration of multitemporal digital images of the fundus of the human retina. The images are acquired from the same patient at different times by a color fundus camera. The proposed approach applies global optimization techniques to previously extracted maps of curvilinear structures in the images to be registered (such structures being represented by the vessels in the human retina): in particular, a genetic algorithm is used to estimate the optimal transformation between the input and the base image. The algorithm is tested on two different types of data, grayscale and color images, and for both types, images with small changes and with large changes are used. Comparison of images registered with the implemented method against manual registration shows that the proposed algorithm provides an accurate registration. Convergence to a solution fails only when dealing with images taken from very different viewpoints.
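The genetic-search idea can be sketched as a toy GA over integer shifts of a binary vessel map, using truncation selection, per-gene crossover, mutation, elitism, and random immigrants; the paper's actual transformation model and fitness are richer than this:

```python
import numpy as np

def ga_register(fixed, moving, span=6, pop=60, gens=100, seed=5):
    """Toy genetic algorithm searching integer (dy, dx) shifts that align a
    binary vessel map `moving` onto `fixed` by maximizing pixel overlap."""
    rng = np.random.default_rng(seed)

    def fitness(ind):
        dy, dx = ind
        shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
        return int((shifted & fixed).sum())

    P = rng.integers(-span, span + 1, (pop, 2))
    for _ in range(gens):
        order = np.argsort([fitness(ind) for ind in P])[::-1]
        elite = P[order[: pop // 3]]                   # truncation selection + elitism
        n_child = pop - 2 * (pop // 3)
        pa = elite[rng.integers(0, len(elite), (n_child, 2))]
        mask = rng.random((n_child, 2)) < 0.5          # per-gene uniform crossover
        children = np.where(mask, pa[:, 0], pa[:, 1])
        mut = rng.random(children.shape) < 0.3         # mutation
        children[mut] += rng.integers(-2, 3, mut.sum())
        children = np.clip(children, -span, span)
        immigrants = rng.integers(-span, span + 1, (pop // 3, 2))
        P = np.vstack([elite, children, immigrants])
    fits = [fitness(ind) for ind in P]
    return tuple(P[int(np.argmax(fits))])

# Synthetic vessel map shifted by a known amount; the GA should find the
# shift that undoes it.
rng = np.random.default_rng(6)
fixed = (rng.random((40, 40)) < 0.1).astype(int)
moving = np.roll(np.roll(fixed, -3, axis=0), 5, axis=1)
dy, dx = ga_register(fixed, moving)
```

The random immigrants keep the population exploring even when the overlap landscape is flat away from the true shift, which is the typical failure mode for purely elitist search on sparse binary maps.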
Affiliation(s)
- G Troglio, University of Genoa, Dept. of Biophysical and Electronic Eng. (DIBE), Via Opera Pia 11a, I-16145, Italy
|
38
|
Nourrit V, Bueno JM, Vohnsen B, Artal P. Nonlinear registration for scanned retinal images: application to ocular polarimetry. APPLIED OPTICS 2008; 47:5341-5347. [PMID: 18846174 DOI: 10.1364/ao.47.005341] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Retinal images covering approximately 1 degree of visual field were recorded with a custom-built scanning laser ophthalmoscope. The benefit of using a nonlinear registration technique, rather than a standard approach based on correlation, to improve the summation process when averaging frames was assessed. Results suggest that nonlinear methods can surpass linear transformations, allowing improved contrast and more uniform image quality. The importance of this is also demonstrated with specific polarization measurements to determine the degree of polarization across an imaged retinal area. In such a context, where this polarization parameter is extracted from a combination of registered images, the benefit of the nonlinear method is further increased.
Affiliation(s)
- Vincent Nourrit, The University of Manchester, Faculty of Life Sciences, Sackville Street, Manchester M60 1QD, UK
|
39
|
Tsai CL, Madore B, Leotta M, Sofka M, Yang G, Majerovics A, Tanenbaum H, Stewart C, Roysam B. Automated Retinal Image Analysis Over the Internet. ACTA ACUST UNITED AC 2008; 12:480-7. [DOI: 10.1109/titb.2007.908790] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
40
|
Automatic lung nodule matching on sequential CT images. Comput Biol Med 2008; 38:623-34. [DOI: 10.1016/j.compbiomed.2008.02.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2005] [Revised: 02/18/2008] [Accepted: 02/29/2008] [Indexed: 11/23/2022]
|
41
|
Choe TE, Medioni G, Cohen I, Walsh AC, Sadda SR. 2-D registration and 3-D shape inference of the retinal fundus from fluorescein images. Med Image Anal 2008; 12:174-90. [PMID: 18060827 PMCID: PMC2556232 DOI: 10.1016/j.media.2007.10.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2007] [Revised: 09/13/2007] [Accepted: 10/01/2007] [Indexed: 11/15/2022]
Abstract
This study presents methods for 2-D registration of retinal image sequences and 3-D shape inference from fluorescein images. The Y-feature is a robust geometric entity that is largely invariant across modalities as well as across the temporal grey-level variations induced by the propagation of the dye in the vessels. We first present a Y-feature extraction method that finds a set of Y-feature candidates using local image gradient information. A gradient-based approach is then used to align an articulated model of the Y-feature to the candidates more accurately while optimizing a cost function. Using mutual information, the fitted Y-features are subsequently matched across images, including color and fluorescein angiographic frames, for registration. To reconstruct the retinal fundus in 3-D, the extracted Y-features are used to estimate the epipolar geometry with a plane-and-parallax approach. The proposed solution provides a robust estimation of the fundamental matrix suitable for plane-like surfaces such as the retinal fundus. The mutual information criterion is used to accurately estimate the dense disparity map. Our experimental results validate the proposed method on a set of difficult fluorescein image pairs.
Affiliation(s)
- Tae Eun Choe, Institute for Robotics and Intelligent Systems, University of Southern California, 3737 Watt Way, Los Angeles, CA 90248, USA
|
42
|
Baumgarten D, Doering A. [Registration of fundus images for generating wide field composite images of the retina ]. BIOMED ENG-BIOMED TE 2008; 52:365-74. [PMID: 18047401 DOI: 10.1515/bmt.2007.061] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The composition of retinal images places high demands on the applied methods. Substantially different lighting conditions between images, glare and fade-outs within a single image, large textureless regions, and nonlinear distortions are the main challenges. We present a fully automatic algorithm for the registration of images of the human retina and their overlay into wide-field montage images, combining an area-based and a point-based approach for determining similarities between images. Various measures of similarity were investigated, with the normalized correlation coefficient proving superior to the usual definitions of transinformation (mutual information). The transformation of the images was based on a quadratic model that can be derived from the spherical surface of the retina. This model was compared to four other parameterized transformations and performed best both visually and quantitatively in terms of measured misregistration. Problems may occur if the images are extremely defocused or contain very little relevant structural information.
Affiliation(s)
- Daniel Baumgarten, Institut für Biomedizinische Technik und Informatik, Technische Universität Ilmenau, Ilmenau, Germany
|
43
|
Shift-invariant discrete wavelet transform analysis for retinal image classification. Med Biol Eng Comput 2007; 45:1211-22. [DOI: 10.1007/s11517-007-0273-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2006] [Accepted: 10/04/2007] [Indexed: 10/22/2022]
|
44
|
Asvestas PA, Matsopoulos GK, Delibasis KK, Mouravliansky NA. Registration of retinal angiograms using self organizing maps. CONFERENCE PROCEEDINGS : ... ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL CONFERENCE 2007; 2006:4722-5. [PMID: 17946259 DOI: 10.1109/iembs.2006.260567] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: vessel centerline detection and extraction of bifurcation points in the reference image only; automatic correspondence of bifurcation points between the two images using a novel implementation of Self-Organizing Maps (SOMs); and extraction of the parameters of the affine transform from the obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs, and the results show advantageous accuracy with respect to manual registration.
|
45
|
Narasimha-Iyer H, Can A, Roysam B, Stewart CV, Tanenbaum HL, Majerovics A, Singh H. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Trans Biomed Eng 2006; 53:1084-98. [PMID: 16761836 DOI: 10.1109/tbme.2005.863971] [Citation(s) in RCA: 63] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
A fully automated approach is presented for robust detection and classification of changes in longitudinal time-series of color retinal fundus images of diabetic retinopathy. The method is robust to: 1) spatial variations in illumination resulting from instrument limitations and changes both within and between patient visits; 2) imaging artifacts such as dust particles; 3) outliers in the training data; and 4) segmentation and alignment errors. Robustness to illumination variation is achieved by a novel iterative algorithm that estimates the reflectance of the retina by exploiting automatically extracted segmentations of the retinal vasculature, optic disk, fovea, and pathologies. Robustness to dust artifacts is achieved by exploiting their spectral characteristics, enabling application to film-based as well as digital imaging systems. False changes from alignment errors are minimized by subpixel-accuracy registration using a 12-parameter transformation that accounts for unknown retinal curvature and camera parameters. Bayesian detection and classification algorithms are used to generate a color-coded output that is readily inspected. A multiobserver validation on 43 image pairs from 22 eyes, involving nonproliferative and proliferative diabetic retinopathies, showed a 97% change detection rate, a 3% miss rate, and a 10% false alarm rate. The performance in correctly classifying the changes was 99.3%. A self-consistency metric and an error factor were developed to measure performance over more than two periods; the average self-consistency was 94% and the error factor was 0.06%. Although this study focuses on diabetic changes, the proposed techniques have broader applicability in ophthalmology.
Affiliation(s)
- Harihar Narasimha-Iyer, Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
|
46
|
Wachowiak MP, Peters TM. High-Performance Medical Image Registration Using New Optimization Techniques. ACTA ACUST UNITED AC 2006; 10:344-53. [PMID: 16617623 DOI: 10.1109/titb.2006.864476] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Optimization of a similarity metric is an essential component of intensity-based medical image registration. The increasing availability of parallel computers makes parallelizing some registration tasks an attractive option for increasing speed. In this paper, two new deterministic, derivative-free, and intrinsically parallel optimization methods are adapted for image registration: DIviding RECTangles (DIRECT), a global technique for linearly bounded problems, and multidirectional search (MDS), a recent local method. The performance of DIRECT, MDS, and hybrid methods that use a parallel implementation of Powell's method for local refinement is compared. Experimental results demonstrate that DIRECT and MDS are robust and accurate, and substantially reduce computation time in parallel implementations.
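Multidirectional search is compact enough to sketch: the whole simplex is reflected through its best vertex, expanded if that helps, and otherwise contracted toward the best vertex. A minimal NumPy version on a toy objective (assumptions: a 2-D quadratic stand-in for a similarity metric, and a fixed iteration budget):

```python
import numpy as np

def mds_minimize(f, simplex, iters=300):
    """Torczon-style multidirectional search: reflect the simplex through its
    best vertex, try an expansion if reflection improved, else contract."""
    V = np.asarray(simplex, float)
    for _ in range(iters):
        V = V[np.argsort([f(v) for v in V])]   # V[0] is the best vertex
        best = f(V[0])
        refl = 2 * V[0] - V[1:]                # reflect all other vertices
        if min(f(v) for v in refl) < best:
            expa = 3 * V[0] - 2 * V[1:]        # expansion (factor 2)
            new = expa if min(f(v) for v in expa) < min(f(v) for v in refl) else refl
        else:
            new = (V[0] + V[1:]) / 2           # contraction toward the best
        V = np.vstack([V[:1], new])
    return V[np.argmin([f(v) for v in V])]

# Smooth toy objective with its minimum at (1, -2).
f = lambda p: (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 2.0) ** 2
x = mds_minimize(f, [[4.0, 3.0], [5.0, 3.0], [4.0, 4.0]])
```

Because every candidate step is a fixed affine map of the current simplex, all trial-point evaluations within an iteration are independent, which is what makes the method intrinsically parallel.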
Collapse
Affiliation(s)
- Mark P Wachowiak
- Imaging Laboratories, Robarts Research Institute, London, ON N6A 5K8, Canada.
47
Abstract
This work studies retinal image registration in the context of the National Institutes of Health (NIH) Early Treatment Diabetic Retinopathy Study (ETDRS) standard. The ETDRS imaging protocol specifies seven fields of each retina and presents three major challenges for the image registration task. First, small overlaps between adjacent fields lead to inadequate landmark points for feature-based methods. Second, the non-uniform contrast/intensity distributions due to imperfect data acquisition degrade the performance of area-based techniques. Third, high-resolution images contain large homogeneous nonvascular/textureless regions that weaken the capabilities of both feature-based and area-based techniques. In this work, we propose a hybrid retinal image registration approach for ETDRS images that effectively combines both area-based and feature-based methods. Four major steps are involved. First, the vascular tree is extracted by using an efficient local entropy-based thresholding technique. Next, zeroth-order translation is estimated by maximizing mutual information based on the binary image pair (area-based). Then image quality assessment regarding the ETDRS field definition is performed based on the translation model. If the image pair is accepted, higher-order transformations are involved. Specifically, we use two types of features, landmark points and sampling points, for affine/quadratic model estimation. Three empirical conditions are derived experimentally to control the algorithm's progress, so that we can achieve the lowest registration error and the highest success rate. Simulation results on 504 pairs of ETDRS images show the effectiveness and robustness of the proposed algorithm.
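The translation step above maximizes mutual information between a pair of binary vessel masks. A hedged sketch of that quantity for two equal-sized 0/1 images follows; the mask used in the example is made up, and a real use would evaluate this over candidate translations and keep the maximizer.

```python
# Mutual information of two binary images given as flat 0/1 lists:
# build the 2x2 joint histogram, derive the marginals, and sum
# p(i,j) * log2( p(i,j) / (p(i) * p(j)) ) over non-empty bins.
import math

def mutual_information(a, b):
    """MI (in bits) of two equal-length binary sequences."""
    n = len(a)
    joint = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for ai, bi in zip(a, b):
        joint[(ai, bi)] += 1
    pa = [sum(joint[(i, j)] for j in (0, 1)) / n for i in (0, 1)]
    pb = [sum(joint[(i, j)] for i in (0, 1)) / n for j in (0, 1)]
    mi = 0.0
    for (i, j), count in joint.items():
        if count:
            pij = count / n
            mi += pij * math.log2(pij / (pa[i] * pb[j]))
    return mi

# Identical half-on masks: MI equals the 1-bit entropy of the mask.
mask = [0, 1] * 8
print(mutual_information(mask, mask))  # 1.0
```

For binary masks the joint histogram has only four bins, which is what makes this metric cheap enough to evaluate exhaustively over a translation grid.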
Affiliation(s)
- Thitiporn Chanwimaluang
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater 74078, USA.
48
Walsh AC, Updike PG, Sadda SR. Quantitative Fluorescein Angiography. Retina 2006. [DOI: 10.1016/b978-0-323-02598-0.50058-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
49
Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K, Constable IJ. Retinal image analysis: concepts, applications and potential. Prog Retin Eye Res 2005; 25:99-127. [PMID: 16154379 DOI: 10.1016/j.preteyeres.2005.07.001] [Citation(s) in RCA: 260] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it is the only part of the human circulation that can be directly visualised non-invasively in vivo, readily photographed and subjected to digital image analysis. Exciting developments in image processing relevant to ophthalmology over the past 15 years include the progress being made towards developing automated diagnostic systems for conditions such as diabetic retinopathy, age-related macular degeneration and retinopathy of prematurity. These diagnostic systems offer the potential to be used in large-scale screening programs, with the potential for significant resource savings, as well as being free from observer bias and fatigue. In addition, quantitative measurements of retinal vascular topography using digital image analysis from retinal photography have been used as research tools to better understand the relationship between the retinal microvasculature and cardiovascular disease. Furthermore, advances in electronic media transmission increase the relevance of using image processing in 'teleophthalmology' as an aid in clinical decision-making, with particular relevance to large rural-based communities. In this review, we outline the principles upon which retinal digital image analysis is based. We discuss current techniques used to automatically detect landmark features of the fundus, such as the optic disc, fovea and blood vessels. We review the use of image analysis in the automated diagnosis of pathology (with particular reference to diabetic retinopathy). We also review its role in defining and performing quantitative measurements of vascular topography, how these entities are based on 'optimisation' principles and how they have helped to describe the relationship between systemic cardiovascular disease and retinal vascular changes. We also review the potential future use of fundal image analysis in telemedicine.
Affiliation(s)
- Niall Patton
- Lions Eye Institute, 2, Verdun Street, Nedlands, WA 6009, Australia.
50
Kouloulias VE, Kouvaris JR, Pissakas G, Mallas E, Antypas C, Kokakis JD, Matsopoulos G, Michopoulos S, Mystakidou K, Vlahos LJ. Phase II multicenter randomized study of amifostine for prevention of acute radiation rectal toxicity: topical intrarectal versus subcutaneous application. Int J Radiat Oncol Biol Phys 2005; 62:486-93. [PMID: 15890591 DOI: 10.1016/j.ijrobp.2004.10.043] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2003] [Revised: 10/08/2004] [Accepted: 10/14/2004] [Indexed: 02/02/2023]
Abstract
PURPOSE: To investigate the cytoprotective effect of subcutaneous vs. intrarectal administration of amifostine against acute radiation toxicity. METHODS AND MATERIALS: Patients were randomized to receive amifostine either intrarectally (Group A, n = 27) or as a 500-mg flat dose subcutaneously (Group B, n = 26) before irradiation. Therapy was delivered using a four-field technique with three-dimensional conformal planning. In Group A, 1,500 mg of amifostine was administered intrarectally as an aqueous solution in 40 mL of enema. Two different toxicity scales were used: the European Organization for Research and Treatment of Cancer/Radiation Therapy Oncology Group (RTOG) rectal and urologic toxicity criteria and the Subjective-RectoSigmoid scale based on the endoscopic terminology of the World Organization for Digestive Endoscopy. Objective measurements with rectosigmoidoscopy were performed at baseline and 1-2 days after radiotherapy completion. The area under the curve for the time course of mucositis (RTOG criteria) during irradiation represented the mucositis index. RESULTS: Intrarectal amifostine was feasible and well tolerated without any systemic or local side effects. According to the RTOG toxicity scale, Group A had superior results with a significantly lower incidence of Grades I-II rectal radiation morbidity (11% vs. 42%, p = 0.04) but inferior results concerning urinary toxicity (48% vs. 15%, p = 0.03). The mean rectal mucositis index and Subjective-RectoSigmoid score were significantly lower in Group A (0.44 vs. 2.45 [p = 0.015] and 3.9 vs. 6.0 [p = 0.01], respectively), and the mean urinary mucositis index was lower in Group B (2.39 vs. 0.34, p < 0.028). CONCLUSIONS: Intrarectal administration of amifostine (1,500 mg) seemed to have a cytoprotective efficacy in acute radiation rectal mucositis but was inferior to subcutaneous administration in terms of urinary toxicity. Additional randomized studies are needed for definitive decisions concerning the cytoprotection of pelvic irradiated areas.
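The mucositis index defined in this abstract is an area under the time course of RTOG grades. A minimal sketch of that computation by the trapezoidal rule follows; the assessment weeks and grade values are hypothetical, not data from the study.

```python
# Mucositis index as area under a (time, RTOG grade) curve,
# integrated by the trapezoidal rule over the irradiation course.

def auc_trapezoid(times, grades):
    """Trapezoidal area under a piecewise-linear (time, grade) curve."""
    area = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times, grades),
                                  zip(times[1:], grades[1:])):
        area += 0.5 * (g0 + g1) * (t1 - t0)
    return area

weeks = [0, 1, 2, 3, 4]    # hypothetical assessment times (weeks)
grades = [0, 0, 1, 2, 1]   # hypothetical RTOG mucositis grades
print(auc_trapezoid(weeks, grades))  # 3.5
```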
Affiliation(s)
- Vassilis E Kouloulias
- Department of Radiation Oncology, Aretaieion University Hospital, Medical School of Athens, Athens, Greece.