1. Öfverstedt J, Lindblad J, Sladoje N. INSPIRE: Intensity and spatial information-based deformable image registration. PLoS One 2023;18:e0282432. PMID: 36867617. PMCID: PMC9983883. DOI: 10.1371/journal.pone.0282432.
Abstract
We present INSPIRE, a top-performing general-purpose method for deformable image registration. INSPIRE brings distance measures that combine intensity and spatial information into an elastic B-spline-based transformation model, and incorporates an inverse-inconsistency penalization that supports symmetric registration performance. We introduce several theoretical and algorithmic solutions that provide high computational efficiency, and thereby applicability of the proposed framework in a wide range of real scenarios. We show that INSPIRE delivers highly accurate, stable, and robust registration results. We evaluate the method on a 2D dataset created from retinal images, characterized by the presence of networks of thin structures; here INSPIRE exhibits excellent performance, substantially outperforming the widely used reference methods. We also evaluate INSPIRE on the Fundus Image Registration Dataset (FIRE), which consists of 134 pairs of separately acquired retinal images, where it substantially outperforms several domain-specific methods. Finally, we evaluate the method on four benchmark datasets of 3D magnetic resonance images of brains, for a total of 2088 pairwise registrations; a comparison with 17 other state-of-the-art methods shows that INSPIRE provides the best overall performance. Code is available at github.com/MIDA-group/inspire.
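The inverse-inconsistency penalization named in the abstract can be illustrated with a short sketch. The following is a minimal, assumed formulation of such a penalty for dense 2D displacement fields; it is not the INSPIRE implementation, and the function name and the bilinear resampling choice are ours:

```python
# Minimal sketch of an inverse-consistency penalty for symmetric deformable
# registration, assuming dense 2D displacement fields on a pixel grid.
# Illustrates the idea named in the abstract; it is NOT the INSPIRE code.
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_penalty(u_fwd, u_bwd):
    """u_fwd, u_bwd: (2, H, W) displacement fields (row, col offsets).

    Composes the forward then backward displacement and penalizes the
    deviation of the composition from the identity map.
    """
    H, W = u_fwd.shape[1:]
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Points moved by the forward field.
    r_f, c_f = rows + u_fwd[0], cols + u_fwd[1]
    # Sample the backward field at the forward-warped points.
    u_b_r = map_coordinates(u_bwd[0], [r_f, c_f], order=1, mode="nearest")
    u_b_c = map_coordinates(u_bwd[1], [r_f, c_f], order=1, mode="nearest")
    # Residual of the round trip x -> T_f(x) -> T_b(T_f(x)) versus x.
    res_r = r_f + u_b_r - rows
    res_c = c_f + u_b_c - cols
    return np.mean(res_r**2 + res_c**2)
```

Penalizing the round-trip residual (typically in both directions) encourages the forward and backward transforms to be approximate inverses of one another, which is what makes the registration behave symmetrically.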
Affiliations
- Johan Öfverstedt, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Joakim Lindblad, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Nataša Sladoje, Department of Information Technology, Uppsala University, Uppsala, Sweden
2. Robust Detection Model of Vascular Landmarks for Retinal Image Registration: A Two-Stage Convolutional Neural Network. Biomed Res Int 2022;2022:1705338. PMID: 35941970. PMCID: PMC9356876. DOI: 10.1155/2022/1705338.
Abstract
Registration is a useful image-processing operation in computer vision. Applied to retinal images, it supports ophthalmologists in tracking disease progression and monitoring therapeutic responses. This study proposes a robust vascular-landmark detection model to improve the performance of retinal image registration. The proposed model consists of a two-stage convolutional neural network: the first stage segments the retinal vessels in a pair of images, and the second detects junction points in the vessel segmentation. Keypoints are extracted from the detected vascular landmarks, and orientation features are computed as descriptors. The reference and sensed images are then registered by matching keypoints with a homography estimated by the random sample consensus (RANSAC) algorithm. The proposed method was evaluated on five databases with seven evaluation metrics to verify both clinical effectiveness and robustness. The results establish that the proposed method outperforms other state-of-the-art registration methods. In particular, registration improved significantly on the FIRE database, with areas under the curve (AUC) of 0.988, 0.511, and 0.803 in the S, P, and A classes, respectively. Furthermore, the method performs well on poor-quality and multimodal datasets, achieving AUCs above 0.8.
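The final alignment step described here, keypoint matching followed by RANSAC homography estimation, maps directly onto standard OpenCV calls. A minimal sketch, assuming the two-stage CNN has already produced matched landmark coordinates (the function name and threshold value are ours):

```python
# Minimal sketch of homography estimation with RANSAC via OpenCV. The
# landmark detector (the two-stage CNN) is assumed to exist and is not shown.
import cv2
import numpy as np

def register_with_homography(pts_ref, pts_sensed, sensed_img, ref_shape):
    """pts_ref, pts_sensed: (N, 2) float32 arrays of matched keypoints."""
    H, mask = cv2.findHomography(pts_sensed, pts_ref, cv2.RANSAC,
                                 ransacReprojThreshold=3.0)
    if H is None:  # too few consistent matches for RANSAC to succeed
        return None, None
    # Warp the sensed image into the reference frame.
    h, w = ref_shape[:2]
    aligned = cv2.warpPerspective(sensed_img, H, (w, h))
    return aligned, mask.ravel().astype(bool)
```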
3. Hernandez-Matas C, Zabulis X, Argyros AA. REMPE: Registration of Retinal Images Through Eye Modelling and Pose Estimation. IEEE J Biomed Health Inform 2020;24:3362-3373. DOI: 10.1109/jbhi.2020.2984483.
4. Jalili J, Hejazi SM, Riazi-Esfahani M, Eliasi A, Ebrahimi M, Seydi M, Fard MA, Ahmadian A. Retinal image mosaicking using scale-invariant feature transformation feature descriptors and Voronoi diagram. J Med Imaging (Bellingham) 2020;7:044001. PMID: 32715023. DOI: 10.1117/1.jmi.7.4.044001.
Abstract
Purpose: Peripheral retinal lesions substantially increase the risk of diabetic retinopathy and retinopathy of prematurity. These peripheral changes can be visualized in wide-field imaging, obtained by combining multiple images with overlapping fields of view using mosaicking methods. However, robust and accurate registration for mosaicking with normal-angle fundus cameras remains a challenge, due to the random selection of matching points and long execution times. We propose a retinal image mosaicking method based on the scale-invariant feature transform (SIFT) descriptor and the Voronoi diagram. Approach: In our method, the SIFT algorithm describes local features in the input images. The input images are then subdivided into regions using the Voronoi method, and each pair of Voronoi regions is matched by zero-mean normalized cross-correlation (ZNCC). After matching, the retinal images are mapped into the same coordinate system to form a mosaic image. The success rate and mean registration error (RE) of our method were compared with those of other state-of-the-art methods on the P category of the Fundus Image Registration (FIRE) database. Results: Experimental results show that the proposed method accurately registered 42% of retinal image pairs with a mean RE of 3.040 pixels, while lower success rates were observed for four other state-of-the-art retinal image registration methods: GDB-ICP (33%), Harris-PIIFD (0%), HM-2016 (0%), and HM-2017 (2%). Conclusions: The proposed method outperforms state-of-the-art methods in terms of quality and running time, and reduces computational complexity.
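Zero-mean normalized cross-correlation, the score used to match Voronoi region pairs, is compact enough to sketch directly. Illustrative only; patch extraction and the Voronoi subdivision are assumed to happen elsewhere:

```python
# Minimal sketch of zero-mean normalized cross-correlation (ZNCC).
import numpy as np

def zncc(patch_a, patch_b):
    """ZNCC of two equally sized image patches; returns a value in [-1, 1]."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:  # flat patch: correlation is undefined
        return 0.0
    return float((a * b).sum() / denom)
```

The zero-mean normalization is what makes the score insensitive to the regional brightness differences common between fundus images.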
Affiliations
- Jalil Jalili, Medical Physics and Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Sedigheh M Hejazi, Medical Physics and Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Bio-optical Imaging Group, Research Center for Molecular and Cellular Imaging, Advanced Medical Technologies and Equipment Institute, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Riazi-Esfahani, Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, California, United States
- Arash Eliasi, Bio-optical Imaging Group, Research Center for Molecular and Cellular Imaging, Advanced Medical Technologies and Equipment Institute, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohsen Ebrahimi, Bio-optical Imaging Group, Research Center for Molecular and Cellular Imaging, Advanced Medical Technologies and Equipment Institute, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mojtaba Seydi, Bio-optical Imaging Group, Research Center for Molecular and Cellular Imaging, Advanced Medical Technologies and Equipment Institute, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Masoud Aghsaei Fard, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian, Medical Physics and Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
5. Motta D, Casaca W, Paiva A. Vessel Optimal Transport for Automated Alignment of Retinal Fundus Images. IEEE Trans Image Process 2019;28:6154-6168. PMID: 31283507. DOI: 10.1109/tip.2019.2925287.
Abstract
Optimal transport has emerged as a promising and useful tool for modern image processing applications such as medical imaging and scientific visualization. Indeed, optimal transport theory enables great flexibility in modeling image registration problems, as different optimization resources can be used successfully, along with suitable matching models to align the images. In this paper, we introduce an automated framework for fundus image registration that unifies optimal transport theory, image processing tools, and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs that embed spatial and topological information from the eye's blood vessels. The produced graphs are then used as input to our optimal transport model in order to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images to accomplish the registration task. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem, and it is robust to outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the framework through a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
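The node-correspondence step can be illustrated in miniature. The paper formulates a richer constrained optimal transport model; the sketch below uses a plain linear assignment over pairwise squared distances as the simplest discrete analogue, and the function name and cost choice are our assumptions:

```python
# Minimal sketch of matching two vessel-graph node sets by minimizing a
# transport cost, using linear assignment as a discrete stand-in for the
# paper's constrained optimal transport formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nodes(nodes_a, nodes_b):
    """nodes_a: (N, 2), nodes_b: (M, 2) arrays of node coordinates.

    Returns index pairs (rows, cols) minimizing the total squared distance.
    """
    cost = ((nodes_a[:, None, :] - nodes_b[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    return rows, cols
```

In a full vessel-matching pipeline the cost would also encode the topological information embedded in the graphs, not just node positions.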
6. Xin JH. Normalized Total Gradient: A New Measure for Multispectral Image Registration. IEEE Trans Image Process 2018;27:1297-1310. PMID: 29990251. DOI: 10.1109/tip.2017.2776753.
Abstract
Image registration is a fundamental problem in multispectral image processing, and it is challenged by two main characteristics of multispectral images. First, regional intensities can be essentially different between band images. Second, the local contrasts of two different band images can be inconsistent or even reversed. Conventional measures can align images with different regional intensity levels but may fail under severe local intensity variation. In this paper, a new measure called the normalized total gradient (NTG) is proposed for multispectral image registration. The measure is based on the key observation that the gradient of the difference between two aligned band images is sparser than that between two misaligned ones. A registration framework, which incorporates an image pyramid and global/local optimization, is further introduced for affine transforms. Experimental results validate that the proposed method is not only effective for multispectral image registration but also applicable to general unimodal and multimodal registration tasks, performing better than or comparably to existing methods, both quantitatively and qualitatively.
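The abstract's key assumption suggests a measure of the following shape. This is our reading of a normalized-total-gradient-style criterion; the exact normalization in the paper may differ:

```python
# Sketch of an NTG-style measure built from the abstract's assumption: the
# gradient of the difference of two aligned band images is sparse. Lower is
# better. The exact normalization in the paper may differ from this one.
import numpy as np

def ntg(img1, img2, eps=1e-12):
    diff = img1.astype(np.float64) - img2.astype(np.float64)

    def total_gradient(im):
        gy, gx = np.gradient(im)
        return np.abs(gx).sum() + np.abs(gy).sum()

    return total_gradient(diff) / (
        total_gradient(img1) + total_gradient(img2) + eps)
```

Minimizing such a ratio over transform parameters rewards alignments whose difference image has sparse gradients, even when local contrasts between bands are reversed.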
7. Gilliam C, Blu T. Local All-Pass Geometric Deformations. IEEE Trans Image Process 2018;27:1010-1025. PMID: 29757743. DOI: 10.1109/tip.2017.2765822.
Abstract
This paper deals with the estimation of a deformation that describes the geometric transformation between two images. To solve this problem, we propose a novel framework that relies upon the brightness consistency hypothesis: a pixel's intensity is maintained throughout the transformation. Instead of assuming small distortion and linearizing the problem (e.g., via a Taylor series expansion), we interpret the brightness hypothesis as an all-pass filtering relation between the two images. The key advantage of this interpretation is that no restrictions are placed on the amplitude of the deformation or on the spatial variations of the images. Moreover, by converting the all-pass filtering into a linear forward-backward filtering relation, the estimation problem reduces to solving a linear system of equations, which leads to a highly efficient implementation. Using this framework, we develop a fast algorithm that relates one image to another, on a local level, using an all-pass filter, and then extracts the deformation from the filter; hence the name "Local All-Pass" (LAP) algorithm. The effectiveness of this algorithm is demonstrated on a variety of synthetic and real deformations found in applications such as image registration and motion estimation. In particular, when compared with a selection of image registration algorithms, the LAP obtains very accurate results at significantly reduced computation time and is very robust to noise corruption.
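The forward-backward filtering idea can be sketched for the simplest case of a single constant displacement. The code below follows our own derivation under the warp convention I2(x) = I1(x - u), using a Gaussian and its derivatives as the filter basis; the published algorithm adds local windows, multiple scales, and its own sign conventions, so treat this strictly as a sketch of the principle:

```python
# Sketch of one global step of a LAP-style estimator. For p = delta(x - u/2),
# the relation p * I1 = p(-.) * I2 holds exactly when I2(x) = I1(x - u).
# Approximating p ~ g - (u/2) . grad(g) makes the relation linear in u.
import numpy as np
from scipy.ndimage import convolve

def gaussian_basis(sigma=2.0, radius=6):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    X, Y = np.meshgrid(x, x, indexing="xy")
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    g /= g.sum()
    gx = -X / sigma**2 * g  # d/dx of the Gaussian (odd in x)
    gy = -Y / sigma**2 * g  # d/dy of the Gaussian (odd in y)
    return g, gx, gy

def estimate_shift(img1, img2, sigma=2.0):
    img1 = np.asarray(img1, dtype=np.float64)
    img2 = np.asarray(img2, dtype=np.float64)
    g, gx, gy = gaussian_basis(sigma)
    # Since g is even and gx, gy are odd, p * I1 - p(-.) * I2 = 0 becomes a
    # linear system in (c1, c2) once c0 is fixed to 1.
    A = np.stack([(convolve(img1, gx) + convolve(img2, gx)).ravel(),
                  (convolve(img1, gy) + convolve(img2, gy)).ravel()], axis=1)
    b = (convolve(img2, g) - convolve(img1, g)).ravel()
    (c1, c2), *_ = np.linalg.lstsq(A, b, rcond=None)
    return -2.0 * c1, -2.0 * c2  # u = -2 (c1, c2) / c0, with c0 = 1
```

The full LAP algorithm solves this small linear system inside a sliding local window, which is what lets it recover spatially varying deformations.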
8. Feature-Based Retinal Image Registration Using D-Saddle Feature. J Healthc Eng 2017;2017:1489524. PMID: 29204257. PMCID: PMC5674727. DOI: 10.1155/2017/1489524.
Abstract
Retinal image registration is important for assisting diagnosis and monitoring retinal diseases such as diabetic retinopathy and glaucoma. However, registering retinal images requires detecting well-distributed feature points in low-quality regions that contain vessels of varying contrast and size. A recent feature detector known as Saddle produces feature points that are poorly distributed, clustering densely on strong-contrast vessels. We therefore propose a multiresolution difference-of-Gaussian pyramid combined with the Saddle detector (D-Saddle) to detect feature points in low-quality regions containing vessels of varying contrast and size. D-Saddle was tested on the Fundus Image Registration (FIRE) Dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates were observed for four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest (Spearman) correlation with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.
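The multiresolution DoG pyramid on which a detector such as Saddle is run at each level is straightforward to sketch. Parameter values here are illustrative, not those of the D-Saddle paper:

```python
# Minimal sketch of a multiresolution difference-of-Gaussian (DoG) pyramid.
# A feature detector would be applied to each level of the returned list.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def dog_pyramid(img, n_levels=4, sigma1=1.0, sigma2=1.6):
    img = np.asarray(img, dtype=np.float64)
    levels = []
    for _ in range(n_levels):
        dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
        levels.append(dog)
        img = zoom(img, 0.5, order=1)  # downsample for the next octave
    return levels
```

Band-pass filtering at multiple scales is what lets feature points surface on low-contrast, thin vessels rather than only on strong-contrast ones.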
9. Noyel G, Thomas R, Bhakta G, Crowder A, Owens D, Boyle P. Superimposition of eye fundus images for longitudinal analysis from large public health databases. Biomed Phys Eng Express 2017. DOI: 10.1088/2057-1976/aa7d16.
10. Hernandez-Matas C, Zabulis X, Argyros AA. An experimental evaluation of the accuracy of keypoints-based retinal image registration. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:377-381. PMID: 29059889. DOI: 10.1109/embc.2017.8036841.
Abstract
This work investigates the accuracy of a state-of-the-art, keypoint-based retinal image registration approach with respect to the type of keypoint features used to guide the registration process. The employed registration approach is a local method that incorporates the notion of a 3D retinal surface imaged from different viewpoints, and it has been shown experimentally to be more accurate than competing approaches. The correspondences obtained from SIFT, SURF, Harris-PIIFD, and vessel bifurcations are studied, either individually or in combination. The combination of SIFT features with vessel bifurcations was found to perform better than any other combination or any individual feature type alone. The registration approach is also comparatively evaluated against representative state-of-the-art methods in retinal image registration, using a benchmark dataset that covers a broad range of cases regarding the overlap of the acquired images and the anatomical characteristics of the imaged retinas.
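Combining SIFT keypoints with vessel bifurcations, the best-performing pairing reported here, can be sketched by funnelling both feature types through one descriptor pipeline. Bifurcation detection itself is assumed to be done elsewhere, and the patch size is an illustrative choice:

```python
# Minimal sketch of merging SIFT keypoints with externally detected vessel
# bifurcations so that one descriptor/matching pipeline handles both.
import cv2
import numpy as np

def combined_keypoints(gray_img, bifurcation_xy, patch_size=16.0):
    """gray_img: uint8 grayscale image; bifurcation_xy: iterable of (x, y)."""
    sift = cv2.SIFT_create()
    kps = list(sift.detect(gray_img, None))
    # Wrap bifurcation coordinates as cv2.KeyPoint objects.
    kps.extend(cv2.KeyPoint(float(x), float(y), patch_size)
               for (x, y) in bifurcation_xy)
    kps, desc = sift.compute(gray_img, kps)
    return kps, desc
```

The resulting descriptors can then be matched and fed to the pose-estimation stage exactly as descriptors from a single feature type would be.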