1. Pan L, Cai Z, Hu D, Zhu W, Shi F, Tao W, Wu Q, Xiao S, Chen X. Research on registration method for enface image using multi-feature fusion. Phys Med Biol 2024;69:215037. PMID: 39413811. DOI: 10.1088/1361-6560/ad87a5.
Abstract
Objective. The purpose of this work is to accurately and quickly register optical coherence tomography (OCT) projection (enface) images acquired at adjacent time points, and to address the interference that CNV lesions cause in the registration features. Approach. A multi-feature registration strategy is proposed, in which a combined feature (com-feature) containing 3D information, intersection information and the SURF feature is designed. First, the coordinates of all feature points are extracted as combined features; these coordinates are then added to the initial vascular coordinate set, simplified by the Douglas-Peucker algorithm, to form the point set for registration. Finally, the coherent point drift algorithm registers the enface coordinate point sets of adjacent time series. Main results. The newly designed features significantly improve the success rate of global registration of vascular networks in enface images, while the simplification step greatly improves registration speed while preserving vascular features. The MSE, DSC and running time of the proposed method are 0.07993, 0.9693 and 42.7016 s, respectively. Significance. CNV (choroidal neovascularization) is a serious retinal disease in ophthalmology. Registering OCT enface images from adjacent time points makes it possible to monitor disease progression and assist doctors in diagnosis. The proposed method improves the accuracy of OCT enface image registration while significantly reducing its running time, performs well in clinical routine, and provides a more efficient tool for clinical diagnosis and treatment.
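The Douglas-Peucker simplification step mentioned in this abstract can be illustrated with a short sketch. This is not the authors' code, only a generic implementation of the algorithm, applied here as it might be to thin a vessel-centerline point set before registration:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / norm

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping only points that deviate
    more than epsilon from the chord joining the endpoints."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Recurse on the two halves; drop the duplicated joint point.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Applied to a vascular coordinate set, this reduces the number of points the coherent point drift step must process, which is the source of the speed-up the abstract reports.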
Affiliation(s)
- Lingjiao Pan
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Zhongwang Cai
  - Department of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Derong Hu
  - Department of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Weifang Zhu
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
- Fei Shi
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
- Weige Tao
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Quanyu Wu
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Shuyan Xiao
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Xinjian Chen
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
2. Nie Q, Zhang X, Hu Y, Gong M, Liu J. Medical image registration and its application in retinal images: a review. Vis Comput Ind Biomed Art 2024;7:21. PMID: 39167337. PMCID: PMC11339199. DOI: 10.1186/s42492-024-00173-8.
Abstract
Medical image registration is vital for disease diagnosis and treatment because of its ability to merge the diverse information of images that may be captured at different times, from different angles, or with different modalities. Although several surveys have reviewed the development of medical image registration, they have not systematically summarized the existing methods. To this end, a comprehensive review of these methods is provided from both traditional and deep-learning-based perspectives, aiming to help readers quickly understand the development of medical image registration. In particular, we review recent advances in retinal image registration, which has not attracted much attention. In addition, current challenges in retinal image registration are discussed, and insights and prospects for future research are provided.
Affiliation(s)
- Qiushi Nie
  - Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Xiaoqing Zhang
  - Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
  - Center for High Performance Computing and Shenzhen Key Laboratory of Intelligent Bioinformatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yan Hu
  - Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Mingdao Gong
  - Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Jiang Liu
  - Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
  - Singapore Eye Research Institute, Singapore, 169856, Singapore
  - State Key Laboratory of Ophthalmology, Optometry and Visual Science, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
3. Darzi F, Bocklitz T. A Review of Medical Image Registration for Different Modalities. Bioengineering (Basel) 2024;11:786. PMID: 39199744. PMCID: PMC11351674. DOI: 10.3390/bioengineering11080786.
Abstract
Medical image registration has become pivotal in recent years with the integration of various imaging modalities such as X-ray, ultrasound, MRI, and CT, enabling comprehensive analysis and diagnosis of biological structures. This paper provides a comprehensive review of registration techniques for medical images, with an in-depth focus on 2D-2D image registration methods. While 3D registration is briefly touched upon, the primary emphasis remains on 2D techniques and their applications. The review covers registration techniques for diverse settings, including unimodal, multimodal, inter-patient, and intra-patient registration. The paper explores the challenges encountered in medical image registration, including geometric distortion, differences in image properties, outliers, and optimization convergence, and discusses their impact on registration accuracy and reliability. Strategies for addressing these challenges are highlighted, emphasizing the need for continual innovation and refinement of techniques to enhance the accuracy and reliability of medical image registration systems. The paper concludes by emphasizing the importance of accurate medical image registration in improving diagnosis.
Affiliation(s)
- Fatemehzahra Darzi
  - Institute of Physical Chemistry, Friedrich Schiller University Jena, Helmholtzweg 4, 07743 Jena, Germany
- Thomas Bocklitz
  - Institute of Physical Chemistry, Friedrich Schiller University Jena, Helmholtzweg 4, 07743 Jena, Germany
  - Department of Photonic Data Science, Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745 Jena, Germany
4. Steffi S, Sam Emmanuel WR. Resilient back-propagation machine learning-based classification on fundus images for retinal microaneurysm detection. Int Ophthalmol 2024;44:91. PMID: 38367192. DOI: 10.1007/s10792-024-02982-5.
Abstract
Background: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images.
Problem statement: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening.
Objective: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF).
Methodology: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence.
Results: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC) and various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%).
Conclusion: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.
Affiliation(s)
- S Steffi
  - Department of Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
- W R Sam Emmanuel
  - Department of PG Computer Science, Nesamony Memorial Christian College Affiliated to Manonmaniam Sundaranar University, Abishekapatti, Tirunelveli, Tamil Nadu, 627012, India
5. Pham VN, Le DT, Bum J, Kim SH, Song SJ, Choo H. Discriminative-Region Multi-Label Classification of Ultra-Widefield Fundus Images. Bioengineering (Basel) 2023;10:1048. PMID: 37760150. PMCID: PMC10525847. DOI: 10.3390/bioengineering10091048.
Abstract
The ultra-widefield fundus image (UFI) has become a crucial tool for ophthalmologists in diagnosing ocular diseases because of its ability to capture a wide field of the retina. Nevertheless, detecting and classifying multiple diseases within this imaging modality continues to pose a significant challenge. An automated disease classification system for UFIs can support ophthalmologists in making faster and more precise diagnoses. However, existing works on UFI classification often focus on a single disease, or assume that each image contains only one disease when tackling multi-disease cases. Furthermore, the distinctive characteristics of each disease are typically not exploited to improve classification performance. To address these limitations, we propose a novel approach that leverages disease-specific regions of interest for the multi-label classification of UFIs. Our method uses three regions, the optic disc area, the macula area, and the entire UFI, which serve as the most informative regions for diagnosing one or multiple ocular diseases. Experimental results on a dataset comprising 5930 UFIs with six common ocular diseases show that the proposed approach attains excellent performance, with per-class area under the receiver operating characteristic curve scores ranging from 95.07% to 99.14%. These results not only surpass existing state-of-the-art methods but also represent improvements of up to 5.29%. They demonstrate the potential of our method to provide ophthalmologists with valuable information for the early and accurate diagnosis of ocular diseases, ultimately leading to improved patient outcomes.
Affiliation(s)
- Van-Nguyen Pham
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Duc-Tai Le
  - College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Junghyun Bum
  - Sungkyun AI Research Institute, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Seong Ho Kim
  - Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Seoul 03181, Republic of Korea
- Su Jeong Song
  - Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Seoul 03181, Republic of Korea
  - Biomedical Institute for Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunseung Choo
  - Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
  - College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
  - Department of Superintelligence Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
6. Attention-Driven Cascaded Network for Diabetic Retinopathy Grading from Fundus Images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104370.
7. Deep Learning and Medical Image Processing Techniques for Diabetic Retinopathy: A Survey of Applications, Challenges, and Future Trends. J Healthc Eng 2023;2023:2728719. PMID: 36776951. PMCID: PMC9911247. DOI: 10.1155/2023/2728719.
Abstract
Diabetic retinopathy (DR) is a common retinal disease that is widespread all over the world. Depending on its severity, it can lead to complete loss of vision. It damages both the retinal blood vessels and the eye's microscopic interior layers. To avoid such outcomes, early detection of DR through routine screening is essential so that mild cases are caught early. But these diagnostic procedures are extremely difficult and expensive. The unique contributions of the study are as follows: first, a detailed background of DR and the traditional detection techniques is provided. Second, the various imaging techniques and deep learning applications in DR are presented. Third, the different use cases and real-life scenarios relevant to DR detection in which deep learning techniques have been implemented are explored. The study finally highlights potential research opportunities for researchers to explore in order to deliver effective performance in diabetic retinopathy detection.
8. Benvenuto GA, Colnago M, Dias MA, Negri RG, Silva EA, Casaca W. A Fully Unsupervised Deep Learning Framework for Non-Rigid Fundus Image Registration. Bioengineering (Basel) 2022;9:369. PMID: 36004894. PMCID: PMC9404907. DOI: 10.3390/bioengineering9080369.
Abstract
In ophthalmology, the registration problem consists of finding a geometric transformation that aligns a pair of images, supporting eye-care specialists who need to record and compare images of the same patient. Considering the registration methods for handling eye fundus images, the literature offers only a limited number of proposals based on deep learning (DL), whose implementations use the supervised learning paradigm to train a model. Additionally, ensuring high-quality registrations while still being flexible enough to tackle a broad range of fundus images is another drawback faced by most existing methods in the literature. Therefore, in this paper, we address the above-mentioned issues by introducing a new DL-based framework for eye fundus registration. Our methodology combines a U-shaped fully convolutional neural network with a spatial transformation learning scheme, where a reference-free similarity metric allows the registration without assuming any pre-annotated or artificially created data. Once trained, the model is able to accurately align pairs of images captured under several conditions, which include the presence of anatomical differences and low-quality photographs. Compared to other registration methods, our approach achieves better registration outcomes by just passing as input the desired pair of fundus images.
Affiliation(s)
- Giovana A. Benvenuto
  - Faculty of Science and Technology (FCT), São Paulo State University (UNESP), Presidente Prudente 19060-900, Brazil
- Marilaine Colnago
  - Institute of Mathematics and Computer Science (ICMC), São Paulo University (USP), São Carlos 13566-590, Brazil
- Maurício A. Dias
  - Faculty of Science and Technology (FCT), São Paulo State University (UNESP), Presidente Prudente 19060-900, Brazil
- Rogério G. Negri
  - Science and Technology Institute (ICT), São Paulo State University (UNESP), São José dos Campos 12224-300, Brazil
- Erivaldo A. Silva
  - Faculty of Science and Technology (FCT), São Paulo State University (UNESP), Presidente Prudente 19060-900, Brazil
- Wallace Casaca (corresponding author)
  - Institute of Biosciences, Letters and Exact Sciences (IBILCE), São Paulo State University (UNESP), São José do Rio Preto 15054-000, Brazil
9. Robust Detection Model of Vascular Landmarks for Retinal Image Registration: A Two-Stage Convolutional Neural Network. Biomed Res Int 2022;2022:1705338. PMID: 35941970. PMCID: PMC9356876. DOI: 10.1155/2022/1705338.
Abstract
Registration is a useful image-processing technique in computer vision. Applied to retinal images, it supports ophthalmologists in tracking disease progression and monitoring therapeutic responses. This study proposed a robust detection model of vascular landmarks to improve the performance of retinal image registration. The proposed model consists of a two-stage convolutional neural network, in which one stage segments the retinal vessels on a pair of images and the other detects junction points from the vessel segmentation image. Information obtained from the model was utilized for the registration: keypoints were extracted based on the acquired vascular landmark points, and orientation features were calculated as descriptors. The reference and sensed images were then registered by matching keypoints using a homography matrix and the random sample consensus algorithm. The proposed method was evaluated on five databases with seven evaluation metrics to verify both clinical effectiveness and robustness. The results established that the proposed method showed outstanding registration performance compared with other state-of-the-art methods. In particular, high and significantly improved registration results were obtained on the FIRE database, with areas under the curve (AUC) of 0.988, 0.511, and 0.803 in the S, P, and A classes. Furthermore, the proposed method worked well on poor-quality and multimodal datasets, achieving AUCs above 0.8.
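The keypoint-matching step described in this abstract, estimating a transform from matched landmark pairs with random sample consensus, can be sketched generically. This illustration fits an affine transform rather than the homography used in the paper, and it is not the authors' implementation; all names and thresholds are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3x2 affine matrix M mapping src (N,2) onto dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solve A @ M = dst
    return M

def apply_affine(M, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

def ransac_affine(src, dst, n_iter=500, thresh=2.0, seed=0):
    """Estimate an affine transform robust to mismatched keypoint pairs."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(src.shape[0], dtype=bool)
    for _ in range(n_iter):
        # Minimal sample: 3 correspondences determine an affine transform.
        idx = rng.choice(src.shape[0], size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(apply_affine(M, src) - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best model.
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

A homography-based variant follows the same loop but needs 4-point minimal samples and a projective solver; the inlier-counting logic is unchanged.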
10. Khan R, Saha SK, Frost S, Kanagasingam Y, Raman R. The Longitudinal Assessment of Vascular Parameters of the Retina and Their Correlations with Systemic Characteristics in Type 2 Diabetes: A Pilot Study. Vision (Basel) 2022;6:45. PMID: 35893762. PMCID: PMC9326718. DOI: 10.3390/vision6030045.
Abstract
The aim of the study was to assess various retinal vessel parameters of diabetes mellitus (DM) patients and their correlations with systemic factors in type 2 DM. In this retrospective exploratory study, 21 pairs of baseline and follow-up images of patients affected by DM were randomly chosen from the Sankara Nethralaya Diabetic Retinopathy Study (SN DREAMS) I and II datasets. Patients' fundi were photographed, and the diagnosis was made based on the Klein classification. Vessel thickness parameters were generated using VASP, a web-based retinal vascular analysis platform. The thickness changes between the baseline and follow-up images were computed and normalized to the actual thicknesses of the baseline images. The majority of parameters showed 10-20% changes over time. Vessel width in zone C for the second vein was significantly reduced from baseline to follow-up and showed positive correlations with systolic blood pressure and serum high-density lipoproteins. The fractal dimension for all vessels in zones B and C and for veins in zones A, B and C showed a minimal increase from baseline to follow-up, with a linear relationship to diastolic pressure, mean arterial pressure, and serum triglycerides (p < 0.05). Lacunarity for all vessels and veins in zones A, B and C showed a minimal decrease from baseline to follow-up, with a negative correlation with pulse pressure and a positive correlation with serum triglycerides (p < 0.05). The vessel widths for the first and second arteries increased significantly from baseline to follow-up and were associated with high-density lipoproteins, glycated haemoglobin A1C, serum low-density lipoproteins and total serum cholesterol. The central reflex intensity ratio for the second artery decreased significantly from baseline to follow-up, and positive correlations were noted with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. The branching coefficients for the artery in zones B and C and the junctional exponent deviation for the artery in zone A decreased from baseline to follow-up and showed positive correlations with serum triglycerides, serum low-density lipoproteins and total serum cholesterol. Identifying early microvascular changes in diabetic patients will allow earlier intervention, improve visual outcomes and prevent vision loss.
Affiliation(s)
- Rehana Khan
  - Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai 600006, Tamil Nadu, India
- Sajib K Saha
  - Australian e-Health Research Centre, The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA 6151, Australia
- Shaun Frost
  - Australian e-Health Research Centre, The Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA 6151, Australia
- Yogesan Kanagasingam
  - Digital Health and Telemedicine, The University of Notre Dame, Fremantle, WA 6160, Australia
- Rajiv Raman (corresponding author; Tel.: +91-44-28271616)
  - Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai 600006, Tamil Nadu, India
|
11
|
Dida H, Charif F, Benchabane A. Registration of computed tomography images of a lung infected with COVID-19 based in the new meta-heuristic algorithm HPSGWO. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:18955-18976. [PMID: 35287378 PMCID: PMC8907398 DOI: 10.1007/s11042-022-12658-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 04/27/2021] [Accepted: 02/09/2022] [Indexed: 05/03/2023]
Abstract
Computed tomography (CT) helps the radiologist rapidly and correctly detect a person infected with coronavirus disease 2019 (COVID-19) by showing the presence of ground-glass opacity (GGO) in the lung. Tracking the evolution of the spread of GGO in the lung of an infected person requires studying multiple images acquired at different times. These CT images must be registered to identify the evolution of the ground-glass opacity and to facilitate the study and identification of the virus. Because image registration is essentially an optimization problem, we present in this paper a new HPSGWO algorithm for registering CT images of a lung infected with COVID-19. The algorithm is a hybridization of particle swarm optimization (PSO) and the grey wolf optimizer (GWO). Simulation results obtained after applying the algorithm to the test images show that the proposed approach achieves high-precision and robust registration compared with other methods such as GWO, PSO, the Firefly Algorithm (FA), and the Crow Search Algorithm (CSA).
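To illustrate the optimization view of registration taken in this abstract, the following sketch applies plain particle swarm optimization (one half of the HPSGWO hybrid; the GWO half and the authors' hybridization are omitted) to a toy point-set alignment objective. All parameter values here are generic textbook choices, not the paper's settings:

```python
import numpy as np

def pso_minimize(f, dim, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box using standard particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()              # global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + attraction to personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy registration objective: find the 2-D shift aligning two point sets.
ref = np.array([[10.0, 20.0], [30.0, 5.0], [7.0, 40.0]])
moved = ref + np.array([3.5, -2.0])
objective = lambda t: np.sum((moved - t - ref) ** 2)
shift, err = pso_minimize(objective, dim=2, bounds=(-10, 10))
```

In an actual CT registration, the objective would instead measure a similarity metric (e.g. sum of squared intensity differences) between the fixed image and the moving image warped by the candidate transform parameters.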
Affiliation(s)
- Hedifa Dida
  - Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
- Fella Charif
  - Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
- Abderrazak Benchabane
  - Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
12. Rivas-Villar D, Hervella ÁS, Rouco J, Novo J. Color fundus image registration using a learning-based domain-specific landmark detection methodology. Comput Biol Med 2022;140:105101. PMID: 34875412. DOI: 10.1016/j.compbiomed.2021.105101.
Abstract
Medical imaging, and particularly retinal imaging, allows the accurate diagnosis of many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data with a model or across a population. Currently, this field is dominated by complex classical methods, because the newer deep learning methods cannot yet compete in terms of results, and the commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we propose to use deep learning for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep-learning feature-based registration method in fundus imaging. The keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the need to calculate complex descriptors. Our method was tested on the public FIRE dataset, although the landmark detection network was trained on the DRIVE dataset. It provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art.
Affiliation(s)
- David Rivas-Villar
  - Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain
  - Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain
- Álvaro S Hervella
  - Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain
  - Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain
- José Rouco
  - Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain
  - Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain
- Jorge Novo
  - Centro de investigacion CITIC, Universidade da Coruña, 15 071, A Coruña, Spain
  - Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15 006, A Coruña, Spain
13. Retinal image registration using log-polar transform and robust description of bifurcation points. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102424.
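This entry has no abstract, but the log-polar transform named in the title admits a minimal generic sketch (nearest-neighbour resampling, not this paper's method). In log-polar coordinates a rotation of the input becomes a circular shift along the angular axis, which is what makes the representation attractive for rotation-robust registration:

```python
import numpy as np

def log_polar(image, n_rho=64, n_theta=64):
    """Resample a 2-D grayscale image onto a log-polar grid
    centred on the image midpoint (nearest-neighbour sampling)."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # Logarithmically spaced radii and uniformly spaced angles.
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]
```

Rows of the output index log-radius, columns index angle; matching two such maps by circular cross-correlation along the column axis recovers the relative rotation between the images.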
14. Avendaño-Valencia LD, Yderstræde KB, Nadimi ES, Blanes-Vidal V. Video-based eye tracking performance for computer-assisted diagnostic support of diabetic neuropathy. Artif Intell Med 2021;114:102050. PMID: 33875161. DOI: 10.1016/j.artmed.2021.102050.
Abstract
Diabetes is currently one of the major public health threats. The essential components of effective diabetes treatment include early diagnosis and regular monitoring. However, health-care providers are often short of the human resources needed to closely monitor populations at risk. In this work, a video-based eye-tracking method is proposed as a low-cost alternative for the detection of diabetic neuropathy. The method is based on tracking the eye trajectories recorded on video while the subject follows a target on a screen, forcing saccadic movements. After the eye trajectories are extracted, the obtained time series are represented with heteroscedastic ARX (H-ARX) models, which capture the dynamics and latency of the subject's response; features based on the H-ARX model's predictive ability are subsequently used for classification. The methodology is evaluated on a population of 11 control subjects and 20 insulin-treated diabetic individuals suffering from diverse diabetic complications, including neuropathy and retinopathy. Results show significant differences in latency and eye-movement precision between the control and diabetic populations, while demonstrating that both groups can be classified with an accuracy of 95%. Although this study is limited by the small sample size, the results align with other findings in the literature and encourage further research.
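The ARX modelling step described in this abstract can be illustrated with a plain (homoscedastic) least-squares ARX fit; the heteroscedastic extension and the latency features used in the paper are omitted, and the model orders and coefficients below are illustrative:

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares estimate of ARX coefficients for
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]."""
    p = max(na, nb)
    rows = []
    for t in range(p, len(y)):
        # Regressor: lagged outputs then lagged inputs, most recent first.
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
    return theta[:na], theta[na:]
```

In the eye-tracking setting, u would be the on-screen target trajectory and y the measured gaze trajectory; one-step prediction residuals of the fitted model then serve as classification features.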
Affiliation(s)
- Luis David Avendaño-Valencia: Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
- Knud B Yderstræde: Steno Diabetes Center and Center for Innovative Medical Technology, Odense University Hospital, Sdr. Boulevard 29, 5000 Odense C, Denmark.
- Esmaeil S Nadimi: Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
- Victoria Blanes-Vidal: Group of Applied AI and Data Science, Maersk-McKinney-Moller Institute, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark.
|
15
|
Golkar E, Rabbani H, Dehghani A. Hybrid registration of retinal fluorescein angiography and optical coherence tomography images of patients with diabetic retinopathy. BIOMEDICAL OPTICS EXPRESS 2021; 12:1707-1724. [PMID: 33796382 PMCID: PMC7984788 DOI: 10.1364/boe.415939] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 01/26/2021] [Accepted: 02/21/2021] [Indexed: 05/10/2023]
Abstract
Diabetic retinopathy (DR) is a common ophthalmic disease among diabetic patients. It is essential to diagnose DR in the early stages of treatment. Various imaging systems have been proposed to detect and visualize retinal diseases. The fluorescein angiography (FA) imaging technique is now widely used as the gold-standard technique to evaluate the clinical manifestations of DR. Optical coherence tomography (OCT) imaging is another technique that provides 3D information on the retinal structure. The FA and OCT images are captured in two different phases and fields of view, and image fusion of these modalities is of interest to clinicians. This paper proposes a hybrid registration framework based on the extraction and refinement of segmented major blood vessels of retinal images. The newly extracted features significantly improve the success rate of global registration in the complex blood-vessel network of retinal images. Afterward, intensity-based and deformable transformations are utilized to further compensate for the motion between the FA and OCT images. Experimental results on 26 images from patients at various stages of DR indicate that this algorithm yields promising registration and fusion results for clinical routine.
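The global stage of a hybrid framework like this can be sketched as a least-squares affine fit between matched vessel landmarks (e.g. bifurcations found in both modalities), after which a deformable stage would refine the result. This is a generic illustration under assumed conventions, not the authors' exact pipeline, and the function names are hypothetical.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of matched landmark coordinates
    (e.g. vessel bifurcations extracted from FA and OCT projections)."""
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                    # homogeneous source coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) affine matrix
    return M

def apply_affine(M, pts):
    """Apply a (3, 2) affine matrix to (N, 2) points."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M
```

With at least three non-collinear correspondences the fit is exact; with more (noisy) vessel matches, the least-squares solution averages out localisation error before the deformable refinement step.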
Affiliation(s)
- Ehsan Golkar: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Alireza Dehghani: Eye Research Center, Isfahan University of Medical Sciences, Isfahan, Iran and Didavaran Eye Clinic, Isfahan, Iran
|
16
|
Bhuiyan A, Govindaiah A, Deobhakta A, Hossain M, Rosen R, Smith T. Automated diabetic retinopathy screening for primary care settings using deep learning. INTELLIGENCE-BASED MEDICINE 2021; 5. [PMID: 35528965 PMCID: PMC9071157 DOI: 10.1016/j.ibmed.2021.100045] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Diabetic Retinopathy (DR) is one of the leading causes of blindness in the United States and other high-income countries. Early detection is key to prevention, which could be achieved effectively with a fully automated screening tool performing well on clinically relevant measures in primary care settings. We have built an artificial intelligence-based tool on a cloud-based platform for large-scale screening of DR as referable or non-referable. In this paper, we aim to validate this tool, which was built using deep-learning techniques. The cloud-based screening model was developed and tested with 88702 images from the Kaggle dataset and externally validated using 1748 high-resolution retinal (fundus) images from the Messidor-2 dataset. For validation in primary care settings, 264 images were taken prospectively from two diabetes clinics in Queens, New York. The images were uploaded to the cloud-based software, and the automated system's output was compared with expert ophthalmologists' evaluations of referable DR. The measures used were the area under the curve (AUC), sensitivity, and specificity of the screening model with respect to professional graders. The screening system achieved a high sensitivity of 99.21% and a specificity of 97.59% on the Kaggle test dataset with an AUC of 0.9992. The system was also externally validated on Messidor-2, where it achieved a sensitivity of 97.63% and a specificity of 99.49% (AUC, 0.9985). On primary care data, the overall sensitivity was 92.3% (12/13 referable images correctly identified), and the overall specificity was 94.8% (233/251 non-referable images). To the best of our knowledge, the proposed DR screening tool achieves state-of-the-art performance on the publicly available Kaggle and Messidor-2 datasets. The performance on various clinically relevant measures demonstrates that the tool is suitable for screening and early diagnosis of DR in primary care settings.
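The clinically relevant measures quoted above reduce to simple confusion-matrix arithmetic. A minimal sketch (a hypothetical helper, not the authors' code) that reproduces the 12/13 referable sensitivity figure:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for binary screening labels
    (1 = referable DR, 0 = non-referable)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    return tp / (tp + fn), tn / (tn + fp)
```

With 13 referable images of which 12 are flagged, sensitivity is 12/13 ≈ 92.3%, matching the primary-care result reported above.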
Affiliation(s)
- Alauddin Bhuiyan: iHealthScreen Inc, NY, USA; New York Eye and Ear Infirmary of Mount Sinai, Icahn School of Medicine at Mount Sinai, NY, USA
- Avnish Deobhakta: New York Eye and Ear Infirmary of Mount Sinai, Icahn School of Medicine at Mount Sinai, NY, USA
- Richard Rosen: New York Eye and Ear Infirmary of Mount Sinai, Icahn School of Medicine at Mount Sinai, NY, USA
- Theodore Smith: New York Eye and Ear Infirmary of Mount Sinai, Icahn School of Medicine at Mount Sinai, NY, USA
|
17
|
Saha S, Wang Z, Sadda S, Kanagasingam Y, Hu Z. Visualizing and understanding inherent features in SD-OCT for the progression of age-related macular degeneration using deconvolutional neural networks. APPLIED AI LETTERS 2020; 1:e16. [PMID: 36478669 PMCID: PMC9725889 DOI: 10.1002/ail2.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
To develop a convolutional neural network visualization strategy so that optical coherence tomography (OCT) features contributing to the evolution of age-related macular degeneration (AMD) can be better determined. We have trained a U-Net model to utilize baseline OCT to predict the progression of geographic atrophy (GA), a late stage manifestation of AMD. We have augmented the U-Net architecture by attaching deconvolutional neural networks (deconvnets). Deconvnets produce the reconstructed feature maps and provide an indication regarding the inherent baseline OCT features contributing to GA progression. Experiments were conducted on longitudinal spectral domain (SD)-OCT and fundus autofluorescence images collected from 70 eyes with GA. The intensity of Bruch's membrane-outer choroid (BMChoroid) retinal junction exhibited a relative importance of 24%, in the GA progression. The intensity of the inner retinal pigment epithelium (RPE) and BM junction (InRPEBM) showed a relative importance of 22%. BMChoroid (where the AMD feature/damage of choriocapillaris was included) followed by InRPEBM (where the AMD feature/damage of RPE was included) are the layers which appear to be most relevant in predicting the progression of AMD.
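Deconvnet-based attribution requires a trained network, but the notion of a layer's "relative importance" can be illustrated with a simpler, related technique: occlusion sensitivity, where each input region is zeroed in turn and the resulting drop in model output is measured. The sketch below uses a stand-in linear "model"; all names are hypothetical and this is not the authors' deconvnet method.

```python
import numpy as np

def occlusion_importance(model, image, regions):
    """Relative importance of rectangular image regions for a scalar
    model output, estimated by zeroing each region and measuring the
    change from the baseline prediction."""
    baseline = model(image)
    drops = []
    for r0, r1, c0, c1 in regions:
        occluded = image.copy()
        occluded[r0:r1, c0:c1] = 0.0      # occlude one region
        drops.append(abs(baseline - model(occluded)))
    drops = np.asarray(drops, dtype=float)
    return drops / drops.sum()            # normalise to relative shares
```

Applied per retinal layer instead of per rectangle, the same normalised-share idea yields percentage importances comparable in spirit to the 24% (BMChoroid) and 22% (InRPEBM) figures quoted above.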
Affiliation(s)
- Sajib Saha: Doheny Eye Institute, Los Angeles, California; Australian e-Health Research Centre, CSIRO, Perth, Australia
- Ziyuan Wang: Doheny Eye Institute, Los Angeles, California; The University of California, Los Angeles, California
- Srinivas Sadda: Doheny Eye Institute, Los Angeles, California; The University of California, Los Angeles, California
- Zhihong Hu: Doheny Eye Institute, Los Angeles, California
|
18
|
Wang J, Shao W, Kim J. Combining MF-DFA and LSSVM for retina images classification. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101943] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
19
|
Shukla AK, Pandey RK, Pachori RB. A fractional filter based efficient algorithm for retinal blood vessel segmentation. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101883] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
20
|
Long S, Chen J, Hu A, Liu H, Chen Z, Zheng D. Microaneurysms detection in color fundus images using machine learning based on directional local contrast. Biomed Eng Online 2020; 19:21. [PMID: 32295576 PMCID: PMC7161183 DOI: 10.1186/s12938-020-00766-3] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 04/06/2020] [Indexed: 02/07/2023] Open
Abstract
Background: As one of the major complications of diabetes, diabetic retinopathy (DR) is a leading cause of visual impairment and blindness due to delayed diagnosis and intervention. Microaneurysms appear as the earliest symptom of DR. Accurate and reliable detection of microaneurysms in color fundus images is of great importance for DR screening.
Methods: A microaneurysm detection method using machine learning based on directional local contrast (DLC) is proposed for the early diagnosis of DR. First, blood vessels were enhanced and segmented using an improved enhancement function based on analyzing the eigenvalues of the Hessian matrix. Next, with blood vessels excluded, microaneurysm candidate regions were obtained using shape characteristics and connected-components analysis. After the image was segmented into patches, the features of each microaneurysm candidate patch were extracted, and each candidate patch was classified as microaneurysm or non-microaneurysm. The main contributions of our study are (1) making use of directional local contrast in microaneurysm detection for the first time, which improves microaneurysm classification, and (2) applying three different machine learning techniques for classification and comparing their performance for microaneurysm detection. The proposed algorithm was trained and tested on the e-ophtha MA database, and further tested on the independent DIARETDB1 database. Microaneurysm detection results on the two databases were evaluated at the lesion level and compared with existing algorithms.
Results: The proposed method achieved better performance than existing algorithms in accuracy and computation time. On the e-ophtha MA and DIARETDB1 databases, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve was 0.87 and 0.86, respectively. The free-response ROC (FROC) score on the two databases was 0.374 and 0.210, respectively. The computation time per image with resolution of 2544×1969, 1400×960 and 1500×1152 is 29 s, 3 s and 2.6 s, respectively.
Conclusions: The proposed method, using machine learning based on the directional local contrast of image patches, can effectively detect microaneurysms in color fundus images and provide an effective scientific basis for early clinical DR diagnosis.
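The directional local contrast idea can be sketched as follows: sample the intensity along discrete rays from a candidate patch's centre. A dark, roughly circular microaneurysm shows high centre-to-surround contrast in every direction, whereas a vessel shows near-zero contrast along its own axis. This is an illustrative reconstruction under assumed conventions (8 directions, ray length `radius`), not the authors' exact feature.

```python
import numpy as np

def directional_local_contrast(patch, radius=3):
    """Contrast between the patch centre and its surroundings, measured
    along 8 discrete directions. Returns one contrast value per
    direction, ordered E, SE, S, SW, W, NW, N, NE."""
    c = patch.shape[0] // 2
    centre = patch[c, c]
    directions = [(0, 1), (1, 1), (1, 0), (1, -1),
                  (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    contrasts = []
    for dr, dc in directions:
        # Mean intensity along a ray of `radius` pixels from the centre
        ray = [patch[c + k * dr, c + k * dc] for k in range(1, radius + 1)]
        contrasts.append(np.mean(ray) - centre)
    return np.array(contrasts)
```

A dark spot on a bright background yields uniformly high contrasts, while a dark vertical vessel yields near-zero contrast in the N/S directions and high contrast in the E/W directions, which is what lets the downstream classifier separate the two.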
Affiliation(s)
- Shengchun Long: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Jiali Chen: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
- Ante Hu: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Haipeng Liu: Research Center of Intelligent Healthcare, Faculty of Health and Life Science, Coventry University, Coventry, CV1 5RW, UK
- Zhiqing Chen: Eye Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Dingchang Zheng: Research Center of Intelligent Healthcare, Faculty of Health and Life Science, Coventry University, Coventry, CV1 5RW, UK
|
21
|
Retinal Blood Vessel Segmentation: A Semi-supervised Approach. PATTERN RECOGNITION AND IMAGE ANALYSIS 2019. [DOI: 10.1007/978-3-030-31321-0_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
22
|
Islam ST, Saha S, Rahaman GMA, Dutta D, Kanagasingam Y. An Efficient Binary Descriptor to Describe Retinal Bifurcation Point for Image Registration. PATTERN RECOGNITION AND IMAGE ANALYSIS 2019. [DOI: 10.1007/978-3-030-31332-6_47] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
23
|
A Semi-supervised Approach to Segment Retinal Blood Vessels in Color Fundus Photographs. Artif Intell Med 2019. [DOI: 10.1007/978-3-030-21642-9_44] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|