1. Li X, Wen X, Shang X, Liu J, Zhang L, Cui Y, Luo X, Zhang G, Xie J, Huang T, Chen Z, Lyu Z, Wu X, Lan Y, Meng Q. Identification of diabetic retinopathy classification using machine learning algorithms on clinical data and optical coherence tomography angiography. Eye (Lond) 2024;38:2813-2821. PMID: 38871934; PMCID: PMC11427469; DOI: 10.1038/s41433-024-03173-3.
Abstract
BACKGROUND To apply machine learning (ML) algorithms to perform multiclass diabetic retinopathy (DR) classification using both clinical data and optical coherence tomography angiography (OCTA). METHODS In this cross-sectional observational study, clinical data and OCTA parameters from 203 diabetic patients (203 eyes) were used to establish the ML models, and those from 169 diabetic patients (169 eyes) were used for independent external validation. The random forest, gradient boosting machine (GBM), deep learning and logistic regression algorithms were used to identify the presence of DR, referable DR (RDR) and vision-threatening DR (VTDR). Four different variable patterns based on clinical data and OCTA variables were examined. The algorithms' performance was evaluated using receiver operating characteristic curves, and the area under the curve (AUC) was used to assess predictive accuracy. RESULTS The random forest algorithm on OCTA+clinical data-based variables and OCTA+non-laboratory factor-based variables provided the highest AUC values for DR, RDR and VTDR. The GBM algorithm produced similar results, albeit with slightly lower AUC values. Leading predictors of DR status included vessel density, retinal thickness and GCC thickness, as well as the body mass index, waist-to-hip ratio and glucose-lowering treatment. CONCLUSIONS ML-based multiclass DR classification using OCTA and clinical data can provide reliable assistance for screening, referral, and management of DR populations.
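Purely as an illustration of the modelling setup this abstract describes (tree ensembles on tabular OCTA plus clinical variables, scored by ROC AUC for the DR, RDR and VTDR tasks), a minimal scikit-learn sketch on synthetic placeholder data is shown below; the feature names, sample sizes, and hyperparameters are assumptions, not the study's protocol.

```python
# Sketch only: random forest and GBM on tabular features for three binary DR tasks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 203  # development cohort size quoted in the abstract; the data here are synthetic
X = rng.normal(size=(n, 6))          # e.g., vessel density, retinal/GCC thickness, BMI, WHR, treatment
y = {"DR": rng.integers(0, 2, n),    # hypothetical binary labels for each task
     "RDR": rng.integers(0, 2, n),
     "VTDR": rng.integers(0, 2, n)}

for task, labels in y.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0, stratify=labels)
    for name, model in [("random forest", RandomForestClassifier(n_estimators=300, random_state=0)),
                        ("GBM", GradientBoostingClassifier(random_state=0))]:
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])  # AUC as in the abstract
        print(f"{task} / {name}: AUC = {auc:.2f}")
```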
Affiliation(s)
- Xiaoli Li: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xin Wen: Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Xianwen Shang: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Junbin Liu: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Liang Zhang: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Ying Cui: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xiaoyang Luo: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Guanrong Zhang: Statistics Section, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China
- Jie Xie: Department of Ophthalmology, Heyuan People's Hospital, Heyuan, China
- Tian Huang: Department of Ophthalmology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Zhifan Chen: Department of Ophthalmology, The Fourth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Zheng Lyu: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xiyu Wu: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yuqing Lan: Department of Ophthalmology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qianli Meng: Department of Ophthalmology, Guangdong Eye Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
2. Pavithra S, Jaladi D, Tamilarasi K. Optical imaging for diabetic retinopathy diagnosis and detection using ensemble models. Photodiagnosis Photodyn Ther 2024;48:104259. PMID: 38944405; DOI: 10.1016/j.pdpdt.2024.104259.
Abstract
Diabetes, characterized by heightened blood sugar levels, can lead to diabetic retinopathy (DR), a condition in which elevated blood sugar adversely affects the retinal blood vessels. DR is thought to be the most common cause of blindness in people with diabetes, particularly among working-age individuals living in low-income nations. People with type 1 or type 2 diabetes may develop this illness, and the risk rises with the duration of diabetes and inadequate blood sugar management. Traditional approaches to the early identification of DR have limitations. In this research, a model based on a convolutional neural network (CNN) is used in a novel way to diagnose DR. The proposed model uses a number of deep learning (DL) models, namely VGG19, ResNet50, and InceptionV3, to extract features. After concatenation, these features are sent through the CNN algorithm for classification. By combining the advantages of several models, ensemble approaches can be effective tools for detecting DR and can increase overall performance and resilience. Ensemble approaches such as the combination of VGG19, InceptionV3, and ResNet50 can achieve high accuracy on tasks such as classification and image recognition. The proposed model is evaluated using a publicly accessible collection of fundus images. VGG19, ResNet50, and InceptionV3 differ in their neural network architectures, feature extraction capabilities, object detection methods, and approaches to retinal delineation. VGG19 may excel in capturing fine details, ResNet50 in recognizing complex patterns, and InceptionV3 in efficiently capturing multi-scale features. Their combined use in an ensemble approach can provide a comprehensive analysis of retinal images, aiding in the delineation of retinal regions and the identification of abnormalities associated with DR. For instance, microaneurysms, the earliest signs of DR, often require precise detection of subtle vascular abnormalities. VGG19's proficiency in capturing fine details allows for the identification of these minute changes in retinal morphology. On the other hand, ResNet50's strength lies in recognizing intricate patterns, making it effective in detecting neovascularization and complex haemorrhagic lesions. Meanwhile, InceptionV3's multi-scale feature extraction enables comprehensive analysis, which is crucial for assessing macular oedema and ischaemic changes across different retinal layers.
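A minimal sketch of the feature-concatenation ensemble the abstract describes is given below, assuming a recent torchvision (the `weights=` constructor keyword); the classification head, the 299-pixel input size, and the five-class output are illustrative assumptions rather than the published architecture, and in practice ImageNet weights would be loaded instead of `weights=None`.

```python
import torch
import torch.nn as nn
from torchvision import models

class BackboneEnsemble(nn.Module):
    """Concatenates VGG19, ResNet50 and InceptionV3 features before a small head."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.vgg = models.vgg19(weights=None)          # ImageNet weights would be used in practice
        self.vgg.classifier = nn.Identity()            # -> 25088-d after VGG's internal flatten
        self.resnet = models.resnet50(weights=None)
        self.resnet.fc = nn.Identity()                 # -> 2048-d
        self.inception = models.inception_v3(weights=None, aux_logits=False, init_weights=True)
        self.inception.fc = nn.Identity()              # -> 2048-d
        self.head = nn.Sequential(                     # fused classifier stage (a simplification)
            nn.Linear(25088 + 2048 + 2048, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, num_classes))

    def forward(self, x):                              # x: (B, 3, 299, 299) fundus images
        feats = torch.cat([self.vgg(x), self.resnet(x), self.inception(x)], dim=1)
        return self.head(feats)

model = BackboneEnsemble(num_classes=5).eval()
print(model(torch.randn(2, 3, 299, 299)).shape)        # (2, 5)
```

The 299 × 299 input is chosen because InceptionV3 expects large inputs, while VGG19 and ResNet50 tolerate it through their adaptive pooling layers.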
Affiliation(s)
- S Pavithra: School of Computer Science and Engineering, VIT University, Chennai, Tamil Nadu, India
- Deepika Jaladi: School of Computer Science and Engineering, VIT University, Chennai, Tamil Nadu, India
- K Tamilarasi: School of Computer Science and Engineering, VIT University, Chennai, Tamil Nadu, India
3. Mohamed Z, Vankudre GS, Ayyappan JP, Noushad B, Alzeedi AN, Alazzani SS, Alkaabi AJ. Vision-Related Quality of Life Among Diabetic Retinopathy Patients in a Hospital-Based Population in the Sultanate of Oman. Clinical Optometry 2024;16:123-129. PMID: 38784861; PMCID: PMC11114135; DOI: 10.2147/opto.s462498.
Abstract
Background The global prevalence of diabetic retinopathy (DR) among individuals with diabetes is 22.27%. This highlights the burden of retinopathy likely to develop within the at-risk population, which can have a detrimental impact on an individual's quality of life (QoL). The aim of this study was to assess vision-related QoL in individuals with DR in a hospital-based population in the Al-Buraimi governorate, Sultanate of Oman. Methods The study was conducted in the Ophthalmology Outpatient Department of Al Buraimi Hospital and Polyclinic. This study enrolled 218 patients (114 males, 104 females) diagnosed with DR. The NEI-VFQ-25 questionnaire was adopted in this study. The patients were classified into different groups according to their type of diabetes and other relevant demographic information. Results A total of 218 patients responded to the NEI-VFQ-25 questionnaire. The mean age of the participants was 57.49 ± 12.3 years; 52.3% were male, and 47.7% were female. The overall QoL score was 41.53 ± 20.8. Patients aged more than 75 years had the lowest QoL scores compared with the other age groups (p = 0.02). The results showed that the duration of diabetes had no significant impact on the overall QoL scores (p = 0.06). A higher QoL score was observed among patients with type II diabetes mellitus (DM) than among those with type I diabetes mellitus (p = 0.01). Patients diagnosed with proliferative DR (PDR) had a significantly lower QoL score than those diagnosed at other stages (p < 0.001). Conclusion The QoL of the population with DR is negatively affected by various factors, including demographics, disease severity, and type of diabetes. It is important to consider these factors to enhance QoL in patients with DR. Regular evaluation of an individual's QoL is beneficial for both physicians and healthcare teams.
Affiliation(s)
- Zoelfigar Mohamed: Department of Optometry, College of Health Sciences, University of Buraimi, Al Buraimi, Sultanate of Oman
- Gopi Suesh Vankudre: Department of Optometry, College of Health Sciences, University of Buraimi, Al Buraimi, Sultanate of Oman
- Janitha Plackal Ayyappan: Department of Optometry, College of Health Sciences, University of Buraimi, Al Buraimi, Sultanate of Oman
- Babu Noushad: Department of Optometry, College of Health Sciences, University of Buraimi, Al Buraimi, Sultanate of Oman
- Aisha Juma Alkaabi: Department of Optometry, College of Health Sciences, University of Buraimi, Al Buraimi, Sultanate of Oman
4. Ma F, Liu X, Wang S, Li S, Dai C, Meng J. CSANet: a lightweight channel and spatial attention neural network for grading diabetic retinopathy with optical coherence tomography angiography. Quant Imaging Med Surg 2024;14:1820-1834. PMID: 38415109; PMCID: PMC10895115; DOI: 10.21037/qims-23-1270.
Abstract
Background Diabetic retinopathy (DR) is one of the most common eye diseases. Convolutional neural networks (CNNs) have proven to be a powerful tool for learning DR features; however, accurate DR grading remains challenging due to the small lesions in optical coherence tomography angiography (OCTA) images and the small number of samples. Methods In this article, we developed a novel deep-learning framework to achieve the fine-grained classification of DR; that is, the lightweight channel and spatial attention network (CSANet). Our CSANet comprises two modules: the baseline model, and the hybrid attention module (HAM) based on spatial attention and channel attention. The spatial attention module is used to mine small lesions and obtain a set of spatial position weights to address the problem of small lesions being ignored during the convolution process. The channel attention module uses a set of channel weights to focus on useful features and suppress irrelevant features. Results Extensive experiments on the OCTA-DR and diabetic retinopathy analysis challenge (DRAC) 2022 data sets showed that the CSANet achieved state-of-the-art DR grading results, demonstrating the effectiveness of the proposed model. The CSANet had an accuracy rate of 97.41% for the OCTA-DR data set and 85.71% for the DRAC 2022 data set. Conclusions Extensive experiments using the OCTA-DR and DRAC 2022 data sets showed that the proposed model effectively mitigated the problems of mutual confusion between DR of different severities and of small lesions being neglected in the convolution process, and thus improved the accuracy of DR classification.
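The exact CSANet design is not reproduced here; the sketch below is a generic CBAM-style hybrid attention block (channel attention followed by spatial attention) in PyTorch that illustrates the mechanism the abstract describes. Channel counts, reduction ratio, and kernel size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Re-weights feature channels from globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))           # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))            # global max pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w

class SpatialAttention(nn.Module):
    """Per-pixel weighting that can emphasise small lesion regions."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class HybridAttention(nn.Module):
    """Channel attention followed by spatial attention (a HAM-like stand-in)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()
    def forward(self, x):
        return self.sa(self.ca(x))

feat = torch.randn(1, 32, 56, 56)                    # feature map from a lightweight backbone
print(HybridAttention(32)(feat).shape)               # same shape, attention-weighted
```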
Affiliation(s)
- Fei Ma: School of Computer Science, Qufu Normal University, Rizhao, China
- Xiao Liu: School of Computer Science, Qufu Normal University, Rizhao, China
- Shengbo Wang: School of Computer Science, Qufu Normal University, Rizhao, China
- Sien Li: School of Computer Science, Qufu Normal University, Rizhao, China
- Cuixia Dai: College Science, Shanghai Institute of Technology, Shanghai, China
- Jing Meng: School of Computer Science, Qufu Normal University, Rizhao, China
5. Ahmed TS, Shah J, Zhen YNB, Chua J, Wong DWK, Nusinovici S, Tan R, Tan G, Schmetterer L, Tan B. Ocular microvascular complications in diabetic retinopathy: insights from machine learning. BMJ Open Diabetes Res Care 2024;12:e003758. PMID: 38167606; PMCID: PMC10773391; DOI: 10.1136/bmjdrc-2023-003758.
Abstract
INTRODUCTION Diabetic retinopathy (DR) is a leading cause of preventable blindness among working-age adults, primarily driven by ocular microvascular complications of chronic hyperglycemia. Comprehending the complex relationship between microvascular changes in the eye and disease progression poses challenges; traditional methods that assume linear or logistic relationships may not adequately capture the intricate interactions between these changes and disease advancement. Hence, the aim of this study was to evaluate the microvascular involvement of diabetes mellitus (DM) and non-proliferative DR with the implementation of non-parametric machine learning methods. RESEARCH DESIGN AND METHODS We conducted a retrospective cohort study that included optical coherence tomography angiography (OCTA) images collected from a healthy group (196 eyes), a DM no DR group (120 eyes), a mild DR group (71 eyes), and a moderate DR group (66 eyes). We implemented a non-parametric machine learning method for four classification tasks that used parameters extracted from the OCTA images as predictors: DM no DR versus healthy, mild DR versus DM no DR, moderate DR versus mild DR, and any DR versus no DR. SHapley Additive exPlanations (SHAP) values were used to determine the importance of these parameters in the classification. RESULTS We found that large choriocapillaris flow deficits were the most important for healthy versus DM no DR and became less important in eyes with mild or moderate DR. The superficial microvasculature was important for the healthy versus DM no DR and mild DR versus moderate DR tasks, but not for the DM no DR versus mild DR task, the stage at which the deep microvasculature plays an important role. The foveal avascular zone metric was in general less affected, but its involvement increased with worsening DR. CONCLUSIONS The findings from this study provide valuable insights into the microvascular involvement of DM and DR, facilitating the development of early detection methods and intervention strategies.
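As a rough illustration of pairing a non-parametric classifier with SHAP attributions on OCTA-derived parameters, the sketch below uses a gradient boosting classifier and `shap.TreeExplainer` on synthetic data; the feature names are hypothetical stand-ins for the parameters discussed in the abstract, and the study's actual model family and preprocessing may differ.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["cc_flow_deficit", "scp_density", "dcp_density", "faz_area"]  # hypothetical names
X = rng.normal(size=(316, len(features)))      # stand-in for healthy vs DM-no-DR eyes
y = rng.integers(0, 2, size=316)               # 0 = healthy, 1 = DM no DR (placeholder labels)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)    # one attribution per sample and feature
importance = np.abs(sv).mean(axis=0)           # mean |SHAP| as a global importance ranking
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.3f}")
```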
Affiliation(s)
- Thiara S Ahmed: Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore
- Yvonne N B Zhen: Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore
- Jacqueline Chua: Singapore Eye Research Institute, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Damon W K Wong: Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Simon Nusinovici: Singapore Eye Research Institute, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Rose Tan: Singapore Eye Research Institute, Singapore
- Gavin Tan: Singapore Eye Research Institute, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Bingyao Tan: Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore; Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore
6. Zang P, Hormel TT, Wang J, Guo Y, Bailey ST, Flaxel CJ, Huang D, Hwang TS, Jia Y. Interpretable Diabetic Retinopathy Diagnosis Based on Biomarker Activation Map. IEEE Trans Biomed Eng 2024;71:14-25. PMID: 37405891; PMCID: PMC10796196; DOI: 10.1109/tbme.2023.3290541.
Abstract
OBJECTIVE Deep learning classifiers provide the most accurate means of automatically diagnosing diabetic retinopathy (DR) based on optical coherence tomography (OCT) and its angiography (OCTA). The power of these models is attributable in part to the inclusion of hidden layers that provide the complexity required to achieve a desired task. However, hidden layers also render algorithm outputs difficult to interpret. Here we introduce a novel biomarker activation map (BAM) framework based on generative adversarial learning that allows clinicians to verify and understand classifiers' decision-making. METHODS A data set including 456 macular scans was graded as non-referable or referable DR based on current clinical standards. A DR classifier that was used to evaluate our BAM was first trained on this data set. The BAM generation framework was designed by combining two U-shaped generators to provide meaningful interpretability to this classifier. The main generator was trained to take referable scans as input and produce an output that would be classified by the classifier as non-referable. The BAM is then constructed as the difference image between the output and input of the main generator. To ensure that the BAM only highlights classifier-utilized biomarkers, an assistant generator was trained to do the opposite, producing scans that would be classified as referable by the classifier from non-referable scans. RESULTS The generated BAMs highlighted known pathologic features including nonperfusion area and retinal fluid. CONCLUSION/SIGNIFICANCE A fully interpretable classifier based on these highlights could help clinicians better utilize and verify automated DR diagnosis.
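The following fragment only sketches the core BAM idea on the assumption that a classifier `clf` and a trained main generator `gen` already exist: the map is the difference between a referable scan and its generated non-referable counterpart, and the main generator is trained to fool the classifier while staying close to its input. The U-shaped generator architectures, the assistant generator, and the full adversarial training loop are omitted.

```python
import torch
import torch.nn.functional as F

def biomarker_activation_map(gen, x_referable):
    """BAM = |input - generator output|, highlighting classifier-relevant pathology."""
    with torch.no_grad():
        x_hat = gen(x_referable)
    return (x_referable - x_hat).abs()

def main_generator_loss(clf, gen, x_referable, nonref_label=0):
    """Push generated scans toward the non-referable class while changing the input as little as possible."""
    x_hat = gen(x_referable)
    target = torch.full((x_referable.size(0),), nonref_label, dtype=torch.long)
    fool_loss = F.cross_entropy(clf(x_hat), target)   # classifier should call the output non-referable
    similarity_loss = F.l1_loss(x_hat, x_referable)   # keep the generated scan close to the input
    return fool_loss + similarity_loss
```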
Affiliation(s)
- Pengxiao Zang: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Tristan T. Hormel: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Jie Wang: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Yukun Guo: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Steven T. Bailey: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Christina J. Flaxel: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- David Huang: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
- Thomas S. Hwang: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA
- Yali Jia: Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239 USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239 USA
7. Uppamma P, Bhattacharya S. A multidomain bio-inspired feature extraction and selection model for diabetic retinopathy severity classification: an ensemble learning approach. Sci Rep 2023;13:18572. PMID: 37903967; PMCID: PMC10616283; DOI: 10.1038/s41598-023-45886-7.
Abstract
Diabetic retinopathy (DR) is one of the leading causes of blindness globally. Early detection of this condition is essential for preventing the loss of eyesight caused by diabetes mellitus that is left untreated for an extended period. This paper proposes the design of an augmented bio-inspired multidomain feature extraction and selection model for diabetic retinopathy severity estimation using an ensemble learning process. The proposed approach begins by identifying DR severity levels from retinal images in which the optic disc, macula, blood vessels, exudates, and hemorrhages are segmented using an adaptive thresholding process. Once the images are segmented, multidomain features are extracted from the retinal images, including frequency, entropy, cosine, Gabor, and wavelet components. These data were fed into a novel Modified Moth Flame Optimization-based feature selection method that assisted in optimal feature selection. Finally, an ensemble model using various machine learning (ML) algorithms, which included Naive Bayes, K-Nearest Neighbours, Support Vector Machine, Multilayer Perceptron, Random Forests, and Logistic Regression, was used to identify the various severity complications of DR. The experiments on different openly accessible data sources have shown that the proposed method outperformed conventional methods and achieved an accuracy of 96.5% in identifying DR severity levels.
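A compact sketch of the final ensemble stage alone is shown below: the six classifiers named in the abstract combined by soft voting in scikit-learn. The segmentation, multidomain feature extraction, and Modified Moth Flame Optimization selection steps are not reproduced, and the synthetic feature matrix, label set, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))               # selected multidomain features (placeholder)
y = rng.integers(0, 5, size=400)             # five DR severity grades (placeholder)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
                ("mlp", MLPClassifier(max_iter=500)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)))],
    voting="soft")                           # average predicted probabilities across members
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=3).mean())
```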
Affiliation(s)
- Posham Uppamma: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
- Sweta Bhattacharya: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
8. Ebrahimi B, Le D, Abtahi M, Dadzie AK, Lim JI, Chan RVP, Yao X. Optimizing the OCTA layer fusion option for deep learning classification of diabetic retinopathy. Biomed Opt Express 2023;14:4713-4724. PMID: 37791267; PMCID: PMC10545199; DOI: 10.1364/boe.495999.
Abstract
The purpose of this study is to evaluate layer fusion options for deep learning classification of optical coherence tomography (OCT) angiography (OCTA) images. A convolutional neural network (CNN) end-to-end classifier was utilized to classify OCTA images from healthy control subjects and diabetic patients with no retinopathy (NoDR) and non-proliferative diabetic retinopathy (NPDR). For each eye, three en-face OCTA images were acquired from the superficial capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) layers. The performances of the CNN classifier with individual layer inputs and multi-layer fusion architectures, including early-fusion, intermediate-fusion, and late-fusion, were quantitatively compared. For individual layer inputs, the superficial OCTA was observed to have the best performance, with 87.25% accuracy, 78.26% sensitivity, and 90.10% specificity, to differentiate control, NoDR, and NPDR. For multi-layer fusion options, the best option is the intermediate-fusion architecture, which achieved 92.65% accuracy, 87.01% sensitivity, and 94.37% specificity. To interpret the deep learning performance, the Gradient-weighted Class Activation Mapping (Grad-CAM) was utilized to identify spatial characteristics for OCTA classification. Comparative analysis indicates that the layer data fusion options can affect the performance of deep learning classification, and the intermediate-fusion approach is optimal for OCTA classification of DR.
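To make the fusion terminology concrete, the sketch below shows one plausible intermediate-fusion layout: separate convolutional branches for the SCP, DCP, and CC en-face images whose feature maps are concatenated mid-network before a shared head. Branch depths, channel counts, and the input size are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def branch():
    """Small per-layer feature extractor for one en-face OCTA image."""
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

class IntermediateFusionNet(nn.Module):
    def __init__(self, num_classes: int = 3):        # control / NoDR / NPDR
        super().__init__()
        self.scp, self.dcp, self.cc = branch(), branch(), branch()
        self.head = nn.Sequential(nn.Conv2d(96, 64, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))
    def forward(self, scp, dcp, cc):
        fused = torch.cat([self.scp(scp), self.dcp(dcp), self.cc(cc)], dim=1)  # mid-network fusion
        return self.head(fused)

net = IntermediateFusionNet()
x = torch.randn(2, 1, 224, 224)                       # one dummy en-face image per plexus
print(net(x, x, x).shape)                             # (2, 3)
```

Early fusion would instead stack the three images as input channels to a single branch, and late fusion would average per-branch predictions; the intermediate variant sits between the two, which is the option the study found optimal.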
Affiliation(s)
- Behrouz Ebrahimi: Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le: Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mansour Abtahi: Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K. Dadzie: Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Jennifer I. Lim: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- R. V. Paul Chan: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao: Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
9. Alshahrani M, Al-Jabbar M, Senan EM, Ahmed IA, Saif JAM. Hybrid Methods for Fundus Image Analysis for Diagnosis of Diabetic Retinopathy Development Stages Based on Fusion Features. Diagnostics (Basel) 2023;13:2783. PMID: 37685321; PMCID: PMC10486790; DOI: 10.3390/diagnostics13172783.
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that damages the delicate blood vessels of the retina and leads to blindness. Ophthalmologists rely on imaging the fundus to diagnose the retina. The process takes a long time and needs skilled doctors to diagnose and determine the stage of DR. Therefore, automatic techniques using artificial intelligence play an important role in analyzing fundus images for the detection of the stages of DR development. However, diagnosis using artificial intelligence techniques is a difficult task that passes through many stages, and the extraction of representative features is important for reaching satisfactory results. Convolutional Neural Network (CNN) models play an important and distinct role in extracting features with high accuracy. In this study, fundus images were used for the detection of the developmental stages of DR by two proposed methods, each with two systems. The first proposed method uses GoogLeNet with SVM and ResNet-18 with SVM. The second method uses Feed-Forward Neural Networks (FFNN) based on hybrid features extracted first by GoogLeNet, Fuzzy color histogram (FCH), Gray Level Co-occurrence Matrix (GLCM), and Local Binary Pattern (LBP), and then by ResNet-18, FCH, GLCM, and LBP. All the proposed methods obtained superior results. The FFNN network with hybrid features of ResNet-18, FCH, GLCM, and LBP obtained 99.7% accuracy, 99.6% precision, 99.6% sensitivity, 100% specificity, and 99.86% AUC.
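A hedged sketch of the hybrid-feature idea (a CNN embedding concatenated with handcrafted texture descriptors, then a conventional classifier) is given below, using GoogLeNet from torchvision and LBP/GLCM features from scikit-image (version 0.19 or later for `graycomatrix`); the fuzzy color histogram and the FFNN classifier are omitted, and an SVM stands in for the final stage.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def cnn_embedding(rgb_batch: torch.Tensor) -> np.ndarray:
    """1024-d GoogLeNet embedding per image (ImageNet weights would be loaded in practice)."""
    net = models.googlenet(weights=None, aux_logits=False, init_weights=True)
    net.fc = nn.Identity()
    net.eval()
    with torch.no_grad():
        return net(rgb_batch).numpy()

def handcrafted_features(gray_uint8: np.ndarray) -> np.ndarray:
    """LBP histogram plus a few GLCM statistics for one grayscale fundus image."""
    lbp = local_binary_pattern(gray_uint8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)   # 10 uniform LBP bins
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_props = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([lbp_hist, glcm_props])

# Dummy data standing in for preprocessed fundus crops.
rgb = torch.rand(4, 3, 224, 224)
gray = (rgb.mean(dim=1).numpy() * 255).astype(np.uint8)
X = np.hstack([cnn_embedding(rgb), np.stack([handcrafted_features(g) for g in gray])])
y = np.array([0, 1, 2, 3])                     # placeholder DR stage labels
clf = SVC(kernel="rbf").fit(X, y)              # hybrid-feature classifier stand-in
print(X.shape, clf.predict(X[:1]))
```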
Affiliation(s)
- Mohammed Alshahrani: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Al-Jabbar: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
10. Khalili Pour E, Rezaee K, Azimi H, Mirshahvalad SM, Jafari B, Fadakar K, Faghihi H, Mirshahi A, Ghassemi F, Ebrahimiadib N, Mirghorbani M, Bazvand F, Riazi-Esfahani H, Riazi Esfahani M. Automated machine learning-based classification of proliferative and non-proliferative diabetic retinopathy using optical coherence tomography angiography vascular density maps. Graefes Arch Clin Exp Ophthalmol 2023;261:391-399. PMID: 36050474; DOI: 10.1007/s00417-022-05818-z.
Abstract
PURPOSE The study aims to classify eyes with proliferative diabetic retinopathy (PDR) and non-proliferative diabetic retinopathy (NPDR) based on optical coherence tomography angiography (OCTA) vascular density maps using a supervised machine learning algorithm. METHODS OCTA vascular density maps (at the superficial capillary plexus (SCP), deep capillary plexus (DCP), and total retina (R) levels) of 148 eyes from 78 patients with diabetic retinopathy (45 PDR and 103 NPDR) were used to classify the images into NPDR and PDR groups based on a supervised machine learning algorithm known as the support vector machine (SVM) classifier, optimized by a genetic evolutionary algorithm. RESULTS The implemented algorithm in three different models reached up to 85% accuracy in classifying PDR and NPDR in all three levels of vascular density maps. The deep retinal layer vascular density map demonstrated the best performance, with a 90% accuracy in discriminating between PDR and NPDR. CONCLUSIONS The current study on a limited number of patients with diabetic retinopathy demonstrated that a supervised machine learning-based method known as SVM can be used to differentiate PDR and NPDR patients using OCTA vascular density maps.
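The toy sketch below illustrates the general idea of tuning an SVM with an evolutionary search: a tiny genetic algorithm over log-scaled (C, gamma) values scored by cross-validated accuracy. The paper's actual genome encoding, genetic operators, and OCTA vascular-density inputs are not reproduced; the data are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(148, 64))                 # 148 eyes, flattened density-map features (placeholder)
y = rng.integers(0, 2, size=148)               # 0 = NPDR, 1 = PDR (placeholder labels)

def fitness(genome):
    C, gamma = 10.0 ** genome                  # genome stores log10(C) and log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(12, 2))   # initial population of genomes
for _ in range(10):                                           # generations
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-6:]]                    # keep the fittest half
    idx = rng.integers(0, 6, size=(6, 2))                     # pick a parent per gene
    children = parents[idx, [0, 1]]                           # uniform crossover
    children += rng.normal(scale=0.3, size=children.shape)    # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best log10(C), log10(gamma):", best, "CV accuracy:", fitness(best))
```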
Affiliation(s)
- Elias Khalili Pour: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Khosro Rezaee: Department of Biomedical Engineering, Meybod University, Meybod, Iran
- Hossein Azimi: Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Seyed Mohammad Mirshahvalad: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Behzad Jafari: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Kaveh Fadakar: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Hooshang Faghihi: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Ahmad Mirshahi: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Fariba Ghassemi: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Nazanin Ebrahimiadib: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Masoud Mirghorbani: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Fatemeh Bazvand: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Hamid Riazi-Esfahani: Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
- Mohammad Riazi Esfahani: Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, CA, USA
11. Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions. Diagnostics (Basel) 2023;13(2):326. PMID: 36673135; PMCID: PMC9857993; DOI: 10.3390/diagnostics13020326.
Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma by assessment of qualitative and quantitative microvascular changes in the different retinal layers and radial peripapillary layer non-invasively, individually, and efficiently. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied in OCT-A image analysis in recent years and achieved good performance for different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical values for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still in the "proof-of-concept" stage due to several limitations, such as small training sample size, lack of standardized data preprocessing, insufficient testing in external datasets, and absence of standardized results interpretation. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of the clinical deployment, and discuss future research directions.
12. Schottenhamml J, Hohberger B, Mardin CY. Applications of Artificial Intelligence in Optical Coherence Tomography Angiography Imaging. Klin Monbl Augenheilkd 2022;239:1412-1426. PMID: 36493762; DOI: 10.1055/a-1961-7137.
Abstract
Optical coherence tomography angiography (OCTA) and artificial intelligence (AI) are two emerging fields that complement each other. OCTA enables the noninvasive, in vivo, 3D visualization of retinal blood flow with a micrometer resolution, which has been impossible with other imaging modalities. As it does not need dye-based injections, it is also a safer procedure for patients. AI has excited great interest in many fields of daily life, by enabling automatic processing of huge amounts of data with a performance that greatly surpasses previous algorithms. It has been used in many breakthrough studies in recent years, such as the finding that AlphaGo can beat humans in the strategic board game of Go. This paper will give a short introduction into both fields and will then explore the manifold applications of AI in OCTA imaging that have been presented in the recent years. These range from signal generation over signal enhancement to interpretation tasks like segmentation and classification. In all these areas, AI-based algorithms have achieved state-of-the-art performance that has the potential to improve standard care in ophthalmology when integrated into the daily clinical routine.
Affiliation(s)
- Julia Schottenhamml: Augenklinik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Bettina Hohberger: Augenklinik, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
13. Zang P, Hormel TT, Hwang TS, Bailey ST, Huang D, Jia Y. Deep-Learning-Aided Diagnosis of Diabetic Retinopathy, Age-Related Macular Degeneration, and Glaucoma Based on Structural and Angiographic OCT. Ophthalmology Science 2022;3:100245. PMID: 36579336; PMCID: PMC9791595; DOI: 10.1016/j.xops.2022.100245.
Abstract
Purpose Timely diagnosis of eye diseases is paramount to obtaining the best treatment outcomes. OCT and OCT angiography (OCTA) have several advantages that lend themselves to early detection of ocular pathology; furthermore, the techniques produce large, feature-rich data volumes. However, the full clinical potential of both OCT and OCTA is stymied when complex data acquired using the techniques must be manually processed. Here, we propose an automated diagnostic framework based on structural OCT and OCTA data volumes that could substantially support the clinical application of these technologies. Design Cross sectional study. Participants Five hundred twenty-six OCT and OCTA volumes were scanned from the eyes of 91 healthy participants, 161 patients with diabetic retinopathy (DR), 95 patients with age-related macular degeneration (AMD), and 108 patients with glaucoma. Methods The diagnosis framework was constructed based on semisequential 3-dimensional (3D) convolutional neural networks. The trained framework classifies combined structural OCT and OCTA scans as normal, DR, AMD, or glaucoma. Fivefold cross-validation was performed, with 60% of the data reserved for training, 20% for validation, and 20% for testing. The training, validation, and test data sets were independent, with no shared patients. For scans diagnosed as DR, AMD, or glaucoma, 3D class activation maps were generated to highlight subregions that were considered important by the framework for automated diagnosis. Main Outcome Measures The area under the curve (AUC) of the receiver operating characteristic curve and quadratic-weighted kappa were used to quantify the diagnostic performance of the framework. Results For the diagnosis of DR, the framework achieved an AUC of 0.95 ± 0.01. For the diagnosis of AMD, the framework achieved an AUC of 0.98 ± 0.01. For the diagnosis of glaucoma, the framework achieved an AUC of 0.91 ± 0.02. Conclusions Deep learning frameworks can provide reliable, sensitive, interpretable, and fully automated diagnosis of eye diseases. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
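The framework's semisequential 3D design is not reproduced here; the sketch below is only a generic 3D convolutional classifier over a two-channel (structural OCT plus OCTA) volume with a four-way output, to make the input and output shapes of such a pipeline concrete. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class Simple3DClassifier(nn.Module):
    """Two 3D conv stages over a 2-channel (structural OCT + OCTA) volume, then a
    4-way head (normal / DR / AMD / glaucoma)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(16, num_classes))

    def forward(self, vol):                            # vol: (B, 2, depth, height, width)
        return self.head(self.features(vol))

model = Simple3DClassifier()
print(model(torch.randn(1, 2, 64, 64, 64)).shape)      # (1, 4)
```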
Affiliation(s)
- Pengxiao Zang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon
- Tristan T. Hormel: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Thomas S. Hwang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Steven T. Bailey: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- David Huang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yali Jia: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon. Correspondence: Yali Jia, PhD, Casey Eye Institute & Department of Biomedical Engineering, Oregon Health & Science University, 515 SW Campus Dr., CEI 3154, Portland, OR 97239-4197
14. Self-supervised patient-specific features learning for OCT image classification. Med Biol Eng Comput 2022;60:2851-2863. DOI: 10.1007/s11517-022-02627-8.
15. Diagnosing Diabetic Retinopathy in OCTA Images Based on Multilevel Information Fusion Using a Deep Learning Framework. Comput Math Methods Med 2022;2022:4316507. PMID: 35966243; PMCID: PMC9371870; DOI: 10.1155/2022/4316507.
Abstract
Objective As an extension of optical coherence tomography (OCT), optical coherence tomographic angiography (OCTA) provides information on blood flow status at the microlevel and is sensitive to changes in the fundus vessels. However, due to the distinct imaging mechanism of OCTA, existing models, which are primarily used for analyzing fundus images, do not work well on OCTA images. Effectively extracting and analyzing the information in OCTA images remains challenging. To this end, a deep learning framework that fuses multilevel information in OCTA images is proposed in this study. The effectiveness of the proposed model was demonstrated in the task of diabetic retinopathy (DR) classification. Method First, a U-Net-based segmentation model was proposed to label the boundaries of large retinal vessels and the foveal avascular zone (FAZ) in OCTA images. Then, we designed an isolated concatenated block (ICB) structure to extract and fuse information from the original OCTA images and segmentation results at different fusion levels. Results The experiments were conducted on 301 OCTA images. Of these images, 244 were labeled by ophthalmologists as normal images, and 57 were labeled as DR images. An accuracy of 93.1% and a mean intersection over union (mIOU) of 77.1% were achieved using the proposed large vessel and FAZ segmentation model. In the ablation experiment with 6-fold validation, the proposed deep learning framework that combines the isolated and concatenated convolution processes significantly improved the DR diagnosis accuracy. Moreover, inputting the merged images of the original OCTA images and segmentation results further improved the model performance. Finally, a DR diagnosis accuracy of 88.1% (95% CI ±3.6%) and an area under the curve (AUC) of 0.92 were achieved using our proposed classification model, which significantly outperforms state-of-the-art classification models. As a comparison, an accuracy of 83.7% (95% CI ±1.5%) and an AUC of 0.76 were obtained using EfficientNet. Significance The visualization results show that the FAZ and the vascular region close to the FAZ provide more information for the model than the farther surrounding area. Furthermore, this study demonstrates that a clinically informed deep learning model design is not only able to effectively assist in diagnosis but can also help to locate new indicators for certain illnesses.
16. The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey. Bioengineering (Basel) 2022;9(8):366. PMID: 36004891; PMCID: PMC9405367; DOI: 10.3390/bioengineering9080366.
Abstract
Traditional dilated ophthalmoscopy can reveal diseases, such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, while the latter is recognized as a world-wide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the variable imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases will be surveyed. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. The work done on this survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.
17. Zang P, Hormel TT, Wang X, Tsuboi K, Huang D, Hwang TS, Jia Y. A Diabetic Retinopathy Classification Framework Based on Deep-Learning Analysis of OCT Angiography. Transl Vis Sci Technol 2022;11(7):10. PMID: 35822949; PMCID: PMC9288155; DOI: 10.1167/tvst.11.7.10.
Abstract
Purpose Reliable classification of referable and vision-threatening diabetic retinopathy (DR) is essential for patients with diabetes to prevent blindness. Optical coherence tomography (OCT) and its angiography (OCTA) have several advantages over fundus photographs. We evaluated a deep-learning-aided DR classification framework using volumetric OCT and OCTA. Methods Four hundred fifty-six OCT and OCTA volumes were scanned from eyes of 50 healthy participants and 305 patients with diabetes. Retina specialists labeled the eyes as non-referable (nrDR), referable (rDR), or vision-threatening DR (vtDR). Each eye underwent a 3 × 3-mm scan using a commercial 70 kHz spectral-domain OCT system. We developed a DR classification framework and trained it using volumetric OCT and OCTA to classify eyes into rDR and vtDR. For the scans identified as rDR or vtDR, 3D class activation maps were generated to highlight the subregions which were considered important by the framework for DR classification. Results For rDR classification, the framework achieved a 0.96 ± 0.01 area under the receiver operating characteristic curve (AUC) and 0.83 ± 0.04 quadratic-weighted kappa. For vtDR classification, the framework achieved a 0.92 ± 0.02 AUC and 0.73 ± 0.04 quadratic-weighted kappa. In addition, the multiple DR classification (non-rDR, rDR but non-vtDR, or vtDR) achieved a 0.83 ± 0.03 quadratic-weighted kappa. Conclusions A deep learning framework based only on OCT and OCTA can provide specialist-level DR classification using only a single imaging modality. Translational Relevance The proposed framework can be used to develop a clinically valuable automated DR diagnosis system because of the specialist-level performance shown in this study.
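For readers less familiar with the two headline metrics, the short sketch below shows how ROC AUC and quadratic-weighted kappa are computed with scikit-learn; the labels and scores are made-up toy values, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

y_true_rdr = np.array([0, 0, 1, 1, 1, 0])               # non-referable vs referable (toy labels)
rdr_scores = np.array([0.1, 0.4, 0.8, 0.9, 0.6, 0.2])    # classifier output probabilities (toy)
print("rDR AUC:", roc_auc_score(y_true_rdr, rdr_scores))

grades_true = np.array([0, 1, 2, 2, 1, 0])               # 0 = non-rDR, 1 = rDR, 2 = vtDR (toy)
grades_pred = np.array([0, 1, 1, 2, 1, 0])
print("quadratic-weighted kappa:",
      cohen_kappa_score(grades_true, grades_pred, weights="quadratic"))
```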
Affiliation(s)
- Pengxiao Zang: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
- Tristan T Hormel: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Kotaro Tsuboi: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Ophthalmology, Aichi Medical University, Nagakute, Japan
- David Huang: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Thomas S Hwang: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Yali Jia: Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR, USA
18. Li B, Ding Y, Wei Z, Fu Z, Sun P, Sun Q, Zhang H, Mo H. A Self-Supervised Model Advance OCTA Image Disease Diagnosis. Int J Pattern Recogn 2022. DOI: 10.1142/s0218001422570038.
Abstract
Due to the lack of medical image datasets, transfer learning and fine-tuning are generally used to realize disease detection (mainly with models transferred from ImageNet). However, significant domain differences between natural and medical images seriously restrict the performance of such models. In this paper, a contrastive learning method (BY-OCTA) combined with patient metadata is proposed to detect pathology in fundus OCTA images. This method uses the patient's metadata to construct positive sample pairs. By introducing hyperparameters into the loss function, the proportion of sample pairs drawn from the same patient's metadata can be adjusted, so as to produce a better representation and initialization model. This paper evaluates the performance on downstream tasks by fine-tuning the multi-layer perceptron of the model. Experiments show that the linear model pretrained by BY-OCTA is better than those pretrained on ImageNet and with BYOL on multiple datasets. Furthermore, in the case of limited labeled training data, BY-OCTA provides the most significant benefit. This shows that the BY-OCTA pretraining model has better representation extraction ability and transferability. This method allows a flexible combination of medical opinions and uses metadata to construct positive sample pairs, which can be widely used in medical image interpretation.
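BY-OCTA itself builds on BYOL, which is not reproduced here; the sketch below instead shows a generic supervised-contrastive-style loss in which any two embeddings from the same patient form a positive pair, purely to illustrate how metadata can define positives. The temperature value and the batch construction are assumptions.

```python
import torch
import torch.nn.functional as F

def patient_supcon_loss(z: torch.Tensor, patient_id: torch.Tensor, temperature: float = 0.2):
    """Contrastive loss where two embeddings are a positive pair whenever they come
    from the same patient (a metadata match), in the style of supervised contrastive learning."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # cosine similarities between all pairs
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (patient_id.unsqueeze(0) == patient_id.unsqueeze(1)) & ~self_mask
    # log-softmax over all other samples, then average over each anchor's positives
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_counts).mean()

emb = torch.randn(8, 128)                                # projector outputs for 8 OCTA crops
pids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])            # two crops per patient -> positive pairs
print(patient_supcon_loss(emb, pids))
```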
Affiliation(s)
- Bingbing Li: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China; College of Engineering, Jilin Business and Technology College, Changchun, Jilin, P. R. China
- Yiheng Ding: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, P. R. China
- Ziqiang Wei: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Zhijie Fu: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Peng Sun: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Qi Sun: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
- Hong Zhang: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, P. R. China
- Hongwei Mo: College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang, P. R. China
19. Elsharkawy M, Elrazzaz M, Sharafeldeen A, Alhalabi M, Khalifa F, Soliman A, Elnakib A, Mahmoud A, Ghazal M, El-Daydamony E, Atwan A, Sandhu HS, El-Baz A. The Role of Different Retinal Imaging Modalities in Predicting Progression of Diabetic Retinopathy: A Survey. Sensors (Basel) 2022;22:3490. PMID: 35591182; PMCID: PMC9101725; DOI: 10.3390/s22093490.
Abstract
Diabetic retinopathy (DR) is a devastating condition caused by progressive changes in the retinal microvasculature. It is a leading cause of retinal blindness in people with diabetes. Long periods of uncontrolled blood sugar levels result in endothelial damage, leading to macular edema, altered retinal permeability, retinal ischemia, and neovascularization. In order to facilitate rapid screening, diagnosis, and grading of DR, different retinal imaging modalities are utilized. Typically, a computer-aided diagnostic (CAD) system uses retinal images to aid ophthalmologists in the diagnosis process. These CAD systems use a combination of machine learning (ML) models (e.g., deep learning (DL) approaches) to speed up the diagnosis and grading of DR. This survey provides a comprehensive overview of the different imaging modalities used with ML/DL approaches in the DR diagnosis process. The four imaging modalities that we focus on are fluorescein angiography, fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We also discuss the limitations of the literature that utilizes such modalities for DR diagnosis, identify research gaps, and provide suggested solutions for researchers to pursue. Lastly, we provide a thorough discussion of the challenges and future directions of the current state-of-the-art DL/ML approaches, and we elaborate on how integrating different imaging modalities with clinical information and demographic data can lead to promising results when diagnosing and grading DR. The comparative analysis and discussion in this article indicate that DL methods remain necessary over existing ML models to detect DR across multiple modalities.
Affiliation(s)
- Mohamed Elsharkawy: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mostafa Elrazzaz: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Marah Alhalabi: Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Fahmi Khalifa: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Soliman: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elnakib: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal: Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Eman El-Daydamony: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Atwan: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Harpal Singh Sandhu: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
20
|
Mou L, Liang L, Gao Z, Wang X. A multi-scale anomaly detection framework for retinal OCT images based on the Bayesian neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103619] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
21
|
Ryu G, Lee K, Park D, Kim I, Park SH, Sagong M. A Deep Learning Algorithm for Classifying Diabetic Retinopathy Using Optical Coherence Tomography Angiography. Transl Vis Sci Technol 2022; 11:39. [PMID: 35703566 PMCID: PMC8899862 DOI: 10.1167/tvst.11.2.39] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To develop an automated diabetic retinopathy (DR) staging system using optical coherence tomography angiography (OCTA) images with a convolutional neural network (CNN) and to verify the feasibility of the system. Methods In this retrospective cross-sectional study, a total of 918 data sets of 3 × 3 mm² OCTA images and 917 data sets of 6 × 6 mm² OCTA images were obtained from 1118 eyes. A deep CNN and four traditional machine learning models were trained with annotations made by a retinal specialist based on ultra-widefield fluorescein angiography. Separately, the same images of the test data sets were independently graded by two human experts. The results of the CNN algorithm were compared with those of the traditional machine learning classifiers and the human experts. Results The proposed CNN achieved an accuracy of 0.728, a sensitivity of 0.675, a specificity of 0.944, an F1 score of 0.683, and a quadratic weighted κ of 0.908 for a six-level staging task, far superior to the results of the traditional machine learning methods or the human experts. The CNN algorithm performed better with 6 × 6 mm² than with 3 × 3 mm² OCTA images, and with combined data rather than a separate OCTA layer alone. Conclusions CNN-based classification using OCTA images can provide reliable assistance to clinicians for DR classification. Translational Relevance This CNN algorithm can guide the clinical decision for invasive angiography or referral to ophthalmology specialists, helping to create a more efficient diagnostic workflow in primary care settings.
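For readers unfamiliar with the quadratic weighted κ reported above, the short Python sketch below shows how such an agreement score can be computed for a six-level staging task. The label arrays and the scikit-learn call are illustrative assumptions, not the study's code or data.

```python
# Minimal sketch: quadratic weighted kappa for a six-level DR staging task.
# The grade arrays below are illustrative placeholders, not data from the study.
from sklearn.metrics import cohen_kappa_score

# Hypothetical grades (0 = no DR ... 5 = most severe) from the model and an expert grader.
model_grades  = [0, 1, 2, 2, 3, 4, 5, 1, 0, 3]
expert_grades = [0, 1, 2, 3, 3, 4, 5, 2, 0, 3]

# weights="quadratic" penalizes disagreements by the squared distance between grades,
# so confusing stage 1 with stage 5 costs far more than confusing stage 1 with stage 2.
qwk = cohen_kappa_score(model_grades, expert_grades, weights="quadratic")
print(f"quadratic weighted kappa: {qwk:.3f}")
```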
Collapse
Affiliation(s)
- Gahyung Ryu
- Department of Ophthalmology, Yeungnam University College of Medicine, Daegu, South Korea
- Nune Eye Hospital, Daegu, South Korea
| | - Kyungmin Lee
- Department of Robotics Engineering, DGIST, Daegu, South Korea
| | - Donggeun Park
- Department of Ophthalmology, Yeungnam University College of Medicine, Daegu, South Korea
| | - Inhye Kim
- Department of Ophthalmology, Yeungnam University College of Medicine, Daegu, South Korea
| | - Sang Hyun Park
- Department of Robotics Engineering, DGIST, Daegu, South Korea
| | - Min Sagong
- Department of Ophthalmology, Yeungnam University College of Medicine, Daegu, South Korea
- Yeungnam Eye Center, Yeungnam University Hospital, Daegu, South Korea
| |
Collapse
|
22
|
Federated Learning for Microvasculature Segmentation and Diabetic Retinopathy Classification of OCT Data. OPHTHALMOLOGY SCIENCE 2021; 1:100069. [PMID: 36246944 PMCID: PMC9559956 DOI: 10.1016/j.xops.2021.100069] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 09/01/2021] [Accepted: 09/28/2021] [Indexed: 12/26/2022]
Abstract
Purpose To evaluate the performance of a federated learning framework for deep neural network-based retinal microvasculature segmentation and referable diabetic retinopathy (RDR) classification using OCT and OCT angiography (OCTA). Design Retrospective analysis of clinical OCT and OCTA scans of control participants and patients with diabetes. Participants The 153 OCTA en face images used for microvasculature segmentation were acquired from 4 OCT instruments with fields of view ranging from 2 × 2 mm to 6 × 6 mm. The 700 eyes used for RDR classification consisted of OCTA en face images and structural OCT projections acquired from 2 commercial OCT systems. Methods OCT angiography images used for microvasculature segmentation were delineated manually and verified by retina experts. Diabetic retinopathy (DR) severity was evaluated by retinal specialists and condensed into 2 classes: non-RDR and RDR. The federated learning configuration was demonstrated via simulation using 4 clients for microvasculature segmentation and compared with other collaborative training methods. Subsequently, federated learning was applied across multiple institutions for RDR classification and compared with models trained and tested on data from the same institution (internal models) and from different institutions (external models). Main Outcome Measures For microvasculature segmentation, we measured accuracy and the Dice similarity coefficient (DSC). For severity classification, we measured accuracy, area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, balanced accuracy, F1 score, sensitivity, and specificity. Results For both applications, federated learning achieved performance similar to that of the internal models. Specifically, for microvasculature segmentation, the federated learning model achieved performance (mean DSC across all test sets, 0.793) similar to that of models trained on a fully centralized dataset (mean DSC, 0.807). For RDR classification, federated learning achieved mean AUROCs of 0.954 and 0.960; the internal models attained mean AUROCs of 0.956 and 0.973. Similar results were reflected in the other evaluation metrics. Conclusions Federated learning showed results similar to those of traditional deep learning in both segmentation and classification while maintaining data privacy. The evaluation metrics highlight the potential of collaborative learning for increasing domain diversity and the generalizability of models used for the classification of OCT data.
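As a rough illustration of the collaborative training scheme described above, the sketch below implements a FedAvg-style loop in which simulated clients train locally and a server averages their weights. The logistic-regression stand-in, client count, and hyperparameters are assumptions for illustration only; the study's actual networks and aggregation details are not reproduced here.

```python
# Minimal FedAvg-style sketch (illustrative only; not the study's implementation).
# Each "client" holds its own data and returns locally updated model weights;
# the server averages them, weighted by local sample counts.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent on a
    logistic-regression stand-in for the segmentation/classification network."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predictions on local data
        w -= lr * X.T @ (p - y) / len(y)          # cross-entropy gradient step
    return w

def fed_avg(client_data, n_features, rounds=10):
    global_w = np.zeros(n_features)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:                  # each institution trains locally
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # server aggregates: sample-size-weighted average of client weights
        global_w = np.average(updates, axis=0, weights=sizes)
    return global_w

# Toy usage with 4 simulated clients, mirroring the 4-client simulation setup
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50)) for _ in range(4)]
w = fed_avg(clients, n_features=8)
```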
Collapse
|
23
|
A deep learning model for identifying diabetic retinopathy using optical coherence tomography angiography. Sci Rep 2021; 11:23024. [PMID: 34837030 PMCID: PMC8626435 DOI: 10.1038/s41598-021-02479-6] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 11/10/2021] [Indexed: 12/15/2022] Open
Abstract
As the prevalence of diabetes increases, millions of people need to be screened for diabetic retinopathy (DR). Remarkable advances in technology have made it possible to use artificial intelligence to screen for DR from retinal images with high accuracy and reliability, reducing human labor by processing large amounts of data in a shorter time. We developed a fully automated classification algorithm to diagnose DR and identify referable status using optical coherence tomography angiography (OCTA) images with a convolutional neural network (CNN) model and verified its feasibility by comparing its performance with that of a conventional machine learning model. Ground truths for classification were made based on ultra-widefield fluorescein angiography to increase the accuracy of data annotation. The proposed CNN classifier achieved an accuracy of 91–98%, a sensitivity of 86–97%, a specificity of 94–99%, and an area under the curve of 0.919–0.976. Overall, similar performance was achieved in external validation. The results were similar regardless of the size and depth of the OCTA images, indicating that DR could be satisfactorily classified even with images comprising a narrow area of the macular region and a single image slab of the retina. CNN-based classification using OCTA is expected to create a novel diagnostic workflow for DR detection and referral.
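The following sketch shows, in broad strokes, what a CNN classifier for referable-DR detection from single-channel OCTA en face images might look like. The ResNet-18 backbone, input size, and two-class head are assumptions chosen for illustration; the paper's exact architecture is not reproduced here.

```python
# Illustrative sketch of a CNN classifier for referable-DR detection from OCTA
# en face images. Backbone, image size, and class count are assumptions, not the
# study's architecture.
import torch
import torch.nn as nn
from torchvision import models

class OCTAClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.backbone = models.resnet18(weights=None)   # pretrained weights could be loaded here
        # OCTA en face images are single-channel; adapt the first conv layer accordingly.
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_classes)

    def forward(self, x):
        return self.backbone(x)

model = OCTAClassifier()
dummy = torch.randn(4, 1, 224, 224)   # batch of 4 single-channel en face images
logits = model(dummy)                 # shape: (4, 2) -> non-referable vs. referable DR
```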
Collapse
|
24
|
Yu TT, Ma D, Lo J, Ju MJ, Beg MF, Sarunic MV. Effect of optical coherence tomography and angiography sampling rate towards diabetic retinopathy severity classification. BIOMEDICAL OPTICS EXPRESS 2021; 12:6660-6673. [PMID: 34745763 PMCID: PMC8547994 DOI: 10.1364/boe.431992] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 08/12/2021] [Accepted: 08/16/2021] [Indexed: 06/13/2023]
Abstract
Optical coherence tomography (OCT) and OCT angiography (OCT-A) may benefit the screening of diabetic retinopathy (DR). This study investigated the effect of laterally subsampling OCT/OCT-A en face scans by up to a factor of 8 when using deep neural networks for automated referable DR classification. There was no significant difference in classification performance across all evaluation metrics when subsampling by up to a factor of 3, and only minimal differences up to a factor of 8. Our findings suggest that the number of samples acquired per OCT/OCT-A volume for a given retinal field of view (and hence the acquisition time) can be reduced for referable DR classification.
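A minimal sketch of the lateral subsampling operation studied here, assuming simple integer-stride decimation of an en face image (the array size and factor are illustrative):

```python
# Lateral subsampling of an en face scan by an integer factor (illustrative only;
# the paper's exact preprocessing is not reproduced here).
import numpy as np

def subsample_enface(enface: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th A-scan along both lateral axes."""
    return enface[::factor, ::factor]

octa = np.random.rand(500, 500)              # stand-in for a 500 x 500 en face image
low_res = subsample_enface(octa, factor=3)   # ~167 x 167 samples; shorter acquisition
```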
Collapse
Affiliation(s)
- Timothy T. Yu
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
| | - Da Ma
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
| | - Julian Lo
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
| | - Myeong Jin Ju
- Dept. of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, V5Z 3N9, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, V5Z 3N9, Canada
| | - Mirza Faisal Beg
- Engineering Science, Simon Fraser University, Burnaby BC V5A1S6, Canada
| | | |
Collapse
|
25
|
Guo Y, Hormel TT, Pi S, Wei X, Gao M, Morrison JC, Jia Y. An end-to-end network for segmenting the vasculature of three retinal capillary plexuses from OCT angiographic volumes. BIOMEDICAL OPTICS EXPRESS 2021; 12:4889-4900. [PMID: 34513231 PMCID: PMC8407822 DOI: 10.1364/boe.431888] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/28/2021] [Accepted: 07/06/2021] [Indexed: 06/13/2023]
Abstract
The segmentation of en face retinal capillary angiograms from volumetric optical coherence tomographic angiography (OCTA) usually relies on retinal layer segmentation, which is time-consuming and error-prone. In this study, we developed a deep-learning-based method to segment vessels in the superficial vascular plexus (SVP), intermediate capillary plexus (ICP), and deep capillary plexus (DCP) directly from volumetric OCTA data. The method contains a three-dimensional convolutional neural network (CNN) for extracting distinct retinal layers, a custom projection module to generate three vascular plexuses from OCTA data, and three parallel CNNs to segment vasculature. Experimental results on OCTA data from rat eyes demonstrated the feasibility of the proposed method. This end-to-end network has the potential to simplify OCTA data processing on retinal vasculature segmentation. The main contribution of this study is that we propose a custom projection module to connect retinal layer segmentation and vasculature segmentation modules and automatically convert data from three to two dimensions, thus establishing an end-to-end method to segment three retinal capillary plexuses from volumetric OCTA without any human intervention.
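To make the projection idea concrete, the sketch below shows one way a depth-projection step could turn a segmented OCTA volume into an en face image for a single plexus, assuming per-pixel slab boundaries and a mean projection. It is a generic stand-in; the paper's custom projection module is not reproduced here.

```python
# Illustrative depth-projection step: from a (depth, H, W) OCTA flow volume and
# per-pixel slab boundaries to an (H, W) en face image for one plexus (e.g., the SVP).
# Boundary maps and the mean projection are assumptions for this sketch.
import numpy as np

def project_plexus(volume, top, bottom):
    """volume: (depth, H, W) flow volume; top, bottom: (H, W) integer depth indices.
    Returns an (H, W) en face image by averaging flow signal between the boundaries."""
    depth = np.arange(volume.shape[0])[:, None, None]        # (depth, 1, 1)
    mask = (depth >= top[None]) & (depth < bottom[None])     # voxels inside the slab
    summed = (volume * mask).sum(axis=0)
    counts = np.maximum(mask.sum(axis=0), 1)                 # avoid divide-by-zero
    return summed / counts

vol = np.random.rand(64, 128, 128)               # toy volume
top = np.full((128, 128), 10)                    # toy upper boundary
bot = np.full((128, 128), 25)                    # toy lower boundary
svp_enface = project_plexus(vol, top, bot)       # (128, 128) en face image
```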
Collapse
Affiliation(s)
- Yukun Guo
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - Shaohua Pi
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - Xiang Wei
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
| | - Min Gao
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - John C. Morrison
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
| | - Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
| |
Collapse
|
26
|
Luo Y, Xu Q, Jin R, Wu M, Liu L. Automatic detection of retinopathy with optical coherence tomography images via a semi-supervised deep learning method. BIOMEDICAL OPTICS EXPRESS 2021; 12:2684-2702. [PMID: 34123497 PMCID: PMC8176801 DOI: 10.1364/boe.418364] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/27/2021] [Accepted: 04/02/2021] [Indexed: 05/03/2023]
Abstract
Automatic detection of retinopathy via computer vision techniques is of great importance for clinical applications. However, traditional deep-learning-based methods in computer vision require a large amount of labeled data, which is expensive and may not be available in clinical applications. To mitigate this issue, we propose a semi-supervised deep learning method built upon a pre-trained VGG-16 and virtual adversarial training (VAT) for the detection of retinopathy with optical coherence tomography (OCT) images. It requires only a few labeled OCT images and a number of unlabeled OCT images for model training. We evaluated the proposed method on two popular datasets. With only 80 labeled OCT images, the proposed method achieves classification accuracies of 0.942 and 0.936, sensitivities of 0.942 and 0.936, specificities of 0.971 and 0.979, and AUCs (areas under the ROC curves) of 0.997 and 0.993 on the two datasets, respectively. When compared with human experts, it achieves expert-level performance with 80 labeled OCT images and outperforms four of six experts with 200 labeled OCT images. Furthermore, we adopt the Gradient-weighted Class Activation Mapping (Grad-CAM) method to visualize the key regions that the proposed method focuses on when making predictions, showing that the proposed method can accurately recognize the key patterns of the input OCT images when predicting retinopathy.
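For orientation, the sketch below shows a typical virtual adversarial training regularizer of the kind referenced above: a perturbation direction is estimated with one power iteration, and the model is penalized when its predictions change under that perturbation. The hyperparameters (xi, eps), the single power iteration, and the 4D image input are assumptions; the paper's exact settings are not reproduced here.

```python
# Minimal sketch of a VAT regularizer for unlabeled images (illustrative only).
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0):
    """Unlabeled-data loss: KL divergence between predictions on x and on x plus a
    small adversarial perturbation estimated by one power iteration. Assumes x is a
    4D image batch (N, C, H, W)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                     # predictions on clean input

    d = torch.randn_like(x)                                # random perturbation direction
    d = xi * d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    d.requires_grad_(True)

    p_hat = F.log_softmax(model(x + d), dim=1)
    adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]             # direction of steepest change

    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    p_adv = F.log_softmax(model(x + r_adv.detach()), dim=1)
    return F.kl_div(p_adv, p, reduction="batchmean")       # added to the supervised loss
```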
Collapse
Affiliation(s)
- Yuemei Luo
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
| | - Qing Xu
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
| | - Ruibing Jin
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
| | - Min Wu
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
| | - Linbo Liu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, 637459, Singapore
| |
Collapse
|