1. Bhati A, Gour N, Khanna P, Ojha A, Werghi N. An interpretable dual attention network for diabetic retinopathy grading: IDANet. Artif Intell Med 2024; 149:102782. [PMID: 38462283] [DOI: 10.1016/j.artmed.2024.102782] [Received: 07/05/2023] [Revised: 01/05/2024] [Accepted: 01/15/2024]
Abstract
Diabetic retinopathy (DR) is the most prevalent cause of visual impairment in adults worldwide. Patients with DR typically show no symptoms until later stages, by which time it may be too late to receive effective treatment. DR grading is challenging because of the small size and variation of lesion patterns. The key to fine-grained DR grading is to discover more discriminative elements such as cotton wool spots, hard exudates, hemorrhages, and microaneurysms. Although deep learning models such as convolutional neural networks (CNNs) seem ideal for the automated detection of abnormalities in advanced clinical imaging, small lesions are very hard to distinguish with traditional networks. This work proposes a bi-directional spatial and channel-wise parallel attention based network to learn discriminative features for diabetic retinopathy grading. The proposed attention block, plugged into a backbone network, helps to extract features specific to fine-grained DR grading. This scheme boosts classification performance along with the detection of small lesions. Extensive experiments are performed on four widely used benchmark datasets for DR grading, and performance is evaluated on different quality metrics. For model interpretability, activation maps are generated using the LIME method to visualize the predicted lesion parts. In comparison with state-of-the-art methods, the proposed IDANet exhibits better performance for DR grading and lesion detection.
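The parallel spatial/channel attention idea can be sketched minimally as follows. This is an illustrative stand-in, not the authors' IDANet block: it assumes simple pooling-based sigmoid gates for each branch and sums the two branch outputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention(feat):
    """Toy parallel spatial/channel attention on a feature map (C, H, W).
    Channel weights come from global average pooling (one gate per channel);
    spatial weights come from a per-pixel mean over channels (one gate per
    pixel). The two branches run in parallel and are summed."""
    ch = sigmoid(feat.mean(axis=(1, 2)))   # (C,) channel gates
    ch_out = feat * ch[:, None, None]
    sp = sigmoid(feat.mean(axis=0))        # (H, W) spatial gates
    sp_out = feat * sp[None, :, :]
    return ch_out + sp_out

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
out = parallel_attention(feat)
```

Because both gates lie in (0, 1), the block rescales features without changing the map's shape, which is what lets such a block be "plugged into" an existing backbone.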
Affiliation(s)
- Amit Bhati
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Neha Gour
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
- Pritee Khanna
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Aparajita Ojha
- PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Naoufel Werghi
- Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
2. Kaur J, Mittal D, Malebary S, Nayak SR, Kumar D, Kumar M, Gagandeep, Singh S. Automated Detection and Segmentation of Exudates for the Screening of Background Retinopathy. J Healthc Eng 2023; 2023:4537253. [PMID: 37483301] [PMCID: PMC10361834] [DOI: 10.1155/2023/4537253] [Received: 02/16/2022] [Accepted: 04/15/2022]
Abstract
Exudates, asymptomatic yellow deposits on the retina, are among the primary characteristics of background diabetic retinopathy, a retinopathy related to high blood sugar levels that slowly affects all the organs of the body. The early detection of exudates aids doctors in screening patients suffering from background diabetic retinopathy. The computer-aided method proposed in the present work detects and then segments exudates in retinal images acquired with a digital fundus camera by (i) a gradient method to trace the contour of exudates, (ii) marking connected candidate pixels to remove false exudate pixels, and (iii) linking edge pixels for the boundary extraction of exudates. The method is tested on 1307 retinal fundus images with varying characteristics: 649 images were acquired from a hospital and the remaining 658 from open-source benchmark databases, namely STARE, DRIVE, MESSIDOR, DiaretDB1, and e-Ophtha. The proposed exudate segmentation method achieves image-based (i) accuracy of 98.04%, (ii) sensitivity of 95.345%, and (iii) specificity of 98.63%. Exudate-based evaluations yield an average (i) accuracy of 95.68%, (ii) sensitivity of 93.44%, and (iii) specificity of 97.22%. The substantial combined performance at image- and exudate-based evaluations demonstrates the contribution of the proposed method to mass screening as well as the treatment process of background diabetic retinopathy.
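Step (i), the gradient method for tracing exudate contours, can be illustrated with a generic sketch. The Sobel operator below is an assumed stand-in for illustration, not the paper's exact gradient computation:

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels, a common first step for
    tracing the contours of bright lesions such as exudates."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# Bright square on a dark background: the gradient peaks on its border,
# so thresholding the magnitude yields candidate contour pixels.
img = np.zeros((16, 16))
img[5:11, 5:11] = 1.0
mag = sobel_gradient_magnitude(img)
edges = mag > mag.max() * 0.5
```

The interior of the square has zero gradient, so only border pixels survive the threshold; on a real fundus image the same idea picks out exudate boundaries against the retinal background.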
Affiliation(s)
- Jaskirat Kaur
- Department of Electronics and Communication Engineering, Punjab Engineering College (Deemed to be University), Sector 12, Chandigarh 160012, India
- Deepti Mittal
- Electrical and Instrumentation Engineering Department, Thapar Institute of Engineering and Technology, Patiala 147004, India
- Sharaf Malebary
- Department of Information Technology, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21911, Saudi Arabia
- Soumya Ranjan Nayak
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
- Devendra Kumar
- Department of Computer Science, Wachemo University, Hosaena, Ethiopia
- Manoj Kumar
- Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai Knowledge Park, UAE
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- Gagandeep
- Computer Science Engineering Department, Chandigarh Engineering College, Mohali, India
- Simrandeep Singh
- Electronics and Communication Engineering Department, UCRD, Chandigarh University, Mohali, India
3. Ishtiaq U, Abdullah ERMF, Ishtiaque Z. A Hybrid Technique for Diabetic Retinopathy Detection Based on Ensemble-Optimized CNN and Texture Features. Diagnostics (Basel) 2023; 13:1816. [PMID: 37238304] [DOI: 10.3390/diagnostics13101816] [Received: 03/29/2023] [Revised: 05/16/2023] [Accepted: 05/17/2023] [Open Access]
Abstract
One of the most prevalent chronic conditions that can result in permanent vision loss is diabetic retinopathy (DR). Diabetic retinopathy occurs in five stages: no DR, and mild, moderate, severe, and proliferative DR. The early detection of DR is essential for preventing vision loss in diabetic patients. In this paper, we propose a method for the detection and classification of DR stages to determine whether patients are in any of the non-proliferative stages or in the proliferative stage. The proposed classification method is a hybrid approach built on image preprocessing and ensemble features. We created a convolutional neural network (CNN) model from scratch for this study. Combining Local Binary Pattern (LBP) and deep learning features produced the ensemble feature vector, which was then optimized using the Binary Dragonfly Algorithm (BDA) and the Sine Cosine Algorithm (SCA). This optimized feature vector was fed to machine learning classifiers. The SVM classifier achieved the highest classification accuracy of 98.85% on a publicly available dataset, Kaggle EyePACS. Rigorous testing and comparisons with state-of-the-art approaches in the literature indicate the effectiveness of the proposed methodology.
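The texture half of the ensemble feature vector can be sketched with a plain 8-neighbour LBP histogram. This is a generic illustration (the paper may use a different LBP variant or radius), producing a 256-bin vector that could be concatenated with CNN features:

```python
import numpy as np

def lbp_image(img):
    """Minimal 8-neighbour Local Binary Pattern: each pixel becomes an
    8-bit code, one bit per neighbour that is >= the centre value."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (di, dj) in enumerate(offs):
        nb = p[1 + di:p.shape[0] - 1 + di, 1 + dj:p.shape[1] - 1 + dj]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """256-bin normalized LBP histogram: a fixed-length texture feature
    vector suitable for concatenation with deep features."""
    h, _ = np.histogram(lbp_image(img), bins=256, range=(0, 256))
    return h / h.sum()

img = np.random.default_rng(1).integers(0, 255, size=(32, 32))
feat = lbp_histogram(img)
```

On a perfectly flat patch every neighbour ties with the centre, so every pixel gets code 255; texture shows up as mass spread across other bins.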
Affiliation(s)
- Uzair Ishtiaq
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari 61100, Pakistan
- Erma Rahayu Mohd Faizal Abdullah
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Zubair Ishtiaque
- Department of Analytical, Biopharmaceutical and Medical Sciences, Atlantic Technological University, H91 T8NW Galway, Ireland
4. AI-Based Automatic Detection and Classification of Diabetic Retinopathy Using U-Net and Deep Learning. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071427] [Open Access]
Abstract
Artificial intelligence is widely applied to automate diabetic retinopathy (DR) diagnosis. Diabetes-related retinal vascular disease is one of the world's most common causes of blindness and vision impairment. Automated DR detection systems would therefore greatly benefit early screening and treatment and help prevent the vision loss DR causes. Researchers have proposed several systems to detect abnormalities in retinal images in the past few years. However, automatic DR detection methods have traditionally been based on hand-crafted feature extraction from retinal images followed by a classifier to obtain the final classification. Deep neural networks (DNNs) have in recent years helped overcome the limitations of this approach. We propose a novel two-stage approach for automated DR classification in this research. Because of the low fraction of positive instances in the asymmetric optic disc (OD) and blood vessel (BV) detection task, preprocessing and data augmentation techniques are used to enhance image quality and quantity. The first stage uses two independent U-Net models for OD and BV segmentation. In the second stage, the symmetric hybrid CNN-SVD model, created after preprocessing, extracts and selects the most discriminant features following OD and BV extraction using Inception-V3 based on transfer learning, and detects DR by recognizing retinal biomarkers such as microaneurysms (MA), hemorrhages (HM), and exudates (EX). On EyePACS-1, Messidor-2, and DIARETDB0, the proposed methodology demonstrated state-of-the-art performance, with average accuracies of 97.92%, 94.59%, and 93.52%, respectively. Extensive testing and comparisons with baseline approaches indicate the efficacy of the suggested methodology.
5. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
6. Bhimavarapu U, Battineni G. Automatic Microaneurysms Detection for Early Diagnosis of Diabetic Retinopathy Using Improved Discrete Particle Swarm Optimization. J Pers Med 2022; 12:317. [PMID: 35207805] [PMCID: PMC8878235] [DOI: 10.3390/jpm12020317] [Received: 01/08/2022] [Revised: 02/17/2022] [Accepted: 02/18/2022] [Open Access]
Abstract
Diabetic retinopathy (DR) is one of the most important microvascular complications associated with diabetes mellitus. The earliest signs of DR are microaneurysms; if the disease progresses unchecked, it can lead to complete vision loss. Detecting DR at an early stage can help avoid irreversible blindness. To this end, we incorporated fuzzy logic techniques into digital image processing to conduct effective detection. The digital fundus images were segmented using particle swarm optimization to identify microaneurysms. The particle swarm optimization clustering combined the membership functions by grouping high-similarity data into clusters. Model testing was conducted on the publicly available DIARETDB0 dataset, and image segmentation was done by probability-based particle swarm optimization (PBPSO) clustering algorithms. Different fuzzy models were applied and the outcomes were compared with our probability discrete particle swarm optimization algorithm. The results revealed that the proposed PSO algorithm achieved an accuracy of 99.9% in the early detection of DR.
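The core idea of PSO-based clustering can be sketched without the paper's discrete, probability-based refinements: a swarm of candidate centre sets moves toward positions that minimize the within-cluster squared error. The sketch below, on 1-D intensity data, is an assumed illustrative baseline, not the authors' improved algorithm:

```python
import numpy as np

def sse(centers, data):
    """Within-cluster sum of squared errors for 1-D cluster centres."""
    d = np.abs(data[None, :] - centers[:, None])   # (k, n) distances
    return (d.min(axis=0) ** 2).sum()

def pso_cluster(data, k=2, n_particles=20, iters=60, seed=0):
    """Bare-bones particle swarm search over k 1-D cluster centres.
    Each particle is one candidate set of centres; velocities blend
    inertia, the particle's best position, and the swarm's best."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(), data.max()
    pos = rng.uniform(lo, hi, size=(n_particles, k))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([sse(p, data) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([sse(p, data) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return np.sort(g)

# two well-separated intensity clusters, e.g. background vs lesion pixels
data = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
centers = pso_cluster(data)
baseline = sse(np.array([data.mean(), data.mean()]), data)
```

Because personal bests never worsen, the swarm's final answer is at least as good as its best random start, and on well-separated data it should beat a single-centre baseline comfortably.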
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522502, Andhra Pradesh, India
- Gopi Battineni
- Clinical Research Center, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
7. Detection of exudates from clinical fundus images using machine learning algorithms in diabetic maculopathy. Int J Diabetes Dev Ctries 2022. [DOI: 10.1007/s13410-021-01039-y] [Open Access]
8. EAD-Net: A Novel Lesion Segmentation Method in Diabetic Retinopathy Using Neural Networks. Dis Markers 2021; 2021:6482665. [PMID: 34512815] [PMCID: PMC8429028] [DOI: 10.1155/2021/6482665] [Received: 07/13/2021] [Accepted: 08/19/2021]
Abstract
Diabetic retinopathy (DR) is a common chronic fundus disease that presents four kinds of microvascular structures and lesions: microaneurysms (MAs), hemorrhages (HEs), hard exudates, and soft exudates. Accurate detection and counting of these lesions is a basic but important task, and their manual annotation is labor-intensive in clinical analysis. To solve this problem, we propose a novel segmentation method for different lesions in DR. Our method is based on a convolutional neural network and can be divided into an encoder module, an attention module, and a decoder module, so we refer to it as EAD-Net. After normalization and augmentation, the fundus images were sent to EAD-Net for automated feature extraction and pixel-wise label prediction. Given evaluation metrics based on the matching degree between detected candidates and ground-truth lesions, our method achieved sensitivity of 92.77%, specificity of 99.98%, and accuracy of 99.97% on the e_ophtha_EX dataset and comparable AUPR (area under the precision-recall curve) scores on the IDRiD dataset. Moreover, results on a local dataset also show that EAD-Net outperforms the original U-Net in most metrics, especially in sensitivity and F1-score, with nearly ten percent improvement. The proposed EAD-Net is a novel method grounded in clinical DR diagnosis. It yields satisfactory segmentation of four different kinds of lesions, which has important clinical significance for the monitoring and diagnosis of DR.
9. Gilbert MJ, Sun JK. Artificial Intelligence in the assessment of diabetic retinopathy from fundus photographs. Semin Ophthalmol 2021; 35:325-332. [PMID: 33539253] [DOI: 10.1080/08820538.2020.1855358]
Abstract
Background: Over the next 25 years, the global prevalence of diabetes is expected to grow to affect 700 million individuals. Consequently, an unprecedented number of patients will be at risk for vision loss from diabetic eye disease. This demand will almost certainly exceed the supply of eye care professionals to individually evaluate each patient on an annual basis, signaling the need for 21st century tools to assist our profession in meeting this challenge. Methods: Review of available literature on artificial intelligence (AI) as applied to diabetic retinopathy (DR) detection and prediction. Results: The field of AI has seen exponential growth in evaluating fundus photographs for DR. AI systems employ machine learning and artificial neural networks to teach themselves how to grade DR from libraries of tens of thousands of images and may be able to predict future DR progression based on baseline fundus photographs. Conclusions: AI algorithms are highly promising for the purposes of DR detection and will likely be able to reliably predict DR worsening in the future. A deeper understanding of these systems and how they interpret images is critical as they transition from the bench into the clinic.
Affiliation(s)
- Michael J Gilbert
- Joslin Diabetes Center, Beetham Eye Institute, Boston, MA, United States
- Jennifer K Sun
- Joslin Diabetes Center, Beetham Eye Institute, Boston, MA, United States
- Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
10. Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268] [DOI: 10.1016/j.jfo.2020.08.009] [Received: 05/15/2020] [Accepted: 08/24/2020]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals, so early detection is essential for preserving vision through timely treatment. DR can be detected manually by an ophthalmologist, who examines retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system can easily perform this function using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhages, and microaneurysms. In addition, the authors point out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- G Sun
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
11. Exudates as Landmarks Identified through FCM Clustering in Retinal Images. Appl Sci (Basel) 2020. [DOI: 10.3390/app11010142]
Abstract
The aim of this work was to develop a method for the automatic identification of exudates using an unsupervised clustering approach. The ability to classify each pixel as belonging to a potential exudate, as a warning of disease, allows a patient's status to be tracked noninvasively. In the field of diabetic retinopathy detection, we considered four public-domain datasets (DIARETDB0/1, IDRiD, and e-ophtha) as benchmarks. To refine the final results, a specialist ophthalmologist manually segmented a random selection of DIARETDB0/1 fundus images that presented exudates. An innovative pipeline of morphological procedures and fuzzy C-means clustering was integrated to extract exudates with a pixel-wise approach. Our methodology was optimized and verified, and its parameters were fine-tuned to define suitable values and produce a more accurate segmentation. The method was applied to 100 test images, resulting in average sensitivity, specificity, and accuracy of 83.3%, 99.2%, and 99.1%, respectively.
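The fuzzy C-means step can be sketched with the standard alternating updates (memberships, then weighted centres); the 1-D toy data and parameters below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on 1-D data. U[i, k] is the membership of
    point k in cluster i; centres are membership-weighted means, and
    memberships follow the usual inverse-distance update for fuzzifier m."""
    rng = np.random.default_rng(seed)
    n = data.size
    U = rng.random((c, n))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        w = U ** m
        centers = (w @ data) / w.sum(axis=1)
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    order = np.argsort(centers)
    return centers[order], U[order]

# two point masses standing in for background and exudate intensities
data = np.concatenate([np.full(40, 0.1), np.full(40, 0.9)])
centers, U = fcm(data)
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what makes the pixel-wise "eventual exudate" interpretation natural.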
12. Wang J, Bai Y, Xia B. Simultaneous Diagnosis of Severity and Features of Diabetic Retinopathy in Fundus Photography Using Deep Learning. IEEE J Biomed Health Inform 2020; 24:3397-3407. [PMID: 32750975] [DOI: 10.1109/jbhi.2020.3012547]
Abstract
Deep learning methods for diabetic retinopathy (DR) diagnosis are usually criticized for the lack of interpretability of the diagnostic result, limiting their application in the clinic. Simultaneous prediction of DR-related features during DR severity diagnosis can resolve this issue by providing supporting evidence (i.e., DR-related features) for the diagnostic result (i.e., DR severity). In this study, we propose a hierarchical multi-task deep learning framework for simultaneous diagnosis of DR severity and DR-related features in fundus images. A hierarchical structure is introduced to incorporate the causal relationship between DR-related features and DR severity levels. In the experiments, the proposed approach was evaluated on two independent testing sets using the quadratic weighted Cohen's kappa coefficient, receiver operating characteristic analysis, and precision-recall analysis. A grader study was also conducted to compare the performance of the proposed approach with those of general ophthalmologists with different levels of experience. The results demonstrate that the proposed approach improves performance for both DR severity diagnosis and DR-related feature detection compared with traditional deep learning-based methods. It performs close to general ophthalmologists with five years of experience when diagnosing DR severity levels, and to general ophthalmologists with ten years of experience for referable DR detection.
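The quadratic weighted Cohen's kappa used for evaluation penalizes disagreements by the square of their distance on the ordinal severity scale. A minimal implementation (label values and class count below are illustrative):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic weighted Cohen's kappa between two raters' ordinal labels.
    kappa = 1 - sum(W*O) / sum(W*E), where O is the observed confusion
    matrix, E the chance-agreement matrix from the marginals, and
    W[i, j] = (i - j)^2 / (n_classes - 1)^2."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        O[i, j] += 1
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    E = np.outer(np.bincount(a, minlength=n_classes),
                 np.bincount(b, minlength=n_classes)) / len(a)
    return 1.0 - (W * O).sum() / (W * E).sum()

perfect = quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4)
imperfect = quadratic_weighted_kappa([0, 0, 3, 3], [0, 0, 0, 3], 4)
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, and a single maximal (0 vs 3) disagreement is penalized nine times more heavily than an adjacent-grade one.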
13. Machine learning and artificial intelligence based Diabetes Mellitus detection and self-management: A systematic review. J King Saud Univ Comput Inf Sci 2020. [DOI: 10.1016/j.jksuci.2020.06.013]
14. Wu HQ, Shan YX, Wu H, Zhu DR, Tao HM, Wei HG, Shen XY, Sang AM, Dong JC. Computer aided diabetic retinopathy detection based on ophthalmic photography: a systematic review and Meta-analysis. Int J Ophthalmol 2019; 12:1908-1916. [PMID: 31850177] [DOI: 10.18240/ijo.2019.12.14] [Received: 04/25/2019] [Accepted: 06/10/2019] [Open Access]
Abstract
AIM: To assess the diagnostic value of computer-aided techniques in diabetic retinopathy (DR) detection based on ophthalmic photography (OP). METHODS: The PubMed, EMBASE, Ei Village, IEEE Xplore, and Cochrane Library databases were searched systematically for literature on computer-aided detection (CAD) of DR. The methodological quality of included studies was appraised with the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS-2). Meta-DiSc was utilized and a random-effects model was fitted to summarize data from the included studies. Summary receiver operating characteristic curves were used to estimate overall test performance. Subgroup analysis was used to assess the efficiency of CAD in detecting DR, exudates (EXs), microaneurysms (MAs), hemorrhages (HMs), and neovascularizations (NVs). Publication bias was analyzed using STATA. RESULTS: Fourteen articles were included in this meta-analysis after literature review. Pooled sensitivity and specificity were 90% (95%CI, 85%-94%) and 90% (95%CI, 80%-96%), respectively, for CAD in DR detection. For EX detection, pooled sensitivity and specificity were 89% (95%CI, 88%-90%) and 99% (95%CI, 99%-99%), respectively. For MA and HM detection, pooled sensitivity and specificity of CAD were 42% (95%CI, 41%-44%) and 93% (95%CI, 93%-93%), respectively. For NV detection, pooled sensitivity and specificity were 94% (95%CI, 89%-97%) and 87% (95%CI, 83%-90%), respectively. No potential publication bias was observed. CONCLUSION: CAD demonstrates overall high diagnostic accuracy for detecting DR and pathological lesions based on OP. Further prospective clinical trials are needed to confirm this effect.
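The pooling of per-study sensitivities or specificities can be sketched as inverse-variance weighting on the logit scale. This is a simplified fixed-effect stand-in for the random-effects model Meta-DiSc fits (no between-study heterogeneity term), and the study counts below are hypothetical:

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of per-study proportions on
    the logit scale. Returns the back-transformed pooled estimate."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        # 0.5 continuity correction keeps the logit finite at 0% or 100%
        p = (e + 0.5) / (n + 1.0)
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # inverse logit

# hypothetical true-positive / diseased counts from three studies
sens = pooled_proportion([90, 45, 200 - 20], [100, 50, 200])
```

Larger studies get more weight because their logit variance is smaller; a full random-effects analysis would additionally estimate a between-study variance and widen the weights accordingly.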
Affiliation(s)
- Hui-Qun Wu
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Yan-Xing Shan
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Huan Wu
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Di-Ru Zhu
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Hui-Min Tao
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Hua-Gen Wei
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
- Xiao-Yan Shen
- School of Information Science and Technology, Nantong University, Nantong 226001, Jiangsu Province, China
- Ai-Min Sang
- Department of Ophthalmology, Affiliated Hospital of Nantong University, Nantong 226001, Jiangsu Province, China
- Jian-Cheng Dong
- Department of Medical Informatics, Medical School of Nantong University, Nantong 226001, Jiangsu Province, China
15. Jang Y, Son J, Park KH, Park SJ, Jung KH. Laterality Classification of Fundus Images Using Interpretable Deep Neural Network. J Digit Imaging 2019; 31:923-928. [PMID: 29948436] [DOI: 10.1007/s10278-018-0099-2] [Open Access]
Abstract
In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in the image and evaluates the uncertainty of the decision with proper analytic tools. Our model was trained and tested with 25,911 fundus images (43.4% macula-centered images and 28.3% each of superior and nasal retinal fundus images). Activation maps were generated to mark important regions in the image for the classification, and uncertainties were quantified to support explanations of why certain images were incorrectly classified under the proposed model. Our model achieved a mean training accuracy of 99%, which is comparable to the performance of clinicians. Strong activations were detected at the location of the optic disc and the retinal blood vessels around the disc, which match the regions clinicians attend to when deciding laterality. Uncertainty analysis revealed that misclassified images tend to be accompanied by high prediction uncertainty and are likely ungradable. We believe that visualization of informative regions and estimation of uncertainty, along with presentation of the prediction result, would enhance the interpretability of neural network models in a way that benefits clinicians using the automatic classification system.
Affiliation(s)
- Yeonwoo Jang
- Department of Statistics, University of Oxford, Oxford, UK
- Jaemin Son
- VUNO Inc., 6F, 507, Gangnam-daero, Seocho-gu, Seoul, Republic of Korea
- Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
- Kyu-Hwan Jung
- VUNO Inc., 6F, 507, Gangnam-daero, Seocho-gu, Seoul, Republic of Korea
16. Detection of Hard Exudates Using Evolutionary Feature Selection in Retinal Fundus Images. J Med Syst 2019; 43:209. [DOI: 10.1007/s10916-019-1349-7] [Received: 03/12/2019] [Accepted: 05/20/2019]
17. Pratheeba C, Singh NN. A Novel Approach for Detection of Hard Exudates Using Random Forest Classifier. J Med Syst 2019; 43:180. [PMID: 31093787] [DOI: 10.1007/s10916-019-1310-9] [Received: 02/25/2019] [Accepted: 04/25/2019]
Abstract
Diabetic retinopathy, in which the retina is damaged, is a major cause of blindness in diabetics. Regular screening helps detect early signs such as exudates, which result from leakage from retinal blood vessels. The role of the proposed system is to detect hard exudates in order to prevent visual loss and blindness. Many researchers have studied exudate-region detection, but the results have remained unsatisfactory. The proposed system implements fundamental medical image processing steps with different techniques. A Random Forest classifier applied to color retinal images can classify clusters of data with high accuracy. The performance of the proposed system is obtained by analyzing the accuracy of the Random Forest classifier. The images are obtained from the Diabetic Retinopathy Database (DIARETDB). Simulation results are obtained with MATLAB 2018. Applying this classification technique improves the automatic detection of hard exudates from color retinal images. The achieved accuracy is compared with existing classifiers; the proposed Random Forest classifier attains an accuracy of 99.89% on color retinal images.
Affiliation(s)
- C Pratheeba
- Department of Electronics and Communication Engineering, Udaya School of Engineering, Vellamodi, India
- N Nirmal Singh
- Department of Electronics and Communication Engineering, V.V College of Engineering, Thisayanvilai, Tirunelveli, India
|
18
|
Automatic Detection of Hard Exudates in Color Retinal Images Using Dynamic Threshold and SVM Classification: Algorithm Development and Evaluation. BIOMED RESEARCH INTERNATIONAL 2019; 2019:3926930. [PMID: 30809539 PMCID: PMC6364257 DOI: 10.1155/2019/3926930] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Revised: 12/01/2018] [Accepted: 01/06/2019] [Indexed: 11/17/2022]
Abstract
Diabetic retinopathy (DR) is one of the most common causes of visual impairment. Automatic detection of hard exudates (HE) from retinal photographs is an important step in the detection of DR. However, most existing algorithms for HE detection are complex and inefficient. We have developed and evaluated an automatic retinal image processing algorithm for HE detection using a dynamic threshold and fuzzy C-means clustering (FCM), followed by a support vector machine (SVM) for classification. The proposed algorithm consists of four main stages: (i) image preprocessing; (ii) localization of the optic disc (OD); (iii) determination of candidate HE using a dynamic threshold in combination with a global threshold based on FCM; and (iv) extraction of eight texture features from each candidate HE region, which are then fed into an SVM classifier for automatic HE classification. The algorithm was trained and cross-validated (10-fold) at the pixel level on the publicly available e-ophtha EX database (47 images), achieving an overall average sensitivity, PPV, and F-score of 76.5%, 82.7%, and 76.7%, respectively. It was tested on the independent DIARETDB1 database (89 images), with an overall average sensitivity, specificity, and accuracy of 97.5%, 97.8%, and 97.7%, respectively. In summary, the satisfactory evaluation results on both retinal imaging databases demonstrate the effectiveness of the proposed algorithm for automatic HE detection using a dynamic threshold and FCM followed by an SVM for classification.
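Stage (iii) derives a global threshold from fuzzy C-means. A minimal, self-contained sketch of the FCM part on a 1-D intensity sample follows; this is a generic two-cluster FCM with illustrative initialization, not the paper's exact formulation:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means: alternate membership and centre updates."""
    centres = np.quantile(x, np.linspace(0.1, 0.9, c))  # spread-out init
    for _ in range(iters):
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9  # point-centre distances
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # memberships sum to 1 per point
        um = u ** m
        centres = um @ x / um.sum(axis=1)   # membership-weighted means
    return np.sort(centres), u

# Synthetic intensity sample: many dark background pixels plus a smaller
# population of bright "exudate" pixels.
rng = np.random.default_rng(1)
x = np.concatenate([np.full(500, 0.2), np.full(100, 0.9)]) + rng.normal(0, 0.03, 600)
centres, u = fcm_1d(x)
threshold = centres.mean()  # global cut between the two cluster centres
```

In the paper this global FCM threshold is combined with a dynamic (locally adaptive) threshold; that combination is omitted here.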
|
19
|
Chanda K, Issac A, Dutta MK. An Adaptive Algorithm for Detection of Exudates Based on Localized Properties of Fundus Images. INTERNATIONAL JOURNAL OF E-HEALTH AND MEDICAL COMMUNICATIONS 2019. [DOI: 10.4018/ijehmc.2019010102] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This article presents an algorithm to detect exudates, one of several abnormalities used to identify diabetic retinopathy in fundus images. The algorithm is invariant to illumination and works well on poor-contrast images with high reflection noise. Artefacts are correctly rejected even though their colour, intensity, and contrast are almost identical to those of exudates. The optic disc is localized and segmented using an average filter of specially determined size, an important step in the rejection of false positives. Exudates are located by generating candidate regions using variance and median filters followed by morphological reconstruction. The strategic selection of local properties to decide the threshold makes this approach novel, adaptive, and highly accurate for the detection of exudates. The proposed method was tested on two publicly available labelled databases (DIARETDB1 and MESSIDOR) and a database from a local hospital, achieving a sensitivity of 96.765% and a positive predictive value of 93.514%.
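The candidate-generation idea (median filtering followed by a local-variance map with an adaptive cut-off) can be sketched with SciPy; the window sizes and the two-sigma threshold below are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def local_variance(img, size=7):
    """Windowed variance via E[x^2] - (E[x])^2 over a size x size window."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0.0, None)

# Toy green-channel image: flat background with one small bright patch
# standing in for a hard exudate.
img = np.full((64, 64), 0.3)
img[30:36, 30:36] = 0.9
smoothed = median_filter(img, size=3)            # suppress impulse noise
var = local_variance(smoothed)
candidates = var > var.mean() + 2.0 * var.std()  # locally high-variance regions
```

In the paper these candidate regions are then refined by morphological reconstruction and a locally decided threshold; that refinement is omitted here.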
Affiliation(s)
- Katha Chanda
- College of Computing, Georgia Institute of Technology, Atlanta, USA
- Ashish Issac
- Department of Electronics & Communication Engineering, Amity University, Noida, India
- Malay Kishore Dutta
- Center for Advanced Studies, Dr. A.P.J Abdul Kalam Technical University, Lucknow, India
|
20
|
|
21
|
Badgujar R, Deore P. MBO-SVM-based exudate classification in fundus retinal images of diabetic patients. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2018. [DOI: 10.1080/21681163.2018.1487338] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Ravindra Badgujar
- Department of Electronics & Telecommunication Engineering, R C Patel Institute of Technology, Shirpur, India
- Pramod Deore
- Department of Electronics & Telecommunication Engineering, R C Patel Institute of Technology, Shirpur, India
|
22
|
Kaur J, Mittal D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2017.10.003] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
23
|
Dheeba J, Jaya T, Singh NA. Breast cancer risk assessment and diagnosis model using fuzzy support vector machine based expert system. J EXP THEOR ARTIF IN 2017. [DOI: 10.1080/0952813x.2017.1280088] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- J. Dheeba
- Department of Computer Science and Engineering, College of Engineering, Perumon, Kollam, India
- T. Jaya
- Department of Electronics and Communication Engineering, CSI Institute of Technology, Nagercoil, India
|
24
|
Amin J, Sharif M, Yasmin M. A Review on Recent Developments for Detection of Diabetic Retinopathy. SCIENTIFICA 2016; 2016:6838976. [PMID: 27777811 PMCID: PMC5061953 DOI: 10.1155/2016/6838976] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2015] [Revised: 04/22/2016] [Accepted: 05/10/2016] [Indexed: 06/01/2023]
Abstract
Diabetic retinopathy is caused by damage to the retinal microvasculature resulting from diabetes mellitus. Unchecked, severe cases of diabetic retinopathy can lead to blindness. Manual inspection of fundus images to check for morphological changes in microaneurysms, exudates, blood vessels, hemorrhages, and the macula is very time-consuming and tedious work; computer-aided systems can ease this task and reduce inter-observer variability. In this paper, several techniques for detecting microaneurysms, hemorrhages, and exudates are discussed for the detection of nonproliferative diabetic retinopathy. Blood vessel detection techniques are also discussed for the diagnosis of proliferative diabetic retinopathy. Furthermore, the paper discusses the experiments carried out by the cited authors for the detection of diabetic retinopathy. This work will be helpful for researchers and practitioners who want to build on ongoing research in this area.
Affiliation(s)
- Javeria Amin
- COMSATS Institute of Information Technology, Department of Computer Science, Wah 47040, Pakistan
- Muhammad Sharif
- COMSATS Institute of Information Technology, Department of Computer Science, Wah 47040, Pakistan
- Mussarat Yasmin
- COMSATS Institute of Information Technology, Department of Computer Science, Wah 47040, Pakistan
|
25
|
Gilbert CE, Babu RG, Gudlavalleti ASV, Anchala R, Shukla R, Ballabh PH, Vashist P, Ramachandra SS, Allagh K, Sagar J, Bandyopadhyay S, Murthy GVS. Eye care infrastructure and human resources for managing diabetic retinopathy in India: The India 11-city 9-state study. Indian J Endocrinol Metab 2016; 20:S3-S10. [PMID: 27144134 PMCID: PMC4847447 DOI: 10.4103/2230-8210.179768] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND There is a paucity of information on the availability of services for the diagnosis and management of diabetic retinopathy (DR) in India. OBJECTIVES The study was undertaken to document existing healthcare infrastructure and practice patterns for managing DR. METHODS This cross-sectional study was conducted in 11 cities and included public and private eye care providers, covering both multispecialty and stand-alone eye care facilities. Information was collected, by administering a semistructured questionnaire and using observational checklists, on all steps of the program, from how diabetics were identified for screening through to follow-up policies after treatment. RESULTS A total of 86 eye units were included (31.4% multispecialty hospitals; 68.6% stand-alone clinics). A dedicated retina unit was reported by 59 (68.6%) facilities. The mean number of outpatient consultations per year was 45,909 per responding facility, with nearly half being new registrations. A mean of 631 persons with sight-threatening DR (ST-DR) were registered per year per facility. The commonest treatment for ST-DR was laser photocoagulation. Only 58% of the facilities reported having a full-time retina specialist on their rolls. More than half of the eye care facilities (47; 54.6%) reported that their ophthalmologists would like further training in retina, and half (51.6%) stated that they needed laser or surgical equipment. Only 46.5% of the hospitals had a system to track patients needing treatment or follow-up. CONCLUSIONS The study highlighted existing gaps in service provision at eye care facilities in India.
Affiliation(s)
- Clare E. Gilbert
- Department of Clinical Research, International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London, UK
- R. Giridhara Babu
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Aashrai Sai Venkat Gudlavalleti
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Raghupathy Anchala
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Rajan Shukla
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Pant Hira Ballabh
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Praveen Vashist
- Department of Community Ophthalmology, Dr. R. P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Srikrishna S. Ramachandra
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Komal Allagh
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Jayanti Sagar
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- Souvik Bandyopadhyay
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
- G. V. S. Murthy
- Department of Clinical Research, International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London, UK
- South Asia Centre for Disability Inclusive Development Research, Indian Institute of Public Health, Public Health Foundation of India, ANV Arcade, 1 Amar Cooperative Society, Kavuri Hills, Madhapur, Hyderabad, Telangana, India
|