1
Liang X, Wen H, Duan Y, He K, Feng X, Zhou G. Nonproliferative diabetic retinopathy dataset (NDRD): a database for diabetic retinopathy screening research and deep learning evaluation. Health Informatics J 2024; 30:14604582241259328. [PMID: 38864242] [DOI: 10.1177/14604582241259328]
Abstract
OBJECTIVES In this article, we provide a database of nonproliferative diabetic retinopathy, which focuses on early diabetic retinopathy with hard exudation, and further explore its clinical application in disease recognition. METHODS We collected photographs of nonproliferative diabetic retinopathy taken with an Optos Panoramic 200 laser scanning ophthalmoscope, filtered out poor-quality images, and labeled the hard exudative lesions under the guidance of professional medical personnel. To validate the effectiveness of the dataset, five deep learning models were used to perform learning predictions on it, and their performance was assessed with standard evaluation metrics. RESULTS Lesions in nonproliferative diabetic retinopathy are smaller than those in proliferative retinopathy and more difficult to identify. Existing segmentation models show poor lesion segmentation performance, whereas the intersection over union (IoU) for deep lesion segmentation by models targeting small lesions can reach 66.12%, higher than that of ordinary lesion segmentation models, though there is still considerable room for improvement. CONCLUSION The segmentation of small hard exudative lesions is more challenging than that of large hard exudative lesions, and more targeted datasets are needed for model training. Compared with previous diabetic retina datasets, the NDRD dataset pays more attention to micro lesions.
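The IoU score this abstract reports is a simple ratio of overlapping to combined lesion pixels. A minimal pixel-wise sketch on toy binary masks (illustrative only, not the NDRD evaluation code) might look like:

```python
def iou(pred, truth):
    # intersection over union of two binary masks (2D lists of 0/1)
    inter = union = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            inter += p and t   # pixel counted when both masks mark it
            union += p or t    # pixel counted when either mask marks it
    return inter / union if union else 1.0

pred  = [[1, 1, 0], [0, 1, 0]]
truth = [[1, 0, 0], [0, 1, 1]]
print(iou(pred, truth))  # 2 overlapping pixels / 4 combined = 0.5
```

Small lesions make this ratio punishing: a one-pixel boundary error on a tiny exudate removes a large fraction of the intersection, which is consistent with the modest 66.12% reported above.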
Affiliation(s)
- Xing Liang
- Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Haiqi Wen
- Taiyuan University of Technology School of Software, Taiyuan, China
- Yajian Duan
- Department of Ophthalmology, Shanxi Bethune Hospital, Taiyuan, China
- Kan He
- Taiyuan University of Technology School of Mathematics, Taiyuan, China
- Xiufang Feng
- Taiyuan University of Technology School of Software, Taiyuan, China
- Guohong Zhou
- Department of Ophthalmology, Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, China
2
Manan MA, Jinchao F, Khan TM, Yaqub M, Ahmed S, Chuhan IS. Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy. Microsc Res Tech 2023; 86:1443-1460. [PMID: 37194727] [DOI: 10.1002/jemt.24345]
Abstract
Exudates are a common sign of diabetic retinopathy, a disease that affects the blood vessels in the retina. Early detection of exudates is critical to avoiding vision problems through continuous screening and treatment. In traditional clinical practice, the involved lesions are detected manually from fundus photographs; this task is cumbersome and time-consuming and requires intense effort because of the small size of the lesions and the low contrast of the images. Thus, computer-assisted diagnosis of retinal disease based on the detection of red lesions has been actively explored recently. In this paper, we present a comparison of deep convolutional neural network (CNN) architectures and propose a residual CNN with residual skip connections that reduces the number of parameters for the semantic segmentation of exudates in retinal images. A suitable image augmentation technique is used to improve the performance of the network architecture. The proposed network can robustly segment exudates with high accuracy, which makes it suitable for diabetic retinopathy screening. A comparative performance analysis on three benchmark databases (E-ophtha, DIARETDB1, and the Hamilton Ophthalmology Institute's Macular Edema dataset) is presented. The proposed method achieves a precision of 0.95, 0.92, and 0.97, an accuracy of 0.98, 0.98, and 0.98, a sensitivity of 0.97, 0.95, and 0.95, a specificity of 0.99, 0.99, and 0.99, and an area under the curve of 0.97, 0.94, and 0.96, respectively. RESEARCH HIGHLIGHTS: The research focuses on the detection and segmentation of exudates in diabetic retinopathy, a disease affecting the retina. Early detection of exudates is important to avoid vision problems and requires continuous screening and treatment; currently, manual detection is time-consuming and requires intense effort.
The authors compare qualitative results of state-of-the-art convolutional neural network (CNN) architectures and propose a computer-assisted diagnosis approach based on deep learning, using a residual CNN with residual skip connections to reduce parameters. The proposed method is evaluated on three benchmark databases and demonstrates high accuracy and suitability for diabetic retinopathy screening.
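The residual skip connections this entry builds on follow the pattern output = F(x) + x, so each layer only has to learn a residual correction. A framework-free toy version (an assumption for illustration, not the authors' actual architecture, with an element-wise weighting standing in for a convolution):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def transform(v, w):
    # toy stand-in for a learned layer: weight each feature, then ReLU
    return relu([wi * xi for wi, xi in zip(w, v)])

def residual_block(v, w):
    # residual skip connection: output = F(v) + v, so the block only
    # needs to learn the residual F rather than the full mapping
    return [f + x for f, x in zip(transform(v, w), v)]

print(residual_block([1.0, -2.0, 3.0], [0.5, 0.5, 0.5]))
```

Because the skip path is parameter-free, stacking such blocks adds depth without the extra weights a plain layer-for-layer widening would need, which is the parameter-reduction argument the abstract makes.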
Affiliation(s)
- Malik Abdul Manan
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Tariq M Khan
- School of IT, Deakin University, Waurn Ponds, Australia
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Imran Shabir Chuhan
- Interdisciplinary Research Institute, Faculty of Science, Beijing University of Technology, Beijing, China
3
Kukkar A, Gupta D, Beram SM, Soni M, Singh NK, Sharma A, Neware R, Shabaz M, Rizwan A. Optimizing deep learning model parameters using socially implemented IoMT systems for diabetic retinopathy classification problem. IEEE Transactions on Computational Social Systems 2023; 10:1654-1665. [DOI: 10.1109/tcss.2022.3213369]
Affiliation(s)
- Ashima Kukkar
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- Dinesh Gupta
- Computer Science & Engineering, I. K. Gujral Punjab Technical University, Kapurthala, Punjab, India
- Shehab Mohamed Beram
- Department of Computing and Information Systems, Research Centre for Human-Machine Collaboration (HUMAC), Sunway University, Kuala Lumpur, Malaysia
- Mukesh Soni
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali, Punjab, India
- Nikhil Kumar Singh
- Department of Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh, India
- Ashutosh Sharma
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Rahul Neware
- Department of Computer Science and Engineering, G. H. Raisoni College of Engineering, Nagpur, India
- Mohammad Shabaz
- Model Institute of Engineering and Technology, Jammu, Jammu and Kashmir, India
- Ali Rizwan
- Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4
Kaur J, Mittal D, Malebary S, Nayak SR, Kumar D, Kumar M, Gagandeep, Singh S. Automated detection and segmentation of exudates for the screening of background retinopathy. J Healthc Eng 2023; 2023:4537253. [PMID: 37483301] [PMCID: PMC10361834] [DOI: 10.1155/2023/4537253]
Abstract
Exudates, asymptomatic yellow deposits on the retina, are among the primary characteristics of background diabetic retinopathy, a retinopathy related to high blood sugar levels that slowly affects all the organs of the body. The early detection of exudates aids doctors in screening patients suffering from background diabetic retinopathy. The computer-aided method proposed in the present work detects and then segments exudates in retinal images acquired with a digital fundus camera by (i) a gradient method to trace the contours of exudates, (ii) marking connected candidate pixels to remove false exudate pixels, and (iii) linking edge pixels to extract exudate boundaries. The method is tested on 1307 retinal fundus images with varying characteristics: 649 images were acquired from a hospital and the remaining 658 from open-source benchmark databases, namely STARE, DRIVE, MESSIDOR, DiaretDB1, and e-Ophtha. The proposed exudate segmentation method achieves an image-based (i) accuracy of 98.04%, (ii) sensitivity of 95.345%, and (iii) specificity of 98.63%. Exudate-based evaluations yield an average (i) accuracy of 95.68%, (ii) sensitivity of 93.44%, and (iii) specificity of 97.22%. The substantial combined performance in image- and exudate-based evaluations demonstrates the contribution of the proposed method to mass screening as well as the treatment process of background diabetic retinopathy.
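The accuracy, sensitivity, and specificity figures quoted above all derive from the same confusion-matrix counts. A short sketch with hypothetical counts (the paper's per-image counts are not given here) shows how the three scores relate:

```python
def screening_metrics(tp, fp, tn, fn):
    # the three scores this entry reports, from raw confusion counts
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on exudate pixels
    specificity = tn / (tn + fp)   # recall on background pixels
    return accuracy, sensitivity, specificity

# hypothetical confusion counts for one fundus image
acc, sen, spe = screening_metrics(tp=90, fp=5, tn=880, fn=25)
print(round(acc, 3), round(sen, 3), round(spe, 3))
```

Note how the large background class dominates accuracy and specificity, which is why exudate-level sensitivity is usually the most informative of the three for small lesions.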
Affiliation(s)
- Jaskirat Kaur
- Department of Electronics and Communication Engineering, Punjab Engineering College (Deemed to be University), Sector 12, Chandigarh 160012, India
- Deepti Mittal
- Electrical and Instrumentation Engineering Department, Thapar Institute of Engineering and Technology, Patiala 147004, India
- Sharaf Malebary
- Department of Information Technology, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21911, Saudi Arabia
- Soumya Ranjan Nayak
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
- Devendra Kumar
- Department of Computer Science, Wachemo University, Hosaena, Ethiopia
- Manoj Kumar
- Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai Knowledge Park, UAE
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- Gagandeep
- Computer Science Engineering Department, Chandigarh Engineering College, Mohali, India
- Simrandeep Singh
- Electronics and Communication Engineering Department, UCRD, Chandigarh University, Mohali, India
5
Sasmal B, Dhal KG. A survey on the utilization of superpixel image for clustering based image segmentation. Multimed Tools Appl 2023; 82:1-63. [PMID: 37362658] [PMCID: PMC9992924] [DOI: 10.1007/s11042-023-14861-9]
Abstract
Superpixels have become increasingly popular in the image segmentation field, as they greatly help segmentation techniques to delineate the region of interest accurately in noisy environments and also reduce the computational effort to a great extent. However, the choice of superpixel generation technique and superpixel-based segmentation technique plays a crucial role across different kinds of image segmentation. Clustering is a well-accepted image segmentation technique with proven performance in many segmentation domains. Therefore, this study presents an up-to-date survey on the use of superpixel images combined with clustering techniques for image segmentation. The survey makes four contributions: (i) an overview of superpixel image generation techniques; (ii) a treatment of clustering techniques, especially efficient partitional clustering techniques, their issues, and strategies for overcoming them; (iii) a review of superpixel-plus-clustering strategies in the literature for various image segmentation tasks; and (iv) a comparative study of superpixel combined with partitional clustering techniques on oral pathology and leaf images to establish the efficacy of the combination. Our evaluations and observations provide an in-depth understanding of several superpixel generation strategies and how they apply to partitional clustering methods.
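Partitional clustering, which the survey singles out, assigns each item to the nearest of k centroids and re-estimates the centroids until they stabilise. A toy one-dimensional k-means over hypothetical superpixel mean intensities (pure Python, illustrative only; real pipelines cluster multi-dimensional colour/texture features):

```python
def kmeans_1d(values, k, iters=20):
    # initialise centroids spread evenly across the value range
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest centroid
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# hypothetical superpixel mean intensities: two clear groups
means = [0.1, 0.15, 0.12, 0.8, 0.85, 0.9]
print(sorted(round(c, 2) for c in kmeans_1d(means, 2)))
```

Clustering superpixel means instead of raw pixels is what yields the speed-up the survey highlights: a few hundred superpixels stand in for millions of pixels.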
Affiliation(s)
- Buddhadev Sasmal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, West Bengal, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, West Bengal, India
6
Exudate identification in retinal fundus images using precise textural verifications. Sci Rep 2023; 13:2824. [PMID: 36808177] [PMCID: PMC9938199] [DOI: 10.1038/s41598-023-29916-y]
Abstract
One of the most salient diseases of the retina is diabetic retinopathy (DR), which in its advanced phases may lead to irreparable damage to eye vision. A large proportion of people with diabetes experience DR, and the early identification of DR signs facilitates the treatment process and prevents blindness. Hard exudates (HEs) are bright lesions that appear in the retinal fundus images of DR patients; thus, detecting HEs is an important task in preventing the progress of DR. However, HE detection is challenging because of the lesions' varied appearance. In this paper, an automatic method for identifying HEs of various sizes and shapes is proposed. The method works pixel-wise: it considers several semi-circular regions around each pixel and, for each semi-circular region, computes the intensity changes along several directions and not-necessarily-equal radii. All pixels for which several semi-circular regions include considerable intensity changes are considered pixels located in HEs. To reduce false positives, an optic disc localization method is proposed in the post-processing phase. The performance of the proposed method has been evaluated on the DIARETDB0 and DIARETDB1 datasets, and the experimental results confirm its improved accuracy.
7
Kundu S, Karale V, Ghorai G, Sarkar G, Ghosh S, Dhara AK. Nested U-Net for segmentation of red lesions in retinal fundus images and sub-image classification for removal of false positives. J Digit Imaging 2022; 35:1111-1119. [PMID: 35474556] [PMCID: PMC9582103] [DOI: 10.1007/s10278-022-00629-4]
Abstract
Diabetic retinopathy is a pathological change of the retina that occurs with long-term diabetes. Patients become symptomatic in the advanced stages of diabetic retinopathy, resulting in severe non-proliferative or proliferative diabetic retinopathy. There is a need for an automated screening tool for the early detection and treatment of patients with diabetic retinopathy. This paper focuses on the segmentation of red lesions using a nested U-Net (Zhou et al., Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, 2018) followed by removal of false positives based on a sub-image classification method. Different sub-image sizes were studied for reducing false positives in the sub-image classification step. The network can capture semantic features and fine details owing to dense convolutional blocks connected via skip connections between the down-sampling and up-sampling paths. False-negative candidates were very few, and the sub-image classification network effectively reduced the falsely detected candidates. The proposed framework achieves a sensitivity of [Formula: see text], precision of [Formula: see text], and F1-score of [Formula: see text] on the DIARETDB1 dataset (Kalviainen and Uusitalo, Medical Image Understanding and Analysis, Citeseer, 2007). It outperforms state-of-the-art networks such as U-Net (Ronneberger et al., International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015) and attention U-Net (Oktay et al., Attention U-Net: Learning Where to Look for the Pancreas, 2018).
Affiliation(s)
- Swagata Kundu
- Electrical Engineering Department, National Institute of Technology Durgapur, Durgapur 713209, India
- Vikrant Karale
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Goutam Ghorai
- Department of Electrical Engineering, Jadavpur University, Kolkata 700032, India
- Gautam Sarkar
- Department of Electrical Engineering, Jadavpur University, Kolkata 700032, India
- Sambuddha Ghosh
- Department of Ophthalmology, Calcutta National Medical College and Hospital, Kolkata 700014, India
- Ashis Kumar Dhara
- Electrical Engineering Department, National Institute of Technology Durgapur, Durgapur 713209, India
8
Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimed Tools Appl 2022; 82:14471-14525. [PMID: 36185322] [PMCID: PMC9510498] [DOI: 10.1007/s11042-022-13841-9]
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body's insulin is not utilised properly. One of its signs is diabetic retinopathy, the most prevalent diabetes-related eye disease; if it remains unaddressed, diabetic retinopathy can affect any diabetic, become very serious, and raise the chances of blindness. It is a chronic systemic condition that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, they can be rescued from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels in the retina, and blood vessel damage is usually noticeable on images. Therefore, this study reviews several traditional as well as deep learning-based approaches for the classification and detection of this diabetic eye disease, and describes the advantages of one approach over another. Along with the approaches, the datasets and evaluation metrics useful for DR detection and classification are also discussed. The main finding of this study is to make researchers aware of the challenges that occur while detecting diabetic retinopathy using computer vision and deep learning techniques. The purpose of this review is therefore to sum up all the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, and the proper categorization of datasets and evaluation metrics. As deep learning models are quite expensive and prone to security attacks, future work should aim at refined, reliable, and robust models that address these commonly encountered concerns.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
9
AI-based automatic detection and classification of diabetic retinopathy using U-Net and deep learning. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071427]
Abstract
Artificial intelligence is widely applied to automate diabetic retinopathy (DR) diagnosis. Diabetes-related retinal vascular disease is one of the world's most common leading causes of blindness and vision impairment; automated DR detection systems would therefore greatly benefit early screening and treatment and prevent the vision loss DR causes. Researchers have proposed several systems to detect abnormalities in retinal images in the past few years. However, automatic DR detection methods have traditionally been based on hand-crafted feature extraction from retinal images followed by a classifier to obtain the final classification. Deep neural networks (DNNs) have in recent years helped overcome this limitation. We suggest a novel two-stage approach for automated DR classification in this research. Because of the low fraction of positive instances in the asymmetric optic disc (OD) and blood vessel (BV) detection system, preprocessing and data augmentation techniques are used to enhance image quality and quantity. The first stage uses two independent U-Net models for OD and BV segmentation. In the second stage, a symmetric hybrid CNN-SVD model, created after preprocessing, extracts and selects the most discriminant features following OD and BV extraction using Inception-V3 based on transfer learning, and detects DR by recognizing retinal biomarkers such as microaneurysms (MA), hemorrhages (HM), and exudates (EX). On EyePACS-1, Messidor-2, and DIARETDB0, the proposed methodology demonstrated state-of-the-art performance, with average accuracies of 97.92%, 94.59%, and 93.52%, respectively. Extensive testing and comparisons with baseline approaches indicate the efficacy of the suggested methodology.
10
Wang H, Zhou Y, Zhang J, Lei J, Sun D, Xu F, Xu X. Anomaly segmentation in retinal images with Poisson-blending data augmentation. Med Image Anal 2022; 81:102534. [PMID: 35842977] [DOI: 10.1016/j.media.2022.102534]
Abstract
Diabetic retinopathy (DR) is one of the most important complications of diabetes. Accurate segmentation of DR lesions is of great importance for the early diagnosis of DR. However, simultaneous segmentation of multi-type DR lesions is technically challenging because of 1) the lack of pixel-level annotations and 2) the large diversity between different types of DR lesions. In this study, first, we propose a novel Poisson-blending data augmentation (PBDA) algorithm to generate synthetic images, which can be easily utilized to expand the existing training data for lesion segmentation. We perform extensive experiments to recognize the important attributes in the PBDA algorithm. We show that position constraints are of great importance and that the synthesis density of one type of lesion has a joint influence on the segmentation of other types of lesions. Second, we propose a convolutional neural network architecture, named DSR-U-Net++ (i.e., DC-SC residual U-Net++), for the simultaneous segmentation of multi-type DR lesions. Ablation studies showed that the mean area under precision recall curve (AUPR) for all four types of lesions increased by >5% with PBDA. The proposed DSR-U-Net++ with PBDA outperformed the state-of-the-art methods by 1.7%-9.9% on the Indian Diabetic Retinopathy Image Dataset (IDRiD) and 67.3% on the e-ophtha dataset with respect to mean AUPR. The developed method would be an efficient tool to generate large-scale task-specific training data for other medical anomaly segmentation tasks.
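PBDA pastes synthetic lesions into healthy fundus images while matching image gradients at the seam. The naive cut-and-paste baseline it improves upon can be sketched with toy arrays (illustrative only; the paper's actual method additionally solves a Poisson equation over the masked region and enforces the position constraints discussed above):

```python
def paste_with_mask(background, patch, mask, top, left):
    # naive cut-and-paste: copy patch pixels wherever mask == 1.
    # Poisson blending instead matches gradients at the mask boundary,
    # avoiding the hard seam this simple version produces.
    out = [row[:] for row in background]
    for r in range(len(patch)):
        for c in range(len(patch[0])):
            if mask[r][c]:
                out[top + r][left + c] = patch[r][c]
    return out

bg = [[0] * 4 for _ in range(4)]      # toy healthy background
lesion = [[9, 9], [9, 9]]             # toy lesion patch
mask = [[1, 0], [1, 1]]               # lesion shape within the patch
print(paste_with_mask(bg, lesion, mask, top=1, left=1))
```

The visible intensity jump at the pasted boundary is exactly the artifact that makes naive augmentation easy for a segmentation network to shortcut, which motivates the gradient-domain blending in PBDA.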
Affiliation(s)
- Hualin Wang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Yuhong Zhou
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315300, China
- Jianqin Lei
- Department of Ophthalmology, First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710049, China
- Dongke Sun
- Jiangsu Key Laboratory for Design and Manufacture of Micro-Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Feng Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Zhejiang Research Institute of Xi'an Jiaotong University, Hangzhou, 311215, China
11
Alahmadi MD. Medical image segmentation with learning semantic and global contextual representation. Diagnostics (Basel) 2022; 12:1548. [PMID: 35885454] [PMCID: PMC9319384] [DOI: 10.3390/diagnostics12071548]
Abstract
Automatic medical image segmentation is an essential step toward accurate disease diagnosis and designing a follow-up treatment. This assistive method facilitates the cancer detection process and provides a benchmark to highlight the affected area. The U-Net model has become the standard design choice. Although the symmetrical structure of the U-Net model enables the network to encode rich semantic representations, the intrinsic locality of the CNN layers limits its capability to model long-range contextual dependency. On the other hand, sequence-to-sequence Transformer models with a multi-head attention mechanism can effectively model global contextual dependency. However, the lack of low-level information stemming from the Transformer architecture limits their performance in capturing local representations. In this paper, we propose a model with two parallel encoders: in the first path, a CNN module captures the local semantic representation, whereas the second path deploys a Transformer module to extract the long-range contextual representation. Next, by adaptively fusing these two feature maps, we encode both representations into a single representative tensor to be further processed by the decoder block. An experimental study demonstrates that our design provides rich and generic representation features which are highly efficient for fine-grained semantic segmentation tasks.
Affiliation(s)
- Mohammad D Alahmadi
- Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah 23890, Saudi Arabia
12
Automated grading of diabetic retinopathy using CNN with hierarchical clustering of image patches by Siamese network. Phys Eng Sci Med 2022; 45:623-635. [PMID: 35587313] [DOI: 10.1007/s13246-022-01129-z]
Abstract
Diabetic retinopathy (DR) is a progressive vascular complication that affects people who have diabetes. This retinal abnormality can cause irreversible vision loss or permanent blindness; therefore, it is crucial to undergo frequent eye screening for early recognition and treatment. This paper proposes a feature extraction algorithm using discriminative multi-sized patches, based on a deep convolutional neural network (CNN), for DR grading. This comprehensive algorithm extracts local and global features for efficient decision-making. Each input image is divided into small patches to extract local-level features and then split into clusters or subsets. Hierarchical clustering by a Siamese network with a pre-trained CNN is proposed to select the clusters with more discriminative patches. A fine-tuned Xception CNN model is used to extract global-level features from larger image patches. Local and global features are combined to improve the overall image-wise classification accuracy. The final support vector machine classifier exhibits 96% classification accuracy with tenfold cross-validation in classifying DR images.
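The first step described above, dividing each image into small patches for local feature extraction, can be sketched in a few lines (toy 2D-list image, illustrative only; the paper works on full-resolution fundus images with multiple patch sizes):

```python
def split_into_patches(image, patch):
    # image: 2D list (H x W); returns non-overlapping patch x patch tiles,
    # scanned left-to-right, top-to-bottom (edge remainders are dropped)
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 image
print(len(split_into_patches(img, 2)))
```

Each such tile would then be embedded by the pre-trained CNN before the Siamese-network clustering selects the discriminative subsets.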
13
Guo S. LightEyes: a lightweight fundus segmentation network for mobile edge computing. Sensors (Basel) 2022; 22:3112. [PMID: 35590802] [PMCID: PMC9104959] [DOI: 10.3390/s22093112]
Abstract
The fundus is the only structure that can be observed without trauma to the human body, and by analyzing color fundus images, the diagnostic basis for various diseases can be obtained. Recently, fundus image segmentation has witnessed vast progress with the development of deep learning. However, the improvement in segmentation accuracy comes with the complexity of deep models: such models show low inference speeds and high memory usage when deployed to mobile edge devices. To promote the deployment of deep fundus segmentation models on mobile devices, we aim to design a lightweight fundus segmentation network. Our observation comes from the fact that high-resolution representations can boost the segmentation of tiny fundus structures, and that the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network to learn high-resolution representations, so that the spatial relationship between feature maps is always retained. Meanwhile, because high-resolution features mean high memory usage, each layer uses at most 16 convolutional filters to reduce memory usage and decrease training difficulty. LightEyes has been verified on three kinds of fundus segmentation tasks, namely the hard exudate, the microaneurysm, and the vessel, on five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and segmentation speed compared with state-of-the-art fundus segmentation models, running at 1.6 images/s on a Cambricon-1A and 51.3 images/s on a GPU with only 36k parameters.
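The effect of capping each layer at 16 filters is easy to quantify, since a 2D convolution's parameter count is kernel * kernel * in_channels per filter plus one bias. A quick back-of-the-envelope comparison (hypothetical channel widths, not LightEyes' exact configuration):

```python
def conv2d_params(in_ch, out_ch, kernel):
    # weights (kernel * kernel * in_ch per output filter) plus one bias each
    return kernel * kernel * in_ch * out_ch + out_ch

# a typical wide 3x3 layer versus a 16-filter layer as used in LightEyes
wide = conv2d_params(64, 64, 3)
slim = conv2d_params(16, 16, 3)
print(wide, slim)  # the slim layer is roughly 16x smaller
```

Since parameter count scales with the product of input and output channels, halving both quarters the cost, which is how a whole network can stay near the 36k-parameter figure quoted above.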
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
14
Deep CNN with Hybrid Binary Local Search and Particle Swarm Optimizer for Exudates Classification from Fundus Images. J Digit Imaging 2022; 35:56-67. [PMID: 34997375 PMCID: PMC8854611 DOI: 10.1007/s10278-021-00534-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Revised: 10/04/2021] [Accepted: 11/03/2021] [Indexed: 02/03/2023] Open
Abstract
Diabetic retinopathy is a chronic condition that causes vision loss if not detected early. In its early stage, it can be diagnosed with the aid of exudate lesions. However, detecting exudate lesions is arduous due to the presence of blood vessels and other distractors. To tackle these issues, we propose a novel method for classifying exudates from fundus images: a hybrid convolutional neural network (CNN) with a binary local search optimizer (BLSO)-based particle swarm optimization (PSO) algorithm. The proposed method uses image augmentation to enlarge the fundus image to the required size without losing any features. The features extracted from the resized fundus images form a feature vector that is fed into the feed-forward CNN, which then classifies the exudates. Further, the hyperparameters are optimized with the BLSO and PSO to reduce computational complexity. The experimental analysis is conducted on the public ROC and real-time ARA400 datasets and compared with state-of-the-art approaches such as support vector machine classifiers, multi-modal/multi-scale methods, random forest, and CNN on the performance metrics. The proposed method achieves the highest classification accuracy and thus outperforms all the other approaches.
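The abstract does not spell out the BLSO-PSO hybrid, but the particle swarm step underlying it can be sketched on a toy objective (plain PSO only, as an illustration; the sphere function stands in for a real hyperparameter objective):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Plain particle swarm optimization over the box [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a hyperparameter objective: minimize sum of squares.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting the objective would be validation loss over CNN hyperparameters, with the BLSO handling discrete choices; that part is not reproduced here.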
15
Detection of exudates from clinical fundus images using machine learning algorithms in diabetic maculopathy. Int J Diabetes Dev Ctries 2022. [DOI: 10.1007/s13410-021-01039-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
16
Bilal A, Sun G, Mazhar S, Imran A, Latif J. A Transfer Learning and U-Net-based automatic detection of diabetic retinopathy from fundus images. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2021.2021111] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Anas Bilal
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Guangmin Sun
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Sarah Mazhar
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Azhar Imran
- Department of Creative Technologies, Air University, Islamabad, Pakistan
- Jahanzaib Latif
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
17
Shao A, Jin K, Li Y, Lou L, Zhou W, Ye J. Overview of global publications on machine learning in diabetic retinopathy from 2011 to 2021: Bibliometric analysis. Front Endocrinol (Lausanne) 2022; 13:1032144. [PMID: 36589855 PMCID: PMC9797582 DOI: 10.3389/fendo.2022.1032144] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 12/05/2022] [Indexed: 12/23/2022] Open
Abstract
PURPOSE To comprehensively analyze and discuss the publications on machine learning (ML) in diabetic retinopathy (DR) following a bibliometric approach. METHODS The global publications on ML in DR from 2011 to 2021 were retrieved from the Web of Science Core Collection (WoSCC) database. We analyzed the publication and citation trends over time and identified highly-cited articles, prolific countries, institutions, journals and the most relevant research domains. VOSviewer and Wordcloud were used to visualize the mainstream research topics and the evolution of subtopics in the form of keyword co-occurrence maps. RESULTS By analyzing a total of 1147 relevant publications, this study found a rapid increase in the number of annual publications, with an average growth rate of 42.68%. India and China were the most productive countries. IEEE Access was the most productive journal in this field. In addition, some notable common points were found in the highly-cited articles. The keyword analysis showed that "diabetic retinopathy", "classification", and "fundus images" were the most frequent keywords for the entire period, as automatic diagnosis of DR was always the mainstream topic in the field. The evolution of keywords highlighted some breakthroughs, including "deep learning" and "optical coherence tomography", indicating advances in technology and shifts in research attention. CONCLUSIONS As new research topics have emerged and evolved, studies are becoming increasingly diverse and extensive. Multiple modalities of medical data, new ML techniques and constantly optimized algorithms are the future trends in this multidisciplinary field.
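The keyword co-occurrence maps drawn by tools such as VOSviewer are built from a simple pairwise count over per-publication keyword lists, which can be sketched as follows (the example records are hypothetical, not the WoSCC data analyzed in the study):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often two keywords appear on the same publication -
    the basic quantity behind a keyword co-occurrence map."""
    pairs = Counter()
    for kws in keyword_lists:
        # sorted() gives each unordered pair one canonical key
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy records for illustration only.
records = [
    ["diabetic retinopathy", "classification", "fundus images"],
    ["diabetic retinopathy", "deep learning", "fundus images"],
    ["deep learning", "optical coherence tomography"],
]
links = cooccurrence(records)
```

Edge weights in the rendered map are exactly these pair counts (possibly normalized); node size is the keyword's total frequency.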
Affiliation(s)
- An Shao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
- Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
- Yunxiang Li
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
- Wuyuan Zhou
- Zhejiang Academy of Science and Technology Information, Hangzhou, China
- *Correspondence: Juan Ye, ; Wuyuan Zhou,
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
- *Correspondence: Juan Ye, ; Wuyuan Zhou,
18
Guo S. Fundus image segmentation via hierarchical feature learning. Comput Biol Med 2021; 138:104928. [PMID: 34662814 DOI: 10.1016/j.compbiomed.2021.104928] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Revised: 10/06/2021] [Accepted: 10/06/2021] [Indexed: 01/28/2023]
Abstract
Fundus Image Segmentation (FIS) is an essential procedure for the automated diagnosis of ophthalmic diseases. Recently, deep fully convolutional networks have been widely used for FIS with state-of-the-art performance. The representative deep model is the U-Net, which follows an encoder-decoder architecture. I believe it is suboptimal for FIS because consecutive pooling operations in the encoder lead to low-resolution representation and loss of detailed spatial information, which is particularly important for the segmentation of tiny vessels and lesions. Motivated by this, a high-resolution hierarchical network (HHNet) is proposed to learn semantic-rich high-resolution representations and preserve spatial details simultaneously. Specifically, a High-resolution Feature Learning (HFL) module with increasing dilation rates was first designed to learn the high-level high-resolution representations. Then, the HHNet was constructed by incorporating three HFL modules and two feature aggregation modules. The HHNet runs in a coarse-to-fine manner, and fine segmentation maps are output at the last level. Extensive experiments were conducted on fundus lesion segmentation, vessel segmentation, and optic cup segmentation. The experimental results reveal that the proposed method shows highly competitive or even superior performance in terms of segmentation performance and computation cost, indicating its potential advantages in clinical application.
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, 710055, China.
19
EAD-Net: A Novel Lesion Segmentation Method in Diabetic Retinopathy Using Neural Networks. DISEASE MARKERS 2021; 2021:6482665. [PMID: 34512815 PMCID: PMC8429028 DOI: 10.1155/2021/6482665] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 08/19/2021] [Indexed: 02/05/2023]
Abstract
Diabetic retinopathy (DR) is a common chronic fundus disease with four different kinds of microvascular structures and lesions: microaneurysms (MAs), hemorrhages (HEs), hard exudates, and soft exudates. Accurately detecting and counting them is basic but important work, and the manual annotation of these lesions is a labor-intensive task in clinical analysis. To solve this problem, we propose a novel segmentation method for the different lesions in DR. Our method is based on a convolutional neural network and can be divided into an encoder module, an attention module, and a decoder module, so we refer to it as EAD-Net. After normalization and augmentation, the fundus images were sent to the EAD-Net for automated feature extraction and pixel-wise label prediction. Given evaluation metrics based on the matching degree between detected candidates and ground-truth lesions, our method achieved a sensitivity of 92.77%, a specificity of 99.98%, and an accuracy of 99.97% on the e_ophtha_EX dataset, and comparable AUPR (area under the precision-recall curve) scores on the IDRiD dataset. Moreover, the results on a local dataset also show that EAD-Net outperforms the original U-Net in most metrics, especially in sensitivity and F1-score, with nearly ten percent improvement. The proposed EAD-Net is a novel method grounded in clinical DR diagnosis. It achieves satisfactory results on the segmentation of four different kinds of lesions, and these effective segmentations have important clinical significance in the monitoring and diagnosis of DR.
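The reported sensitivity, specificity, and accuracy are standard confusion-matrix quantities; a minimal sketch for binary pixel masks:

```python
def confusion_metrics(pred, truth):
    """Pixel-level sensitivity, specificity and accuracy from two
    equal-length binary (0/1) mask lists."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0   # recall on lesion pixels
    spec = tn / (tn + fp) if tn + fp else 0.0   # recall on background
    acc = (tp + tn) / len(truth)
    return sens, spec, acc
```

Note that with lesions covering a tiny fraction of pixels, accuracy and specificity are dominated by background and sit near 99% almost automatically, which is why sensitivity and F1 are the more telling numbers in the abstract.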
20
Huang C, Zong Y, Ding Y, Luo X, Clawson K, Peng Y. A new deep learning approach for the retinal hard exudates detection based on superpixel multi-feature extraction and patch-based CNN. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
21
Liu Q, Liu H, Zhao Y, Liang Y. Dual-Branch Network with Dual-Sampling Modulated Dice Loss for Hard Exudate Segmentation in Colour Fundus Images. IEEE J Biomed Health Inform 2021; 26:1091-1102. [PMID: 34460407 DOI: 10.1109/jbhi.2021.3108169] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automated segmentation of hard exudates in colour fundus images is a challenging task due to extreme class imbalance and enormous size variation. This paper aims to tackle these issues and proposes a dual-branch network with a dual-sampling modulated Dice loss. It consists of two branches, one biased towards segmenting large hard exudates and one biased towards small hard exudates, each responsible for its own duty. Furthermore, we propose a dual-sampling modulated Dice loss for training, so that the dual-branch network is able to segment hard exudates of different sizes. In detail, for the first branch, we use a uniform sampler to sample pixels from the predicted segmentation mask for the Dice loss calculation, which naturally biases this branch in favour of large hard exudates, as the Dice loss incurs a larger cost for misidentifying large hard exudates than small ones. For the second branch, we use a re-balanced sampler that oversamples hard exudate pixels and undersamples background pixels for the loss calculation. In this way, the cost of misidentifying small hard exudates is enlarged, which forces the parameters in the second branch to fit small hard exudates well. Considering that large hard exudates are much easier to identify correctly than small ones, we propose an easy-to-difficult learning strategy that adaptively modulates the losses of the two branches. We evaluate the proposed method on two public datasets, and the results demonstrate that it achieves state-of-the-art performance.
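The uniform versus re-balanced pixel samplers feeding the Dice loss can be sketched as follows (an illustrative simplification on flat pixel lists, not the paper's exact implementation):

```python
import random

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over paired lists of probabilities and 0/1 labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def sample_pixels(pred, target, n, rebalance, rng):
    """Draw n pixels: uniformly, or re-balanced (half lesion, half background)."""
    if rebalance:
        pos = [i for i, t in enumerate(target) if t == 1]
        neg = [i for i, t in enumerate(target) if t == 0]
        half = n // 2
        chosen = ([rng.choice(pos) for _ in range(half)]
                  + [rng.choice(neg) for _ in range(n - half)])
    else:
        chosen = [rng.choice(range(len(target))) for _ in range(n)]
    return [pred[i] for i in chosen], [target[i] for i in chosen]

# Imbalanced toy mask: 5 lesion pixels out of 1000, hypothetical prediction.
rng = random.Random(0)
target = [1] * 5 + [0] * 995
pred = [0.6] * 5 + [0.1] * 995
p_u, t_u = sample_pixels(pred, target, 100, False, rng)   # branch 1 sampler
p_rb, t_rb = sample_pixels(pred, target, 100, True, rng)  # branch 2 sampler
```

The uniform sample almost never contains lesion pixels, so its Dice loss is dominated by large, abundant structures; the re-balanced sample is half lesion by construction, which is what makes the second branch attend to small exudates.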
22
Kurilová V, Goga J, Oravec M, Pavlovičová J, Kajan S. Support vector machine and deep-learning object detection for localisation of hard exudates. Sci Rep 2021; 11:16045. [PMID: 34362989 PMCID: PMC8346563 DOI: 10.1038/s41598-021-95519-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Accepted: 07/26/2021] [Indexed: 02/08/2023] Open
Abstract
Hard exudates are one of the main clinical findings in the retinal images of patients with diabetic retinopathy. Detecting them early significantly impacts the treatment of underlying diseases; therefore, there is a need for automated systems with high reliability. We propose a novel method for identifying and localising hard exudates in retinal images. To achieve fast image pre-scanning, a support vector machine (SVM) classifier was combined with a faster region-based convolutional neural network (faster R-CNN) object detector for the localisation of exudates. The rapid pre-scanning filtered out exudate-free samples using a feature vector extracted from the pre-trained ResNet-50 network. Subsequently, the remaining samples were processed using the faster R-CNN detector for detailed analysis. When evaluating all the exudates as individual objects, the SVM classifier reduced the false positive rate by 29.7% while increasing the false negative rate by 16.2%. When evaluating whole images, we recorded a 50% reduction in the false positive rate without any increase in the number of false negatives. The interim results suggest that pre-scanning the samples with the SVM before the deep-network object detector can simultaneously improve and speed up hard exudate detection, especially when there is a paucity of training data.
Affiliation(s)
- Veronika Kurilová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia.
- Jozef Goga
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Miloš Oravec
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Jarmila Pavlovičová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Slavomír Kajan
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
23
Wang YL, Yang JY, Yang JY, Zhao XY, Chen YX, Yu WH. Progress of artificial intelligence in diabetic retinopathy screening. Diabetes Metab Res Rev 2021; 37:e3414. [PMID: 33010796 DOI: 10.1002/dmrr.3414] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Revised: 08/22/2020] [Accepted: 08/23/2020] [Indexed: 12/29/2022]
Abstract
Diabetic retinopathy (DR) is one of the leading causes of blindness worldwide, and the limited availability of qualified ophthalmologists restricts its early diagnosis. Over the past few years, artificial intelligence technology has developed rapidly and has been applied to DR screening. This emerging technology supports DR screening and improves the identification of DR lesions with high sensitivity and specificity. This review aims to summarize the progress of automatic detection and classification models for the diagnosis of DR.
Affiliation(s)
- Yue-Lin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Jing-Yun Yang
- Division of Statistics, School of Economics & Research Center of Financial Information, Shanghai University, Shanghai, China
- Rush Alzheimer's Disease Center & Department of Neurological Sciences, Rush University Medical Center, Chicago, Illinois, USA
- Jing-Yuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Xin-Yu Zhao
- Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- You-Xin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wei-Hong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
24
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. [PMID: 32410541 DOI: 10.2174/1573405615666190219102427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/31/2018] [Accepted: 01/20/2019] [Indexed: 12/15/2022]
Abstract
Diabetic Retinopathy (DR) is a major cause of blindness in diabetic patients. The increasing population of diabetic patients and difficulty to diagnose it at an early stage are limiting the screening capabilities of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions due to their comfortable, cost-effective and non-invasive acquisition procedure. Computer Aided Diagnosis (CAD) of DR based on these images can assist ophthalmologists and help in saving many sight years of diabetic patients. In a CAD system, preprocessing is a crucial phase, which significantly affects its performance. Commonly used preprocessing operations are the enhancement of poor contrast, balancing the illumination imbalance due to the spherical shape of a retina, noise reduction, image resizing to support multi-resolution, color normalization, extraction of a field of view (FOV), etc. Also, the presence of blood vessels and optic discs makes the lesion detection more challenging because these two artifacts exhibit specific attributes, which are similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing the native defects, 2) segmentation of blood vessels, and 3) localization and segmentation of optic discs. This paper presents a review of the state-of-the-art preprocessing techniques related to three categories of operations, highlighting their significant aspects and limitations. The survey is concluded with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of the CAD systems.
Affiliation(s)
- Muhammad Hussain
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib
- Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
25
Shi Z, Wang T, Huang Z, Xie F, Song G. A method for the automatic detection of myopia in Optos fundus images based on deep learning. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2021; 37:e3460. [PMID: 33773080 DOI: 10.1002/cnm.3460] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 03/08/2021] [Accepted: 03/20/2021] [Indexed: 06/12/2023]
Abstract
Myopia detection is significant for preventing irreversible visual impairment and diagnosing myopic retinopathy. To improve the detection efficiency and accuracy, a Myopia Detection Network (MDNet) that combines the advantages of dense connection and Residual Squeeze-and-Excitation attention is proposed in this paper to automatically detect myopia in Optos fundus images. First, an automatic optic disc recognition method is applied to extract the Regions of Interest and remove the noise disturbances; then, data augmentation techniques are implemented to enlarge the data set and prevent overfitting; moreover, an MDNet composed of Attention Dense blocks is constructed to detect myopia in Optos fundus images. The results show that the Mean Absolute Error of the Spherical Equivalent detected by this network can reach 1.1150 D (diopter), which verifies the feasibility and applicability of this method for the automatic detection of myopia in Optos fundus images.
Affiliation(s)
- Zhengjin Shi
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Tianyu Wang
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Zheng Huang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Feng Xie
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
26
Sadhana S, Mallika R. An intelligent technique for detection of diabetic retinopathy using improved alexnet model based convolutional neural network. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189582] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Blindness caused by Diabetic Retinopathy (DR) is a serious issue in the present medical world. DR is a complication of diabetes produced by problems in the retinal blood vessels. For clinical treatment, it is extremely helpful if diabetic retinopathy is detected in its early stages. In recent years, the manual detection of DR has consumed more time, and detecting DR in its early stages remains a challenging task. To avoid these issues, this research work focuses on an automated and effective solution for detecting DR symptoms from retinal images that requires less time for accurate detection. A novel histogram equalization technique is used for contrast enhancement and equalization in the initial pre-processing stage. Then, image patches are extracted regularly from these pre-processed images. An Improved Discrete Curvelet Transform based Grey Level Co-occurrence Matrix (IDCT-GLCM) is used in the second stage for extracting features, which are then given to the classifier. At last, an Improved Alexnet Model-based CNN (IAM-CNN) classification approach is used for diagnosing DR from digital fundus images. Extensive simulation results show the effectiveness and efficiency of the proposed method in terms of accuracy, specificity and sensitivity.
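The paper's "novel" histogram equalization variant is not specified in the abstract; the classic global version it builds on can be sketched as:

```python
def equalize(gray, levels=256):
    """Classic global histogram equalization for a flat list of
    integer gray values in [0, levels-1]."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    # cumulative distribution function of the histogram
    cdf, run = [0] * levels, 0
    for i, h in enumerate(hist):
        run += h
        cdf[i] = run
    cdf_min = next(c for c in cdf if c > 0)
    n = len(gray)
    if n == cdf_min:          # constant image: nothing to stretch
        return gray[:]
    # remap each value so the output histogram is approximately flat
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in gray]
```

The remapping sends the darkest occupied level to 0 and the brightest to levels-1, spreading intermediate levels by their cumulative frequency, which is what stretches low-contrast fundus regions.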
Affiliation(s)
- S. Sadhana
- Research and Development Centre, Bharathiar University, Coimbatore, Tamil Nadu, India
- R. Mallika
- Department of Computer Science, C.B.M. College, Coimbatore, Tamil Nadu, India
27
Furtado P. Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems. J Imaging 2021; 7:16. [PMID: 34460615 PMCID: PMC8321275 DOI: 10.3390/jimaging7020016] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/16/2021] [Accepted: 01/22/2021] [Indexed: 12/15/2022] Open
Abstract
Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersection-over-union (IoU), and dice. Which should be used? Is it useful to consider simple variations, such as modifying formula coefficients? How do the characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), of the liver in computed tomography images (CT), and of diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and variations, as well as segmentation scores of different targets. We first describe the limitations of metrics, since a loss is a metric, then we describe and test alternatives. Experimentally, we observed that DeeplabV3 outperforms UNet and the fully convolutional network (FCN) in all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy over all datasets; IoU improved 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false positive vs. false negative weights improved MRI by 12 pp, and assigning zero weight to the background improved EFI by 6 pp. Multiclass segmentation scored higher than n-uniclass segmentation in MRI by 8 pp. EFI lesions score low compared to more constant structures (e.g., the optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, and that it is worth assigning zero weight to the background class and testing different weights on false positives and false negatives.
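For reference, the three base losses compared above reduce to the following soft metrics on binary masks (a minimal sketch; the study's exact implementations and coefficient variants may differ):

```python
import math

def cross_entropy(pred, target, eps=1e-7):
    """Mean binary cross entropy over pixel probabilities and 0/1 labels."""
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(pred, target)) / len(target)

def soft_iou(pred, target):
    """Soft intersection-over-union score (higher is better)."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def soft_dice(pred, target):
    """Soft Dice score; always >= soft IoU on the same masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 2.0 * inter / denom if denom else 1.0
```

Training losses are then 1 minus the score (for IoU/Dice), and "modifying formula coefficients" amounts to reweighting the false-positive and false-negative terms hidden inside these sums.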
Affiliation(s)
- Pedro Furtado
- Dei/FCT/CISUC, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal
28
FFU-Net: Feature Fusion U-Net for Lesion Segmentation of Diabetic Retinopathy. BIOMED RESEARCH INTERNATIONAL 2021; 2021:6644071. [PMID: 33490274 PMCID: PMC7801055 DOI: 10.1155/2021/6644071] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 11/25/2020] [Accepted: 12/21/2020] [Indexed: 11/18/2022]
Abstract
Diabetic retinopathy is one of the main causes of blindness in human eyes, and lesion segmentation is important basic work for the diagnosis of diabetic retinopathy. Due to the small lesion areas scattered in fundus images, it is laborious to segment diabetic retinopathy lesions effectively with the existing U-Net model. In this paper, we propose a new lesion segmentation model named FFU-Net (Feature Fusion U-Net) that enhances U-Net in the following ways. Firstly, the pooling layer in the network is replaced with a convolutional layer to reduce the spatial loss of the fundus image. Then, we integrate a multiscale feature fusion (MSFF) block into the encoders, which helps the network learn multiscale features efficiently and enriches the information carried by the skip connections and the lower-resolution decoder by fusing contextual channel attention (CCA) models. Finally, in order to solve the problems of data imbalance and misclassification, we present a Balanced Focal Loss function. In experiments on the benchmark IDRiD dataset, we perform an ablation study to verify the effectiveness of each component and compare FFU-Net against several state-of-the-art models. In comparison with the baseline U-Net, FFU-Net improves the segmentation performance by 11.97%, 10.68%, and 5.79% on the SEN, IOU, and DICE metrics, respectively. The quantitative and qualitative results demonstrate the superiority of FFU-Net in the task of lesion segmentation of diabetic retinopathy.
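The exact form of FFU-Net's Balanced Focal Loss is not given in the abstract; a standard alpha-balanced focal loss, which it presumably builds on, looks like this (illustrative sketch only):

```python
import math

def balanced_focal_loss(pred, target, alpha=0.75, gamma=2.0, eps=1e-7):
    """Alpha-balanced focal loss over pixel lists.

    alpha upweights the rare lesion class; the (1 - pt)^gamma factor
    downweights pixels the model already classifies confidently.
    The published 'Balanced Focal Loss' of FFU-Net may differ in detail.
    """
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)
        pt = p if t == 1 else 1 - p          # probability of the true class
        a = alpha if t == 1 else 1 - alpha   # class-balancing weight
        total += -a * (1 - pt) ** gamma * math.log(pt)
    return total / len(target)
```

With gamma = 0 and alpha = 0.5 this collapses to (half of) plain binary cross entropy, which makes the two knobs easy to ablate.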
29
Furtado P. Loss, post-processing and standard architecture improvements of liver deep learning segmentation from Computed Tomography and magnetic resonance. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
30
Exudates as Landmarks Identified through FCM Clustering in Retinal Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app11010142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The aim of this work was to develop a method for the automatic identification of exudates using an unsupervised clustering approach. The ability to classify each pixel as belonging to a possible exudate, as a warning sign of disease, allows a patient's status to be tracked through a noninvasive approach. In the field of diabetic retinopathy detection, we considered four public domain datasets (DIARETDB0/1, IDRiD, and e-ophtha) as benchmarks. In order to refine the final results, a specialist ophthalmologist manually segmented a random selection of DIARETDB0/1 fundus images that presented exudates. An innovative pipeline of morphological procedures and fuzzy C-means clustering was integrated to extract exudates with a pixel-wise approach. Our methodology was optimized and verified, and the parameters were fine-tuned in order to define suitable values and produce a more accurate segmentation. The method was tested on 100 images, resulting in average sensitivity, specificity, and accuracy of 83.3%, 99.2%, and 99.1%, respectively.
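The fuzzy C-means step at the heart of the pipeline can be sketched on scalar intensities (a minimal 1-D illustration, not the paper's tuned implementation; assumes at least two clusters):

```python
def fcm_1d(values, c=2, m=2.0, iters=50):
    """Fuzzy C-means on scalar pixel intensities (c >= 2).

    Returns cluster centers and the membership matrix u, where u[i][j]
    is the degree to which values[i] belongs to cluster j.
    """
    lo, hi = min(values), max(values)
    # deterministic init: centers spread evenly across the intensity range
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships from distances to the current centers
        for i, x in enumerate(values):
            d = [abs(x - ck) + 1e-9 for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # update each center as the membership-weighted mean
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(values))]
            centers[j] = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return centers, u
```

In the exudate setting, bright pixels end up with high membership in the brightest cluster, and that fuzzy membership map is what the morphological post-processing then refines pixel-wise.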
31
32
Wang H, Yuan G, Zhao X, Peng L, Wang Z, He Y, Qu C, Peng Z. Hard exudate detection based on deep model learned information and multi-feature joint representation for diabetic retinopathy screening. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 191:105398. [PMID: 32092614 DOI: 10.1016/j.cmpb.2020.105398] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Revised: 01/18/2020] [Accepted: 02/14/2020] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Diabetic retinopathy (DR), which is generally diagnosed by the presence of hemorrhages and hard exudates, is one of the most prevalent causes of visual impairment and blindness. Early detection of hard exudates (HEs) in color fundus photographs can help prevent such destructive damage. However, this is a challenging task due to high intra-class diversity and high similarity with other structures in the fundus images. Most existing methods detect HEs using hand-crafted features (HCFs) only, which cannot characterize HEs accurately. Deep learning methods are scarce in this domain because they require large-scale training sets, which are not generally available for most routine medical imaging research. METHODS To address these challenges, we propose a novel methodology for HE detection using a deep convolutional neural network (DCNN) and multi-feature joint representation. Specifically, we present a new optimized mathematical morphological approach that first segments HE candidates accurately. Each candidate is then characterized using combined features based on deep features with HCFs incorporated, implemented through ridge regression-based feature fusion. The HCFs comprise multi-space intensity features, geometric features, a gray-level co-occurrence matrix (GLCM)-based texture descriptor, and a gray-level size zone matrix (GLSZM)-based texture descriptor, while a DCNN automatically learns deep information about the HEs. Finally, a random forest identifies the true HEs among the candidates. RESULTS The proposed method is evaluated on two benchmark databases. It obtains an F-score of 0.8929 with an area under the curve (AUC) of 0.9644 on the e-ophtha database and an F-score of 0.9326 with an AUC of 0.9323 on the HEI-MED database. These results demonstrate that our approach outperforms state-of-the-art methods. Our model also proves suitable for clinical application on private clinical images from a local hospital. CONCLUSIONS This newly proposed method integrates traditional HCFs and deep features learned by a DCNN to detect HEs. It achieves a new state of the art in both HE detection and DR screening. Furthermore, the proposed feature selection and fusion strategy reduces feature dimensionality and improves HE detection performance.
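The joint-representation idea, concatenating hand-crafted and deep features and fitting a ridge model over them, can be sketched as below. Everything here is a stand-in: the feature vectors are random toy data, the dimensions are arbitrary, and the paper's final random forest is replaced by a simple threshold on the fused score.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n = 200
hcf  = rng.normal(size=(n, 4))                # stand-in hand-crafted features (HCFs)
deep = rng.normal(size=(n, 6))                # stand-in deep (DCNN) features
X = np.hstack([np.ones((n, 1)), hcf, deep])   # bias column + joint representation
true_w = rng.normal(size=X.shape[1])
y = (X @ true_w > 0).astype(float)            # toy HE / non-HE labels

w = ridge_fit(X, y, lam=0.1)
acc = ((X @ w > 0.5) == y).mean()             # simple threshold instead of a forest
```

The regularization term `lam` keeps the fusion weights stable when the hand-crafted and deep blocks are correlated, which is the usual motivation for ridge-based fusion over plain least squares.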
Affiliation(s)
- Hui Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
- Guohui Yuan
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
- Xuegong Zhao
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
- Lingbing Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Zhuoran Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
- Yanmin He
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
- Chao Qu
- Department of Ophthalmology, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital, Chengdu 610072, China.
- Zhenming Peng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu, 611731, China.
33
Arsalan M, Baek NR, Owais M, Mahmood T, Park KR. Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa. Sensors (Basel) 2020; 20:E3454. [PMID: 32570943] [PMCID: PMC7349531] [DOI: 10.3390/s20123454]
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based analysis; in contrast, fundus imaging-based diagnosis is considered a low-cost solution. This study focuses on detecting RP from fundus images, a crucial task given the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners diagnose and analyze RP disorders. To segment pigment signs accurately for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), a deep learning-based semantic segmentation network specifically designed to detect and segment pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes and to accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, RPS-Net provides fine segmentation, even for degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed with 4-fold cross-validation on the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis compared with state-of-the-art methods.
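The 4-fold cross-validation protocol used for evaluation above amounts to simple index bookkeeping, sketched below. The sample count is a placeholder; in practice each index would map to one RIPS image and its ground-truth mask.

```python
import numpy as np

def k_fold_indices(n_samples, k=4, seed=0):
    """Shuffle indices and split them into k disjoint train/test folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Toy run: 12 "images" split into 4 folds of 3
splits = list(k_fold_indices(12, k=4))
```

Each image appears in exactly one test fold, so the four per-fold scores can be averaged into the single cross-validated figure typically reported.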
Affiliation(s)
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea; (M.A.); (N.R.B.); (M.O.); (T.M.)

34
Guo S, Wang K, Kang H, Liu T, Gao Y, Li T. Bin loss for hard exudates segmentation in fundus images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.103]
35
Stolte S, Fang R. A survey on medical image analysis in diabetic retinopathy. Med Image Anal 2020; 64:101742. [PMID: 32540699] [DOI: 10.1016/j.media.2020.101742]
Abstract
Diabetic Retinopathy (DR) represents a highly prevalent complication of diabetes in which individuals suffer from damage to the blood vessels in the retina. The disease manifests itself through lesion presence, starting with microaneurysms at the nonproliferative stage, before being characterized by neovascularization in the proliferative stage. Retinal specialists strive to detect DR early so that the disease can be treated before substantial, irreversible vision loss occurs. The level of DR severity indicates the extent of treatment necessary: in mild (early) stages, vision loss may be preventable through effective diabetes management rather than subjecting the patient to invasive laser surgery. Using artificial intelligence (AI), highly accurate and efficient systems can be developed to assist medical professionals in screening and diagnosing DR earlier and without the full resources available in specialty clinics. In particular, deep learning facilitates earlier diagnosis with higher sensitivity and specificity. Such systems make decisions based on minimally handcrafted features and pave the way for personalized therapies. This survey therefore provides a comprehensive description of the current technology used in each step of DR diagnosis. It begins with an introduction to the disease and the technologies and resources available in this space, then discusses the frameworks that different teams have used to detect and classify DR. Ultimately, we conclude that deep learning systems offer revolutionary potential for DR identification and the prevention of vision loss.
Affiliation(s)
- Skylar Stolte
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56 P.O. Box 116131 Gainesville, FL 32611-6131, USA.
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56 P.O. Box 116131 Gainesville, FL 32611-6131, USA.

36
Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors (Basel) 2020; 20:1005. [PMID: 32069912] [PMCID: PMC7071097] [DOI: 10.3390/s20041005]
Abstract
The number of blind people worldwide is estimated to exceed 40 million by 2025, so it is necessary to develop novel algorithms based on fundus image descriptors that allow automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissues. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.
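A minimal 8-neighbour local binary pattern, the texture descriptor named above, might look like the sketch below. This is the basic, non-rotation-invariant variant; the paper's exact LBP configuration (radius, sampling, uniformity mapping) may differ.

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern for the interior pixels of img."""
    # Neighbours visited clockwise from the top-left corner; each sets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= ((neigh >= center).astype(np.uint8) << bit)
    return out

# Tiny example: only the right-hand neighbour (bit 3) exceeds the center
img = np.array([[0, 0, 0],
                [0, 5, 9],
                [0, 0, 0]], dtype=np.int32)
codes = lbp_8(img)
```

Histograms of these codes over local windows give the texture features that are then combined with the granulometric (morphological) profiles for classification.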
37
DMENet: Diabetic Macular Edema diagnosis using Hierarchical Ensemble of CNNs. PLoS One 2020; 15:e0220677. [PMID: 32040475] [PMCID: PMC7010263] [DOI: 10.1371/journal.pone.0220677]
Abstract
Diabetic Macular Edema (DME) is an advanced stage of Diabetic Retinopathy (DR) and can lead to permanent vision loss. It currently affects 26.7 million people globally, and given such a huge number of DME cases and the limited number of ophthalmologists, it is desirable to automate the diagnosis process. Computer-assisted, deep learning based diagnosis could help in early detection, following which precision medication can help to mitigate vision loss. Method: To automate the screening of DME, we propose a novel DMENet algorithm built on the pillars of Convolutional Neural Networks (CNNs). DMENet analyses preprocessed color fundus images and passes them through a two-stage pipeline. The first stage detects the presence or absence of DME, whereas the second stage takes only the positive cases and grades the images based on severity. In both stages, we use a novel Hierarchical Ensemble of CNNs (HE-CNN). This paper uses two popular publicly available datasets, IDRiD and MESSIDOR, for classification. Preprocessing is performed on the images using morphological opening and a Gaussian kernel. The dataset is augmented to solve the class imbalance problem for better performance of the proposed model. Results: The proposed methodology achieved an average accuracy of 96.12%, sensitivity of 96.32%, specificity of 95.84%, and F1 score of 0.9609 on the MESSIDOR and IDRiD datasets. Conclusion: These excellent results establish the validity of the proposed methodology for use in DME screening and solidify the applicability of the HE-CNN classification technique in the domain of biomedical imaging.
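The morphological-opening preprocessing step (erosion followed by dilation) can be sketched in NumPy as below. The 3x3 square structuring element is an illustrative choice, not necessarily the one used in the paper; opening with it removes bright structures smaller than the element while preserving larger regions.

```python
import numpy as np

def _neighborhood_stack(img, k, pad_value):
    """Stack all k*k shifted views of img, padding borders with pad_value."""
    p = k // 2
    padded = np.pad(img, p, constant_values=pad_value)
    h, w = img.shape
    return np.stack([padded[dy:dy + h, dx:dx + w]
                     for dy in range(k) for dx in range(k)])

def opening(img, k=3):
    """Grayscale morphological opening: erosion, then dilation."""
    eroded = _neighborhood_stack(img, k, np.inf).min(axis=0)     # erosion
    opened = _neighborhood_stack(eroded, k, -np.inf).max(axis=0)  # dilation
    return opened

speck = np.zeros((7, 7))
speck[3, 3] = 1.0          # bright speck smaller than the structuring element
block = np.zeros((7, 7))
block[1:5, 1:5] = 1.0      # larger bright region
cleaned = opening(speck)   # the speck is removed
kept = opening(block)      # the block survives intact
```

This size-selectivity is why opening is a common denoising step before feeding fundus images to a classifier: small bright noise disappears while genuine structures remain.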
38
Al Sariera TM, Rangarajan L, Amarnath R. Detection and classification of hard exudates in retinal images. Journal of Intelligent & Fuzzy Systems 2020. [DOI: 10.3233/jifs-190492]
Affiliation(s)
- Lalitha Rangarajan
- Department of Studies in Computer Science, University of Mysore, Mysore, India
- R. Amarnath
- Department of Studies in Computer Science, University of Mysore, Mysore, India

39

40
Diabetic retinopathy detection using red lesion localization and convolutional neural networks. Comput Biol Med 2019; 116:103537. [PMID: 31747632] [DOI: 10.1016/j.compbiomed.2019.103537]
Abstract
Detecting the early signs of diabetic retinopathy (DR) is essential, as timely treatment might reduce or even prevent vision loss. Moreover, automatically localizing the regions of the retinal image that might contain lesions can favorably assist specialists in the task of detection. In this study, we designed a lesion localization model using a deep network patch-based approach. Our goal was to reduce the complexity of the model while improving its performance. For this purpose, we designed an efficient procedure (including two convolutional neural network models) for selecting the training patches, such that the challenging examples would be given special attention during the training process. Using the labeling of the region, a DR decision can be given for the initial image, without the need for special training. The model is trained on the Standard Diabetic Retinopathy Database, Calibration Level 1 (DIARETDB1) and is tested on several databases (including Messidor) without any further adaptation. It reaches an area under the receiver operating characteristic curve of 0.912 (95% CI: 0.897-0.928) for DR screening and a sensitivity of 0.940 (95% CI: 0.921-0.959). These values are competitive with other state-of-the-art approaches.
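The idea of giving challenging examples special attention during patch selection can be illustrated with a simple hardest-k rule: rank candidate patches by their loss under the current model and keep the hardest fraction. This is only a sketch; the paper's two-CNN selection procedure is more elaborate, and the loss values here are toy numbers.

```python
import numpy as np

def select_hard_patches(losses, keep_frac=0.25):
    """Return indices of the patches with the highest loss under the current model."""
    k = max(1, int(len(losses) * keep_frac))
    order = np.argsort(losses)[::-1]   # hardest (highest loss) first
    return order[:k]

# Toy per-patch losses from a hypothetical first-pass model
losses = np.array([0.05, 0.9, 0.2, 0.7, 0.01, 0.4, 0.6, 0.3])
hard = select_hard_patches(losses, keep_frac=0.25)
```

Training the second model on `hard` patches concentrates capacity on the ambiguous regions (e.g. lesion-like vessel crossings) instead of easy background patches.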
41
Javidi M, Harati A, Pourreza H. Retinal image assessment using bi-level adaptive morphological component analysis. Artif Intell Med 2019; 99:101702. [PMID: 31606110] [DOI: 10.1016/j.artmed.2019.07.010]
Abstract
The automated analysis of retinal images is a widely researched area that can help to diagnose several diseases, such as diabetic retinopathy, in their early stages. More specifically, separation of vessels and lesions is critical, as features of these structures are directly related to the diagnosis and treatment of diabetic retinopathy. The complexity of retinal image contents, especially in images with severe diabetic retinopathy, makes detection of the vascular structure and lesions difficult. In this paper, a novel framework based on morphological component analysis (MCA) is presented which benefits from the adaptive representations obtained via dictionary learning. In the proposed Bi-level Adaptive MCA (BAMCA), MCA is extended to deal locally with sparse representation of the retinal images at the patch level, whereas the decomposition process occurs globally at the image level. The BAMCA method, with appropriately offline-learnt dictionaries, is adopted to work on retinal images with severe diabetic retinopathy in order to simultaneously separate vessels and exudate lesions as diagnostically useful morphological components. To obtain the appropriate dictionaries, the K-SVD dictionary learning algorithm is modified to use a gated error, which guides the process toward learning the main structures of the retinal images using vessel or lesion maps. The computational efficiency of the proposed framework is also increased significantly through several improvements, leading to a noticeable reduction in run time. We experimentally show how effective dictionaries can be learnt that help BAMCA successfully separate exudate and vessel components from retinal images, even in severe cases of diabetic retinopathy. In addition to visual qualitative assessment, the performance of the proposed method is quantitatively measured in the framework of vessel and exudate segmentation. The reported experimental results on public datasets demonstrate that the obtained components can be used to achieve competitive results with regard to state-of-the-art vessel and exudate segmentation methods.
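The core MCA idea, separating a signal into components that are each sparse in a different dictionary, can be shown with a one-dimensional toy: a smooth "vessel-like" cosine plus "lesion-like" spikes, separated by alternating a projection onto the smooth atom with hard thresholding of the residual. This is only a caricature of BAMCA, which uses learned patch-level dictionaries rather than fixed atoms.

```python
import numpy as np

n = 64
t = np.arange(n)
smooth = np.cos(2 * np.pi * t / n)           # smooth "vessel-like" component
spikes = np.zeros(n)
spikes[10] = 3.0                             # sparse "lesion-like" spikes
spikes[40] = -2.0
signal = smooth + spikes                     # observed mixture

atom = np.cos(2 * np.pi * t / n)             # dictionary for the smooth part
atom /= np.linalg.norm(atom)

c_spike = np.zeros(n)
for _ in range(20):
    # update the smooth part: project the spike-free residual onto the atom
    c_smooth = ((signal - c_spike) @ atom) * atom
    # update the spiky part: hard-threshold what the smooth part cannot explain
    r = signal - c_smooth
    c_spike = np.where(np.abs(r) > 1.0, r, 0.0)
```

After a few iterations the two components converge to the true decomposition, mirroring how MCA assigns vessels and exudates to their respective dictionaries.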
Affiliation(s)
- Malihe Javidi
- Computer Engineering Department, Quchan University of Technology, Quchan, Iran.
- Ahad Harati
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
- HamidReza Pourreza
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.

42
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320] [DOI: 10.1016/j.media.2019.101561]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach to such large-scale screening efforts. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow the generalizability of algorithms to be tested, which is what distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered in this challenge. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
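For the lesion segmentation sub-challenge, the standard overlap metrics on binary masks, intersection over union (IoU) and Dice, can be computed as below. The toy masks are placeholders for a predicted and a ground-truth lesion map.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True          # 16-pixel ground-truth lesion
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True        # prediction shifted by one pixel each way
```

For this one-pixel shift the overlap is 9 pixels, giving IoU = 9/23 and Dice = 18/32 = 0.5625, which illustrates how sharply both metrics penalize misalignment on small lesions.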
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
- Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Xinhui Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- TianBo Wu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Jing Xiao
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu
- Cleerly Inc., New York, United States
- Ting Zhou
- University at Buffalo, New York, United States
- Janos Toth
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Agnes Baran
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
- Qinhao Chu
- School of Computing, National University of Singapore, Singapore
- Pengcheng Li
- School of Computing, National University of Singapore, Singapore
- Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
- Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Balazs Harangi
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
- Andras Hajdu
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
- Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France

43
Yan Q, Zhao Y, Zheng Y, Liu Y, Zhou K, Frangi A, Liu J. Automated retinal lesion detection via image saliency analysis. Med Phys 2019; 46:4531-4544. [PMID: 31381173] [DOI: 10.1002/mp.13746]
Abstract
BACKGROUND AND OBJECTIVE The detection of abnormalities such as lesions or leakage from retinal images is an important health informatics task for automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concepts of saliency. METHODS Retinal images are first segmented into superpixels; two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. Pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disk, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level from different modalities of retinal images, without the need to tune parameters. RESULTS To evaluate its effectiveness, we applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudate, hemorrhage, microaneurysms, and leakage. The evaluation was undertaken at the pixel, lesion, or image level according to ground truth availability in these datasets. CONCLUSIONS The experimental results show that the proposed method outperforms existing state-of-the-art ones in applicability, effectiveness, and accuracy.
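The uniqueness feature, colour rarity of a superpixel weighted by spatial proximity, can be sketched as below. The colours, positions, and Gaussian bandwidth are toy values; the exact weighting in the paper may differ.

```python
import numpy as np

def uniqueness(colors, positions, sigma=0.25):
    """Uniqueness saliency: colour rarity weighted by spatial proximity."""
    # Spatial proximity weights between every pair of superpixels
    d_pos = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
    w = np.exp(-d_pos ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Squared colour distances, averaged under the proximity weights
    d_col = np.linalg.norm(colors[:, None] - colors[None], axis=-1) ** 2
    return (w * d_col).sum(axis=1)

# Three background-like superpixels and one bright lesion-like outlier
colors = np.array([[0.20], [0.21], [0.19], [0.90]])
positions = np.array([[0.1, 0.1], [0.2, 0.1], [0.1, 0.2], [0.5, 0.5]])
u = uniqueness(colors, positions)
```

The bright outlier receives a much higher uniqueness score than its background neighbours, which is exactly the property the saliency map exploits to flag lesions.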
Affiliation(s)
- Qifeng Yan
- University of Chinese Academy of Sciences, Beijing, 100049, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yalin Zheng
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Eye and Vision Science, University of Liverpool, Liverpool, L7 8TX, UK
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, L39 4QP, UK
- Kang Zhou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Alejandro Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Computing, University of Leeds, Leeds, S2 9JT, UK
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China

44
Playout C, Duval R, Cheriet F. A Novel Weakly Supervised Multitask Architecture for Retinal Lesions Segmentation on Fundus Images. IEEE Transactions on Medical Imaging 2019; 38:2434-2444. [PMID: 30908197] [DOI: 10.1109/tmi.2019.2906319]
Abstract
Obtaining the complete segmentation map of retinal lesions is the first step toward an automated diagnosis tool for retinopathy that is interpretable in its decision-making. However, the limited availability of pixel-level ground truth lesion maps restricts the ability of deep segmentation neural networks to generalize over large databases. In this paper, we propose a novel approach for training a convolutional multi-task architecture with supervised learning and reinforcing it with weakly supervised learning. The architecture is simultaneously trained for three tasks: segmentation of red lesions, segmentation of bright lesions, and lesion detection. In addition, we propose and discuss the advantages of a new preprocessing method that guarantees color consistency between the raw image and its enhanced version. Our complete system produces segmentations of both red and bright lesions. The method is validated at the pixel level and per image using four databases and a cross-validation strategy. When evaluated on the task of screening for the presence or absence of lesions on the Messidor image set, the proposed method achieves an area under the ROC curve of 0.839, comparable with the state of the art.
Collapse
|
45
|
Gong C, Erichson NB, Kelly JP, Trutoiu L, Schowengerdt BT, Brunton SL, Seibel EJ. RetinaMatch: Efficient Template Matching of Retina Images for Teleophthalmology. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1993-2004. [PMID: 31217098 DOI: 10.1109/tmi.2019.2923466] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, the images from such devices generally have a small field of view (FOV) and image quality degradations, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique that combines dimension reduction and mutual information (MI), called RetinaMatch. The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images with the matched target registration errors below 2 pixels on average, excluding the observer variability, outperforming standard template matching solutions. In the application of measuring vessel diameter repeatedly, single pixel errors are expected. In addition, our method can be used in the process of image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
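The mutual-information criterion at the core of a method like RetinaMatch can be sketched with a joint-histogram estimator in numpy. This is a toy illustration, not the paper's implementation; the function name and the bin count are illustrative choices:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grayscale
    patches, estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
patch = rng.random((64, 64))                                        # "template"
noisy = np.clip(patch + 0.05 * rng.standard_normal((64, 64)), 0, 1) # same area, degraded
unrelated = rng.random((64, 64))                                    # different area
```

A matcher maximizes this score over candidate positions: a degraded view of the same retinal area scores far higher than an unrelated patch, which is why MI tolerates the intensity degradations of low-cost imaging devices.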
Collapse
|
46
|
Robust intensity variation and inverse surface adaptive thresholding techniques for detection of optic disc and exudates in retinal fundus images. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
|
47
|
Randive SN, Senapati RK, Rahulkar AD. A review on computer-aided recent developments for automatic detection of diabetic retinopathy. J Med Eng Technol 2019; 43:87-99. [PMID: 31198073 DOI: 10.1080/03091902.2019.1576790] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Diabetic retinopathy is a serious microvascular disorder that can result in vision loss and blindness. It damages the retinal blood vessels and degrades the light-sensitive inner layer of the eye. Manual inspection of retinal fundus images to detect morphological abnormalities such as microaneurysms (MAs), exudates (EXs), haemorrhages (HMs), and intraretinal microvascular abnormalities (IRMA) is a difficult and time-consuming process, so regular follow-up screening and early automatic diabetic retinopathy detection are necessary. This paper discusses various methods for automatic retinopathy detection and for classifying grades based on severity level. In addition, retinal blood vessel detection techniques are discussed for the detection and diagnosis of proliferative diabetic retinopathy. Furthermore, the paper presents a systematic review of various publicly available databases collected from different medical sources. The survey includes a meta-analysis of several methods for diabetic feature extraction, segmentation, and various types of classifiers, evaluated with system performance metrics for the diagnosis of DR. This survey will be helpful for practitioners and researchers who want to focus on enhancing diagnostic systems that would be more powerful in real-life use.
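The performance metrics such surveys compare reduce to a confusion matrix over screening labels. A minimal sketch, using hypothetical labels rather than data from any cited study:

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for a binary DR screen
    (1 = DR present, 0 = no DR)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # diseased, flagged
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # healthy, cleared
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # healthy, flagged
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # diseased, missed
    return {
        "sensitivity": tp / (tp + fn),   # recall on diseased eyes
        "specificity": tn / (tn + fp),   # recall on healthy eyes
        "accuracy": (tp + tn) / len(y_true),
    }

m = screening_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
# → sensitivity 2/3, specificity 2/3, accuracy 2/3
```

For screening, sensitivity is usually the metric weighted most heavily, since a missed diseased eye (false negative) is costlier than an unnecessary referral.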
Collapse
Affiliation(s)
- Santosh Nagnath Randive
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Ranjan K Senapati
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Amol D Rahulkar
- Department of Electrical and Electronics Engineering, National Institute of Technology, Goa, India
| |
Collapse
|
48
|
Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high ratio of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR often presents no early symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment have the ability to control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems were developed in the past to assist ophthalmologists in observing inter- and intra-observer variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed with various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison results indicate that there is still a need for accurate development of CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
Collapse
|
49
|
Wang R, Chen B, Meng D, Wang L. Weakly Supervised Lesion Detection From Fundus Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1501-1512. [PMID: 30530359 DOI: 10.1109/tmi.2018.2885376] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Early diagnosis and continuous monitoring of patients suffering from eye diseases have been major concerns in computer-aided detection. Detecting one or several specific types of retinal lesions has seen significant breakthroughs in computer-aided screening over the past few decades. However, due to the variety of retinal lesions and complex normal anatomical structures, automatically detecting lesions of unknown and diverse types in a retina remains a challenging task. In this paper, a weakly supervised method, requiring only a series of normal and abnormal retinal images without the need to specifically annotate lesion locations and types, is proposed for this task. Specifically, a fundus image is understood as a superposition of background, blood vessels, and background noise (with lesions included for abnormal images). The background is formulated as a low-rank structure after a series of simple preprocessing steps, including spatial alignment, color normalization, and blood vessel removal. Background noise is regarded as a stochastic variable and modeled by a Gaussian for normal images and a mixture of Gaussians for abnormal images, respectively. The proposed method encodes both the background knowledge of fundus images and the background noise into one unified model, and jointly optimizes the model using normal and abnormal images, which fully depicts the low-rank subspace of the background and distinguishes lesions from background noise in abnormal fundus images. Experimental results demonstrate that the proposed method achieves high accuracy and outperforms previous related methods.
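The low-rank background idea in this abstract can be illustrated with a plain SVD sketch: a rank-k approximation of a stack of aligned images absorbs the shared background, and the residual flags lesion candidates. This is a toy approximation, not the paper's full Gaussian/mixture noise model; all names and data are illustrative:

```python
import numpy as np

def lowrank_residual(stack, rank=1):
    """Model aligned fundus images (rows = flattened images) as a shared
    low-rank background; the residual highlights pixels that deviate."""
    u, s, vt = np.linalg.svd(stack, full_matrices=False)
    background = (u[:, :rank] * s[:rank]) @ vt[:rank]  # rank-k approximation
    return stack - background                          # candidate lesion pixels

rng = np.random.default_rng(1)
base = rng.random(100)             # shared "retinal background" signature
stack = np.tile(base, (5, 1))      # five aligned normal images
stack[3, 10] += 5.0                # one bright lesion in image 3, pixel 10
resid = np.abs(lowrank_residual(stack, rank=1))
```

Because the lesion appears in only one image, the rank-1 fit cannot absorb it without hurting the other rows, so the largest residual lands exactly on the lesion pixel.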
Collapse
|
50
|
Enhanced CAE system for detection of exudates and diagnosis of diabetic retinopathy stages in fundus retinal images using soft computing techniques. POLISH JOURNAL OF MEDICAL PHYSICS AND ENGINEERING 2019. [DOI: 10.2478/pjmpe-2019-0018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Diabetic Retinopathy (DR) is one of the leading causes of visual impairment. DR is identified by visual analysis of retinal images for exudates (fat deposits), whose main patterns are traced by ophthalmologists. This paper proposes a fully automated Computer Assisted Evaluation (CAE) system comprising a set of algorithms for exudate detection and for classifying the stages of Diabetic Retinopathy as normal, mild, moderate, or severe. Experimental validation is performed on a real fundus retinal image database. The segmentation of exudates is achieved using fuzzy C-means clustering and entropy filtering. An optimal set of statistical textural features (GLCM and GLHM) is extracted from the segmented exudates for classifying the stages of Diabetic Retinopathy. The stages are classified using three classifiers: Back Propagation Neural Network (BPN), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM). The experimental results show that the SVM classifier outperformed the other classifiers on the examined fundus retinal image dataset. The results confirm that, with a new set of texture features, the proposed methodology provides better performance than other methods available in the literature. These results suggest that the proposed method can be useful as a diagnostic aid system for Diabetic Retinopathy.
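The fuzzy C-means step used here for exudate segmentation can be sketched on toy 1-D pixel intensities. The data, initialization, and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on 1-D intensities: soft memberships u (c x n)
    and cluster centres v, letting bright exudate-like pixels separate softly."""
    v = np.linspace(x.min(), x.max(), c)            # spread initial centres
    for _ in range(iters):
        d = np.abs(x[None, :] - v[:, None]) + 1e-9  # point-to-centre distances
        # standard membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        v = (u ** m @ x) / (u ** m).sum(axis=1)     # membership-weighted centres
    return u, v

rng = np.random.default_rng(2)
x = np.concatenate([0.1 + 0.01 * rng.standard_normal(30),   # dark background
                    0.9 + 0.01 * rng.standard_normal(30)])  # bright "exudates"
u, v = fuzzy_cmeans(x)
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is useful near faint exudate boundaries; thresholding the "bright" membership yields the segmentation mask that the entropy filtering would then refine.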
Collapse
|