1. Liang X, Wen H, Duan Y, He K, Feng X, Zhou G. Nonproliferative diabetic retinopathy dataset (NDRD): a database for diabetic retinopathy screening research and deep learning evaluation. Health Informatics J 2024;30:14604582241259328. [PMID: 38864242] [DOI: 10.1177/14604582241259328]
Abstract
OBJECTIVES: In this article, we provide a database of nonproliferative diabetic retinopathy (NDRD), focused on early-stage disease with hard exudates, and explore its clinical application in lesion recognition. METHODS: We collected images of nonproliferative diabetic retinopathy captured with an Optos Panoramic 200 scanning laser ophthalmoscope, filtered out images of poor quality, and annotated the hard exudative lesions under the guidance of professional medical personnel. To validate the dataset, five deep learning models were trained and evaluated on it using standard metrics. RESULTS: Lesions in nonproliferative diabetic retinopathy are smaller than those in proliferative retinopathy and harder to identify. Existing segmentation models perform poorly on these lesions, while a model targeting small lesions reached an intersection over union (IoU) of 66.12%, higher than ordinary lesion segmentation models but still leaving substantial room for improvement. CONCLUSION: Segmenting small hard exudative lesions is more challenging than segmenting large ones, and more targeted datasets are needed for model training. Compared with previous diabetic retinopathy datasets, the NDRD dataset pays more attention to micro-lesions.
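The intersection over union (IoU) figure cited above compares a predicted lesion mask with the ground-truth annotation. A minimal sketch of the metric (the masks below are toy values, not NDRD data):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary lesion masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy 2x3 masks: 2 pixels overlap out of 6 in the union.
pred = np.array([[1, 1, 0],
                 [0, 1, 1]])
target = np.array([[1, 1, 1],
                   [1, 0, 0]])
score = iou(pred, target)  # 2/6
```

For small lesions a handful of mispredicted pixels moves this ratio sharply, which is one reason small-lesion IoU lags behind that of larger lesions.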
Affiliation(s)
- Xing Liang, Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Haiqi Wen, School of Software, Taiyuan University of Technology, Taiyuan, China
- Yajian Duan, Department of Ophthalmology, Shanxi Bethune Hospital, Taiyuan, China
- Kan He, School of Mathematics, Taiyuan University of Technology, Taiyuan, China
- Xiufang Feng, School of Software, Taiyuan University of Technology, Taiyuan, China
- Guohong Zhou, Department of Ophthalmology, Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, China
2. Manan MA, Jinchao F, Khan TM, Yaqub M, Ahmed S, Chuhan IS. Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy. Microsc Res Tech 2023;86:1443-1460. [PMID: 37194727] [DOI: 10.1002/jemt.24345]
Abstract
Exudates are a common sign of diabetic retinopathy, a disease that affects the blood vessels in the retina. Early detection of exudates through continuous screening and treatment is critical to avoiding vision problems. In traditional clinical practice, lesions are detected manually from fundus photographs; this task is cumbersome and time-consuming and requires intense effort because of the small size of the lesions and the low contrast of the images. Computer-assisted diagnosis of retinal disease based on automatic lesion detection has therefore been actively explored. In this paper, we present a comparison of deep convolutional neural network (CNN) architectures and propose a residual CNN with residual skip connections that reduces the number of parameters for the semantic segmentation of exudates in retinal images. A suitable image augmentation technique is used to improve the performance of the network. The proposed network robustly segments exudates with high accuracy, which makes it suitable for diabetic retinopathy screening. A comparative performance analysis on three benchmark databases (E-ophtha, DIARETDB1, and Hamilton Ophthalmology Institute's Macular Edema) is presented: the proposed method achieves a precision of 0.95, 0.92, and 0.97; accuracy of 0.98, 0.98, and 0.98; sensitivity of 0.97, 0.95, and 0.95; specificity of 0.99, 0.99, and 0.99; and area under the curve of 0.97, 0.94, and 0.96, respectively. RESEARCH HIGHLIGHTS: The research focuses on the detection and segmentation of exudates in diabetic retinopathy. Early detection of exudates is important to avoid vision problems and requires continuous screening; manual detection is time-consuming and requires intense effort.
The authors compare qualitative results of state-of-the-art CNN architectures and propose a computer-assisted diagnosis approach based on deep learning, using a residual CNN with residual skip connections to reduce parameters. The proposed method is evaluated on three benchmark databases and demonstrates high accuracy and suitability for diabetic retinopathy screening.
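The precision, accuracy, sensitivity, and specificity figures above all derive from the four counts of a binary confusion matrix. A minimal sketch with hypothetical pixel counts (not the paper's data):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Screening metrics from binary confusion-matrix counts."""
    precision = tp / (tp + fp)            # of flagged pixels, how many are lesion
    sensitivity = tp / (tp + fn)          # recall: fraction of lesion pixels found
    specificity = tn / (tn + fp)          # fraction of healthy pixels passed
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, sensitivity, specificity, accuracy

# Hypothetical counts for one image:
p, se, sp, acc = binary_metrics(tp=95, fp=5, tn=990, fn=10)
```

Note that when healthy pixels vastly outnumber lesion pixels, accuracy and specificity stay high almost automatically; precision and sensitivity are the more informative numbers for exudate segmentation.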
Affiliation(s)
- Malik Abdul Manan, Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Feng Jinchao, Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Tariq M Khan, School of IT, Deakin University, Waurn Ponds, Australia
- Muhammad Yaqub, Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shahzad Ahmed, Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Imran Shabir Chuhan, Interdisciplinary Research Institute, Faculty of Science, Beijing University of Technology, Beijing, China
3. Kaur J, Mittal D, Malebary S, Nayak SR, Kumar D, Kumar M, Gagandeep, Singh S. Automated detection and segmentation of exudates for the screening of background retinopathy. J Healthc Eng 2023;2023:4537253. [PMID: 37483301] [PMCID: PMC10361834] [DOI: 10.1155/2023/4537253]
Abstract
Exudates, asymptomatic yellow deposits on the retina, are among the primary characteristics of background diabetic retinopathy, a complication of the chronically high blood sugar levels of diabetes, which slowly affect all the organs of the body. Early detection of exudates aids doctors in screening patients for background diabetic retinopathy. The computer-aided method proposed in the present work detects and then segments exudates in retinal images acquired with a digital fundus camera by (i) tracing the contours of exudates with a gradient method, (ii) marking connected candidate pixels to remove false exudate pixels, and (iii) linking edge pixels to extract exudate boundaries. The method is tested on 1307 retinal fundus images with varying characteristics: 649 images acquired from a hospital and the remaining 658 from the open-source benchmark databases STARE, DRIVE, MESSIDOR, DiaretDB1, and e-Ophtha. At the image level, the proposed exudate segmentation method achieves (i) accuracy of 98.04%, (ii) sensitivity of 95.345%, and (iii) specificity of 98.63%; at the exudate level, it averages (i) accuracy of 95.68%, (ii) sensitivity of 93.44%, and (iii) specificity of 97.22%. The strong combined performance at image- and exudate-based evaluations demonstrates the method's value for mass screening and for the treatment process of background diabetic retinopathy.
Affiliation(s)
- Jaskirat Kaur, Department of Electronics and Communication Engineering, Punjab Engineering College (Deemed to be University), Sector 12, Chandigarh 160012, India
- Deepti Mittal, Electrical and Instrumentation Engineering Department, Thapar Institute of Engineering and Technology, Patiala 147004, India
- Sharaf Malebary, Department of Information Technology, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21911, Saudi Arabia
- Soumya Ranjan Nayak, School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
- Devendra Kumar, Department of Computer Science, Wachemo University, Hosaena, Ethiopia
- Manoj Kumar, Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai Knowledge Park, UAE; MEU Research Unit, Middle East University, Amman 11831, Jordan
- Gagandeep, Computer Science Engineering Department, Chandigarh Engineering College, Mohali, India
- Simrandeep Singh, Electronics and Communication Engineering Department, UCRD, Chandigarh University, Mohali, India
4. Ishtiaq U, Abdullah ERMF, Ishtiaque Z. A hybrid technique for diabetic retinopathy detection based on ensemble-optimized CNN and texture features. Diagnostics (Basel) 2023;13:1816. [PMID: 37238304] [DOI: 10.3390/diagnostics13101816]
Abstract
One of the most prevalent chronic conditions that can result in permanent vision loss is diabetic retinopathy (DR), which occurs in five stages: no DR, and mild, moderate, severe, and proliferative DR. Early detection of DR is essential for preventing vision loss in diabetic patients. In this paper, we propose a method for detecting and classifying DR stages to determine whether patients are in one of the non-proliferative stages or in the proliferative stage. The proposed classification method rests on a hybrid approach of image preprocessing and ensemble features. We created a convolutional neural network (CNN) model from scratch for this study; combining Local Binary Patterns (LBP) with deep learning features yielded the ensemble feature vector, which was then optimized using the Binary Dragonfly Algorithm (BDA) and the Sine Cosine Algorithm (SCA) and fed to machine learning classifiers. The SVM classifier achieved the highest classification accuracy, 98.85%, on a publicly available dataset, Kaggle EyePACS. Rigorous testing and comparison with state-of-the-art approaches in the literature indicate the effectiveness of the proposed methodology.
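The Local Binary Patterns half of the ensemble feature vector can be illustrated with a basic 8-neighbour LBP code for a single pixel. This is a generic textbook sketch, not necessarily the authors' exact LBP variant:

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """8-neighbour LBP code for the centre pixel of a 3x3 patch.

    Neighbours are read clockwise from the top-left corner; each
    neighbour whose intensity is >= the centre contributes one bit.
    """
    centre = patch[1, 1]
    clockwise = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [int(patch[r, c] >= centre) for r, c in clockwise]
    return sum(bit << i for i, bit in enumerate(bits))

patch = np.array([[5, 4, 3],
                  [2, 3, 6],
                  [1, 9, 8]])
code = lbp_code(patch)  # one texture descriptor value for this pixel
```

A histogram of these codes over an image region is the usual LBP texture feature, which the paper concatenates with CNN features before optimization.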
Affiliation(s)
- Uzair Ishtiaq, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia; Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari 61100, Pakistan
- Erma Rahayu Mohd Faizal Abdullah, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Zubair Ishtiaque, Department of Analytical, Biopharmaceutical and Medical Sciences, Atlantic Technological University, H91 T8NW Galway, Ireland
5. CLC-Net: contextual and local collaborative network for lesion segmentation in diabetic retinopathy images. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.013]
6. Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023;56:915-964. [PMID: 35498558] [PMCID: PMC9038999] [DOI: 10.1007/s10462-022-10185-6]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in cases of diabetic retinopathy (DR), one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to grow, the number of qualified ophthalmologists needs to increase in tandem to meet the screening demand, which makes it pertinent to automate DR detection. A computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. This review therefore summarizes, classifies, and analyzes recent developments in automated DR detection from fundus images published from 2015 to date, with particular attention to studies that deploy machine learning algorithms. First, a comprehensive state-of-the-art review of DR detection methods is presented, focusing on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in DR detection; models that deploy CNNs are further analyzed and classified according to important properties of their architectures.
A total of 150 research articles published in the past 5 years are covered, providing a comprehensive overview of the latest developments in the detection of DR.
Affiliation(s)
- Ganeshsree Selvachandran, Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Shio Gai Quek, Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Raveendran Paramesran, Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
- Weiping Ding, School of Information Science and Technology, Nantong University, Nantong 226019, People's Republic of China
- Le Hoang Son, VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
7. Multi-scale multi-instance multi-feature joint learning broad network (M3JLBN) for gastric intestinal metaplasia subtype classification. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108960]
8. Lin KY, Urban G, Yang MC, Lee LC, Lu DW, Alward WLM, Baldi P. Accurate identification of the trabecular meshwork under gonioscopic view in real time using deep learning. Ophthalmol Glaucoma 2022;5:402-412. [PMID: 34798322] [DOI: 10.1016/j.ogla.2021.11.003]
Abstract
PURPOSE: Accurate identification of iridocorneal structures on gonioscopy is difficult to master, and errors can lead to grave surgical complications. This study aimed to develop and train convolutional neural networks (CNNs) to accurately identify the trabecular meshwork (TM) in gonioscopic videos in real time for eventual clinical integration. DESIGN: Cross-sectional study. PARTICIPANTS: Adult patients with open angles were identified in academic glaucoma clinics in Taipei, Taiwan, and Irvine, California. METHODS: Neural encoder-decoder CNNs (U-nets) were trained to predict a curve marking the TM using an expert-annotated dataset of 378 gonioscopy images. The model was trained and evaluated with stratified cross-validation grouped by patient to ensure uncorrelated training and testing sets, as well as on a separate test set and 3 intraoperative gonioscopic videos of ab interno trabeculotomy with Trabectome (totaling 90 seconds at 30 frames per second). We also compared the model's accuracy against that of ophthalmologists. MAIN OUTCOME MEASURES: Development of real-time-capable CNNs that accurately predict and mark the TM's position in video frames of gonioscopic views, evaluated against human expert annotations of static images and video data. RESULTS: The best CNN model produced test set predictions with a median deviation of 0.8% of the video frame's height (15.25 μm) from the human experts' annotations, less than the average vertical height of the TM. The worst test frame prediction of this model had an average deviation of 4% of the frame height (76.28 μm), which is still considered a successful prediction. When challenged with unseen images, the CNN model scored more than 2 standard deviations above the mean performance of the surveyed general ophthalmologists.
CONCLUSIONS: Our CNN model can identify the TM in gonioscopy videos in real time with remarkable accuracy, allowing it to be used with a video camera intraoperatively. The model can find applications in surgical training, automated screening, and intraoperative guidance. The dataset developed in this study is one of the first publicly available gonioscopy image banks (https://lin.hs.uci.edu/research), which may encourage future investigation of this topic.
Affiliation(s)
- Ken Y Lin, Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, California; Department of Biomedical Engineering, University of California, Irvine, California
- Gregor Urban, Department of Computer Science, University of California, Irvine, California
- Michael C Yang, Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, California
- Lung-Chi Lee, Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Da-Wen Lu, Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Wallace L M Alward, Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, Iowa
- Pierre Baldi, Department of Biomedical Engineering, University of California, Irvine, California; Department of Computer Science, University of California, Irvine, California
9. Andersen JKH, Hubel MS, Rasmussen ML, Grauslund J, Savarimuthu TR. Automatic detection of abnormalities and grading of diabetic retinopathy in 6-field retinal images: integration of segmentation into classification. Transl Vis Sci Technol 2022;11:19. [PMID: 35731541] [PMCID: PMC9233290] [DOI: 10.1167/tvst.11.6.19]
Abstract
Purpose: Classification of diabetic retinopathy (DR) is traditionally based on a severity grade given by the most advanced lesion, potentially leaving out information relevant for risk stratification. In this study, we aimed to develop a deep learning model able to individually segment seven different DR lesions, in order to test whether this would improve a subsequently developed classification model. Methods: First, manual segmentations of 34,075 DR lesions were used to construct a segmentation model, whose performance was compared to that of a retinal specialist. Second, we constructed a 5-step classification model using a dataset of 31,325 expert-annotated 6-field retinal images and evaluated whether performance improved when the segmentation model's output was supplied as presegmentation. Results: The segmentation model had a higher average sensitivity across all abnormalities than the retinal expert (0.68 vs. 0.62) at a comparable average F1-score (0.60 vs. 0.62). Model sensitivity for microaneurysms, retinal hemorrhages, and intraretinal microvascular abnormalities was higher by 42.5%, 8.8%, and 67.5%, and F1-scores by 15.8%, 6.5%, and 12.5%, respectively. With presegmentation included, grading performance increased by 29.7%, 6.0%, and 4.5% for average per-class accuracy, quadratic weighted kappa, and multiclass macro area under the curve, reaching 70.4%, 0.90, and 0.92, respectively. Conclusions: The segmentation model matched an expert in detecting retinal abnormalities, and presegmentation substantially improved the accuracy of the automated classification model. Translational Relevance: Presegmentation may yield more accurate automated DR grading models and increase interpretability and trust in model decisions.
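Quadratic weighted kappa, one of the grading metrics reported above, penalizes a misgrading by the square of its ordinal distance from the true grade. A minimal sketch for 5-step DR grades, using toy labels rather than the study's data:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Cohen's kappa with quadratic weights for ordinal grades 0..n_classes-1."""
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    grades = np.arange(n_classes)
    # disagreement weight grows with the square of the ordinal distance
    weights = (grades[:, None] - grades[None, :]) ** 2 / (n_classes - 1) ** 2
    # expected confusion matrix under independence of the marginals
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

perfect = quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4])
off_by_one = quadratic_weighted_kappa([0, 0, 1, 2, 3, 4], [0, 1, 1, 2, 3, 4])
```

A perfect grader scores 1.0, and a single one-step misgrading costs little, reflecting the metric's tolerance for near-misses on an ordinal scale.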
Affiliation(s)
- Jakob K H Andersen, The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Martin S Hubel, The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
- Malin L Rasmussen, Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund, Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Steno Diabetes Center Odense, Odense University Hospital, Odense, Denmark
- Thiusius R Savarimuthu, The Maersk Mc-Kinney Moeller Institute, SDU Robotics, University of Southern Denmark, Odense, Denmark
10. Das D, Biswas SK, Bandyopadhyay S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimed Tools Appl 2022;81:25613-25655. [PMID: 35342328] [PMCID: PMC8940593] [DOI: 10.1007/s11042-022-12642-4]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes mellitus (DM) that causes vision problems and blindness through damage to the human retina. According to statistics, 80% of patients who have lived with diabetes for 15 to 20 years suffer from DR, making it a serious threat to health and life. Manual diagnosis of the disease is feasible but cumbersome and overwhelming, so early, automated recognition and diagnosis are needed to prevent DR from progressing to severe stages and causing blindness. Innumerable machine learning (ML) models, with various feature extraction techniques for early DR detection, have been proposed by researchers across the globe. However, traditional ML models have shown either meagre generalization in feature extraction and classification when deployed on smaller datasets, or long training times that make prediction inefficient on larger datasets. Hence deep learning (DL), a subfield of ML, has been introduced: DL models can handle smaller datasets with the help of efficient data processing techniques, though they generally rely on larger datasets to let their deep architectures enhance feature extraction and image classification performance. This paper gives a detailed review of DR, its features and causes, ML models, state-of-the-art DL models, challenges, comparisons, and future directions for the early detection of DR.
Affiliation(s)
- Dolly Das, National Institute of Technology Silchar, Cachar, Assam, India
11. Shaik NS, Cherukuri TK. Hinge attention network: a joint model for diabetic retinopathy severity grading. Appl Intell 2022. [DOI: 10.1007/s10489-021-03043-5]
12. Shekar S, Satpute N, Gupta A. Review on diabetic retinopathy with deep learning methods. J Med Imaging (Bellingham) 2021;8:060901. [PMID: 34859116] [DOI: 10.1117/1.jmi.8.6.060901]
Abstract
Purpose: We review the existing literature on diabetic retinopathy (DR) recognition with deep learning (DL) and machine learning (ML) techniques, and address the difficulties posed by the various datasets used for DR. Approach: DR is a progressive illness that can lead to vision loss, so early identification of DR lesions is helpful and prevents damage to the retina. This is a complex task, however, because early DR is symptomless and traditional approaches require ophthalmologists. Recently, automated DR identification based on image processing, ML, and DL has been reported. We analyze the recent literature and provide a comparative study that also covers its limitations and directions for future work. Results: A comparative analysis of the databases used, the performance metrics employed, and the ML and DL techniques recently adopted for DR detection, organized by DR feature, is presented. Conclusion: Our review discusses the methods employed in DR detection along with the technical and clinical challenges encountered, which is missing from existing reviews, as well as future scope to assist researchers in the field of retinal imaging.
Affiliation(s)
- Shreya Shekar, Department of Electronics and Telecommunication Engineering, College of Engineering Pune, Pune, Maharashtra, India
- Nitin Satpute, Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Aditya Gupta, Department of Electronics and Telecommunication Engineering, College of Engineering Pune, Pune, Maharashtra, India
13. Attiku Y, He Y, Nittala MG, Sadda SR. Current status and future possibilities of retinal imaging in diabetic retinopathy care applicable to low- and medium-income countries. Indian J Ophthalmol 2021;69:2968-2976. [PMID: 34708731] [PMCID: PMC8725126] [DOI: 10.4103/ijo.ijo_1212_21]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness among adults and the numbers are projected to rise. There have been dramatic advances in the field of retinal imaging since the first fundus image was captured by Jackman and Webster in 1886. The currently available imaging modalities in the management of DR include fundus photography, fluorescein angiography, autofluorescence imaging, optical coherence tomography, optical coherence tomography angiography, and near-infrared reflectance imaging. These images are obtained using traditional fundus cameras, widefield fundus cameras, handheld fundus cameras, or smartphone-based fundus cameras. Fluorescence lifetime ophthalmoscopy, adaptive optics, multispectral and hyperspectral imaging, and multicolor imaging are the evolving technologies which are being researched for their potential applications in DR. Telemedicine has gained popularity in recent years as remote screening of DR has been made possible. Retinal imaging technologies integrated with artificial intelligence/deep-learning algorithms will likely be the way forward in the screening and grading of DR. We provide an overview of the current and upcoming imaging modalities which are relevant to the management of DR.
Affiliation(s)
- Yamini Attiku, Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, California
- Ye He, Doheny Image Reading Center, Doheny Eye Institute, Los Angeles, California; Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- SriniVas R Sadda, Doheny Image Reading Center, Doheny Eye Institute; Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
14. Huang C, Zong Y, Ding Y, Luo X, Clawson K, Peng Y. A new deep learning approach for the retinal hard exudates detection based on superpixel multi-feature extraction and patch-based CNN. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.145]
15. Kurilová V, Goga J, Oravec M, Pavlovičová J, Kajan S. Support vector machine and deep-learning object detection for localisation of hard exudates. Sci Rep 2021;11:16045. [PMID: 34362989] [PMCID: PMC8346563] [DOI: 10.1038/s41598-021-95519-0]
Abstract
Hard exudates are one of the main clinical findings in the retinal images of patients with diabetic retinopathy. Detecting them early significantly impacts the treatment of the underlying disease; therefore, there is a need for highly reliable automated systems. We propose a novel method for identifying and localising hard exudates in retinal images: to achieve fast image pre-scanning, a support vector machine (SVM) classifier is combined with a faster region-based convolutional neural network (faster R-CNN) object detector for the localisation of exudates. Rapid pre-scanning filters out exudate-free samples using a feature vector extracted from the pre-trained ResNet-50 network; the remaining samples are then processed by the faster R-CNN detector for detailed analysis. When evaluating all the exudates as individual objects, the SVM classifier reduced the false positive rate by 29.7% while marginally increasing the false negative rate by 16.2%. When evaluating whole images, we recorded a 50% reduction in the false positive rate without any increase in the number of false negatives. These interim results suggest that pre-scanning samples with the SVM before running the deep-network object detector can simultaneously improve and speed up hard exudate detection, especially when there is a paucity of training data.
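The two-stage design described above (a fast SVM pre-scan followed by a slower detector on the surviving samples) can be sketched as follows. The feature vectors, cluster parameters, and stage-2 stub are hypothetical stand-ins for the paper's ResNet-50 features and faster R-CNN:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for ResNet-50 feature vectors of image patches:
# exudate-like patches cluster away from healthy ones in feature space.
healthy = rng.normal(0.0, 0.5, size=(200, 16))
exudate = rng.normal(2.0, 0.5, size=(200, 16))
X = np.vstack([healthy, exudate])
y = np.array([0] * 200 + [1] * 200)

# Stage 1: a fast SVM pre-scan trained to discard exudate-free patches.
prescan = SVC(kernel="rbf").fit(X, y)

def cascade(patches: np.ndarray, detector) -> list:
    """Run the slow stage-2 detector only on patches the SVM keeps."""
    kept = patches[prescan.predict(patches) == 1]
    return [detector(p) for p in kept]

# Hypothetical stage-2 stub; the paper uses a faster R-CNN here.
new_patches = rng.normal(2.0, 0.5, size=(10, 16))
boxes = cascade(new_patches, detector=lambda p: "bbox")
```

Because the expensive detector runs only on patches the SVM keeps, exudate-free samples never reach the slow stage, which is where both the speed-up and the false-positive reduction come from.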
Affiliation(s)
- Veronika Kurilová, Jozef Goga, Miloš Oravec, Jarmila Pavlovičová, Slavomír Kajan: Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
16
Deep learning for diabetic retinopathy detection and classification based on fundus images: A review. Comput Biol Med 2021; 135:104599. [PMID: 34247130] [DOI: 10.1016/j.compbiomed.2021.104599]
Abstract
Diabetic Retinopathy is a retinal disease caused by diabetes mellitus and one of the leading causes of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets widely used by the research community and the preprocessing techniques that accelerate and improve model performance, to the development of deep learning models for the diagnosis and grading of the disease and the localization of its lesions. We also discuss models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
17
A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2021.06.006]
18
Ashir AM, Ibrahim S, Abdulghani M, Ibrahim AA, Anwar MS. Diabetic Retinopathy Detection Using Local Extrema Quantized Haralick Features with Long Short-Term Memory Network. Int J Biomed Imaging 2021; 2021:6618666. [PMID: 33953736] [PMCID: PMC8068542] [DOI: 10.1155/2021/6618666]
Abstract
Diabetic retinopathy is one of the leading diseases affecting the eyes. Without early detection and treatment, it can lead to total blindness in the affected eyes. Recently, numerous researchers have attempted to produce automatic diabetic retinopathy detection techniques to support diagnosis and early treatment of diabetic retinopathy symptoms. In this manuscript, a new approach is proposed. The approach utilizes features extracted from the fundus image using local extrema information with quantized Haralick features. The quantized features encode not only the textural Haralick features but also exploit the multiresolution information of numerous symptoms in diabetic retinopathy. A Long Short-Term Memory network together with the local extrema pattern provides a probabilistic approach for analyzing each segment of the image with higher precision, which helps to suppress false positive occurrences. The proposed approach analyzes the retinal vasculature and hard-exudate symptoms of diabetic retinopathy on two different public datasets. The experimental results, evaluated using performance metrics such as specificity, accuracy, and sensitivity, reveal promising indices. Similarly, comparison with related state-of-the-art studies highlights the validity of the proposed method, which performs better than most of the methods used for comparison.
Affiliation(s)
- Abubakar M. Ashir, Mohammed Abdulghani, Mohammed S. Anwar: Department of Computer Engineering, Tishk International University, Erbil, KRD, Iraq
- Salisu Ibrahim: Department of Mathematics Education, Tishk International University, Erbil, KRD, Iraq
19
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268] [DOI: 10.1016/j.jfo.2020.08.009]
Abstract
Diabetic retinopathy (DR) is a disease whose prevalence is driven by the rapid spread of diabetes worldwide, and it can blind diabetic individuals. Early detection of DR is essential for timely treatment and preservation of vision. DR can be detected manually by an ophthalmologist, who examines retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system using artificial intelligence can easily perform this function, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhages, and microaneurysms, and points out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal, G Sun, S Mazhar: Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
20
Qummar S, Khan FG, Shah S, Khan A, Din A, Gao J. Deep Learning Techniques for Diabetic Retinopathy Detection. Curr Med Imaging 2021; 16:1201-1213. [DOI: 10.2174/1573405616666200213114026]
Abstract
Diabetes occurs due to excess glucose in the blood and may affect many organs of the body. Elevated blood sugar causes many problems, including Diabetic Retinopathy (DR), which occurs due to damage to the blood vessels in the retina. The manual detection of DR by ophthalmologists is complicated and time-consuming; therefore, automatic detection is required, and recently different machine and deep learning techniques have been applied to detect and classify DR. In this paper, we survey the various techniques available in the literature for the identification/classification of DR, examine the strengths and weaknesses of the available datasets for each method, and provide future directions. Moreover, we also discuss the different steps of detection, namely segmentation of the blood vessels in the retina, detection of lesions, and other abnormalities of DR.
Affiliation(s)
- Sehrish Qummar, Fiaz Gul Khan, Sajid Shah, Ahmad Khan, Ahmad Din: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Jinfeng Gao: Department of Information Engineering, Huanghuai University, Henan, China
21
Wang J, Bai Y, Xia B. Simultaneous Diagnosis of Severity and Features of Diabetic Retinopathy in Fundus Photography Using Deep Learning. IEEE J Biomed Health Inform 2020; 24:3397-3407. [PMID: 32750975] [DOI: 10.1109/jbhi.2020.3012547]
Abstract
Deep learning methods for diabetic retinopathy (DR) diagnosis are often criticized for lacking interpretability in the diagnostic result, which limits their application in the clinic. Simultaneous prediction of DR-related features during DR severity diagnosis can resolve this issue by providing supporting evidence (i.e., DR-related features) for the diagnostic result (i.e., DR severity). In this study, we propose a hierarchical multi-task deep learning framework for simultaneous diagnosis of DR severity and DR-related features in fundus images. A hierarchical structure is introduced to incorporate the causal relationship between DR-related features and DR severity levels. In the experiments, the proposed approach was evaluated on two independent testing sets using the quadratic weighted Cohen's kappa coefficient, receiver operating characteristic analysis, and precision-recall analysis. A grader study was also conducted to compare the performance of the proposed approach with those of general ophthalmologists with different levels of experience. The results demonstrate that the proposed approach improves performance for both DR severity diagnosis and DR-related feature detection compared with traditional deep learning-based methods. It achieves performance close to that of general ophthalmologists with five years of experience when diagnosing DR severity levels, and of general ophthalmologists with ten years of experience for referable DR detection.
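The quadratic weighted Cohen's kappa used for evaluation here penalizes grade disagreements by their squared distance on the ordinal DR scale. A small self-contained implementation (a sketch, not the paper's code) might look like:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa for ordinal grades 0..n_classes-1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed confusion matrix O.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: 0 on the diagonal, 1 in the corners.
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    W = (i - j) ** 2 / (n_classes - 1) ** 2
    # Expected matrix E from the marginal grade histograms.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# Example: perfect agreement gives kappa = 1; systematic off-by-one grading
# gives a value below 1.
k = quadratic_weighted_kappa([0, 0, 1, 2, 4], [0, 1, 1, 2, 3], n_classes=5)
```

Perfect agreement yields 1, chance-level agreement 0, and fully reversed grades a negative value, which is why the metric suits ordinal DR severity levels better than plain accuracy.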
22
Romero-Oraá R, García M, Oraá-Pérez J, López-Gálvez MI, Hornero R. Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy. Sensors (Basel) 2020; 20:E6549. [PMID: 33207825] [PMCID: PMC7698181] [DOI: 10.3390/s20226549]
Abstract
Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
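The per-pixel figures reported above (PPVp, SEp) follow directly from pixel-level confusion counts between the predicted and reference lesion masks. A minimal sketch for binary masks (an illustration of the metric definitions, not the paper's evaluation code):

```python
import numpy as np

def per_pixel_metrics(pred_mask, true_mask):
    """Return (PPVp, SEp, accuracy) from binary lesion masks (1 = lesion pixel)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.sum(pred & true)    # lesion pixels correctly flagged
    fp = np.sum(pred & ~true)   # background flagged as lesion
    fn = np.sum(~pred & true)   # missed lesion pixels
    tn = np.sum(~pred & ~true)  # background correctly ignored
    ppv = tp / (tp + fp) if tp + fp else 0.0  # positive predictive value
    se = tp / (tp + fn) if tp + fn else 0.0   # sensitivity (recall)
    acc = (tp + tn) / pred.size
    return ppv, se, acc

# Example on tiny 2x2 masks.
ppv, se, acc = per_pixel_metrics([[1, 1], [0, 0]], [[1, 0], [0, 1]])
```

Per-image accuracy (ACCi) is the same idea applied at image granularity: an image counts as positive if it contains at least one detected lesion.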
Affiliation(s)
- Roberto Romero-Oraá, María García, Javier Oraá-Pérez, María I. López-Gálvez, Roberto Hornero: Biomedical Engineering Group, Universidad de Valladolid, 47011 Valladolid, Spain
- Roberto Romero-Oraá, María García, María I. López-Gálvez, Roberto Hornero: Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), 28029 Madrid, Spain
- María I. López-Gálvez: Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, 47003 Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, 47011 Valladolid, Spain
- Roberto Hornero: Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, 47011 Valladolid, Spain
23
Araújo T, Aresta G, Mendonça L, Penas S, Maia C, Carneiro Â, Mendonça AM, Campilho A. DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images. Med Image Anal 2020; 63:101715. [DOI: 10.1016/j.media.2020.101715]
24
Wang H, Yuan G, Zhao X, Peng L, Wang Z, He Y, Qu C, Peng Z. Hard exudate detection based on deep model learned information and multi-feature joint representation for diabetic retinopathy screening. Comput Methods Programs Biomed 2020; 191:105398. [PMID: 32092614] [DOI: 10.1016/j.cmpb.2020.105398]
Abstract
BACKGROUND AND OBJECTIVE Diabetic retinopathy (DR), which is generally diagnosed by the presence of hemorrhages and hard exudates, is one of the most prevalent causes of visual impairment and blindness. Early detection of hard exudates (HEs) in color fundus photographs can help in preventing such destructive damage. However, this is a challenging task due to high intra-class diversity and high similarity with other structures in the fundus images. Most existing methods for detecting HEs characterize them using handcrafted features (HCFs) only, which cannot characterize HEs accurately. Deep learning methods are scarce in this domain because they require large-scale sample sets for training, which are not generally available for most routine medical imaging research. METHODS To address these challenges, we propose a novel methodology for HE detection using a deep convolutional neural network (DCNN) and multi-feature joint representation. Specifically, we present a new optimized mathematical morphological approach that first segments HE candidates accurately. Then, each candidate is characterized using combined features based on deep features with HCFs incorporated, implemented by ridge regression-based feature fusion. This method employs multi-space-based intensity features, geometric features, a gray-level co-occurrence matrix (GLCM)-based texture descriptor, and a gray-level size zone matrix (GLSZM)-based texture descriptor to construct the HCFs, and a DCNN to automatically learn the deep information of HEs. Finally, a random forest is employed to identify the true HEs among the candidates. RESULTS The proposed method is evaluated on two benchmark databases. It obtains an F-score of 0.8929 with an area under the curve (AUC) of 0.9644 on the e-optha database, and an F-score of 0.9326 with an AUC of 0.9323 on the HEI-MED database. These results demonstrate that our approach outperforms state-of-the-art methods. Our model also proves to be suitable for clinical applications based on private clinical images from a local hospital. CONCLUSIONS This newly proposed method integrates traditional HCFs and deep features learned from a DCNN for detecting HEs. It achieves a new state-of-the-art in both detecting HEs and DR screening. Furthermore, the proposed feature selection and fusion strategy reduces feature dimension and improves HE detection performance.
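Among the handcrafted features listed above, the GLCM-based texture descriptor is straightforward to sketch: count co-occurring gray-level pairs at a fixed pixel offset, normalize, and derive Haralick statistics such as contrast and homogeneity. This toy version (single offset, integer-quantized input) is an illustration of the descriptor, not the paper's implementation:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    `img` must already be quantized to integer gray levels 0..levels-1.
    """
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def haralick_contrast(P):
    """Sum of (i-j)^2 * P[i,j]: large for images with abrupt gray transitions."""
    i, j = np.meshgrid(np.arange(P.shape[0]), np.arange(P.shape[1]), indexing="ij")
    return float(((i - j) ** 2 * P).sum())

def haralick_homogeneity(P):
    """Sum of P[i,j] / (1 + |i-j|): large for locally uniform images."""
    i, j = np.meshgrid(np.arange(P.shape[0]), np.arange(P.shape[1]), indexing="ij")
    return float((P / (1.0 + np.abs(i - j))).sum())
```

A flat image patch gives contrast 0 and homogeneity 1; a checkerboard patch gives the opposite extreme, which is what makes these statistics useful for separating exudate texture from smooth background.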
Affiliation(s)
- Hui Wang, Guohui Yuan, Xuegong Zhao, Zhuoran Wang, Yanmin He, Zhenming Peng: School of Information and Communication Engineering, and Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lingbing Peng: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chao Qu: Department of Ophthalmology, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital, Chengdu 610072, China
25
Arsalan M, Baek NR, Owais M, Mahmood T, Park KR. Deep Learning-Based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa. Sensors (Basel) 2020; 20:E3454. [PMID: 32570943] [PMCID: PMC7349531] [DOI: 10.3390/s20123454]
Abstract
Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based disease analysis. In contrast, fundus imaging-based disease diagnosis is considered a low-cost diagnostic solution for retinal diseases. This study focuses on the detection of RP from the fundus image, which is a crucial task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners in diagnosing and analyzing RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), which is a specifically designed deep learning-based semantic segmentation network to accurately detect and segment the pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes, and accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, the RPS-Net provides fine segmentation, even in the case of degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate the proposed RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for detection and segmentation of retinal pigments. Experimental results show that RPS-Net achieved superior segmentation performance for RP diagnosis, compared with the state-of-the-art methods.
Affiliation(s)
- Kang Ryoung Park: Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea (affiliation shared with co-authors M.A., N.R.B., M.O., and T.M.)
26
Guo S, Wang K, Kang H, Liu T, Gao Y, Li T. Bin loss for hard exudates segmentation in fundus images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.103]
27
Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors (Basel) 2020; 20:1005. [PMID: 32069912] [PMCID: PMC7071097] [DOI: 10.3390/s20041005]
Abstract
The number of blind people worldwide is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms based on fundus image descriptors that allow the automatic classification of retinal tissue as healthy or pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissues. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.
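The local binary pattern descriptor used above threshold-codes each pixel's 8 neighbours against the pixel itself and then histograms the resulting codes. A basic non-rotation-invariant sketch (an illustration of the idea, not the paper's exact LBP variant):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centers (interior pixels only)
    # Neighbour offsets in clockwise order starting at the top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        # Shifted view of the image aligned with the centers.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is >= the center.
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: the texture feature vector."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

The 256-bin histogram, computed over local windows, is what feeds the classifier; flat regions concentrate mass in a few codes while lesion textures spread it out.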
28
Teo BG, Dhillon SK. An automated 3D modeling pipeline for constructing 3D models of monogenean hardpart using machine learning techniques. BMC Bioinformatics 2019; 20:658. [PMID: 31870297] [PMCID: PMC6929343] [DOI: 10.1186/s12859-019-3210-x]
Abstract
BACKGROUND Studying the structural and functional morphology of small organisms such as monogeneans is difficult due to the lack of visualization in three dimensions. One possible way to resolve this issue is to create digital 3D models, which may aid researchers in studying the morphology and function of the monogenean. However, the development of 3D models is a tedious procedure, as one has to repeat an entire complicated modelling process for every new target 3D shape using comprehensive 3D modelling software. This study was designed to develop an alternative 3D modelling approach to build 3D models of monogenean anchors, which can be used to understand these morphological structures in three dimensions without repeating the tedious modelling procedure for every single target model from scratch. RESULTS An automated 3D modelling pipeline empowered by an Artificial Neural Network (ANN) was developed. This pipeline enables automated deformation of a generic 3D model of a monogenean anchor into another target 3D anchor. It generated the 8 target 3D models (representing 8 species: Dactylogyrus primaries, Pellucidhaptor merus, Dactylogyrus falcatus, Dactylogyrus vastator, Dactylogyrus pterocleidus, Dactylogyrus falciunguis, Chauhanellus auriculatum and Chauhanellus caelatus) of the monogenean anchor from the respective 2D illustration inputs without repeating the tedious modelling procedure. CONCLUSIONS Despite some constraints and limitations, the automated 3D modelling pipeline developed in this study demonstrates a working application of machine learning to 3D modelling, as well as a cross-disciplinary research design that integrates machine learning into a specific domain of study such as 3D modelling of biological structures.
Affiliation(s)
- Bee Guan Teo: School of Engineering, Monash University Malaysia, Kuala Lumpur, Malaysia; Data Science and Bioinformatics Laboratory, Institute of Biological Sciences, Faculty of Science, University of Malaya, Kuala Lumpur, Malaysia
- Sarinder Kaur Dhillon: Data Science and Bioinformatics Laboratory, Institute of Biological Sciences, Faculty of Science, University of Malaya, Kuala Lumpur, Malaysia
29
Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif Intell Med 2019; 99:101701. [DOI: 10.1016/j.artmed.2019.07.009]
30
Liu YP, Li Z, Xu C, Li J, Liang R. Referable diabetic retinopathy identification from eye fundus images with weighted path for convolutional neural network. Artif Intell Med 2019; 99:101694. [PMID: 31606108] [DOI: 10.1016/j.artmed.2019.07.002]
Abstract
Diabetic retinopathy (DR) is the most common cause of blindness in middle-aged subjects, and low DR screening rates demonstrate the need for an automated image assessment system, which can benefit from the development of deep learning techniques. Effective classification performance is therefore critical for the referable DR identification task. In this paper, we propose a new strategy that applies multiple weighted paths in a convolutional neural network, called WP-CNN, motivated by ensemble learning. In WP-CNN, multiple path weight coefficients are optimized by back propagation, and the output features are averaged for redundancy reduction and fast convergence. The experimental results show that, with an efficient training convergence rate, WP-CNN achieves an accuracy of 94.23% with a sensitivity of 90.94%, a specificity of 95.74%, an area under the receiver operating characteristic curve of 0.9823, and an F1-score of 0.9087. By taking full advantage of the multipath mechanism, the proposed WP-CNN is shown to be accurate and effective for referable DR identification compared to state-of-the-art algorithms.
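The weighted-path fusion described above reduces, at each merge point, to a convex combination of the parallel path outputs. A toy numpy sketch of that averaging step, with fixed weights standing in for the coefficients that WP-CNN learns by back-propagation (the shapes and values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Normalize raw coefficients into positive weights that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical outputs of three parallel paths for a batch of 4 samples
# over 2 classes (stand-ins for real feature maps).
path_outputs = rng.normal(size=(3, 4, 2))

# Path weight coefficients; fixed here for illustration, optimized by
# back-propagation in WP-CNN.
path_logits = np.array([0.5, 0.2, -0.1])
w = softmax(path_logits)

# Weighted average of the path outputs for redundancy reduction.
fused = np.tensordot(w, path_outputs, axes=1)  # shape (4, 2)
```

Averaging the paths rather than concatenating them keeps the fused output the same size as a single path, which is what gives the redundancy reduction and faster convergence claimed in the abstract.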
Affiliation(s)
- Yi-Peng Liu, Ronghua Liang: College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Zhanqing Li: College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Cong Xu: Department of Physics, Southern University of Science and Technology, Shenzhen, 518055, China
- Jing Li: Cancer Institute of Integrative Medicine, Tongde Hospital of Zhejiang Province, Hangzhou, 310012, China
31
Guo S, Li T, Kang H, Li N, Zhang Y, Wang K. L-Seg: An end-to-end unified framework for multi-lesion segmentation of fundus images. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.019]
32
Randive SN, Senapati RK, Rahulkar AD. A review on computer-aided recent developments for automatic detection of diabetic retinopathy. J Med Eng Technol 2019; 43:87-99. [PMID: 31198073] [DOI: 10.1080/03091902.2019.1576790]
Abstract
Diabetic retinopathy is a serious microvascular disorder that can result in vision loss and blindness. It seriously damages the retinal blood vessels and the light-sensitive inner layer of the eye. Manual inspection of retinal fundus images to detect morphological abnormalities such as microaneurysms (MAs), exudates (EXs), haemorrhages (HMs), and intraretinal microvascular abnormalities (IRMA) is a very difficult and time-consuming process; regular follow-up screening and early automatic diabetic retinopathy detection are therefore necessary. This paper discusses various methods for automatic retinopathy detection and for classification into different grades based on severity levels. In addition, retinal blood vessel detection techniques are discussed for the detection and diagnosis of proliferative diabetic retinopathy. Furthermore, the paper discusses in detail the systematic review conducted by the authors on various publicly available databases collected from different medical sources. In the survey, several methods for diabetic feature extraction and segmentation, together with various types of classifiers, are analyzed to evaluate system performance metrics for the diagnosis of DR. This survey will be helpful for practitioners and researchers who want to focus on building more powerful diagnostic systems for real-life use.
Affiliation(s)
- Santosh Nagnath Randive
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
- Ranjan K Senapati
- Department of Electronics & Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
- Amol D Rahulkar
- Department of Electrical and Electronics Engineering, National Institute of Technology, Goa, India
|
33
|
Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high level of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR lacks preemptive symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools, combined with proper treatment, can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed to assist ophthalmologists in observing inter- and intra-patient variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed using various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers, and potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison results indicate that there is still a need for accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
|
34
|
Khojasteh P, Aliahmad B, Kumar DK. A novel color space of fundus images for automatic exudates detection. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.12.004]
|
35
|
Applications of Artificial Intelligence in Ophthalmology: General Overview. J Ophthalmol 2018; 2018:5278196. [PMID: 30581604] [PMCID: PMC6276430] [DOI: 10.1155/2018/5278196]
Abstract
With the emergence of unmanned aerial vehicles, autonomous vehicles, face recognition, and language processing, artificial intelligence (AI) has remarkably revolutionized our lifestyle. Recent studies indicate that AI has astounding potential to perform much better than human beings in some tasks, especially in the field of image recognition. As the amount of image data in ophthalmology imaging centers increases dramatically, analyzing and processing these data is urgently needed. AI has been applied to decipher medical data and has made extraordinary progress in intelligent diagnosis. In this paper, we present the basic workflow for building an AI model and systematically review applications of AI in the diagnosis of eye diseases. Future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.
|
36
|
Exudate detection in fundus images using deeply-learnable features. Comput Biol Med 2018; 104:62-69. [PMID: 30439600] [DOI: 10.1016/j.compbiomed.2018.10.031]
Abstract
Presence of exudates on the retina is an early sign of diabetic retinopathy, and automatic detection of these can improve the diagnosis of the disease. Convolutional Neural Networks (CNNs) have been used for automatic exudate detection, but with poor performance. This study investigated different deep learning techniques to maximize sensitivity and specificity. We compared multiple deep learning methods, and both supervised and unsupervised classifiers, for improving automatic exudate detection: CNNs, pre-trained residual networks (ResNet-50), and Discriminative Restricted Boltzmann Machines. The experiments were conducted on two publicly available databases: (i) DIARETDB1 and (ii) e-Ophtha. The results show that ResNet-50 with a Support Vector Machine outperformed the other networks, with an accuracy of 98% and a sensitivity of 0.99. This shows that ResNet-50 can be used for the analysis of fundus images to detect exudates.
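The deep-features-plus-SVM pipeline this abstract describes can be sketched in outline. The snippet below is a hypothetical illustration, not the authors' implementation: a toy linear SVM trained by Pegasos-style sub-gradient descent on synthetic two-dimensional feature vectors standing in for ResNet-50 activations; all names and data are invented for the example.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Toy Pegasos-style linear SVM (no bias term); labels must be -1 or +1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # shuffled pass over the data
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss sub-gradient on margin violations
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Synthetic stand-ins for deep feature vectors of exudate vs. normal patches.
feats = [[2.0, 1.5], [1.8, 2.2], [2.5, 1.9], [-1.7, -2.0], [-2.1, -1.4], [-1.9, -2.3]]
labels = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(feats, labels)
acc = sum(predict(w, x) == yi for x, yi in zip(feats, labels)) / len(feats)
```

In the paper's setting, the feature vectors would come from the penultimate layer of a pre-trained ResNet-50 rather than being hand-made as here.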
|
37
|
Pedrosa M, Silva JM, Silva JF, Matos S, Costa C. SCREEN-DR: Collaborative platform for diabetic retinopathy. Int J Med Inform 2018; 120:137-146. [PMID: 30409338] [DOI: 10.1016/j.ijmedinf.2018.10.005]
Abstract
BACKGROUND AND OBJECTIVE Diabetic retinopathy (DR) is the most prevalent microvascular complication of diabetes mellitus and can lead to irreversible visual loss. Screening programs based on retinal imaging techniques are fundamental to detecting the disease, since the initial stages are asymptomatic. Most of these examinations reflect negative cases and many have poor image quality, representing an important inefficiency factor. The SCREEN-DR project aims to tackle this limitation by researching and developing computer-aided methods for diabetic retinopathy detection. This article presents a multidisciplinary collaborative platform created to meet the needs of physicians and researchers, aiming at the creation of machine learning algorithms to facilitate the screening process. METHODS Our proposal is a collaborative platform for textual and visual annotation of image datasets. The architecture and layout were optimized for annotating DR images by gathering feedback from several physicians during the design and conceptualization of the platform. It allows the aggregation and indexing of imaging studies from diverse sources, and supports the creation and annotation of phenotype-specific datasets to feed artificial intelligence algorithms. The platform uses an anonymization pipeline and role-based access control to secure personal data. RESULTS The SCREEN-DR platform has been deployed in the production environment of the SCREEN-DR project at http://demo.dicoogle.com/screen-dr, and the source code of the project is publicly available. We describe the platform's interface and the use cases it supports. At the time of publication, four physicians had created a total of 1826 annotations for 701 distinct images, and the annotated data has been used for training classification models.
|
38
|
Zheng R, Liu L, Zhang S, Zheng C, Bunyak F, Xu R, Li B, Sun M. Detection of exudates in fundus photographs with imbalanced learning using conditional generative adversarial network. Biomed Opt Express 2018; 9:4863-4878. [PMID: 30319908] [PMCID: PMC6179403] [DOI: 10.1364/boe.9.004863]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness worldwide. However, 90% of DR-caused blindness can be prevented if the disease is diagnosed and treated early. Retinal exudates can be observed at the early stage of DR and can serve as signs for early DR diagnosis. Deep convolutional neural networks (DCNNs) have been applied to exudate detection with promising results. However, two main challenges arise when applying DCNN-based methods to exudate detection. One is the very limited number of labeled data available from medical experts; the other is the severely imbalanced class distribution. First, there are many more images of normal eyes than of eyes with exudates, particularly in screening datasets. Second, the number of normal (non-exudate) pixels is much greater than the number of abnormal (exudate) pixels in images containing exudates. To tackle the small-sample problem, an ensemble convolutional neural network (MU-net) based on a U-net structure is presented in this paper. To alleviate the data imbalance problem, a conditional generative adversarial network (cGAN) is adopted to generate label-preserving minority-class data for data augmentation. The network was trained on one dataset (e_ophtha_EX) and tested on the other three public datasets (DiaReTDB1, HEI-MED and MESSIDOR). cGAN, as a data augmentation method, significantly improves network robustness and generalization, achieving F1-scores of 92.79%, 92.46%, 91.27%, and 94.34%, respectively, measured at the lesion level; without cGAN, the corresponding F1-scores were 92.66%, 91.41%, 90.72%, and 90.58%. Measured at the image level, accuracy with cGAN reached 95.45%, 92.13%, 88.76%, and 89.58%, compared with 86.36%, 87.64%, 76.33%, and 86.42% without cGAN.
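The F1-scores above compare predicted exudate masks against ground-truth masks. As a minimal sketch of the pixel-level version of that metric (hypothetical function name, binary masks as nested lists; not the paper's evaluation code):

```python
def mask_f1(pred, truth):
    """Pixel-level F1 between two equal-shape binary masks (lists of rows)."""
    tp = fp = fn = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p and t:
                tp += 1  # exudate pixel correctly detected
            elif p and not t:
                fp += 1  # false alarm
            elif t and not p:
                fn += 1  # missed exudate pixel
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Tiny example: 4 true exudate pixels, 3 detected, 1 false alarm.
truth = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
pred  = [[1, 1, 0],
         [1, 0, 1],
         [0, 0, 0]]
score = mask_f1(pred, truth)  # precision 3/4, recall 3/4 -> F1 = 0.75
```

Lesion-level F1, as reported in the paper, aggregates over connected lesion regions rather than individual pixels, but the precision/recall arithmetic is the same.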
Affiliation(s)
- Rui Zheng
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Lei Liu
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui 230022, China
- Shulin Zhang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Chun Zheng
- The 105 Hospital of PLA, Hefei, Anhui 230031, China
- Filiz Bunyak
- Department of Computer Science, University of Missouri, Columbia, MO 65211, USA
- Ronald Xu
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Bin Li
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui 230022, China
- Mingzhai Sun
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
|
39
|
Pujitha AK, Sivaswamy J. Solution to overcome the sparsity issue of annotated data in medical domain. CAAI Trans Intell Technol 2018. [DOI: 10.1049/trit.2018.1010]
Affiliation(s)
- Appan K. Pujitha
- Center for Visual Information Technology, IIIT Hyderabad, Hyderabad, India
|
40
|
Simultaneous Segmentation of Multiple Retinal Pathologies Using Fully Convolutional Deep Neural Network. 2018. [DOI: 10.1007/978-3-319-95921-4_29]
|
41
|
Elloumi Y, Akil M, Kehtarnavaz N. A mobile computer aided system for optic nerve head detection. Comput Methods Programs Biomed 2018; 162:139-148. [PMID: 29903480] [DOI: 10.1016/j.cmpb.2018.05.004]
Abstract
BACKGROUND AND OBJECTIVE The detection of the optic nerve head (ONH) in retinal fundus images plays a key role in identifying diabetic retinopathy (DR) as well as other abnormal conditions in eye examinations. This paper presents a method and its associated software toward the development of an Android smartphone app based on a previously developed ONH detection algorithm. The development of this app and the use of the d-Eye lens, which can be snapped onto a smartphone, provide a mobile and cost-effective computer-aided diagnosis (CAD) system in ophthalmology. In particular, this CAD system allows eye examinations to be conducted in remote locations with limited access to clinical facilities. METHODS A pre-processing step is first carried out to enable ONH detection on the smartphone platform. Then, the optimization steps taken to run the algorithm in a computationally and memory-efficient manner on the smartphone are discussed. RESULTS The smartphone implementation of the ONH detection algorithm was applied to the STARE and DRIVE databases, yielding detection rates of about 96% and 100% with average execution times of about 2 s and 1.3 s, respectively. In addition, two other databases captured with the d-Eye and iExaminer snap-on smartphone lenses were considered, yielding detection rates of about 93% and 91% with average execution times of about 2.7 s and 2.2 s, respectively.
Affiliation(s)
- Yaroub Elloumi
- Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France; Medical Technology and Image Processing Laboratory, Faculty of Medicine, University of Monastir, Tunisia
- Mohamed Akil
- Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France
- Nasser Kehtarnavaz
- Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
|
42
|
|
43
|
Mo J, Zhang L, Feng Y. Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.02.035]
|
44
|
Al Rahhal MM, Bazi Y, Al Zuair M, Othman E, BenJdira B. Convolutional Neural Networks for Electrocardiogram Classification. J Med Biol Eng 2018. [DOI: 10.1007/s40846-018-0389-7]
|
45
|
Jiang S, Chin KS, Tsui KL. A universal deep learning approach for modeling the flow of patients under different severities. Comput Methods Programs Biomed 2018; 154:191-203. [PMID: 29249343] [DOI: 10.1016/j.cmpb.2017.11.003]
Abstract
BACKGROUND AND OBJECTIVE The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, leading to overcrowding. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Motivated by this, the objective of this study is to explore a highly accurate integrated framework for predicting A&ED patient flow under different triage levels, combining a novel feature selection process with deep neural networks. METHODS Administrative data are collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to identify key features affecting patient flow. In the improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model for their universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm, two effective regularization strategies are integrated in one DNN framework to avoid overfitting, and all introduced hyper-parameters are optimized efficiently by grid search in one pass. RESULTS For feature selection, the improved GA-based algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). For prediction accuracy, compared with frequently used statistical models (GLM, seasonal ARIMA, ARIMAX, and ANN) and machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher accuracy on both MAPE and RMSE metrics in pairwise comparisons. CONCLUSIONS The contribution of this study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, the joint information of multiple features is maintained by the fitness-based crossover operator, and the universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by the improved GA reveal the underlying relationship between patient flows and input features, and the predicted values are significant indicators of patient demand that A&ED managers can use for resource planning and allocation. The high accuracy achieved by the framework in different cases enhances the reliability of downstream decision making.
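The fitness-based crossover for feature-subset selection described in this abstract can be illustrated with a toy version. This is a hypothetical sketch, not the authors' algorithm: individuals are bit-masks over candidate features, the fitness function is an invented stand-in, and crossover inherits each bit from the fitter parent with higher probability.

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=40, seed=1):
    """Toy GA for feature selection with a fitness-biased crossover."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            pa, pb = rng.sample(elite, 2)
            fa, fb = fitness(pa), fitness(pb)
            # fitness-based crossover: bias each bit toward the fitter parent
            p_take_a = 0.5 if fa == fb else (0.75 if fa > fb else 0.25)
            child = [a if rng.random() < p_take_a else b for a, b in zip(pa, pb)]
            i = rng.randrange(n_features)  # single-bit mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness: features 0-4 are "informative", the rest only add noise.
informative = {0, 1, 2, 3, 4}
def fitness(mask):
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    noise = sum(1 for i, bit in enumerate(mask) if bit and i not in informative)
    return hits - 0.5 * noise

best = ga_select(12, fitness)
```

In the paper, the fitness would instead score a candidate feature subset by the accuracy of the downstream patient-flow predictor trained on those features.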
Affiliation(s)
- Shancheng Jiang
- Dept. of Systems Engineering and Engineering Management, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong
- Kwai-Sang Chin
- Dept. of Systems Engineering and Engineering Management, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong
- Kwok L Tsui
- Dept. of Systems Engineering and Engineering Management, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong
|
46
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
|
47
|
Boosted Exudate Segmentation in Retinal Images Using Residual Nets. Fetal, Infant and Ophthalmic Medical Image Analysis 2017. [DOI: 10.1007/978-3-319-67561-9_24]
|