1
Zhelev Z, Peters J, Rogers M, Allen M, Kijauskaite G, Seedat F, Wilkinson E, Hyde C. Test accuracy of artificial intelligence-based grading of fundus images in diabetic retinopathy screening: A systematic review. J Med Screen 2023; 30:97-112. [PMID: 36617971] [PMCID: PMC10399100] [DOI: 10.1177/09691413221144382]
Abstract
OBJECTIVES To systematically review the accuracy of artificial intelligence (AI)-based systems for grading of fundus images in diabetic retinopathy (DR) screening. METHODS We searched MEDLINE, EMBASE, the Cochrane Library and ClinicalTrials.gov from 1st January 2000 to 27th August 2021. Accuracy studies published in English were included if they met the pre-specified inclusion criteria. Selection of studies for inclusion, data extraction and quality assessment were conducted by one author, with a second reviewer independently screening and checking 20% of titles. Results were analysed narratively. RESULTS Forty-three studies evaluating 15 deep learning (DL) and 4 machine learning (ML) systems were included. Nine systems were evaluated in a single study each. Most studies were judged to be at high or unclear risk of bias in at least one QUADAS-2 domain. Sensitivity for referable DR and higher grades was ≥85%, while specificity varied and was <80% for all ML systems and in 6/31 studies evaluating DL systems. Studies reported high accuracy for detection of ungradable images, but the latter were analysed and reported inconsistently. Seven studies reported that AI was more sensitive but less specific than human graders. CONCLUSIONS AI-based systems are more sensitive than human graders and could be safe to use in clinical practice, but they have variable specificity. However, for many systems the evidence is limited, at high risk of bias, and may not generalise across settings. Therefore, pre-implementation assessment in the target clinical pathway is essential to obtain reliable and applicable accuracy estimates.
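The sensitivity and specificity figures discussed throughout this review come from standard confusion-matrix arithmetic. As a minimal sketch (the screening counts below are made up for illustration, not data from any included study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcome: 85 referable cases detected, 15 missed,
# 720 non-referable eyes correctly passed, 180 false referrals.
se, sp = sensitivity_specificity(85, 15, 720, 180)
# → se = 0.85, sp = 0.8
```

With these counts the system would meet the ≥85% sensitivity figure reported for referable DR while landing at the 80% specificity boundary the review uses to flag lower-specificity systems.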
Affiliation(s)
- Zhivko Zhelev
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
- Jaime Peters
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
- Morwenna Rogers
- NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, University of Exeter, Exeter, UK
- Michael Allen
- University of Exeter Medical School, University of Exeter, Exeter, UK
- Christopher Hyde
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
2
Bai Y, Zhang X, Wang C, Gu H, Zhao M, Shi F. Microaneurysms detection in retinal fundus images based on shape constraint with region-context features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104903]
3
Hemanth SV, Alagarsamy S. Hybrid adaptive deep learning classifier for early detection of diabetic retinopathy using optimal feature extraction and classification. J Diabetes Metab Disord 2023; 22:881-895. [PMID: 37255780] [PMCID: PMC10225400] [DOI: 10.1007/s40200-023-01220-6]
Abstract
Objectives Diabetic retinopathy (DR) is one of the leading causes of blindness, and deep learning methods are increasingly used to identify it. However, such methods often rely heavily on annotated data, which can be costly and time-consuming to obtain, and DR features are subtle and scattered across high-resolution images. Methods Learning such distributed DR features is therefore a major challenge. In this work, we proposed a hybrid adaptive deep learning classifier for early detection of diabetic retinopathy (HADL-DR). First, we provide an improved multichannel-based generative adversarial network (MGAN) with semi-supervision for blood vessel segmentation. Results By reducing the reliance on annotated data, high-resolution images can be used by the semi-supervised MGAN to detect subtle features. Scale-invariant feature transform (SIFT) features are then extracted, and the best features are selected using an improved sequential approximation optimization (SAO) algorithm. After that, a hybrid recurrent neural network with long short-term memory (RNN-LSTM) is utilized for DR classification. The proposed RNN-LSTM classifier is evaluated on the standard benchmark Kaggle and Messidor datasets. Conclusion Finally, the simulation results are compared with existing state-of-the-art classifiers in terms of accuracy, precision, recall, F-measure and area under curve (AUC), and more successful results are obtained.
Affiliation(s)
- S V Hemanth
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to Be University), Krishnankoil, Tamil Nadu, India
- Saravanan Alagarsamy
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to Be University), Krishnankoil, Tamil Nadu, India
4
Exudate identification in retinal fundus images using precise textural verifications. Sci Rep 2023; 13:2824. [PMID: 36808177] [PMCID: PMC9938199] [DOI: 10.1038/s41598-023-29916-y]
Abstract
One of the most salient diseases of the retina is diabetic retinopathy (DR), which may lead to irreparable damage to vision in its advanced phases. A large number of people with diabetes experience DR. Early identification of DR signs facilitates the treatment process and prevents blindness. Hard exudates (HE) are bright lesions that appear in retinal fundus images of DR patients; thus, the detection of HEs is an important task in preventing the progress of DR. However, HE detection is challenging because of the varied appearance of these lesions. In this paper, an automatic method for the identification of HEs with various sizes and shapes is proposed. The method works on a pixel-wise basis: it considers several semi-circular regions around each pixel and, for each semi-circular region, computes the intensity changes along several directions with not-necessarily-equal radii. All pixels for which several semi-circular regions show considerable intensity changes are considered pixels located in HEs. To reduce false positives, an optic disc localization method is proposed for the post-processing phase. The performance of the proposed method has been evaluated on the DIARETDB0 and DIARETDB1 datasets. The experimental results confirm the improved performance of the suggested method in terms of accuracy.
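The direction-and-radius probing idea described in this abstract can be sketched roughly as follows. This is a simplified full-circle variant in NumPy, not the paper's method: the threshold, radii, and direction count are illustrative assumptions, and the semi-circular region grouping is omitted.

```python
import numpy as np

def bright_lesion_score(img, y, x, radii=(2, 3, 4), n_dirs=8, thresh=0.2):
    """Fraction of probe directions along which intensity drops by more than
    `thresh` relative to the centre pixel at some probe radius, i.e. how
    consistently (y, x) looks like a local bright spot (an HE candidate)."""
    angles = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    hits = 0
    for a in angles:
        for r in radii:
            yy = int(round(y + r * np.sin(a)))
            xx = int(round(x + r * np.cos(a)))
            if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
                if img[y, x] - img[yy, xx] > thresh:
                    hits += 1
                    break  # one sufficient radius per direction is enough
    return hits / n_dirs

# A synthetic bright blob on a dark background scores 1.0 at its centre.
img = np.zeros((11, 11))
img[4:7, 4:7] = 1.0
score = bright_lesion_score(img, 5, 5)  # → 1.0
```

Thresholding this score over all pixels would yield the candidate HE map; the abstract's optic-disc localization step would then be needed to suppress the brightest false positive.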
5
Comparing Conventional and Deep Feature Models for Classifying Fundus Photography of Hemorrhages. J Healthc Eng 2022; 2022:7387174. [DOI: 10.1155/2022/7387174]
Abstract
Diabetic retinopathy is an eye-related pathology that creates abnormalities and causes visual impairment; proper treatment requires identifying these irregularities. This research uses a hemorrhage detection method and compares classification based on conventional and deep features. In particular, the method identifies hemorrhages connected with blood vessels or residing at the retinal border, cases that have been reported as challenging. Initially, adaptive brightness adjustment and contrast enhancement rectify degraded images. Prospective locations of hemorrhages are estimated by a Gaussian matched filter, entropy thresholding, and morphological operations. Hemorrhages are segmented by a novel technique based on the regional variance of intensities. Features are then extracted by conventional methods and deep models for training support vector machines, and the results are evaluated. Evaluation metrics for each model are promising, but the findings suggest that deep models are comparatively more effective than conventional features.
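The Gaussian matched filter mentioned in this pipeline is a classic vessel/lesion-candidate tool: a kernel whose cross-section is an inverted, zero-mean Gaussian profile, correlated with the image at several orientations. A minimal sketch of the kernel construction (the parameter values here are illustrative, not the paper's):

```python
import numpy as np

def gaussian_matched_kernel(sigma=1.5, length=9, width=7):
    """Kernel whose rows hold an inverted Gaussian cross-section, matching a
    dark linear structure on a brighter background. The profile is shifted to
    zero mean so a flat image region produces zero response."""
    x = np.arange(width) - width // 2
    profile = -np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    profile -= profile.mean()              # enforce zero mean
    return np.tile(profile, (length, 1))   # replicate along the structure axis

k = gaussian_matched_kernel()              # shape (9, 7), sums to ~0
```

Correlating the image with rotated copies of `k` and taking the per-pixel maximum gives a response map that an entropy-based threshold, as in the abstract, can then binarize into candidate regions.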
6
Ribeiro L, Marques IP, Coimbra R, Santos T, Madeira MH, Santos AR, Barreto P, Lobo C, Cunha-Vaz J. Characterization of One-Year Progression of Risk Phenotypes of Diabetic Retinopathy. Ophthalmol Ther 2021; 11:333-345. [PMID: 34865186] [PMCID: PMC8770718] [DOI: 10.1007/s40123-021-00437-z]
Abstract
Introduction We characterized the progression of different diabetic retinopathy (DR) phenotypes in type 2 diabetes (T2D). Methods A prospective longitudinal cohort study (CORDIS, NCT03696810) was conducted with three visits (baseline, 6 months, and 1 year). Demographic and systemic data included age, sex, diabetes duration, lipid profile, and hemoglobin A1c (HbA1c). Ophthalmological examinations included best-corrected visual acuity (BCVA), color fundus photography (CFP), and optical coherence tomography (OCT and OCTA). Phenotype classification was performed at the 6-month visit based on microaneurysm turnover (MAT, on CFP) and central retinal thickness (CRT, on OCT). Only risk phenotypes B (MAT < 6 and increased CRT) and C (MAT ≥ 6 with or without increased CRT) were included. ETDRS grading was performed at the baseline visit based on seven-field CFP. Results A total of 133 T2D individuals were included in the study; 81 (60%) eyes were classified as phenotype B and 52 (40%) eyes as phenotype C. Of these, 128 completed the 1-year follow-up. At baseline, eyes with phenotype C showed greater capillary closure (superior capillary plexus, deep capillary plexus, and full retina, p < 0.001) and increased foveal avascular zone (FAZ) area (p < 0.001), indicating more advanced microvascular disease. Neurodegeneration represented by thinning of the ganglion cell layer + inner plexiform layer (GCL + IPL) was present in both phenotypes. When analyzing the 1-year progression of each phenotype, only phenotype C revealed a significant decrease in BCVA (p = 0.02) and enlargement of the FAZ (p = 0.03). A significant progressive decrease in the vessel density of the deep capillary layer and in MAT occurred in both phenotypes, but these changes were particularly relevant in phenotype C and ETDRS grades 43–47. During the 1-year period, both phenotypes B and C showed progression in GCL + IPL thinning (p < 0.001). 
Conclusions In the 1-year period of follow-up, both phenotypes B and C showed progression in retinal neurodegeneration, whereas phenotype C showed more marked disease progression at the microvascular level.
Affiliation(s)
- Luísa Ribeiro
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal
- Inês P Marques
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal
- Rita Coimbra
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- Torcato Santos
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- Maria H Madeira
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal
- Ana Rita Santos
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal; Department of Orthoptics, School of Health, Polytechnic of Porto, Porto, Portugal
- Patrícia Barreto
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- Conceição Lobo
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal; Department of Ophthalmology, Centro Hospitalar e Universitário de Coimbra (CHUC), Coimbra, Portugal
- José Cunha-Vaz
- AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, 3000-548, Coimbra, Portugal
7
Red-lesion extraction in retinal fundus images by directional intensity changes' analysis. Sci Rep 2021; 11:18223. [PMID: 34521886] [PMCID: PMC8440775] [DOI: 10.1038/s41598-021-97649-x]
Abstract
Diabetic retinopathy (DR) is an important retinal disease threatening people with a long history of diabetes. Blood leakage in the retina leads to the formation of red lesions, the analysis of which helps determine the severity of the disease. In this paper, a novel red-lesion extraction method is proposed. The method first determines the boundary pixels of blood vessels and red lesions. It then determines the distinguishing features of the boundary pixels of red lesions to discriminate them from other boundary pixels. The main point utilized here is that a red lesion appears as a significant intensity change in almost all directions in the fundus image. This is made feasible by considering special neighborhood windows around the extracted boundary pixels. The performance of the proposed method has been evaluated on three different datasets: Diaretdb0, Diaretdb1 and Kaggle. The method achieves sensitivity and specificity of 0.87 and 0.88 on Diaretdb1, 0.89 and 0.9 on Diaretdb0, and 0.82 and 0.9 on Kaggle. The proposed method is also time-efficient in the red-lesion extraction process.
8
A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. J King Saud Univ Comput Inf Sci 2021. [DOI: 10.1016/j.jksuci.2021.06.006]
9
Xu R, Xu Y, Quan Y. Structure-Texture Image Decomposition Using Discriminative Patch Recurrence. IEEE Trans Image Process 2021; 30:1542-1555. [PMID: 33320812] [DOI: 10.1109/tip.2020.3043665]
Abstract
Morphology component analysis provides an effective framework for structure-texture image decomposition, which characterizes the structure and texture components by sparsifying each with a suitable transform. Due to the complexity and randomness of texture, it is challenging to design effective sparsifying transforms for texture components. This paper aims at exploiting the recurrence of texture patterns, one important property of texture, to develop a nonlocal transform for texture component sparsification. Since plain patch recurrence holds for both cartoon contours and texture regions, a nonlocal sparsifying transform constructed on such patch recurrence sparsifies both the structure and texture components well. As a result, cartoon contours could be wrongly assigned to the texture component, yielding ambiguity in the decomposition. To address this issue, we introduce a discriminative prior on patch recurrence: the spatial arrangement of recurrent patches in texture regions exhibits an isotropic structure, which differs from that of cartoon contours. Based on this prior, a nonlocal transform is constructed which sparsifies only texture regions well. Incorporating the constructed transform into morphology component analysis, we propose an effective approach for structure-texture decomposition. Extensive experiments have demonstrated the superior performance of our approach over existing ones.
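The discriminative prior can be illustrated numerically: collect the spatial offsets at which similar patches recur, then test whether their scatter is isotropic. The eigenvalue-ratio statistic below is an illustrative stand-in for the paper's actual measure, on toy offset data:

```python
import numpy as np

def arrangement_isotropy(offsets):
    """Min/max eigenvalue ratio of the 2-D covariance of recurrent-patch
    offsets: near 1 for isotropic scatter (texture-like recurrence), near 0
    for offsets strung along a curve (cartoon-contour-like recurrence)."""
    c = np.cov(np.asarray(offsets, dtype=float).T)   # 2x2 covariance
    w = np.linalg.eigvalsh(c)                        # ascending eigenvalues
    return w[0] / w[-1]

rng = np.random.default_rng(0)
# Texture-like: recurrent patches found in all directions around a patch.
texture_like = rng.normal(size=(200, 2))
# Contour-like: recurrent patches found only along one direction.
contour_like = np.c_[np.linspace(0, 10, 200), 0.01 * rng.normal(size=200)]
```

Here `arrangement_isotropy(texture_like)` comes out close to 1 and `arrangement_isotropy(contour_like)` close to 0, so thresholding such a statistic separates the two recurrence regimes the prior distinguishes.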
10
Li Q, Li S, He Z, Guan H, Chen R, Xu Y, Wang T, Qi S, Mei J, Wang W. DeepRetina: Layer Segmentation of Retina in OCT Images Using Deep Learning. Transl Vis Sci Technol 2020; 9:61. [PMID: 33329940] [PMCID: PMC7726589] [DOI: 10.1167/tvst.9.2.61]
Abstract
Purpose To automate the segmentation of retinal layers, we propose DeepRetina, a method based on deep neural networks. Methods DeepRetina uses an improved Xception65 network to extract and learn the characteristics of retinal layers. The Xception65-extracted feature maps are input to an atrous spatial pyramid pooling module to obtain multiscale feature information. This information is then recovered to capture clearer retinal layer boundaries in the encoder-decoder module, thus completing automatic retinal layer segmentation of retinal optical coherence tomography (OCT) images. Results We validated this method using a retinal OCT image database containing 280 volumes (40 B-scans per volume) to demonstrate its effectiveness. The results showed that the method exhibits excellent performance in terms of mean intersection over union and sensitivity (Se), which are as high as 90.41% and 92.15%, respectively. The intersection over union and Se values of the nerve fiber layer, ganglion cell layer, inner plexiform layer, inner nuclear layer, outer plexiform layer, outer nuclear layer, outer limiting membrane, photoreceptor inner segment, photoreceptor outer segment, and pigment epithelium layer were all above 88%. Conclusions DeepRetina can automate the segmentation of retinal layers and has great potential for the early diagnosis of fundus retinal diseases. In addition, our approach will provide a segmentation model framework for other types of tissues and cells in clinical practice. Translational Relevance Automating the segmentation of retinal layers can help effectively diagnose and monitor clinical retinal diseases. In addition, it requires only a small amount of manual segmentation, significantly improving work efficiency.
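Mean intersection over union, the headline metric above, is easy to state precisely. A small sketch for integer label maps (class indexing and the handling of absent classes are illustrative choices, not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean of per-class IoU = |pred ∩ gt| / |pred ∪ gt|, skipping classes
    absent from both the prediction and the ground truth."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Two tiny "layer" label maps: one pixel of class 1 over-segmented.
pred = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
gt   = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
miou = mean_iou(pred, gt, 2)  # → 0.775 (class 0: 3/4, class 1: 4/5)
```

The same function applied per retinal layer yields the per-layer IoU values the abstract reports as being above 88%.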
Affiliation(s)
- Qiaoliang Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Shiyu Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Zhuoying He
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Huimin Guan
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Runmin Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Ying Xu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Tao Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Suwen Qi
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Jun Mei
- Medical Imaging Department of Shenzhen Eye Hospital Affiliated to Jinan University, Shenzhen, Guangdong Province, China
- Wei Wang
- Department of Pathology, Shenzhen University General Hospital, Shenzhen, Guangdong Province, China
11
Javidi M, Harati A, Pourreza H. Retinal image assessment using bi-level adaptive morphological component analysis. Artif Intell Med 2019; 99:101702. [PMID: 31606110] [DOI: 10.1016/j.artmed.2019.07.010]
Abstract
The automated analysis of retinal images is a widely researched area that can help diagnose several diseases, such as diabetic retinopathy, in their early stages. More specifically, separation of vessels and lesions is critical, as the features of these structures are directly related to the diagnosis and treatment of diabetic retinopathy. The complexity of retinal image content, especially in images with severe diabetic retinopathy, makes detection of vascular structure and lesions difficult. In this paper, a novel framework based on morphological component analysis (MCA) is presented which benefits from the adaptive representations obtained via dictionary learning. In the proposed Bi-level Adaptive MCA (BAMCA), MCA is extended to deal locally with sparse representation of the retinal images at the patch level, whereas the decomposition process occurs globally at the image level. The BAMCA method, with appropriately offline-learnt dictionaries, is adopted to work on retinal images with severe diabetic retinopathy in order to simultaneously separate vessels and exudate lesions as diagnostically useful morphological components. To obtain the appropriate dictionaries, the K-SVD dictionary learning algorithm is modified to use a gated error which guides the process toward learning the main structures of the retinal images using vessel or lesion maps. Computational efficiency of the proposed framework is also increased significantly through several improvements, leading to a noticeable reduction in run time. We experimentally show how effective dictionaries can be learnt which help BAMCA to successfully separate exudate and vessel components from retinal images, even in severe cases of diabetic retinopathy. In addition to visual qualitative assessment, the performance of the proposed method is quantitatively measured in the framework of vessel and exudate segmentation. The reported experimental results on public datasets demonstrate that the obtained components can be used to achieve competitive results with regard to state-of-the-art vessel and exudate segmentation methods.
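The sparse-representation machinery behind MCA and K-SVD rests on greedy sparse coding of patches over a learnt dictionary. Below is a minimal orthogonal matching pursuit sketch, not the authors' modified gated-error K-SVD; the identity dictionary used in the example is a toy stand-in for a learnt one:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k atoms (columns of D,
    assumed unit-norm) to approximate x; return the sparse coefficient
    vector over the full dictionary."""
    residual, support = x.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        # Re-fit x on all chosen atoms (the "orthogonal" step).
        c, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ c
    coef[support] = c
    return coef

# Toy identity dictionary: OMP recovers the two active entries exactly.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -2.0])
a = omp(D, x, 2)  # → [0., 3., 0., -2.]
```

In an MCA-style decomposition, running such a sparse coder per patch against a vessel dictionary and a lesion dictionary, and keeping each patch's reconstruction from its better-fitting dictionary, is the basic mechanism the learnt dictionaries plug into.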
Affiliation(s)
- Malihe Javidi
- Computer Engineering Department, Quchan University of Technology, Quchan, Iran.
- Ahad Harati
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
- HamidReza Pourreza
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
12
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320] [DOI: 10.1016/j.media.2019.101561]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening efforts. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow the generalizability of algorithms to be tested, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered in the challenge. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis, and in image-based DR screening in particular.
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
- Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Xinhui Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- TianBo Wu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Jing Xiao
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu
- Cleerly Inc., New York, United States
- Ting Zhou
- University at Buffalo, New York, United States
- Janos Toth
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Agnes Baran
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
- Qinhao Chu
- School of Computing, National University of Singapore, Singapore
- Pengcheng Li
- School of Computing, National University of Singapore, Singapore
| | - Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
| | - Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | | | | | - Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
| | - Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Balazs Harangi
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
| | | | - Andras Hajdu
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
| | - Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
| | - Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| | - Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
| | | | - Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
| |
13. Gong C, Erichson NB, Kelly JP, Trutoiu L, Schowengerdt BT, Brunton SL, Seibel EJ. RetinaMatch: Efficient Template Matching of Retina Images for Teleophthalmology. IEEE Trans Med Imaging 2019; 38:1993-2004. [PMID: 31217098] [DOI: 10.1109/tmi.2019.2923466]
Abstract
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, the images from such devices generally have a small field of view (FOV) and image quality degradations, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique that combines dimension reduction and mutual information (MI), called RetinaMatch. The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images with the matched target registration errors below 2 pixels on average, excluding the observer variability, outperforming standard template matching solutions. In the application of measuring vessel diameter repeatedly, single pixel errors are expected. In addition, our method can be used in the process of image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
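The coarse-to-fine idea, with mutual information (MI) as the similarity measure, can be illustrated with a small sketch. This is an illustrative toy (exhaustive search, pure Python, hypothetical function names), not RetinaMatch itself, which initializes the MI optimization with a learned dimension-reduction step rather than searching every offset:

```python
# Toy MI-based template matching: find the offset in `image` where the
# template's intensities share the most information with the window there.
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """MI (in nats) between two equal-length intensity lists, values 0..255."""
    qa = [v * bins // 256 for v in a]
    qb = [v * bins // 256 for v in b]
    n = len(a)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * math.log(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi

def match_template(image, template):
    """Exhaustive MI search; image/template are 2-D lists of ints.
    Returns the (row, col) of the best-matching window."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = [image[y + dy][x + dx] for dy in range(th) for dx in range(tw)]
            mi = mutual_information(window, flat_t)
            if mi > best:
                best, best_pos = mi, (y, x)
    return best_pos
```

Because MI depends only on the joint intensity statistics, this kind of matcher tolerates the illumination differences typical of low-cost adapter optics, which is why the paper pairs it with a coarse localizer to keep the optimization out of local optima.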
14. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high ratio of glucose in the blood, which causes alterations in the retinal microvasculature. Without preemptive symptoms of DR, it leads to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment have the ability to control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts are tedious and challenging tasks. Therefore, many CAD systems were developed in the past to assist ophthalmologists for observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for diagnosis of DR is presented. We describe all those CAD systems that have been developed by various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are also compared in terms of statistical parameters to quantitatively evaluate them. The comparison results indicate that there is still a need for accurate development of CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
15. Karkuzhali S, Manimegalai D. Distinguising Proof of Diabetic Retinopathy Detection by Hybrid Approaches in Two Dimensional Retinal Fundus Images. J Med Syst 2019; 43:173. [PMID: 31069550] [DOI: 10.1007/s10916-019-1313-6]
Abstract
Diabetes is characterized by a constantly high level of blood glucose, as the body must maintain insulin within a very narrow range. Patients affected by diabetes for a long time may develop the eye disease diabetic retinopathy (DR). In the proposed approach, the optic disc, a retinal landmark, is predicted and masked to decrease false positives in exudate detection, and abnormalities such as exudates, microaneurysms and hemorrhages are segmented to classify the stages of DR. The approach thus separates retinal landmarks from retinal lesions for staging. Segmentation algorithms including Gabor double-sided hysteresis thresholding, maximum intensity variation, inverse surface adaptive thresholding, a multi-agent approach and toboggan segmentation are used to detect and segment blood vessels, optic discs, exudates, microaneurysms and hemorrhages. The feature vector formation and machine learning algorithms used to classify the stages of DR are evaluated on images from various retinal databases, and their performance measures are presented in this paper.
Affiliation(s)
- Karkuzhali S
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputtur, Tamilnadu, India
- Manimegalai D
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India
16. Stevenson CH, Hong SC, Ogbuehi KC. Development of an artificial intelligence system to classify pathology and clinical features on retinal fundus images. Clin Exp Ophthalmol 2018; 47:484-489. [PMID: 30370587] [DOI: 10.1111/ceo.13433]
Abstract
IMPORTANCE Artificial intelligence (AI) algorithms are under development for use in diabetic retinopathy photo screening pathways. To be clinically acceptable, such systems must also be able to classify other fundus abnormalities and clinical features at the point of care. BACKGROUND We aimed to develop an AI system that can detect several fundus pathologies and report relevant clinical features. DESIGN Convolutional neural network training with retrospective data set. PARTICIPANTS Colour fundus photos were obtained from publicly available fundus image databases. METHODS Images were uploaded to a web-based AI platform for training and validation of AI classifiers. Separate classifiers were created for each fundus pathology and clinical feature. MAIN OUTCOME MEASURES Accuracy, sensitivity, specificity and area under receiver operating characteristic curve (AUC) for each classifier. RESULTS We obtained 4435 images from publicly available fundus image databases. AI classifiers were developed for each disease state above. Although statistical performance was limited by the small sample size, average accuracy was 89%, average sensitivity was 75%, average specificity was 89% and average AUC was 0.58. CONCLUSION AND RELEVANCE This study is a proof-of-concept AI system that could be implemented within a diabetic photo-screening pathway. Performance was promising but not yet at the level that would be required for clinical application. We have shown that it is possible for clinicians to develop AI classifiers with no previous programming or AI knowledge, using standard laptop computers.
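The per-classifier AUC reported above is equivalent to the probability that a randomly chosen positive image receives a higher score than a randomly chosen negative one. As a reminder of that rank-based definition (an illustrative sketch, not the web platform's code):

```python
def auc_from_scores(pos_scores, neg_scores):
    """Mann-Whitney form of AUC: the fraction of (positive, negative)
    score pairs ranked correctly, with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An average AUC of 0.58 therefore means the classifiers ranked a positive above a negative only slightly more often than chance, consistent with the authors' conclusion that performance is not yet clinically usable.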
Affiliation(s)
- Clark H Stevenson
- Dunedin Hospital Eye Department, Dunedin, New Zealand; University of Otago, Dunedin, New Zealand
17. Biyani R, Patre B. Algorithms for red lesion detection in Diabetic Retinopathy: A review. Biomed Pharmacother 2018; 107:681-688. [DOI: 10.1016/j.biopha.2018.07.175]
18. Zheng R, Liu L, Zhang S, Zheng C, Bunyak F, Xu R, Li B, Sun M. Detection of exudates in fundus photographs with imbalanced learning using conditional generative adversarial network. Biomed Opt Express 2018; 9:4863-4878. [PMID: 30319908] [PMCID: PMC6179403] [DOI: 10.1364/boe.9.004863]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness worldwide. However, 90% of DR-caused blindness can be prevented if diagnosed and treated early. Retinal exudates can be observed at the early stage of DR and can be used as signs for early DR diagnosis. Deep convolutional neural networks (DCNNs) have been applied to exudate detection with promising results. However, two main challenges arise when applying DCNN-based methods to exudate detection: the very limited amount of labeled data available from medical experts, and the severely imbalanced distribution of data across classes. First, there are many more images of normal eyes than of eyes with exudates, particularly in screening datasets. Second, the number of normal (non-exudate) pixels is much greater than the number of abnormal (exudate) pixels in images containing exudates. To tackle the small-sample problem, an ensemble convolutional neural network (MU-net) based on a U-net structure is presented in this paper. To alleviate the imbalanced-data problem, a conditional generative adversarial network (cGAN) is adopted to generate label-preserving minority-class data for data augmentation. The network was trained on one dataset (e_ophtha_EX) and tested on three other public datasets (DiaReTDB1, HEI-MED and MESSIDOR). cGAN, as a data augmentation method, significantly improves network robustness and generalization, achieving lesion-level F1-scores of 92.79%, 92.46%, 91.27%, and 94.34%, respectively; without cGAN, the corresponding F1-scores were 92.66%, 91.41%, 90.72%, and 90.58%. At the image level, with cGAN we achieved accuracies of 95.45%, 92.13%, 88.76%, and 89.58%, compared with 86.36%, 87.64%, 76.33%, and 86.42% without cGAN.
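The F1-scores above combine precision and recall; the counts of true positives, false positives and false negatives are assumed to come from the paper's lesion- or image-level matching. For reference, the metric itself:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because F1 ignores true negatives entirely, it is a natural choice for this severely imbalanced setting, where the overwhelming majority of pixels are non-exudate background.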
Affiliation(s)
- Rui Zheng
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Lei Liu
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui 230022, China
- Shulin Zhang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Chun Zheng
- The 105 Hospital of PLA, Hefei, Anhui 230031, China
- Filiz Bunyak
- Department of Computer Science, University of Missouri, Columbia, MO 65211, USA
- Ronald Xu
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
- Bin Li
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, Anhui 230022, China
- Mingzhai Sun
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui 230022, China
19. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018; 1:39. [PMID: 31304320] [PMCID: PMC6550188] [DOI: 10.1038/s41746-018-0040-6]
Abstract
Artificial intelligence (AI) has long promised to increase healthcare affordability, quality and accessibility, but the FDA, until recently, had never authorized an autonomous AI diagnostic system. This pivotal trial of an AI system to detect diabetic retinopathy (DR) enrolled 900 subjects with diabetes and no history of DR at primary care clinics, comparing the AI system against Wisconsin Fundus Photograph Reading Center (FPRC) widefield stereoscopic photography and macular optical coherence tomography (OCT) performed by FPRC-certified photographers, with FPRC grading on the Early Treatment Diabetic Retinopathy Study (ETDRS) severity scale and for diabetic macular edema (DME). More than mild DR (mtmDR) was defined as ETDRS level 35 or higher, and/or DME, in at least one eye. AI system operators underwent a standardized training protocol before study start. Median age was 59 years (range, 22–84 years); 47.5% of participants were male; 16.1% were Hispanic and 83.3% were not; 28.6% were African American and 63.4% were not; 198 (23.8%) had mtmDR. The AI system exceeded all pre-specified superiority endpoints, with sensitivity of 87.2% (95% CI, 81.8–91.2%; endpoint >85%), specificity of 90.7% (95% CI, 88.3–92.7%; endpoint >82.5%), and an imageability rate of 96.1% (95% CI, 94.6–97.3%), demonstrating AI's ability to bring specialty-level diagnostics to primary care settings. Based on these results, the FDA authorized the system for use by health care providers to detect more than mild DR and diabetic macular edema, making it the first FDA-authorized autonomous AI diagnostic system in any field of medicine, with the potential to help prevent vision loss in thousands of people with diabetes annually. ClinicalTrials.gov NCT02963441.
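The 95% confidence intervals quoted for sensitivity and specificity can be reproduced approximately with a score interval for a binomial proportion. The Wilson interval below is one common choice for such endpoints; the trial's exact method is not stated in the abstract:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly near 100% sensitivity, which matters when screening endpoints sit close to their pre-specified superiority thresholds.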
Affiliation(s)
- Michael D Abràmoff
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA; Veterans Administration Medical Center, Iowa City, IA 52242, USA; IDx LLC, Coralville, IA 52241, USA; Institute for Vision Research, University of Iowa, Iowa City, IA 52242, USA
- Philip T Lavin
- Boston Biostatistics Research Foundation, Inc., 3 Cahill Park Drive, Framingham, MA 01702, USA
- Michele Birch
- Department of Family Medicine, Director of Academic Services, University of North Carolina School of Medicine, Charlotte, NC 28204, USA
- Nilay Shah
- The Emmes Corporation, 401 North Washington Street, Suite 700, Rockville, MD 20850, USA
- James C Folk
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA 52242, USA; Veterans Administration Medical Center, Iowa City, IA 52242, USA; IDx LLC, Coralville, IA 52241, USA
20. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018; 67:1-29. [PMID: 30076935] [DOI: 10.1016/j.preteyeres.2018.07.004]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological datasets can be analyzed quickly, non-invasively and comprehensively using artificial intelligence (AI). Methods based on machine learning (ML), and particularly deep learning (DL), are able to identify, localize and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks thereby mimic the human brain's path to object recognition, either learning pathological features from training sets (supervised ML) or extrapolating from independently recognized patterns (unsupervised ML). AI-based retinal analysis methods are diverse and differ widely in their applicability, interpretability and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening and diagnostic grading as well as guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects and identification of relevant targets for novel therapeutic approaches. Prediction and prognostic conclusions further expand the potential benefit of AI in retina, enabling personalized health care and large-scale management, and empowering the ophthalmologist to provide high-quality diagnosis and therapy and to deal successfully with the complexity of 21st-century ophthalmology.
Affiliation(s)
- Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Amir Sadeghipour
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Bianca S Gerendas
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Hrvoje Bogunović
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
21. Badgujar R, Deore P. MBO-SVM-based exudate classification in fundus retinal images of diabetic patients. Comput Methods Biomech Biomed Eng Imaging Vis 2018. [DOI: 10.1080/21681163.2018.1487338]
Affiliation(s)
- Ravindra Badgujar
- Department of Electronics & Telecommunication Engineering, R C Patel Institute of Technology, Shirpur, India
- Pramod Deore
- Department of Electronics & Telecommunication Engineering, R C Patel Institute of Technology, Shirpur, India
22. Adal KM, van Etten PG, Martinez JP, Rouwen KW, Vermeer KA, van Vliet LJ. An Automated System for the Detection and Classification of Retinal Changes Due to Red Lesions in Longitudinal Fundus Images. IEEE Trans Biomed Eng 2018; 65:1382-1390. [DOI: 10.1109/tbme.2017.2752701]
23. Kar SS, Maity SP. Automatic Detection of Retinal Lesions for Screening of Diabetic Retinopathy. IEEE Trans Biomed Eng 2018; 65:608-618. [DOI: 10.1109/tbme.2017.2707578]
24. Nørgaard MF, Grauslund J. Automated Screening for Diabetic Retinopathy - A Systematic Review. Ophthalmic Res 2018; 60:9-17. [PMID: 29339646] [DOI: 10.1159/000486284]
Abstract
PURPOSE Worldwide, ophthalmologists are challenged by the rapid rise in the prevalence of diabetes. Diabetic retinopathy (DR) is the most common complication in diabetes, and possible consequences range from mild visual impairment to blindness. Repetitive screening for DR is cost-effective, but it is also a costly and strenuous affair. Several studies have examined the application of automated image analysis to solve this problem. Large populations are needed to assess the efficacy of such programs, and a standardized and rigorous methodology is important to give an indication of system performance in actual clinical settings. METHODS In a systematic review, we aimed to identify studies with methodology and design that are similar to, or replicate, actual screening scenarios. A total of 1,231 publications were identified through PubMed, Cochrane Library, and Embase searches. Three manual search strategies were carried out to identify publications missed in the primary search. Four levels of screening identified 7 studies applicable for inclusion. RESULTS Seven studies were included. The detection of DR had high sensitivities (87.0-95.2%) but lower specificities (49.6-68.8%). False-negative results were related to mild DR with a low risk of progression within 1 year. Several studies reported missed cases of diabetic macular edema. A meta-analysis was not conducted as studies were not suitable for direct comparison or statistical analysis. CONCLUSION The study demonstrates that despite limited specificity, automated retinal image analysis may potentially be valuable in different DR screening scenarios, with a relatively high sensitivity and a substantial workload reduction.
Affiliation(s)
- Mads Fonager Nørgaard
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Research Unit of Ophthalmology, Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark; Research Unit of Ophthalmology, Department of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
25. Kaur J, Mittal D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2017.10.003]
26. Kaur J, Mittal D. Estimation of severity level of non-proliferative diabetic retinopathy for clinical aid. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.05.006]
27. Al-Jarrah MA, Shatnawi H. Non-proliferative diabetic retinopathy symptoms detection and classification using neural network. J Med Eng Technol 2017; 41:498-505. [DOI: 10.1080/03091902.2017.1358772]
Affiliation(s)
- Hadeel Shatnawi
- Computer Engineering Department, Yarmouk University, Irbid, Jordan
28. Veiga D, Martins N, Ferreira M, Monteiro J. Automatic microaneurysm detection using Laws texture masks and support vector machines. Comput Methods Biomech Biomed Eng Imaging Vis 2017. [DOI: 10.1080/21681163.2017.1296379]
Affiliation(s)
- Diana Veiga
- Enermeter, Braga, Portugal
- Centro Algoritmi, University of Minho, Guimarães, Portugal
- João Monteiro
- Centro Algoritmi, University of Minho, Guimarães, Portugal
29. Jordan KC, Menolotto M, Bolster NM, Livingstone IAT, Giardini ME. A review of feature-based retinal image analysis. Expert Rev Ophthalmol 2017. [DOI: 10.1080/17469899.2017.1307105]
30. Srivastava R, Duan L, Wong DWK, Liu J, Wong TY. Detecting retinal microaneurysms and hemorrhages with robustness to the presence of blood vessels. Comput Methods Programs Biomed 2017; 138:83-91. [PMID: 27886718] [DOI: 10.1016/j.cmpb.2016.10.017]
Abstract
BACKGROUND AND OBJECTIVES Diabetic Retinopathy is the leading cause of blindness in developed countries in the age group 20-74 years. It is characterized by lesions on the retina and this paper focuses on detecting two of these lesions, Microaneurysms and Hemorrhages, which are also known as red lesions. This paper attempts to deal with two problems in detecting red lesions from retinal fundus images: (1) false detections on blood vessels; and (2) different size of red lesions. METHODS To deal with false detections on blood vessels, novel filters have been proposed which can distinguish between red lesions and blood vessels. This distinction is based on the fact that vessels are elongated while red lesions are usually circular blob-like structures. The second problem of the different size of lesions is dealt with by applying the proposed filters on patches of different sizes instead of filtering the full image. These patches are obtained by dividing the original image using a grid whose size determines the patch size. Different grid sizes were used and lesion detection results for these grid sizes were combined using Multiple Kernel Learning. RESULTS Experiments on a dataset of 143 images showed that proposed filters detected Microaneurysms and Hemorrhages successfully even when these lesions were close to blood vessels. In addition, using Multiple Kernel Learning improved the results when compared to using a grid of one size only. The areas under receiver operating characteristic curve were found to be 0.97 and 0.92 for Microaneurysms and Hemorrhages respectively which are better than the existing related works. CONCLUSIONS Proposed filters are robust to the presence of blood vessels and surpass related works in detecting red lesions from retinal fundus images. Improved lesion detection using the proposed approach can help in automatic detection of Diabetic Retinopathy.
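The shape property the proposed filters exploit, that vessels are elongated while red lesions are circular blob-like structures, can be captured by the eigenvalue ratio of a region's second-moment matrix. The sketch below is a toy illustration of that distinction, not the paper's actual filters:

```python
import math

def elongation(pixels):
    """Ratio of the principal second moments of a pixel region:
    close to 1 for circular blobs, large for elongated (vessel-like) shapes.
    `pixels` is a list of (x, y) coordinates of the region."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Eigenvalues of the 2x2 covariance matrix [[mu20, mu11], [mu11, mu02]].
    common = math.sqrt(((mu20 - mu02) / 2) ** 2 + mu11 ** 2)
    lam1 = (mu20 + mu02) / 2 + common
    lam2 = (mu20 + mu02) / 2 - common
    return (lam1 + 1e-9) / (lam2 + 1e-9)
```

A candidate red-lesion region with a large elongation score would be rejected as a vessel fragment; applying such a test at several patch scales mirrors the paper's strategy for handling lesions of different sizes.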
Affiliation(s)
- Lixin Duan
- Institute for Infocomm Research, Singapore 138632
- Jiang Liu
- Institute for Infocomm Research, Singapore 138632
31. Imani E, Pourreza HR. A novel method for retinal exudate segmentation using signal separation algorithm. Comput Methods Programs Biomed 2016; 133:195-205. [PMID: 27393810] [DOI: 10.1016/j.cmpb.2016.05.016]
Abstract
Diabetic retinopathy is one of the major causes of blindness in the world. Early diagnosis of this disease is vital to the prevention of visual loss. The analysis of retinal lesions such as exudates, microaneurysms and hemorrhages is a prerequisite to detecting diabetic disorders such as diabetic retinopathy and macular edema in fundus images. This paper presents an automatic method for the detection of retinal exudates. The novelty of this method lies in the use of the Morphological Component Analysis (MCA) algorithm to separate lesions from normal retinal structures and thereby facilitate detection. In the first stage, vessels are separated from lesions using the MCA algorithm with appropriate dictionaries. Then, the lesion part of the retinal image is prepared for the detection of exudate regions. The final exudate map is created using dynamic thresholding and mathematical morphology. Performance of the proposed method is measured on the three publicly available DiaretDB, HEI-MED and e-ophtha datasets, achieving AUCs of 0.961, 0.948 and 0.937, respectively, which are higher than those of most state-of-the-art methods.
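The abstract does not specify the dynamic-thresholding scheme; Otsu's method, sketched below, is one standard way to pick a threshold from an image histogram and serves here only as an illustration of the idea:

```python
def otsu_threshold(values, levels=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the two resulting intensity classes."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # pixels at or below the candidate threshold
    sum0 = 0.0    # intensity mass of that class
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For exudate maps, such a data-driven threshold adapts to per-image illumination, after which morphological opening/closing cleans up the resulting binary mask.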
Affiliation(s)
- Elaheh Imani
- Machine Vision Lab., Ferdowsi University of Mashhad, Mashhad, Iran