1. Jia Y, Chen G, Chi H. Retinal fundus image super-resolution based on generative adversarial network guided with vascular structure prior. Sci Rep 2024;14:22786. [PMID: 39354105] [PMCID: PMC11445418] [DOI: 10.1038/s41598-024-74186-x]
Abstract
Many ophthalmic and systemic diseases can be screened by analyzing retinal fundus images. The clarity and resolution of retinal fundus images directly determine the effectiveness of clinical diagnosis. Deep learning methods based on generative adversarial networks are used in many research fields, especially image super-resolution, because of their powerful generative capabilities. Although Real-ESRGAN is a recently proposed method that excels at processing real-world degraded images, it suffers from structural distortions when super-resolving retinal fundus images, which are rich in structural information. To address this shortcoming, we first process the input image with a pre-trained U-Net model to obtain a segmentation map of the retinal vessels and use this map as a structural prior. A spatial feature transform layer is then used to integrate the structural prior into the generation process of the generator. In addition, we introduce channel and spatial attention modules into the skip connections of the discriminator to emphasize meaningful features and thereby enhance its discriminative power. On top of the original loss functions, we introduce an L1 loss that measures the pixel-level differences between the retinal vascular segmentation maps of the high-resolution and super-resolution images, further constraining the super-resolution output. Simulation results on retinal image datasets show that the improved algorithm produces results with better visual quality by suppressing structural distortions in the super-resolution images.
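The two additions described here, the spatial feature transform (SFT) conditioning and the vessel-structure L1 loss, can be sketched briefly. The PyTorch snippet below is a minimal illustration rather than the authors' code; the module names, channel counts, and the `seg_model` handle for the frozen pre-trained U-Net are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SFTLayer(nn.Module):
    """Spatial feature transform: modulates generator features with a per-pixel
    scale and shift predicted from the vessel-structure prior (same H x W as feat)."""
    def __init__(self, feat_ch: int = 64, cond_ch: int = 32):
        super().__init__()
        self.scale = nn.Sequential(nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1),
                                   nn.Conv2d(feat_ch, feat_ch, 1))
        self.shift = nn.Sequential(nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1),
                                   nn.Conv2d(feat_ch, feat_ch, 1))

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return feat * (1 + self.scale(cond)) + self.shift(cond)

def structure_l1_loss(seg_model: nn.Module, sr_img: torch.Tensor, hr_img: torch.Tensor) -> torch.Tensor:
    """L1 distance between the vessel segmentations of the super-resolved and
    ground-truth images; the segmentation network stays frozen, so gradients
    flow only through sr_img back into the generator."""
    with torch.no_grad():
        seg_hr = seg_model(hr_img)
    seg_sr = seg_model(sr_img)
    return F.l1_loss(seg_sr, seg_hr)
```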
Affiliation(s)
- Yanfei Jia: School of Electrical and Information Engineering, Beihua University, Jilin, 132013, China
- Guangda Chen: School of Electrical and Information Engineering, Beihua University, Jilin, 132013, China
- Haotian Chi: College of Electronic Science and Engineering, Jilin University, Changchun, 130015, China

2. Santos AR, Ghate S, Lopes M, Rocha AC, Santos T, Reste-Ferreira D, Manivannan N, Foote K, Cunha-Vaz J. ETDRS grading with CLARUS ultra-widefield images shows agreement with 7-fields colour fundus photography. BMC Ophthalmol 2024;24:387. [PMID: 39227901] [PMCID: PMC11369991] [DOI: 10.1186/s12886-024-03537-z]
Abstract
BACKGROUND: To analyse and compare the grading of diabetic retinopathy (DR) severity level using standard 35° ETDRS 7-fields photography and the CLARUS™ 500 ultra-widefield imaging system.

METHODS: A cross-sectional analysis of retinal images of patients with type 2 diabetes (n = 160 eyes) was performed. All patients underwent 7-fields colour fundus photography (CFP) at 35° on a standard Topcon TRC-50DX® camera, and ultra-widefield (UWF) imaging at 200° on a CLARUS™ 500 (ZEISS, Dublin, CA, USA) by an automatic montage of two 133° images (nasal and temporal). The 35° 7-fields photographs were graded by two graders according to the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol. For CLARUS UWF images, a prototype 7-fields grid was applied using the CLARUS review software, and the same ETDRS grading procedures were performed inside that area only. DR severity level was compared between the two methods to evaluate the agreement between the imaging techniques.

RESULTS: Images of 160 eyes from 83 diabetic patients were considered for analysis. According to the 35° ETDRS 7-fields images, 22 eyes were graded as DR severity level 10-20, 64 eyes as level 35, 41 eyes as level 43, 21 eyes as level 47, 7 eyes as level 53, and 5 eyes as level 61. The same DR severity level was reached with CLARUS 500 UWF images in 92 eyes (57%), showing almost perfect agreement (κ > 0.80) with the 7-fields 35° technique. Fifty-seven eyes (36%) showed a higher DR level with CLARUS UWF images, mostly due to better visualization of haemorrhages and a higher detection rate of intraretinal microvascular abnormalities (IRMA). Only 11 eyes (7%) showed a lower severity level with the CLARUS UWF system, due to artifacts or media opacities that precluded correct evaluation of DR lesions.

CONCLUSIONS: The UWF CLARUS 500 device showed almost perfect agreement with standard 35° 7-fields images across all ETDRS severity levels. Moreover, CLARUS images showed an increased ability to detect haemorrhages and IRMA, allowing finer evaluation of lesions and demonstrating that a UWF photograph can be used to grade ETDRS severity level with better visualization than the standard 7-fields images.

TRIAL REGISTRATION: Approved by the AIBILI - Association for Innovation and Biomedical Research on Light and Image Ethics Committee for Health under number CEC/009/17-EYEMARKER.
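For readers who want to reproduce this kind of device-agreement analysis, the sketch below shows how exact agreement and Cohen's kappa can be computed from paired ETDRS grades. The paired grades are illustrative placeholders, not study data, and the study does not report which kappa variant was used.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired ETDRS levels for a handful of eyes (7-fields vs. CLARUS UWF).
levels_7f  = [20, 35, 35, 43, 47, 53, 61, 35, 43, 20]   # standard 7-fields grades
levels_uwf = [20, 35, 43, 43, 47, 53, 61, 35, 43, 20]   # CLARUS UWF grades

exact_agreement = sum(a == b for a, b in zip(levels_7f, levels_uwf)) / len(levels_7f)
kappa = cohen_kappa_score(levels_7f, levels_uwf)
# By the Landis-Koch convention, kappa > 0.80 is read as almost perfect agreement.
print(f"exact agreement: {exact_agreement:.0%}, kappa: {kappa:.2f}")
```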
Affiliation(s)
- Ana Rita Santos: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; CORC - Coimbra Ophthalmology Reading Centre, Coimbra, Portugal; Center for Translational Health and Medical Biotechnology Research (TBIO)/Health Research Network (RISE-Health), ESS, Polytechnic of Porto, Porto, Portugal
- Marta Lopes: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; CORC - Coimbra Ophthalmology Reading Centre, Coimbra, Portugal
- Ana Cláudia Rocha: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; CORC - Coimbra Ophthalmology Reading Centre, Coimbra, Portugal
- Torcato Santos: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- Débora Reste-Ferreira: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- José Cunha-Vaz: AIBILI - Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Coimbra, Portugal

3. Woof W, de Guimarães TAC, Al-Khuzaei S, Daich Varela M, Sen S, Bagga P, Mendes B, Shah M, Burke P, Parry D, Lin S, Naik G, Ghoshal B, Liefers B, Fu DJ, Georgiou M, Nguyen Q, da Silva AS, Liu Y, Fujinami-Yokokawa Y, Sumodhee D, Patel P, Furman J, Moghul I, Moosajee M, Sallum J, De Silva SR, Lorenz B, Holz F, Fujinami K, Webster AR, Mahroo O, Downes SM, Madhusudhan S, Balaskas K, Michaelides M, Pontikos N. Quantification of Fundus Autofluorescence Features in a Molecularly Characterized Cohort of More Than 3500 Inherited Retinal Disease Patients from the United Kingdom. medRxiv [Preprint] 2024. [PMID: 38585957] [PMCID: PMC10996753] [DOI: 10.1101/2024.03.24.24304809]
Abstract
Purpose: To quantify relevant fundus autofluorescence (FAF) image features cross-sectionally and longitudinally in a large cohort of patients with inherited retinal diseases (IRDs).

Design: Retrospective study of imaging data (55-degree blue-FAF on Heidelberg Spectralis) from patients.

Participants: Patients with a clinically and molecularly confirmed diagnosis of IRD who underwent 55-degree FAF imaging at Moorfields Eye Hospital (MEH) and the Royal Liverpool Hospital (RLH) between 2004 and 2019.

Methods: Five FAF features of interest were defined: vessels, optic disc, perimacular ring of increased signal (ring), relative hypo-autofluorescence (hypo-AF) and hyper-autofluorescence (hyper-AF). Features were manually annotated by six graders in a subset of patients according to a defined grading protocol to produce segmentation masks for training an AI model, AIRDetect, which was then applied to the entire MEH imaging dataset.

Main Outcome Measures: Quantitative FAF imaging features, including area in mm² and vessel metrics, were analysed cross-sectionally by gene and age, and longitudinally to determine rate of progression. AIRDetect feature segmentation and detection were validated with the Dice score and precision/recall, respectively.

Results: A total of 45,749 FAF images from 3,606 IRD patients from MEH, covering 170 genes, were automatically segmented using AIRDetect. Model-grader Dice scores for disc, hypo-AF, hyper-AF, ring and vessels were 0.86, 0.72, 0.69, 0.68 and 0.65, respectively. The five genes with the largest hypo-AF areas were CHM, ABCC6, ABCA4, RDH12, and RPE65, with mean per-patient areas of 41.5, 30.0, 21.9, 21.4, and 15.1 mm². The five genes with the largest hyper-AF areas were BEST1, CDH23, RDH12, MYO7A, and NR2E3, with mean areas of 0.49, 0.45, 0.44, 0.39, and 0.34 mm², respectively. The five genes with the largest ring areas were CDH23, NR2E3, CRX, EYS and MYO7A, with mean areas of 3.63, 3.32, 2.84, 2.39, and 2.16 mm². Vessel density was highest in EFEMP1, BEST1, TIMP3, RS1, and PRPH2 (10.6%, 10.3%, 9.8%, 9.7%, 8.9%) and lower in retinitis pigmentosa (RP) and Leber congenital amaurosis genes. Longitudinal analysis of decreasing ring area in four RP genes (RPGR, USH2A, RHO, EYS) found EYS to be the fastest progressor, at −0.18 mm²/year.

Conclusions: We have conducted the first large-scale cross-sectional and longitudinal quantitative analysis of FAF features across a diverse range of IRDs using a novel AI approach.
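The validation and area metrics reported above are standard and straightforward to reproduce. The NumPy sketch below is illustrative only (AIRDetect itself is not reproduced); the function names and the assumption of a known pixel spacing are mine.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between a model segmentation mask and a grader mask (0/1 arrays)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

def feature_area_mm2(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Area of a segmented FAF feature in mm², given the image's pixel spacing."""
    return float(mask.sum() * mm_per_pixel ** 2)

def progression_rate(years: np.ndarray, areas_mm2: np.ndarray) -> float:
    """Rate of progression as the slope of a linear fit of area against time (mm²/year)."""
    slope, _intercept = np.polyfit(years, areas_mm2, deg=1)
    return float(slope)
```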

4. Chuter B, Huynh J, Bowd C, Walker E, Rezapour J, Brye N, Belghith A, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM, Christopher M. Deep Learning Identifies High-Quality Fundus Photographs and Increases Accuracy in Automated Primary Open Angle Glaucoma Detection. Transl Vis Sci Technol 2024;13:23. [PMID: 38285462] [PMCID: PMC10829806] [DOI: 10.1167/tvst.13.1.23]
Abstract
Purpose: To develop and evaluate a deep learning (DL) model for assessing fundus photograph quality, and to quantitatively measure its impact on automated primary open-angle glaucoma (POAG) detection in independent study populations.

Methods: Image quality ground truth was determined by manual review of 2815 fundus photographs of healthy and POAG eyes from the Diagnostic Innovations in Glaucoma Study and African Descent and Glaucoma Evaluation Study (DIGS/ADAGES), as well as 11,350 from the Ocular Hypertension Treatment Study (OHTS). Human experts assessed a photograph as high quality if it was of sufficient quality to determine POAG status, and as poor quality if not. A DL quality model was trained on photographs from DIGS/ADAGES and tested on OHTS. The effect of DL quality assessment on DL POAG detection was measured using the area under the receiver operating characteristic curve (AUROC).

Results: The DL quality model yielded an AUROC of 0.97 for differentiating between high- and low-quality photographs; qualitative human review affirmed high model performance. Diagnostic accuracy of the DL POAG model was significantly greater (P < 0.001) for good-quality (AUROC, 0.87; 95% CI, 0.80-0.92) than for poor-quality photographs (AUROC, 0.77; 95% CI, 0.67-0.88).

Conclusions: The DL quality model accurately assessed fundus photograph quality. Using automated quality assessment to filter out low-quality photographs increased the accuracy of a DL POAG detection model.

Translational Relevance: Incorporating DL quality assessment into automated review of fundus photographs can help decrease the burden of manual review and improve the accuracy of automated DL POAG detection.
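The quality-filtering step described here amounts to stratifying POAG-model performance by a quality score. The sketch below is illustrative, not the study's code; the 0.5 threshold and the variable names are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_by_quality(quality_scores, poag_scores, poag_labels, threshold: float = 0.5) -> dict:
    """AUROC of a POAG detector on photographs above vs. below a DL quality threshold.
    Each stratum must contain both POAG and healthy eyes for the AUROC to be defined."""
    quality_scores = np.asarray(quality_scores)
    poag_scores = np.asarray(poag_scores)
    poag_labels = np.asarray(poag_labels)
    high = quality_scores >= threshold
    return {
        "good_quality_auroc": roc_auc_score(poag_labels[high], poag_scores[high]),
        "poor_quality_auroc": roc_auc_score(poag_labels[~high], poag_scores[~high]),
    }
```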
Affiliation(s)
- Benton Chuter: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Justin Huynh: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Christopher Bowd: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Evan Walker: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Jasmin Rezapour: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States; Department of Ophthalmology, University Medical Center Mainz, Germany
- Nicole Brye: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Akram Belghith: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Massimo A. Fazio: School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Christopher A. Girkin: School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Gustavo De Moraes: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Jeffrey M. Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Robert N. Weinreb: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Linda M. Zangwill: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Mark Christopher: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States

5. Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023;29:1349-1355. [PMID: 36730708] [DOI: 10.1089/tmj.2022.0357]
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). The standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at the time of imaging) identification of ungradable images in a DR screening program.

Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021. The Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan) was used. Images were interpreted by 5 ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed.

Results: A total of 1,377 images from 537 patients (56.1% female; median age, 58 years) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of the gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. Ten-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation, 0.027; range, 0.882 to 0.961). The final model exhibited similar test set performance, with an AUC of 0.924.

Conclusions: This model accurately assesses the gradeability of nonmydriatic retinal images. It could increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
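A binary gradeability classifier of the kind described here is typically built by fine-tuning a pretrained Inception V3 backbone. The PyTorch/torchvision sketch below is illustrative; the authors' exact head, loss weighting, and hyperparameters are not given in the abstract and are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained Inception V3 with a single-logit head for gradable (1) vs. ungradable (0).
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One optimization step. Inception V3 expects 299x299 RGB inputs and, in training
    mode, returns main and auxiliary logits; targets are floats in {0.0, 1.0}."""
    model.train()
    optimizer.zero_grad()
    logits, aux_logits = model(images)  # images: (N, 3, 299, 299)
    loss = (criterion(logits.squeeze(1), targets)
            + 0.4 * criterion(aux_logits.squeeze(1), targets))
    loss.backward()
    optimizer.step()
    return loss.item()
```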
Affiliation(s)
- John M Bryan: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA

6. Duong R, Abou-Samra A, Bogaard JD, Shildkrot Y. Asteroid Hyalosis: An Update on Prevalence, Risk Factors, Emerging Clinical Impact and Management Strategies. Clin Ophthalmol 2023;17:1739-1754. [PMID: 37361691] [PMCID: PMC10290459] [DOI: 10.2147/opth.s389111]
Abstract
Asteroid hyalosis (AH) is a benign clinical entity characterized by the presence of multiple refractile spherical opacities, composed of calcium and phospholipids, within the vitreous body. First described by Benson in 1894, the entity is well documented in the clinical literature and is so named because of the resemblance of asteroid bodies on clinical examination to a starry night sky. Today, a growing body of epidemiologic data estimates the global prevalence of asteroid hyalosis to be around 1%, and there is a strong established association between AH and older age. While the pathophysiology remains unclear, a variety of systemic and ocular risk factors for AH have recently been suggested in the literature and may provide insight into possible mechanisms of asteroid body (AB) development. As vision is rarely affected, clinical management focuses on differentiating asteroid hyalosis from mimicking conditions, evaluating the underlying retina for other pathology, and considering vitrectomy in the rare cases with visual impairment. Taking into account recent technologic advances in large-scale medical databases, improving imaging modalities, and the popularity of telemedicine, this review summarizes the growing body of literature on AH epidemiology and pathophysiology and provides updates on the clinical diagnosis and management of AH.
Affiliation(s)
- Ryan Duong: Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Abdullah Abou-Samra: Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Joseph D Bogaard: Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Yevgeniy Shildkrot: RetinaCare of Virginia, Augusta Eye Associates PLC, Fishersville, VA, USA; Virginia Commonwealth University, Richmond, VA, USA

7. de Oliveira JAE, Nakayama LF, Zago Ribeiro L, de Oliveira TVF, Choi SNJH, Neto EM, Cardoso VS, Dib SA, Melo GB, Regatieri CVS, Malerbi FK. Clinical validation of a smartphone-based retinal camera for diabetic retinopathy screening. Acta Diabetol 2023. [PMID: 37149834] [DOI: 10.1007/s00592-023-02105-z]
Abstract
AIMS: This study aims to compare the performance of a handheld fundus camera (Eyer) and standard tabletop fundus cameras (Visucam 500, Visucam 540, and Canon CR-2) for diabetic retinopathy and diabetic macular edema screening.

METHODS: This was a multicenter, cross-sectional study that included images from 327 individuals with diabetes. The participants underwent pharmacological mydriasis and two-field fundus photography (macula- and optic disk-centered) with both strategies. All images were acquired by trained healthcare professionals, de-identified, and graded independently by two masked ophthalmologists, with a third senior ophthalmologist adjudicating discordant cases. The International Classification of Diabetic Retinopathy was used for grading, and demographic data, diabetic retinopathy classification, artifacts, and image quality were compared between devices. The tabletop senior ophthalmologist adjudication label was used as the ground truth for comparative analysis. Univariate and stepwise multivariate logistic regression were performed to determine the relationship of each independent factor to referable diabetic retinopathy.

RESULTS: The mean age of participants was 57.03 years (SD 16.82; range 9-90), and the mean duration of diabetes was 16.35 years (SD 9.69; range 1-60). Age (P = .005), diabetes duration (P = .004), body mass index (P = .005), and hypertension (P < .001) differed significantly between referable and non-referable patients. Multivariate logistic regression analysis revealed positive associations of male sex (OR 1.687) and hypertension (OR 3.603) with referable diabetic retinopathy. The agreement between devices for diabetic retinopathy classification was 73.18%, with a weighted kappa of 0.808 (almost perfect). The agreement for macular edema was 88.48%, with a kappa of 0.809 (almost perfect). For referable diabetic retinopathy, the agreement was 85.88%, with a kappa of 0.716 (substantial), a sensitivity of 0.906, and a specificity of 0.808. As for image quality, 84.02% of the tabletop fundus camera images and 85.31% of the Eyer images were gradable.

CONCLUSIONS: Our study shows that the Eyer handheld retinal camera performed comparably to standard tabletop fundus cameras for diabetic retinopathy and macular edema screening. The high agreement with tabletop devices, portability, and low cost make the handheld retinal camera a promising tool for increasing the coverage of diabetic retinopathy screening programs, particularly in low-income countries. Early diagnosis and treatment have the potential to prevent avoidable blindness, and the present validation study provides evidence supporting its contribution to early diagnosis and treatment of diabetic retinopathy.
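Since the headline results are device-agreement statistics against the adjudicated tabletop ground truth, a short sketch of how such figures are derived may help. The labels below are placeholders, not study data, and the quadratic kappa weighting is an assumption, since the abstract does not state which weighting was used.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical referable-DR calls (1 = referable) for the same eyes on both devices.
tabletop_referable = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]   # adjudicated tabletop ground truth
handheld_referable = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # Eyer handheld grades

tn, fp, fn, tp = confusion_matrix(tabletop_referable, handheld_referable).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
agreement = (tp + tn) / len(tabletop_referable)

# Hypothetical 5-level ICDR grades (0-4) for the weighted-kappa comparison.
tabletop_icdr = [0, 2, 1, 0, 3, 0, 1, 4, 2, 0]
handheld_icdr = [0, 2, 2, 0, 3, 0, 1, 4, 2, 1]
weighted_kappa = cohen_kappa_score(tabletop_icdr, handheld_icdr, weights="quadratic")

print(f"agreement={agreement:.2%} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} weighted_kappa={weighted_kappa:.2f}")
```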
Affiliation(s)
- Luis Filipe Nakayama: Department of Ophthalmology, São Paulo Federal University, São Paulo, SP, Brazil; Laboratory for Computational Physiology, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Lucas Zago Ribeiro: Department of Ophthalmology, São Paulo Federal University, São Paulo, SP, Brazil
- Sergio Atala Dib: Division of Endocrinology and Metabolism, São Paulo Federal University, São Paulo, SP, Brazil

8. EfficientNetV2 Based Ensemble Model for Quality Estimation of Diabetic Retinopathy Images from DeepDRiD. Diagnostics (Basel) 2023;13:622. [PMID: 36832110] [PMCID: PMC9955381] [DOI: 10.3390/diagnostics13040622]
Abstract
Diabetic retinopathy (DR) is one of the major complications of diabetes and is usually identified from retinal fundus images. Screening for DR from digital fundus images can be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good fundus image quality is essential, as it reduces diagnostic errors. Hence, in this work, an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models is proposed. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for QE, outperforming the existing methods on DeepDRiD. Hence, the proposed ensemble method may be a potential tool for automated QE of fundus images and could assist ophthalmologists.
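An EfficientNetV2 ensemble of the kind proposed here typically averages the predictions of several independently fine-tuned members. The PyTorch/torchvision sketch below is a minimal illustration; the member variants, class count, and averaging scheme are assumptions rather than the paper's published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g. gradable vs. ungradable; adjust to the quality-label scheme used

def make_member(constructor, weights):
    """Build one ensemble member: a pretrained EfficientNetV2 with a new classification head."""
    member = constructor(weights=weights)
    member.classifier[-1] = nn.Linear(member.classifier[-1].in_features, NUM_CLASSES)
    return member

members = [
    make_member(models.efficientnet_v2_s, models.EfficientNet_V2_S_Weights.DEFAULT),
    make_member(models.efficientnet_v2_m, models.EfficientNet_V2_M_Weights.DEFAULT),
]

@torch.no_grad()
def ensemble_predict(images: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities over the (separately fine-tuned) members."""
    probs = []
    for member in members:
        member.eval()
        probs.append(member(images).softmax(dim=1))
    return torch.stack(probs).mean(dim=0)  # (N, NUM_CLASSES)
```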