1
Goh KL, Abbott CJ, Campbell TG, Cohn AC, Ong DN, Wickremasinghe SS, Hodgson LAB, Guymer RH, Wu Z. Clinical performance of predicting late age-related macular degeneration development using multimodal imaging. Clin Exp Ophthalmol 2024. PMID: 38812454. DOI: 10.1111/ceo.14405.
Abstract
BACKGROUND To examine whether the clinical performance of predicting late age-related macular degeneration (AMD) development is improved by using multimodal imaging (MMI) rather than colour fundus photography (CFP) alone, and how both compare with a basic prediction model using well-established AMD risk factors. METHODS Individuals with AMD underwent MMI, including optical coherence tomography (OCT), fundus autofluorescence, near-infrared reflectance and CFP, at baseline and then at 6-monthly intervals for 3 years to determine MMI-defined late AMD development. Four retinal specialists independently assessed the likelihood that each eye at baseline would progress to MMI-defined late AMD over 3 years, first with CFP and then with MMI. Predictive performance with CFP and with MMI was compared, and both were compared against a basic prediction model using age, the presence of pigmentary abnormalities, and OCT-based drusen volume. RESULTS The predictive performance of the clinicians using CFP [area under the curve (AUC) = 0.75; 95% confidence interval (CI) = 0.68-0.82] improved when using MMI (AUC = 0.79; 95% CI = 0.72-0.85; p = 0.034). However, the basic prediction model outperformed clinicians using either CFP or MMI (AUC = 0.85; 95% CI = 0.78-0.91; p ≤ 0.002). CONCLUSIONS Clinical performance in predicting late AMD development improved with MMI compared with CFP. However, a basic prediction model using well-established AMD risk factors outperformed retinal specialists, suggesting that such a model could further improve personalised counselling and monitoring of individuals with the early stages of AMD in clinical practice.
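The "basic prediction model" above combines just three risk factors. As a hedged illustration only, the sketch below fits a comparable three-variable logistic risk model on synthetic data (all variables, coefficients, and data are invented for illustration, not taken from the study) and scores it with a rank-based AUC of the kind reported:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: three hypothetical risk factors of the kind described.
n = 400
age = rng.normal(70, 8, n)            # years
pigment = rng.integers(0, 2, n)       # pigmentary abnormalities present (0/1)
drusen = rng.gamma(2.0, 0.05, n)      # OCT drusen volume, mm^3 (made up)

# Simulate 3-year progression under an assumed (illustrative) risk relationship.
logit = -14.0 + 0.15 * age + 1.2 * pigment + 8.0 * drusen
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by plain gradient descent (no external ML library).
X = np.column_stack([np.ones(n), age, pigment, drusen])
X_std = X.copy()
X_std[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
w = np.zeros(X.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-X_std @ w))
    w -= 0.1 * X_std.T @ (p - y) / n

risk = 1 / (1 + np.exp(-X_std @ w))

# Rank-based AUC: probability a progressor outranks a non-progressor.
order = np.argsort(risk)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n_pos, n_neg = y.sum(), n - y.sum()
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(round(float(auc), 3))
```

The point of the sketch is only that a small, transparent model over well-established risk factors can produce a usable discrimination score; the study's actual model specification is not given in the abstract.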
Affiliation(s)
- Kai Lyn Goh
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Carla J Abbott
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Thomas G Campbell
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Amy C Cohn
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Dai Ni Ong
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Sanjeewa S Wickremasinghe
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Lauren A B Hodgson
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Robyn H Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, Australia
2
Parmar UPS, Surico PL, Singh RB, Romano F, Salati C, Spadea L, Musa M, Gagliano C, Mori T, Zeppieri M. Artificial Intelligence (AI) for Early Diagnosis of Retinal Diseases. Medicina (Kaunas) 2024; 60:527. PMID: 38674173. PMCID: PMC11052176. DOI: 10.3390/medicina60040527.
Abstract
Artificial intelligence (AI) has emerged as a transformative tool in ophthalmology, revolutionizing disease diagnosis and management. This paper provides a comprehensive overview of AI applications across retinal diseases, highlighting its potential to enhance screening efficiency, facilitate early diagnosis, and improve patient outcomes. We first outline the fundamental concepts of AI, including machine learning (ML) and deep learning (DL), and their application in ophthalmology, underscoring the significance of AI-driven solutions in addressing the complexity and variability of retinal diseases. We then delve into the specific applications of AI in retinal diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), macular neovascularization, retinopathy of prematurity (ROP), retinal vein occlusion (RVO), hypertensive retinopathy (HR), retinitis pigmentosa, Stargardt disease, Best vitelliform macular dystrophy, and sickle cell retinopathy, focusing on the current landscape of AI technologies, including the various AI models, their performance metrics, and their clinical implications. We also address the challenges and pitfalls of integrating AI into clinical practice, including the "black box" phenomenon, biases in data representation, and limitations in comprehensive patient assessment. In conclusion, this review emphasizes the collaborative role of AI alongside healthcare professionals, advocating for a synergistic approach to healthcare delivery. It highlights the importance of leveraging AI to augment, rather than replace, human expertise, thereby maximizing its potential to revolutionize healthcare delivery, mitigate healthcare disparities, and improve patient outcomes in the evolving landscape of medicine.
Affiliation(s)
- Pier Luigi Surico
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Rohan Bir Singh
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Francesco Romano
- Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114, USA
- Carlo Salati
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Leopoldo Spadea
- Eye Clinic, Policlinico Umberto I, “Sapienza” University of Rome, 00142 Rome, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Caterina Gagliano
- Faculty of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Tommaso Mori
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, 00128 Rome, Italy
- Department of Ophthalmology, University of California San Diego, La Jolla, CA 92122, USA
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
3
Sharafi SM, Ebrahimiadib N, Roohipourmoallai R, Farahani AD, Fooladi MI, Khalili Pour E. Automated diagnosis of plus disease in retinopathy of prematurity using quantification of vessels characteristics. Sci Rep 2024; 14:6375. PMID: 38493272. PMCID: PMC10944526. DOI: 10.1038/s41598-024-57072-4.
Abstract
Plus disease is characterized by atypical alterations in the retinal vasculature of prematurely born neonates, and its diagnosis has been shown to be subjective and qualitative in nature. The use of quantitative methods and computer-based image analysis to make the diagnosis of Plus disease more objective is well established in the literature. This study presents a computer-based image analysis method for automatically distinguishing Plus images from non-Plus images. The proposed methodology quantitatively analyses the vascular characteristics linked to Plus disease, thereby aiding physicians in making informed judgements. A collection of 76 posterior retinal images from a diverse group of infants screened for retinopathy of prematurity (ROP) was obtained. A reference standard diagnosis was established by majority vote of the labels assigned by three ROP experts over two separate sessions. Retinal vessels were segmented semi-automatically, and computer algorithms were developed to compute the tortuosity, dilation, and density of vessels in various retinal regions as candidate discriminative features. A classifier was given a set of selected features to distinguish Plus from non-Plus images. The study included 76 infants (49 [64.5%] boys) with a mean birth weight of 1305 ± 427 g and a mean gestational age of 29.3 ± 3 weeks. Inter-expert agreement on the diagnosis of Plus disease averaged 79% (SD 5.3%), and intra-expert agreement averaged 85% (SD 3%). The average tortuosity of the five most tortuous vessels was significantly higher in Plus images than in non-Plus images (p ≤ 0.0001), as were point-based curvature values (p ≤ 0.0001). The maximum vessel diameter within a region extending 5 disc diameters from the optic disc border (5DD) was significantly greater in Plus images (p ≤ 0.0001), as was vessel density (p ≤ 0.0001). The classifier's accuracy in distinguishing Plus from non-Plus images, estimated by tenfold cross-validation, was 0.86 ± 0.01, higher than the diagnostic accuracy of one of the three experts against the reference standard. The implemented algorithm detected Plus disease in retinopathy of prematurity with accuracy comparable to expert diagnosis. Objective analysis of vessel characteristics opens the possibility of quantitatively assessing features of disease progression. This automated system has the potential to enhance physicians' ability to diagnose Plus disease, offering valuable contributions to ROP management through the integration of traditional ophthalmoscopy and image-based telemedicine.
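One widely used vessel tortuosity index of the kind quantified in such studies is the arc-chord ratio: the centerline's arc length divided by the straight-line chord between its endpoints. The sketch below illustrates this index under assumed conventions; it is not the paper's exact implementation.

```python
import math

def tortuosity(points):
    """Arc-chord tortuosity of an ordered (x, y) vessel centerline.

    Returns 1.0 for a perfectly straight segment; larger values
    indicate a more tortuous (winding) vessel.
    """
    if len(points) < 2:
        raise ValueError("need at least two centerline points")
    arc = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord

# A straight segment and a sinusoidal (tortuous) path over the same x-range.
straight = [(x, 0.0) for x in range(11)]
wavy = [(i / 10, 2.0 * math.sin(i / 10)) for i in range(101)]

print(tortuosity(straight))              # straight vessel: index is exactly 1.0
print(tortuosity(wavy) > tortuosity(straight))
```

Dilation and density measures would be computed from the same segmented centerlines (e.g., local vessel width and vessel pixels per region), but the abstract does not specify those formulas.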
Affiliation(s)
- Sayed Mehran Sharafi
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
- Nazanin Ebrahimiadib
- Ophthalmology Department, College of Medicine, University of Florida, Gainesville, FL, USA
- Ramak Roohipourmoallai
- Department of Ophthalmology, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Afsar Dastjani Farahani
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
- Marjan Imani Fooladi
- Clinical Pediatric Ophthalmology Department, UPMC, Children's Hospital of Pittsburgh, Pittsburgh, PA, USA
- Elias Khalili Pour
- Retinopathy of Prematurity Department, Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, South Kargar Street, Qazvin Square, Tehran, Iran
4
Tran C, Shen K, Liu K, Ashok A, Ramirez-Zamora A, Chen J, Li Y, Fang R. Deep learning predicts prevalent and incident Parkinson's disease from UK Biobank fundus imaging. Sci Rep 2024; 14:3637. PMID: 38351326. PMCID: PMC10864361. DOI: 10.1038/s41598-024-54251-1.
Abstract
Parkinson's disease is the world's fastest-growing neurological disorder. Research to elucidate its mechanisms and to automate diagnosis would greatly improve the treatment of patients with Parkinson's disease. Current diagnostic methods are expensive and of limited availability. Given the insidious, preclinical onset and progression of the disease, a desirable screening test should be diagnostically accurate even before symptom onset, to allow timely medical intervention. We highlight retinal fundus imaging, often termed a window to the brain, as a diagnostic screening modality for Parkinson's disease. We systematically evaluated conventional machine learning and deep learning techniques for classifying Parkinson's disease from UK Biobank fundus imaging. Our results suggest that individuals with Parkinson's disease can be differentiated from age- and gender-matched healthy subjects with 68% accuracy, a level maintained when predicting either prevalent or incident Parkinson's disease. Explainability and trustworthiness are enhanced by visual attribution maps of localized biomarkers and by quantified metrics of model robustness to data perturbations.
Affiliation(s)
- Charlie Tran
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- Kai Shen
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- Kang Liu
- Department of Physics, University of Florida, Gainesville, FL, 32661, USA
- Akshay Ashok
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32611, USA
- Jinghua Chen
- Department of Ophthalmology, University of Florida, Gainesville, FL, 32661, USA
- Yulin Li
- Department of Biostatistics, University of Florida, Gainesville, FL, 32661, USA
- Ruogu Fang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, 1275 Center Drive, PO Box 116131, Gainesville, FL, 32611-6131, USA
- Center for Cognitive Aging and Memory, University of Florida, Gainesville, FL, 32611, USA
5
Yao Y, Yang J, Sun H, Kong H, Wang S, Xu K, Dai W, Jiang S, Bai Q, Xing S, Yuan J, Liu X, Lu F, Chen Z, Qu J, Su J. DeepGraFT: A novel semantic segmentation auxiliary ROI-based deep learning framework for effective fundus tessellation classification. Comput Biol Med 2024; 169:107881. PMID: 38159401. DOI: 10.1016/j.compbiomed.2023.107881.
Abstract
Fundus tessellation (FT) is a prevalent clinical feature associated with myopia and is implicated in the development of myopic maculopathy, which causes irreversible visual impairment. Accurate classification of FT in colour fundus photographs can help predict disease progression and prognosis. However, the lack of precise detection and classification tools has created an unmet medical need, underscoring the importance of exploring the clinical utility of FT. To address this gap, we introduce an automatic FT grading system (DeepGraFT) that uses classification-and-segmentation co-decision deep learning models. ConvNeXt, with transfer learning from pretrained ImageNet weights, was employed for the classification algorithm, aligned to a region of interest based on the ETDRS grading system to boost performance. A segmentation model was developed to detect where FT exists, complementing the classification for improved grading accuracy. DeepGraFT was trained on part of our in-house cohort (MAGIC), and the validation sets consisted of the remainder of the in-house cohort and an independent public cohort (UK Biobank). DeepGraFT performed well in training and achieved high accuracy in validation (in-house cohort: 86.85%; public cohort: 81.50%). Furthermore, DeepGraFT surpassed machine-learning-based classification models in FT classification, with a 5.57% increase in accuracy. Ablation analysis revealed that the introduced modules significantly enhanced classification effectiveness, raising accuracy from 79.85% to 86.85%. Further analysis of DeepGraFT's outputs revealed a significant negative association between FT and spherical equivalent (SE) in the UK Biobank cohort. In conclusion, DeepGraFT demonstrates the potential of deep learning for automating FT grading and could serve as a clinical decision support tool for predicting the progression of pathological myopia.
Affiliation(s)
- Yinghao Yao
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Jiaying Yang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Haojun Sun
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Hengte Kong
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Sheng Wang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Ke Xu
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Wei Dai
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Siyi Jiang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- QingShi Bai
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Shilai Xing
- Institute of PSI Genomics, Wenzhou Global Eye & Vision Innovation Center, Wenzhou, 325024, China
- Jian Yuan
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Xinting Liu
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Fan Lu
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zhenhui Chen
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jia Qu
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jianzhong Su
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
6
Dow ER, Jeong HK, Katz EA, Toth CA, Wang D, Lee T, Kuo D, Allingham MJ, Hadziahmetovic M, Mettu PS, Schuman S, Carin L, Keane PA, Henao R, Lad EM. A Deep-Learning Algorithm to Predict Short-Term Progression to Geographic Atrophy on Spectral-Domain Optical Coherence Tomography. JAMA Ophthalmol 2023; 141:1052-1061. PMID: 37856139. PMCID: PMC10587827. DOI: 10.1001/jamaophthalmol.2023.4659.
Abstract
Importance Identifying patients at risk of progressing from intermediate age-related macular degeneration (iAMD) to geographic atrophy (GA) is essential for clinical trials aimed at preventing disease progression. DeepGAze is a fully automated, convolutional neural network-based deep learning algorithm for predicting progression from iAMD to GA within 1 year from spectral-domain optical coherence tomography (SD-OCT) scans. Objective To develop a deep-learning algorithm based on volumetric SD-OCT scans to predict progression from iAMD to GA during the year following the scan. Design, Setting, and Participants This retrospective cohort study included participants with iAMD at baseline who either did or did not progress to GA within the subsequent 13 months. Participants were included from centers in 4 US states. Data set 1 comprised patients from the Age-Related Eye Disease Study 2 (AREDS2) Ancillary Spectral-Domain Optical Coherence Tomography (A2A) study (July 2008 to August 2015). Data sets 2 and 3 comprised patients imaged in routine clinical care at a tertiary referral center and associated satellites between January 2013 and January 2023. The stored imaging data were retrieved for the purpose of this study from July 1, 2022, to February 1, 2023. Data were analyzed from May 2021 to July 2023. Exposure A position-aware convolutional neural network with proactive pseudointervention was trained and cross-validated on Bioptigen SD-OCT volumes (data set 1) and validated on 2 external data sets comprising Heidelberg Spectralis SD-OCT scans (data sets 2 and 3). Main Outcomes and Measures Prediction of progression to GA within 13 months was evaluated with the area under the receiver operating characteristic curve (AUROC), as well as the area under the precision-recall curve (AUPRC), sensitivity, specificity, positive predictive value, negative predictive value, and accuracy.
Results The study included a total of 417 patients: 316 in data set 1 (mean [SD] age, 74 [8] years; 185 [59%] female), 53 in data set 2 (mean [SD] age, 83 [8] years; 32 [60%] female), and 48 in data set 3 (mean [SD] age, 81 [8] years; 32 [67%] female). The AUROC for prediction of progression from iAMD to GA within 1 year was 0.94 (95% CI, 0.92-0.95; AUPRC, 0.90 [95% CI, 0.85-0.95]; sensitivity, 0.88 [95% CI, 0.84-0.92]; specificity, 0.90 [95% CI, 0.87-0.92]) for data set 1. Adding expert-annotated SD-OCT features to the model yielded no improvement over the fully autonomous model (AUROC, 0.95; 95% CI, 0.92-0.95; P = .19). On an independent validation data set (data set 2), the model predicted progression to GA with an AUROC of 0.94 (95% CI, 0.91-0.96; AUPRC, 0.92 [0.89-0.94]; sensitivity, 0.91 [95% CI, 0.74-0.98]; specificity, 0.80 [95% CI, 0.63-0.91]). At a high-specificity operating point, simulated clinical trial recruitment was enriched for patients progressing to GA within 1 year by 8.3- to 20.7-fold (data sets 2 and 3). Conclusions and Relevance The fully automated, position-aware deep-learning algorithm assessed in this study successfully predicted progression from iAMD to GA over a clinically meaningful time frame. The ability to predict imminent GA progression could facilitate clinical trials aimed at preventing the condition and could guide clinical decision-making regarding screening frequency or treatment initiation.
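The threshold-based measures reported above (sensitivity, specificity, positive and negative predictive value, accuracy) all derive from a 2x2 confusion matrix of binary predictions against outcomes. A minimal sketch with made-up predictions (not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binary labels (1 = progressed to GA)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),      # true-positive rate
        "specificity": tn / (tn + fp),      # true-negative rate
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }

# Illustrative labels: 4 progressors, 6 non-progressors.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
print(m["sensitivity"], m["specificity"], m["accuracy"])
```

AUROC and AUPRC, by contrast, are computed from the model's continuous risk scores by sweeping this threshold, which is also how a "high-specificity operating point" for trial enrichment is chosen.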
Affiliation(s)
- Eliot R. Dow
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Hyeon Ki Jeong
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
- Ella Arnon Katz
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Cynthia A. Toth
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Dong Wang
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Terry Lee
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- David Kuo
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Michael J. Allingham
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Majda Hadziahmetovic
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Priyatham S. Mettu
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Stefanie Schuman
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Lawrence Carin
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Pearse A. Keane
- University College London Institute of Ophthalmology, National Institute for Health and Care Research, Biomedical Research Centre, Moorfields Eye Hospital National Health Services Foundation Trust, London, United Kingdom
- Ricardo Henao
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Eleonora M. Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
7
Khosravi P, Huck NA, Shahraki K, Hunter SC, Danza CN, Kim SY, Forbes BJ, Dai S, Levin AV, Binenbaum G, Chang PD, Suh DW. Deep Learning Approach for Differentiating Etiologies of Pediatric Retinal Hemorrhages: A Multicenter Study. Int J Mol Sci 2023; 24:15105. PMID: 37894785. PMCID: PMC10606803. DOI: 10.3390/ijms242015105.
Abstract
Retinal hemorrhages in pediatric patients can pose a diagnostic challenge for ophthalmologists, as they can arise from various underlying etiologies, including abusive head trauma, accidental trauma, and medical conditions. Accurate identification of the etiology is crucial for appropriate management and legal considerations. In recent years, deep learning techniques have shown promise in assisting healthcare professionals in making more accurate and timely diagnoses of a variety of disorders. We explore the potential of deep learning approaches for differentiating the etiologies of pediatric retinal hemorrhages. Our multicenter study analyzed 898 images, yielding a final dataset of 597 retinal hemorrhage fundus photographs categorized into medical (49.9%) and trauma (50.1%) etiologies. Deep learning models based on ResNet and transformer architectures were applied; FastViT-SA12, a hybrid transformer model, achieved the highest accuracy (90.55%) and an area under the receiver operating characteristic curve (AUC) of 90.55%, while ResNet18 achieved the highest sensitivity (96.77%) on an independent test dataset. The study highlighted areas for optimizing artificial intelligence (AI) models specifically for pediatric retinal hemorrhages. While AI proves valuable in diagnosing these hemorrhages, the expertise of medical professionals remains irreplaceable, and collaboration between AI specialists and pediatric ophthalmologists is crucial to fully harness AI's potential in diagnosing the etiologies of pediatric retinal hemorrhages.
Affiliation(s)
- Pooya Khosravi
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA; (P.K.); (N.A.H.); (K.S.); (C.N.D.)
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA;
- Nolan A. Huck
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Kourosh Shahraki
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Stephen C. Hunter
- School of Medicine, University of California, 900 University Ave, Riverside, CA 92521, USA
- Clifford Neil Danza
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- So Young Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan 31151, Chungcheongnam-do, Republic of Korea
- Brian J. Forbes
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Shuan Dai
- Department of Ophthalmology, Queensland Children’s Hospital, South Brisbane, QLD 4101, Australia
- Alex V. Levin
- Department of Ophthalmology, Flaum Eye Institute, Golisano Children’s Hospital, Rochester, NY 14642, USA
- Gil Binenbaum
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Peter D. Chang
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
- Department of Radiological Sciences, School of Medicine, University of California, Irvine, CA 92697, USA
- Donny W. Suh
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
Collapse
|
8
|
Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. [PMID: 37724288 PMCID: PMC10475636 DOI: 10.18240/ijo.2023.09.05] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 06/14/2023] [Indexed: 09/20/2023] Open
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed by machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. Regarding DR, AI has mainly been used in its diagnosis, grading, and lesion recognition and segmentation, and good research and application results have been achieved. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges in AI research.
Collapse
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| |
Collapse
|
9
|
Ochoa-Astorga JE, Wang L, Du W, Peng Y. A Straightforward Bifurcation Pattern-Based Fundus Image Registration Method. SENSORS (BASEL, SWITZERLAND) 2023; 23:7809. [PMID: 37765866 PMCID: PMC10534639 DOI: 10.3390/s23187809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 08/23/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023]
Abstract
Fundus image registration is crucial in eye disease examination, as it enables the alignment of overlapping fundus images, facilitating a comprehensive assessment of conditions like diabetic retinopathy, where a single image's limited field of view might be insufficient. By combining multiple images, the field of view for retinal analysis is extended, and resolution is enhanced through super-resolution imaging. Moreover, this method facilitates patient follow-up through longitudinal studies. This paper proposes a straightforward method for fundus image registration based on bifurcations, which serve as prominent landmarks. The approach aims to establish a baseline for fundus image registration using these landmarks as feature points, addressing the current challenge of validation in this field. The proposed approach involves the use of a robust vascular tree segmentation method to detect feature points within a specified range. The method involves coarse vessel segmentation to analyze patterns in the skeleton of the segmentation foreground, followed by feature description based on the generation of a histogram of oriented gradients and determination of image relation through a transformation matrix. Image blending produces a seamless registered image. Evaluation on the FIRE dataset using registration error as the key parameter for accuracy demonstrates the method's effectiveness. The results show the superior performance of the proposed method compared to other techniques using vessel-based feature extraction or partially based on SURF, achieving an area under the curve of 0.526 for the entire FIRE dataset.
Collapse
Affiliation(s)
| | - Linni Wang
- Retina & Neuron-Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin 300084, China
| | - Weiwei Du
- Information and Human Science, Kyoto Institute of Technology University, Kyoto 6068585, Japan;
| | - Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China;
| |
Collapse
|
10
|
Usman TM, Saheed YK, Nsang A, Ajibesin A, Rakshit S. A systematic literature review of machine learning based risk prediction models for diabetic retinopathy progression. Artif Intell Med 2023; 143:102617. [PMID: 37673580 DOI: 10.1016/j.artmed.2023.102617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 06/20/2023] [Accepted: 06/21/2023] [Indexed: 09/08/2023]
Abstract
Diabetic Retinopathy (DR) is the most popular debilitating impairment of diabetes and it progresses symptom-free until a sudden loss of vision occurs. Understanding the progression of DR is a pressing issue in clinical research and practice. In this systematic review of articles on Machine Learning (ML) based risk prediction models for DR progression, ever since the use of Artificial Intelligence (AI) for DR detection, there have been more cross-sectional studies with different algorithms of use of AI, there haven't been many longitudinal studies for the AI based risk prediction models. This paper proposes a novel review to fill in the gaps identified in current reviews and facilitate other researchers with current research solutions for developing AI-based risk prediction models for DR progression and closely related problems; synthesize the current results from these studies and identify research challenges, limitations and gaps to inform the selection of machine learning techniques and predictors to build novel prediction models. Additionally, this paper suggested six (6) deep AI-related technical and critical discussion of the adopted strategies and approaches. The Systematic Literature Review (SLR) methodology was employed to gather relevant studies. We searched IEEE Xplore, PubMed, Springer Link, Google Scholar, and Science Direct electronic databases for papers published from January 2017 to 30th April 2023. Thirteen (13) studies were chosen on the basis of their relevance to the review questions and satisfying the selection criteria. However, findings from the literature review exposed some critical research gaps that need to be addressed in future research to improve on the performance of risk prediction models for DR progression.
Collapse
Affiliation(s)
| | | | | | | | - Sandip Rakshit
- The Business School, RMIT University Vietnam, Ho chi Minh City, 700000 Vietnam.
| |
Collapse
|
11
|
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these quite good results, very few AI systems have been deployed in real-world clinical settings, challenging the true value of these systems. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| | - Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
| | - Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China.
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| |
Collapse
|
12
|
Ji Y, Ji Y, Liu Y, Zhao Y, Zhang L. Research progress on diagnosing retinal vascular diseases based on artificial intelligence and fundus images. Front Cell Dev Biol 2023; 11:1168327. [PMID: 37056999 PMCID: PMC10086262 DOI: 10.3389/fcell.2023.1168327] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023] Open
Abstract
As the only blood vessels that can directly be seen in the whole body, pathological changes in retinal vessels are related to the metabolic state of the whole body and many systems, which seriously affect the vision and quality of life of patients. Timely diagnosis and treatment are key to improving vision prognosis. In recent years, with the rapid development of artificial intelligence, the application of artificial intelligence in ophthalmology has become increasingly extensive and in-depth, especially in the field of retinal vascular diseases. Research study results based on artificial intelligence and fundus images are remarkable and provides a great possibility for early diagnosis and treatment. This paper reviews the recent research progress on artificial intelligence in retinal vascular diseases (including diabetic retinopathy, hypertensive retinopathy, retinal vein occlusion, retinopathy of prematurity, and age-related macular degeneration). The limitations and challenges of the research process are also discussed.
Collapse
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Yun Ji
- Affiliated Hospital of Shandong University of traditional Chinese Medicine, Jinan, Shandong, China
| | - Yunfang Liu
- Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China
| | - Ying Zhao
- Affiliated Hospital of Shandong University of traditional Chinese Medicine, Jinan, Shandong, China
- *Correspondence: Liya Zhang, ; Ying Zhao,
| | - Liya Zhang
- Department of Ophthalmology, The First People’s Hospital of Huzhou, Huzhou, Zhejiang, China
- *Correspondence: Liya Zhang, ; Ying Zhao,
| |
Collapse
|
13
|
Kang G, Baek SH, Kim YH, Kim DH, Park JW. Genetic Risk Assessment of Nonsyndromic Cleft Lip with or without Cleft Palate by Linking Genetic Networks and Deep Learning Models. Int J Mol Sci 2023; 24:ijms24054557. [PMID: 36901988 PMCID: PMC10003462 DOI: 10.3390/ijms24054557] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Revised: 02/13/2023] [Accepted: 02/20/2023] [Indexed: 03/02/2023] Open
Abstract
Recent deep learning algorithms have further improved risk classification capabilities. However, an appropriate feature selection method is required to overcome dimensionality issues in population-based genetic studies. In this Korean case-control study of nonsyndromic cleft lip with or without cleft palate (NSCL/P), we compared the predictive performance of models that were developed by using the genetic-algorithm-optimized neural networks ensemble (GANNE) technique with those models that were generated by eight conventional risk classification methods, including polygenic risk score (PRS), random forest (RF), support vector machine (SVM), extreme gradient boosting (XGBoost), and deep-learning-based artificial neural network (ANN). GANNE, which is capable of automatic input SNP selection, exhibited the highest predictive power, especially in the 10-SNP model (AUC of 88.2%), thus improving the AUC by 23% and 17% compared to PRS and ANN, respectively. Genes mapped with input SNPs that were selected by using a genetic algorithm (GA) were functionally validated for risks of developing NSCL/P in gene ontology and protein-protein interaction (PPI) network analyses. The IRF6 gene, which is most frequently selected via GA, was also a major hub gene in the PPI network. Genes such as RUNX2, MTHFR, PVRL1, TGFB3, and TBX22 significantly contributed to predicting NSCL/P risk. GANNE is an efficient disease risk classification method using a minimum optimal set of SNPs; however, further validation studies are needed to ensure the clinical utility of the model for predicting NSCL/P risk.
Collapse
Affiliation(s)
- Geon Kang
- Department of Medical Genetics, College of Medicine, Hallym University, Chuncheon 24252, Republic of Korea
| | - Seung-Hak Baek
- Department of Orthodontics, School of Dentistry, Seoul National University, Seoul 03080, Republic of Korea
| | - Young Ho Kim
- Department of Orthodontics, The Institute of Oral Health Science, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul 06351, Republic of Korea
| | - Dong-Hyun Kim
- Department of Social and Preventive Medicine, College of Medicine, Hallym University, Chuncheon 24252, Republic of Korea
| | - Ji Wan Park
- Department of Medical Genetics, College of Medicine, Hallym University, Chuncheon 24252, Republic of Korea
- Correspondence:
| |
Collapse
|
14
|
Domínguez C, Heras J, Mata E, Pascual V, Royo D, Zapata MÁ. Binary and multi-class automated detection of age-related macular degeneration using convolutional- and transformer-based architectures. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107302. [PMID: 36528999 DOI: 10.1016/j.cmpb.2022.107302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 09/05/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Age-related macular degeneration (AMD) is an eye disease that happens when ageing causes damage to the macula, and it is the leading cause of blindness in developed countries. Screening retinal fundus images allows ophthalmologists to early detect, diagnose and treat this disease; however, the manual interpretation of images is a time-consuming task. In this paper, we aim to study different deep learning methods to diagnose AMD. METHODS We have conducted a thorough study of two families of deep learning models based on convolutional neural networks (CNN) and transformer architectures to automatically diagnose referable/non-referable AMD, and grade AMD severity scales (no AMD, early AMD, intermediate AMD, and advanced AMD). In addition, we have analysed several progressive resizing strategies and ensemble methods for convolutional-based architectures to further improve the performance of the models. RESULTS As a first result, we have shown that transformer-based architectures obtain considerably worse results than convolutional-based architectures for diagnosing AMD. Moreover, we have built a model for diagnosing referable AMD that yielded a mean F1-score (SD) of 92.60% (0.47), a mean AUROC (SD) of 97.53% (0.40), and a mean weighted kappa coefficient (SD) of 85.28% (0.91); and an ensemble of models for grading AMD severity scales with a mean accuracy (SD) of 82.55% (2.92), and a mean weighted kappa coefficient (SD) of 84.76% (2.45). CONCLUSIONS This work shows that working with convolutional based architectures is more suitable than using transformer based models for classifying and grading AMD from retinal fundus images. Furthermore, convolutional models can be improved by means of progressive resizing strategies and ensemble methods.
Collapse
Affiliation(s)
- César Domínguez
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | - Jónathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, Spain.
| | - Eloy Mata
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | - Vico Pascual
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | | | - Miguel Ángel Zapata
- UPRetina, Barcelona, Spain; Hospital Vall Hebron, Passeig Roser 126, Sant Cugat del Vallés, 08195 Barcelona, Spain
| |
Collapse
|
15
|
Celebi ARC, Bulut E, Sezer A. Artificial intelligence based detection of age-related macular degeneration using optical coherence tomography with unique image preprocessing. Eur J Ophthalmol 2023; 33:65-73. [PMID: 35469472 DOI: 10.1177/11206721221096294] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
PURPOSE The aim of the study is to improve the accuracy of age related macular degeneration (AMD) disease in its earlier phases with proposed Capsule Network (CapsNet) architecture trained on speckle noise reduced spectral domain optical coherence tomography (SD-OCT) images based on an optimized Bayesian non-local mean (OBNLM) filter augmentation techniques. METHODS A total of 726 local SD-OCT images were collected and labelled as 159 drusen, 145 dry AMD, 156 wet AMD and 266 normal. Region of interest (ROI) was identified. Speckle noise in SD-OCT images were reduced based on OBNLM filter. The processed images were fed to proposed CapsNet architecture to clasify SD-OCT images. Accuracy rates were calculated in both public and local dataset. RESULTS Accuracy rate of local SD-OCT image dataset classification was achieved to a value of 96.39% after performing data augmentation and speckle noise reduction with OBNLM. The performance of proposed CapsNet was also evaluated on the public Kaggle dataset under the same processing procedures and the accuracy rate was calculated as 98.07%. The sensitivity and specificity rates were 96.72% and 99.98%, respectively. CONCLUSIONS The classification success of proposed CapsNet may be improved with robust pre-processing steps like; determination of ROI and denoised SD-OCT images based on OBNLM. These impactful image preprocessing steps yielded higher accuracy rates for determining different types of AMD including its precursor lesion on the both local and public dataset with proposed CapsNet architecture.
Collapse
Affiliation(s)
- Ali Riza Cenk Celebi
- Department of Ophthalmology, Acibadem University School of Medicine, Istanbul, Turkey
| | - Erkan Bulut
- Department of Ophthalmology, Beylikduzu Public Hospital, Istanbul, Turkey
| | - Aysun Sezer
- United'Informatique et d'Ingenierie des Systemes, 52849ENSTA-ParisTech, Universite de Paris-Saclay, Villefranche Sur Mer, Provence-Alpes-Côte d'azur, France
| |
Collapse
|
16
|
The Need for Artificial Intelligence Based Risk Factor Analysis for Age-Related Macular Degeneration: A Review. Diagnostics (Basel) 2022; 13:diagnostics13010130. [PMID: 36611422 PMCID: PMC9818762 DOI: 10.3390/diagnostics13010130] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 12/16/2022] [Accepted: 12/22/2022] [Indexed: 01/04/2023] Open
Abstract
In epidemiology, a risk factor is a variable associated with increased disease risk. Understanding the role of risk factors is significant for developing a strategy to improve global health. There is strong evidence that risk factors like smoking, alcohol consumption, previous cataract surgery, age, high-density lipoprotein (HDL) cholesterol, BMI, female gender, and focal hyper-pigmentation are independently associated with age-related macular degeneration (AMD). Currently, in the literature, statistical techniques like logistic regression, multivariable logistic regression, etc., are being used to identify AMD risk factors by employing numerical/categorical data. However, artificial intelligence (AI) techniques have not been used so far in the literature for identifying risk factors for AMD. On the other hand, artificial intelligence (AI) based tools can anticipate when a person is at risk of developing chronic diseases like cancer, dementia, asthma, etc., in providing personalized care. AI-based techniques can employ numerical/categorical and/or image data thus resulting in multimodal data analysis, which provides the need for AI-based tools to be used for risk factor analysis in ophthalmology. This review summarizes the statistical techniques used to identify various risk factors and the higher benefits that AI techniques provide for AMD-related disease prediction. Additional studies are required to review different techniques for risk factor identification for other ophthalmic diseases like glaucoma, diabetic macular edema, retinopathy of prematurity, cataract, and diabetic retinopathy.
Collapse
|
17
|
Ji Y, Liu S, Hong X, Lu Y, Wu X, Li K, Li K, Liu Y. Advances in artificial intelligence applications for ocular surface diseases diagnosis. Front Cell Dev Biol 2022; 10:1107689. [PMID: 36605721 PMCID: PMC9808405 DOI: 10.3389/fcell.2022.1107689] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 01/07/2023] Open
Abstract
In recent years, with the rapid development of computer technology, continual optimization of various learning algorithms and architectures, and establishment of numerous large databases, artificial intelligence (AI) has been unprecedentedly developed and applied in the field of ophthalmology. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucoma optic neuropathy. Meanwhile, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium. We discuss the limitations and challenges of AI in the diagnosis of ocular surface diseases, as well as prospects for the future.
Collapse
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Yi Lu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Xingyang Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China,*Correspondence: Yunfang Liu, ; Keran Li, ; Kunke Li,
| | - Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China,*Correspondence: Yunfang Liu, ; Keran Li, ; Kunke Li,
| | - Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China,*Correspondence: Yunfang Liu, ; Keran Li, ; Kunke Li,
| |
Collapse
|
18
|
Ganjdanesh A, Zhang J, Yan S, Chen W, Huang H. Multimodal Genotype and Phenotype Data Integration to Improve Partial Data-Based Longitudinal Prediction. J Comput Biol 2022; 29:1324-1345. [PMID: 36383766 PMCID: PMC9835299 DOI: 10.1089/cmb.2022.0378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Multimodal data analysis has attracted ever-increasing attention in computational biology and bioinformatics community recently. However, existing multimodal learning approaches need all data modalities available at both training and prediction stages, thus they cannot be applied to many real-world biomedical applications, which often have a missing modality problem as the collection of all modalities is prohibitively costly. Meanwhile, two diagnosis-related pieces of information are of main interest during the examination of a subject regarding a chronic disease (with longitudinal progression): their current status (diagnosis) and how it will change before next visit (longitudinal outcome). Correct responses to these queries can identify susceptible individuals and provide the means of early interventions for them. In this article, we develop a novel adversarial mutual learning framework for longitudinal disease progression prediction, allowing us to leverage multiple data modalities available for training to train a performant model that uses a single modality for prediction. Specifically, in our framework, a single-modal model (which utilizes the main modality) learns from a pretrained multimodal model (which accepts both main and auxiliary modalities as input) in a mutual learning manner to (1) infer outcome-related representations of the auxiliary modalities based on its own representations for the main modality during adversarial training and (2) successfully combine them to predict the longitudinal outcome. We apply our method to analyze the retinal imaging genetics for the early diagnosis of age-related macular degeneration (AMD) disease, that is, simultaneous assessment of the severity of AMD at the time of the current visit and the prognosis of the condition at the subsequent visit. 
Our experiments using the Age-Related Eye Disease Study dataset show that our method is more effective than baselines at classifying patients' current and forecasting their future AMD severity.
Collapse
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Jipeng Zhang
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Sarah Yan
- West Windsor-Plainsboro High School South, Princeton Junction, New Jersey, USA
| | - Wei Chen
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department of Human Genetics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| | - Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
| |
Collapse
|
19
|
Multi-Stage Temporal Convolution Network for COVID-19 Variant Classification. Diagnostics (Basel) 2022; 12:diagnostics12112736. [DOI: 10.3390/diagnostics12112736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 10/18/2022] [Accepted: 10/31/2022] [Indexed: 11/11/2022] Open
Abstract
The outbreak of the novel coronavirus disease COVID-19 (SARS-CoV-2) has developed into a global epidemic. Due to the pathogenic virus’s high transmission rate, accurate identification and early prediction are required for subsequent therapy. Moreover, the virus’s polymorphic nature allows it to evolve and adapt to various environments, making prediction difficult. However, other diseases, such as dengue, MERS-CoV, Ebola, SARS-CoV-1, and influenza, necessitate the employment of a predictor based on their genomic information. To alleviate the situation, we propose a deep learning-based mechanism for the classification of various SARS-CoV-2 virus variants, including the most recent, Omicron. Our model uses a neural network with a temporal convolution neural network to accurately identify different variants of COVID-19. The proposed model first encodes the sequences in the numerical descriptor, and then the convolution operation is applied for discriminative feature extraction from the encoded sequences. The sequential relations between the features are collected using a temporal convolution network to classify COVID-19 variants accurately. We collected recent data from the NCBI, on which the proposed method outperforms various baselines with a high margin.
Collapse
|
20
|
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2022; 2:100078. [PMID: 37846285 PMCID: PMC10577833 DOI: 10.1016/j.aopr.2022.100078] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/01/2022] [Accepted: 08/18/2022] [Indexed: 10/18/2023]
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data has made deep learning (DL) a popular topic. Main text At present, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making for ophthalmic diseases such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract and other anterior segment diseases. However, most of the AI systems developed to date are still at the experimental stage, with only a few having achieved clinical application. There are a number of reasons for this, including concerns about security, privacy, poor pervasiveness, trust and explainability. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing potential challenges and future directions.
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
|
21
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human-like intelligence; it refers to the technology of rendering human intelligence through computer programs. From healthcare in general to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, computational models composed of multiple layers of simulated neurons that can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases, and further discuss the current challenges as well as the prospects in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
|
22
|
Valmaggia P, Friedli P, Hörmann B, Kaiser P, Scholl HPN, Cattin PC, Sandkühler R, Maloca PM. Feasibility of Automated Segmentation of Pigmented Choroidal Lesions in OCT Data With Deep Learning. Transl Vis Sci Technol 2022; 11:25. [PMID: 36156729 PMCID: PMC9526362 DOI: 10.1167/tvst.11.9.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the feasibility of automated segmentation of pigmented choroidal lesions (PCLs) in optical coherence tomography (OCT) data and compare the performance of different deep neural networks. Methods Swept-source OCT image volumes were annotated pixel-wise for PCLs and background. Three deep neural network architectures were applied to the data: the multi-dimensional gated recurrent units (MD-GRU), the V-Net, and the nnU-Net. The nnU-Net was used to compare the performance of two-dimensional (2D) versus three-dimensional (3D) predictions. Results A total of 121 OCT volumes were analyzed (100 normal and 21 PCLs). Automated PCL segmentations were successful with all neural networks. The 3D nnU-Net predictions showed the highest recall with a mean of 0.77 ± 0.22 (MD-GRU, 0.60 ± 0.31; V-Net, 0.61 ± 0.25). The 3D nnU-Net predicted PCLs with a Dice coefficient of 0.78 ± 0.13, outperforming MD-GRU (0.62 ± 0.23) and V-Net (0.59 ± 0.24). The smallest distance to the manual annotation was found using 3D nnU-Net, with a mean maximum Hausdorff distance of 315 ± 172 µm (MD-GRU, 1542 ± 1169 µm; V-Net, 2408 ± 1060 µm). The 3D nnU-Net showed a superior performance compared with stacked 2D predictions. Conclusions The feasibility of automated deep learning segmentation of PCLs was demonstrated in OCT data. The neural network architecture had a relevant impact on PCL predictions. Translational Relevance This work serves as proof of concept for the segmentation of choroidal pathologies in volumetric OCT data; improvements are conceivable to meet clinical demands for the diagnosis, monitoring, and treatment evaluation of PCLs.
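For readers unfamiliar with the two overlap metrics reported above, the following minimal sketch (not the study's code; masks and point sets are invented) computes a Dice coefficient between binary masks and a symmetric Hausdorff distance between annotation point sets.

```python
# Illustrative implementations of the Dice coefficient and Hausdorff distance.

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets of pixels."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(src, dst):
        return max(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for (bx, by) in dst)
                   for (ax, ay) in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

manual    = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical annotation
predicted = {(0, 0), (0, 1), (1, 0)}           # hypothetical prediction
print(dice(manual, predicted))       # 6/7, about 0.857
print(hausdorff(manual, predicted))  # 1.0: pixel (1,1) is 1 unit from the prediction
```

Dice rewards overall overlap, while the Hausdorff distance penalizes the single worst boundary error, which is why the study reports both.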
Affiliation(s)
- Philippe Valmaggia
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Robin Sandkühler
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
|
23
|
Lee J, Wanyan T, Chen Q, Keenan TDL, Glicksberg BS, Chew EY, Lu Z, Wang F, Peng Y. Predicting Age-related Macular Degeneration Progression with Longitudinal Fundus Images Using Deep Learning. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2022; 13583:11-20. [PMID: 36656604 PMCID: PMC9842432 DOI: 10.1007/978-3-031-21014-3_2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Accurately predicting a patient's risk of progressing to late age-related macular degeneration (AMD) is difficult but crucial for personalized medicine. While existing risk prediction models for progression to late AMD are useful for triaging patients, none utilizes the longitudinal color fundus photographs (CFPs) in a patient's history to estimate the risk of late AMD in a given subsequent time interval. In this work, we evaluate how deep neural networks can capture the sequential information in longitudinal CFPs and improve the prediction of 2-year and 5-year risk of progression to late AMD. Specifically, we propose two deep learning models, CNN-LSTM and CNN-Transformer, which use a Long Short-Term Memory (LSTM) network and a Transformer, respectively, together with convolutional neural networks (CNNs), to capture the sequential information in longitudinal CFPs. We evaluated our models against baselines on the Age-Related Eye Disease Study, one of the largest longitudinal AMD cohorts with CFPs. The proposed models outperformed the baseline models that utilized only single-visit CFPs to predict the risk of late AMD (AUC 0.879 vs 0.868 for 2-year prediction, and 0.879 vs 0.862 for 5-year prediction). Further experiments showed that utilizing longitudinal CFPs over a longer time period helped deep learning models predict the risk of late AMD. We have made the source code available at https://github.com/bionlplab/AMD_prognosis_mlmi2022 to catalyze future work on deep learning models for late AMD prediction.
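The AUC values compared above have a simple probabilistic reading: the chance that a randomly chosen progressing eye receives a higher model score than a randomly chosen non-progressing eye. A hedged, stdlib-only sketch of that Mann-Whitney estimate (not the paper's evaluation code; the scores are invented):

```python
# Mann-Whitney estimate of the area under the ROC curve (AUC).

def auc(scores_pos, scores_neg):
    """Fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

progressors     = [0.9, 0.8, 0.7]   # hypothetical scores, eyes that progressed
non_progressors = [0.6, 0.8, 0.2]   # hypothetical scores, eyes that did not
print(auc(progressors, non_progressors))  # 7.5 / 9, about 0.833
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is what makes the 0.868 to 0.879 improvements interpretable across models.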
Affiliation(s)
- Junghwan Lee
- Columbia University, New York, USA
- Weill Cornell Medicine, New York, USA
- Tingyi Wanyan
- Indiana University, Bloomington, USA
- Icahn School of Medicine at Mount Sinai, New York, USA
- Weill Cornell Medicine, New York, USA
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
- Emily Y. Chew
- National Eye Institute, National Institutes of Health, Bethesda, USA
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, USA
- Fei Wang
- Weill Cornell Medicine, New York, USA
|
24
|
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022] Open
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set specific rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to complete the assignment accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and the complexity of clinical data, DL has been utilised to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Ophthalmology, Royal Perth Hospital, Western Australia, Perth, Australia
|
25
|
Zhang Q, Sampani K, Xu M, Cai S, Deng Y, Li H, Sun JK, Karniadakis GE. AOSLO-net: A Deep Learning-Based Method for Automatic Segmentation of Retinal Microaneurysms From Adaptive Optics Scanning Laser Ophthalmoscopy Images. Transl Vis Sci Technol 2022; 11:7. [PMID: 35938881 PMCID: PMC9366726 DOI: 10.1167/tvst.11.8.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 07/02/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose Accurate segmentation of microaneurysms (MAs) from adaptive optics scanning laser ophthalmoscopy (AOSLO) images is crucial for identifying MA morphologies and assessing the hemodynamics inside the MAs. Herein, we introduce AOSLO-net to perform automatic MA segmentation from AOSLO images of diabetic retinas. Methods AOSLO-net is composed of a deep neural network based on UNet with a pretrained EfficientNet as the encoder. We have designed customized preprocessing and postprocessing policies for AOSLO images, including generation of multichannel images, de-noising, contrast enhancement, and ensemble and union of model predictions, to optimize the MA segmentation. AOSLO-net is trained and tested using 87 MAs imaged from 28 eyes of 20 subjects with varying severity of diabetic retinopathy (DR), the largest available AOSLO dataset for MA detection. To avoid overfitting during model training, we augment the training data by flipping, rotating, and scaling the original images to increase the diversity of data available for model training. Results The validity of the model is demonstrated by the good agreement between the predictions of AOSLO-net and the MA masks generated by ophthalmologists and skilled trainees on 87 patient-specific MA images. Our results show that AOSLO-net outperforms the state-of-the-art segmentation model (nnUNet) both in accuracy (e.g., intersection over union and Dice scores) and in computational cost. Conclusions We demonstrate that AOSLO-net provides high-quality MA segmentation from AOSLO images that enables correct MA morphological classification. Translational Relevance As the first attempt to automatically segment retinal MAs from AOSLO images, AOSLO-net could facilitate the pathological study of DR and help ophthalmologists make disease prognoses.
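Two of the ingredients named in the abstract, flip-based data augmentation and the intersection-over-union (IoU) accuracy score, can be sketched in a few lines. This is an assumed, minimal illustration rather than AOSLO-net itself; the image and masks are invented.

```python
# Hypothetical sketch: horizontal-flip augmentation and the IoU metric.

def hflip(image):
    """Horizontally flip an image given as a list of rows."""
    return [list(reversed(row)) for row in image]

def iou(mask_a, mask_b):
    """Intersection over union of two binary masks (sets of pixels)."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union

image = [[1, 2],
         [3, 4]]
print(hflip(image))         # [[2, 1], [4, 3]] -- a "new" training sample

pred  = {(0, 0), (0, 1)}    # hypothetical predicted MA pixels
truth = {(0, 0), (1, 0)}    # hypothetical expert-annotated pixels
print(iou(pred, truth))     # 1/3
```

Rotation and scaling augmentations work the same way: cheap, label-preserving transforms that multiply the effective size of a small dataset such as the 87 annotated MAs here.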
Affiliation(s)
- Qian Zhang
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- Konstantina Sampani
- Beetham Eye Institute, Joslin Diabetes Center, Department of Medicine and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Mengjia Xu
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Shengze Cai
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- Yixiang Deng
- School of Engineering, Brown University, Providence, RI, USA
- He Li
- School of Engineering, Brown University, Providence, RI, USA
- Jennifer K. Sun
- Beetham Eye Institute, Joslin Diabetes Center, Department of Medicine and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- George Em Karniadakis
- Division of Applied Mathematics and School of Engineering, Brown University, Providence, RI, USA
|
26
|
García-Layana A, López-Gálvez M, García-Arumí J, Arias L, Gea-Sánchez A, Marín-Méndez JJ, Sayar-Beristain O, Sedano-Gil G, Aslam TM, Minnella AM, Ibáñez IL, de Dios Hernández JM, Seddon JM. A Screening Tool for Self-Evaluation of Risk for Age-Related Macular Degeneration: Validation in a Spanish Population. Transl Vis Sci Technol 2022; 11:23. [PMID: 35749108 PMCID: PMC9234358 DOI: 10.1167/tvst.11.6.23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Purpose The objectives of this study were the creation and validation of a screening tool for age-related macular degeneration (AMD) for routine assessment by primary care physicians, ophthalmologists, other healthcare professionals, and the general population. Methods A simple, self-administered questionnaire (Simplified Théa AMD Risk-Assessment Scale [STARS] version 4.0), which included well-established risk factors for AMD such as family history, smoking, and dietary factors, was administered to patients during ophthalmology visits. A fundus examination was performed to determine the presence of large soft drusen, pigmentary abnormalities, or late AMD. Based on data from the questionnaire and the clinical examination, predictive models were developed to estimate the probability of the Age-Related Eye Disease Study (AREDS) score (categorized as low risk/high risk). The models were evaluated by area under the receiver operating characteristic curve analysis. Results A total of 3854 subjects completed the questionnaire and underwent a fundus examination. Early/intermediate and late AMD were detected in 15.9% and 23.8% of the patients, respectively. A predictive model was developed with training, validation, and test datasets. The model in the test set had an area under the curve of 0.745 (95% confidence interval [CI] = 0.705-0.784), a positive predictive value of 0.500 (95% CI = 0.449-0.557), and a negative predictive value of 0.810 (95% CI = 0.770-0.844). Conclusions The STARS questionnaire version 4.0 and the model identify patients at high risk of developing late AMD. Translational Relevance The screening instrument described could be useful for evaluating the risk of late AMD in patients >55 years of age without an eye examination, which could lead to more timely referrals and encourage lifestyle changes.
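The positive and negative predictive values reported for the test set come straight from a 2x2 table of screening result versus fundus-examination outcome. A small illustration (not the validation code; the counts are hypothetical and chosen only to reproduce the arithmetic):

```python
# Predictive values from a 2x2 confusion table of screening vs. examination.

def predictive_values(tp, fp, tn, fn):
    """Return (PPV, NPV) from true/false positive and negative counts."""
    ppv = tp / (tp + fp)   # P(truly high risk | screened positive)
    npv = tn / (tn + fn)   # P(truly low risk  | screened negative)
    return ppv, npv

# Hypothetical counts that happen to yield the abstract's 0.500 and 0.810.
ppv, npv = predictive_values(tp=50, fp=50, tn=810, fn=190)
print(ppv, npv)  # 0.5 0.81
```

Unlike sensitivity and specificity, PPV and NPV depend on the prevalence of high-risk patients in the screened population, which is worth keeping in mind when transferring the 0.500/0.810 figures to a different clinic.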
Affiliation(s)
- Alfredo García-Layana
- Retinal Pathologies and New Therapies Group, Experimental Ophthalmology Laboratory, Department of Ophthalmology, Clínica Universidad de Navarra, Pamplona, Spain
- Navarra Institute for Health Research, IdiSNA, Pamplona, Spain
- Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain
- Maribel López-Gálvez
- Red Temática de Investigación Cooperativa Sanitaria en Enfermedades Oculares (Oftared), Instituto de Salud Carlos III, Madrid, Spain
- Retina Group, IOBA, Campus Miguel Delibes, Valladolid, Spain
- Grupo de Ingeniería Biomédica, Universidad de Valladolid, Campus Miguel Delibes, Valladolid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- José García-Arumí
- Department of Ophthalmology, Vall d'Hebron University Hospital, Barcelona, Spain
- Luis Arias
- Department of Ophthalmology, Bellvitge University Hospital, University of Barcelona, Barcelona, Spain
- Alfredo Gea-Sánchez
- Preventive Medicine and Public Health, School of Medicine, University of Navarra, Pamplona, Spain
- Tariq M. Aslam
- School of Pharmacy and Optometry, University of Manchester and Manchester Royal Eye Hospital, Manchester, UK
- Angelo M. Minnella
- UOC Oculistica, Università Cattolica del S. Cuore, Fondazione Policlinico Universitario A. Gemelli-IRCCS, Rome, Italy
- Isabel López Ibáñez
- Department of Family and Community Medicine, Centro de Salud Nápoles y Sicilia, Valencia, Spain
- Johanna M. Seddon
- Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, Massachusetts, USA
|
27
|
Developing and validating a multivariable prediction model which predicts progression of intermediate to late age-related macular degeneration-the PINNACLE trial protocol. Eye (Lond) 2022; 37:1275-1283. [PMID: 35614343 PMCID: PMC9130980 DOI: 10.1038/s41433-022-02097-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 04/27/2022] [Accepted: 05/06/2022] [Indexed: 11/08/2022] Open
Abstract
AIMS Age-related macular degeneration (AMD) is characterised by a progressive loss of central vision. Intermediate AMD is a risk factor for progression to the advanced stages, categorised as geographic atrophy (GA) and neovascular AMD. However, rates of progression to advanced stages vary between individuals. Recent advances in imaging and computing technologies have enabled deep phenotyping of intermediate AMD. The aim of this project is to utilise machine learning (ML) and advanced statistical modelling as an innovative approach to discover novel features and accurately quantify markers of pathological retinal ageing that can individualise progression to advanced AMD. METHODS The PINNACLE study consists of both retrospective and prospective parts. In the retrospective part, more than 400,000 optical coherence tomography (OCT) images collected from four University Teaching Hospitals and the UK Biobank Population Study are being pooled, centrally stored and pre-processed. With this large dataset featuring eyes with AMD at various stages and healthy controls, we aim to identify imaging biomarkers of disease progression in intermediate AMD via supervised and unsupervised ML. The prospective study part will firstly characterise the progression of intermediate AMD in patients followed for between one and three years; secondly, it will validate the utility of biomarkers identified in the retrospective cohort as predictors of progression towards late AMD. Patients aged 55-90 years with intermediate AMD in at least one eye will be recruited across multiple sites in the UK, Austria and Switzerland for visual function tests, multimodal retinal imaging and genotyping. Imaging will be repeated every four months to identify early focal signs of deterioration on spectral-domain OCT by human graders. A focal event triggers more frequent follow-up with visual function and imaging tests.
The primary outcome is the sensitivity and specificity of the OCT imaging biomarkers. Secondary outcomes include sensitivity and specificity of novel multimodal imaging characteristics at predicting disease progression, ROC curves, time from development of imaging change to development of these endpoints, structure-function correlations, structure-genotype correlation and predictive risk models. CONCLUSIONS This is one of the first studies in intermediate AMD to combine both ML, retrospective and prospective AMD patient data with the goal of identifying biomarkers of progression and to report the natural history of progression of intermediate AMD with multimodal retinal imaging.
|
28
|
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023] Open
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization. 
CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
- Malvina B Eydelman
- Office of Health Technology 1, Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
|
29
|
Pereira A, Oakley JD, Sodhi SK, Russakoff DB, Choudhry N. Proof-of-Concept Analysis of a Deep Learning Model to Conduct Automated Segmentation of OCT Images for Macular Hole Volume. Ophthalmic Surg Lasers Imaging Retina 2022; 53:208-214. [PMID: 35417293 DOI: 10.3928/23258160-20220315-02] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVE To determine whether an automated artificial intelligence (AI) model could assess macular hole (MH) volume on swept-source optical coherence tomography (OCT) images. PATIENTS AND METHODS This was a proof-of-concept consecutive case series. Patients with an idiopathic full-thickness MH undergoing pars plana vitrectomy surgery with 1 year of follow-up were considered for inclusion. MHs were manually graded by a vitreoretinal surgeon from preoperative OCT images to delineate MH volume. This information was used to train a fully three-dimensional convolutional neural network for automatic segmentation. The main outcome was the correlation of manual MH volume to automated volume segmentation. RESULTS The correlation between manual and automated MH volume was R2 = 0.94 (n = 24). Automated MH volume demonstrated a higher correlation to change in visual acuity from preoperative to the postoperative 1-year time point compared with the minimum linear diameter (volume: R2 = 0.53; minimum linear diameter: R2 = 0.39). CONCLUSION MH automated volume segmentation on OCT imaging demonstrated high correlation to manual MH volume measurements. [Ophthalmic Surg Lasers Imaging Retina. 2022;53(4):208-214.].
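The R² values above are squared Pearson correlations between manual and automated measurements. A stdlib-only sketch of that calculation (assumed, not the study's analysis script; the volume lists are hypothetical):

```python
# Squared Pearson correlation between manual and automated volume measurements.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

manual    = [0.10, 0.20, 0.30, 0.40]   # hypothetical MH volumes (mm^3)
automated = [0.12, 0.19, 0.31, 0.42]   # hypothetical AI segmentations
r2 = pearson_r(manual, automated) ** 2
print(round(r2, 3))  # close to 1, indicating strong agreement
```

A high R² shows the two measurements move together; it does not by itself rule out a systematic offset between manual and automated volumes, which agreement analyses such as Bland-Altman plots would probe.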
|
30
|
Zhu S, Lu B, Wang C, Wu M, Zheng B, Jiang Q, Wei R, Cao Q, Yang W. Screening of Common Retinal Diseases Using Six-Category Models Based on EfficientNet. Front Med (Lausanne) 2022; 9:808402. [PMID: 35280876 PMCID: PMC8904395 DOI: 10.3389/fmed.2022.808402] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 01/12/2022] [Indexed: 11/21/2022] Open
Abstract
Purpose A six-category model of common retinal diseases is proposed to help primary medical institutions in the preliminary screening of five common retinal diseases. Methods A total of 2,400 fundus images of normal eyes and of the five common retinal diseases were provided by a cooperative hospital. Two six-category deep learning models, based on the EfficientNet-B4 and ResNet50 architectures, were trained, and their results were compared with those of a five-category ResNet50 model from our previous study. A total of 1,315 fundus images were used to test the models, and the clinical diagnoses were compared with the diagnoses of the two six-category models. The main evaluation indicators were sensitivity, specificity, F1-score, area under the curve (AUC), 95% confidence interval, kappa and accuracy; the receiver operating characteristic curves of the two six-category models were also compared. Results The diagnostic accuracy of the EfficientNet-B4 model was 95.59% and the kappa value was 94.61%, indicating high diagnostic consistency. The AUCs for the diagnosis of normal fundus images and of the five retinal diseases were all above 0.95. The sensitivity, specificity, and F1-score were 100, 99.9, and 99.83%, respectively, for normal fundus images; 95.68, 98.61, and 93.09% for RVO; 96.1, 99.6, and 97.37% for high myopia; 97.62, 99.07, and 94.62% for glaucoma; 90.76, 99.16, and 93.3% for DR; and 92.27, 98.5, and 91.51% for MD. Conclusion The EfficientNet-B4 model was used to design a six-category model of common retinal diseases.
It can be used to diagnose the normal fundus and five common retinal diseases based on fundus images. It can help primary doctors in the screening for common retinal diseases, and give suitable suggestions and recommendations. Timely referral can improve the efficiency of diagnosis of eye diseases in rural areas and avoid delaying treatment.
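As a rough illustration of how the per-class indicators reported in this abstract (sensitivity, specificity, F1-score) are derived from a multi-class confusion matrix, here is a minimal one-vs-rest sketch in NumPy; the function name and the example matrix are ours, not the paper's:

```python
import numpy as np

def per_class_metrics(cm):
    """One-vs-rest sensitivity, specificity and F1 for each class of a
    multi-class confusion matrix (rows = true class, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp          # true class, predicted elsewhere
    fp = cm.sum(axis=0) - tp          # other classes predicted as this one
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return sens, spec, f1
```

The same arithmetic extends to the six-class setting of the paper; only the confusion matrix grows.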
Affiliation(s)
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, China
- Chenghu Wang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Maonian Wu
- School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Ruili Wei
- Department of Ophthalmology, Shanghai Changzheng Hospital, Huangpu, China
- Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
- Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|
31
|
Leong YY, Vasseneix C, Finkelstein MT, Milea D, Najjar RP. Artificial Intelligence Meets Neuro-Ophthalmology. Asia Pac J Ophthalmol (Phila) 2022; 11:111-125. [PMID: 35533331 DOI: 10.1097/apo.0000000000000512] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Recent advances in artificial intelligence have provided ophthalmologists with fast, accurate, and automated means for diagnosing and treating ocular conditions, paving the way to a modern and scalable eye care system. Compared to other ophthalmic disciplines, neuro-ophthalmology has, until recently, not benefitted from significant advances in the area of artificial intelligence. In this narrative review, we summarize and discuss recent advancements utilizing artificial intelligence for the detection of structural and functional optic nerve head abnormalities, and ocular movement disorders in neuro-ophthalmology.
Affiliation(s)
- Caroline Vasseneix
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Dan Milea
- Singapore National Eye Center, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Raymond P Najjar
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
|
32
|
Ganjdanesh A, Zhang J, Chew EY, Ding Y, Huang H, Chen W. LONGL-Net: temporal correlation structure guided deep learning model to predict longitudinal age-related macular degeneration severity. PNAS NEXUS 2022; 1:pgab003. [PMID: 35360552 PMCID: PMC8962776 DOI: 10.1093/pnasnexus/pgab003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/15/2021] [Indexed: 01/28/2023]
Abstract
Age-related macular degeneration (AMD) is the principal cause of blindness in developed countries, and the number of affected people is projected to reach 288 million by 2040. Automated grading and prediction methods can therefore be highly beneficial for identifying subjects susceptible to late AMD and enabling clinicians to start preventive actions for them. Clinically, AMD severity is quantified from Color Fundus Photographs (CFP) of the retina, and many machine-learning-based methods have been proposed for grading AMD severity. However, few models have been developed to predict the longitudinal progression status, i.e. predicting future late-AMD risk based on the current CFP, which is clinically more interesting. In this paper, we propose a new deep-learning-based classification model (LONGL-Net) that can simultaneously grade the current CFP and predict the longitudinal outcome, i.e. whether the subject will have late AMD at a future time-point. We design a new temporal-correlation-structure-guided Generative Adversarial Network model that learns the interrelations of temporal changes in CFPs at consecutive time-points and provides interpretability for the classifier's decisions by forecasting AMD symptoms in future CFPs. We used about 30,000 CFP images from 4,628 participants in the Age-Related Eye Disease Study. Our classifier showed an average AUC of 0.905 (95% CI: 0.886-0.922) and accuracy of 0.762 (95% CI: 0.733-0.792) on the 3-class problem of simultaneously grading the current time-point's AMD condition and predicting subjects' late-AMD progression at the future time-point. We further validated our model on the UK Biobank dataset, where it showed an average accuracy of 0.905 and sensitivity of 0.797 in grading 300 CFP images.
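The AUC figures with 95% CIs reported above can in principle be obtained with the rank-sum (Mann-Whitney) formulation of AUC plus a percentile bootstrap. The sketch below is our illustration, not the authors' code; the case-resampling scheme and the helper names are assumptions:

```python
import numpy as np

def auc(y, s):
    """AUC via the rank-sum formulation: fraction of (positive, negative)
    pairs ranked correctly, with ties counting half."""
    y, s = np.asarray(y), np.asarray(s)
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_ci(y, s, n_boot=2000, seed=0):
    """Percentile 95% CI for AUC by resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    y, s = np.asarray(y), np.asarray(s)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(set(y[idx])) == 2:      # need both classes in the resample
            stats.append(auc(y[idx], s[idx]))
    return np.percentile(stats, [2.5, 97.5])
```

For a multi-class problem like the 3-class task above, the same machinery is typically applied one-vs-rest per class.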
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Jipeng Zhang
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ying Ding
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Heng Huang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Wei Chen
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Division of Pulmonary Medicine, Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, University of Pittsburgh, Pittsburgh, PA 15219, USA
|
33
|
Mortensen PW, Wong TY, Milea D, Lee AG. The Eye Is a Window to Systemic and Neuro-Ophthalmic Diseases. Asia Pac J Ophthalmol (Phila) 2022; 11:91-93. [PMID: 35533329 DOI: 10.1097/apo.0000000000000531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Affiliation(s)
- Peter W Mortensen
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Duke-NUS Medical School, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Duke-NUS Medical School, Singapore
- Copenhagen University, Denmark
- Andrew G Lee
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
|
34
|
Gutfleisch M, Ester O, Aydin S, Quassowski M, Spital G, Lommatzsch A, Rothaus K, Dubis AM, Pauleikhoff D. Clinically applicable deep learning-based decision aids for treatment of neovascular AMD. Graefes Arch Clin Exp Ophthalmol 2022; 260:2217-2230. [DOI: 10.1007/s00417-022-05565-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 01/06/2022] [Accepted: 01/11/2022] [Indexed: 01/22/2023] Open
|
35
|
Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology. Med Image Anal 2022; 77:102364. [DOI: 10.1016/j.media.2022.102364] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 11/02/2021] [Accepted: 01/10/2022] [Indexed: 01/17/2023]
|
36
|
Govindaiah A, Baten A, Smith RT, Balasubramanian S, Bhuiyan A. Optimized Prediction Models from Fundus Imaging and Genetics for Late Age-Related Macular Degeneration. J Pers Med 2021; 11:1127. [PMID: 34834479 PMCID: PMC8617775 DOI: 10.3390/jpm11111127] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2021] [Revised: 10/26/2021] [Accepted: 10/27/2021] [Indexed: 01/30/2023] Open
Abstract
Age-related macular degeneration (AMD) is a leading cause of blindness in the developed world. In this study, we compare the performance of retinal-fundus-image- and genetic-information-based machine learning models for the prediction of late AMD. Using data from the Age-related Eye Disease Study, we built machine learning models with various combinations of genetic, socio-demographic/clinical, and retinal image data to predict late AMD using its severity and category in a single visit, and in 2, 5, and 10 years. We compared their performance in sensitivity, specificity, accuracy, and unweighted kappa. The 2-year model based on retinal image and socio-demographic (S-D) parameters achieved a sensitivity of 91.34% and specificity of 84.49%, while the corresponding values for the genetic and S-D-parameters-based model were 79.79% and 66.84%. For the 5-year horizon, the retinal image and S-D-parameters-based model also outperformed the genetic and S-D-parameters-based model. The two 10-year models achieved similar sensitivities of 74.24% and 75.79%, respectively, but the retinal image and S-D-parameters-based model was otherwise superior. The retinal-image-based models were not further improved by adding genetic data. Retinal imaging and S-D data can build an excellent machine learning predictor of developing late AMD over 2-5 years; the retinal imaging model appears to be the preferred prognostic tool for efficient patient management.
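Unweighted Cohen's kappa, one of the comparison metrics named above, measures agreement corrected for chance and can be sketched directly from a confusion matrix; the helper name below is ours:

```python
import numpy as np

def cohens_kappa(cm):
    """Unweighted Cohen's kappa from a confusion matrix
    (rows = model predictions, cols = reference labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is why it complements raw accuracy on imbalanced outcomes such as late-AMD progression.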
Affiliation(s)
- Abdul Baten
- AgResearch, Palmerston North 4442, New Zealand
|
37
|
Wassan JT, Zheng H, Wang H. Role of Deep Learning in Predicting Aging-Related Diseases: A Scoping Review. Cells 2021; 10:cells10112924. [PMID: 34831148 PMCID: PMC8616301 DOI: 10.3390/cells10112924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 10/22/2021] [Accepted: 10/26/2021] [Indexed: 11/16/2022] Open
Abstract
Aging refers to progressive physiological changes in a cell, an organ, or the whole body of an individual over time. Aging-related diseases are highly prevalent and can impact an individual's physical health. Recently, artificial intelligence (AI) methods have been used to predict aging-related diseases and issues, aiding clinical providers in decision-making based on patients' medical records. Deep learning (DL), one of the most recent generations of AI technologies, has seen rapid progress in the early prediction and classification of aging-related issues. In this paper, a scoping review of publications using DL approaches to predict common aging-related diseases (such as age-related macular degeneration, cardiovascular and respiratory diseases, arthritis, Alzheimer's disease, and lifestyle patterns related to disease progression) was performed. Google Scholar, IEEE and PubMed were used to search for DL papers on common aging-related issues published between January 2017 and August 2021. These papers were reviewed and evaluated, and the findings were summarized. Overall, 34 studies met the inclusion criteria. These studies indicate that DL could help clinicians diagnose disease at its early stages by mapping diagnostic predictions onto observable clinical presentations, and can achieve high predictive performance (e.g., more than 90% accurate predictions of diseases in aging).
Affiliation(s)
- Huiru Zheng
- School of Computing, Ulster University, Belfast BT15 1ED, UK
- Haiying Wang
- School of Computing, Ulster University, Belfast BT15 1ED, UK
|
38
|
Tak N, Reddy AJ, Martel J, Martel JB. Clinical Wide-Field Retinal Image Deep Learning Classification of Exudative and Non-Exudative Age-Related Macular Degeneration. Cureus 2021; 13:e17579. [PMID: 34646633 PMCID: PMC8480936 DOI: 10.7759/cureus.17579] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/29/2021] [Indexed: 11/19/2022] Open
Abstract
Background: Age-related macular degeneration (AMD) is a disease that currently affects approximately 196 million individuals and is projected to affect 288 million by 2040. As a result, better and earlier detection methods for this disease are needed to provide a higher quality of care. One way to achieve this is through machine learning: a deep neural network, specifically a convolutional neural network (CNN), can be trained to differentiate between different types of AMD images given proper training data. Methods: In this study, a CNN was trained on 420 Optos wide-field retinal images for 70 epochs to classify exudative versus non-exudative AMD. These images were obtained and labeled by ophthalmologists from the Martel Eye Clinic in Rancho Cordova, CA. Results: After completing the study, a model was created with 88% accuracy. Both the training and validation loss started above 1 and ended below 0.2. Despite only analyzing a single image at a time, the model was still able to identify whether the individual had AMD in both eyes or in one eye only; it had the most trouble with bilateral non-exudative AMD but was fairly accurate in the other categories. It was noted that the neural network could further determine from a single image whether the disease is present in the left, right, or both eyes. This is a point of contention for further investigation, as it should be impossible for the artificial intelligence (AI) to extrapolate the condition of both eyes from only one image. Conclusion: This research produced a CNN that differentiates between exudative and non-exudative AMD, and that determines whether the disease is present in the right, left, or both eyes with a relatively high degree of accuracy. The model was trained on clinical data and can theoretically be used to classify other clinical images it has never encountered before.
Affiliation(s)
- Nathaniel Tak
- Ophthalmology, California Northstate University College of Medicine, Elk Grove, USA
- Akshay J Reddy
- Ophthalmology, California Northstate University College of Medicine, Elk Grove, USA
- Juliette Martel
- Health Sciences, California Northstate University, Rancho Cordova, USA
- James B Martel
- Ophthalmology, California Northstate University College of Medicine, Elk Grove, USA
|
39
|
Luo X, Li J, Chen M, Yang X, Li X. Ophthalmic Disease Detection via Deep Learning With a Novel Mixture Loss Function. IEEE J Biomed Health Inform 2021; 25:3332-3339. [PMID: 34033552 DOI: 10.1109/jbhi.2021.3083605] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With the popularization of computer-aided diagnosis (CAD) technologies, more and more deep learning methods are being developed to facilitate the detection of ophthalmic diseases. In this article, deep learning-based detection of some common eye diseases, including cataract, glaucoma, and age-related macular degeneration (AMD), is analyzed. Generally speaking, morphological change in the retina reveals the presence of eye disease. However, existing deep learning methods may not deliver satisfactory performance on this task, since fundus image datasets usually suffer from class imbalance and outliers; detection performance could therefore be further improved by exploring effective and robust deep learning algorithms. Here, we propose a deep learning model combined with a novel mixture loss function to automatically detect eye diseases through the analysis of retinal fundus color images. Specifically, given the good generalization and robustness of the focal loss and the correntropy-induced loss in addressing complex datasets with class imbalance and outliers, we use a mixture of these two losses in a deep neural network model to improve the recognition performance of the classifier on biomedical data. The proposed model is evaluated on a real-life ophthalmic dataset, and its performance is compared with baseline models using accuracy, sensitivity, specificity, kappa, and area under the receiver operating characteristic curve (AUC) as the evaluation metrics. The experimental results verify the effectiveness and robustness of the proposed algorithm.
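The abstract names a mixture of the focal loss and the correntropy-induced loss but does not give the exact combination. The NumPy sketch below is a hedged illustration of one plausible convex mixture: the weight `lam`, and using `1 - p_t` as the error fed to the correntropy term, are our assumptions, not the paper's formulation:

```python
import numpy as np

def focal_loss(p_t, alpha=0.25, gamma=2.0):
    """Focal loss on the probability assigned to the true class:
    down-weights easy, well-classified examples."""
    p_t = np.clip(p_t, 1e-7, 1.0)
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

def c_loss(error, sigma=1.0):
    """Correntropy-induced loss: bounded, so outliers saturate
    instead of dominating the gradient."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))  # c_loss(1) == 1
    return beta * (1.0 - np.exp(-(error**2) / (2.0 * sigma**2)))

def mixture_loss(p_t, lam=0.5):
    """Hypothetical convex mixture of the two losses."""
    return lam * focal_loss(p_t) + (1.0 - lam) * c_loss(1.0 - p_t)
```

The focal term handles class imbalance, while the bounded correntropy term limits the influence of outliers, which matches the robustness argument made in the abstract.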
|
40
|
Romond K, Alam M, Kravets S, Sisternes LD, Leng T, Lim JI, Rubin D, Hallak JA. Imaging and artificial intelligence for progression of age-related macular degeneration. Exp Biol Med (Maywood) 2021; 246:2159-2169. [PMID: 34404252 DOI: 10.1177/15353702211031547] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Age-related macular degeneration (AMD) is a leading cause of severe vision loss. With our aging population, it may affect 288 million people globally by the year 2040. AMD progresses from an early and intermediate dry form to an advanced one, which manifests as choroidal neovascularization and geographic atrophy. Conversion to AMD-related exudation is known as progression to neovascular AMD, and presence of geographic atrophy is known as progression to advanced dry AMD. Predicting AMD progression could enable timely monitoring and earlier detection and treatment, improving vision outcomes. Machine learning approaches, a subset of artificial intelligence applications, applied to imaging data are showing promising results in predicting progression. Extracted biomarkers, specifically from optical coherence tomography scans, are informative in predicting progression events. The purpose of this mini review is to provide an overview of current machine learning applications for predicting AMD progression, and to describe the various methods, data-input types, and imaging modalities used to identify high-risk patients. With advances in computational capabilities, artificial intelligence applications are likely to transform patient care and management in AMD. External validation studies that improve generalizability across populations and devices, as well as evaluation of systems in real-world clinical settings, are needed to improve the clinical translation of artificial intelligence AMD applications.
Affiliation(s)
- Kathleen Romond
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Minhaj Alam
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
- Sasha Kravets
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA; Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, Chicago, IL 60612, USA
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, USA
- Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
- Joelle A Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
|
41
|
Research on an Intelligent Lightweight-Assisted Pterygium Diagnosis Model Based on Anterior Segment Images. DISEASE MARKERS 2021; 2021:7651462. [PMID: 34367378 PMCID: PMC8342163 DOI: 10.1155/2021/7651462] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Accepted: 07/16/2021] [Indexed: 12/13/2022]
Abstract
Aims The lack of primary ophthalmologists in China means that basic-level hospitals are often unable to diagnose pterygium patients. To address this problem, an intelligent-assisted lightweight pterygium diagnosis model based on anterior segment images is proposed in this study. Methods Pterygium is a common and frequently occurring disease in ophthalmology, and fibrous tissue hyperplasia is both a diagnostic and a surgical biomarker; the model diagnosed pterygium based on these biomarkers. First, a total of 436 anterior segment images were collected; then, two intelligent-assisted lightweight pterygium diagnosis models (MobileNet1 and MobileNet2) based on raw data and augmented data were trained via transfer learning. The results of the lightweight models were compared with the clinical results. The classic models (AlexNet, VGG16 and ResNet18) were also trained and tested, and their results were compared with those of the lightweight models. A total of 188 anterior segment images were used for testing. Sensitivity, specificity, F1-score, accuracy, kappa, area under the receiver operating characteristic curve (AUC), 95% CI, model size, and parameter count were the evaluation indicators in this study. Results The 188 anterior segment images were used to test the five intelligent-assisted pterygium diagnosis models, and the overall evaluation indices of the MobileNet2 model were the best. For normal anterior segment images, its sensitivity, specificity, F1-score, and AUC were 96.72%, 98.43%, 96.72%, and 0.976, respectively; for observation-period pterygium images they were 83.7%, 90.48%, 82.54%, and 0.872; and for surgery-period pterygium images they were 84.62%, 93.50%, 85.94%, and 0.891. The kappa value of the MobileNet2 model was 77.64%, its accuracy was 85.11%, the model size was 13.5 M, and the parameter count was 4.2 M. Conclusion This study used deep learning methods to propose a three-category intelligent lightweight-assisted pterygium diagnosis model. The developed model can be used to screen patients initially for pterygium, provide reasonable suggestions, and support timely referrals. It can help primary doctors improve pterygium diagnosis, confer social benefits, and lay the foundation for embedding future models in mobile devices.
|
42
|
Chen X, Zhao J, Iselin KC, Borroni D, Romano D, Gokul A, McGhee CNJ, Zhao Y, Sedaghat MR, Momeni-Moghaddam H, Ziaei M, Kaye S, Romano V, Zheng Y. Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmol 2021; 6:e000824. [PMID: 34337155 PMCID: PMC8278890 DOI: 10.1136/bmjophth-2021-000824] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 07/05/2021] [Indexed: 12/26/2022] Open
Abstract
Objective To evaluate the accuracy of the convolutional neural network (CNN) technique in detecting keratoconus using colour-coded corneal maps obtained by a Scheimpflug camera. Design Multicentre retrospective study. Methods and analysis We included images of keratoconic and healthy volunteers' eyes provided by three centres: Royal Liverpool University Hospital (Liverpool, UK), Sedaghat Eye Clinic (Mashhad, Iran) and the New Zealand National Eye Center (New Zealand). Corneal tomography scans, including healthy controls, were used to train and test the CNN models. Keratoconic scans were classified according to the Amsler-Krumeich classification, and keratoconic scans from Iran were used as an independent testing set. Four maps were considered for each scan: the axial map, the anterior and posterior elevation maps, and the pachymetry map. Results A CNN model detected keratoconus versus healthy eyes with an accuracy of 0.9785 on the testing set when all four maps were concatenated. Considering each map independently, the accuracy was 0.9283 for the axial map, 0.9642 for the thickness map, 0.9642 for the front elevation map and 0.9749 for the back elevation map. Using the concatenated maps, the accuracy in distinguishing healthy controls from stage 1 was 0.90, stage 1 from stage 2 was 0.9032, and stage 2 from stage 3 was 0.8537. Conclusion CNNs provide excellent detection performance for keratoconus and accurately grade different severities of disease using the colour-coded maps obtained by the Scheimpflug camera. CNNs have the potential to be further developed, validated and adopted for screening and management of keratoconus.
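The four-map concatenation described above amounts to stacking the maps as input channels of one CNN tensor. A minimal sketch follows; the per-map standardisation is our assumption, not necessarily the paper's preprocessing, and the function name is ours:

```python
import numpy as np

def concat_maps(axial, front_elev, back_elev, pachy):
    """Stack the four colour-coded corneal maps into one 4-channel
    CNN input, standardising each map to zero mean / unit variance."""
    maps = []
    for m in (axial, front_elev, back_elev, pachy):
        m = np.asarray(m, dtype=float)
        maps.append((m - m.mean()) / (m.std() + 1e-8))
    return np.stack(maps, axis=-1)   # shape (H, W, 4), channels last
```

Feeding all four channels lets the network weigh axial, elevation, and thickness information jointly, which is consistent with the concatenated model outperforming any single map in the reported accuracies.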
Affiliation(s)
- Xu Chen
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Jiaxin Zhao
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Katja C Iselin
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Borroni
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Romano
- Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Akilesh Gokul
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Charles N J McGhee
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Mohammad-Reza Sedaghat
- Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran; Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Hamed Momeni-Moghaddam
- Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran; Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Mohammed Ziaei
- Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Stephen Kaye
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Vito Romano
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK; Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Yalin Zheng
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
|
43
|
Yellapragada B, Hornauer S, Snyder K, Yu S, Yiu G. Self-Supervised Feature Learning and Phenotyping for Assessing Age-Related Macular Degeneration Using Retinal Fundus Images. Ophthalmol Retina 2021; 6:116-129. [PMID: 34217854 DOI: 10.1016/j.oret.2021.06.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 06/24/2021] [Accepted: 06/25/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Diseases such as age-related macular degeneration (AMD) are classified based on human rubrics that are prone to bias. Supervised neural networks trained using human-generated labels require labor-intensive annotations and are restricted to specific trained tasks. Here, we trained a self-supervised deep learning network using unlabeled fundus images, enabling data-driven feature classification of AMD severity and discovery of ocular phenotypes. DESIGN Development of a self-supervised training pipeline to evaluate fundus photographs from the Age-Related Eye Disease Study (AREDS). PARTICIPANTS One hundred thousand eight hundred forty-eight human-graded fundus images from 4757 AREDS participants between 55 and 80 years of age. METHODS We trained a deep neural network with self-supervised Non-Parametric Instance Discrimination (NPID) using AREDS fundus images without labels, then evaluated its performance in grading AMD severity using 2-step, 4-step, and 9-step classification schemes with a supervised classifier. We compared balanced and unbalanced accuracies of NPID against supervised-trained networks and ophthalmologists, explored network behavior using hierarchical learning of image subsets and spherical k-means clustering of feature vectors, then searched for ocular features that can be identified without labels. MAIN OUTCOME MEASURES Accuracy and kappa statistics. RESULTS NPID demonstrated versatility across different AMD classification schemes without re-training and achieved balanced accuracies comparable with those of supervised-trained networks or human ophthalmologists in classifying advanced AMD (82% vs. 81-92% or 89%), referable AMD (87% vs. 90-92% or 96%), or on the 4-step AMD severity scale (65% vs. 63-75% or 67%), despite never directly using these labels during self-supervised feature learning. Drusen area drove network predictions on the 4-step scale, while depigmentation and geographic atrophy (GA) areas correlated with advanced AMD classes. Self-supervised learning revealed grader-mislabeled images and susceptibility of some classes within more granular AMD scales to misclassification by both ophthalmologists and neural networks. Importantly, self-supervised learning enabled data-driven discovery of AMD features such as GA and other ocular phenotypes of the choroid (e.g., tessellated or blonde fundi), vitreous (e.g., asteroid hyalosis), and lens (e.g., nuclear cataracts) that were not predefined by human labels. CONCLUSIONS Self-supervised learning enables AMD severity grading comparable with that of ophthalmologists and supervised networks, reveals biases of human-defined AMD classification systems, and allows unbiased, data-driven discovery of AMD and non-AMD ocular phenotypes.
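Spherical k-means, used above to cluster the learned feature vectors, differs from ordinary k-means in that points and centroids live on the unit sphere and assignment is by cosine similarity. A minimal NumPy sketch follows; the deterministic farthest-point initialisation is our choice, not necessarily the authors':

```python
import numpy as np

def spherical_kmeans(X, k, iters=20):
    """Cluster feature vectors by cosine similarity; centroids are
    re-normalised onto the unit sphere after every update."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # deterministic farthest-point initialisation
    C = [X[0]]
    for _ in range(1, k):
        sims = np.max(np.stack([X @ c for c in C]), axis=0)
        C.append(X[int(sims.argmin())])
    C = np.stack(C)
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)      # assign by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)   # re-normalise centroid
    return labels, C
```

Because only direction matters, this clustering groups images by learned feature similarity regardless of feature magnitude, which suits embeddings produced by instance-discrimination training.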
Affiliation(s)
- Baladitya Yellapragada
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California; Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Sascha Hornauer
- International Computer Science Institute, Berkeley, California
- Kiersten Snyder
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Stella Yu
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California
- Glenn Yiu
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California.
44
Han X, Steven K, Qassim A, Marshall HN, Bean C, Tremeer M, An J, Siggs OM, Gharahkhani P, Craig JE, Hewitt AW, Trzaskowski M, MacGregor S. Automated AI labeling of optic nerve head enables insights into cross-ancestry glaucoma risk and genetic discovery in >280,000 images from UKB and CLSA. Am J Hum Genet 2021; 108:1204-1216. [PMID: 34077762 DOI: 10.1016/j.ajhg.2021.05.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Accepted: 05/10/2021] [Indexed: 02/06/2023] Open
Abstract
Cupping of the optic nerve head, a highly heritable trait, is a hallmark of glaucomatous optic neuropathy. Two key parameters are vertical cup-to-disc ratio (VCDR) and vertical disc diameter (VDD). However, manual assessment often suffers from poor accuracy and is time intensive. Here, we show convolutional neural network models can accurately estimate VCDR and VDD for 282,100 images from both UK Biobank and an independent study (Canadian Longitudinal Study on Aging), enabling cross-ancestry epidemiological studies and new genetic discovery for these optic nerve head parameters. Using the AI approach, we perform a systematic comparison of the distribution of VCDR and VDD and compare these with intraocular pressure and glaucoma diagnoses across various genetically determined ancestries, which provides an explanation for the high rates of normal tension glaucoma in East Asia. We then used the large number of AI gradings to conduct a more powerful genome-wide association study (GWAS) of optic nerve head parameters. Using the AI-based gradings increased estimates of heritability by ∼50% for VCDR and VDD. Our GWAS identified more than 200 loci associated with both VCDR and VDD (double the number of loci from previous studies) and uncovered dozens of biological pathways; many of the loci we discovered also confer risk for glaucoma.
45
Perepelkina T, Fulton AB. Artificial Intelligence (AI) Applications for Age-Related Macular Degeneration (AMD) and Other Retinal Dystrophies. Semin Ophthalmol 2021; 36:304-309. [PMID: 33764255 DOI: 10.1080/08820538.2021.1896756] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Artificial intelligence (AI), with its subdivisions (machine and deep learning), is a new branch of computer science that has shown impressive results across a variety of domains. The applications of AI to medicine and biology are being widely investigated. Medical specialties that rely heavily on images, including radiology, dermatology, oncology and ophthalmology, were the first to explore AI approaches in analysis and diagnosis. Applications of AI in ophthalmology have concentrated on diseases with high prevalence, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration (AMD), and glaucoma. Here we provide an overview of AI applications for diagnosis, classification, and clinical management of AMD and other macular dystrophies.
Affiliation(s)
- Tatiana Perepelkina
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
- Anne B Fulton
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
46
Saha I, Ghosh N, Maity D, Seal A, Plewczynski D. COVID-DeepPredictor: Recurrent Neural Network to Predict SARS-CoV-2 and Other Pathogenic Viruses. Front Genet 2021; 12:569120. [PMID: 33643375 PMCID: PMC7906283 DOI: 10.3389/fgene.2021.569120] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 01/13/2021] [Indexed: 11/13/2022] Open
Abstract
The COVID-19 disease, caused by the novel coronavirus SARS-CoV-2, has turned into a global pandemic. The high transmission rate of this pathogenic virus demands early prediction and proper identification for subsequent treatment. However, the polymorphic nature of this virus allows it to adapt to and survive in different kinds of environments, which makes it difficult to predict. Moreover, other pathogens such as SARS-CoV-1, MERS-CoV, Ebola, Dengue, and Influenza circulate as well, so a predictor is needed to distinguish among them using their genomic information. To address this problem, this work proposes COVID-DeepPredictor, a deep learning framework to identify an unknown sequence from these pathogens. COVID-DeepPredictor uses a Long Short-Term Memory (LSTM) recurrent neural network for the underlying prediction with an alignment-free technique. A k-mer technique is applied to create Bags-of-Descriptors (BoDs), from which Bags-of-Unique-Descriptors (BoUDs) are generated as a vocabulary, and an embedded representation is subsequently prepared for the given virus sequences. The predictor is validated not only on the training dataset using K-fold cross-validation but also on unseen test datasets of SARS-CoV-2 sequences and sequences from other viruses. To verify the efficacy of COVID-DeepPredictor, it has been compared with state-of-the-art prediction techniques based on Linear Discriminant Analysis, Random Forests, and Gradient Boosting. COVID-DeepPredictor achieves 100% prediction accuracy on the validation dataset, while on test datasets the accuracy ranges from 99.51% to 99.94%, showing superior results over the other prediction techniques.
In addition, accuracy and runtime of COVID-DeepPredictor are considered simultaneously to determine the value of k in the k-mer technique; a comparative study among k values, Bags-of-Descriptors (BoDs), and Bags-of-Unique-Descriptors (BoUDs), as well as a comparison between COVID-DeepPredictor and Nucleotide BLAST, has also been performed. The code, training, and test datasets used for COVID-DeepPredictor are available at http://www.nitttrkol.ac.in/indrajit/projects/COVID-DeepPredictor/.
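The alignment-free k-mer encoding described in the abstract above (BoDs as sliding-window descriptors, BoUDs as a vocabulary of unique descriptors, then integer ids ready for an embedding layer) can be sketched roughly as follows. The abstract gives no implementation details, so k = 3, the function names, and the reserved index 0 for unseen k-mers are assumptions for illustration only.

```python
def kmerize(seq, k=3):
    """Slide a window of length k over the sequence: the Bag-of-Descriptors (BoD)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(sequences, k=3):
    """Bag-of-Unique-Descriptors (BoUD): each distinct k-mer gets an integer id.

    Id 0 is reserved for k-mers never seen during training."""
    vocab = {}
    for seq in sequences:
        for kmer in kmerize(seq, k):
            if kmer not in vocab:
                vocab[kmer] = len(vocab) + 1
    return vocab

def encode(seq, vocab, k=3):
    """Map a sequence to integer ids, ready for an embedding layer + LSTM."""
    return [vocab.get(kmer, 0) for kmer in kmerize(seq, k)]

# Toy demo: two training sequences, then encoding of a new sequence whose
# final k-mer (CGA) was never seen and maps to the unknown id 0.
train = ["ATGCGT", "ATGAAT"]
vocab = build_vocab(train, k=3)
ids = encode("ATGCGA", vocab, k=3)
```

The resulting integer lists are what a recurrent model would consume; the choice of k trades off vocabulary size against descriptor specificity, which is why the authors tuned it against both accuracy and runtime.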
Affiliation(s)
- Indrajit Saha
- Department of Computer Science and Engineering, National Institute of Technical Teachers' Training and Research, Kolkata, India
- Nimisha Ghosh
- Department of Computer Science and Information Technology, Institute of Technical Education and Research, Siksha ‘O’ Anusandhan (Deemed to Be University), Bhubaneswar, India
- Debasree Maity
- Department of Electronics and Communication Engineering, MCKV Institute of Engineering, Howrah, India
- Arjit Seal
- Cognizant Technology Solutions Pvt. Ltd., Kolkata, India
- Dariusz Plewczynski
- Laboratory of Bioinformatics and Computational Genomics, Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Laboratory of Functional and Structural Genomics, Centre of New Technologies, University of Warsaw, Warsaw, Poland
47
Gunasekeran DV, Tham YC, Ting DSW, Tan GSW, Wong TY. Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. Lancet Digit Health 2021; 3:e124-e134. [PMID: 33509383 DOI: 10.1016/s2589-7500(20)30287-9] [Citation(s) in RCA: 74] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 11/11/2020] [Accepted: 11/18/2020] [Indexed: 12/13/2022]
Abstract
The COVID-19 pandemic has resulted in massive disruptions within health care, both directly as a result of the infectious disease outbreak, and indirectly because of public health measures to mitigate against transmission. This disruption has caused rapid dynamic fluctuations in demand, capacity, and even contextual aspects of health care. Therefore, the traditional face-to-face patient-physician care model has had to be re-examined in many countries, with digital technology and new models of care being rapidly deployed to meet the various challenges of the pandemic. This Viewpoint highlights new models in ophthalmology that have adapted to incorporate digital health solutions such as telehealth, artificial intelligence decision support for triaging and clinical care, and home monitoring. These models can be operationalised for different clinical applications based on the technology, clinical need, demand from patients, and manpower availability, ranging from out-of-hospital models including the hub-and-spoke pre-hospital model, to front-line models such as the inflow funnel model and monitoring models such as the so-called lighthouse model for provider-led monitoring. Lessons learnt from operationalising these models for ophthalmology in the context of COVID-19 are discussed, along with their relevance for other specialty domains.
Affiliation(s)
- Dinesh V Gunasekeran
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Duke-NUS Medical School, Singapore.
48
AMD Genetics: Methods and Analyses for Association, Progression, and Prediction. Adv Exp Med Biol 2021; 1256:191-200. [PMID: 33848002 DOI: 10.1007/978-3-030-66014-7_7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Age-related macular degeneration (AMD) is a multifactorial neurodegenerative disease and a leading cause of vision loss among the elderly in developed countries. As one of the most successful examples of the genome-wide association study (GWAS) approach, a large number of genetic studies have explored the genetic basis of AMD and its progression, identifying and confirming over 30 loci. In this chapter, we review recent developments and findings of GWAS for AMD risk and progression. We then present emerging methods and models for predicting AMD development or its progression using large-scale genetic data. Finally, we discuss a set of novel statistical and analytical methods recently developed to tackle challenges such as analyzing correlated bilateral eye-level outcomes subject to censoring with high-dimensional genetic data. Future directions for analytical studies of AMD genetics are also proposed.
49
Bridge J, Harding S, Zheng Y. Development and validation of a novel prognostic model for predicting AMD progression using longitudinal fundus images. BMJ Open Ophthalmol 2020; 5:e000569. [PMID: 33083553 PMCID: PMC7566421 DOI: 10.1136/bmjophth-2020-000569] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Revised: 09/20/2020] [Accepted: 09/27/2020] [Indexed: 01/27/2023] Open
Abstract
Objective To develop a prognostic tool to predict the progression of age-related eye disease using longitudinal colour fundus imaging. Methods and analysis Previous prognostic models using deep learning with imaging data require annotation during training or use only a single time point. We propose a novel deep learning method to predict disease progression using longitudinal imaging data with uneven time intervals, requiring no prior feature extraction. Given previous images from a patient, our method aims to predict whether the patient will progress to the next stage of the disease. The proposed method uses InceptionV3 to produce feature vectors for each image. To account for uneven intervals, a novel interval scaling is proposed. Finally, a recurrent neural network is used to prognosticate the disease. We demonstrate our method on a longitudinal dataset of colour fundus images from 4903 eyes with age-related macular degeneration (AMD), taken from the Age-Related Eye Disease Study, to predict progression to late AMD. Results Our method attains a testing sensitivity of 0.878, a specificity of 0.887 and an area under the receiver operating characteristic curve of 0.950. Compared with previous methods, our model displays superior performance. Class activation maps display how the network reaches the final decision. Conclusion The proposed method can be used to predict progression to advanced AMD at a future visit. Using multiple images at different time points improves predictive performance.
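The abstract above does not specify the exact form of the proposed interval scaling, so the sketch below shows one common way to expose uneven follow-up intervals to a recurrent network: appending a normalized inter-visit gap to each visit's feature vector before it enters the RNN. The function name and toy values are hypothetical, not taken from the paper.

```python
def add_interval_feature(features, times):
    """Attach a normalized inter-visit gap to each time step so a
    recurrent model can account for uneven imaging intervals.

    features: list of per-visit feature vectors (e.g., from a CNN backbone)
    times:    visit times in months, same length as features, ascending
    """
    # Gap before each visit; the first visit has no predecessor, so gap 0.
    gaps = [0.0] + [t1 - t0 for t0, t1 in zip(times, times[1:])]
    max_gap = max(gaps) or 1.0  # avoid division by zero for single-visit input
    # Append the scaled gap as one extra feature per time step.
    return [vec + [g / max_gap] for vec, g in zip(features, gaps)]

# Toy demo: three visits with 2-D feature vectors and uneven follow-up.
visits = [[0.2, 0.7], [0.3, 0.6], [0.4, 0.9]]
months = [0, 6, 18]
scaled = add_interval_feature(visits, months)
```

Feeding the gap in as a feature lets the recurrent cell learn how much to discount older observations, which is the problem uneven visit spacing creates for a vanilla RNN.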
Affiliation(s)
- Joshua Bridge
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK
- Simon Harding
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK
50
Accelerating ophthalmic artificial intelligence research: the role of an open access data repository. Curr Opin Ophthalmol 2020; 31:337-350. [PMID: 32740059 DOI: 10.1097/icu.0000000000000678] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
PURPOSE OF REVIEW Artificial intelligence has already provided multiple clinically relevant applications in ophthalmology. Yet amid an explosion of nonstandardized reporting, high-performing algorithms are rendered useless without robust and streamlined implementation guidelines. The development of protocols and checklists will accelerate the translation of research publications into impact on patient care. RECENT FINDINGS Beyond technological scepticism, we lack uniformity in analysing algorithmic performance and generalizability, and in benchmarking impact across clinical settings. No regulatory guardrails have been set to minimize bias or optimize interpretability, and no consensus clinical acceptability thresholds or systematized postdeployment monitoring have been established. Moreover, stakeholders with misaligned incentives deepen the complexity of the landscape, especially when it comes to the data integration and harmonization required to advance the field. Therefore, despite increasing algorithmic accuracy and commoditization, the infamous 'implementation gap' persists. Open clinical data repositories have been shown to rapidly accelerate research, minimize redundancies and disseminate the expertise and knowledge required to overcome existing barriers. Drawing upon the longstanding success of existing governance frameworks and robust data use and sharing agreements, the ophthalmic community has a tremendous opportunity to usher artificial intelligence into medicine. By collaboratively building a powerful resource of open, anonymized multimodal ophthalmic data, the next generation of clinicians can advance data-driven eye care in unprecedented ways. SUMMARY This piece demonstrates that with readily accessible data, immense progress can be achieved clinically and methodologically to realize artificial intelligence's impact on clinical care. Exponentially progressive network effects can be seen by consolidating, curating and distributing data amongst both clinicians and data scientists.