1
Chaurasia AK, Greatbatch CJ, Han X, Gharahkhani P, Mackey DA, MacGregor S, Craig JE, Hewitt AW. Highly Accurate and Precise Automated Cup-to-Disc Ratio Quantification for Glaucoma Screening. Ophthalmol Sci 2024; 4:100540. [PMID: 39051045] [PMCID: PMC11268341] [DOI: 10.1016/j.xops.2024.100540]
Abstract
Objective: An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated methods. Here, we sought to develop and validate a deep learning-based algorithm to automatically determine the CDR from fundus images. Design: Algorithm development for estimating CDR using fundus data from a population-based observational study. Participants: A total of 181 768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS. Methods: The FastAI and PyTorch libraries were used to train a convolutional neural network-based model on fundus images from the UKBB. Models were constructed to determine image gradability (classification analysis) and to estimate CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening using a multiethnic dataset from EyePACS and Drishti_GS. Main Outcome Measures: The area under the receiver operating characteristic curve and the coefficient of determination. Results: Our gradability model, built on the vgg19_bn (batch normalization) architecture, achieved an accuracy of 97.13% on a validation set of 16 045 images, with 99.26% precision and an area under the receiver operating characteristic curve of 96.56%. Using regression analysis, our best-performing model (also trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459-0.8568), while the mean squared error was 0.0050 (95% CI: 0.0048-0.0051) and the mean absolute error was 0.0551 (95% CI: 0.0543-0.0559) on a validation set of 12 183 images for determining CDR. The regression output was converted into classification metrics using a tolerance of 0.2 across 20 classes; these classification metrics achieved an accuracy of 99.20%. The EyePACS dataset (98 172 healthy, 3270 glaucoma) was then used to externally validate the model for glaucoma classification, with an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively. Conclusions: Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence-derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.
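The abstract names FastAI and PyTorch with a vgg19_bn backbone; the following is a minimal sketch of how such a CDR regression learner could be set up, not the authors' code. The dataframe columns "image" and "cdr" and the training schedule are assumptions.

```python
# Hedged sketch of a vgg19_bn CDR regression learner in FastAI; the dataframe
# schema and hyperparameters are illustrative assumptions.
from fastai.vision.all import *

def make_cdr_learner(df, img_dir):
    dls = ImageDataLoaders.from_df(
        df, path=img_dir, fn_col="image", label_col="cdr",
        y_block=RegressionBlock(),   # treat CDR as a continuous target
        item_tfms=Resize(224), valid_pct=0.2)
    return vision_learner(
        dls, vgg19_bn, n_out=1,
        y_range=(0.0, 1.0),          # CDR is bounded between 0 and 1
        loss_func=MSELossFlat(),     # mean squared error, as reported
        metrics=[mae, R2Score()])    # MAE and coefficient of determination

# learn = make_cdr_learner(df, "fundus_images/")
# learn.fine_tune(5)
```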
Affiliation(s)
- Abadh K. Chaurasia
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Connor J. Greatbatch
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Xikun Han
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Puya Gharahkhani
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Faculty of Health, School of Biomedical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- David A. Mackey
- Lions Eye Institute, Centre for Vision Sciences, University of Western Australia, Nedlands, Australia
- Stuart MacGregor
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Jamie E. Craig
- Department of Ophthalmology, Flinders University, Flinders Medical Centre, Bedford Park, Australia
- Alex W. Hewitt
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
2
Chuter B, Huynh J, Bowd C, Walker E, Rezapour J, Brye N, Belghith A, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM, Christopher M. Deep Learning Identifies High-Quality Fundus Photographs and Increases Accuracy in Automated Primary Open Angle Glaucoma Detection. Transl Vis Sci Technol 2024; 13:23. [PMID: 38285462] [PMCID: PMC10829806] [DOI: 10.1167/tvst.13.1.23]
Abstract
Purpose: To develop and evaluate a deep learning (DL) model to assess fundus photograph quality and quantitatively measure its impact on automated primary open-angle glaucoma (POAG) detection in independent study populations. Methods: Image quality ground truth was determined by manual review of 2815 fundus photographs of healthy and POAG eyes from the Diagnostic Innovations in Glaucoma Study and African Descent and Glaucoma Evaluation Study (DIGS/ADAGES), as well as 11,350 from the Ocular Hypertension Treatment Study (OHTS). Human experts graded a photograph as high quality if it was sufficient to determine POAG status and as poor quality if not. A DL quality model was trained on photographs from DIGS/ADAGES and tested on OHTS. The effect of DL quality assessment on DL POAG detection was measured using the area under the receiver operating characteristic curve (AUROC). Results: The DL quality model yielded an AUROC of 0.97 for differentiating between high- and low-quality photographs; qualitative human review affirmed high model performance. Diagnostic accuracy of the DL POAG model was significantly greater (P < 0.001) in good-quality (AUROC, 0.87; 95% CI, 0.80-0.92) than in poor-quality photographs (AUROC, 0.77; 95% CI, 0.67-0.88). Conclusions: The DL quality model accurately assessed fundus photograph quality. Using automated quality assessment to filter out low-quality photographs increased the accuracy of a DL POAG detection model. Translational Relevance: Incorporating DL quality assessment into automated review of fundus photographs can help decrease the burden of manual review and improve the accuracy of automated DL POAG detection.
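As a concrete illustration of the comparison reported above, here is a minimal sketch (not the study code) of computing POAG-detection AUROC separately within good- and poor-quality subsets; the array names are assumptions.

```python
# Hedged sketch: AUROC of a POAG detector stratified by a quality model's
# binary output; variable names are illustrative, not the study's.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_by_quality(y_poag, poag_scores, quality_pred):
    good = np.asarray(quality_pred, dtype=bool)   # True = high quality
    return {
        "good_quality": roc_auc_score(y_poag[good], poag_scores[good]),
        "poor_quality": roc_auc_score(y_poag[~good], poag_scores[~good]),
    }
```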
Affiliation(s)
- Benton Chuter
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Justin Huynh
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Evan Walker
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Department of Ophthalmology, University Medical Center Mainz, Germany
- Nicole Brye
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Massimo A. Fazio
- School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Christopher A. Girkin
- School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Jeffrey M. Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
- Robert N. Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Linda M. Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
3
Guo T, Liu K, Zou H, Xu X, Yang J, Yu Q. Refined image quality assessment for color fundus photography based on deep learning. Digit Health 2024; 10:20552076231207582. [PMID: 38425654] [PMCID: PMC10903193] [DOI: 10.1177/20552076231207582]
Abstract
Purpose: Color fundus photography is widely used in clinical and screening settings for eye diseases. Poor image quality greatly affects the reliability of further evaluation and diagnosis. In this study, we developed an automated module for color fundus photography image quality assessment using deep learning. Methods: A total of 55,931 color fundus photography images from multiple centers in Shanghai and a public database were collected and annotated as training, validation, and testing data sets. A pre-diagnosis image quality assessment module based on a multi-task deep neural network was designed. A detailed criterion for color fundus photography image quality, comprising three subcategories each graded at three levels, was applied to improve precision and objectivity. Auxiliary tasks, such as localization of the optic nerve head and macula and classification of laterality and field of view, were also included to assist the quality assessment. Finally, we validated our module internally and externally by evaluating the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, and quadratic weighted kappa. Results: The "Location" subcategory achieved areas under the receiver operating characteristic curve of 0.991, 0.920, and 0.946 for the three grades, respectively. The "Clarity" subcategory achieved 0.980, 0.917, and 0.954 for the three grades, respectively, and the "Artifact" subcategory achieved 0.976, 0.952, and 0.986. The accuracy and kappa for overall quality reached 88.15% and 89.70%, respectively, on the internal set; on the external set these two indicators were 86.63% and 88.55%, very close to the internal results. Conclusions: Our deep learning module was able to evaluate color fundus photography image quality using a more detailed criterion of three subcategories, each with three grades. The promising results on both internal and external validation indicate the strength and generalizability of our module.
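A minimal PyTorch sketch of the multi-task idea described above (shared backbone, one three-grade head per subcategory, plus auxiliary heads); the ResNet18 backbone and head sizes are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of a multi-task quality network; backbone choice and head
# dimensions are assumptions for illustration only.
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()              # shared 512-d features
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            "location": nn.Linear(512, 3),       # 3 grades per subcategory
            "clarity": nn.Linear(512, 3),
            "artifact": nn.Linear(512, 3),
            "laterality": nn.Linear(512, 2),     # auxiliary: left/right eye
            "field_of_view": nn.Linear(512, 2),  # auxiliary task
        })

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}
```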
Affiliation(s)
- Tianjiao Guo
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Kun Liu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Xun Xu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
- Jie Yang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China
- Qi Yu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Clinical Research Center for Eye Diseases, China
4
Gao Z, Pan X, Shao J, Jiang X, Su Z, Jin K, Ye J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br J Ophthalmol 2023; 107:1852-1858. [PMID: 36171054] [DOI: 10.1136/bjo-2022-321472]
Abstract
BACKGROUND/AIMS: Fundus fluorescein angiography (FFA) is an important technique for evaluating diabetic retinopathy (DR) and other retinal diseases. Interpretation of FFA images is complex and time-consuming, and diagnostic ability varies among ophthalmologists. The aim of this study was to develop a clinically usable multilevel classification deep learning model for FFA images, covering both prediagnosis assessment and lesion classification. METHODS: A total of 15 599 FFA images of 1558 eyes from 845 patients diagnosed with DR were collected and annotated. Three convolutional neural network (CNN) models were trained to generate labels for image quality, location, laterality, phase, and five lesion types. Performance of the models was evaluated by accuracy, F1 score, area under the curve, and human-machine comparison. Images with false positive and false negative results were analysed in detail. RESULTS: Compared with LeNet-5 and VGG16, ResNet18 performed best, achieving an accuracy of 80.79%-93.34% for prediagnosis assessment and 63.67%-88.88% for lesion detection. The human-machine comparison showed that the CNN had accuracy similar to that of junior ophthalmologists. Analysis of the false positive and false negative results indicated directions for improvement. CONCLUSION: This is the first study to perform automated standardised labelling of FFA images. Our model can be applied in clinical practice and should contribute to the development of intelligent diagnosis from FFA images.
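One way to picture the multilevel design (prediagnosis gating followed by lesion classification) is the sketch below; the model objects, threshold, and generic lesion indices are hypothetical stand-ins, not the paper's implementation.

```python
# Hedged sketch of a two-stage FFA pipeline: a prediagnosis model screens
# image usability, then a multi-label lesion model labels usable images.
# `quality_model` and `lesion_model` are hypothetical callables returning
# logits of shape (1, 1) and (1, n_lesions), respectively.
import torch

@torch.no_grad()
def classify_ffa(image, quality_model, lesion_model, q_thresh=0.5):
    # Stage 1: prediagnosis assessment (reduced here to a usability score)
    if torch.sigmoid(quality_model(image)).item() < q_thresh:
        return {"usable": False, "lesions": None}
    # Stage 2: multi-label lesion detection on usable images
    probs = torch.sigmoid(lesion_model(image)).squeeze(0)
    return {"usable": True,
            "lesions": {f"lesion_{i}": p.item() for i, p in enumerate(probs)}}
```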
Affiliation(s)
- Zhiyuan Gao
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiangji Pan
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Ji Shao
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiaoyu Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Zhaoan Su
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Kai Jin
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Juan Ye
- Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
5
Bryan JM, Bryar PJ, Mirza RG. Convolutional Neural Networks Accurately Identify Ungradable Images in a Diabetic Retinopathy Telemedicine Screening Program. Telemed J E Health 2023; 29:1349-1355. [PMID: 36730708] [DOI: 10.1089/tmj.2022.0357]
Abstract
Purpose: Diabetic retinopathy (DR) is a microvascular complication of diabetes mellitus (DM). The standard of care for patients with DM is an annual eye examination or retinal imaging to assess for DR, the latter of which may be completed through telemedicine approaches. One significant issue is poor-quality images that prevent adequate screening and are thus ungradable. We used artificial intelligence to enable point-of-care (at the time of imaging) identification of ungradable images in a DR screening program. Methods: Nonmydriatic retinal images were gathered from patients with DM imaged during a primary care or endocrinology visit from September 1, 2017, to June 1, 2021, using the Topcon TRC-NW400 retinal camera (Topcon Corp., Tokyo, Japan). Images were interpreted by 5 ophthalmologists for gradeability, presence and stage of DR, and presence of non-DR pathologies. A convolutional neural network with the Inception V3 architecture was trained to assess image gradeability. Images were divided into training and test sets, and 10-fold cross-validation was performed. Results: A total of 1,377 images from 537 patients (56.1% female, median age 58) were analyzed. Ophthalmologists classified 25.9% of images as ungradable. Of the gradable images, 18.6% had DR of varying degrees and 26.5% had non-DR pathology. The 10-fold cross-validation produced an average area under the receiver operating characteristic curve (AUC) of 0.922 (standard deviation: 0.027, range: 0.882 to 0.961), and the final model exhibited similar test set performance with an AUC of 0.924. Conclusions: This model accurately assesses the gradeability of nonmydriatic retinal images. It could increase the efficiency of DR screening programs by enabling point-of-care identification of poor-quality images.
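The 10-fold cross-validation scheme could be scaffolded as below; `train_and_score` is a hypothetical routine that fine-tunes the Inception V3 classifier on one fold and returns its AUC, so only the fold logic is shown.

```python
# Hedged sketch of stratified 10-fold cross-validation around a hypothetical
# train_and_score(train_idx, val_idx) -> AUC routine.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate_auc(labels, train_and_score, n_splits=10, seed=0):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    dummy_X = np.zeros((len(labels), 1))        # folds depend only on labels
    aucs = [train_and_score(tr, va) for tr, va in skf.split(dummy_X, labels)]
    return float(np.mean(aucs)), float(np.std(aucs))
```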
Affiliation(s)
- John M Bryan
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Paul J Bryar
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
- Rukhsana G Mirza
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
6
Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023; 13:160. [PMID: 36611452] [PMCID: PMC9818957] [DOI: 10.3390/diagnostics13010160]
Abstract
The quality of ocular fundus photographs can affect the accuracy of morphologic assessment of the optic nerve head (ONH), whether by humans or by deep learning systems (DLS). To automatically identify ONH photographs of optimal quality, we developed, trained, and tested a DLS using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard for image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model evaluated with a one-vs-rest classification strategy. In the external testing dataset, the DLS identified "good" quality photographs with excellent performance (AUC = 0.93 [95% CI, 0.91-0.95], accuracy = 91.4% [95% CI, 90.0-92.9%], sensitivity = 93.8% [95% CI, 92.5-95.2%], specificity = 75.9% [95% CI, 69.7-82.1%]), as well as "poor" quality photographs (AUC = 1.00 [95% CI, 0.99-1.00], accuracy = 99.1% [95% CI, 98.6-99.6%], sensitivity = 81.5% [95% CI, 70.6-93.8%], specificity = 99.7% [95% CI, 99.6-100.0%]). "Borderline" quality images were also accurately classified (AUC = 0.90 [95% CI, 0.88-0.93], accuracy = 90.6% [95% CI, 89.1-92.2%], sensitivity = 65.4% [95% CI, 56.6-72.9%], specificity = 93.4% [95% CI, 92.1-94.8%]). The overall accuracy in distinguishing among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal-quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
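The one-vs-rest evaluation strategy mentioned above can be sketched as follows, assuming integer quality labels (0 = good, 1 = borderline, 2 = poor) and per-class softmax probabilities; this is an illustration, not the BONSAI code, and the label encoding is an assumption.

```python
# Hedged sketch of one-vs-rest AUC for a 3-class quality model; the label
# encoding is an illustrative assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

def one_vs_rest_aucs(y_true, probs, names=("good", "borderline", "poor")):
    y_true = np.asarray(y_true)
    return {name: roc_auc_score((y_true == i).astype(int), probs[:, i])
            for i, name in enumerate(names)}
```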
Affiliation(s)
- Ebenezer Chan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Zhiqun Tang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Raymond P. Najjar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
- Arun Narayanaswamy
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Nancy J. Newman
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
- Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore
7
Cheung CY, Ran AR, Wang S, Chan VTT, Sham K, Hilal S, Venketasubramanian N, Cheng CY, Sabanayagam C, Tham YC, Schmetterer L, McKay GJ, Williams MA, Wong A, Au LWC, Lu Z, Yam JC, Tham CC, Chen JJ, Dumitrascu OM, Heng PA, Kwok TCY, Mok VCT, Milea D, Chen CLH, Wong TY. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study. Lancet Digit Health 2022; 4:e806-e815. [PMID: 36192349] [DOI: 10.1016/s2589-7500(22)00169-8]
Abstract
BACKGROUND: There is no simple model to screen for Alzheimer's disease, partly because the diagnosis of Alzheimer's disease itself is complex, typically involving expensive and sometimes invasive tests not commonly available outside highly specialised clinical settings. We aimed to develop a deep learning algorithm that could detect Alzheimer's disease-dementia using retinal photographs alone, the most common method of non-invasive retinal imaging. METHODS: In this retrospective, multicentre case-control study, we trained, validated, and tested a deep learning algorithm to detect Alzheimer's disease-dementia from retinal photographs, using retrospectively collected data from 11 studies that recruited patients with Alzheimer's disease-dementia and people without the disease from different countries. Our main aim was to develop a bilateral model to detect Alzheimer's disease-dementia from retinal photographs alone. We designed and internally validated the bilateral deep learning model using retinal photographs from six studies. We used the EfficientNet-b2 network as the backbone of the model to extract features from the images. Integrated features from four retinal photographs (optic nerve head-centred and macula-centred fields from both eyes) for each individual were used to develop supervised deep learning models, and the network was equipped with an unsupervised domain adaptation technique to address dataset discrepancy between the different studies. We tested the trained model using five other studies, three of which used PET as a biomarker of significant amyloid β burden (testing the deep learning model's ability to distinguish amyloid β-positive from amyloid β-negative participants). FINDINGS: 12 949 retinal photographs from 648 patients with Alzheimer's disease and 3240 people without the disease were used to train, validate, and test the deep learning model. In the internal validation dataset, the deep learning model had 83·6% (SD 2·5) accuracy, 93·2% (SD 2·2) sensitivity, 82·0% (SD 3·1) specificity, and an area under the receiver operating characteristic curve (AUROC) of 0·93 (0·01) for detecting Alzheimer's disease-dementia. In the testing datasets, the bilateral deep learning model had accuracies ranging from 79·6% (SD 15·5) to 92·1% (11·4) and AUROCs ranging from 0·73 (SD 0·24) to 0·91 (0·10). In the datasets with PET data, the model was able to differentiate between participants who were amyloid β positive and those who were amyloid β negative: accuracies ranged from 80·6% (SD 13·4) to 89·3% (13·7), and AUROCs ranged from 0·68 (SD 0·24) to 0·86 (0·16). In subgroup analyses, the discriminative performance of the model was better in patients with eye disease (accuracy 89·6% [SD 12·5]) than in those without eye disease (71·7% [11·6]), and in patients with diabetes (81·9% [SD 20·3]) than in those without the disease (72·4% [11·7]). INTERPRETATION: A retinal photograph-based deep learning algorithm can detect Alzheimer's disease with good accuracy, showing its potential for screening for Alzheimer's disease in a community setting. FUNDING: BrightFocus Foundation.
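A minimal PyTorch sketch of the bilateral, four-field fusion idea (a shared EfficientNet-b2 backbone with concatenated per-field features); the fusion head is an illustrative assumption, and the study's unsupervised domain adaptation component is omitted.

```python
# Hedged sketch of four-field feature fusion with a shared EfficientNet-b2
# backbone; 1408 matches torchvision's efficientnet_b2 feature width, but the
# linear fusion head is an assumption, not the study's architecture.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b2

class BilateralFourFieldNet(nn.Module):
    def __init__(self, n_fields=4, feat_dim=1408):
        super().__init__()
        backbone = efficientnet_b2(weights="IMAGENET1K_V1")
        backbone.classifier = nn.Identity()      # keep pooled features only
        self.backbone = backbone
        self.head = nn.Linear(feat_dim * n_fields, 1)

    def forward(self, fields):                   # fields: (B, 4, 3, H, W)
        b, n = fields.shape[:2]
        feats = self.backbone(fields.flatten(0, 1))  # (B*4, feat_dim)
        return self.head(feats.view(b, -1))      # one dementia logit per person
```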
Affiliation(s)
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China.
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Shujun Wang
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Victor T T Chan
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Ophthalmology and Visual Sciences, Prince of Wales Hospital, Hong Kong Special Administrative Region, China
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Saima Hilal
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Charumathi Sabanayagam
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Singapore Eye Research Institute, Advanced Ocular Engineering and School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Gareth J McKay
- Centre for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Adrian Wong
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Lisa W C Au
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Zhihui Lu
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Jason C Yam
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- John J Chen
- Department of Ophthalmology and Department of Neurology, Mayo Clinic, Rochester, MN, USA
- Oana M Dumitrascu
- Department of Neurology and Department of Ophthalmology, Division of Cerebrovascular Diseases, Mayo Clinic College of Medicine and Science, Scottsdale, AZ, USA
- Pheng-Ann Heng
- Department of Computer Science and Engineering, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Timothy C Y Kwok
- Jockey Club Centre for Osteoporosis Care and Control, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Medicine and Therapeutics, Faculty of Medicine, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Vincent C T Mok
- Gerald Choa Neuroscience Institute, Therese Pei Fong Chow Research Centre for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Division of Neurology, Department of Medicine and Therapeutics, the Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore
- Christopher Li-Hsian Chen
- Memory Aging & Cognition Centre, National University Health System, Singapore; Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
8
Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. J Optom 2022; 15 Suppl 1:S50-S57. [PMID: 36216736] [PMCID: PMC9732476] [DOI: 10.1016/j.optom.2022.08.001]
Abstract
In recent years, the role of artificial intelligence (AI) and deep learning (DL) models has attracted increasing global interest in the field of ophthalmology. DL models are considered the current state of the art among AI technologies, as DL systems can recognize, quantify, and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields, and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP), and glaucoma. In this review, we analyze the current evidence for AI applied to these ocular diseases and discuss possible future developments and clinical implications, as well as the limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy.
- Paolo Corazza
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Matilde Roda
- Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S. Orsola-Malpighi Teaching Hospital, Bologna, Italy
- Massimo Nicolo
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Carlo Enrico Traverso
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Aldo Vagge
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
9
Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022; 12:11196. [PMID: 35778615] [PMCID: PMC9249740] [DOI: 10.1038/s41598-022-15491-1]
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field, and ungradable samples that require curation, a laborious task to perform manually. We developed and validated single- and multi-output deep learning (DL) models for laterality, retinal presence, retinal field, and gradability classification to automate curation. The internal dataset comprised 7743 images from DR screening in the UK, with 1479 external test images from Portugal and Paraguay. Internal versus external multi-output laterality AUROCs were 0.994 vs 0.905 (right), 0.994 vs 0.911 (left), and 0.996 vs 0.680 (unidentifiable). Retinal presence AUROCs were 1.000 vs 1.000. Retinal field AUROCs were 0.994 vs 0.955 (macula), 0.995 vs 0.962 (nasal), and 0.997 vs 0.944 (other retinal field). Gradability AUROCs were 0.985 vs 0.918. DL effectively detects the laterality, retinal presence, retinal field, and gradability of DR screening images, with generalisation between centres and populations. Such DL models could be used for automated image curation within DR screening.
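To make the curation step concrete, a minimal sketch of filtering a screening set with such multi-output predictions follows; the dataframe columns and thresholds are hypothetical, not the study's schema.

```python
# Hedged sketch of automated curation from multi-output model predictions;
# column names and thresholds are illustrative assumptions.
import pandas as pd

def curate_images(df: pd.DataFrame, grad_thresh: float = 0.5) -> pd.DataFrame:
    keep = (
        df["retinal_prob"].gt(0.5)                       # drop non-retinal images
        & df["field_pred"].isin(["macula", "nasal"])     # keep expected fields
        & df["laterality_pred"].isin(["right", "left"])  # drop unidentifiable
        & df["gradable_prob"].ge(grad_thresh)            # drop ungradable images
    )
    return df.loc[keep]
```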
Affiliation(s)
- Paul Nderitu
- Section of Ophthalmology, King's College London, London, UK.
- King's Ophthalmology Research Unit, King's College Hospital, London, UK.
- Ms Laura Webster
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Samantha S Mann
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
- David Hopkins
- Department of Diabetes, School of Life Course Sciences, King's College London, London, UK
- Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Timothy L Jackson
- Section of Ophthalmology, King's College London, London, UK
- King's Ophthalmology Research Unit, King's College Hospital, London, UK
10
Shi C, Lee J, Wang G, Dou X, Yuan F, Zee B. Assessment of image quality on color fundus retinal images using the automatic retinal image analysis. Sci Rep 2022; 12:10455. [PMID: 35729197] [PMCID: PMC9213403] [DOI: 10.1038/s41598-022-13919-2]
Abstract
Image quality assessment is essential for retinopathy detection on color fundus retinal images. However, most studies have focused on classifying good versus poor quality without considering the different types of poor quality. This study developed an automatic retinal image analysis (ARIA) method, incorporating a ResNet50 transfer network with an automatic feature-generation approach, to automatically assess image quality and distinguish eye-abnormality-associated poor quality from artefact-associated poor quality on color fundus retinal images. A total of 2434 retinal images, comprising 1439 of good quality and 995 of poor quality (483 eye-abnormality-associated and 512 artefact-associated), were used for training, testing, and 10-fold cross-validation. We also performed external validation with the clinical diagnosis of eye abnormality as the reference standard to evaluate the performance of the method. The sensitivity, specificity, and accuracy for classifying good against poor quality were 98.0%, 99.1%, and 98.6%, and for differentiating between eye-abnormality-associated and artefact-associated poor quality were 92.2%, 93.8%, and 93.0%, respectively. In external validation, our method achieved an area under the ROC curve of 0.997 for the overall quality classification and 0.915 for the classification of the two types of poor quality. The proposed approach, ARIA, showed good performance in testing, 10-fold cross-validation, and external validation. This study provides a novel angle for image quality screening based on the different types of poor quality and the corresponding ways of handling them, and suggests that ARIA can be used as a screening tool in the preliminary stage of retinopathy grading by telemedicine or artificial intelligence analysis.
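One plausible reading of the "ResNet50 transfer network with automatic feature generation" pipeline is to pool pretrained backbone features and fit a conventional classifier on top, as sketched below; this is an assumption about the design, not the authors' exact ARIA implementation.

```python
# Hedged sketch: ImageNet-pretrained ResNet50 as a fixed feature extractor,
# with a conventional classifier fitted on the pooled 2048-d features.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(images):                # images: (N, 3, 224, 224) tensor
    model = resnet50(weights="IMAGENET1K_V1")
    model.fc = nn.Identity()                 # expose 2048-d pooled features
    model.eval()
    return model(images).cpu().numpy()

# X_train = extract_features(train_imgs)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```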
Affiliation(s)
- Chuying Shi
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China
- Jack Lee
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China
- Gechun Wang
- Department of Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai, China
- Xinyan Dou
- Department of Ophthalmology, Wusong Hospital, Shanghai, China
- Fei Yuan
- Department of Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai, China
- Benny Zee
- Division of Biostatistics, Centre for Clinical Research and Biostatistics, Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong, China
11
Li P, Liu J. Early Diagnosis and Quantitative Analysis of Stages in Retinopathy of Prematurity Based on Deep Convolutional Neural Networks. Transl Vis Sci Technol 2022; 11:17. [PMID: 35579887] [PMCID: PMC9123509] [DOI: 10.1167/tvst.11.5.17]
Abstract
Purpose: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. An accurate and timely diagnosis of the early stages of ROP allows ophthalmologists to recommend appropriate treatment while blindness is still preventable. The purpose of this study was to develop an automatic deep convolutional neural network-based system that provides a diagnosis of stage I to III ROP together with quantitative feature parameters. Methods: We developed three data sets containing 18,827 retinal images of preterm infants, obtained from the ophthalmology department of Jiaxing Maternal and Child Health Hospital in China. After segmenting the images, we calculated the region of interest (ROI). We trained our system on segmented ROI images from the training data set, tested the performance of the classifier on the test data set, and evaluated the widths of the demarcation lines or ridges extracted by the system, as well as the ratios of vascular proliferation within the ROI, on a comparison data set. Results: The trained network achieved a sensitivity of 90.21% with 97.67% specificity for the diagnosis of stage I ROP, 92.75% sensitivity with 98.74% specificity for stage II ROP, and 91.84% sensitivity with 99.29% specificity for stage III ROP. When the system diagnosed normal images, the sensitivity and specificity reached 95.93% and 96.41%, respectively. The widths (in pixels) of the demarcation lines or ridges for stages I, II, and III were 15.22 ± 1.06, 26.35 ± 1.36, and 30.75 ± 1.55, respectively, and the ratios of vascular proliferation within the ROI were 1.40 ± 0.29, 1.54 ± 0.26, and 1.81 ± 0.33. All parameters differed significantly among the groups. When physicians integrated the quantitative parameters of the extracted features with their clinical diagnosis, the κ score improved significantly. Conclusions: Our system achieved high diagnostic accuracy for stage I to III ROP and uses quantitative analysis of the extracted features to assist physicians in making classification decisions. Translational Relevance: The high performance of the system suggests potential applications in the ancillary diagnosis of the early stages of ROP.
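The feature parameters described (ridge width and vascular-proliferation ratio within the ROI) could be computed from segmentation masks roughly as sketched below; the distance-transform width estimate is an assumed method, not the authors' published procedure.

```python
# Hedged sketch of two ROP feature parameters from binary masks; both
# computations are illustrative assumptions.
import numpy as np
from scipy import ndimage

def mean_ridge_width(ridge_mask):
    # Twice the mean inscribed radius approximates the mean local width.
    dist = ndimage.distance_transform_edt(ridge_mask)
    return float(2.0 * dist[ridge_mask > 0].mean())

def vascular_ratio(vessel_mask, roi_mask):
    # Fraction of ROI pixels flagged as proliferating vasculature.
    return float(vessel_mask[roi_mask > 0].mean())
```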
Affiliation(s)
- Peng Li
- School of Electronic and Information Engineering, Tongji University, Shanghai, China
- Department of Electronic and Information Engineering, Tongji Zhejiang College, Jiaxing, China
- Jia Liu
- Optometry Center, Jiaxing Maternity and Child Health Care Hospital, Jiaxing, China