51
Widen the Applicability of a Convolutional Neural-Network-Assisted Glaucoma Detection Algorithm of Limited Training Images across Different Datasets. Biomedicines 2022; 10:1314. [PMID: 35740336] [PMCID: PMC9219722] [DOI: 10.3390/biomedicines10061314]
Abstract
Automated glaucoma detection using deep learning may increase the diagnostic rate of glaucoma and help prevent blindness, but generalizable models are currently unavailable despite the use of huge training datasets. This study aims to evaluate the performance of a convolutional neural network (CNN) classifier trained with a limited number of high-quality fundus images in detecting glaucoma, and methods to improve its performance across different datasets. A CNN classifier was constructed using EfficientNet B3 and 944 images collected from one medical center (core model) and externally validated using three datasets. The performance of the core model was compared with (1) an integrated model constructed using all training images from the four datasets and (2) dataset-specific models built by fine-tuning the core model with training images from the external datasets. The diagnostic accuracy of the core model was 95.62% but dropped to 52.5–80.0% on the external datasets. Dataset-specific models exhibited superior diagnostic performance on the external datasets compared with the other models, with diagnostic accuracies of 87.5–92.5%. The findings suggest that dataset-specific tuning of the core CNN classifier effectively improves its applicability across different datasets when increasing the number of training images fails to achieve generalization.
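As a concrete illustration of the dataset-specific tuning described above, here is a minimal PyTorch sketch of fine-tuning a pretrained EfficientNet-B3 "core" classifier on a new target dataset. The backbone matches the abstract, but the freezing policy, optimiser, and hyperparameters are illustrative assumptions, not the authors' published configuration.

```python
# Sketch: build a "core" EfficientNet-B3 classifier and fine-tune it on a new
# target dataset, as in the dataset-specific tuning strategy described above.
# The freezing policy and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_core_model(num_classes: int = 2) -> nn.Module:
    # EfficientNet-B3 backbone with a fresh glaucoma/normal classification head
    model = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
    return model

def fine_tune(model: nn.Module, target_loader, epochs: int = 5, lr: float = 1e-4):
    # Dataset-specific tuning: freeze the feature extractor and retrain the
    # head on images from the external dataset.
    for p in model.features.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in target_loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model
```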
52
Shin Y, Cho H, Shin YU, Seong M, Choi JW, Lee WJ. Comparison between Deep-Learning-Based Ultra-Wide-Field Fundus Imaging and True-Colour Confocal Scanning for Diagnosing Glaucoma. J Clin Med 2022; 11:3168. [PMID: 35683577] [PMCID: PMC9181263] [DOI: 10.3390/jcm11113168]
Abstract
In this retrospective, comparative study, we evaluated and compared the performance of two confocal imaging modalities in detecting glaucoma based on a deep learning (DL) classifier: ultra-wide-field (UWF) fundus imaging and true-colour confocal scanning. A total of 777 eyes, including 273 normal control eyes and 504 glaucomatous eyes, were tested. A convolutional neural network was used for each true-colour confocal scan (Eidon AF™, CenterVue, Padova, Italy) and UWF fundus image (Optomap™, Optos PLC, Dunfermline, UK) to detect glaucoma. Each diagnostic model was developed using 545 training images and evaluated on 232 test images. The presence of glaucoma was determined, and the accuracy and area under the receiver operating characteristic curve (AUC) metrics were assessed for diagnostic power comparison. DL-based UWF fundus imaging achieved an AUC of 0.904 (95% confidence interval (CI): 0.861−0.937) and accuracy of 83.62%. In contrast, DL-based true-colour confocal scanning achieved an AUC of 0.868 (95% CI: 0.824−0.912) and accuracy of 81.46%. The two DL-based confocal imaging modalities showed no significant difference in their ability to diagnose glaucoma (p = 0.135) and were comparable to the traditional optical coherence tomography parameter-based methods (all p > 0.005). Therefore, using a DL-based algorithm on true-colour confocal scanning and UWF fundus imaging, we confirmed that both confocal fundus imaging techniques had high value in diagnosing glaucoma.
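The comparison above hinges on test-set AUCs with confidence intervals. Below is a small sketch of one common way to estimate them, bootstrap resampling with scikit-learn's roc_auc_score, run once per modality; this is an illustrative stand-in, not the paper's exact statistical procedure.

```python
# Sketch: test-set AUC with a bootstrap confidence interval for one imaging
# modality; run once per modality and compare. Illustrative stand-in only.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # need both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)
```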
Affiliation(s)
- Younji Shin: Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Hyunsoo Cho: Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Yong Un Shin: Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Mincheol Seong: Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Jun Won Choi: Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea (correspondence; Tel.: +82-2-2290-2316)
- Won June Lee: Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea (correspondence; Tel.: +82-2-2290-8570)
53
54
Lazaridis G, Montesano G, Afgeh SS, Mohamed-Noriega J, Ourselin S, Lorenzi M, Garway-Heath DF. Predicting Visual Fields From Optical Coherence Tomography via an Ensemble of Deep Representation Learners. Am J Ophthalmol 2022; 238:52-65. [PMID: 34998718] [DOI: 10.1016/j.ajo.2021.12.020]
Abstract
PURPOSE To develop and validate a deep learning method of predicting visual function from spectral domain optical coherence tomography (SD-OCT)-derived retinal nerve fiber layer thickness (RNFLT) measurements and corresponding SD-OCT images. DESIGN Development and evaluation of diagnostic technology. METHODS Two deep learning ensemble models to predict pointwise VF sensitivity from SD-OCT images (model 1: RNFLT profile only; model 2: RNFLT profile plus SD-OCT image) and 2 reference models were developed. All models were tested in an independent test-retest data set comprising 2181 SD-OCT/VF pairs; the median of ∼10 VFs per eye was taken as the best available estimate (BAE) of the true VF. The performance of single VFs predicting the BAE VF was also evaluated. The training data set comprised 954 eyes of 220 healthy and 332 glaucomatous participants, and the test data set, 144 eyes of 72 glaucomatous participants. The main outcome measures included the pointwise prediction mean error (ME), mean absolute error (MAE), and correlation of predictions with the BAE VF sensitivity. RESULTS The median mean deviation was -4.17 dB (-14.22 to 0.88). Model 2 had excellent accuracy (ME 0.5 dB, SD 0.8) and overall performance (MAE 2.3 dB, SD 3.1), and significantly (paired t test) outperformed the other methods. For single VFs predicting the BAE VF, the pointwise MAE was 1.5 dB (SD 0.7). The association between SD-OCT and single VF predictions of the BAE pointwise VF sensitivities was R2 = 0.78 and R2 = 0.88, respectively. CONCLUSIONS Our method outperformed standard statistical and deep learning approaches. Predictions of BAEs from OCT images approached the accuracy of single real VF estimates of the BAE.
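The reported mean error, mean absolute error, and correlation are straightforward to compute once pointwise predictions exist; a minimal NumPy sketch with illustrative array names follows.

```python
# Sketch: the pointwise error metrics reported above. `pred` and `bae` are
# (n_eyes, n_points) arrays of predicted and best-available-estimate visual
# field sensitivities in dB; the names are illustrative.
import numpy as np

def pointwise_errors(pred: np.ndarray, bae: np.ndarray):
    err = pred - bae
    me = err.mean()                        # mean error (bias), dB
    mae = np.abs(err).mean()               # mean absolute error, dB
    r2 = np.corrcoef(pred.ravel(), bae.ravel())[0, 1] ** 2  # association with BAE
    return me, mae, r2
```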
Affiliation(s)
- Georgios Lazaridis: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom
- Giovanni Montesano: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Optometry and Visual Sciences, City, University of London, London, United Kingdom
- Jibran Mohamed-Noriega: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Departamento de Oftalmología, Hospital Universitario, UANL, México
- Sebastien Ourselin: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Marco Lorenzi: Université Côte d'Azur, Inria Sophia Antipolis, Epione Research Project, Valbonne, France
- David F Garway-Heath: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
55
Kaskar OG, Wells-Gray E, Fleischman D, Grace L. Evaluating machine learning classifiers for glaucoma referral decision support in primary care settings. Sci Rep 2022; 12:8518. [PMID: 35595794] [PMCID: PMC9122936] [DOI: 10.1038/s41598-022-12270-w]
Abstract
Several artificial intelligence algorithms have been proposed to help diagnose glaucoma by analyzing functional and/or structural changes in the eye. These algorithms require carefully curated datasets with access to ocular images. In the current study, we modeled and evaluated classifiers that predict self-reported glaucoma using a single, easily obtained ocular feature (intraocular pressure (IOP)) and non-ocular features (age, gender, race, body mass index, systolic and diastolic blood pressure, and comorbidities). The classifiers were trained on publicly available data from 3015 subjects who had no glaucoma diagnosis at the time of enrollment; 337 of these subjects subsequently self-reported a glaucoma diagnosis 1–12 years after enrollment. The classifiers were evaluated on their ability to identify these subjects using only the features recorded at enrollment. Support vector machine, logistic regression, and adaptive boosting performed similarly on the dataset, with F1 scores of 0.31, 0.30, and 0.28, respectively. Logistic regression had the highest sensitivity at 60%, with a specificity of 69%. Predictive classifiers using primarily non-ocular features have the potential to identify suspected glaucoma in non-eye-care settings, including primary care. Further research into additional features that improve the performance of such classifiers is warranted.
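A hedged scikit-learn sketch of the three classifiers named above, compared by cross-validated F1 score, follows. Synthetic, imbalanced toy data stand in for the real enrollment features, and hyperparameters are illustrative.

```python
# Sketch: the three classifiers named above, compared by cross-validated F1.
# Synthetic toy data stand in for the real enrollment features (IOP, age,
# blood pressure, ...); hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=3015, n_features=8, weights=[0.89],
                           random_state=0)  # ~11% positives, as in the cohort
models = {
    "SVM": make_pipeline(StandardScaler(), SVC(class_weight="balanced")),
    "LogisticRegression": make_pipeline(
        StandardScaler(), LogisticRegression(class_weight="balanced", max_iter=1000)),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: F1 = {f1.mean():.2f} +/- {f1.std():.2f}")
```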
Affiliation(s)
- Omkar G Kaskar: North Carolina State University, Raleigh, NC 27695, USA
- David Fleischman: University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Landon Grace: North Carolina State University, Raleigh, NC 27695, USA
56
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538] [DOI: 10.1097/ijg.0000000000002015]
Abstract
PURPOSE Artificial intelligence (AI) has been shown as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting their implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed data set and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights that before such algorithms can be implemented into clinical care, a number of issues need to be addressed. With substantial heterogeneity across studies, many factors were found to affect the diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, implementing external data validation, and analysis across different ethnicity groups.
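The review pools accuracy with a bivariate random-effects model. As a deliberately simplified, univariate stand-in (not the bivariate model the authors used), here is a DerSimonian-Laird random-effects pooling of logit-transformed study sensitivities in NumPy.

```python
# Sketch: DerSimonian-Laird random-effects pooling of logit-transformed
# study sensitivities. A simplified univariate stand-in for the bivariate
# random-effects model the review actually uses; tp/fn are per-study counts.
import numpy as np

def pooled_sensitivity(tp, fn):
    tp = np.asarray(tp, dtype=float) + 0.5   # continuity correction
    fn = np.asarray(fn, dtype=float) + 0.5
    sens = tp / (tp + fn)
    y = np.log(sens / (1 - sens))            # logit scale
    v = 1 / tp + 1 / fn                      # approximate within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                  # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-y_pooled))       # back-transform to a proportion
```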
Affiliation(s)
- Abadh K Chaurasia: Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania, Australia
- Connor J Greatbatch: Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania, Australia
- Alex W Hewitt: Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania, Australia; Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
57
A Comprehensive Review of Methods and Equipment for Aiding Automatic Glaucoma Tracking. Diagnostics (Basel) 2022; 12:935. [PMID: 35453985] [PMCID: PMC9031684] [DOI: 10.3390/diagnostics12040935]
Abstract
Glaucoma is a chronic optic neuropathy characterized by irreversible damage to the retinal nerve fiber layer (RNFL), resulting in changes in the visual field (VF). Glaucoma screening is performed through a complete ophthalmological examination, using images of the optic papilla obtained in vivo for the evaluation of glaucomatous characteristics, together with eye pressure and visual field testing. Identifying the glaucomatous papilla is particularly important, as optic papilla images are considered the gold standard for tracking. This article therefore reviews the diagnostic methods used to identify the glaucomatous papilla through technology over the last five years. Based on the analyzed works, the current state-of-the-art methods are identified, the current challenges are analyzed, and the shortcomings of these methods are investigated, especially from the point of view of automation and independence in performing these measurements. Finally, topics for future work and the challenges that need to be solved are proposed.
58
Fan R, Bowd C, Christopher M, Brye N, Proudfoot JA, Rezapour J, Belghith A, Goldbaum MH, Chuter B, Girkin CA, Fazio MA, Liebmann JM, Weinreb RN, Gordon MO, Kass MA, Kriegman D, Zangwill LM. Detecting Glaucoma in the Ocular Hypertension Study Using Deep Learning. JAMA Ophthalmol 2022; 140:383-391. [PMID: 35297959] [PMCID: PMC8931672] [DOI: 10.1001/jamaophthalmol.2022.0244]
Abstract
IMPORTANCE Automated deep learning (DL) analyses of fundus photographs potentially can reduce the cost and improve the efficiency of reading center assessment of end points in clinical trials. OBJECTIVE To investigate the diagnostic accuracy of DL algorithms trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG). DESIGN, SETTING, AND PARTICIPANTS This diagnostic study included 1636 OHTS participants from 22 sites with a mean (range) follow-up of 10.7 (0-14.3) years. A total of 66,715 photographs from 3272 eyes were used to train and test a ResNet-50 model to detect the OHTS Endpoint Committee POAG determination based on optic disc (287 eyes, 3502 photographs) and/or visual field (198 eyes, 2300 visual fields) changes. Three independent test sets were used to evaluate the generalizability of the model. MAIN OUTCOMES AND MEASURES Areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities were calculated to compare model performance. Evaluation of false-positive rates was used to determine whether the DL model detected POAG before the OHTS Endpoint Committee POAG determination. RESULTS A total of 1147 participants were included in the training set (661 [57.6%] female; mean age, 57.2 years; 95% CI, 56.6-57.8), 167 in the validation set (97 [58.1%] female; mean age, 57.1 years; 95% CI, 55.6-58.7), and 322 in the test set (173 [53.7%] female; mean age, 57.2 years; 95% CI, 56.1-58.2). The DL model achieved an AUROC of 0.88 (95% CI, 0.82-0.92) for the OHTS Endpoint Committee determination of optic disc or VF changes. For the OHTS end points based on optic disc changes or visual field changes, AUROCs were 0.91 (95% CI, 0.88-0.94) and 0.86 (95% CI, 0.76-0.93), respectively. False-positive rates (at 90% specificity) were higher in photographs of eyes that later developed POAG by disc or visual field (27.5% [56 of 204]) compared with eyes that did not develop POAG (11.4% [50 of 440]) during follow-up. The diagnostic accuracy of the DL model developed on the optic disc end point applied to 3 independent data sets was lower, with AUROCs ranging from 0.74 (95% CI, 0.70-0.77) to 0.79 (95% CI, 0.78-0.81). CONCLUSIONS AND RELEVANCE The model's high diagnostic accuracy using OHTS photographs suggests that DL has the potential to standardize and automate POAG determination for clinical trials and management. In addition, the higher false-positive rate in early photographs of eyes that later developed POAG suggests that DL models detected POAG in some eyes earlier than the OHTS Endpoint Committee, reflecting the OHTS design that emphasized a high specificity for POAG determination by requiring a clinically significant change from baseline.
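Sensitivity at a fixed specificity, the comparison metric used above, can be read off the ROC curve; a short scikit-learn sketch with illustrative array names follows.

```python
# Sketch: sensitivity at a fixed specificity, read off the ROC curve of
# test-set scores. Array names are illustrative.
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.90):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = (1 - fpr) >= target_specificity  # operating points meeting the floor
    return tpr[ok].max() if ok.any() else 0.0
```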
Affiliation(s)
- Rui Fan: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla; Department of Computer Science and Engineering, University of California, San Diego, La Jolla; Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Christopher Bowd: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Mark Christopher: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Nicole Brye: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- James A. Proudfoot: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Jasmin Rezapour: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla; Department of Ophthalmology, Universitätsmedizin der Johannes Gutenberg-Universität Mainz, Mainz, Rheinland-Pfalz, Germany
- Akram Belghith: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Michael H. Goldbaum: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Benton Chuter: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Christopher A. Girkin: Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham
- Massimo A. Fazio: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla; Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham; Department of Biomedical Engineering, School of Engineering, The University of Alabama at Birmingham
- Jeffrey M. Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York
- Robert N. Weinreb: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
- Mae O. Gordon: Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Washington University in St Louis, St Louis, Missouri
- Michael A. Kass: Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, Washington University in St Louis, St Louis, Missouri
- David Kriegman: Department of Computer Science and Engineering, University of California, San Diego, La Jolla
- Linda M. Zangwill: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, La Jolla
59
Deep Learning Image Analysis of Optical Coherence Tomography Angiography Measured Vessel Density Improves Classification of Healthy and Glaucoma Eyes. Am J Ophthalmol 2022; 236:298-308. [PMID: 34780803] [PMCID: PMC10042115] [DOI: 10.1016/j.ajo.2021.11.008]
Abstract
PURPOSE To compare convolutional neural network (CNN) analysis of en face vessel density images to gradient boosting classifier (GBC) analysis of instrument-provided, feature-based optical coherence tomography angiography (OCTA) vessel density measurements and OCT retinal nerve fiber layer (RNFL) thickness measurements for classifying healthy and glaucomatous eyes. DESIGN Comparison of diagnostic approaches. METHODS A total of 130 eyes of 80 healthy individuals and 275 eyes of 185 glaucoma patients with optic nerve head (ONH) OCTA and OCT imaging were included. Classification performance of a VGG16 CNN trained and tested on entire en face 4.5 × 4.5-mm radial peripapillary capillary OCTA ONH images was compared to the performance of separate GBC models trained and tested on standard OCTA and OCT measurements. Five-fold cross-validation was used to test predictions for CNNs and GBCs. Areas under the precision recall curves (AUPRC) were calculated to control for training/test set size imbalance and were compared. RESULTS Adjusted AUPRCs for GBC models were 0.89 (95% CI = 0.82, 0.92) for whole image vessel density GBC, 0.89 (0.83, 0.92) for whole image capillary density GBC, 0.91 (0.88, 0.93) for combined whole image vessel and whole image capillary density GBC, and 0.93 (0.91, 0.95) for RNFL thickness GBC. The adjusted AUPRC using CNN analysis of en face vessel density images was 0.97 (0.95, 0.99) resulting in significantly improved classification compared to GBC OCTA-based results and GBC OCT-based results (P ≤ 0.01 for all comparisons). CONCLUSION Deep learning en face image analysis improves on feature-based GBC models for classifying healthy and glaucoma eyes.
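A sketch of the AUPRC evaluation with five-fold cross-validated predictions follows, using scikit-learn's gradient boosting as the GBC. The paper's adjustment for training/test set size imbalance is not reproduced; this shows the unadjusted AUPRC on synthetic stand-in data.

```python
# Sketch: AUPRC from 5-fold cross-validated predictions, with scikit-learn's
# gradient boosting standing in for the GBC. Synthetic toy data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import cross_val_predict

# Toy stand-in for 130 healthy / 275 glaucoma eyes with tabular OCT/OCTA features
X, y = make_classification(n_samples=405, n_features=12, weights=[0.32],
                           random_state=0)
gbc = GradientBoostingClassifier(random_state=0)
scores = cross_val_predict(gbc, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUPRC = {average_precision_score(y, scores):.2f} "
      f"(chance level = prevalence = {y.mean():.2f})")
```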
60
Lee J, Warner E, Shaikhouni S, Bitzer M, Kretzler M, Gipson D, Pennathur S, Bellovich K, Bhat Z, Gadegbeku C, Massengill S, Perumal K, Saha J, Yang Y, Luo J, Zhang X, Mariani L, Hodgin JB, Rao A. Unsupervised machine learning for identifying important visual features through bag-of-words using histopathology data from chronic kidney disease. Sci Rep 2022; 12:4832. [PMID: 35318420] [PMCID: PMC8941143] [DOI: 10.1038/s41598-022-08974-8]
Abstract
Pathologists use visual classification to assess patient kidney biopsy samples when diagnosing the underlying cause of kidney disease. However, the assessment is qualitative, or semi-quantitative at best, and reproducibility is challenging. To discover previously unknown features that predict patient outcomes and to overcome substantial interobserver variability, we developed an unsupervised bag-of-words model. We applied it to the C-PROBE cohort of patients with chronic kidney disease (CKD): 107,471 histopathology images were obtained from 161 biopsy cores and used to identify important morphological features in biopsy tissue that are highly predictive of the presence of CKD both at the time of biopsy and one year later. To evaluate the performance of our model, we estimated the AUC and its 95% confidence interval. We show that this method is reliable and reproducible and can achieve 0.93 AUC at predicting glomerular filtration rate at the time of biopsy as well as predicting a loss of function at one year. Additionally, with this method, we ranked the identified morphological features according to their importance as diagnostic markers for chronic kidney disease. In this study, we have demonstrated the feasibility of using an unsupervised machine learning method without human input to predict the level of kidney function in CKD. The results indicate that the visual dictionary, or visual image pattern, obtained from unsupervised machine learning can predict outcomes using machine-derived values that correspond to both known and unknown clinically relevant features.
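A minimal sketch of the visual bag-of-words idea described above: cluster patch descriptors into a "visual dictionary" with k-means, then represent each image as a normalised histogram of visual words. Patch extraction and the descriptor choice are assumptions, not the authors' pipeline.

```python
# Sketch: a visual bag-of-words pipeline of the kind described above.
# Descriptor extraction is assumed given; clustering builds the dictionary.
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(all_patch_descriptors: np.ndarray, n_words: int = 100) -> KMeans:
    # all_patch_descriptors: (n_patches, n_dims), pooled across the whole cohort
    return KMeans(n_clusters=n_words, random_state=0).fit(all_patch_descriptors)

def image_histogram(dictionary: KMeans, descriptors: np.ndarray) -> np.ndarray:
    # Assign each patch to its nearest visual word and count occurrences.
    words = dictionary.predict(descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()  # normalise so images with more patches compare fairly
```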
Affiliation(s)
- Joonsang Lee: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Elisa Warner: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Salma Shaikhouni: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Markus Bitzer: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Matthias Kretzler: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Debbie Gipson: Department of Pediatrics, Pediatric Nephrology, University of Michigan, Ann Arbor, MI, USA
- Subramaniam Pennathur: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Keith Bellovich: Department of Internal Medicine, Nephrology, St. Clair Nephrology Research, Detroit, MI, USA
- Zeenat Bhat: Department of Internal Medicine, Nephrology, Wayne State University, Detroit, MI, USA
- Crystal Gadegbeku: Department of Internal Medicine, Nephrology, Cleveland Clinic, Cleveland, OH, USA
- Susan Massengill: Department of Pediatrics, Pediatric Nephrology, Levine Children's Hospital, Charlotte, NC, USA
- Kalyani Perumal: Department of Internal Medicine, Nephrology, JH Stroger Hospital, Chicago, IL, USA
- Jharna Saha: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Yingbao Yang: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jinghui Luo: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Xin Zhang: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Laura Mariani: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Jeffrey B Hodgin: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Arvind Rao: Department of Computational Medicine and Bioinformatics; Department of Biostatistics; Department of Radiation Oncology; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
61
DSLN: Dual-tutor student learning network for multiracial glaucoma detection. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07078-8]
62
Ittoop SM, Jaccard N, Lanouette G, Kahook MY. The Role of Artificial Intelligence in the Diagnosis and Management of Glaucoma. J Glaucoma 2022; 31:137-146. [PMID: 34930873] [DOI: 10.1097/ijg.0000000000001972]
Abstract
Glaucomatous optic neuropathy is the leading cause of irreversible blindness worldwide. Diagnosis and monitoring of the disease involve integrating information from the clinical examination with subjective data from visual field testing and objective biometric data that include pachymetry, corneal hysteresis, and optic nerve and retinal imaging. This intricate process is further complicated by the lack of clear definitions for the presence and progression of glaucomatous optic neuropathy, which makes it vulnerable to clinician interpretation error. Artificial intelligence (AI) and AI-enabled workflows have been proposed as a plausible solution. Applications derived from this field of computer science can improve the quality and robustness of insights obtained from clinical data and can enhance the clinician's approach to patient care. This review clarifies key terms and concepts used in AI literature, discusses the current advances of AI in glaucoma, elucidates the clinical advantages and challenges to implementing this technology, and highlights potential future applications.
Affiliation(s)
- Sabita M Ittoop: The George Washington University Medical Faculty Associates, Washington, DC
- Malik Y Kahook: Sue Anschutz-Rodgers Eye Center, The University of Colorado School of Medicine, Aurora, CO
63
Yuksel Elgin C, Chen D, Al-Aswad LA. Ophthalmic imaging for the diagnosis and monitoring of glaucoma: A review. Clin Exp Ophthalmol 2022; 50:183-197. [DOI: 10.1111/ceo.14044]
Affiliation(s)
- Cansu Yuksel Elgin: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
- Dinah Chen: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
- Lama A. Al-Aswad: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA; Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
64
Li M, Wan C. The use of deep learning technology for the detection of optic neuropathy. Quant Imaging Med Surg 2022; 12:2129-2143. [PMID: 35284277] [PMCID: PMC8899937] [DOI: 10.21037/qims-21-728]
Abstract
The emergence of computer graphics processing units (GPUs), improvements in mathematical models, and the availability of big data have allowed artificial intelligence (AI) to use machine learning and deep learning (DL) technology to achieve robust performance in various fields of medicine. The DL system provides improved capabilities, especially in image recognition and image processing. Recent progress in the sorting of AI data sets has stimulated great interest in the development of DL algorithms. Compared with subjective evaluation and other traditional methods, DL algorithms can identify diseases faster and more accurately in diagnostic tests. Medical imaging is of great significance in the clinical diagnosis and individualized treatment of ophthalmic diseases. Based on the morphological data sets of millions of data points, various image-related diagnostic techniques can now impart high-resolution information on anatomical and functional changes, thereby providing unprecedented insights in ophthalmic clinical practice. As ophthalmology relies heavily on imaging examinations, it is one of the first medical fields to apply DL algorithms in clinical practice. Such algorithms can assist in the analysis of large amounts of data acquired from the examination of auxiliary images. In recent years, rapid advancements in imaging technology have facilitated the application of DL in the automatic identification and classification of pathologies that are characteristic of ophthalmic diseases, thereby providing high-quality diagnostic information. This paper reviews the origins, development, and application of DL technology. The technical and clinical problems associated with building DL systems to meet clinical needs and the potential challenges of clinical application are discussed, especially in relation to the field of optic nerve diseases.
Affiliation(s)
- Mei Li: Department of Ophthalmology, Yanan People’s Hospital, Yanan, China
- Chao Wan: Department of Ophthalmology, the First Hospital of China Medical University, Shenyang, China
65
Singh LK, Pooja, Garg H, Khanna M. Deep learning system applicability for rapid glaucoma prediction from fundus images across various data sets. Evolving Systems 2022. [DOI: 10.1007/s12530-022-09426-4]
66
Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022; 90:101052. [PMID: 35216894] [DOI: 10.1016/j.preteyeres.2022.101052]
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
67
Akbar S, Hassan SA, Shoukat A, Alyami J, Bahaj SA. Detection of microscopic glaucoma through fundus images using deep transfer learning approach. Microsc Res Tech 2022; 85:2259-2276. [PMID: 35170136] [DOI: 10.1002/jemt.24083]
Abstract
Glaucoma in humans can lead to blindness if it progresses to the point where it damages the optic nerve head. It is not easily detected, since there are often no early symptoms, but it can be identified using tonometry, ophthalmoscopy, and perimetry. Advances in artificial intelligence have allowed machine learning techniques to support diagnosis at an early stage. Numerous machine learning methods have been proposed for diagnosing glaucoma on different data sets, but most are computationally complex. Although several medical imaging instruments serve as glaucoma screening tools, fundus imaging is the most widely used screening technique for glaucoma detection. This study presents a novel DenseNet and DarkNet combination to classify normal and glaucoma-affected fundus images. These frameworks have been trained and tested on three data sets: high-resolution fundus (HRF), RIM 1, and ACRIMA. A total of 658 images of healthy eyes and 612 images of glaucoma-affected eyes were used. The fusion of DenseNet and DarkNet outperforms the two individual CNN networks, achieving 99.7% accuracy, 98.9% sensitivity, and 100% specificity on the HRF database; 89.3% accuracy, 93.3% sensitivity, and 88.46% specificity on RIM1; and 99% accuracy, 100% sensitivity, and 99% specificity on ACRIMA. The proposed method is therefore robust and efficient, with less computational time and complexity than methods in the available literature.
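A sketch of the two-backbone feature-fusion idea follows. torchvision ships DenseNet but no DarkNet, so ResNet-50 stands in for the second backbone purely for illustration; the concatenate-then-classify pattern is the point, not the specific pair of networks.

```python
# Sketch: fusing features from two pretrained backbones for binary fundus
# classification. ResNet-50 stands in for DarkNet, which torchvision lacks.
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        dense = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        res = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.dense_features = dense.features                           # -> (B, 1024, h, w)
        self.res_features = nn.Sequential(*list(res.children())[:-1])  # -> (B, 2048, 1, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1024 + 2048, num_classes)                # fused features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.pool(self.dense_features(x)).flatten(1)
        b = self.res_features(x).flatten(1)
        return self.head(torch.cat([a, b], dim=1))
```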
Affiliation(s)
- Shahzad Akbar: Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Syed Ale Hassan: Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Ayesha Shoukat: Riphah College of Computing, Riphah International University, Faisalabad Campus, Faisalabad, Pakistan
- Jaber Alyami: Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Saeed Ali Bahaj: MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
68
Zoetmulder R, Gavves E, Caan M, Marquering H. Domain- and task-specific transfer learning for medical segmentation tasks. Comput Methods Programs Biomed 2022; 214:106539. [PMID: 34875512] [DOI: 10.1016/j.cmpb.2021.106539]
Abstract
BACKGROUND AND OBJECTIVES Transfer learning is a valuable approach to perform medical image segmentation in settings with limited cases available for training convolutional neural networks (CNN). Both the source task and the source domain influence transfer learning performance on a given target medical image segmentation task. This study aims to assess transfer learning-based medical segmentation task performance for various source task and domain combinations. METHODS CNNs were pre-trained on classification, segmentation, and self-supervised tasks on two domains: natural images and T1 brain MRI. Next, these CNNs were fine-tuned on three target T1 brain MRI segmentation tasks: stroke lesion, MS lesions, and brain anatomy segmentation. In all experiments, the CNN architecture and transfer learning strategy were the same. The segmentation accuracy on all target tasks was evaluated using the mIOU or Dice coefficients. The detection accuracy was evaluated for the stroke and MS lesion target tasks only. RESULTS CNNs pre-trained on a segmentation task on the same domain as the target tasks resulted in higher or similar segmentation accuracy compared to other source task and domain combinations. Pre-training a CNN on ImageNet resulted in a comparable, but not consistently higher lesion detection rate, despite the amount of training data used being 10 times larger. CONCLUSIONS This study suggests that optimal transfer learning for medical segmentation is achieved with a similar task and domain for pre-training. As a result, CNNs can be effectively pre-trained on smaller datasets by selecting a source domain and task similar to the target domain and task.
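A sketch of the recipe the study supports: initialise a segmentation network from weights pre-trained on a similar segmentation task and domain, then fine-tune on the small target dataset. The checkpoint path is hypothetical, and the architecture and learning rates are illustrative assumptions, not the authors' setup.

```python
# Sketch: initialise a target segmentation model from a checkpoint pre-trained
# on a similar segmentation task/domain, then fine-tune on the target data.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=2)  # target task: lesion vs background
state = torch.load("source_segmentation_pretrained.pt")  # hypothetical checkpoint
# Keep the pre-trained encoder weights; the task-specific head is re-learned.
encoder_state = {k: v for k, v in state.items() if not k.startswith("classifier")}
model.load_state_dict(encoder_state, strict=False)
optimizer = torch.optim.Adam([
    {"params": model.backbone.parameters(), "lr": 1e-5},    # gentle on the encoder
    {"params": model.classifier.parameters(), "lr": 1e-4},  # faster on the new head
])
```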
Affiliation(s)
- Riaan Zoetmulder: Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, Meibergdreef 15, 1105 AZ Amsterdam, the Netherlands; University of Amsterdam, Science Park 904, 1098 XH Amsterdam, the Netherlands
- Efstratios Gavves: University of Amsterdam, Science Park 904, 1098 XH Amsterdam, the Netherlands
- Matthan Caan: Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, Meibergdreef 15, 1105 AZ Amsterdam, the Netherlands
- Henk Marquering: Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, Meibergdreef 15, 1105 AZ Amsterdam, the Netherlands; Radiology & Nuclear Medicine, Amsterdam UMC, Location AMC, Meibergdreef 15, 1105 AZ Amsterdam, the Netherlands
69
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012] [DOI: 10.1016/j.acra.2021.05.014]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD in HRCT. MATERIALS AND METHODS We searched MEDLINE/PubMed databases for original publications of deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included tailored Quality Assessment of Diagnostic Accuracy Studies and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data was extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15/19 (78.9%) of the studies. CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
70
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:134. [PMID: 35054301] [PMCID: PMC8774893] [DOI: 10.3390/diagnostics12010134]
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chances of misdiagnosis while saving on labor and cost for physicians. With the feasibility and development of deep learning methods, machines are now able to interpret complex features in medical data, which leads to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and build frameworks based on analysis for the identification of retinopathy and the assessment of its severity. This paper reviews recent state-of-the-art works utilizing the color fundus image taken from one of the imaging modalities used in ophthalmology. Specifically, the deep learning methods of automated screening and diagnosis for diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, the machine learning techniques applied to the retinal vasculature extraction from the fundus image are covered. The challenges in developing these systems are also discussed.
71
Bunod R, Augstburger E, Brasnu E, Labbe A, Baudouin C. [Artificial intelligence and glaucoma: A literature review]. J Fr Ophtalmol 2022; 45:216-232. [PMID: 34991909] [DOI: 10.1016/j.jfo.2021.11.002]
Abstract
In recent years, research in artificial intelligence (AI) has experienced an unprecedented surge in the field of ophthalmology, in particular glaucoma. The diagnosis and follow-up of glaucoma are complex and rely on a body of clinical evidence and ancillary tests. This large amount of information from structural and functional testing of the optic nerve and macula makes glaucoma a particularly appropriate field for the application of AI. In this paper, we review work using AI in the field of glaucoma, whether for screening, diagnosis, or detection of progression. Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry. The combination of these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. We discuss potential applications as well as obstacles and limitations to the deployment and validation of such models. While there is no doubt that AI has the potential to revolutionize glaucoma management and screening, research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explainability of the predictions.
Affiliation(s)
- R Bunod: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Augstburger: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Brasnu: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France
- A Labbe: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
- C Baudouin: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
72
Schuman JS, Angeles Ramos Cadena MDL, McGee R, Al-Aswad LA, Medeiros FA. A Case for the Use of Artificial Intelligence in Glaucoma Assessment. Ophthalmol Glaucoma 2021; 5:e3-e13. [PMID: 34954220] [PMCID: PMC9133028] [DOI: 10.1016/j.ogla.2021.12.003]
Abstract
We hypothesize that artificial intelligence applied to relevant clinical testing in glaucoma has the potential to enhance the ability to detect glaucoma. This premise was discussed at the recent Collaborative Community for Ophthalmic Imaging meeting, "The Future of Artificial Intelligence-Enabled Ophthalmic Image Interpretation: Accelerating Innovation and Implementation Pathways," held virtually September 3-4, 2020. The Collaborative Community in Ophthalmic Imaging (CCOI) is an independent self-governing consortium of stakeholders with broad international representation from academic institutions, government agencies, and the private sector whose mission is to act as a forum for the purpose of helping speed innovation in healthcare technology. It was one of the first two such organizations officially designated by the FDA in September 2019 in response to their announcement of the collaborative community program as a strategic priority for 2018-2020. Further information on the CCOI can be found online at their website (https://www.cc-oi.org/about). Artificial intelligence for glaucoma diagnosis would have high utility globally, as access to care is limited in many parts of the world and half of all people with glaucoma are unaware of their illness. The application of artificial intelligence technology to glaucoma diagnosis has the potential to broadly increase access to care worldwide, in essence flattening the Earth by providing expert level evaluation to individuals even in the most remote regions of the planet.
Affiliation(s)
- Joel S Schuman: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA; Departments of Biomedical Engineering and Electrical and Computer Engineering, New York University Tandon School of Engineering, Brooklyn, NY, USA; Center for Neural Science, NYU, New York, NY, USA; Neuroscience Institute, NYU Langone Health, New York, NY, USA
- Rebecca McGee: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Lama A Al-Aswad: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA; Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Felipe A Medeiros: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
73
Schottenhamml J, Würfl T, Mardin S, Ploner SB, Husvogt L, Hohberger B, Lämmer R, Mardin C, Maier A. Glaucoma classification in 3 × 3 mm en face macular scans using deep learning in a different plexus. Biomed Opt Express 2021; 12:7434-7444. [PMID: 35003844] [PMCID: PMC8713669] [DOI: 10.1364/boe.439991]
Abstract
Glaucoma is among the leading causes of irreversible blindness worldwide. If diagnosed and treated early enough, the disease progression can be stopped or slowed down. Therefore, it would be very valuable to detect early stages of glaucoma, which are mostly asymptomatic, by broad screening. This study examines different computational features that can be automatically deduced from images and their performance on the classification task of differentiating glaucoma patients and healthy controls. Data used for this study are 3 × 3 mm en face optical coherence tomography angiography (OCTA) images of different retinal projections (of the whole retina, the superficial vascular plexus (SVP), the intermediate capillary plexus (ICP) and the deep capillary plexus (DCP)) centered around the fovea. Our results show quantitatively that the automatically extracted features from convolutional neural networks (CNNs) perform similarly well or better than handcrafted ones when used to distinguish glaucoma patients from healthy controls. On the whole retina projection and the SVP projection, CNNs outperform the handcrafted features presented in the literature. The area under the receiver operating characteristic curve (AUROC) on the SVP projection is 0.967, which is comparable to the best reported values in the literature. This is achieved despite using the small 3 × 3 mm field of view, which has been reported as disadvantageous for handcrafted vessel density features in previous works. A detailed analysis of our CNN method, using attention maps, suggests that this performance increase can be partially explained by the CNN automatically relying more on areas of higher relevance for feature extraction.
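Grad-CAM is one common way to produce the kind of CNN attention maps discussed above; a minimal PyTorch sketch follows. The backbone, the target layer, and the assumption that a Grad-CAM-style method suffices here are all illustrative; the paper's exact attention-map technique may differ.

```python
# Sketch: a Grad-CAM-style attention map, one common way to visualise which
# image regions a CNN relies on. Backbone and target layer are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    # image: (1, 3, H, W) normalised input tensor
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of feature maps
    cam = cam / (cam.max() + 1e-8)                       # normalise to [0, 1]
    return F.interpolate(cam[None], size=image.shape[-2:], mode="bilinear")[0]
```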
Affiliation(s)
- Julia Schottenhamml: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department of Ophthalmology and Eye Hospital, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sophia Mardin: Department of Information Systems and Services, Otto-Friedrich-Universität Bamberg, Bamberg, Germany
- Stefan B Ploner: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Lennart Husvogt: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Bettina Hohberger: Department of Ophthalmology and Eye Hospital, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Robert Lämmer: Department of Ophthalmology and Eye Hospital, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Christian Mardin: Department of Ophthalmology and Eye Hospital, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
74
Peroni A, Paviotti A, Campigotto M, Abegão Pinto L, Cutolo CA, Gong J, Patel S, Cobb C, Gillan S, Tatham A, Trucco E. Semantic segmentation of gonio-photographs via adaptive ROI localisation and uncertainty estimation. BMJ Open Ophthalmol 2021; 6:e000898. [PMID: 34901467] [PMCID: PMC8627415] [DOI: 10.1136/bmjophth-2021-000898]
Abstract
OBJECTIVE To develop and test a deep learning (DL) model for semantic segmentation of anatomical layers of the anterior chamber angle (ACA) in digital gonio-photographs. METHODS AND ANALYSIS We used a pilot dataset of 274 ACA sector images, annotated by expert ophthalmologists to delineate five anatomical layers: iris root, ciliary body band, scleral spur, trabecular meshwork and cornea. Narrow depth of field and peripheral vignetting prevented clinicians from annotating part of each image with sufficient confidence, introducing a degree of subjectivity and feature correlation into the ground truth. To overcome these limitations, we present a DL model designed and trained to perform two tasks simultaneously: (1) maximise the segmentation accuracy within the annotated region of each frame and (2) identify a region of interest (ROI) based on local image informativeness. Moreover, our calibrated model makes its results interpretable by returning pixel-wise classification uncertainty through Monte Carlo dropout. RESULTS The model was trained and validated in a 5-fold cross-validation experiment on ~90% of the available data, achieving ~91% average segmentation accuracy within the annotated part of each ground-truth image of the hold-out test set. An appropriate ROI was successfully identified in all test frames. The uncertainty estimation module correctly located inaccuracies and errors in the segmentation outputs. CONCLUSION The proposed model improves on the only previously published work on gonio-photograph segmentation and may be a valid support for the automatic processing of these images to evaluate local tissue morphology. Uncertainty estimation is expected to facilitate acceptance of this system in clinical settings.
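The uncertainty module described above relies on Monte Carlo dropout. A minimal sketch of the idea, assuming a toy stand-in network rather than the published architecture: dropout is kept active at inference, the network is run several times, and per-pixel predictive entropy serves as the uncertainty map.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(net, image, n_samples=20):
    """Run the net n_samples times with dropout active; return the mean
    segmentation and a per-pixel predictive-entropy uncertainty map."""
    net.train()  # keeps Dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(image), dim=1)
                             for _ in range(n_samples)])
    mean = probs.mean(0)                                   # (B, C, H, W)
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(1)  # (B, H, W)
    return mean.argmax(1), entropy

# Toy stand-in with 6 classes: 5 anatomical layers + background.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Dropout2d(0.5), nn.Conv2d(16, 6, 1))
segmentation, uncertainty = mc_dropout_predict(net, torch.randn(1, 3, 128, 128))
```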
Affiliation(s)
- Andrea Peroni
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Anna Paviotti
- Department of Research and Development, NIDEK Technologies Srl, Albignasego, Italy
- Mauro Campigotto
- Department of Research and Development, NIDEK Technologies Srl, Albignasego, Italy
- Luis Abegão Pinto
- Department of Ophthalmology, Hospital de Santa Maria, Lisbon, Portugal
- Jacintha Gong
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Sirjhun Patel
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Caroline Cobb
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Stewart Gillan
- Department of Ophthalmology, Ninewells Hospital, NHS Tayside, Dundee, UK
- Andrew Tatham
- Department of Ophthalmology, Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK

75
Guo C, Yu M, Li J. Prediction of Different Eye Diseases Based on Fundus Photography via Deep Transfer Learning. J Clin Med 2021; 10:jcm10235481. [PMID: 34884192 PMCID: PMC8658397 DOI: 10.3390/jcm10235481] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 11/15/2021] [Accepted: 11/16/2021] [Indexed: 12/02/2022] Open
Abstract
With recent advancements in machine learning, especially deep learning, the prediction of eye diseases from fundus photography using deep convolutional neural networks (DCNNs) has attracted great attention. However, studies that focus on identifying the right disease among several candidates, which better approximates clinical diagnosis in practice than distinguishing one particular eye disease from normal controls, are limited. The performance of existing algorithms for multi-class classification of fundus images is at most mediocre. Moreover, in many studies spanning different eye diseases, labeled images are quite limited, mainly due to patient privacy concerns. In this case, it is infeasible to train huge DCNNs, which usually have millions of parameters. To address these challenges, we propose to use a lightweight deep learning architecture called MobileNetV2 together with transfer learning to distinguish four common eye diseases, namely glaucoma, maculopathy, pathological myopia, and retinitis pigmentosa, from normal controls using a small training dataset. We also apply a visualization approach to highlight the loci most related to the disease labels, making the model more explainable. The highlighted areas chosen by the algorithm itself may give hints for further fundus image studies. Our experimental results show that our system achieves an average accuracy of 96.2%, sensitivity of 90.4%, and specificity of 97.6% on the test data over five independent runs, and outperforms two other deep learning-based algorithms in terms of both accuracy and efficiency.
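A hedged sketch of the transfer-learning recipe the abstract outlines, assuming a PyTorch/torchvision setting rather than the authors' exact framework: a pretrained MobileNetV2 backbone is frozen and only a new five-way head (four diseases plus normal) is trained on the small dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze the ImageNet-pretrained backbone; train only a new 5-way head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, 5)  # 4 diseases + normal

optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)  # stand-in fundus batch
y = torch.tensor([0, 1, 2, 3])   # stand-in class indices
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```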
Affiliation(s)
- Chen Guo
- Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, OH 44106, USA
- Minzhong Yu
- Department of Ophthalmology, University Hospitals, Case Western Reserve University, Cleveland, OH 44101, USA
- Correspondence: (M.Y.); (J.L.)
- Jing Li
- Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, OH 44106, USA
- Correspondence: (M.Y.); (J.L.)

76
Christopher M, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM. Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics Based on Thickness Maps from Macula OCT. Ophthalmology 2021; 128:1534-1548. [PMID: 33901527 DOI: 10.1016/j.ophtha.2021.04.022] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 03/16/2021] [Accepted: 04/19/2021] [Indexed: 01/27/2023] Open
Abstract
PURPOSE To develop deep learning (DL) systems estimating visual function from macula-centered spectral-domain (SD) OCT images. DESIGN Evaluation of a diagnostic technology. PARTICIPANTS A total of 2408 10-2 visual field (VF) SD OCT pairs and 2999 24-2 VF SD OCT pairs collected from 645 healthy and glaucoma subjects (1222 eyes). METHODS Deep learning models were trained on thickness maps from Spectralis macula SD OCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Individual and combined DL models were trained using thickness data from 6 layers (retinal nerve fiber layer [RNFL], ganglion cell layer [GCL], inner plexiform layer [IPL], ganglion cell-IPL [GCIPL], ganglion cell complex [GCC], and retina). Linear regression of mean layer thicknesses was used for comparison. MAIN OUTCOME MEASURES Deep learning models were evaluated using R2 and mean absolute error (MAE) compared with 10-2 and 24-2 VF measurements. RESULTS Combined DL models estimating 10-2 achieved R2 of 0.82 (95% confidence interval [CI], 0.68-0.89) for MD and 0.69 (95% CI, 0.55-0.81) for PSD and MAEs of 1.9 dB (95% CI, 1.6-2.4 dB) for MD and 1.5 dB (95% CI, 1.2-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 10-2 MD (0.61 [95% CI, 0.47-0.71] and 3.0 dB [95% CI, 2.5-3.5 dB]) and 10-2 PSD (0.46 [95% CI, 0.31-0.60] and 2.3 dB [95% CI, 1.8-2.7 dB]). Combined DL models estimating 24-2 achieved R2 of 0.79 (95% CI, 0.72-0.84) for MD and 0.68 (95% CI, 0.53-0.79) for PSD and MAEs of 2.1 dB (95% CI, 1.8-2.5 dB) for MD and 1.5 dB (95% CI, 1.3-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 24-2 MD (0.41 [95% CI, 0.26-0.57] and 3.4 dB [95% CI, 2.7-4.5 dB]) and 24-2 PSD (0.38 [95% CI, 0.20-0.57] and 2.4 dB [95% CI, 2.0-2.8 dB]). The GCIPL (R2 = 0.79) and GCC (R2 = 0.75) had the highest performance estimating 10-2 and 24-2 MD, respectively. CONCLUSIONS Deep learning models improved estimates of functional loss from SD OCT imaging. Accurate estimates can help clinicians individualize VF testing to patients.
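A minimal sketch (not the study's code; the tiny linear regressor and random tensors are placeholders) of the evaluation logic: a model regresses MD and PSD in dB from a thickness map, and R2 and MAE are computed against the measured visual-field values.

```python
import torch
import torch.nn as nn
from sklearn.metrics import r2_score, mean_absolute_error

# Placeholder regressor: thickness map -> [MD, PSD] in dB.
head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
thickness_maps = torch.randn(16, 1, 64, 64)  # stand-in GCIPL thickness maps
measured = torch.randn(16, 2)                # stand-in measured [MD, PSD]

pred = head(thickness_maps)
loss = nn.functional.mse_loss(pred, measured)
loss.backward()

# The study's outcome measures, computed per visual-field metric:
p, m = pred.detach().numpy(), measured.numpy()
print("MD  R2:", r2_score(m[:, 0], p[:, 0]), "MAE:", mean_absolute_error(m[:, 0], p[:, 0]))
print("PSD R2:", r2_score(m[:, 1], p[:, 1]), "MAE:", mean_absolute_error(m[:, 1], p[:, 1]))
```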
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Department of Ophthalmology, University Medical Center Mainz, Mainz, Germany
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California

77
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a certain cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), over an equidistantly spaced range from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images without the ONH still obtained significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and a 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of a 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
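A hedged sketch of the two cropping policies described, assuming the ONH centre coordinates are given: a circular mask whose radius is a percentage of image size keeps only the ONH region, and its inverse implements the periphery policy.

```python
import numpy as np

def crop_policies(image, onh_xy, radius_pct):
    """Return (ONH-only crop, periphery-only crop) for one fundus image."""
    h, w = image.shape[:2]
    r = radius_pct / 100.0 * max(h, w)  # crop radius as a % of image size
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - onh_xy[0]) ** 2 + (yy - onh_xy[1]) ** 2 <= r ** 2
    # Boolean mask keeps the ONH disc; its inverse keeps the periphery.
    return image * inside[..., None], image * (~inside)[..., None]

fundus = np.random.rand(512, 512, 3)  # stand-in fundus image
onh_only, periphery_only = crop_policies(fundus, onh_xy=(300, 256), radius_pct=30)
```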
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium

78
Ajitha S, Akkara JD, Judy MV. Identification of glaucoma from fundus images using deep learning techniques. Indian J Ophthalmol 2021; 69:2702-2709. [PMID: 34571619 PMCID: PMC8597466 DOI: 10.4103/ijo.ijo_92_21] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
Purpose Glaucoma is one of the preeminent causes of incurable visual disability and blindness across the world, due to elevated intraocular pressure within the eyes. Accurate and timely diagnosis is essential for preventing visual disability. Manual detection of glaucoma is a challenging task that needs expertise and years of experience. Methods In this paper, we suggest a powerful and accurate algorithm using a convolutional neural network (CNN) for the automatic diagnosis of glaucoma. In this work, 1113 fundus images, consisting of 660 normal and 453 glaucomatous images from four databases, were used for the diagnosis of glaucoma. A 13-layer CNN is trained on this dataset to extract vital features, and these features are classified into either the glaucomatous or the normal class during testing. The proposed algorithm is implemented in Google Colab, which makes the task straightforward without hours spent installing the environment and supporting libraries. To evaluate the effectiveness of our algorithm, the dataset is divided into 70% for training, 20% for validation, and the remaining 10% for testing. The training images are augmented to 12,012 fundus images. Results Our model with the SoftMax classifier achieved an accuracy of 93.86%, sensitivity of 85.42%, specificity of 100%, and precision of 100%. In contrast, the model with the SVM classifier achieved accuracy, sensitivity, specificity, and precision of 95.61%, 89.58%, 100%, and 100%, respectively. Conclusion These results demonstrate the ability of the deep learning model to identify glaucoma from fundus images and suggest that the proposed system can help ophthalmologists in a fast, accurate, and reliable diagnosis of glaucoma.
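A minimal sketch of the better-performing variant reported above, under the assumption (common in such designs) that deep features from the CNN's penultimate layer are fed to an SVM in place of the SoftMax layer; the feature matrix here is a random stand-in.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

features = np.random.rand(200, 256)    # stand-in CNN penultimate-layer features
labels = np.random.randint(0, 2, 200)  # 1 = glaucomatous, 0 = normal

# 90/10 split mirrors the abstract's held-out test fraction.
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.1, stratify=labels, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```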
Affiliation(s)
- S Ajitha
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India
- John D Akkara
- Ophthalmology Department, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
- M V Judy
- Department of Computer Applications, Cochin University of Science and Technology, Kochi, Kerala, India

79
Krishnadas R. The many challenges in automated glaucoma diagnosis based on fundus imaging. Indian J Ophthalmol 2021; 69:2566-2567. [PMID: 34571593 PMCID: PMC8597437 DOI: 10.4103/ijo.ijo_2294_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Affiliation(s)
- R Krishnadas
- Consultant, Glaucoma Services, Aravind Eye Care System, Madurai, Tamil Nadu, India

80
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched, until 10 December 2020, for studies reporting a comparison of the glaucoma diagnosis performance of deep learning and ophthalmologists on the same fundus examination datasets. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence intervals [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning had performance similar to that of ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
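For intuition, a simple sketch of the pooling step in such a meta-analysis, using inverse-variance (fixed-effect) weighting with illustrative values; the published analysis will have used a more complete random-effects procedure.

```python
import numpy as np

auc = np.array([0.95, 0.93, 0.99, 0.96, 0.98, 0.97])  # illustrative study AUCs
se = np.array([0.02, 0.03, 0.01, 0.02, 0.015, 0.02])  # illustrative standard errors

w = 1.0 / se ** 2                    # inverse-variance weights
pooled = (w * auc).sum() / w.sum()   # weighted mean AUC
pooled_se = np.sqrt(1.0 / w.sum())
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled AUC {pooled:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```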
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
- Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
- CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
- Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
- Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
- Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
- CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France

81
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has been shown to have significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, which was developed with two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing the image quality, field of view, and laterality of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities. Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessment of the image quality, field of view, and laterality of retinal photographs, and could be further integrated into AI-based models to improve operational flow for disease screening and diagnosis.
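A minimal sketch (assumed architecture, not the published module) of a three-in-one design: one shared backbone with three binary heads for image quality, field of view, and laterality.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.mobilenet_v2(weights=None).features  # shared feature extractor
pool = nn.AdaptiveAvgPool2d(1)
heads = nn.ModuleDict({
    "quality":    nn.Linear(1280, 1),  # gradable vs. ungradable
    "field":      nn.Linear(1280, 1),  # macula- vs. optic-disc-centered
    "laterality": nn.Linear(1280, 1),  # right vs. left eye
})

x = torch.randn(2, 3, 224, 224)      # stand-in retinal photographs
feat = pool(backbone(x)).flatten(1)  # (2, 1280) shared embedding
logits = {name: head(feat).squeeze(1) for name, head in heads.items()}
```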
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong

82
Bowd C, Belghith A, Christopher M, Goldbaum MH, Fazio MA, Girkin CA, Liebmann JM, de Moraes CG, Weinreb RN, Zangwill LM. Individualized Glaucoma Change Detection Using Deep Learning Auto Encoder-Based Regions of Interest. Transl Vis Sci Technol 2021; 10:19. [PMID: 34293095 PMCID: PMC8300051 DOI: 10.1167/tvst.10.8.19] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To compare change over time in eye-specific optical coherence tomography (OCT) retinal nerve fiber layer (RNFL)-based region-of-interest (ROI) maps, developed using unsupervised deep-learning auto-encoders (DL-AE), with circumpapillary RNFL (cpRNFL) thickness for the detection of glaucomatous progression. Methods Forty-four progressing glaucoma eyes (by stereophotograph assessment), 189 nonprogressing glaucoma eyes (by stereophotograph assessment), and 109 healthy eyes were followed for ≥3 years with ≥4 visits using OCT. The San Diego Automated Layer Segmentation Algorithm was used to automatically segment the RNFL layer from raw three-dimensional OCT images. For each longitudinal series, DL-AEs were used to generate individualized eye-based ROI maps by identifying RNFL regions of likely progression and no change. Sensitivities and specificities for detecting change over time and rates of change over time were compared for the DL-AE ROI and global cpRNFL thickness measurements derived from a 2.22-mm to 3.45-mm annulus centered on the optic disc. Results The sensitivity for detecting change in progressing eyes was greater for DL-AE ROIs than for global cpRNFL annulus thicknesses (0.90 and 0.63, respectively). The specificity for identifying no likely progression in nonprogressing eyes was similar (0.92 and 0.93, respectively). The mean rates of change in the DL-AE ROI were significantly faster than for cpRNFL annulus thickness in progressing eyes (-1.28 µm/y vs. -0.83 µm/y) and nonprogressing eyes (-1.03 µm/y vs. -0.78 µm/y). Conclusions Eye-specific ROIs identified using DL-AE analysis of OCT images show promise for improving the assessment of glaucomatous progression. Translational Relevance The detection and monitoring of structural glaucomatous progression can be improved by considering eye-specific regions of likely progression identified using deep learning.
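A hedged sketch of the rate-of-change comparison: per eye, mean RNFL thickness (within the DL-AE ROI or the cpRNFL annulus) is regressed on follow-up time and the slope, in µm per year, is the rate; the visit times and thicknesses below are illustrative.

```python
import numpy as np

def rate_of_change(years, mean_thickness_um):
    """Slope of a least-squares line through thickness vs. time (um/year)."""
    slope, _ = np.polyfit(years, mean_thickness_um, deg=1)
    return slope  # more negative = faster structural loss

visits = np.array([0.0, 1.0, 2.1, 3.0, 4.2])        # follow-up times in years
roi = np.array([92.0, 90.7, 89.4, 88.1, 86.6])      # mean thickness in DL-AE ROI
annulus = np.array([91.0, 90.2, 89.5, 88.7, 87.6])  # global cpRNFL annulus
print(rate_of_change(visits, roi), rate_of_change(visits, annulus))
```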
Affiliation(s)
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, AL, USA
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, NY, USA
- Carlos Gustavo de Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, NY, USA
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA

83
83
|
Research on an Intelligent Lightweight-Assisted Pterygium Diagnosis Model Based on Anterior Segment Images. DISEASE MARKERS 2021; 2021:7651462. [PMID: 34367378 PMCID: PMC8342163 DOI: 10.1155/2021/7651462] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Accepted: 07/16/2021] [Indexed: 12/13/2022]
Abstract
Aims The lack of primary ophthalmologists in China leaves basic-level hospitals unable to diagnose pterygium patients. To solve this problem, an intelligent-assisted lightweight pterygium diagnosis model based on anterior segment images is proposed in this study. Methods Pterygium is a common and frequently occurring disease in ophthalmology, and fibrous tissue hyperplasia is both a diagnostic and a surgical biomarker. The model diagnosed pterygium based on these biomarkers. First, a total of 436 anterior segment images were collected; then, two intelligent-assisted lightweight pterygium diagnosis models (MobileNet1 and MobileNet2), based on raw data and augmented data, were trained via transfer learning. The results of the lightweight models were compared with the clinical results. The classic models (AlexNet, VGG16 and ResNet18) were also trained and tested, and their results were compared with those of the lightweight models. A total of 188 anterior segment images were used for testing. Sensitivity, specificity, F1-score, accuracy, kappa, area under the receiver operating characteristic curve (AUC), 95% CI, model size, and parameter count were the evaluation indicators in this study. Results The 188 test images were used to evaluate the five intelligent-assisted pterygium diagnosis models. The overall evaluation indices were best for the MobileNet2 model. For normal anterior segment images, its sensitivity, specificity, F1-score, and AUC were 96.72%, 98.43%, 96.72%, and 0.976, respectively; for observation-period pterygium images, they were 83.7%, 90.48%, 82.54%, and 0.872, respectively; and for surgery-period pterygium images, they were 84.62%, 93.50%, 85.94%, and 0.891, respectively. The kappa value of the MobileNet2 model was 77.64%, its accuracy was 85.11%, its model size was 13.5 M, and its parameter size was 4.2 M. Conclusion This study used deep learning methods to propose a three-category intelligent lightweight-assisted pterygium diagnosis model. The developed model can be used to screen patients for pterygium, provide reasonable suggestions, and provide timely referrals. It can help primary doctors improve pterygium diagnosis, confer social benefits, and lay the foundation for embedding future models in mobile devices.
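A minimal sketch of computing the evaluation indicators listed above for a three-category classifier (per-class sensitivity and specificity from the confusion matrix, plus F1, kappa, and accuracy); the labels are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, f1_score,
                             cohen_kappa_score, accuracy_score)

# 0 = normal, 1 = observation-period pterygium, 2 = surgery-period pterygium.
y_true = np.array([0, 0, 1, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 0, 1, 2, 2, 2, 1, 0, 1, 1])

cm = confusion_matrix(y_true, y_pred)
for k in range(3):  # one-vs-rest sensitivity/specificity per class
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"class {k}: sensitivity {tp/(tp+fn):.2f}, specificity {tn/(tn+fp):.2f}")
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("kappa:", cohen_kappa_score(y_true, y_pred),
      "accuracy:", accuracy_score(y_true, y_pred))
```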
84
Fully Automated Colorimetric Analysis of the Optic Nerve Aided by Deep Learning and Its Association with Perimetry and OCT for the Study of Glaucoma. J Clin Med 2021; 10:jcm10153231. [PMID: 34362014 PMCID: PMC8347493 DOI: 10.3390/jcm10153231] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 07/14/2021] [Accepted: 07/20/2021] [Indexed: 01/04/2023] Open
Abstract
Background: Laguna ONhE is an application for the colorimetric analysis of optic nerve images, which topographically assesses the cup and the presence of haemoglobin. Its latest version has been fully automated with five deep learning models. In this paper, perimetry in combination with Laguna ONhE or Cirrus-OCT was evaluated. Methods: The morphology and perfusion estimated by Laguna ONhE were compiled into a "Globin Distribution Function" (GDF). Visual field irregularity was measured with the usual pattern standard deviation (PSD) and with the threshold coefficient of variation (TCV), which analyses its harmony without taking age-corrected values into account. In total, 477 normal eyes, 235 confirmed, and 98 suspected glaucoma cases were examined with Cirrus-OCT and different fundus cameras and perimeters. Results: The best receiver operating characteristic (ROC) analysis results for confirmed and suspected glaucoma were obtained with the combination of GDF and TCV (AUC: 0.995 and 0.935, respectively; sensitivities: 94.5% and 45.9%, respectively, at 99% specificity). The best combination of OCT and perimetry was obtained with the vertical cup/disc ratio and PSD (AUC: 0.988 and 0.847, respectively; sensitivities: 84.7% and 18.4%, respectively, at 99% specificity). Conclusion: Using Laguna ONhE, morphology, perfusion, and function can be mutually enhanced with the methods described for the purpose of glaucoma assessment, providing early sensitivity.
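A hedged sketch of reading off the reported operating point: the sensitivity the ROC curve attains at a fixed 99% specificity (false-positive rate ≤ 0.01); scores and labels are synthetic stand-ins for the GDF/TCV outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)           # stand-in ground truth (1 = glaucoma)
scores = labels + rng.normal(0, 0.7, 500)  # stand-in combined GDF+TCV scores

fpr, tpr, _ = roc_curve(labels, scores)
sens_at_99_spec = tpr[fpr <= 0.01].max()   # highest sensitivity at >=99% specificity
print(f"sensitivity at 99% specificity: {sens_at_99_spec:.3f}")
```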
85
Nakahara K, Asaoka R, Tanito M, Shibata N, Mitsuhashi K, Fujino Y, Matsuura M, Inoue T, Azuma K, Obata R, Murata H. Deep learning-assisted (automatic) diagnosis of glaucoma using a smartphone. Br J Ophthalmol 2021; 106:587-592. [PMID: 34261663 DOI: 10.1136/bjophthalmol-2020-318107] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 01/07/2021] [Indexed: 11/04/2022]
Abstract
BACKGROUND/AIMS To validate a deep learning algorithm to diagnose glaucoma from fundus photography obtained with a smartphone. METHODS A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was evaluated on its prediction of glaucoma or normal over the test datasets, using images from both an ordinary fundus camera and a smartphone. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). RESULTS The AROC with a fundus camera was 98.9% and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < -12 dB, N=26), the AROC with a fundus camera was 99.3% and 90.0% with a smartphone. There were significant differences between the AROC values obtained with the different cameras. CONCLUSION The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
Affiliation(s)
- Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
- Seirei Christopher University, Shizuoka, Hamamatsu, Japan
- Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan
- The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Japan
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Masato Matsuura
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Tatsuya Inoue
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Department of Ophthalmology and Microtechnology, Yokohama City University School of Medicine, Kanagawa, Japan
- Keiko Azuma
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Ryo Obata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Hiroshi Murata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan

86
Chen TC, Lim WS, Wang VY, Ko ML, Chiu SI, Huang YS, Lai F, Yang CM, Hu FR, Jang JSR, Yang CH. Artificial Intelligence-Assisted Early Detection of Retinitis Pigmentosa - the Most Common Inherited Retinal Degeneration. J Digit Imaging 2021; 34:948-958. [PMID: 34244880 DOI: 10.1007/s10278-021-00479-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 06/02/2021] [Accepted: 06/21/2021] [Indexed: 12/01/2022] Open
Abstract
The purpose of this study was to detect the presence of retinitis pigmentosa (RP) on color fundus photographs using a deep learning model. A total of 1670 color fundus photographs from the Taiwan inherited retinal degeneration project and National Taiwan University Hospital were acquired and preprocessed. The fundus photographs were labeled RP or normal and divided into training and validation datasets (n = 1284) and a test dataset (n = 386). Three transfer learning models, based on the pre-trained Inception V3, Inception Resnet V2, and Xception deep learning architectures, respectively, were developed to classify the presence of RP on fundus images. The models' sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were compared. The results from the best transfer learning model were compared with the readings of two general ophthalmologists, one retinal specialist, and one specialist in retina and inherited retinal degenerations. A total of 935 RP and 324 normal images were used to train the models. The test dataset consisted of 193 RP and 193 normal images. Among the three transfer learning models evaluated, the Xception model had the best performance, achieving an AUROC of 96.74%. Gradient-weighted class activation mapping indicated that the contrast between the periphery and the macula on fundus photographs was an important feature in detecting RP. False-positive results were mostly obtained in cases of high myopia with a highly tessellated retina, and false-negative results were mostly obtained in cases of unclear media, such as cataract, that decreased the contrast between the peripheral retina and the macula. Our model demonstrated the highest accuracy, 96.00%, which compared favorably with the average of 81.50% for the other four ophthalmologists. Moreover, this accuracy was obtained at the same level of sensitivity (95.71%) as the inherited retinal disease specialist. RP is an important disease, but its early and precise diagnosis is challenging. We developed and evaluated a transfer-learning-based model to detect RP from color fundus photographs. The results of this study validate the utility of deep learning in automating the identification of RP from fundus photographs.
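A minimal Grad-CAM sketch (not the authors' exact code; the backbone, target layer, and class index are assumptions) of the class-activation mapping used above: gradients of a class logit are average-pooled to weight the last convolutional feature map.

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in backbone
acts, grads = {}, {}
# Capture the last conv block's activations and the gradients flowing into it.
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)  # stand-in fundus photograph
model(x)[0, 0].backward()        # gradient of the assumed "RP" class logit

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of the gradients
cam = torch.relu((weights * acts["a"]).sum(1))       # coarse relevance map
print(cam.shape)                 # (1, 7, 7) -> upsample onto the input image
```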
Affiliation(s)
- Ta-Ching Chen
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Wee Shin Lim
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Victoria Y Wang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Mei-Lan Ko
- Department of Ophthalmology, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Shu-I Chiu
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Yu-Shu Huang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Feipei Lai
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Chung-May Yang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Fung-Rong Hu
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Jyh-Shing Roger Jang
- Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei, Taiwan
- Chang-Hao Yang
- Department of Ophthalmology, College of Medicine, National Taiwan University, Taipei, Taiwan

87
Saeed AQ, Sheikh Abdullah SNH, Che-Hamzah J, Abdul Ghani AT. Accuracy of Using Generative Adversarial Networks for Glaucoma Detection During the COVID-19 Pandemic: A Systematic Review and Bibliometric Analysis. J Med Internet Res 2021; 23:e27414. [PMID: 34236992 PMCID: PMC8493455 DOI: 10.2196/27414] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 05/11/2021] [Accepted: 07/05/2021] [Indexed: 01/19/2023] Open
Abstract
Background Glaucoma leads to irreversible blindness. Globally, it is the second most common cause of blindness, slightly less common than cataract. There is therefore a great need to counter the silent progression of this disease, and recently developed generative adversarial networks (GANs) may help. Objective This paper aims to introduce GAN technology for the diagnosis of eye disorders, particularly glaucoma. It illustrates deep adversarial learning as a potential diagnostic tool and the challenges involved in its implementation, and describes and analyzes many of the pitfalls and problems that researchers will need to overcome to implement this kind of technology. Methods To organize this review comprehensively, articles and reviews were collected using the following keywords: ("Glaucoma," "optic disc," "blood vessels") and ("receptive field," "loss function," "GAN," "Generative Adversarial Network," "Deep learning," "CNN," "convolutional neural network" OR encoder). The records were identified from 5 highly reputed databases: IEEE Xplore, Web of Science, Scopus, ScienceDirect, and PubMed. These libraries broadly cover the technical and medical literature. Publications from the last 5 years, specifically 2015-2020, were included because the target GAN technique was invented only in 2014 and the publication dates of the collected papers were no earlier than 2016. Duplicate records were removed, and irrelevant titles and abstracts were excluded. In addition, we excluded papers that used optical coherence tomography and visual field images, except for those with 2D images. A large-scale systematic analysis was performed, and a summarized taxonomy was then generated. Furthermore, the results of the collected articles were summarized and a visual representation of the results was presented on a T-shaped matrix diagram. This study was conducted between March 2020 and November 2020. Results We found 59 articles after conducting a comprehensive survey of the literature. Among them, 30 present actual attempts to synthesize images and provide accurate segmentation/classification using single or multiple landmarks, or share certain experiences. The other 29 articles discuss recent advances in GANs, report practical experiments, and contain analytical studies of retinal disease. Conclusions Recent deep learning techniques, namely GANs, have shown encouraging performance in retinal disease detection. Although this methodology involves an extensive computing budget and optimization process, it addresses the data-hungry nature of deep learning by synthesizing images and helps solve major medical issues. This paper contributes to this research field by offering a thorough analysis of existing works, highlighting current limitations, and suggesting alternatives to support other researchers and participants in further improving and strengthening future work. Finally, new directions for this research have been identified.
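For orientation, a minimal sketch of the adversarial setup the review surveys, with toy fully connected networks rather than any published generator/discriminator: the discriminator learns to separate real from synthesized images while the generator learns to fool it.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),             # generator
                  nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),  # discriminator
                  nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()

real = torch.rand(16, 64 * 64)  # stand-in batch of real (flattened) images
fake = G(torch.randn(16, 100))  # synthesized batch from latent noise

# Discriminator step: push real -> 1, fake -> 0 (generator frozen via detach).
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake.detach()), torch.zeros(16, 1))
d_loss.backward()
# Generator step: make the discriminator label the fakes as real.
g_loss = bce(D(fake), torch.ones(16, 1))
g_loss.backward()
```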
Affiliation(s)
- Ali Q Saeed
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
- Computer Center, Northern Technical University, Ninevah, Iraq
- Siti Norul Huda Sheikh Abdullah
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
- Jemaima Che-Hamzah
- Department of Ophthalmology, Faculty of Medicine, Universiti Kebangsaan Malaysia (UKM), Cheras, Kuala Lumpur, Malaysia
- Ahmad Tarmizi Abdul Ghani
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia

88
Yellapragada B, Hornauer S, Snyder K, Yu S, Yiu G. Self-Supervised Feature Learning and Phenotyping for Assessing Age-Related Macular Degeneration Using Retinal Fundus Images. Ophthalmol Retina 2021; 6:116-129. [PMID: 34217854 DOI: 10.1016/j.oret.2021.06.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 06/24/2021] [Accepted: 06/25/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Diseases such as age-related macular degeneration (AMD) are classified based on human rubrics that are prone to bias. Supervised neural networks trained using human-generated labels require labor-intensive annotations and are restricted to the specific tasks they were trained on. Here, we trained a self-supervised deep learning network using unlabeled fundus images, enabling data-driven feature classification of AMD severity and discovery of ocular phenotypes. DESIGN Development of a self-supervised training pipeline to evaluate fundus photographs from the Age-Related Eye Disease Study (AREDS). PARTICIPANTS A total of 100,848 human-graded fundus images from 4757 AREDS participants between 55 and 80 years of age. METHODS We trained a deep neural network with self-supervised Non-Parametric Instance Discrimination (NPID) using AREDS fundus images without labels, then evaluated its performance in grading AMD severity using 2-step, 4-step, and 9-step classification schemes with a supervised classifier. We compared balanced and unbalanced accuracies of NPID against supervised-trained networks and ophthalmologists, explored network behavior using hierarchical learning of image subsets and spherical k-means clustering of feature vectors, then searched for ocular features that can be identified without labels. MAIN OUTCOME MEASURES Accuracy and kappa statistics. RESULTS NPID demonstrated versatility across different AMD classification schemes without re-training and achieved balanced accuracies comparable with those of supervised-trained networks or human ophthalmologists in classifying advanced AMD (82% vs. 81-92% or 89%), referable AMD (87% vs. 90-92% or 96%), or on the 4-step AMD severity scale (65% vs. 63-75% or 67%), despite never directly using these labels during self-supervised feature learning. Drusen area drove network predictions on the 4-step scale, while depigmentation and geographic atrophy (GA) areas correlated with advanced AMD classes. Self-supervised learning revealed grader-mislabeled images and the susceptibility of some classes within the more granular AMD scales to misclassification by both ophthalmologists and neural networks. Importantly, self-supervised learning enabled data-driven discovery of AMD features such as GA and other ocular phenotypes of the choroid (e.g., tessellated or blonde fundi), vitreous (e.g., asteroid hyalosis), and lens (e.g., nuclear cataracts) that were not predefined by human labels. CONCLUSIONS Self-supervised learning enables AMD severity grading comparable with that of ophthalmologists and supervised networks, reveals biases of human-defined AMD classification systems, and allows unbiased, data-driven discovery of AMD and non-AMD ocular phenotypes.
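A hedged sketch of the phenotype-discovery step: feature vectors from the self-supervised network are L2-normalised and clustered; ordinary k-means on unit-length vectors approximates spherical k-means. The embeddings here are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

features = np.random.rand(1000, 128)  # stand-in per-image embeddings
unit = normalize(features)            # L2-normalise: project onto the unit sphere
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(unit)
print(np.bincount(clusters))          # cluster sizes = candidate phenotype groups
```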
Affiliation(s)
- Baladitya Yellapragada
- Department of Vision Science, University of California, Berkeley, Berkeley, California
- International Computer Science Institute, Berkeley, California
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Sascha Hornauer
- International Computer Science Institute, Berkeley, California
- Kiersten Snyder
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Stella Yu
- Department of Vision Science, University of California, Berkeley, Berkeley, California
- International Computer Science Institute, Berkeley, California
- Glenn Yiu
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California

89
Islam MP, Nakano Y, Lee U, Tokuda K, Kochi N. TheLNet270v1 - A Novel Deep-Network Architecture for the Automatic Classification of Thermal Images for Greenhouse Plants. FRONTIERS IN PLANT SCIENCE 2021; 12:630425. [PMID: 34276715 PMCID: PMC8280754 DOI: 10.3389/fpls.2021.630425] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 06/02/2021] [Indexed: 06/13/2023]
Abstract
The real challenge in separating leaf pixels from background pixels in thermal images is associated with various factors, such as the amount of thermal radiation emitted and reflected by the targeted plant, absorption of reflected radiation by the humidity of the greenhouse, and the outside environment. We proposed TheLNet270v1 (thermal leaf network with 270 layers, version 1) to recover the leaf canopy from its background in real time with higher accuracy than previous systems. The proposed network had an accuracy of 91% (mean boundary F1 score, or BF score) in distinguishing canopy pixels from background pixels and then segmenting the image into two classes: leaf and background. We evaluated the classification (segmentation) performance using more than 13,766 images and obtained 95.75% training and 95.23% validation accuracies without overfitting issues. This research aimed to develop a deep learning technique for the automatic segmentation of thermal images to continuously monitor canopy surface temperature inside a greenhouse.
Affiliation(s)
- Md. Parvez Islam
- Agricultural AI Research Promotion Office, RCAIT, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan
- Yuka Nakano
- Institute of Vegetable and Flower Research, NARO, Tsukuba, Japan
- Unseok Lee
- Agricultural AI Research Promotion Office, RCAIT, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan
- Keinichi Tokuda
- Agricultural AI Research Promotion Office, RCAIT, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan
- Nobuo Kochi
- Agricultural AI Research Promotion Office, RCAIT, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan

90
Olivas LG, Alférez GH, Castillo J. Glaucoma detection in Latino population through OCT's RNFL thickness map using transfer learning. Int Ophthalmol 2021; 41:3727-3741. [PMID: 34212255 DOI: 10.1007/s10792-021-01931-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 06/19/2021] [Indexed: 11/25/2022]
Abstract
PURPOSE Glaucoma is the leading cause of irreversible blindness worldwide. It is estimated that over 60 million people around the world have this disease, only part of them knowing they have it. Timely and early diagnosis is vital to delay or prevent patient blindness. Deep learning (DL) could be a tool for ophthalmologists to give a more informed and objective diagnosis. However, there is a lack of studies that apply DL for glaucoma detection to the Latino population. Our contribution is to use transfer learning to retrain the MobileNet and Inception V3 models with images of the retinal nerve fiber layer (RNFL) thickness map of Mexican patients, obtained with optical coherence tomography (OCT) from the Instituto de la Visión, a clinic in the northern part of Mexico. METHODS The IBM Foundational Methodology for Data Science was used in this study. The MobileNet and Inception V3 topologies were chosen as the analytical approaches to classify OCT images into two classes, namely glaucomatous and non-glaucomatous. The OCT files were collected from a Zeiss OCT machine at the Instituto de la Visión and classified by an expert into the two classes under study. These images form a dataset of 333 files in total. Since this research work is focused on RNFL thickness map images, the OCT files were cropped to obtain only the RNFL thickness map images of the corresponding eye. This was carried out for images in both classes, glaucomatous and non-glaucomatous. Since some images were damaged (with black spots where data were missing), these images were cropped or excluded. After the preparation process, 50 images per class were used for training. Fifteen images per class, different from the ones used in the training stage, were used for running predictions. In total, 260 images were used in the experiments, 130 per eye. Four models were generated: two trained with MobileNet, one for the left eye and one for the right eye, and another two trained with Inception V3. TensorFlow was used for running transfer learning. RESULTS The evaluation results of the MobileNet model for the left eye were: accuracy 86%, precision 87%, recall 87%, and F1 score 87%. The evaluation results of the MobileNet model for the right eye were: accuracy 90%, precision 90%, recall 90%, and F1 score 90%. The evaluation results of the Inception V3 model for the left eye were: accuracy 90%, precision 90%, recall 90%, and F1 score 90%. The evaluation results of the Inception V3 model for the right eye were: accuracy 90%, precision 90%, recall 90%, and F1 score 90%. CONCLUSION On average, the evaluation results for right-eye images were the same for both models. The Inception V3 model showed slightly better average results than the MobileNet model when classifying left-eye images.
Affiliation(s)
- Liza G Olivas
- School of Engineering and Technology, Universidad de Montemorelos, Montemorelos, NL, Mexico
- Germán H Alférez
- School of Engineering and Technology, Universidad de Montemorelos, Montemorelos, NL, Mexico
- Javier Castillo
- School of Medicine, Universidad de Montemorelos, Montemorelos, NL, Mexico

91
Suguna G, Lavanya R. Performance Assessment of EyeNet Model in Glaucoma Diagnosis. PATTERN RECOGNITION AND IMAGE ANALYSIS 2021. [DOI: 10.1134/s1054661821020164] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
92
|
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760 PMCID: PMC8212443 DOI: 10.1167/tvst.10.7.20] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Purpose There is a large disparity between the number of ophthalmologists and the number of patients in China. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2000 fundus images were collected; 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in each fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of a fundus image and the model's area of focus. Translational Relevance This study will help primary care doctors provide effective services to ophthalmic patients.
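For readers unfamiliar with the evaluation indicators listed above, the short sketch below computes accuracy, kappa, per-class sensitivity and specificity, and per-class F1-scores for a five-class problem with scikit-learn. The label arrays are illustrative stand-ins, not data from the study:

```python
# Per-class evaluation metrics for a 5-category classifier (hypothetical labels).
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)

y_true = np.array([0, 1, 2, 3, 4, 1, 2, 0, 3, 4])  # normal + 4 diseases
y_pred = np.array([0, 1, 2, 3, 4, 1, 0, 0, 3, 3])

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))

cm = confusion_matrix(y_true, y_pred, labels=range(5))
for c in range(5):
    tp = cm[c, c]
    fn = cm[c].sum() - tp          # missed cases of class c
    fp = cm[:, c].sum() - tp       # other classes predicted as c
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    print(f"class {c}: sensitivity={sens:.3f} specificity={spec:.3f}")

print("per-class F1:", f1_score(y_true, y_pred, average=None))
```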
Collapse
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Qin Jiang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Kai He
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China.,College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
| | - Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| |
Collapse
|
93
|
Liu H, Li L, Wormstone IM, Qiao C, Zhang C, Liu P, Li S, Wang H, Mou D, Pang R, Yang D, Zangwill LM, Moghimi S, Hou H, Bowd C, Jiang L, Chen Y, Hu M, Xu Y, Kang H, Ji X, Chang R, Tham C, Cheung C, Ting DSW, Wong TY, Wang Z, Weinreb RN, Xu M, Wang N. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol 2019; 137:1353-1360. [PMID: 31513266 DOI: 10.1001/jamaophthalmol.2019.3501] [Citation(s) in RCA: 163] [Impact Index Per Article: 54.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Importance A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON. Objective To establish a DLS for detection of GON using retinal fundus images and a glaucoma diagnosis convolutional neural network (GD-CNN) that can be generalized across populations. Design, Setting, and Participants In this cross-sectional study, a DLS for the automated classification of GON was developed using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases between June 9, 2009, and July 11, 2018, and analyses were performed on December 15, 2018. The generalizability of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality based on fundus photographs obtained from websites, evaluation in a population-based study that reflects a natural distribution of patients with glaucoma within the cohort, and an additional data set with a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to generalize the results to fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed that identified the regions of the fundus images used by the DLS for diagnosis. Exposures Use of a deep learning system. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the DLS with reference to professional graders. Results From a total of 274 413 fundus images initially obtained from the CGSA, 269 601 images passed initial image quality review and were graded for GON. A total of 241 032 images (definite GON, 29 865 [12.4%]; probable GON, 11 046 [4.6%]; unlikely GON, 200 121 [83%]) from 68 013 patients were selected by random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were assessed using the remaining 28 569 images from the CGSA. The AUC of the GD-CNN model in the primary local validation data set was 0.996 (95% CI, 0.995-0.998), with a sensitivity of 96.2% and a specificity of 97.7%. The most common reason for both false-negative and false-positive grading by the GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%]) and by manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%]) was pathologic or high myopia. Conclusions and Relevance Application of the GD-CNN to fundus images from different settings and of varying image quality demonstrated high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that an automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
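The abstract mentions a prediction visualization test that highlights the fundus regions driving the DLS's decision. The paper's exact method is not specified here, so the sketch below uses Grad-CAM, a common class-activation technique, purely as an illustrative stand-in; it assumes an already-built Keras model and the name of its last convolutional layer:

```python
# Grad-CAM-style heatmap of the regions driving a CNN's top prediction.
# Assumes eager execution and a built Keras model with a named conv layer.
import tensorflow as tf

def grad_cam(model, image, last_conv_name):
    """Return an HxW heatmap, normalized to [0, 1], for one input image."""
    conv_layer = model.get_layer(last_conv_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])   # add batch dim
        top_class = int(tf.argmax(preds[0]))
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)               # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # pool grads per channel
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)  # keep positive, normalize
    return cam.numpy()

# Usage (hypothetical): heatmap = grad_cam(model, fundus_image, "conv5_block3_out")
```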
Collapse
Affiliation(s)
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Liu Li
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - I Michael Wormstone
- School of Biological Sciences, University of East Anglia, Norwich, United Kingdom
| | - Chunyan Qiao
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Chun Zhang
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
| | - Ping Liu
- Ophthalmology Hospital, First Hospital of Harbin Medical University, Harbin, Heilongjiang, China
| | - Shuning Li
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Dapeng Mou
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Ruiqi Pang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Diya Yang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Linda M Zangwill
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Sasan Moghimi
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Huiyuan Hou
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Christopher Bowd
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Lai Jiang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Yihan Chen
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China
| | - Yongli Xu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
| | - Hong Kang
- College of Computer Science, Nankai University, Tianjin, China
| | - Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd, Beijing, China
| | - Robert Chang
- Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, California
| | - Clement Tham
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
| | - Carol Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
| | | | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
| | - Zulin Wang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Robert N Weinreb
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| |
Collapse
|
94
|
Rehman AU, Taj IA, Sajid M, Karimov KS. An ensemble framework based on Deep CNNs architecture for glaucoma classification using fundus photography. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:5321-5346. [PMID: 34517490 DOI: 10.3934/mbe.2021270] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Glaucoma is a chronic ocular degenerative disease that can cause blindness if not treated in its early stages. Deep Convolutional Neural Networks (Deep CNNs) and their variants have provided superior performance in glaucoma classification, segmentation, and detection. In this paper, we propose a two-stage glaucoma classification scheme based on Deep CNN architectures. In stage one, four different ImageNet-pretrained Deep CNN architectures, i.e., AlexNet, InceptionV3, InceptionResNetV2, and NasNet-Large, are used, and it is observed that the NasNet-Large architecture provides superior performance in terms of sensitivity (99.1%), specificity (99.4%), accuracy (99.3%), and area under the receiver operating characteristic curve (97.8%). A detailed performance comparison among these architectures is also presented on public datasets, i.e., ACRIMA, ORIGA-Light, and RIM-ONE, as well as locally available datasets, i.e., AFIO and HMC. In the second stage, we propose an ensemble classifier with two novel ensembling techniques, i.e., accuracy-based weighted voting and accuracy/score-based weighted averaging, to further improve the glaucoma classification results. The ensemble with the accuracy/score-based scheme is shown to improve accuracy (99.5%) across diverse databases. This study shows that the NasNet-Large architecture is a feasible option as a single classifier, while the ensemble classifier further improves the generalized performance of automatic glaucoma classification.
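The two ensembling rules named above are simple to state in code. In the sketch below, the validation accuracies and the per-model glaucoma probabilities are hypothetical placeholders; only the weighting logic follows the abstract's description:

```python
# Accuracy-based weighted voting vs. accuracy/score-based weighted averaging
# for a binary (glaucoma / normal) ensemble of four base CNNs.
import numpy as np

val_acc = np.array([0.982, 0.991, 0.987, 0.993])  # per-model validation accuracy (hypothetical)
probs = np.array([0.91, 0.55, 0.88, 0.97])        # each model's P(glaucoma) for one image

# (1) Accuracy-based weighted voting: each model casts a hard 0/1 vote,
# weighted by its validation accuracy.
votes = (probs >= 0.5).astype(float)
weighted_vote = np.dot(val_acc, votes) / val_acc.sum()
decision_voting = int(weighted_vote >= 0.5)

# (2) Accuracy/score-based weighted averaging: the soft scores themselves
# are averaged, again weighted by validation accuracy.
weighted_score = np.dot(val_acc, probs) / val_acc.sum()
decision_averaging = int(weighted_score >= 0.5)

print(decision_voting, decision_averaging)
```

Soft averaging retains each model's confidence, which is typically why score-based schemes edge out hard voting when the base models disagree.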
Collapse
Affiliation(s)
- Aziz Ur Rehman
- Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, KPK, Pakistan
| | - Imtiaz A Taj
- Department of Electrical Engineering, Capital University of Science and Technology Islamabad Expressway, Kahuta Road, Zone-V Islamabad, Pakistan
| | - Muhammad Sajid
- Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur 10250 (AJK), Pakistan
| | - Khasan S Karimov
- Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, KPK, Pakistan
- Centre for Innovative and New Technologies of Academy of Sciences of the Republic of Tajikistan, 734015, Rudaki Ave., 33. Dushanbe Tajikistan
| |
Collapse
|
95
|
Xiao X, Xue L, Ye L, Li H, He Y. Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: a cost-offset analysis. BMC Public Health 2021; 21:1065. [PMID: 34088286 PMCID: PMC8178835 DOI: 10.1186/s12889-021-11097-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/17/2021] [Indexed: 12/04/2022] Open
Abstract
Background Population-based screening is essential for glaucoma management. Although various studies have investigated the cost-effectiveness of glaucoma screening, policymakers faced with uncontrollably growing total health expenses are deeply concerned about the potential financial consequences of glaucoma screening. The present study aimed to explore the impact of glaucoma screening with artificial intelligence (AI) automated diagnosis from a budgetary standpoint in Changjiang county, China. Methods A Markov model based on the health care system's perspective was adapted from previously published studies to predict disease progression and healthcare costs. A cohort of 19,395 individuals aged 65 and above was simulated over a 15-year timeframe. For illustrative purposes, we only considered primary angle-closure glaucoma (PACG) in this study. Prevalence, disease progression risks between stages, and compliance rates were obtained from published studies. We conducted a meta-analysis to estimate the diagnostic performance of the AI automated diagnosis system on fundus images. Screening costs were provided by the Changjiang screening programme, whereas treatment costs were derived from electronic medical records from two county hospitals. The main outcomes included the number of PACG patients and health care costs. Cost-offset analysis was employed to compare projected health outcomes and medical care costs under the screening with what they would have been without screening. One-way sensitivity analysis was conducted to quantify uncertainties around the model results. Results Among people aged 65 and above in Changjiang county, the model predicted 1940 PACG patients under the AI-assisted screening scenario, compared with 2104 patients without screening over 15 years. Specifically, screening would reduce patients with primary angle closure suspect by 7.7%, primary angle closure by 8.8%, PACG by 16.7%, and visual blindness by 33.3%. Owing to early diagnosis and treatment under screening, healthcare costs surged to $107,761.4 in the first year and then declined steadily over time, whereas, without screening, costs grew from $14,759.8 in the second year to a peak of $17,900.9 in the ninth year. However, the cost-offset analysis revealed that the additional healthcare costs resulting from screening could not be offset by decreased disease progression. The 5-, 10-, and 15-year accumulated incremental costs of screening versus no screening were estimated to be $396,362.8, $424,907.9, and $434,903.2, respectively. As a result, the incremental cost per PACG case of any stage prevented was $1464.3. Conclusions This study represents the first attempt to address decision-makers' budgetary concerns when adopting glaucoma screening by developing a Markov prediction model to project health outcomes and costs. Population screening combined with AI automated diagnosis for PACG in China was able to reduce disease progression risks. However, the excess costs of screening could not be offset by the reduction in disease progression. Further studies examining the cost-effectiveness or cost-utility of AI-assisted glaucoma screening are needed. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11097-w.
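A Markov cohort model of the kind described above advances a population vector through an annual transition matrix and accumulates per-state costs. The toy sketch below illustrates the mechanics only; the states echo the abstract's progression (suspect, PAC, PACG, blindness), but every probability and cost is a made-up placeholder, not a calibrated input from the study:

```python
# Toy Markov cohort simulation illustrating cost-offset mechanics.
# All transition probabilities and per-state annual costs are hypothetical.
import numpy as np

states = ["healthy", "PAC_suspect", "PAC", "PACG", "blind"]
P = np.array([                      # annual transition matrix (rows sum to 1)
    [0.97, 0.03, 0.00, 0.00, 0.00],
    [0.00, 0.90, 0.10, 0.00, 0.00],
    [0.00, 0.00, 0.88, 0.12, 0.00],
    [0.00, 0.00, 0.00, 0.93, 0.07],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
annual_cost = np.array([0.0, 20.0, 150.0, 600.0, 1200.0])  # $ per person, hypothetical

cohort = np.array([19395.0, 0, 0, 0, 0])  # everyone starts healthy
total_cost = 0.0
for year in range(15):
    cohort = cohort @ P               # advance the cohort one year
    total_cost += cohort @ annual_cost

print(f"PACG patients at year 15: {cohort[3]:.0f}")
print(f"accumulated cost: ${total_cost:,.0f}")
```

A cost-offset analysis runs this model twice, once with and once without the screening intervention (which changes transition probabilities and adds screening costs), and compares the accumulated totals.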
Collapse
Affiliation(s)
- Xuan Xiao
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060, China
| | - Long Xue
- School of Public Health, Fudan University, Shanghai, 200433, China
| | - Lin Ye
- Department of Eye Plastic and Lacrimal Disease, Shenzhen Eye Hospital of Jinan University, Shenzhen, 518040, China
| | - Hongzheng Li
- School of Public Health, Fudan University, Shanghai, 200433, China
| | - Yunzhen He
- School of Public Health, Fudan University, Shanghai, 200433, China.
| |
Collapse
|
96
|
Singh H, Saini SS, Lakshminarayanan V. Rapid classification of glaucomatous fundus images. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2021; 38:765-774. [PMID: 34143145 DOI: 10.1364/josaa.415395] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 04/13/2021] [Indexed: 06/12/2023]
Abstract
We propose a new method for training convolutional neural networks (CNNs) and use it to classify glaucoma from fundus images. The method integrates reinforcement learning with supervised learning and applies it to transfer learning. Training uses hill-climbing techniques via two different climber types, namely "random movement" and "random detection," integrated with a supervised learning model through stochastic gradient descent with momentum. The model was trained and tested using the Drishti-GS and RIM-ONE-r2 datasets, which contain glaucomatous and normal fundus images. Prediction performance was tested by transfer learning on five CNN architectures, namely GoogLeNet, DenseNet-201, NASNet, VGG-19, and Inception-ResNet v2. Five-fold cross-validation was used to evaluate performance, achieving high sensitivities while maintaining high accuracies. Of the models tested, the DenseNet-201 architecture performed best in terms of sensitivity and area under the curve. This training method allows transfer learning on small datasets and can be applied to tele-ophthalmology applications, including training with local datasets.
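The "random movement" climber can be pictured as a perturb-and-keep loop layered on top of ordinary supervised updates. The sketch below shows that loop on a toy loss; the perturbation scale, acceptance rule, and loss function are assumptions for illustration, not the authors' exact procedure:

```python
# Conceptual "random movement" hill-climbing step: perturb the weights
# randomly and keep the perturbation only if the loss improves.
import numpy as np

rng = np.random.default_rng(0)

def hill_climb_step(weights, loss_fn, sigma=1e-3):
    """Try one random perturbation; keep it only if it lowers the loss."""
    candidate = [w + sigma * rng.standard_normal(w.shape) for w in weights]
    return candidate if loss_fn(candidate) < loss_fn(weights) else weights

# Toy quadratic "validation loss" standing in for a real model's loss.
def loss_fn(ws):
    return sum(float(np.sum(w ** 2)) for w in ws)

weights = [rng.standard_normal((3, 3))]
for _ in range(100):
    weights = hill_climb_step(weights, loss_fn)
print("final loss:", loss_fn(weights))
```

In the hybrid scheme the abstract describes, such climbing steps would alternate with gradient-based updates, letting the climber escape flat or noisy regions that plague small-dataset training.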
Collapse
|
97
|
Gomel N, Azem N, Baruch T, Hollander N, Rachmiel R, Kurtz S, Waisbourd M. Teleophthalmology Screening for Early Detection of Ocular Diseases in Underserved Populations in Israel. Telemed J E Health 2021; 28:233-239. [PMID: 33999746 DOI: 10.1089/tmj.2021.0098] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Background: The purpose of this study was to investigate the feasibility and effectiveness of an innovative telemedicine community-based intervention to increase detection of previously undiagnosed ocular diseases in high-risk populations in Israel. Methods: A team comprising an ocular technician, a project manager, and a driver was sent to underserved areas in Israel. Patient demographics and ocular and medical information were recorded. Visual acuity (VA), intraocular pressure, and fundus photographs were obtained. The data were transferred to the Ophthalmology Reading Center at the Tel-Aviv Medical Center, where they were interpreted by an ophthalmologist. A letter indicating the examination results was sent to the patients, instructing them to return for a follow-up examination if indicated. Results: A total of 124 individuals underwent telemedicine remote screening examinations in 10 locations. The mean age was 79.9 ± 7.2 years, with a female predominance of 67%. The major pathologies detected were (1) reduced VA (worse than 6/12) in at least one eye (n = 48, 38.7%); (2) glaucoma suspicion in the optic disk (n = 18, 14.5%); (3) ocular hypertension >21 mmHg (n = 15, 12.1%); (4) age-related macular degeneration (AMD; n = 15, 12.1%); (5) diabetic retinopathy (n = 6, 4.8%); (6) visually significant cataract (n = 6, 4.8%); and (7) other pathologies (n = 11, 8.9%). Patient satisfaction was high: 97.7% reported being satisfied or very satisfied with the project model. Conclusions: Our pilot telemedicine screening project effectively detected ocular diseases in underserved areas of Israel and helped improve access to eye care. The project has the potential to reach a national scale, allowing for early diagnosis and preventing vision loss and blindness in underserved areas.
Collapse
Affiliation(s)
- Nir Gomel
- Division of Ophthalmology, Tel-Aviv Medical Center, Tel Aviv, Israel.,Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Nur Azem
- Division of Ophthalmology, Tel-Aviv Medical Center, Tel Aviv, Israel.,Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | | | | | - Rony Rachmiel
- Division of Ophthalmology, Tel-Aviv Medical Center, Tel Aviv, Israel.,Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Shimon Kurtz
- Division of Ophthalmology, Tel-Aviv Medical Center, Tel Aviv, Israel.,Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Michael Waisbourd
- Division of Ophthalmology, Tel-Aviv Medical Center, Tel Aviv, Israel.,Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
98
|
Gao Q, Amason J, Cousins S, Pajic M, Hadziahmetovic M. Automated Identification of Referable Retinal Pathology in Teleophthalmology Setting. Transl Vis Sci Technol 2021; 10:30. [PMID: 34036304 PMCID: PMC8161696 DOI: 10.1167/tvst.10.6.30] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 01/31/2021] [Indexed: 02/06/2023] Open
Abstract
Purpose This study aims to meet a growing need for a fully automated, learning-based interpretation tool for retinal images obtained remotely (e.g., teleophthalmology) through different imaging modalities that may include imperfect (uninterpretable) images. Methods This was a retrospective study of 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained using Topcon's Maestro care unit on 647 patients with diabetes. To identify retinal pathology, a convolutional neural network (CNN) with dual-modal inputs (i.e., CFP and OCT images) was developed. We developed a novel alternate gradient descent algorithm to train the CNN, which allows for the use of uninterpretable CFP/OCT images (i.e., ungradable images that do not contain sufficient image biomarkers for the reviewer to conclude the absence or presence of retinal pathology). A 9:1 ratio was used to split the dataset for training and validating the CNN. Paired CFP/OCT inputs (obtained from a single eye of a patient) were grouped as retinal pathology negative (RPN; 924 images) if both imaging modalities showed no retinal pathology, or if one imaging modality was uninterpretable and the other showed no retinal pathology. The corresponding CFP/OCT inputs were deemed retinal pathology positive (RPP; 224 images) if either imaging modality exhibited referable retinal pathology. Results Our approach achieved 88.60% (95% confidence interval [CI] = 82.76% to 94.43%) accuracy in identifying pathology, with a false negative rate (FNR) of 12.28% (95% CI = 6.26% to 18.31%), recall (sensitivity) of 87.72% (95% CI = 81.69% to 93.74%), specificity of 89.47% (95% CI = 83.84% to 95.11%), and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74% (95% CI = 87.71% to 97.76%). Conclusions Our model can be successfully deployed in clinical practice to facilitate automated remote retinal pathology identification. Translational Relevance A fully automated tool for early diagnosis of retinal pathology might allow for earlier treatment and improved visual outcomes.
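A dual-modal CNN of the kind described takes one CFP image and one OCT image per eye, extracts features from each with its own branch, and fuses them before classification. The sketch below shows that topology with Keras' functional API; input sizes and branch depths are illustrative, and the paper's alternate gradient descent training (which accommodates uninterpretable images) is not reproduced here:

```python
# Dual-input (CFP + OCT) CNN topology sketch; sizes/depths are hypothetical.
import tensorflow as tf

def branch(inputs, name):
    # One lightweight feature extractor per imaging modality.
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", name=f"{name}_conv1")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", name=f"{name}_conv2")(x)
    return tf.keras.layers.GlobalAveragePooling2D()(x)

cfp_in = tf.keras.Input(shape=(256, 256, 3), name="cfp")
oct_in = tf.keras.Input(shape=(256, 256, 1), name="oct")

# Fuse the per-modality features, then predict P(retinal pathology positive).
fused = tf.keras.layers.concatenate([branch(cfp_in, "cfp"), branch(oct_in, "oct")])
out = tf.keras.layers.Dense(1, activation="sigmoid", name="rpp_probability")(fused)

model = tf.keras.Model([cfp_in, oct_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```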
Collapse
Affiliation(s)
- Qitong Gao
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
| | - Joshua Amason
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Scott Cousins
- Department of Ophthalmology, Duke University, Durham, NC, USA
| | - Miroslav Pajic
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Department of Computer Science, Duke University, Durham, NC, USA
| | | |
Collapse
|
99
|
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of RIQA algorithms tailored to these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored to ONH disorders. Finally, we propose suggestions for such models in the future.
Collapse
Affiliation(s)
| | - Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark
| |
Collapse
|
100
|
Xu X, Guan Y, Li J, Ma Z, Zhang L, Li L. Automatic glaucoma detection based on transfer induced attention network. Biomed Eng Online 2021; 20:39. [PMID: 33892734 PMCID: PMC8066979 DOI: 10.1186/s12938-021-00877-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 04/13/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Glaucoma is one of the leading causes of irreversible vision loss. Automatic glaucoma detection based on fundus images has been widely studied in recent years. However, existing methods mainly depend on a considerable amount of labeled data to train the model, which is a serious constraint for real-world glaucoma detection. METHODS In this paper, we introduce a transfer learning technique that leverages fundus features learned from similar ophthalmic data to facilitate glaucoma diagnosis. Specifically, we propose a Transfer Induced Attention Network (TIA-Net) for automatic glaucoma detection, which extracts discriminative features that fully characterize glaucoma-related deep patterns under limited supervision. By integrating channel-wise attention and maximum mean discrepancy, the proposed method achieves a smooth transition between general and specific features, thus enhancing feature transferability. RESULTS To delimit the boundary between general and specific features precisely, we first investigate how many layers should be transferred when training with the source-dataset network. Next, we compare our proposed model to previously mentioned methods and analyze their performance. Finally, exploiting the model design, we provide a transparent and interpretable visualization of the transfer by highlighting the key specific features in each fundus image. Evaluated on two real clinical datasets, TIA-Net achieves an accuracy of 85.7%/76.6%, sensitivity of 84.9%/75.3%, specificity of 86.9%/77.2%, and AUC of 0.929/0.835 on the two datasets, respectively, far better than other state-of-the-art methods. CONCLUSION Unlike previous studies that applied classic CNN models to transfer features from non-medical datasets, we leverage knowledge from a similar ophthalmic dataset and propose an attention-based deep transfer learning model for the glaucoma diagnosis task. Extensive experiments on two real clinical datasets show that our TIA-Net outperforms other state-of-the-art methods and, moreover, holds medical value and significance for the early diagnosis of other medical conditions.
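The maximum mean discrepancy (MMD) term mentioned above measures how far apart the source- and target-domain feature distributions are; minimizing it alongside the classification loss encourages transferable features. The sketch below is the standard (biased) Gaussian-kernel MMD estimate, offered on the assumption that TIA-Net uses a comparable formulation; the kernel bandwidth and feature batches are illustrative:

```python
# Squared maximum mean discrepancy (MMD^2) between two feature batches,
# using a Gaussian kernel. Bandwidth and batch contents are hypothetical.
import tensorflow as tf

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared distances between rows of x and rows of y.
    d = tf.reduce_sum(tf.square(x[:, None, :] - y[None, :, :]), axis=-1)
    return tf.exp(-d / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two batches of feature vectors."""
    k_ss = tf.reduce_mean(gaussian_kernel(source, source, sigma))
    k_tt = tf.reduce_mean(gaussian_kernel(target, target, sigma))
    k_st = tf.reduce_mean(gaussian_kernel(source, target, sigma))
    return k_ss + k_tt - 2.0 * k_st

src = tf.random.normal((32, 128))   # source-domain features (hypothetical)
tgt = tf.random.normal((32, 128))   # target-domain features (hypothetical)
print(float(mmd2(src, tgt)))        # ~0 when the two distributions match
```

In training, this quantity would typically be added to the cross-entropy loss with a trade-off weight, pulling the two domains' feature distributions together as the classifier learns.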
Collapse
Affiliation(s)
- Xi Xu
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Yu Guan
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Zerui Ma
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Li Zhang
- Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Li
- Beijing Children’s Hospital, Capital Medical University, Beijing, China
| |
Collapse
|