101
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908] [PMCID: PMC8514536] [DOI: 10.1038/s41598-021-99605-1]
Abstract
Although recent deep learning models for glaucoma detection report unprecedented sensitivity and specificity values, they lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and estimation of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images processed under a series of cropping policies. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), spaced equidistantly from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on the original images achieved an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images with the ONH absent still attained substantial performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
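The ONH/periphery cropping policy described in this abstract can be illustrated with a circular mask. This is a minimal sketch under stated assumptions, not the authors' code: the function name, the (row, col) ONH center argument, and the use of max(height, width) as the reference image size for the radius are all illustrative choices.

```python
import numpy as np

def crop_onh(image, center, radius_frac, keep="onh"):
    """Zero out pixels outside (keep="onh") or inside (keep="periphery")
    a circle centered on the optic nerve head (ONH).

    image: H x W x C array; center: (row, col) of the ONH;
    radius_frac: crop radius as a fraction of image size (e.g. 0.10-0.60).
    Illustrative sketch only, not the study's implementation.
    """
    h, w = image.shape[:2]
    radius = radius_frac * max(h, w)
    rr, cc = np.ogrid[:h, :w]
    inside = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= radius ** 2
    mask = inside if keep == "onh" else ~inside
    return image * mask[..., None]
```

The "periphery" setting is simply the complement of the ONH mask, mirroring the inverse cropping policy in the abstract.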
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
102
Wong YL, Noor M, James KL, Aslam TM. Ophthalmology Going Greener: A Narrative Review. Ophthalmol Ther 2021; 10:845-857. [PMID: 34633635] [PMCID: PMC8502635] [DOI: 10.1007/s40123-021-00404-8]
Abstract
The combined effects of fossil fuel combustion, mass agricultural production and deforestation, industrialisation and the evolution of modern transport systems have resulted in high levels of carbon emissions and accumulation of greenhouse gases, causing profound climate change and ozone layer depletion. The consequential depletion of Earth's natural ecosystems and biodiversity is not only a devastating loss but also a threat to human health. Sustainability, the ability to continue activities indefinitely, underpins the principal solutions to these problems. Globally, the healthcare sector is a major contributor to carbon emissions, with waste production and transport systems amongst the highest contributing factors. The aim of this review is to explore modalities by which the healthcare sector, particularly ophthalmology, can reduce carbon emissions, related costs and overall environmental impact, whilst maintaining a high standard of patient care.
Affiliation(s)
- Yee Ling Wong
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK
- Maha Noor
- Manchester University NHS Foundation Trust, Manchester, UK
- Katherine L James
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK
- Tariq M Aslam
- Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK; School of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
103
Wu Y, Szymanska M, Hu Y, Fazal MI, Jiang N, Yetisen AK, Cordeiro MF. Measures of disease activity in glaucoma. Biosens Bioelectron 2021; 196:113700. [PMID: 34653715] [DOI: 10.1016/j.bios.2021.113700]
Abstract
Glaucoma is the leading cause of irreversible blindness globally; it significantly affects quality of life and has a substantial economic impact. Effective detection methods are necessary to identify glaucoma as early as possible. Regular eye examinations are important for detecting the disease early and preventing deterioration of vision and quality of life. Current methods of measuring disease activity are powerful in describing the functional and structural changes in glaucomatous eyes. However, there is still a need for a novel tool to detect glaucoma earlier and more accurately. Tear fluid biomarker analysis and new imaging technology provide novel surrogate endpoints of glaucoma. Artificial intelligence (AI) is a post-diagnostic tool that can analyse ophthalmic test results. A detailed review of clinical tests currently used in glaucoma, including intraocular pressure measurement, visual field testing and optical coherence tomography, is presented. Advanced technologies for glaucoma measurement that can identify specific disease characteristics, as well as the mechanism, performance and future perspectives of these devices, are highlighted. Applications of AI in glaucoma diagnosis and prediction are also discussed. With developments in imaging tools, sensor technologies and artificial intelligence, diagnostic evaluation of glaucoma must assess more variables to facilitate earlier diagnosis and management in the future.
Affiliation(s)
- Yue Wu
- Department of Surgery and Cancer, Imperial College London, South Kensington, London, United Kingdom; Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom
- Maja Szymanska
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom
- Yubing Hu
- Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom
- M Ihsan Fazal
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom
- Nan Jiang
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu, 610041, China
- Ali K Yetisen
- Department of Chemical Engineering, Imperial College London, South Kensington, London, United Kingdom
- M Francesca Cordeiro
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, United Kingdom; The Western Eye Hospital, Imperial College Healthcare NHS Trust (ICHNT), London, United Kingdom; Glaucoma and Retinal Neurodegeneration Group, Department of Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
104
Chen JS, Coyner AS, Ostmo S, Sonmez K, Bajimaya S, Pradhan E, Valikodath N, Cole ED, Al-Khaled T, Chan RVP, Singh P, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras. Ophthalmol Retina 2021; 5:1027-1035. [PMID: 33561545] [PMCID: PMC8364291] [DOI: 10.1016/j.oret.2020.12.013]
Abstract
PURPOSE Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems. DESIGN Diagnostic validation study of a CNN for stage detection. PARTICIPANTS Retinal fundus images obtained from preterm infants during routine ROP screenings. METHODS Two datasets were used: 5943 fundus images obtained by RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained by 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled for the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on the North American dataset alone, the Nepali dataset alone, and a combined dataset, and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively. MAIN OUTCOME MEASURES Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity. RESULTS Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population (AUROC 0.99, AUPRC 0.98, sensitivity 94%; and AUROC 0.97, AUPRC 0.91, sensitivity 73%, respectively). However, performance decreased to an AUROC of 0.96 and AUPRC of 0.88 (sensitivity, 52%) and an AUROC of 0.62 and AUPRC of 0.36 (sensitivity, 44%) when each model was evaluated on the test set from the other population. Compared with the models trained on individual datasets, the model trained on the combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set. CONCLUSIONS A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between the training and testing populations. We demonstrated that both internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
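The outcome measures this abstract reports (AUROC, plus sensitivity and specificity at a decision threshold) can be sketched in plain NumPy. This is an illustrative helper, not the study's evaluation code; the function name and the 0.5 threshold default are assumptions, and AUROC is computed in its Mann-Whitney (pairwise ranking) form.

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """AUROC (Mann-Whitney pairwise form), plus sensitivity and
    specificity at a fixed score threshold. Illustrative sketch only."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # AUROC = P(score of a positive > score of a negative), ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auroc = (greater + 0.5 * ties) / (pos.size * neg.size)
    y_pred = y_score >= threshold
    sensitivity = (y_pred & (y_true == 1)).sum() / (y_true == 1).sum()
    specificity = (~y_pred & (y_true == 0)).sum() / (y_true == 0).sum()
    return auroc, sensitivity, specificity
```

The pairwise form makes explicit why AUROC is threshold-free while sensitivity and specificity are not.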
Affiliation(s)
- Jimmy S Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S Coyner
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- Susan Ostmo
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Kemal Sonmez
- Cancer Early Detection Advanced Research Center, Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon
- Eli Pradhan
- Tilganga Institute of Ophthalmology, Kathmandu, Nepal
- Nita Valikodath
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Emily D Cole
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Tala Al-Khaled
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- R V Paul Chan
- Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
- Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts; Center for Clinical Data Science, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, Massachusetts
- Michael F Chiang
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
105
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041] [DOI: 10.1111/ceo.14000]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched for studies reporting a comparison between the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations on the same datasets, up to 10 December 2020. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence interval [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference under several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning model of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning performed similarly to ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
- Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France; CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France; Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
- Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
- Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
- Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France; CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
106
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271] [PMCID: PMC10637344] [DOI: 10.1097/wno.0000000000001358]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve [AUC-ROC]) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be beneficial because current practice patterns and training numbers predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.
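The 10-fold cross-validation protocol this abstract reports can be sketched as follows. `kfold_indices` is an illustrative helper, not the authors' pipeline; in practice a stratified split (preserving the abnormal/normal ratio per fold) would be closer to common usage.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices, split them into k roughly equal folds,
    and yield (train_idx, val_idx) pairs for cross-validation.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val
```

Each sample appears in exactly one validation fold, so the k validation scores can be averaged into the mean performance figures quoted above.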
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
107
Zebardast N, Sekimitsu S, Wang J, Elze T, Gharahkhani P, Cole BS, Lin MM, Segrè AV, Wiggs JL. Characteristics of p.Gln368Ter Myocilin Variant and Influence of Polygenic Risk on Glaucoma Penetrance in the UK Biobank. Ophthalmology 2021; 128:1300-1311. [PMID: 33713785] [PMCID: PMC9134646] [DOI: 10.1016/j.ophtha.2021.03.007]
Abstract
PURPOSE MYOC (myocilin) mutations account for 3% to 5% of primary open-angle glaucoma (POAG) cases. We aimed to understand the true population-wide penetrance and characteristics of glaucoma among individuals with the most common MYOC variant (p.Gln368Ter) and the impact of a POAG polygenic risk score (PRS) in this population. DESIGN Cross-sectional population-based study. PARTICIPANTS Individuals with the p.Gln368Ter variant among 77,959 UK Biobank participants with fundus photographs (FPs). METHODS A genome-wide POAG PRS was computed, and 2 masked graders reviewed FPs for disc-defined glaucoma (DDG). MAIN OUTCOME MEASURES Penetrance of glaucoma. RESULTS Two hundred individuals carried the p.Gln368Ter heterozygous genotype, and 177 had gradable FPs. One hundred thirty-two showed no evidence of glaucoma, 45 (25.4%) had probable/definite glaucoma in at least 1 eye, and 19 (10.7%) had bilateral glaucoma. No differences were found in age, race/ethnicity, or gender among groups (P > 0.05). Of those with DDG, 31% self-reported or had International Classification of Diseases codes for glaucoma, whereas 69% were undiagnosed. Those with DDG had higher medication-adjusted cornea-corrected intraocular pressure (IOPcc) (P < 0.001) vs. those without glaucoma. This difference in IOPcc was larger in those with DDG with a prior glaucoma diagnosis versus those not diagnosed (P < 0.001). Most p.Gln368Ter carriers showed IOP in the normal range (≤21 mmHg), although this proportion was lower in those with DDG (P < 0.02) and those with prior glaucoma diagnosis (P < 0.03). Prevalence of DDG increased with each decile of POAG PRS. Individuals with DDG demonstrated significantly higher PRS compared with those without glaucoma (0.37 ± 0.97 vs. 0.01 ± 0.90; P = 0.03). Of those with DDG, individuals with a prior diagnosis of glaucoma had higher PRS compared with undiagnosed individuals (1.31 ± 0.64 vs. 0.00 ± 0.81; P < 0.001) and had 27.5 times (95% confidence interval, 2.5-306.6) the adjusted odds of being in the top decile of PRS for POAG. CONCLUSIONS One in 4 individuals with the MYOC p.Gln368Ter mutation demonstrated evidence of glaucoma, a substantially higher penetrance than previously estimated, with 69% of cases undetected. A large portion of p.Gln368Ter carriers, including those with DDG, have IOP in the normal range, despite similar age. Polygenic risk score increases disease penetrance and severity, supporting the usefulness of PRS in risk stratification among MYOC p.Gln368Ter carriers.
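The per-decile penetrance analysis described in this abstract can be sketched as follows. `penetrance_by_decile` is an illustrative helper, not the study's code; it assumes a simple quantile-based decile assignment of the PRS.

```python
import numpy as np

def penetrance_by_decile(prs, has_glaucoma):
    """Fraction of affected individuals within each PRS decile (1..10).
    Illustrative sketch only: deciles are defined by sample quantiles."""
    prs = np.asarray(prs, dtype=float)
    affected = np.asarray(has_glaucoma, dtype=bool)
    edges = np.quantile(prs, np.linspace(0, 1, 11))  # 11 decile boundaries
    decile = np.clip(np.searchsorted(edges, prs, side="right"), 1, 10)
    return {d: affected[decile == d].mean() for d in range(1, 11)}
```

Plotting the returned dictionary reproduces the kind of "prevalence increases with each PRS decile" trend the abstract reports, given suitable data.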
Affiliation(s)
- Nazlee Zebardast
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Jiali Wang
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts; Ocular Genomics Institute, Harvard Medical School, Boston, Massachusetts
- Tobias Elze
- Schepens Eye Research Institute, Harvard Medical School, Boston, Massachusetts
- Puya Gharahkhani
- Statistical Genetics Group, Department of Genetics and Computational Biology, QIMR Berghofer Medical Research Institute, Brisbane, Australia
- Brian S Cole
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts; Ocular Genomics Institute, Harvard Medical School, Boston, Massachusetts
- Michael M Lin
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Ayellet V Segrè
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts; Ocular Genomics Institute, Harvard Medical School, Boston, Massachusetts
- Janey L Wiggs
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts; Ocular Genomics Institute, Harvard Medical School, Boston, Massachusetts
108
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409] [PMCID: PMC8444486] [DOI: 10.1167/tvst.10.11.16]
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has been shown to have significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particularly in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing of the DL module, which was developed with two DL architectures (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities. Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, and could be further integrated into AI-based models to improve operational flow for disease screening and diagnosis.
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
- Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
- Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
109
Betzler BK, Yang HHS, Thakur S, Yu M, Quek TC, Soh ZD, Lee G, Tham YC, Wong TY, Rim TH, Cheng CY. Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study. JMIR Med Inform 2021; 9:e25165. [PMID: 34402800] [PMCID: PMC8408758] [DOI: 10.2196/25165]
Abstract
Background Deep learning algorithms have been built for the detection of systemic and eye diseases based on fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured via photography differs depending on the retinal image field. Objective We aimed to compare deep learning algorithms’ performance in predicting gender based on different fields of fundus photographs (optic disc–centered, macula-centered, and peripheral fields). Methods This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc–centered, macula-centered, and peripheral field fundus images were included in this study as input data for a deep learning model for gender prediction. Performance was estimated at the individual level and image level. Receiver operating characteristic curves for binary classification were calculated. Results The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and an AUC of 0.87 at the image level. Across the three image field types, the best performance was seen when using optic disc–centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms that used peripheral field images had the lowest performance (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, algorithm performance was lowest in the Indian subgroup (AUC=0.88) compared to that in the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups when the algorithms were tested on optic disc–centered images. Algorithms’ performance in gender prediction at the image level was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82). 
Conclusions We confirmed that gender among the Asian population can be predicted with fundus photographs by using deep learning, and our algorithms’ performance in terms of gender prediction differed according to the field of fundus photographs, age subgroups, and ethnic groups. Our work provides a further understanding of using deep learning models for the prediction of gender-related diseases. Further validation of our findings is still needed.
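The study's two evaluation levels (image level vs. individual level) can be sketched as follows; this is a minimal illustration with invented labels and scores, not the study's data, where the individual-level score is simply the mean of a person's per-image predictions:

```python
# Hypothetical sketch of image-level vs. individual-level AUC evaluation.
# All scores and labels below are invented; only the aggregation logic
# is illustrated.

def auc(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) definition:
    the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Several fundus images per person; the label (1 = female, say) is per person.
person = [0, 0, 1, 1, 2, 2, 3, 3]
y_img = [1, 1, 0, 0, 1, 1, 0, 0]
p_img = [0.9, 0.7, 0.2, 0.4, 0.6, 0.35, 0.3, 0.55]

auc_image = auc(y_img, p_img)  # one ROC point per image

# Individual level: average each person's image scores before evaluating.
ids = sorted(set(person))
p_ind = [sum(p for k, p in zip(person, p_img) if k == i) / person.count(i)
         for i in ids]
y_ind = [y_img[person.index(i)] for i in ids]
auc_indiv = auc(y_ind, p_ind)

print(auc_image, auc_indiv)  # aggregation raises AUC here: 0.875 -> 1.0
```

Averaging several images per person cancels some per-image noise, which is consistent with the higher individual-level AUC reported above.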
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Henrik Hee Seung Yang
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Sahil Thakur
- Singapore Eye Research Institute, Singapore, Singapore
- Marco Yu
- Singapore Eye Research Institute, Singapore, Singapore
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore, Singapore
- Yih-Chung Tham
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Tien Yin Wong
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Tyler Hyungtaek Rim
- Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
- Ching-Yu Cheng
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Singapore Eye Research Institute, Singapore, Singapore
110
Wong SH, Tsai JC. Telehealth and Screening Strategies in the Diagnosis and Management of Glaucoma. J Clin Med 2021; 10:3452. [PMID: 34441748 PMCID: PMC8396962 DOI: 10.3390/jcm10163452]
Abstract
Telehealth has become a viable option for glaucoma screening and glaucoma monitoring due to advances in technology. The ability to measure intraocular pressure without an anesthetic and to take optic nerve photographs without pharmacologic pupillary dilation using portable equipment have allowed glaucoma screening programs to generate enough data for assessment. At home, patients can perform visual acuity testing, web-based visual field testing, rebound tonometry, and video visits with the physician to monitor for glaucomatous progression. Artificial intelligence will enhance the accuracy of data interpretation and inspire confidence in popularizing telehealth for glaucoma.
Affiliation(s)
- Sze H. Wong
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York Eye and Ear Infirmary of Mount Sinai, New York, NY 10003, USA;
111
Bowd C, Belghith A, Christopher M, Goldbaum MH, Fazio MA, Girkin CA, Liebmann JM, de Moraes CG, Weinreb RN, Zangwill LM. Individualized Glaucoma Change Detection Using Deep Learning Auto Encoder-Based Regions of Interest. Transl Vis Sci Technol 2021; 10:19. [PMID: 34293095 PMCID: PMC8300051 DOI: 10.1167/tvst.10.8.19]
Abstract
Purpose To compare change over time in eye-specific optical coherence tomography (OCT) retinal nerve fiber layer (RNFL)-based region-of-interest (ROI) maps developed using unsupervised deep-learning auto-encoders (DL-AE) to circumpapillary RNFL (cpRNFL) thickness for the detection of glaucomatous progression. Methods Forty-four progressing glaucoma eyes (by stereophotograph assessment), 189 nonprogressing glaucoma eyes (by stereophotograph assessment), and 109 healthy eyes were followed for ≥3 years with ≥4 visits using OCT. The San Diego Automated Layer Segmentation Algorithm was used to automatically segment the RNFL layer from raw three-dimensional OCT images. For each longitudinal series, DL-AEs were used to generate individualized eye-based ROI maps by identifying RNFL regions of likely progression and no change. Sensitivities and specificities for detecting change over time and rates of change over time were compared for the DL-AE ROI and global cpRNFL thickness measurements derived from a 2.22-mm to 3.45-mm annulus centered on the optic disc. Results The sensitivity for detecting change in progressing eyes was greater for DL-AE ROIs than for global cpRNFL annulus thicknesses (0.90 and 0.63, respectively). The specificity for detecting not likely progression in nonprogressing eyes was similar (0.92 and 0.93, respectively). The mean rates of change in DL-AE ROI were significantly faster than for cpRNFL annulus thickness in progressing eyes (-1.28 µm/y vs. -0.83 µm/y) and nonprogressing eyes (-1.03 µm/y vs. -0.78 µm/y). Conclusions Eye-specific ROIs identified using DL-AE analysis of OCT images show promise for improving assessment of glaucomatous progression. Translational Relevance The detection and monitoring of structural glaucomatous progression can be improved by considering eye-specific regions of likely progression identified using deep learning.
Affiliation(s)
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, AL, USA
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, NY, USA
- Carlos Gustavo de Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, NY, USA
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, The Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, CA, USA
112
Fully Automated Colorimetric Analysis of the Optic Nerve Aided by Deep Learning and Its Association with Perimetry and OCT for the Study of Glaucoma. J Clin Med 2021; 10:jcm10153231. [PMID: 34362014 PMCID: PMC8347493 DOI: 10.3390/jcm10153231]
Abstract
Background: Laguna-ONhE is an application for the colorimetric analysis of optic nerve images, which topographically assesses the cup and the presence of haemoglobin. Its latest version has been fully automated with five deep learning models. In this paper, perimetry in combination with Laguna-ONhE or Cirrus-OCT was evaluated. Methods: The morphology and perfusion estimated by Laguna-ONhE were compiled into a "Globin Distribution Function" (GDF). Visual field irregularity was measured with the usual pattern standard deviation (PSD) and the threshold coefficient of variation (TCV), which analyses its harmony without taking age-corrected values into account. In total, 477 normal eyes, 235 confirmed glaucoma cases, and 98 suspected glaucoma cases were examined with Cirrus-OCT and different fundus cameras and perimeters. Results: The best receiver operating characteristic (ROC) results for confirmed and suspected glaucoma were obtained with the combination of GDF and TCV (AUC: 0.995 and 0.935, respectively; sensitivities: 94.5% and 45.9%, respectively, at 99% specificity). The best combination of OCT and perimetry was obtained with the vertical cup/disc ratio and PSD (AUC: 0.988 and 0.847, respectively; sensitivities: 84.7% and 18.4%, respectively, at 99% specificity). Conclusion: Using Laguna-ONhE, morphology, perfusion, and function can be mutually enhanced with the methods described for the purpose of glaucoma assessment, providing early sensitivity.
113
Saeed AQ, Sheikh Abdullah SNH, Che-Hamzah J, Abdul Ghani AT. Accuracy of Using Generative Adversarial Networks for Glaucoma Detection During the COVID-19 Pandemic: A Systematic Review and Bibliometric Analysis. J Med Internet Res 2021; 23:e27414. [PMID: 34236992 PMCID: PMC8493455 DOI: 10.2196/27414]
Abstract
Background Glaucoma leads to irreversible blindness. Globally, it is the second most common retinal disease that leads to blindness, slightly less common than cataracts. Therefore, there is a great need to avoid the silent growth of this disease using recently developed generative adversarial networks (GANs). Objective This paper aims to introduce a GAN technology for the diagnosis of eye disorders, particularly glaucoma. This paper illustrates deep adversarial learning as a potential diagnostic tool and the challenges involved in its implementation. This study describes and analyzes many of the pitfalls and problems that researchers will need to overcome to implement this kind of technology. Methods To organize this review comprehensively, articles and reviews were collected using the following keywords: (“Glaucoma,” “optic disc,” “blood vessels”) and (“receptive field,” “loss function,” “GAN,” “Generative Adversarial Network,” “Deep learning,” “CNN,” “convolutional neural network” OR encoder). The records were identified from 5 highly reputed databases: IEEE Xplore, Web of Science, Scopus, ScienceDirect, and PubMed. These libraries broadly cover the technical and medical literature. Publications within the last 5 years, specifically 2015-2020, were included because the target GAN technique was invented only in 2014 and the publishing date of the collected papers was not earlier than 2016. Duplicate records were removed, and irrelevant titles and abstracts were excluded. In addition, we excluded papers that used optical coherence tomography and visual field images, except for those with 2D images. A large-scale systematic analysis was performed, and then a summarized taxonomy was generated. Furthermore, the results of the collected articles were summarized and a visual representation of the results was presented on a T-shaped matrix diagram. This study was conducted between March 2020 and November 2020. 
Results We found 59 articles after conducting a comprehensive survey of the literature. Among the 59 articles, 30 present actual attempts to synthesize images and provide accurate segmentation/classification using single/multiple landmarks or share certain experiences. The other 29 articles discuss the recent advances in GANs, do practical experiments, and contain analytical studies of retinal disease. Conclusions Recent deep learning techniques, namely GANs, have shown encouraging performance in retinal disease detection. Although this methodology involves an extensive computing budget and optimization process, it saturates the greedy nature of deep learning techniques by synthesizing images and solves major medical issues. This paper contributes to this research field by offering a thorough analysis of existing works, highlighting current limitations, and suggesting alternatives to support other researchers and participants in further improving and strengthening future work. Finally, new directions for this research have been identified.
Affiliation(s)
- Ali Q Saeed
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), UKM, 43600 Bangi, Selangor, Malaysia, Selangor, MY; Computer Center, Northern Technical University, Ninevah, IQ
- Siti Norul Huda Sheikh Abdullah
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), UKM, 43600 Bangi, Selangor, Malaysia, Selangor, MY
- Jemaima Che-Hamzah
- Department of Ophthalmology, Faculty of Medicine, Universiti Kebangsaan Malaysia (UKM), Cheras, Kuala Lumpur, MY
- Ahmad Tarmizi Abdul Ghani
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), UKM, 43600 Bangi, Selangor, Malaysia, Selangor, MY
114
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760 PMCID: PMC8212443 DOI: 10.1167/tvst.10.7.20]
Abstract
Purpose The discrepancy between the number of ophthalmologists and the number of patients in China is large. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2000 fundus images were collected; 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The models' diagnostic results were compared with the clinical diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in the fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best results for sensitivity, specificity, and F1-score of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of a fundus image and the model's area of focus. Translational Relevance This study will help primary care doctors provide effective services to ophthalmic patients.
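The evaluation indicators named in this abstract (per-class sensitivity, specificity, F1-score, kappa, and accuracy) can all be derived from a confusion matrix; the sketch below uses a hypothetical 3-class matrix for brevity, whereas the study itself used five categories:

```python
# Sketch of the reported indicators (sensitivity, specificity, F1, kappa,
# accuracy) from a confusion matrix. The 3x3 matrix is hypothetical; the
# study itself used five disease categories.

C = [[50, 3, 2],   # rows = true class, columns = predicted class
     [4, 40, 1],
     [2, 2, 46]]
n = sum(map(sum, C))
k = len(C)

accuracy = sum(C[i][i] for i in range(k)) / n

# One-vs-rest sensitivity, specificity and F1 per class
metrics = {}
for c in range(k):
    tp = C[c][c]
    fn = sum(C[c]) - tp
    fp = sum(C[r][c] for r in range(k)) - tp
    tn = n - tp - fn - fp
    metrics[c] = {"sens": tp / (tp + fn),
                  "spec": tn / (tn + fp),
                  "f1": 2 * tp / (2 * tp + fp + fn)}

# Cohen's kappa: observed agreement corrected for chance agreement
p_obs = accuracy
p_exp = sum(sum(C[c]) * sum(C[r][c] for r in range(k)) for c in range(k)) / n ** 2
kappa = (p_obs - p_exp) / (1 - p_exp)
print(round(kappa, 3))  # 0.859
```

Kappa discounts the agreement expected by chance from the class frequencies, which is why abstracts such as this one report it alongside raw accuracy.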
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Qin Jiang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Kai He
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China; College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
- Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
- Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
115
Xiao X, Xue L, Ye L, Li H, He Y. Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: a cost-offset analysis. BMC Public Health 2021; 21:1065. [PMID: 34088286 PMCID: PMC8178835 DOI: 10.1186/s12889-021-11097-w]
Abstract
Background Population-based screening is essential for glaucoma management. Although various studies have investigated the cost-effectiveness of glaucoma screening, policymakers faced with uncontrollably growing total health expenditure remain deeply concerned about the potential financial consequences of glaucoma screening. This study aimed to explore the impact of glaucoma screening with artificial intelligence (AI) automated diagnosis from a budgetary standpoint in Changjiang county, China. Methods A Markov model from the health care system's perspective was adapted from previously published studies to predict disease progression and healthcare costs. A cohort of 19,395 individuals aged 65 and above was simulated over a 15-year timeframe. For illustrative purposes, we only considered primary angle-closure glaucoma (PACG) in this study. Prevalence, disease progression risks between stages, and compliance rates were obtained from published studies. We performed a meta-analysis to estimate the diagnostic performance of the AI automated diagnosis system on fundus images. Screening costs were provided by the Changjiang screening programme, whereas treatment costs were derived from electronic medical records from two county hospitals. Main outcomes included the number of PACG patients and health care costs. Cost-offset analysis was employed to compare projected health outcomes and medical care costs under the screening with what they would have been without screening. One-way sensitivity analysis was conducted to quantify uncertainties around model results. Results Among people aged 65 and above in Changjiang county, it was predicted that there would be 1940 PACG patients under the AI-assisted screening scenario, compared with 2104 patients without screening in 15 years' time. Specifically, the screening would reduce patients with primary angle closure suspect by 7.7%, primary angle closure by 8.8%, PACG by 16.7%, and visual blindness by 33.3%.
Due to early diagnosis and treatment under the screening, healthcare costs surged to $107,761.40 in the first year and then declined steadily over time, whereas without screening, costs grew from $14,759.80 in the second year until peaking at $17,900.90 in the 9th year. However, cost-offset analysis revealed that the additional healthcare costs resulting from the screening could not be offset by decreased disease progression. The 5-, 10-, and 15-year accumulated incremental costs of screening versus no screening were estimated to be $396,362.80, $424,907.90, and $434,903.20, respectively. As a result, the incremental cost per PACG case of any stage prevented was $1464.30. Conclusions This study represents the first attempt to address decision-makers' budgetary concerns when adopting glaucoma screening by developing a Markov prediction model to project health outcomes and costs. Population screening combined with AI automated diagnosis for PACG in China was able to reduce disease progression risks. However, the excess costs of screening could not be offset by the reduction in disease progression. Further studies examining the cost-effectiveness or cost-utility of AI-assisted glaucoma screening are needed. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11097-w.
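The Markov cohort logic behind this cost-offset comparison can be sketched as a toy simulation; all states, transition probabilities, and per-state annual costs below are invented placeholders, not the paper's published PACG parameters:

```python
# Toy Markov cohort model of the cost-offset comparison. All states,
# transition probabilities, and per-state annual costs are invented
# placeholders, not the paper's published PACG parameters.

STATES = ["healthy", "PAC suspect/PAC", "PACG", "blind"]

def run(trans, cohort, annual_cost, years):
    """Propagate a cohort through yearly transitions; return final state
    counts and the total cost accumulated over the horizon."""
    total = 0.0
    for _ in range(years):
        cohort = [sum(cohort[i] * trans[i][j] for i in range(len(STATES)))
                  for j in range(len(STATES))]
        total += sum(c * k for c, k in zip(cohort, annual_cost))
    return cohort, total

# Toy assumption: screening plus early treatment slow progression at every
# stage, but add screening/treatment costs in the earlier states.
no_screen = [[0.96, 0.04, 0.00, 0.00],
             [0.00, 0.90, 0.10, 0.00],
             [0.00, 0.00, 0.92, 0.08],
             [0.00, 0.00, 0.00, 1.00]]
screen = [[0.98, 0.02, 0.00, 0.00],
          [0.00, 0.95, 0.05, 0.00],
          [0.00, 0.00, 0.96, 0.04],
          [0.00, 0.00, 0.00, 1.00]]

cohort0 = [19395.0, 0.0, 0.0, 0.0]   # the study's cohort size, all healthy
cost_ns = [0.0, 0.0, 30.0, 60.0]     # $/person-year without screening
cost_s = [5.0, 10.0, 30.0, 60.0]     # screening + early-treatment costs

end_ns, spend_ns = run(no_screen, cohort0, cost_ns, 15)
end_s, spend_s = run(screen, cohort0, cost_s, 15)
print(end_s[3] < end_ns[3])   # True: fewer blind under screening
print(spend_s > spend_ns)     # True: extra costs are not offset in this toy
```

Even in this toy setup the screening arm improves outcomes yet costs more in total, which is the qualitative pattern the cost-offset analysis reports: paying small per-person screening costs across a large, mostly healthy cohort can exceed the savings from slower progression.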
Affiliation(s)
- Xuan Xiao
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060, China
- Long Xue
- School of Public Health, Fudan University, Shanghai, 200433, China
- Lin Ye
- Department of Eye Plastic and Lacrimal Disease, Shenzhen Eye Hospital of Jinan University, Shenzhen, 518040, China
- Hongzheng Li
- School of Public Health, Fudan University, Shanghai, 200433, China
- Yunzhen He
- School of Public Health, Fudan University, Shanghai, 200433, China.
116
Krishna Adithya V, Williams BM, Czanner S, Kavitha S, Friedman DS, Willoughby CE, Venkatesh R, Czanner G. EffUnet-SpaGen: An Efficient and Spatial Generative Approach to Glaucoma Detection. J Imaging 2021; 7:92. [PMID: 39080880 PMCID: PMC8321378 DOI: 10.3390/jimaging7060092]
Abstract
Current research in automated disease detection focuses on making algorithms "slimmer", reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while at the same time surpassing the current state of the art in glaucoma detection algorithms, with AUROC 0.997 and 0.969 in the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
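As a minimal illustration of what cup and disc masks enable downstream, the sketch below derives a vertical cup-to-disc ratio (the measure used in other entries of this bibliography) from toy binary masks; this is an assumed downstream use with synthetic masks, not part of the EffUnet-SpaGen pipeline itself:

```python
# Sketch: given segmented cup and disc masks (the output of a model such as
# EffUnet), the vertical cup-to-disc ratio follows from the masks' vertical
# extents. The tiny masks below are synthetic stand-ins for real output.

def vertical_extent(mask):
    """Number of rows in which the mask has at least one foreground pixel."""
    return sum(1 for row in mask if any(row))

def vcdr(cup, disc):
    return vertical_extent(cup) / vertical_extent(disc)

# 8x8 toy masks: disc spans 6 rows, cup spans 3 rows -> VCDR = 0.5
disc = [[0] * 8 for _ in range(8)]
cup = [[0] * 8 for _ in range(8)]
for r in range(1, 7):
    for c in range(2, 6):
        disc[r][c] = 1
for r in range(3, 6):
    for c in range(3, 5):
        cup[r][c] = 1

print(vcdr(cup, disc))  # 0.5
```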
Affiliation(s)
- Venkatesh Krishna Adithya
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India; (V.K.A.); (S.K.); (R.V.)
- Bryan M. Williams
- School of Computing and Communications, Lancaster University, Bailrigg, Lancaster LA1 4WA, UK
- Silvester Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool L3 3AF, UK
- Srinivasan Kavitha
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India
- David S. Friedman
- Glaucoma Center of Excellence, Harvard Medical School, Boston, MA 02114, USA
- Colin E. Willoughby
- Biomedical Research Institute, Ulster University, Coleraine, Co. Londonderry BT52 1SA, UK
- Rengaraj Venkatesh
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India
- Gabriela Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool L3 3AF, UK;
117
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404]
Abstract
ABSTRACT Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity, as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark
118
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686 PMCID: PMC7474840 DOI: 10.1016/j.preteyeres.2020.100900]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using tele-health supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks and the Internet of Things (IoT), creating an inter-dependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Tele-health and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovation during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Hanruo Liu
- Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
- Darren S J Ting
- Academic Ophthalmology, University of Nottingham, United Kingdom
- Sohee Jeon
- Keye Eye Center, Seoul, Republic of Korea
- Judy E Kim
- Medical College of Wisconsin, Milwaukee, WI, USA
- Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Haotian Lin
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
- Youxin Chen
- Peking Union Medical College Hospital, Beijing, China
- Taiji Sakomoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
- Dennis S C Lam
- C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
- Tien Y Wong
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
- Linda A Lam
- USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
- Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore.
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore.
119
Lee EB, Wang SY, Chang RT. Interpreting Deep Learning Studies in Glaucoma: Unresolved Challenges. Asia Pac J Ophthalmol (Phila) 2021; 10:261-267. [PMID: 34383718 DOI: 10.1097/apo.0000000000000395]
Abstract
ABSTRACT Deep learning algorithms as tools for automated image classification have recently experienced rapid growth in imaging-dependent medical specialties, including ophthalmology. However, only a few algorithms tailored to specific health conditions have been able to achieve regulatory approval for autonomous diagnosis. There is now an international effort to establish optimized thresholds for algorithm performance benchmarking in a rapidly evolving artificial intelligence field. This review examines the largest deep learning studies in glaucoma, with special focus on identifying recurrent challenges and limitations within these studies which preclude widespread clinical deployment. We focus on the 3 most common input modalities when diagnosing glaucoma, namely, fundus photographs, spectral domain optical coherence tomography scans, and standard automated perimetry data. We then analyze 3 major challenges present in all studies: defining the algorithm output of glaucoma, determining reliable ground truth datasets, and compiling representative training datasets.
Collapse
Affiliation(s)
- Eric Boya Lee
- Byers Eye Institute, Department of Ophthalmology, Stanford University, CA
| | | | | |
Collapse
|
120
|
Du Y, Chen Q, Fan Y, Zhu J, He J, Zou H, Sun D, Xin B, Feng D, Fulham M, Wang X, Wang L, Xu X. Automatic identification of myopic maculopathy related imaging features in optic disc region via machine learning methods. J Transl Med 2021; 19:167. [PMID: 33902640 PMCID: PMC8074495 DOI: 10.1186/s12967-021-02818-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 04/02/2021] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Myopic maculopathy (MM) is the most serious and irreversible complication of pathologic myopia and a major cause of visual impairment and blindness. Clinical studies have proposed only a limited number of factors related to MM. To explore additional features from the optic disc region that are strongly related to MM, we employed a machine learning based radiomics analysis method, which can uncover and quantify MM-related features that are hidden or imperceptible to the naked eye, contribute to a more comprehensive understanding of MM, and may therefore help identify the high-risk population at an early stage. METHODS A total of 457 eyes (313 patients) were enrolled and divided into a severe MM group and a non-severe MM group. Radiomics analysis was applied to extract features from the optic disc region that were significantly correlated with severe MM. Receiver operating characteristic (ROC) analysis was used to evaluate the performance of these features in classifying severe MM. RESULTS Eight new MM-related image features were discovered in the optic disc region, describing the shapes, textural patterns, and intensity distributions of that region. Compared with clinically reported MM-related features, the newly discovered features performed better at classifying severe MM. Moreover, the mean values of most features differed markedly between patients with peripapillary diffuse chorioretinal atrophy (PDCA) and those with macular diffuse chorioretinal atrophy (MDCA). CONCLUSIONS Machine learning and radiomics methods are useful tools for mining MM-related features from the optic disc region, through which complex or even hidden MM-related features can be discovered and decoded. The eight new image features identified here should be useful for further quantitative study of MM progression. As a nontrivial byproduct, marked changes between PDCA and MDCA were detected by both the new image features and the clinical features.
Collapse
Affiliation(s)
- Yuchen Du
- The Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University (SJTU), 800 Dongchuan RD. Minhang District, Shanghai, 200240, People's Republic of China
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
- Department of Ophthalmology, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photo Medicine, Shanghai General Hospital, SJTU School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, 20080, China
| | - Qiuying Chen
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
- Department of Ophthalmology, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photo Medicine, Shanghai General Hospital, SJTU School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, 20080, China
| | - Ying Fan
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
- Department of Ophthalmology, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photo Medicine, Shanghai General Hospital, SJTU School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, 20080, China
| | - Jianfeng Zhu
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
| | - Jiangnan He
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
| | - Haidong Zou
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China
- Department of Ophthalmology, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photo Medicine, Shanghai General Hospital, SJTU School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, 20080, China
| | - Dazhen Sun
- The Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University (SJTU), 800 Dongchuan RD. Minhang District, Shanghai, 200240, People's Republic of China
| | - Bowen Xin
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, NSW, 2006, Australia
| | - David Feng
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, NSW, 2006, Australia
| | - Michael Fulham
- Department of Molecular Imaging, Royal Prince Alfred Hospital and the University of Sydney, Sydney, Australia
| | - Xiuying Wang
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, The University of Sydney, Sydney, NSW, 2006, Australia
| | - Lisheng Wang
- The Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University (SJTU), 800 Dongchuan RD. Minhang District, Shanghai, 200240, People's Republic of China.
| | - Xun Xu
- Department of Preventative Ophthalmology, Shanghai Eye Diseases Prevention and Treatment Center, Shanghai Eye Hospital, No. 380 Kangding Road, Shanghai, 200040, China.
- Department of Ophthalmology, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photo Medicine, Shanghai General Hospital, SJTU School of Medicine, Shanghai, China.
- National Clinical Research Center for Eye Diseases, Shanghai, 20080, China.
| |
Collapse
|
121
|
Risk factors for open-angle glaucoma and recommendations for glaucoma screening. Ophthalmologe 2021; 118:145-152. [PMID: 33881589 DOI: 10.1007/s00347-021-01378-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/19/2021] [Indexed: 10/21/2022]
Abstract
Open-angle glaucomas are a group of chronic progressive optic neuropathies with an open anterior chamber angle on gonioscopy. They are one of the main causes of visual impairment and blindness in industrialized countries. The aim of this article is to discuss and evaluate the epidemiology and risk factors for the development of open-angle glaucoma and to present the screening procedure for open-angle glaucoma according to the recently published S2e guidelines of the Association of the Scientific Medical Societies in Germany (AWMF).
Collapse
|
122
|
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 250] [Impact Index Per Article: 83.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies, and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, to provide guidance around key issues in this field.
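As background to the AUC values pooled in this meta-analysis: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney interpretation). A minimal illustrative sketch in Python, with made-up scores and no connection to the studies reviewed:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outscores a random
    negative, counting ties as half a win (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A perfectly separating classifier yields AUC = 1.0:
print(auc([0.9, 0.8], [0.2, 0.1]))  # -> 1.0
```

In practice a library routine such as scikit-learn's `roc_auc_score` would be used; the quadratic pairwise count above is only for clarity.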
Collapse
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
| | | | - Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| | | | - Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK.
| | - Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
| |
Collapse
|
123
|
Cho H, Hwang YH, Chung JK, Lee KB, Park JS, Kim HG, Jeong JH. Deep Learning Ensemble Method for Classifying Glaucoma Stages Using Fundus Photographs and Convolutional Neural Networks. Curr Eye Res 2021; 46:1516-1524. [PMID: 33820457 DOI: 10.1080/02713683.2021.1900268] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Purpose: This study developed and evaluated a deep learning ensemble method to automatically grade the stages of glaucoma according to severity. Materials and Methods: After cross-validation by three glaucoma specialists, the final dataset, comprising 3,460 fundus photographs from 2,204 patients, was divided into three classes: unaffected controls, early-stage glaucoma, and late-stage glaucoma. The mean deviation value of standard automated perimetry was used to classify the glaucoma cases. We trained 56 convolutional neural networks (CNNs) with different characteristics and developed an ensemble system that combines several models' results to obtain the best performance. Results: The proposed method, with an accuracy of 88.1% and an average area under the receiver operating characteristic curve of 0.975, classifies glaucoma stages significantly better than the best single CNN model, which has an accuracy of 85.2% and an average area under the receiver operating characteristic curve of 0.950. False negatives, the least adjacent mispredictions, also occurred less often with the proposed method than with the best single CNN model. Conclusions: Averaging multiple CNN models classifies glaucoma stages from fundus photographs better than a single CNN model does. The ensemble method would be useful as a clinical decision support system in glaucoma screening for primary care because it provides high and stable performance with a relatively small amount of data.
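The ensemble-averaging idea described in this abstract can be sketched in a few lines: average the per-class probabilities emitted by each model, then take the arg-max class. This is an illustrative toy (the numbers and three-class layout are invented, not taken from the study):

```python
def ensemble_average(prob_lists):
    """Average one probability vector per model (all the same length) and
    return (averaged vector, arg-max class index)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return avg, max(range(n_classes), key=lambda c: avg[c])

# Three hypothetical models voting over (control, early, late) classes:
probs = [[0.2, 0.5, 0.3], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]]
avg, label = ensemble_average(probs)
print(label)  # -> 1 (the "early-stage" class in this toy setup)
```

Averaging tends to cancel the idiosyncratic errors of individual models, which is consistent with the more stable performance the authors report for the ensemble.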
Collapse
Affiliation(s)
- Hyeonsung Cho
- Intelligence and Robot System Research Group, Electronics & Telecommunication Research Institute, Daejeon, Republic of Korea
| | - Young Hoon Hwang
- Department of Ophthalmology, Chungnam National University Hospital, Daejeon, Republic of Korea
| | - Jae Keun Chung
- Department of Ophthalmology, Chungnam National University Hospital, Daejeon, Republic of Korea
| | - Kwan Bok Lee
- Department of Ophthalmology, Chungnam National University Hospital, Daejeon, Republic of Korea
| | - Ji Sang Park
- Intelligence and Robot System Research Group, Electronics & Telecommunication Research Institute, Daejeon, Republic of Korea
| | - Hong-Gee Kim
- Biomedical Knowledge Engineering Laboratory, Seoul National University, Seoul, Republic of Korea
| | - Jae Hoon Jeong
- Department of Ophthalmology, Konyang University Hospital, Konyang University College of Medicine, Daejeon, Republic of Korea
| |
Collapse
|
124
|
Abstract
Ophthalmology has been at the forefront of medical specialties adopting artificial intelligence. This is primarily due to the "image-centric" nature of the field. Thanks to the abundance of patients' OCT scans, analysis of OCT imaging has greatly benefited from artificial intelligence to expand patient screening and facilitate clinical decision-making. In this review, we define the concepts of artificial intelligence, machine learning, and deep learning and how different artificial intelligence algorithms have been applied in OCT image analysis for disease screening, diagnosis, management, and prognosis. Finally, we address some of the challenges and limitations that might affect the incorporation of artificial intelligence in ophthalmology. These limitations mainly revolve around the quality and accuracy of datasets used in the algorithms and their generalizability, false negatives, and the cultural challenges around the adoption of the technology.
Collapse
Affiliation(s)
- Mohammad Dahrouj
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
| | - John B Miller
- Department of Ophthalmology, Harvard Retinal Imaging Lab, Massachusetts Eye and Ear, Boston, MA, USA
| |
Collapse
|
125
|
Xu Y, Hu M, Liu H, Yang H, Wang H, Lu S, Liang T, Li X, Xu M, Li L, Li H, Ji X, Wang Z, Li L, Weinreb RN, Wang N. A hierarchical deep learning approach with transparency and interpretability based on small samples for glaucoma diagnosis. NPJ Digit Med 2021; 4:48. [PMID: 33707616 PMCID: PMC7952384 DOI: 10.1038/s41746-021-00417-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 02/08/2021] [Indexed: 12/11/2022] Open
Abstract
The application of deep learning algorithms for medical diagnosis in the real world faces challenges with transparency and interpretability. The labeling of large-scale samples leads to costly investment in developing deep learning algorithms. The application of human prior knowledge is an effective way to solve these problems. Previously, we developed a deep learning system for glaucoma diagnosis based on a large number of samples that had high sensitivity and specificity. However, it is a black box and the specific analytic methods cannot be elucidated. Here, we establish a hierarchical deep learning system based on a small number of samples that comprehensively simulates the diagnostic thinking of human experts. This system can extract the anatomical characteristics of the fundus images, including the optic disc, optic cup, and appearance of the retinal nerve fiber layer to realize automatic diagnosis of glaucoma. In addition, this system is transparent and interpretable, and the intermediate process of prediction can be visualized. Applying this system to three validation datasets of fundus images, we demonstrate performance comparable to that of human experts in diagnosing glaucoma. Moreover, it markedly improves the diagnostic accuracy of ophthalmologists. This system may expedite the screening and diagnosis of glaucoma, resulting in improved clinical outcomes.
Collapse
Affiliation(s)
- Yongli Xu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
| | - Man Hu
- National Key Discipline of Pediatrics, Ministry of Education, Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China
| | - Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China; School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Hao Yang
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
| | - Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China
| | - Shuai Lu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China; School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Tianwei Liang
- National Key Discipline of Pediatrics, Ministry of Education, Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China
| | - Xiaoxing Li
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
| | - Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Liu Li
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Xin Ji
- Beijing Shanggong Medical Technology co., Ltd, Beijing, China
| | - Zhijun Wang
- Beijing Shanggong Medical Technology co., Ltd, Beijing, China
| | - Li Li
- National Key Discipline of Pediatrics, Ministry of Education, Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China.
| | - Robert N Weinreb
- Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
| | - Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University & Capital Medical University, Beijing Tongren Hospital, Beijing, China.
| |
Collapse
|
126
|
Chai Y, Bian Y, Liu H, Li J, Xu J. Glaucoma diagnosis in the Chinese context: An uncertainty information-centric Bayesian deep learning model. Inf Process Manag 2021. [DOI: 10.1016/j.ipm.2020.102454] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
127
|
|
128
|
Lu L, Ren P, Lu Q, Zhou E, Yu W, Huang J, He X, Han W. Analyzing fundus images to detect diabetic retinopathy (DR) using deep learning system in the Yangtze River delta region of China. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:226. [PMID: 33708853 PMCID: PMC7940941 DOI: 10.21037/atm-20-3275] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Background This study aimed to establish and evaluate an artificial intelligence-based deep learning system (DLS) for the automatic detection of diabetic retinopathy, which could be important in developing an advanced tele-screening system for diabetic retinopathy. Methods A DLS with a convolutional neural network was developed to recognize fundus images of referable diabetic retinopathy. A total dataset of 41,866 color fundus images was obtained from 17 cities in the Yangtze River Delta Urban Agglomeration (YRDUA). Five experienced retinal specialists and 15 ophthalmologists were recruited to verify the images. For training, 80% of the dataset was used, and the other 20% served as the validation dataset. To make the learning process interpretable, the DLS automatically superimposed a heatmap on the original image, highlighting the regions it used for diagnosis. Results On the local validation dataset, the DLS achieved an area under the curve of 0.9824. Based on the manual screening criteria, an operating point was set at about 0.9 sensitivity to evaluate the DLS; specificity was 0.9609 and sensitivity was 0.9003. The DLS showed excellent reliability, repeatability, and efficiency. Analysis of the misclassifications found that 88.6% of the false positives were mild non-proliferative diabetic retinopathy (NPDR), whereas 81.6% of the false negatives were intraretinal microvascular abnormalities. Conclusions The DLS efficiently detected fundus images from complex real-world sources. Incorporating DLS technology in tele-screening will advance current screening programs, offering a cost-effective and time-efficient solution for detecting diabetic retinopathy.
Collapse
Affiliation(s)
- Li Lu
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China; Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
| | - Peifang Ren
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Qianyi Lu
- Department of Ophthalmology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Enliang Zhou
- Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
| | - Wangshu Yu
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Jiani Huang
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Xiaoying He
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| | - Wei Han
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
| |
Collapse
|
129
|
Hemelings R, Elen B, Blaschko MB, Jacob J, Stalmans I, De Boever P. Pathological myopia classification with simultaneous lesion segmentation using deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 199:105920. [PMID: 33412285 DOI: 10.1016/j.cmpb.2020.105920] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 12/21/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES Pathological myopia (PM) is the seventh leading cause of blindness, with a reported global prevalence of up to 3%. Early and automated PM detection from fundus images could help prevent blindness in a world population characterized by rising myopia prevalence. We aim to assess the use of convolutional neural networks (CNNs) for the detection of PM and semantic segmentation of myopia-induced lesions from fundus images on a recently introduced reference data set. METHODS This investigation reports on the results of CNNs developed for the recently introduced Pathological Myopia (PALM) dataset, which consists of 1200 images. Our CNN bundles lesion segmentation and PM classification, as the two tasks are heavily intertwined. Domain knowledge is also incorporated through a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and fovea localization. Finally, we are the first to approach fovea localization using segmentation instead of detection or regression models. Evaluation metrics include area under the receiver operating characteristic curve (AUC) for PM detection, Euclidean distance for fovea localization, and Dice and F1 metrics for the semantic segmentation tasks (optic disc, retinal atrophy and retinal detachment). RESULTS Models trained with the 400 available training images achieved an AUC of 0.9867 for PM detection and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively. CONCLUSIONS We report a successful approach for simultaneous classification of pathological myopia and segmentation of associated lesions. Our work was acknowledged with an award in the context of the "Pathological Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and negatives in glaucoma deep learning models, we envisage that the current work could aid future research to discriminate between glaucomatous and highly myopic eyes, complemented by the localization and segmentation of landmarks such as the fovea, optic disc and atrophy.
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium.
| | - Bart Elen
- VITO NV, Boeretang 200, 2400 Mol, Belgium
| | | | - Julie Jacob
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
| | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590 Diepenbeek, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
| |
Collapse
|
130
|
Ludwig T, Oukid I, Wong J, Ting S, Huysentruyt K, Roy P, Foussat AC, Vandenplas Y. Machine Learning Supports Automated Digital Image Scoring of Stool Consistency in Diapers. J Pediatr Gastroenterol Nutr 2021; 72:255-261. [PMID: 33275399 PMCID: PMC7815249 DOI: 10.1097/mpg.0000000000003007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 10/09/2020] [Indexed: 12/12/2022]
Abstract
BACKGROUND/AIMS Accurate stool consistency classification of non-toilet-trained children remains challenging. This study evaluated the feasibility of automated classification of stool consistencies from diaper photos using machine learning (ML). METHODS In total, 2687 usable smartphone photos of diapers with stool from 96 children younger than 24 months were obtained after independent ethical study approval. Stool consistency was assessed from each photo according to the original 7 types of the Brussels Infant and Toddler Stool Scale independently by study participants and 2 researchers. A health care professional assigned a final score in case of scoring disagreement between the researchers. A proof-of-concept ML model was built upon this collected photo database, using transfer learning to re-train the classification layer of a pretrained deep convolutional neural network model. The model was built on random training (n = 2478) and test (n = 209) subsets. RESULTS Agreements between study participants and both researchers were 58.0% and 48.5%, respectively, and between researchers 77.5% (assessable n = 2366). The model classified 60.3% of the test photos in exact agreement with the final score. With respect to the 4-class grouping of the 7 Brussels Infant and Toddler Stool Scale types, the agreement between model-based and researcher classification was 77.0%. CONCLUSION The automated and objective scoring of stool consistency from diaper photos by the ML model shows robust agreement with human raters and overcomes limitations of other methods relying on caregiver reporting. Integrated with a smartphone application, this new framework for photo database construction and ML classification has numerous potential applications in clinical studies and home assessment.
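The inter-rater agreement percentages reported above (58.0%, 48.5%, 77.5%) are exact-match agreements between two sets of categorical scores. A minimal illustrative helper, purely an assumption about how such a figure is computed and not code from the study:

```python
def exact_agreement(scores_a, scores_b):
    """Percentage of items on which two raters assign the identical score."""
    assert len(scores_a) == len(scores_b), "raters must score the same items"
    matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return 100.0 * matches / len(scores_a)

# Two raters agree on 3 of 4 hypothetical photos:
print(exact_agreement([1, 2, 3, 4], [1, 2, 0, 4]))  # -> 75.0
```

Exact agreement ignores how far apart disagreeing scores are; chance-corrected statistics such as Cohen's kappa are often reported alongside it for ordinal scales like this one.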
Collapse
Affiliation(s)
- Thomas Ludwig
- Danone Nutricia Research, Precision Nutrition D-lab, Biopolis, Singapore
| | | | - Jill Wong
- Danone Nutricia Research, Precision Nutrition D-lab, Biopolis, Singapore
| | - Steven Ting
- Danone Nutricia Research, Precision Nutrition D-lab, Biopolis, Singapore
| | - Koen Huysentruyt
- KidZ Health Castle, UZ Brussel, Vrije Universiteit Brussel, Brussels, Belgium
| | - Puspita Roy
- Danone Nutricia Research, Precision Nutrition D-lab, Biopolis, Singapore
| | - Agathe C. Foussat
- Danone Nutricia Research, Precision Nutrition D-lab, Biopolis, Singapore
| | - Yvan Vandenplas
- KidZ Health Castle, UZ Brussel, Vrije Universiteit Brussel, Brussels, Belgium
| |
Collapse
|
131
|
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 81] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 01/12/2021] [Indexed: 02/06/2023]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Owing to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. A review summarizing recent developments in deep learning for fundus images is therefore timely. In this review, we introduce 143 application papers organized in a carefully designed hierarchy, and present 33 publicly available datasets. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are discussed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to keep pace with the rapid development of this field.
Collapse
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
| | - Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
| | - Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
| |
Collapse
132
Milea D, Singhal S, Najjar RP. Artificial intelligence for detection of optic disc abnormalities. Curr Opin Neurol 2021; 33:106-110. [PMID: 31789676] [DOI: 10.1097/wco.0000000000000773]
Abstract
PURPOSE OF REVIEW The aim of this review is to highlight novel artificial intelligence-based methods for the detection of optic disc abnormalities, with a particular focus on neurology and neuro-ophthalmology. RECENT FINDINGS Methods for the detection of optic disc abnormalities on retinal fundus images have evolved considerably over the last few years, from classical ophthalmoscopy to artificial intelligence-based identification methods applied to retinal imaging with the aim of predicting sight- and life-threatening complications of underlying brain or optic nerve conditions. SUMMARY Artificial intelligence, and in particular newly developed deep-learning systems, is playing an increasingly important role in the detection and classification of acquired neuro-ophthalmic optic disc abnormalities on ocular fundus images. The implementation of automatic deep-learning methods for detection of abnormal optic discs, coupled with innovative hardware solutions for fundus imaging, could revolutionize the practice of neurologists and other non-ophthalmic healthcare providers.
Affiliation(s)
- Dan Milea
- Singapore National Eye Centre; Singapore Eye Research Institute; Duke-NUS Medical School, Singapore
- Shweta Singhal
- Singapore National Eye Centre; Singapore Eye Research Institute; Duke-NUS Medical School, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Raymond P Najjar
- Singapore Eye Research Institute; Duke-NUS Medical School, Singapore
133
Tham YC, Anees A, Zhang L, Goh JHL, Rim TH, Nusinovici S, Hamzah H, Chee ML, Tjio G, Li S, Xu X, Goh R, Tang F, Cheung CYL, Wang YX, Nangia V, Jonas JB, Gopinath B, Mitchell P, Husain R, Lamoureux E, Sabanayagam C, Wang JJ, Aung T, Liu Y, Wong TY, Cheng CY. Referral for disease-related visual impairment using retinal photograph-based deep learning: a proof-of-concept, model development study. Lancet Digit Health 2021; 3:e29-e40. [DOI: 10.1016/s2589-7500(20)30271-5]
134
Li Z, Jiang J, Zhou H, Zheng Q, Liu X, Chen K, Weng H, Chen W. Development of a deep learning-based image eligibility verification system for detecting and filtering out ineligible fundus images: A multicentre study. Int J Med Inform 2020; 147:104363. [PMID: 33388480] [DOI: 10.1016/j.ijmedinf.2020.104363]
Abstract
BACKGROUND Recent advances in artificial intelligence (AI) have shown great promise in detecting some diseases based on medical images. Most studies developed AI diagnostic systems only using eligible images. However, in real-world settings, ineligible images (including poor-quality and poor-location images) that can compromise downstream analysis are inevitable, leading to uncertainty about the performance of these AI systems. This study aims to develop a deep learning-based image eligibility verification system (DLIEVS) for detecting and filtering out ineligible fundus images. METHODS A total of 18,031 fundus images (9,188 subjects) collected from 4 clinical centres were used to develop and evaluate the DLIEVS for detecting eligible, poor-location, and poor-quality fundus images. Four deep learning algorithms (AlexNet, DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best model for the DLIEVS. The performance of the DLIEVS was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard determined by retina experts. RESULTS In the internal test dataset, the best algorithm (DenseNet121) achieved AUCs of 1.000, 0.999, and 1.000 for the classification of eligible, poor-location, and poor-quality images, respectively. In the external test datasets, the AUCs of the best algorithm (DenseNet121) for detecting eligible, poor-location, and poor-quality images ranged from 0.999 to 1.000, 0.997 to 1.000, and 0.997 to 0.999, respectively. CONCLUSIONS Our DLIEVS can accurately discriminate poor-quality and poor-location images from eligible images. This system has the potential to serve as a pre-screening technique to filter out ineligible images obtained from real-world settings, ensuring that only eligible images are used in subsequent image-based AI diagnostic analyses.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang
- School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Heding Zhou
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Kuan Chen
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
135
Mirzania D, Thompson AC, Muir KW. Applications of deep learning in detection of glaucoma: A systematic review. Eur J Ophthalmol 2020; 31:1618-1642. [PMID: 33274641] [DOI: 10.1177/1120672120977346]
Abstract
Glaucoma is the leading cause of irreversible blindness and disability worldwide. Nevertheless, the majority of patients do not know they have the disease and detection of glaucoma progression using standard technology remains a challenge in clinical practice. Artificial intelligence (AI) is an expanding field that offers the potential to improve diagnosis and screening for glaucoma with minimal reliance on human input. Deep learning (DL) algorithms have risen to the forefront of AI by providing nearly human-level performance, at times exceeding the performance of humans for detection of glaucoma on structural and functional tests. A succinct summary of present studies and challenges to be addressed in this field is needed. Following PRISMA guidelines, we conducted a systematic review of studies that applied DL methods for detection of glaucoma using color fundus photographs, optical coherence tomography (OCT), or standard automated perimetry (SAP). In this review article we describe recent advances in DL as applied to the diagnosis of glaucoma and glaucoma progression for application in screening and clinical settings, as well as the challenges that remain when applying this novel technique in glaucoma.
Affiliation(s)
- Atalie C Thompson
- Duke University School of Medicine, Durham, NC, USA; Durham VA Medical Center, Durham, NC, USA
- Kelly W Muir
- Duke University School of Medicine, Durham, NC, USA; Durham VA Medical Center, Durham, NC, USA
136
Sun J, Huang X, Egwuagu C, Badr Y, Dryden SC, Fowler BT, Yousefi S. Identifying Mouse Autoimmune Uveitis from Fundus Photographs Using Deep Learning. Transl Vis Sci Technol 2020; 9:59. [PMID: 33294300] [PMCID: PMC7718814] [DOI: 10.1167/tvst.9.2.59]
Abstract
Purpose To develop a deep learning model for objective evaluation of experimental autoimmune uveitis (EAU), the animal model of posterior uveitis that reveals its essential pathological features via fundus photographs. Methods We developed a deep learning construct to identify uveitis using reference mouse fundus images and further categorized the severity levels of disease into mild and severe EAU. We evaluated the performance of the model using the area under the receiver operating characteristic curve (AUC) and confusion matrices. We further assessed the clinical relevance of the model by visualizing the principal components of features at different layers and through the use of gradient-weighted class activation maps, which presented retinal regions having the most significant influence on the model. Results Our model was trained, validated, and tested on 1500 fundus images (training, 1200; validation, 150; testing, 150) and achieved an average AUC of 0.98 for identifying the normal, trace (small and local lesions), and disease classes (large and spreading lesions). The AUCs of the model using an independent subset with 180 images were 1.00 (95% confidence interval [CI], 0.99-1.00), 0.97 (95% CI, 0.94-0.99), and 0.96 (95% CI, 0.90-1.00) for the normal, trace and disease classes, respectively. Conclusions The proposed deep learning model is able to identify three severity levels of EAU with high accuracy. The model also achieved high accuracy on independent validation subsets, reflecting a substantial degree of generalizability. Translational Relevance The proposed model represents an important new tool for use in animal medical research and provides a step toward clinical uveitis identification in clinical practice.
Affiliation(s)
- Jian Sun
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Xiaoqin Huang
- The Pennsylvania State University Great Valley, Malvern, PA, USA
- Charles Egwuagu
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Youakim Badr
- The Pennsylvania State University Great Valley, Malvern, PA, USA
- Siamak Yousefi
- University of Tennessee Health Science Center, Memphis, TN, USA
137
Yoo TK, Ryu IH, Kim JK, Lee IS, Kim JS, Kim HK, Choi JY. Deep learning can generate traditional retinal fundus photographs using ultra-widefield images via generative adversarial networks. Comput Methods Programs Biomed 2020; 197:105761. [PMID: 32961385] [DOI: 10.1016/j.cmpb.2020.105761]
Abstract
BACKGROUND AND OBJECTIVE Retinal imaging has two major modalities, traditional fundus photography (TFP) and ultra-widefield fundus photography (UWFP). This study demonstrates the feasibility of a state-of-the-art deep learning-based domain transfer from UWFP to TFP. METHODS A cycle-consistent generative adversarial network (CycleGAN) was used to automatically translate the UWFP to the TFP domain. The model was based on an unpaired dataset including 451 anonymized UWFP and 745 TFP images. To apply CycleGAN to an independent dataset, we randomly divided the data into training (90%) and test (10%) datasets. After automated image registration and masking of dark frames, the generator and discriminator networks were trained. Twelve additional publicly available paired TFP and UWFP images were used to calculate the intensity histograms and structural similarity (SSIM) indices. RESULTS We observed that all UWFP images were successfully translated into TFP-style images by CycleGAN, and the main structural information of the retina and optic nerve was retained. The model did not generate fake features in the output images. Average histograms demonstrated that the intensity distribution of the generated output images provided a good match to the ground truth images, with an average SSIM level of 0.802. CONCLUSIONS Our approach enables automated synthesis of TFP images directly from UWFP without a manual pre-conditioning process. The generated TFP images might be useful for clinicians in investigating the posterior pole and for researchers in integrating TFP and UWFP databases. This is also likely to save scan time and will be more cost-effective for patients by avoiding additional examinations for an accurate diagnosis.
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, Ohio, United States
138
Shekhawat NS, Niziol LM, Sharma SS, Joseph S, Robin AL, Gillespie BW, Musch DC, Woodward MA, Venkatesh R. The Utility of Routine Fundus Photography Screening for Posterior Segment Disease: A Stepped-wedge, Cluster-randomized Trial in South India. Ophthalmology 2020; 128:1060-1069. [PMID: 33253756] [DOI: 10.1016/j.ophtha.2020.11.025]
Abstract
PURPOSE To assess whether routine fundus photography (RFP) to screen for posterior segment disease at community eye clinics (vision centers [VCs]) in India increases referral to centralized ophthalmic care. DESIGN Stepped-wedge, cluster-randomized trial. PARTICIPANTS Patients aged 40 to 75 years and those aged 20 to 40 years with a known history of hypertension or diabetes mellitus presenting to 4 technician-run VCs associated with the Aravind Eye Care System in India. METHODS VCs (clusters) were randomized to standard care or RFP across five 2-week study periods (steps). Patients in each cluster received standard care initially. At the start of each subsequent step, a randomly chosen cluster crossed over to providing RFP to eligible patients. All clusters took part in RFP during the last step. Standard care involved technician eye exams, optional fundus photography, and teleconsultation with an ophthalmologist. RFP involved eye exams, dilation and 40-degree fundus photography, and teleconsultation with an ophthalmologist. MAIN OUTCOME MEASURES Standard care and RFP clusters were compared by the proportion of patients referred for in-person evaluation by an ophthalmologist because of fundus photography findings and urgency of referral (urgently in ≤ 2 weeks vs. nonurgently in > 2 weeks). Generalized linear mixed models adjusting for cluster and step were used to estimate the odds of referral due to fundus photography findings compared with standard care. RESULTS A total of 1447 patients were enrolled across the VCs, including 737 in the standard care group and 710 in the RFP group. Compared with standard care, the RFP group had a higher proportion of referrals due to fundus photography findings (11.3% vs. 4.4%), nonurgent referrals due to fundus photography (9.3% vs. 3.3%), and urgent referrals due to fundus photography (1.8% vs. 1.1%). The RFP intervention was associated with a 2-fold increased odds of being referred because of photography findings compared with standard care (odds ratio, 2.07; 95% confidence interval, 0.98-4.40; P = 0.058). CONCLUSIONS Adding RFP to community eye clinics was associated with an increased odds of referral compared with standard care. This increase in referral was mostly due to nonurgent posterior segment disease.
Affiliation(s)
- Nakul S Shekhawat
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan; Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Leslie M Niziol
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan
- Sanil Joseph
- Lions Aravind Institute of Community Ophthalmology, Madurai, Tamil Nadu, India
- Alan L Robin
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan
- Brenda W Gillespie
- Department of Epidemiology, University of Michigan, Ann Arbor, Michigan; Department of Biostatistics, University of Michigan, Ann Arbor, Michigan
- David C Musch
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan; Department of Epidemiology, University of Michigan, Ann Arbor, Michigan
- Maria A Woodward
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan
139
Schuster AK, Wagner FM, Pfeiffer N, Hoffmann EM. [Risk factors for open-angle glaucoma and recommendations for glaucoma screening]. Ophthalmologe 2020; 117:1149-1160. [PMID: 33095295] [DOI: 10.1007/s00347-020-01251-x]
Abstract
Open-angle glaucomas are a group of chronic progressive optic nerve neuropathies with a gonioscopic open anterior chamber angle. They are one of the main causes of visual impairment and blindness in industrialized countries. The aim of this article is to discuss and evaluate the epidemiology and risk factors for the development of open-angle glaucoma and to present the screening procedure for open-angle glaucoma according to the recently published S2e guidelines of the Association of the Scientific Medical Societies in Germany (AWMF).
Affiliation(s)
- Alexander K Schuster
- Augenklinik und Poliklinik, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Felix M Wagner
- Augenklinik und Poliklinik, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Norbert Pfeiffer
- Augenklinik und Poliklinik, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Esther M Hoffmann
- Augenklinik und Poliklinik, Universitätsmedizin Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
140
Li F, Shi JX, Yan L, Wang YG, Zhang XD, Jiang MS, Wu ZZ, Zhou KQ. Lesion-aware convolutional neural network for chest radiograph classification. Clin Radiol 2020; 76:155.e1-155.e14. [PMID: 33077154] [DOI: 10.1016/j.crad.2020.08.027]
Abstract
AIM To investigate the performance of a deep-learning approach termed lesion-aware convolutional neural network (LACNN) to identify 14 different thoracic diseases on chest X-rays (CXRs). MATERIALS AND METHODS In total, 10,738 CXRs of 3,526 patients were collected retrospectively. Of these, 1,937 CXRs of 598 patients were selected for training and optimising the lesion-detection network (LDN) of LACNN. The remaining 8,801 CXRs from 2,928 patients were used to train and test the classification network of LACNN. The discriminative performance of the deep-learning approach was compared with that obtained by the radiologists. In addition, its generalisation was validated on the independent public dataset, ChestX-ray14. The decision-making process of the model was visualised by occlusion testing, and the effect of the integration of CXRs and non-image data on model performance was also investigated. In a systematic evaluation, F1 score, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) metrics were calculated. RESULTS The model generated statistically significantly higher AUC performance compared with radiologists on atelectasis, mass, and nodule, with AUC values of 0.831 (95% confidence interval [CI]: 0.807-0.855), 0.959 (95% CI: 0.944-0.974), and 0.928 (95% CI: 0.906-0.950), respectively. For the other 11 pathologies, there were no statistically significant differences. The average time to complete each CXR classification in the testing dataset was substantially longer for the radiologists (∼35 seconds) than for the LACNN (∼0.197 seconds). In the ChestX-ray14 dataset, the present model also showed competitive performance in comparison with other state-of-the-art deep-learning approaches. Model performance was slightly improved when introducing non-image data. CONCLUSION The proposed LACNN achieved radiologist-level performance in identifying thoracic diseases on CXRs, and could potentially expand patient access to CXR diagnostics.
Affiliation(s)
- F Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- J-X Shi
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- L Yan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Y-G Wang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- X-D Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- M-S Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Z-Z Wu
- Department of Precision Mechanical Engineering, Shanghai University, Shanghai, China
- K-Q Zhou
- Liver Cancer Institute, Zhongshan Hospital, Shanghai, China
141
Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, Sousa DC, Arnold L, Schlenker MB, Duenas-Angeles K, Keane PA, Crowston JG, Jayaram H. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl Vis Sci Technol 2020; 9:55. [PMID: 33117612] [PMCID: PMC7571273] [DOI: 10.1167/tvst.9.2.55]
Abstract
Purpose This concise review aims to explore the potential for the clinical implementation of artificial intelligence (AI) strategies for detecting glaucoma and monitoring glaucoma progression. Methods Nonsystematic literature review using the search combinations “Artificial Intelligence,” “Deep Learning,” “Machine Learning,” “Neural Networks,” “Bayesian Networks,” “Glaucoma Diagnosis,” and “Glaucoma Progression.” Information on sensitivity and specificity regarding glaucoma diagnosis and progression analysis as well as methodological details were extracted. Results Numerous AI strategies provide promising levels of specificity and sensitivity for the structural (e.g. optical coherence tomography [OCT] imaging, fundus photography) and functional (visual field [VF] testing) test modalities used for the detection of glaucoma. Area under the receiver operating characteristic curve (AUROC) values of > 0.90 were achieved with every modality. Combining structural and functional inputs has been shown to further improve diagnostic ability. Regarding glaucoma progression, AI strategies can detect progression earlier than conventional methods, or potentially from a single VF test. Conclusions AI algorithms applied to fundus photographs for screening purposes may provide good results using a simple and widely accessible test. However, for patients who are likely to have glaucoma, more sophisticated methods should be used, including data from OCT and perimetry. Outputs may serve as an adjunct to assist clinical decision making, while also enhancing the efficiency, productivity, and quality of the delivery of glaucoma care. Patients with diagnosed glaucoma may benefit from future algorithms to evaluate their risk of progression. Challenges are yet to be overcome, including the external validity of AI strategies, a move from a “black box” toward “explainable AI,” and likely regulatory hurdles. However, it is clear that AI can enhance the role of specialist clinicians and will inevitably shape the future of the delivery of glaucoma care to the next generation. Translational Relevance The promising levels of diagnostic accuracy reported by AI strategies across the modalities used in clinical practice for glaucoma detection can pave the way for the development of reliable models appropriate for their translation into clinical practice. Future incorporation of AI into healthcare models may help address the current limitations of access and timely management of patients with glaucoma across the world.
Affiliation(s)
- Wai Siene Ng
- Cardiff Eye Unit, University Hospital of Wales, Cardiff, UK
- Alberto Diniz-Filho
- Department of Ophthalmology and Otorhinolaryngology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- David C Sousa
- Department of Ophthalmology, Hospital de Santa Maria, Lisbon, Portugal
- Louis Arnold
- Department of Ophthalmology, University Hospital, Dijon, France
- Matthew B Schlenker
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Canada
- Karla Duenas-Angeles
- Department of Ophthalmology, Universidad Nacional Autónoma de Mexico, Mexico City, Mexico
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
- Jonathan G Crowston
- Centre for Vision Research, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Hari Jayaram
- NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
142
Deep learning in glaucoma with optical coherence tomography: a review. Eye (Lond) 2020; 35:188-201. [PMID: 33028972] [DOI: 10.1038/s41433-020-01191-5]
Abstract
Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has made significant breakthroughs in medical imaging, particularly for image classification and pattern recognition. In ophthalmology, applying DL for glaucoma assessment with optical coherence tomography (OCT), including OCT traditional reports, two-dimensional (2D) B-scans, and three-dimensional (3D) volumetric scans, has increasingly raised research interest. Studies have demonstrated that using DL for interpreting OCT is efficient and accurate, with good performance in discriminating glaucomatous eyes from normal eyes, suggesting that incorporation of DL technology in OCT for glaucoma assessment could potentially address some gaps in the current practice and clinical workflow. However, further research is crucial in tackling some existing challenges, such as annotation standardization (i.e., setting a standard for ground truth labelling among different studies), development of DL-powered IT infrastructure for real-world implementation, prospective validation in unseen datasets for further evaluation of generalizability, cost-effectiveness analysis after integration of DL, and the AI "black box" explanation problem. This review summarizes recent studies on the application of DL to OCT for glaucoma assessment, identifies the potential clinical impact arising from the development and deployment of the DL models, and discusses future research directions.
143
Li Z, Guo C, Lin D, Nie D, Zhu Y, Chen C, Zhao L, Wang J, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Han Y, Lin H. Deep learning for automated glaucomatous optic neuropathy detection from ultra-widefield fundus images. Br J Ophthalmol 2020; 105:1548-1554. [DOI: 10.1136/bjophthalmol-2020-317327]
Abstract
Background/Aims To develop a deep learning system for automated glaucomatous optic neuropathy (GON) detection using ultra-widefield fundus (UWF) images. Methods We trained, validated and externally evaluated a deep learning system for GON detection based on 22,972 UWF images from 10,590 subjects that were collected at 4 different institutions in China and Japan. The InceptionResNetV2 neural network architecture was used to develop the system. The area under the receiver operating characteristic curve (AUC), sensitivity and specificity were used to assess the performance of the system in detecting GON. The dataset from the Zhongshan Ophthalmic Center (ZOC) was selected to compare the performance of the system to that of ophthalmologists who mainly conducted UWF image analysis in clinics. Results The system for GON detection achieved AUCs of 0.983–0.999 with sensitivities of 97.5–98.2% and specificities of 94.3–98.4% in four independent datasets. The most common reason for false-negative results was confounding optic disc characteristics caused by high myopia or pathological myopia (n=39 (53%)). The leading cause of false-positive results was the presence of other fundus lesions (n=401 (96%)). The performance of the system in the ZOC dataset was comparable to that of an experienced ophthalmologist (p>0.05). Conclusion Our deep learning system can accurately detect GON from UWF images in an automated fashion. It may be used as a screening tool to improve the accessibility of screening and promote the early diagnosis and management of glaucoma.
144
Kuo MT, Hsu BWY, Yin YK, Fang PC, Lai HY, Chen A, Yu MS, Tseng VS. A deep learning approach in diagnosing fungal keratitis based on corneal photographs. Sci Rep 2020; 10:14424. [PMID: 32879364] [PMCID: PMC7468230] [DOI: 10.1038/s41598-020-71425-9]
Abstract
Fungal keratitis (FK) is the most devastating and vision-threatening form of microbial keratitis, but its clinical diagnosis remains a great challenge. This study aimed to develop and verify a deep learning (DL)-based corneal photograph model for diagnosing FK. Corneal photos of laboratory-confirmed microbial keratitis were consecutively collected from a single referral center. A DL framework with the DenseNet architecture was used to automatically recognize FK from the photos. For comparison with the DL-based model, diagnoses of FK from corneal photographs were made in the NCS-Oph and Expert groups through a majority decision of three non-corneal-specialty ophthalmologists and three corneal specialists, respectively. The DL model's average sensitivity, specificity, positive predictive value, and negative predictive value were approximately 71%, 68%, 60%, and 78%. Its sensitivity was higher than that of the NCS-Oph group (52%, P < .01), whereas its specificity was lower (83%, P < .01). Its average accuracy of around 70% was comparable with that of the NCS-Oph group. Therefore, the sensitive DL-based diagnostic model is a promising tool for early identification of FK, improving first-line medical care in rural areas.
Affiliation(s)
- Ming-Tse Kuo
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No.123, Dapi Rd., Niaosong Dist., Kaohsiung, 833, Taiwan, ROC.
- Benny Wei-Yun Hsu
- Department of Computer Science, National Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu, 300, Taiwan, ROC
- Yu-Kai Yin
- Department of Computer Science, National Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu, 300, Taiwan, ROC
- Po-Chiung Fang
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No.123, Dapi Rd., Niaosong Dist., Kaohsiung, 833, Taiwan, ROC
- Hung-Yin Lai
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No.123, Dapi Rd., Niaosong Dist., Kaohsiung, 833, Taiwan, ROC
- Alexander Chen
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No.123, Dapi Rd., Niaosong Dist., Kaohsiung, 833, Taiwan, ROC
- Meng-Shan Yu
- Department of Ophthalmology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No.123, Dapi Rd., Niaosong Dist., Kaohsiung, 833, Taiwan, ROC
- Vincent S Tseng
- Department of Computer Science, National Chiao Tung University, No. 1001, Daxue Rd., East Dist., Hsinchu, 300, Taiwan, ROC
145
Interpretation of artificial intelligence studies for the ophthalmologist. Curr Opin Ophthalmol 2020; 31:351-356. [PMID: 32740068 DOI: 10.1097/icu.0000000000000695]
Abstract
PURPOSE OF REVIEW The use of artificial intelligence (AI) in ophthalmology has increased dramatically. However, interpretation of these studies can be a daunting prospect for the ophthalmologist without a background in computer or data science. This review aims to share some practical considerations for interpretation of AI studies in ophthalmology. RECENT FINDINGS It can be easy to get lost in the technical details of studies involving AI. Nevertheless, it is important for clinicians to remember that the fundamental questions in interpreting these studies remain unchanged - What does this study show, and how does this affect my patients? Being guided by familiar principles like study purpose, impact, validity, and generalizability, these studies become more accessible to the ophthalmologist. Although it may not be necessary for nondomain experts to understand the exact AI technical details, we explain some broad concepts in relation to AI technical architecture and dataset management. SUMMARY The expansion of AI into healthcare and ophthalmology is here to stay. AI systems have made the transition from bench to bedside, and are already being applied to patient care. In this context, 'AI education' is crucial for ophthalmologists to be confident in interpretation and translation of new developments in this field to their own clinical practice.
146
Yang HK, Kim YJ, Sung JY, Kim DH, Kim KG, Hwang JM. Efficacy for Differentiating Nonglaucomatous Versus Glaucomatous Optic Neuropathy Using Deep Learning Systems. Am J Ophthalmol 2020; 216:140-146. [PMID: 32247778 DOI: 10.1016/j.ajo.2020.03.035]
Abstract
PURPOSE We sought to assess the performance of deep learning approaches for differentiating nonglaucomatous optic neuropathy with disc pallor (NGON) vs glaucomatous optic neuropathy (GON) on color fundus photographs by the use of image recognition. DESIGN Development of an Artificial Intelligence Classification algorithm. METHODS This single-institution analysis included 3815 fundus images from the picture archiving and communication system of Seoul National University Bundang Hospital consisting of 2883 normal optic disc images, 446 NGON images, and 486 GON images. The presence of NGON and GON was interpreted by 2 expert neuro-ophthalmologists and had corroborated evidence on visual field testing and optical coherence tomography. Images were preprocessed in size and color enhancement before input. We applied the convolutional neural network (CNN) of ResNet-50 architecture. The area under the precision-recall curve (average precision) was evaluated for the efficacy of deep learning algorithms to assess the performance of classifying NGON and GON. RESULTS The diagnostic accuracy of the ResNet-50 model to detect GON among NGON images showed a sensitivity of 93.4% and specificity of 81.8%. The area under the precision-recall curve for differentiating NGON vs GON showed an average precision value of 0.874. False positive cases were found with extensive areas of peripapillary atrophy and tilted optic discs. CONCLUSION Artificial intelligence-based deep learning algorithms for detecting optic disc diseases showed excellent performance in differentiating NGON and GON on color fundus photographs, necessitating further research for clinical application.
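The "average precision" reported above is the area under the precision-recall curve. A minimal sketch of the step-wise AP definition (the ranked toy labels below are invented for illustration):

```python
def average_precision(labels, scores):
    """Area under the precision-recall curve via the step-wise AP definition:
    sum over positive ranks of precision@k times the recall gained at k."""
    ranked = sorted(zip(scores, labels), reverse=True)  # highest score first
    n_pos = sum(labels)
    tp, ap = 0, 0.0
    for k, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            tp += 1
            ap += (tp / k) * (1 / n_pos)  # precision@k * recall increment
    return ap

# A perfect ranking (all positives ahead of all negatives) yields AP = 1.0.
ap = average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.6])
```

Unlike AUC, average precision is sensitive to class imbalance, which is why it is a reasonable choice for a dataset where NGON and GON images are a small minority next to normal discs.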
147
Thompson AC, Jammal AA, Medeiros FA. A Review of Deep Learning for Screening, Diagnosis, and Detection of Glaucoma Progression. Transl Vis Sci Technol 2020; 9:42. [PMID: 32855846 PMCID: PMC7424906 DOI: 10.1167/tvst.9.2.42]
Abstract
Because of recent advances in computing technology and the availability of large datasets, deep learning has risen to the forefront of artificial intelligence, with performances that often equal, or sometimes even exceed, those of human subjects on a variety of tasks, especially those related to image classification and pattern recognition. As one of the medical fields that is highly dependent on ancillary imaging tests, ophthalmology has been in a prime position to witness the application of deep learning algorithms that can help analyze the vast amount of data coming from those tests. In particular, glaucoma stands as one of the conditions where application of deep learning algorithms could potentially lead to better use of the vast amount of information coming from structural and functional tests evaluating the optic nerve and macula. The purpose of this article is to critically review recent applications of deep learning models in glaucoma, discussing their advantages but also focusing on the challenges inherent to the development of such models for screening, diagnosis and detection of progression. After a brief general overview of deep learning and how it compares to traditional machine learning classifiers, we discuss issues related to the training and validation of deep learning models and how they specifically apply to glaucoma. We then discuss specific scenarios where deep learning has been proposed for use in glaucoma, such as screening with fundus photography, and diagnosis and detection of glaucoma progression with optical coherence tomography and standard automated perimetry. Translational Relevance Deep learning algorithms have the potential to significantly improve diagnostic capabilities in glaucoma, but their application in clinical practice requires careful validation, with consideration of the target population, the reference standards used to build the models, and potential sources of bias.
Affiliation(s)
- Atalie C Thompson
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
- Alessandro A Jammal
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
- Felipe A Medeiros
- Vision, Imaging and Performance Laboratory (VIP), Duke Eye Center, Duke University, Durham, NC, USA
148
Kim KE, Kim JM, Song JE, Kee C, Han JC, Hyun SH. Development and Validation of a Deep Learning System for Diagnosing Glaucoma Using Optical Coherence Tomography. J Clin Med 2020; 9:E2167. [PMID: 32659918 PMCID: PMC7408821 DOI: 10.3390/jcm9072167]
Abstract
This study aimed to develop and validate a deep learning system for diagnosing glaucoma using optical coherence tomography (OCT). A training set of 1822 eyes (332 control, 1490 glaucoma) with 7288 OCT images, an internal validation set of 425 eyes (104 control, 321 glaucoma) with 1700 images, and an external validation set of 355 eyes (108 control, 247 glaucoma) with 1420 images were included. Deviation and thickness maps of retinal nerve fiber layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) analyses were used to develop the deep learning system for glaucoma diagnosis based on the visual geometry group deep convolutional neural network (VGG-19) model. The diagnostic abilities of deep learning models using different OCT maps were evaluated, and the best model was compared with the diagnostic results produced by two glaucoma specialists. The glaucoma-diagnostic ability was highest when the deep learning system used the RNFL thickness map alone (area under the receiver operating characteristic curve (AUROC) 0.987), followed by the RNFL deviation map (AUROC 0.974), the GCIPL thickness map (AUROC 0.966), and the GCIPL deviation map (AUROC 0.903). Among combination sets, use of the RNFL and GCIPL deviation map showed the highest diagnostic ability, showing similar results when tested via an external validation dataset. The inclusion of the axial length did not significantly affect the diagnostic performance of the deep learning system. The location of glaucomatous damage showed generally high level of agreement between the heatmap and the diagnosis of glaucoma specialists, with 90.0% agreement when using the RNFL thickness map and 88.0% when using the GCIPL thickness map. In conclusion, our deep learning system showed high glaucoma-diagnostic abilities using OCT thickness and deviation maps. 
It also showed detection patterns similar to those of glaucoma specialists, showing promising results for future clinical application as an interpretable computer-aided diagnosis.
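When a study combines several OCT maps (e.g. an RNFL and a GCIPL map) as one CNN input, the usual approach is to stack the same-sized 2-D maps along a channel axis, analogous to the RGB channels of a photograph. A minimal stdlib-only sketch using nested lists in place of real image arrays (the 2×2 "maps" are toy values):

```python
def stack_channels(*maps):
    """Stack same-sized 2-D maps (nested lists) into an H x W x C structure,
    the usual layout for feeding multiple co-registered maps to a CNN."""
    h, w = len(maps[0]), len(maps[0][0])
    assert all(len(m) == h and all(len(row) == w for row in m) for m in maps)
    return [[[m[i][j] for m in maps] for j in range(w)] for i in range(h)]

rnfl_thickness = [[0.1, 0.2], [0.3, 0.4]]   # toy 2x2 "RNFL thickness map"
gcipl_thickness = [[0.5, 0.6], [0.7, 0.8]]  # toy 2x2 "GCIPL thickness map"
x = stack_channels(rnfl_thickness, gcipl_thickness)
# each pixel of x now holds one value per input map (two channels here)
```

In a real pipeline this stacking would be done with an array library and the stacked tensor passed to the network; whether a given study fed maps jointly or trained separate per-map models is described in its methods.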
Affiliation(s)
- Ko Eun Kim
- Department of Ophthalmology, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Korea;
- Joon Mo Kim
- Department of Ophthalmology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul 03181, Korea
- Ji Eun Song
- Department of Ophthalmology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul 03181, Korea
- Changwon Kee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Jong Chul Han
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Institute of Biomedical Artificial Intelligence, SAIHST, Sungkyunkwan University, Seoul 06351, Korea
- Seung Hyup Hyun
- Department of Nuclear Medicine, Medical AI Research Lab, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
149
Tom E, Keane PA, Blazes M, Pasquale LR, Chiang MF, Lee AY, Lee CS. Protecting Data Privacy in the Age of AI-Enabled Ophthalmology. Transl Vis Sci Technol 2020; 9:36. [PMID: 32855840 PMCID: PMC7424948 DOI: 10.1167/tvst.9.2.36]
Affiliation(s)
- Elysse Tom
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Pearse A Keane
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Louis R Pasquale
- Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Michael F Chiang
- Departments of Ophthalmology and Medical Informatics & Clinical Epidemiology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
150
He M, Li Z, Liu C, Shi D, Tan Z. Deployment of Artificial Intelligence in Real-World Practice: Opportunity and Challenge. Asia Pac J Ophthalmol (Phila) 2020; 9:299-307. [PMID: 32694344 DOI: 10.1097/apo.0000000000000301]
Abstract
Artificial intelligence has rapidly evolved from the experimental phase to the implementation phase in many image-driven clinical disciplines, including ophthalmology. A combination of the increasing availability of large datasets and computing power with revolutionary progress in deep learning has created unprecedented opportunities for major breakthrough improvements in the performance and accuracy of automated diagnoses that primarily focus on image recognition and feature detection. Such an automated disease classification would significantly improve the accessibility, efficiency, and cost-effectiveness of eye care systems where it is less dependent on human input, potentially enabling diagnosis to be cheaper, quicker, and more consistent. Although this technology will have a profound impact on clinical flow and practice patterns sooner or later, translating such a technology into clinical practice is challenging and requires similar levels of accountability and effectiveness as any new medication or medical device due to the potential problems of bias, and ethical, medical, and legal issues that might arise. The objective of this review is to summarize the opportunities and challenges of this transition and to facilitate the integration of artificial intelligence (AI) into routine clinical practice based on our best understanding and experience in this area.
Affiliation(s)
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science, University of Technology Sydney, Ultimo NSW, Australia
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Schwarzman College, Tsinghua University, Beijing, China