1. Domalpally A, Slater R, Linderman RE, Balaji R, Bogost J, Voland R, Pak J, Blodi BA, Channa R, Fong D, Chew EY. Strong versus Weak Data Labeling for Artificial Intelligence Algorithms in the Measurement of Geographic Atrophy. Ophthalmology Science 2024;4:100477. PMID: 38827491; PMCID: PMC11141255; DOI: 10.1016/j.xops.2024.100477.
Abstract
Purpose: To understand the data labeling requirements for training deep learning models to measure geographic atrophy (GA) on fundus autofluorescence (FAF) images.
Design: Evaluation of artificial intelligence (AI) algorithms.
Subjects: Age-Related Eye Disease Study 2 (AREDS2) images were used for training and cross-validation; GA clinical trial images were used for testing.
Methods: Training data consisted of 2 sets of FAF images: 1 with area measurements only and no indication of GA location (Weakly labeled), and a second with GA segmentation masks (Strongly labeled).
Main Outcome Measures: Bland-Altman plots and scatter plots were used to compare GA area between ground truth and AI measurements; the Dice coefficient was used to assess segmentation accuracy of the Strong model.
Results: In the cross-validation AREDS2 dataset (n = 601), the mean (standard deviation [SD]) GA area measured by human grader, Weakly labeled AI model, and Strongly labeled AI model was 6.65 (6.3) mm2, 6.83 (6.29) mm2, and 6.58 (6.24) mm2, respectively. The mean difference between ground truth and AI was 0.18 mm2 (95% confidence interval [CI], -7.57 to 7.92) for the Weakly labeled model and -0.07 mm2 (95% CI, -1.61 to 1.47) for the Strongly labeled model. With GlaxoSmithKline testing data (n = 156), the mean (SD) GA area was 9.79 (5.6) mm2, 8.82 (4.61) mm2, and 9.55 (5.66) mm2 for human grader, Strongly labeled AI model, and Weakly labeled AI model, respectively; the mean difference between ground truth and AI for the 2 models was -0.97 mm2 (95% CI, -4.36 to 2.41) and -0.24 mm2 (95% CI, -4.98 to 4.49), respectively. The Dice coefficient was 0.99 for intergrader agreement, 0.89 for the cross-validation data, and 0.92 for the testing data.
Conclusions: Deep learning models can achieve reasonable accuracy even with Weakly labeled data. Training methods that integrate large volumes of Weakly labeled images with a small number of Strongly labeled images offer a promising way to reduce the cost and time of data labeling.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
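The Bland-Altman agreement analysis used in this study reduces to a mean difference (bias) between the two raters and 95% limits of agreement around it. A minimal sketch in Python, using toy GA areas rather than the study's data:

```python
from statistics import mean, stdev

def bland_altman(ground_truth, predicted):
    """Mean difference (bias) and 95% limits of agreement between two raters."""
    diffs = [p - g for g, p in zip(ground_truth, predicted)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical GA areas in mm^2 (illustrative values only)
grader = [6.1, 7.4, 5.0, 9.2, 6.8]
model = [6.3, 7.1, 5.4, 9.0, 7.0]
bias, limits = bland_altman(grader, model)
```

A bias near zero with narrow limits, as reported for the Strongly labeled model, indicates close agreement with the grader.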
Affiliation(s)
- Amitha Domalpally
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Robert Slater
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rachel E. Linderman
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rohit Balaji
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jacob Bogost
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Rick Voland
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Jeong Pak
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Barbara A. Blodi
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Roomasa Channa
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin
- Emily Y. Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
2. Chang P, von der Emde L, Pfau M, Künzel S, Fleckenstein M, Schmitz-Valckenberg S, Holz FG. [Use of artificial intelligence in geographic atrophy in age-related macular degeneration]. Die Ophthalmologie 2024;121:616-622. PMID: 39083094; DOI: 10.1007/s00347-024-02080-y.
Abstract
The first regulatory approval in the USA of a treatment for geographic atrophy (GA) secondary to age-related macular degeneration (AMD) is an important milestone. However, because GA is a non-acute, insidiously progressing pathology, the ophthalmologist faces specific challenges in risk stratification, treatment decisions, treatment monitoring, and patient education. Innovative retinal imaging modalities such as fundus autofluorescence (FAF) and optical coherence tomography (OCT) have enabled the identification of typical morphological alterations associated with GA, which are also suitable for its quantitative characterization. Solutions based on artificial intelligence (AI) enable automated detection and quantification of GA-specific biomarkers in retinal imaging data, including retrospectively and over time. Moreover, AI solutions can be used for the diagnosis and segmentation of GA as well as for predicting structure and function with and without GA treatment, thereby making a valuable contribution to treatment monitoring, identification of high-risk patients, and patient education. Integrating AI solutions into existing clinical processes and software systems will enable broad implementation of informed, personalized treatment of GA secondary to AMD.
Affiliation(s)
- Petrus Chang
- Augenklinik, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Leon von der Emde
- Augenklinik, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Maximilian Pfau
- Institut für Molekulare und Klinische Ophthalmologie Basel, Basel, Switzerland
- Sandrine Künzel
- Augenklinik, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Monika Fleckenstein
- Department of Ophthalmology and Visual Science, John A. Moran Eye Center, University of Utah, Salt Lake City, UT, USA
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology and Visual Science, John A. Moran Eye Center, University of Utah, Salt Lake City, UT, USA
- Frank G. Holz
- Augenklinik, Universitätsklinikum Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
3. Safai A, Froines C, Slater R, Linderman RE, Bogost J, Pacheco C, Voland R, Pak J, Tiwari P, Channa R, Domalpally A. Quantifying Geographic Atrophy in Age-Related Macular Degeneration: A Comparative Analysis Across 12 Deep Learning Models. Invest Ophthalmol Vis Sci 2024;65:42. PMID: 39046755; PMCID: PMC11271806; DOI: 10.1167/iovs.65.8.42.
Abstract
Purpose: AI algorithms have shown impressive performance in segmenting geographic atrophy (GA) from fundus autofluorescence (FAF) images. However, the choice of artificial intelligence (AI) architecture is an important variable in model development. Here, we explore 12 distinct AI architecture combinations to determine the most effective approach for GA segmentation.
Methods: We investigated architectures with distinct combinations of encoders and decoders. Three decoders, FPN (Feature Pyramid Network), UNet, and PSPNet (Pyramid Scene Parsing Network), served as the foundational frameworks for the segmentation task. Encoders, including EfficientNet, ResNet (Residual Networks), VGG (Visual Geometry Group), and Mix Vision Transformer (mViT), extracted the latent features used for GA segmentation. Performance was measured by comparing GA areas between human and AI predictions and by the Dice coefficient (DC).
Results: The training dataset included 601 FAF images from the AREDS2 study, and validation included 156 FAF images from the GlaxoSmithKline study. The mean difference between grader-measured and AI-predicted areas ranged from -0.08 mm2 (95% CI, -1.35 to 1.19) to 0.73 mm2 (95% CI, -5.75 to 4.29), and DC ranged from 0.884 to 0.993. The best-performing models were the UNet and FPN frameworks with mViT; the PSPNet-based models performed worst.
Conclusions: The choice of AI architecture affects GA segmentation performance. Vision transformers with FPN and UNet frameworks are better suited to this task than convolutional neural network- and PSPNet-based models. The architecture should be tailored to the specific goals of the project.
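The Dice coefficient (DC) used to score each architecture compares a predicted segmentation mask against the grader's mask. A minimal sketch with flat binary masks (toy values, standing in for flattened FAF masks):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0  # both empty: perfect

# Toy 1-D masks: 1 marks a pixel graded/predicted as atrophic
grader_mask = [1, 1, 1, 0, 0, 0, 1, 1]
model_mask = [1, 1, 0, 0, 0, 1, 1, 1]
dc = dice_coefficient(grader_mask, model_mask)  # 4 overlapping px out of 5 + 5
```

Values near 1 indicate near-complete overlap, as with the best mViT-based models reported above.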
Affiliation(s)
- Apoorva Safai
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Departments of Radiology and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin, United States
- Colin Froines
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Robert Slater
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Rachel E. Linderman
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Jacob Bogost
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Caleb Pacheco
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Rickie Voland
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Jeong Pak
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Pallavi Tiwari
- Departments of Radiology and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin, United States
- Roomasa Channa
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Amitha Domalpally
- A-EYE Research Unit, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
- Wisconsin Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, Wisconsin, United States
4. Yao H, Wu Z, Gao SS, Guymer RH, Steffen V, Chen H, Hejrati M, Zhang M. Deep Learning Approaches for Detecting of Nascent Geographic Atrophy in Age-Related Macular Degeneration. Ophthalmology Science 2024;4:100428. PMID: 38284101; PMCID: PMC10818248; DOI: 10.1016/j.xops.2023.100428.
Abstract
Purpose: Nascent geographic atrophy (nGA) refers to specific features seen on OCT B-scans that are strongly associated with the future development of geographic atrophy (GA). This study sought to develop a deep learning model to screen OCT B-scans for nGA warranting further manual review (an artificial intelligence [AI]-assisted approach), and to determine the extent to which the OCT B-scan load requiring manual review can be reduced while maintaining near-perfect nGA detection performance.
Design: Development and evaluation of a deep learning model.
Participants: 1884 OCT volume scans (49 B-scans per volume) without neovascular age-related macular degeneration from 280 eyes of 140 participants with bilateral large drusen at baseline, seen at 6-monthly intervals over up to 36 months (40 eyes developed nGA).
Methods: OCT volume scans and B-scans were labeled for the presence of nGA. Presence at the volume-scan level provided the ground truth for training a deep learning model to identify OCT B-scans that potentially showed nGA and required manual review. Using a threshold that provided a sensitivity of 0.99, the identified B-scans were assigned the ground-truth label under the AI-assisted approach. The performance of this approach for detecting nGA across all visits, and at the visit of nGA onset, was evaluated using fivefold cross-validation.
Main Outcome Measures: Sensitivity for detecting nGA, and the proportion of OCT B-scans requiring manual review.
Results: The AI-assisted approach (using outputs from the deep learning model to guide manual review) had a sensitivity of 0.97 (95% confidence interval [CI], 0.93-1.00) for detecting nGA across all visits and 0.95 (95% CI, 0.87-1.00) at the visit of nGA onset, while requiring manual review of only 2.7% and 1.9% of selected OCT B-scans, respectively.
Conclusions: A deep learning model could enable near-perfect detection of nGA onset while reducing the number of OCT B-scans requiring manual review by over 50-fold. This AI-assisted approach shows promise for substantially reducing the current burden of manually reviewing OCT B-scans to detect this crucial feature that portends future development of GA.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
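The screening idea above, choosing an operating threshold that preserves a target sensitivity and sending only scans above it for manual review, can be sketched as follows. The helper and all scores are illustrative, not the paper's code:

```python
import math

def threshold_for_sensitivity(scores, labels, target=0.99):
    """Lowest threshold at which at least `target` of positives score >= it."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    keep = math.ceil(target * len(pos))  # positives that must stay flagged
    return pos[keep - 1]

# Toy model scores for 8 B-scans; label 1 = nGA present per ground truth
scores = [0.95, 0.10, 0.80, 0.40, 0.70, 0.05, 0.60, 0.20]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
thr = threshold_for_sensitivity(scores, labels)
review_fraction = sum(s >= thr for s in scores) / len(scores)
```

In the study the analogous fraction flagged for review was only a few percent, which is the source of the reported >50-fold workload reduction.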
Affiliation(s)
- Heming Yao
- gRED Computational Science, Genentech, Inc., South San Francisco, California
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Simon S. Gao
- gRED Computational Science, Genentech, Inc., South San Francisco, California
- Robyn H. Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Ophthalmology Division, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
- Verena Steffen
- gRED Computational Science, Genentech, Inc., South San Francisco, California
- Hao Chen
- gRED Computational Science, Genentech, Inc., South San Francisco, California
- Mohsen Hejrati
- gRED Computational Science, Genentech, Inc., South San Francisco, California
- Miao Zhang
- gRED Computational Science, Genentech, Inc., South San Francisco, California
5. Sarao V, Veritti D, De Nardin A, Misciagna M, Foresti G, Lanzetta P. Explainable artificial intelligence model for the detection of geographic atrophy using colour retinal photographs. BMJ Open Ophthalmol 2023;8:e001411. PMID: 38057106; PMCID: PMC10711821; DOI: 10.1136/bmjophth-2023-001411.
Abstract
OBJECTIVE: To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) from colour retinal photographs.
METHODS AND ANALYSIS: We conducted a prospective study in which colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes (healthy, GA, and other retinal diseases) by two experienced retinologists. In parallel, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with the manual evaluations.
RESULTS: A total of 540 colour retinal photographs were collected. The data were divided such that 300 images trained the AI model, 120 were used for validation, and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, a specificity of 97.5%, and an overall diagnostic accuracy of 98.4%. The areas under the receiver operating characteristic (AUC-ROC, 0.988) and precision-recall (AUC-PR, 0.952) curves reinforced the model's robust performance. When differentiating GA from other retinal conditions, the model preserved a diagnostic accuracy of 96.8%, a precision of 90.9%, and a recall of 100%, yielding an F1-score of 0.952; the AUC-ROC and AUC-PR were 0.975 and 0.909, respectively.
CONCLUSIONS: Our explainable AI model exhibits excellent performance in detecting GA from colour retinal images. With its high sensitivity, specificity, and overall diagnostic accuracy, the model stands as a powerful tool for automated diagnosis of GA.
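The sensitivity, specificity, precision, accuracy, and F1 figures above all derive from the same confusion-matrix counts. A small sketch, with counts chosen to roughly echo the GA-versus-other results rather than taken from the paper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, f1

# Hypothetical counts: 40 GA images all detected, 4 false alarms among 120 others
sens, spec, prec, acc, f1 = diagnostic_metrics(tp=40, fp=4, tn=116, fn=0)
```

With a recall of 100%, F1 is governed entirely by precision, which is why the paper's F1 of 0.952 sits between its precision (90.9%) and recall.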
Affiliation(s)
- Valentina Sarao
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
- Daniele Veritti
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Axel De Nardin
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Micaela Misciagna
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Gianluca Foresti
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
6. Elsawy A, Keenan TD, Chen Q, Shi X, Thavikulwat AT, Bhandari S, Chew EY, Lu Z. Deep-GA-Net for Accurate and Explainable Detection of Geographic Atrophy on OCT Scans. Ophthalmology Science 2023;3:100311. PMID: 37304045; PMCID: PMC10251072; DOI: 10.1016/j.xops.2023.100311.
Abstract
Objective: To propose Deep-GA-Net, a 3-dimensional (3D) deep learning network with a 3D attention layer, for the detection of geographic atrophy (GA) on spectral-domain OCT (SD-OCT) scans; to explain its decision making; and to compare it with existing methods.
Design: Deep learning model development.
Participants: 311 participants from the Age-Related Eye Disease Study 2 Ancillary SD-OCT Study.
Methods: A dataset of 1284 SD-OCT scans from the 311 participants was used to develop Deep-GA-Net. Cross-validation was used for evaluation, with each testing set containing no participant from the corresponding training set. En face heatmaps and important regions at the B-scan level were used to visualize the outputs of Deep-GA-Net, and 3 ophthalmologists graded the presence or absence of GA in them to assess the explainability (i.e., understandability and interpretability) of its detections.
Main Outcome Measures: Accuracy, area under the receiver operating characteristic curve (AUC), and area under the precision-recall curve (APR).
Results: Compared with other networks, Deep-GA-Net achieved the best metrics, with an accuracy of 0.93, AUC of 0.94, and APR of 0.91, and received the best gradings of 0.98 and 0.68 on the en face heatmap and B-scan grading tasks, respectively.
Conclusions: Deep-GA-Net detected GA accurately from SD-OCT scans, and its visualizations were more explainable, as judged by 3 ophthalmologists. The code and pretrained models are publicly available at https://github.com/ncbi/Deep-GA-Net.
Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
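The AUC reported for Deep-GA-Net has a simple probabilistic reading: the chance that a randomly chosen GA scan receives a higher score than a randomly chosen non-GA scan. A minimal sketch of that rank-based computation, with toy scores:

```python
def auc_roc(scores, labels):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scan-level scores; label 1 = GA present
auc = auc_roc([0.9, 0.8, 0.4, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0, 0])
```

This pairwise formulation is equivalent to integrating the ROC curve, which is how an AUC of 0.94 should be read: 94% of GA/non-GA pairs are ranked correctly.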
Affiliation(s)
- Amr Elsawy
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Tiarnan D.L. Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Xiaoshuang Shi
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Alisa T. Thavikulwat
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Sanjeeb Bhandari
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Emily Y. Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
7. Khosravi P, Huck NA, Shahraki K, Hunter SC, Danza CN, Kim SY, Forbes BJ, Dai S, Levin AV, Binenbaum G, Chang PD, Suh DW. Deep Learning Approach for Differentiating Etiologies of Pediatric Retinal Hemorrhages: A Multicenter Study. Int J Mol Sci 2023;24:15105. PMID: 37894785; PMCID: PMC10606803; DOI: 10.3390/ijms242015105.
Abstract
Retinal hemorrhages in pediatric patients can be a diagnostic challenge for ophthalmologists. These hemorrhages can arise from various underlying etiologies, including abusive head trauma, accidental trauma, and medical conditions, and accurate identification of the etiology is crucial for appropriate management and legal considerations. In recent years, deep learning techniques have shown promise in assisting healthcare professionals in making more accurate and timely diagnoses of a variety of disorders. Here we explore the potential of deep learning approaches for differentiating the etiologies of pediatric retinal hemorrhages. Our multicenter study analyzed 898 images, yielding a final dataset of 597 retinal hemorrhage fundus photographs categorized into medical (49.9%) and trauma (50.1%) etiologies. Deep learning models based on ResNet and transformer architectures were applied; FastViT-SA12, a hybrid transformer model, achieved the highest accuracy (90.55%) and area under the receiver operating characteristic curve (AUC) of 90.55%, while ResNet18 achieved the highest sensitivity (96.77%) on an independent test dataset. The study highlighted areas for optimization in artificial intelligence (AI) models for pediatric retinal hemorrhages. While AI proves valuable in diagnosing these hemorrhages, the expertise of medical professionals remains irreplaceable, and collaboration between AI specialists and pediatric ophthalmologists is crucial to fully harness AI's potential in diagnosing the etiologies of pediatric retinal hemorrhages.
Affiliation(s)
- Pooya Khosravi
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
- Nolan A. Huck
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Kourosh Shahraki
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Stephen C. Hunter
- School of Medicine, University of California, 900 University Ave, Riverside, CA 92521, USA
- Clifford Neil Danza
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- So Young Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan 31151, Chungcheongnam-do, Republic of Korea
- Brian J. Forbes
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Shuan Dai
- Department of Ophthalmology, Queensland Children’s Hospital, South Brisbane, QLD 4101, Australia
- Alex V. Levin
- Department of Ophthalmology, Flaum Eye Institute, Golisano Children’s Hospital, Rochester, NY 14642, USA
- Gil Binenbaum
- Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Peter D. Chang
- Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
- Department of Radiological Sciences, School of Medicine, University of California, Irvine, CA 92697, USA
- Donny W. Suh
- Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
- Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
8. Hu M, Wu B, Lu D, Xie J, Chen Y, Yang Z, Dai W. Two-step hierarchical neural network for classification of dry age-related macular degeneration using optical coherence tomography images. Front Med (Lausanne) 2023;10:1221453. PMID: 37547613; PMCID: PMC10403700; DOI: 10.3389/fmed.2023.1221453.
Abstract
Purpose: To apply deep learning techniques to the development and validation of a system that categorizes the phases of dry age-related macular degeneration (AMD), including nascent geographic atrophy (nGA), from optical coherence tomography (OCT) images.
Methods: A total of 3,401 OCT macular images from 338 patients admitted to Shenyang Aier Eye Hospital in 2019-2021 were collected for development of the classification model. We adopted a convolutional neural network (CNN) and introduced a hierarchical structure along with image enhancement techniques to train a two-step CNN model that detects and classifies normal eyes and three phases of dry AMD: atrophy-associated drusen regression, nGA, and geographic atrophy (GA). Five-fold cross-validation was used to evaluate the performance of the multi-label classification model.
Results: Across five-fold cross-validation with different dry AMD classification models, the proposed two-step hierarchical model with image enhancement achieved the best classification performance, with an F1-score of 91.32% and a kappa coefficient of 96.09%, compared with state-of-the-art models. An ablation study demonstrated that the proposed method not only improves accuracy across all categories relative to a traditional flat CNN model, but also substantially enhances the classification performance for nGA, improving it from 66.79% to 81.65%.
Conclusion: This study introduces a novel two-step hierarchical deep learning approach to categorizing dry AMD progression phases and demonstrates its efficacy. The high classification performance suggests its potential for guiding individualized treatment plans for patients with macular degeneration.
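The kappa coefficient used here corrects raw multi-class agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa over the four stage labels, with toy gradings rather than study data:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(y_true)
    classes = set(y_true) | set(y_pred)
    p_observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    p_chance = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in classes)
    return (p_observed - p_chance) / (1 - p_chance)

# Toy gradings over the paper's four categories; one GA scan misread as nGA
y_true = ["normal", "drusen_regression", "nGA", "GA", "normal", "GA"]
y_pred = ["normal", "drusen_regression", "nGA", "nGA", "normal", "GA"]
kappa = cohens_kappa(y_true, y_pred)
```

Kappa of 1 means perfect agreement and 0 means chance-level agreement, so the reported 96.09% indicates near-perfect consistency with the reference labels.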
Affiliation(s)
- Min Hu
- Changsha Aier Eye Hospital, Changsha, China
- Bin Wu
- Department of Retina, Shenyang Aier Excellence Eye Hospital, Shenyang, China
- Di Lu
- Department of Retina, Shenyang Aier Optometry Hospital, Shenyang, China
- Jing Xie
- Changsha Aier Eye Hospital, Changsha, China
- Yiqiang Chen
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Zhikuan Yang
- Aier Institute of Optometry and Vision Science, Changsha, China
- Weiwei Dai
- Changsha Aier Eye Hospital, Changsha, China
- Anhui Aier Eye Hospital, Anhui Medical University, Hefei, China
9. Fang W, Wu J, Cheng M, Zhu X, Du M, Chen C, Liao W, Zhi K, Pan W. Diagnosis of invasive fungal infections: challenges and recent developments. J Biomed Sci 2023;30:42. PMID: 37337179; DOI: 10.1186/s12929-023-00926-2.
Abstract
BACKGROUND: The global burden of invasive fungal infections (IFIs) has surged in recent years owing to the growing population of immunocompromised patients with various diseases. Early and accurate diagnosis is crucial for aggressively containing a fungal infection at its initial stages, preventing the development of a life-threatening situation. With the changing demands of clinical mycology, fungal diagnostics has evolved from the traditional methods of microscopy and culture to more advanced non-culture-based tools. With the advent of powerful approaches such as novel PCR assays, T2 Candida, microfluidic chip technology, next-generation sequencing, new-generation biosensors, nanotechnology-based tools, and artificial intelligence-based models, the face of fungal diagnostics is constantly changing for the better. All of these advances are reviewed here to give readers the latest update in an orderly flow.
MAIN TEXT: The team conducted a detailed literature survey, followed by data collection, extraction of pertinent data, in-depth analysis, and composition of the various subsections and the final review. The review is unique in discussing advances in molecular methods, serology-based methods, biosensor technology, and machine learning-based models under one roof. To the best of our knowledge, no previous review has covered all of these fields (especially biosensor technology and machine learning using artificial intelligence) with relevance to invasive fungal infections.
CONCLUSION: This review will assist the scientific community in updating its understanding of the most recent advancements on the horizon, which may be implemented as adjuncts to traditional diagnostic algorithms.
Affiliation(s)
- Wenjie Fang
  - Department of Dermatology, Shanghai Key Laboratory of Molecular Medical Mycology, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
- Junqi Wu
  - Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, 200433, China
  - Shanghai Engineering Research Center of Lung Transplantation, Shanghai, 200433, China
- Mingrong Cheng
  - Department of Anorectal Surgery, The Third Affiliated Hospital of Guizhou Medical University, Guizhou, 558000, China
- Xinlin Zhu
  - Department of Dermatology, Shanghai Key Laboratory of Molecular Medical Mycology, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
- Mingwei Du
  - Department of Dermatology, Shanghai Key Laboratory of Molecular Medical Mycology, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
- Chang Chen
  - Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, 200433, China
  - Shanghai Engineering Research Center of Lung Transplantation, Shanghai, 200433, China
- Wanqing Liao
  - Department of Dermatology, Shanghai Key Laboratory of Molecular Medical Mycology, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
- Kangkang Zhi
  - Department of Vascular and Endovascular Surgery, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
- Weihua Pan
  - Department of Dermatology, Shanghai Key Laboratory of Molecular Medical Mycology, Second Affiliated Hospital of Naval Medical University, Shanghai, 200003, China
|
10
|
Hassan E, Elmougy S, Ibraheem MR, Hossain MS, AlMutib K, Ghoneim A, AlQahtani SA, Talaat FM. Enhanced Deep Learning Model for Classification of Retinal Optical Coherence Tomography Images. Sensors (Basel) 2023; 23:5393. [PMID: 37420558 DOI: 10.3390/s23125393]
Abstract
Retinal optical coherence tomography (OCT) imaging is a valuable tool for assessing the condition of the posterior segment of the eye. It strongly affects the specificity of diagnosis, the monitoring of many physiological and pathological processes, and the evaluation of therapeutic response across many fields of clinical practice, including primary eye diseases and systemic diseases such as diabetes. Precise diagnosis, classification, and automated image analysis models are therefore crucial. In this paper, we propose an enhanced optical coherence tomography (EOCT) model that classifies retinal OCT images based on modified ResNet-50 and random forest algorithms, which are used in the proposed study's training strategy to enhance performance. The Adam optimizer is applied during training to increase the efficiency of the ResNet-50 model relative to common pretrained models such as spatially separable convolutions and Visual Geometry Group (VGG-16). The experimental results show that the sensitivity, specificity, precision, negative predictive value, false discovery rate, false negative rate, accuracy, and Matthews correlation coefficient are 0.9836, 0.9615, 0.9740, 0.9756, 0.0385, 0.0260, 0.0164, 0.9747, 0.9788, and 0.9474, respectively.
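Several entries in this list report classifier quality through sensitivity, specificity, precision, negative predictive value, and the Matthews correlation coefficient. As a minimal illustration (not the authors' code), all of these derive from the four cells of a binary confusion matrix; the counts below are hypothetical:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Common metrics derived from a binary confusion matrix."""
    sensitivity = tp / (tp + fn)            # true positive rate (recall)
    specificity = tn / (tn + fp)            # true negative rate
    precision = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    fdr = 1.0 - precision                   # false discovery rate
    fnr = 1.0 - sensitivity                 # false negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    # Matthews correlation coefficient; 0.0 when any margin is empty
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {
        "sensitivity": sensitivity, "specificity": specificity,
        "precision": precision, "npv": npv, "fdr": fdr, "fnr": fnr,
        "accuracy": accuracy, "mcc": mcc,
    }

# Hypothetical counts: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives
metrics = classification_metrics(tp=90, fp=5, fn=10, tn=95)
print(metrics)
```

Note the complementary pairs visible in the abstract's own numbers: the false negative rate is 1 minus sensitivity, and the false discovery rate is 1 minus precision.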
Affiliation(s)
- Esraa Hassan
  - Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
- Samir Elmougy
  - Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Mai R Ibraheem
  - Department of Information Technology, Faculty of Computers and Information, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
- M Shamim Hossain
  - Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Khalid AlMutib
  - Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
- Ahmed Ghoneim
  - Research Chair of Pervasive and Mobile Computing, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Salman A AlQahtani
  - Research Chair of Pervasive and Mobile Computing, Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11574, Saudi Arabia
- Fatma M Talaat
  - Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
|
11
|
Wei W, Anantharanjit R, Patel RP, Cordeiro MF. Detection of macular atrophy in age-related macular degeneration aided by artificial intelligence. Expert Rev Mol Diagn 2023:1-10. [PMID: 37144908 DOI: 10.1080/14737159.2023.2208751]
Abstract
INTRODUCTION Age-related macular degeneration (AMD) is a leading cause of irreversible visual impairment worldwide. The endpoint of AMD, in both its dry and wet forms, is macular atrophy (MA), characterized by permanent loss of the retinal pigment epithelium (RPE) and overlying photoreceptors. A recognized unmet need in AMD is the early detection of MA development. AREAS COVERED Artificial intelligence (AI) has demonstrated great impact in the detection of retinal diseases, especially given its robust ability to analyze the big data afforded by ophthalmic imaging modalities such as color fundus photography (CFP), fundus autofluorescence (FAF), near-infrared reflectance (NIR), and optical coherence tomography (OCT). Among these, OCT has shown great promise in identifying early MA using the criteria established in 2018. EXPERT OPINION Few studies have applied AI-OCT methods to identify MA; however, the results are very promising when compared with other imaging modalities. In this paper, we review the development and advances of ophthalmic imaging modalities and their combination with AI technology to detect MA in AMD. In addition, we emphasize the application of AI-OCT as an objective, cost-effective tool for early detection and monitoring of MA progression in AMD.
Affiliation(s)
- Wei Wei
  - Department of Ophthalmology, Ningbo Medical Center Lihuili Hospital, Ningbo, China
  - Department of Surgery & Cancer, Imperial College London, London, UK
  - Imperial College Ophthalmology Research Group (ICORG), London, UK
- Rajeevan Anantharanjit
  - Imperial College Ophthalmology Research Group (ICORG), London, UK
  - Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Radhika Pooja Patel
  - Imperial College Ophthalmology Research Group (ICORG), London, UK
  - Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Maria Francesca Cordeiro
  - Department of Surgery & Cancer, Imperial College London, London, UK
  - Imperial College Ophthalmology Research Group (ICORG), London, UK
  - Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
|
12
|
Wang Y, Jia X, Wei S, Li X. A deep learning model established for evaluating lid margin signs with colour anterior segment photography. Eye (Lond) 2023; 37:1377-1382. [PMID: 35739245 PMCID: PMC10170093 DOI: 10.1038/s41433-022-02088-1]
Abstract
OBJECTIVES To evaluate the feasibility of applying a deep learning model to identify lid margin signs from colour anterior segment photography. METHODS We collected a total of 832 colour anterior segment photographs from 428 dry eye patients. Eight lid margin signs were labelled by human ophthalmologists. Eight deep learning models were constructed based on VGGNet-13 and trained to identify the lid margin signs. Sensitivity, specificity, receiver operating characteristic (ROC) curves, and area under the curve (AUC) were used to evaluate the models. RESULTS The AUC for rounding of the posterior lid margin was 0.979; the AUCs for lid margin irregularity and vascularization were 0.977 and 0.980, respectively. For hyperkeratinization, the AUC was 0.964. The AUCs for meibomian gland orifice (MGO) retroplacement and plugging were 0.963 and 0.968. For the mucocutaneous junction (MCJ) anteroplacement and retroplacement models, the AUCs were 0.950 and 0.978. The sensitivity and specificity for rounding of the posterior lid margin were 0.974 and 0.921. For irregularity, the sensitivity and specificity were 0.930 and 0.938, and those for vascularization were 0.923 and 0.961. The hyperkeratinization model achieved a sensitivity of 0.889 and a specificity of 0.948. The models identifying MGO plugging and retroplacement achieved sensitivities of 0.979 and 0.909 with specificities of 0.867 and 0.967, respectively. The sensitivities for MCJ anteroplacement and retroplacement were 0.875 and 0.969, with specificities of 0.966 and 0.888. CONCLUSIONS The deep learning models identified lid margin signs with high sensitivity and specificity, demonstrating the potential of applying artificial intelligence in lid margin evaluation to assist dry eye decision-making.
Affiliation(s)
- Yuexin Wang
  - Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Xingheng Jia
  - School of Vehicle and Mobility, Tsinghua University, Beijing, China
- Shanshan Wei
  - Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
- Xuemin Li
  - Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
|
13
|
Lu J, Cheng Y, Li J, Liu Z, Shen M, Zhang Q, Liu J, Herrera G, Hiya FE, Morin R, Joseph J, Gregori G, Rosenfeld PJ, Wang RK. Automated segmentation and quantification of calcified drusen in 3D swept source OCT imaging. Biomed Opt Express 2023; 14:1292-1306. [PMID: 36950236 PMCID: PMC10026581 DOI: 10.1364/boe.485999]
Abstract
Qualitative and quantitative assessment of calcified drusen is clinically important for determining the risk of disease progression in age-related macular degeneration (AMD). This paper reports the development of an automated algorithm to segment and quantify calcified drusen on swept-source optical coherence tomography (SS-OCT) images. The algorithm leverages the higher scattering of calcified drusen compared with soft drusen: calcified drusen have a higher optical attenuation coefficient (OAC), which results in a choroidal hypotransmission defect (hypoTD) below the calcified drusen. We show that calcified drusen can be segmented automatically from 3D SS-OCT scans by combining the OAC within drusen with the hypoTDs under drusen. We also propose a correction method for the segmentation of the retinal pigment epithelium (RPE) overlying calcified drusen, which adjusts the segmented RPE by the width of the OAC peak along each A-line, leading to more accurate segmentation and quantification of drusen in general, and of calcified drusen in particular. A total of 29 eyes with nonexudative AMD and calcified drusen, imaged with SS-OCT using the 6 × 6 mm2 scanning pattern, were used to test the performance of the proposed automated method. The method achieved good agreement with human expert graders in identifying the area of calcified drusen (Dice similarity coefficient: 68.27 ± 11.09%; correlation coefficient of the area measurements: r = 0.9422; mean bias of the area measurements: 0.04781 mm2).
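Many of the segmentation studies in this list (this entry, the geographic atrophy work above, and the AREDS2 labeling study) report grader agreement as a Dice similarity coefficient between a predicted mask and a ground-truth mask. As a minimal sketch (not any of the authors' code), the metric can be computed from two binary masks as follows; the toy arrays are hypothetical:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: each has 3 positive pixels, 2 of which overlap
a = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

Because the denominator is the sum of both mask areas, Dice penalizes both over- and under-segmentation symmetrically, which is why it is the standard agreement score for lesion masks.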
Affiliation(s)
- Jie Lu
  - Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Yuxuan Cheng
  - Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Jianqing Li
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ziyu Liu
  - Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Mengxi Shen
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Qinqin Zhang
  - Department of Bioengineering, University of Washington, Seattle, Washington, USA
  - Research and Development, Carl Zeiss Meditec, Inc., Dublin, CA, USA
- Jeremy Liu
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Gissel Herrera
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Farhan E. Hiya
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Rosalyn Morin
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Joan Joseph
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Giovanni Gregori
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Philip J. Rosenfeld
  - Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Ruikang K. Wang
  - Department of Bioengineering, University of Washington, Seattle, Washington, USA
  - Department of Ophthalmology, University of Washington, Seattle, Washington, USA
|
14
|
Pramil V, de Sisternes L, Omlor L, Lewis W, Sheikh H, Chu Z, Manivannan N, Durbin M, Wang RK, Rosenfeld PJ, Shen M, Guymer R, Liang MC, Gregori G, Waheed NK. A Deep Learning Model for Automated Segmentation of Geographic Atrophy Imaged Using Swept-Source OCT. Ophthalmol Retina 2023; 7:127-141. [PMID: 35970318 DOI: 10.1016/j.oret.2022.08.007]
Abstract
PURPOSE To present a deep learning algorithm for segmentation of geographic atrophy (GA) using en face swept-source OCT (SS-OCT) images that is accurate and reproducible for assessing GA growth over time. DESIGN Retrospective review of images obtained as part of a prospective natural history study. SUBJECTS Patients with GA (n = 90), patients with early or intermediate age-related macular degeneration (n = 32), and healthy controls (n = 16). METHODS An automated algorithm using scan volume data to generate 3 image inputs characterizing the main OCT features of GA (hypertransmission in the sub-retinal pigment epithelium [sub-RPE] slab, regions of RPE loss, and loss of retinal thickness) was trained on 126 images (93 with GA and 33 without GA, from the same number of eyes) using fivefold cross-validation and data augmentation techniques. It was tested on an independent set of 180 6 × 6-mm2 macular SS-OCT scans, consisting of 3 repeated scans of 30 eyes with GA at baseline and follow-up, as well as 45 images obtained from 42 eyes without GA. MAIN OUTCOME MEASURES The GA area, enlargement rate of GA area, square root of GA area, and square root of the enlargement rate of GA area were calculated using the automated algorithm and compared with ground truth measurements performed by 2 manual graders. The repeatability of these measurements was determined using intraclass correlation coefficients (ICCs). RESULTS There were no significant differences between the graders and the automated algorithm in the GA areas, enlargement rates of GA area, square roots of GA area, or square roots of the enlargement rates of GA area. The algorithm showed high repeatability, with ICCs of 0.99 and 0.94 for the GA area measurements and the enlargement rates of GA area, respectively. The repeatability limit for the GA area measurements made by grader 1, grader 2, and the automated algorithm was 0.28, 0.33, and 0.92 mm2, respectively.
CONCLUSIONS Compared with manual methods, the proposed deep learning-based automated algorithm for GA segmentation using en face SS-OCT images accurately delineated GA and produced reproducible measurements of GA enlargement rates.
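This entry (like the GA labeling study in the header) reports enlargement both as raw GA area and as the square root of GA area, a standard transform that reduces the dependence of the growth rate on baseline lesion size. A minimal sketch of the two rates, with hypothetical lesion areas and follow-up interval:

```python
import math

def ga_growth_rates(area_baseline_mm2: float,
                    area_followup_mm2: float,
                    years: float) -> tuple[float, float]:
    """Raw (mm^2/year) and square-root-transformed (mm/year) GA enlargement rates."""
    raw_rate = (area_followup_mm2 - area_baseline_mm2) / years
    sqrt_rate = (math.sqrt(area_followup_mm2) - math.sqrt(area_baseline_mm2)) / years
    return raw_rate, sqrt_rate

# Hypothetical lesion growing from 4.0 mm^2 to 6.25 mm^2 over 1.5 years
raw, sqrt_r = ga_growth_rates(4.0, 6.25, 1.5)
print(raw, sqrt_r)  # 1.5 mm^2/year, (2.5 - 2.0)/1.5 ≈ 0.333 mm/year
```

Taking the square root before differencing is what makes growth rates comparable across small and large baseline lesions, which is why both forms are reported.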
Affiliation(s)
- Varsha Pramil
  - Tufts University School of Medicine, Boston, Massachusetts
  - New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Lars Omlor
  - Carl Zeiss Meditec, Inc, Dublin, California
- Warren Lewis
  - Carl Zeiss Meditec, Inc, Dublin, California
  - Bayside Photonics, Inc, Yellow Springs, Ohio
- Harris Sheikh
  - New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Zhongdi Chu
  - Department of Biomedical Engineering, University of Washington, Seattle, Washington
- Ruikang K Wang
  - Department of Biomedical Engineering, University of Washington, Seattle, Washington
- Philip J Rosenfeld
  - Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Mengxi Shen
  - Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Robyn Guymer
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Department of Surgery (Ophthalmology), University of Melbourne, Melbourne, Australia
- Michelle C Liang
  - Tufts University School of Medicine, Boston, Massachusetts
  - New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
- Giovanni Gregori
  - Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Nadia K Waheed
  - Tufts University School of Medicine, Boston, Massachusetts
  - New England Eye Center, Tufts New England Medical Center, Boston, Massachusetts
|
15
|
A Deep Learning Model for Evaluating Meibomian Glands Morphology from Meibography. J Clin Med 2023; 12:1053. [PMID: 36769701 PMCID: PMC9918190 DOI: 10.3390/jcm12031053]
Abstract
To develop a deep learning model for automatically segmenting the tarsus and meibomian gland areas on meibography, we included 1087 meibography images from dry eye patients. The contours of the tarsus and each meibomian gland were labeled manually by human experts. The dataset was divided into training, validation, and test sets. We built a convolutional neural network-based U-Net and trained the model to segment the tarsus and meibomian gland areas. Accuracy, sensitivity, specificity, and the receiver operating characteristic (ROC) curve were calculated to evaluate the model. The area under the curve (AUC) values for the models segmenting the tarsus and meibomian gland areas were 0.985 and 0.938, respectively. The deep learning model achieved a sensitivity of 0.975, a specificity of 0.99, and an accuracy of 0.985 for segmenting the tarsus area. For meibomian gland area segmentation, the model obtained a high specificity of 0.96, a high accuracy of 0.937, and a moderate sensitivity of 0.751. This study trained a deep learning model to automatically segment the tarsus and meibomian gland areas from infrared meibography, and the model demonstrated outstanding segmentation accuracy. With further improvement, the model could be applied to assess the meibomian glands, facilitating dry eye evaluation in various clinical and research scenarios.
|
16
|
Developing a Deep Learning Model to Evaluate Bulbar Conjunctival Injection with Color Anterior Segment Photographs. J Clin Med 2023; 12:715. [PMID: 36675643 PMCID: PMC9867092 DOI: 10.3390/jcm12020715]
Abstract
The present research aims to evaluate the feasibility of a deep learning model in grading bulbar conjunctival injection. We collected 1401 color anterior segment photographs showing the cornea and bulbar conjunctiva. The ground truth was bulbar conjunctival injection scores labeled by human ophthalmologists. Two convolutional neural network-based models were constructed and trained. Accuracy, precision, recall, F1-score, Cohen's kappa, and the area under the curve (AUC) were calculated to evaluate the models. The micro-average and macro-average AUC values for grading bulbar conjunctival injection were both 0.98. The deep learning model achieved a high accuracy of 87.12%, a precision of 87.13%, a recall of 87.12%, an F1-score of 87.07%, and a Cohen's kappa of 0.8153. The model demonstrated excellent performance in evaluating the severity of bulbar conjunctival injection and has the potential to help evaluate ocular surface diseases and track disease progression and recovery.
|
17
|
Ganjdanesh A, Zhang J, Yan S, Chen W, Huang H. Multimodal Genotype and Phenotype Data Integration to Improve Partial Data-Based Longitudinal Prediction. J Comput Biol 2022; 29:1324-1345. [PMID: 36383766 PMCID: PMC9835299 DOI: 10.1089/cmb.2022.0378]
Abstract
Multimodal data analysis has recently attracted ever-increasing attention in the computational biology and bioinformatics community. However, existing multimodal learning approaches require all data modalities to be available at both training and prediction time, so they cannot be applied to many real-world biomedical applications, which often face a missing-modality problem because collecting all modalities is prohibitively costly. Meanwhile, two diagnosis-related pieces of information are of main interest when examining a subject with a chronic, longitudinally progressing disease: their current status (diagnosis) and how it will change before the next visit (longitudinal outcome). Correct responses to these queries can identify susceptible individuals and enable early interventions for them. In this article, we develop a novel adversarial mutual learning framework for longitudinal disease progression prediction that leverages multiple data modalities available at training time to train a performant model that uses a single modality for prediction. Specifically, a single-modal model (which uses the main modality) learns from a pretrained multimodal model (which accepts both main and auxiliary modalities as input) in a mutual learning manner to (1) infer outcome-related representations of the auxiliary modalities from its own representations of the main modality during adversarial training and (2) combine them to predict the longitudinal outcome. We apply our method to retinal imaging genetics for the early diagnosis of age-related macular degeneration (AMD), that is, simultaneous assessment of AMD severity at the current visit and the prognosis at the subsequent visit.
Our experiments using the Age-Related Eye Disease Study dataset show that our method is more effective than baselines at classifying patients' current AMD severity and forecasting their future severity.
Affiliation(s)
- Alireza Ganjdanesh
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Jipeng Zhang
  - Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Sarah Yan
  - West Windsor-Plainsboro High School South, Princeton Junction, New Jersey, USA
- Wei Chen
  - Department of Biostatistics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
  - Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
  - Department of Human Genetics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Heng Huang
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
|
18
|
Gojić G, Petrović VB, Dragan D, Gajić DB, Mišković D, Džinić V, Grgić Z, Pantelić J, Oros A. Comparing the Clinical Viability of Automated Fundus Image Segmentation Methods. Sensors (Basel) 2022; 22:9101. [PMID: 36501801 PMCID: PMC9735987 DOI: 10.3390/s22239101]
Abstract
Recent methods for automatic blood vessel segmentation from fundus images are commonly implemented as convolutional neural networks. While these networks report high values on objective metrics, the clinical viability of the recovered segmentation masks remains unexplored. In this paper, we perform a pilot study to assess the clinical viability of automatically generated segmentation masks in the diagnosis of diseases affecting retinal vascularization. Five ophthalmologists with clinical experience participated in the study. The results demonstrate low classification accuracy, suggesting that generated segmentation masks cannot be used as a standalone resource in general clinical practice. The results also point to possible limitations of the experimental design. In a follow-up experiment, we evaluate the clinical quality of the masks by having the ophthalmologists rank the generation methods. The ranking is established with high intra-observer consistency, indicating better subjective performance for a subset of the tested networks. The study also shows that, for the methods involved, objective metrics are not correlated with subjective metrics in retinal segmentation tasks, suggesting that the objective metrics commonly used in scientific papers to measure a method's performance are not reliable criteria for choosing clinically robust solutions.
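The abstract's central claim is a lack of correlation between objective metrics and clinicians' subjective rankings. As a generic illustration of how such a check can be run (not the authors' code), Spearman's rank correlation compares a per-method metric against expert ranks; the scores and ranks below are hypothetical:

```python
def rank(values):
    """1-based ranks; assumes distinct values (no tie averaging)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for pos, i in enumerate(order, start=1):
        ranks[i] = float(pos)
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: Dice scores of 5 segmentation methods vs. a clinician's
# subjective quality ranks (higher rank = better perceived quality)
dice_scores = [0.81, 0.79, 0.85, 0.70, 0.76]
clinician_ranks = [2, 5, 4, 1, 3]
print(spearman_rho(dice_scores, clinician_ranks))
```

A rho near zero for such pairs would reproduce the paper's qualitative finding that the objective metric does not track clinical preference.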
Affiliation(s)
- Gorana Gojić
  - The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
  - Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
- Veljko B. Petrović
  - Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
- Dinu Dragan
  - Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
- Dušan B. Gajić
  - Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
- Dragiša Mišković
  - The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
- Jelica Pantelić
  - Institute of Eye Diseases, University Clinical Center of Serbia, 11000 Belgrade, Serbia
- Ana Oros
  - Eye Clinic Džinić, 21107 Novi Sad, Serbia
  - Institute of Neonatology, 11000 Belgrade, Serbia
|
19
|
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-defined rules, DL works by exposing an algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e., learn) by adjusting the parameters inside the model (network) during training, in order to complete the task on its own. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete accurately; traditional programming can also be susceptible to bias from the programmer's experience. With the dramatic increase in the amount and complexity of clinical data, DL has been used to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
  - Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
  - Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
  - Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
  - Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
  - School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
  - Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
  - School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
  - Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
  - Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
  - Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
  - Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
|
20
|
Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022; 12:14080. [PMID: 35982106 PMCID: PMC9388536 DOI: 10.1038/s41598-022-17753-4]
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm to automatically facilitate the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: learning the discriminative features and fusing the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available at https://github.com/bionlplab/GlaucomaNet.
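As a reminder of how the AUC figures quoted in this abstract are defined, here is a minimal, dependency-free sketch based on the Mann-Whitney rank identity (illustrative only; the function name and data are assumptions, not the GlaucomaNet code):

```python
def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the probability that a randomly chosen positive example outranks
    a randomly chosen negative one, counting ties as half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating score list yields 1.0, and constant scores yield 0.5, matching the usual chance-level interpretation.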
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Bojian Hou
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Lei Liu
- Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
| | - Mae Gordon
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Michael Kass
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
| | - Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
21
Gunasekaran K, Pitchai R, Chaitanya GK, Selvaraj D, Annie Sheryl S, Almoallim HS, Alharbi SA, Raghavan SS, Tesemma BG. A Deep Learning Framework for Earlier Prediction of Diabetic Retinopathy from Fundus Photographs. BIOMED RESEARCH INTERNATIONAL 2022; 2022:3163496. [PMID: 35711528 PMCID: PMC9197616 DOI: 10.1155/2022/3163496] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 04/27/2022] [Accepted: 05/11/2022] [Indexed: 11/17/2022]
Abstract
Identifying diabetic patients directly from retinopathy photographs is possible but challenging. Several disease-diagnosis approaches rely on the blood vessels visible in fundus photographs. We sought to replicate the published findings on the implementation and verification of a deep learning approach for diabetic retinopathy identification in retinal fundus pictures. To address this problem, the proposed study uses recurrent neural networks (RNNs) to retrieve characteristics from deep networks; automating the identification of such disorders with computational approaches could therefore be a valuable solution. We developed and tested several iterations of a deep learning framework to forecast the progression of diabetic retinopathy in diabetic individuals who had undergone teleretinal diabetic retinopathy assessment in a primary healthcare setting. A collection of one-field or three-field colour fundus pictures served as the input for both iterations. Using the proposed DRNN methodology, advanced identification of the diabetic state was performed using HE detected in an eye's blood vessels. This research demonstrates the difficulties of duplicating deep learning findings and the need for further reproduction and replication studies to verify deep learning techniques, particularly in healthcare image processing. The study also investigates the use of several other deep neural network frameworks on photographs from the dataset after suitable image-computation methods, such as local average colour subtraction, are applied to highlight the germane characteristics of a fundoscopy image, thereby enhancing the identification and assessment of diabetic retinopathy and serving as a guideline framework for practitioners worldwide.
Affiliation(s)
- K. Gunasekaran
- Department of Computer Science and Engineering, Sri Indu College of Engineering and Technology, Hyderabad, Telangana 501510, India
| | - R. Pitchai
- Department of Computer Science and Engineering, B V Raju Institute of Technology, Narsapur, Telangana 502313, India
| | - Gogineni Krishna Chaitanya
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
| | - D. Selvaraj
- Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai, Tamil Nadu 600123, India
| | - S. Annie Sheryl
- Department of Computer Science and Engineering, Panimalar Institute of Technology, Chennai, Tamil Nadu 600123, India
| | - Hesham S. Almoallim
- Department of Oral and Maxillofacial Surgery, College of Dentistry, King Saud University, PO Box-60169, Riyadh-11545, Saudi Arabia
| | - Sulaiman Ali Alharbi
- Department of Botany and Microbiology, College of Science, King Saud University, PO Box-2455, Riyadh-11451, Saudi Arabia
| | - S. S. Raghavan
- Department of Microbiology, University of Texas Health and Science Center at Tyler, Tyler-75703, TX, USA
22
Dong L, He W, Zhang R, Ge Z, Wang YX, Zhou J, Xu J, Shao L, Wang Q, Yan Y, Xie Y, Fang L, Wang H, Wang Y, Zhu X, Wang J, Zhang C, Wang H, Wang Y, Chen R, Wan Q, Yang J, Zhou W, Li H, Yao X, Yang Z, Xiong J, Wang X, Huang Y, Chen Y, Wang Z, Rong C, Gao J, Zhang H, Wu S, Jonas JB, Wei WB. Artificial Intelligence for Screening of Multiple Retinal and Optic Nerve Diseases. JAMA Netw Open 2022; 5:e229960. [PMID: 35503220 PMCID: PMC9066285 DOI: 10.1001/jamanetworkopen.2022.9960] [Citation(s) in RCA: 43] [Impact Index Per Article: 21.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
IMPORTANCE The lack of experienced ophthalmologists limits the early diagnosis of retinal diseases. Artificial intelligence can provide an efficient, real-time way to screen for retinal diseases. OBJECTIVE To develop and prospectively validate a deep learning (DL) algorithm that, based on ocular fundus images, recognizes numerous retinal diseases simultaneously in clinical practice. DESIGN, SETTING, AND PARTICIPANTS This multicenter, diagnostic study at 65 public medical screening centers and hospitals in 19 Chinese provinces included individuals attending annual routine medical examinations and participants of population-based and community-based studies. EXPOSURES Based on 120 002 ocular fundus photographs, the Retinal Artificial Intelligence Diagnosis System (RAIDS) was developed to identify 10 retinal diseases. RAIDS was validated in a prospectively collected data set, and its performance was compared with that of ophthalmologists in the data sets of the population-based Beijing Eye Study and the community-based Kailuan Eye Study. MAIN OUTCOMES AND MEASURES The performance of each classifier included sensitivity, specificity, accuracy, F1 score, and Cohen κ score. RESULTS In the prospective validation data set of 208 758 images collected from 110 784 individuals (median [range] age, 42 [8-87] years; 115 443 [55.3%] female), RAIDS achieved a sensitivity of 89.8% (95% CI, 89.5%-90.1%) for detecting any of the 10 retinal diseases. RAIDS differentiated the 10 retinal diseases with accuracies ranging from 95.3% to 99.9%, without marked differences between medical screening centers and geographical regions in China. Compared with retinal specialists, RAIDS achieved a higher sensitivity for detection of any retinal abnormality (RAIDS, 91.7% [95% CI, 90.6%-92.8%]; certified ophthalmologists, 83.7% [95% CI, 82.1%-85.1%]; junior retinal specialists, 86.4% [95% CI, 84.9%-87.7%]; and senior retinal specialists, 88.5% [95% CI, 87.1%-89.8%]).
RAIDS reached a superior or similar diagnostic sensitivity compared with senior retinal specialists in the detection of 7 of 10 retinal diseases (ie, referral diabetic retinopathy, referral possible glaucoma, macular hole, epiretinal macular membrane, hypertensive retinopathy, myelinated fibers, and retinitis pigmentosa). Its performance was comparable with that of certified ophthalmologists for 2 diseases (ie, age-related macular degeneration and retinal vein occlusion). Compared with ophthalmologists, RAIDS needed 96% to 97% less time for image assessment. CONCLUSIONS AND RELEVANCE In this diagnostic study, the DL system accurately distinguished 10 retinal diseases in real time. This technology may help overcome the lack of experienced ophthalmologists in underdeveloped areas.
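The per-classifier measures named in this abstract (sensitivity, specificity, accuracy, F1 score, and Cohen κ) all derive from a single 2x2 confusion matrix. A minimal, self-contained sketch of those definitions (an illustration, not the RAIDS evaluation code; the function name and labels are assumptions):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy, F1, and Cohen's kappa
    from paired binary labels (1 = disease present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    acc = (tp + tn) / n
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = acc
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "f1": f1, "kappa": kappa}
```

In practice a library such as scikit-learn provides the same quantities, but the hand-rolled version makes the relationships between the reported numbers explicit.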
Affiliation(s)
- Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wanji He
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Ruiheng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Zongyuan Ge
- eResearch Centre, Monash University, Melbourne, Victoria, Australia
- ECSE, Faculty of Engineering, Monash University, Melbourne, Victoria, Australia
| | - Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jinqiong Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jie Xu
- Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Lei Shao
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Qian Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yanni Yan
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ying Xie
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Department of Ophthalmology, Shanxi Provincial People's Hospital, Taiyuan, China
| | - Lijian Fang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Department of Ophthalmology, Beijing Liangxiang Hospital, Capital Medical University, Beijing, China
| | - Haiwei Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Department of Ophthalmology, Fuxing Hospital, Capital Medical University, Beijing, China
| | - Yenan Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Xiaobo Zhu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Department of Ophthalmology, Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
| | - Jinyuan Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Chuan Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Heng Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yining Wang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Rongtian Chen
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Qianqian Wan
- Department of Ophthalmology, the Second Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Jingyan Yang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wenda Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Heyan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Xuan Yao
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Zhiwen Yang
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Xin Wang
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Yelin Huang
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co, Ltd, Beijing, China
| | - Zhaohui Wang
- iKang Guobin Healthcare Group Co, Ltd, Beijing, China
| | - Ce Rong
- iKang Guobin Healthcare Group Co, Ltd, Beijing, China
| | - Jianxiong Gao
- iKang Guobin Healthcare Group Co, Ltd, Beijing, China
| | - Shouling Wu
- Department of Cardiology, Kailuan General Hospital, Tangshan, Hebei, China
| | - Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Switzerland
| | - Wen Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
23
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023] Open
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization. 
CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
| | - Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
| | - Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
| | - Malvina B Eydelman
- Office of Health Technology 1, Center of Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
| | - Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom.
| | - Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois.
24
Yasser I, Khalifa F, Abdeltawab H, Ghazal M, Sandhu HS, El-Baz A. Automated Diagnosis of Optical Coherence Tomography Angiography (OCTA) Based on Machine Learning Techniques. SENSORS 2022; 22:s22062342. [PMID: 35336513 PMCID: PMC8952189 DOI: 10.3390/s22062342] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 03/11/2022] [Accepted: 03/14/2022] [Indexed: 12/17/2022]
Abstract
Diabetic retinopathy (DR) refers to the ophthalmological complications of diabetes mellitus. It is primarily a disease of the retinal vasculature that can lead to vision loss. Optical coherence tomography angiography (OCTA) can detect changes in the retinal vascular system, which can help in the early detection of DR. In this paper, we describe a novel framework that detects DR from OCTA by capturing appearance and morphological markers of the retinal vascular system. The framework consists of two main steps: (1) extracting the retinal vascular system from OCTA images using a joint Markov-Gibbs Random Field (MGRF) model of OCTA image appearance and (2) estimating the distance map inside the extracted vascular system as an imaging marker that describes the morphology of the retinal vascular (RV) system. The OCTA image, the extracted vascular system, and the estimated RV distance map are then composed into a three-dimensional matrix used as input to a convolutional neural network (CNN). The main motivation for this data representation is that it combines low-level data with high-level processed data, allowing the CNN to capture significant features and increasing its ability to distinguish DR from the normal retina. This is applied at multiple scales, including the original full-dimension images as well as sub-images extracted from the original OCTA images. The proposed approach was tested on in vivo data from 91 patients, qualitatively graded by retinal experts, and quantitatively validated using three metrics: sensitivity, specificity, and overall accuracy. Results showed the capability of the proposed approach, which outperformed current deep learning and feature-based DR detection approaches.
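The data representation described in this abstract stacks the raw image, the extracted vessel mask, and an intra-vessel distance map into one multi-channel input. A simplified numpy sketch of that composition (the brute-force distance transform and function names are assumptions for illustration, not the authors' implementation, which would use an efficient transform such as scipy.ndimage.distance_transform_edt):

```python
import numpy as np

def vessel_distance_map(mask):
    """Brute-force Euclidean distance from each vessel pixel to the
    nearest background pixel; zero outside the vessel mask."""
    bg = np.argwhere(mask == 0)
    dist = np.zeros(mask.shape, dtype=float)
    for (r, c) in np.argwhere(mask == 1):
        dist[r, c] = np.sqrt(((bg - (r, c)) ** 2).sum(axis=1)).min()
    return dist

def compose_input(octa, mask):
    """Stack raw image, vessel mask, and distance map into a
    three-channel array suitable as CNN input."""
    return np.stack(
        [octa, mask.astype(float), vessel_distance_map(mask)], axis=-1
    )
```

The distance channel encodes local vessel caliber (pixels deep inside a vessel score higher), which is the morphological marker the abstract refers to.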
Affiliation(s)
- Ibrahim Yasser
- Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt;
| | - Fahmi Khalifa
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA; (F.K.); (H.A.); (H.S.S.)
| | - Hisham Abdeltawab
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA; (F.K.); (H.A.); (H.S.S.)
| | - Mohammed Ghazal
- Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi P.O. Box 59911, United Arab Emirates;
| | - Harpal Singh Sandhu
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA; (F.K.); (H.A.); (H.S.S.)
| | - Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA; (F.K.); (H.A.); (H.S.S.)
25
Chu Z, Wang L, Zhou X, Shi Y, Cheng Y, Laiginhas R, Zhou H, Shen M, Zhang Q, de Sisternes L, Lee AY, Gregori G, Rosenfeld PJ, Wang RK. Automatic geographic atrophy segmentation using optical attenuation in OCT scans with deep learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:1328-1343. [PMID: 35414972 PMCID: PMC8973176 DOI: 10.1364/boe.449314] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 01/29/2022] [Accepted: 01/30/2022] [Indexed: 05/22/2023]
Abstract
A deep learning algorithm was developed to automatically identify, segment, and quantify geographic atrophy (GA) based on optical attenuation coefficients (OACs) calculated from optical coherence tomography (OCT) datasets. Normal eyes and eyes with GA secondary to age-related macular degeneration were imaged with swept-source OCT using 6 × 6 mm scanning patterns. OACs calculated from OCT scans were used to generate customized composite en face OAC images. GA lesions were identified and measured using customized en face sub-retinal pigment epithelium (subRPE) OCT images. Two deep learning models with the same U-Net architecture were trained using OAC images and subRPE OCT images. Model performance was evaluated using DICE similarity coefficients (DSCs). The GA areas were calculated and compared with manual segmentations using Pearson's correlation and Bland-Altman plots. In total, 80 GA eyes and 60 normal eyes were included in this study, out of which, 16 GA eyes and 12 normal eyes were used to test the models. Both models identified GA with 100% sensitivity and specificity on the subject level. With the GA eyes, the model trained with OAC images achieved significantly higher DSCs, stronger correlation to manual results and smaller mean bias than the model trained with subRPE OCT images (0.940 ± 0.032 vs 0.889 ± 0.056, p = 0.03, paired t-test, r = 0.995 vs r = 0.959, mean bias = 0.011 mm vs mean bias = 0.117 mm). In summary, the proposed deep learning model using composite OAC images effectively and accurately identified, segmented, and quantified GA using OCT scans.
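The DICE similarity coefficient (DSC) used here to score segmentations reduces to twice the overlap of the two masks divided by their total area. A minimal numpy sketch of the metric (an illustration of the definition, not the authors' code; the epsilon guard is an assumption to avoid division by zero on empty masks):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). Ranges from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

Against this definition, the paper's 0.940 vs 0.889 comparison means the OAC-trained model's masks overlap the manual ground truth substantially more than the subRPE-trained model's do.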
Affiliation(s)
- Zhongdi Chu
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
| | - Liang Wang
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Xiao Zhou
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
| | - Yingying Shi
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Yuxuan Cheng
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
| | - Rita Laiginhas
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Hao Zhou
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
| | - Mengxi Shen
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Qinqin Zhang
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
| | - Luis de Sisternes
- Research and Development, Carl Zeiss Meditec, Inc, Dublin, California, 94568, USA
| | - Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington, 98195, USA
| | - Giovanni Gregori
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Philip J. Rosenfeld
- Department of Ophthalmology, Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida, 33136, USA
| | - Ruikang K. Wang
- Department of Bioengineering, University of Washington, Seattle, Washington, 98195, USA
- Department of Ophthalmology, University of Washington, Seattle, Washington, 98195, USA
26
Ganjdanesh A, Zhang J, Chew EY, Ding Y, Huang H, Chen W. LONGL-Net: temporal correlation structure guided deep learning model to predict longitudinal age-related macular degeneration severity. PNAS NEXUS 2022; 1:pgab003. [PMID: 35360552 PMCID: PMC8962776 DOI: 10.1093/pnasnexus/pgab003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 11/15/2021] [Indexed: 01/28/2023]
Abstract
Age-related macular degeneration (AMD) is the principal cause of blindness in developed countries, and its prevalence will increase to 288 million people by 2040. Automated grading and prediction methods can therefore be highly beneficial for recognizing subjects susceptible to late AMD and enabling clinicians to start preventive actions for them. Clinically, AMD severity is quantified with Color Fundus Photographs (CFP) of the retina, and many machine-learning-based methods have been proposed for grading AMD severity. However, few models predict the longitudinal progression status, i.e. future late-AMD risk based on the current CFP, which is more clinically interesting. In this paper, we propose a new deep-learning-based classification model (LONGL-Net) that can simultaneously grade the current CFP and predict the longitudinal outcome, i.e. whether the subject will have late AMD at a future time-point. We design a new temporal-correlation-structure-guided Generative Adversarial Network model that learns the interrelations of temporal changes in CFPs at consecutive time-points and provides interpretability for the classifier's decisions by forecasting AMD symptoms in future CFPs. We used about 30,000 CFP images from 4,628 participants in the Age-Related Eye Disease Study. Our classifier showed an average AUC of 0.905 (95% CI: 0.886-0.922) and accuracy of 0.762 (95% CI: 0.733-0.792) on the 3-class problem of simultaneously grading the current time-point's AMD condition and predicting late-AMD progression at the future time-point. We further validated our model on the UK Biobank dataset, where it showed an average accuracy of 0.905 and sensitivity of 0.797 in grading 300 CFP images.
Affiliation(s)
- Alireza Ganjdanesh
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Jipeng Zhang
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ying Ding
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Heng Huang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Wei Chen
- Department of Biostatistics, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Division of Pulmonary Medicine, Department of Pediatrics, UPMC Children's Hospital of Pittsburgh, University of Pittsburgh, Pittsburgh, PA 15219, USA
|
27
|
Potapenko I, Kristensen M, Thiesson B, Ilginis T, Lykke Sørensen T, Nouri Hajari J, Fuchs J, Hamann S, Cour M. Detection of oedema on optical coherence tomography images using deep learning model trained on noisy clinical data. Acta Ophthalmol 2022; 100:103-110. [PMID: 33991170 DOI: 10.1111/aos.14895] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 04/18/2021] [Indexed: 12/28/2022]
Abstract
PURPOSE To meet the demands imposed by the continuing growth of the age-related macular degeneration (AMD) patient population, automating follow-ups by detecting retinal oedema with deep learning might be a viable approach. However, preparing and labelling data for training is time consuming. In this study, we investigate the feasibility of training a convolutional neural network (CNN) to accurately detect retinal oedema on optical coherence tomography (OCT) images of AMD patients with labels derived directly from clinical treatment decisions, without extensive preprocessing or relabelling. METHODS A total of 50 439 OCT images with associated treatment information were retrieved from databases at the Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark, covering 01.06.2007 to 01.06.2018. A CNN was trained on the retrieved data with the recorded treatment decisions as labels and validated on a subset of the data relabelled by three ophthalmologists to denote the presence of oedema. RESULTS Moderate inter-grader agreement on the presence of oedema in the relabelled data was found (76.4%). Despite different training and validation labels, the CNN performed on par with inter-grader agreement in detecting oedema on OCT images (AUC 0.97, accuracy 90.9%) and with previously published models based on relabelled datasets. CONCLUSION The level of performance shown by the current model might make it valuable for detecting disease activity in automated AMD patient follow-up systems. Our approach demonstrates that high accuracy is not necessarily constrained by incongruent training and validation labels. These results might encourage the use of existing clinical databases for the development of deep-learning-based algorithms without labour-intensive preprocessing in the future.
Affiliation(s)
- Ivan Potapenko
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Bo Thiesson
- Enversion A/S, Aarhus, Denmark
- Department of Engineering, Aarhus University, Aarhus, Denmark
- Tomas Ilginis
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Torben Lykke Sørensen
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Department of Ophthalmology, Zealand University Hospital, Roskilde, Denmark
- Josefine Fuchs
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Steffen Hamann
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Morten Cour
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
|
28
|
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:diagnostics12010134. [PMID: 35054301 PMCID: PMC8774893 DOI: 10.3390/diagnostics12010134] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 02/04/2023] Open
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chance of misdiagnosis while saving labor and cost for physicians. With the development and growing feasibility of deep learning methods, machines are now able to interpret complex features in medical data, leading to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and to build analysis-based frameworks for identifying retinopathy and assessing its severity. This paper reviews recent state-of-the-art work utilizing color fundus images, one of the imaging modalities used in ophthalmology. Specifically, deep learning methods for the automated screening and diagnosis of diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, machine learning techniques applied to retinal vasculature extraction from fundus images are covered. The challenges in developing these systems are also discussed.
|
29
|
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
30
|
Lu L, Ren P, Tang X, Yang M, Yuan M, Yu W, Huang J, Zhou E, Lu L, He Q, Zhu M, Ke G, Han W. AI-Model for Identifying Pathologic Myopia Based on Deep Learning Algorithms of Myopic Maculopathy Classification and "Plus" Lesion Detection in Fundus Images. Front Cell Dev Biol 2021; 9:719262. [PMID: 34722502 PMCID: PMC8554089 DOI: 10.3389/fcell.2021.719262] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 09/20/2021] [Indexed: 01/24/2023] Open
Abstract
Background: Pathologic myopia (PM) associated with myopic maculopathy (MM) and “Plus” lesions is a major cause of irreversible visual impairment worldwide. Therefore, we aimed to develop a series of deep learning algorithms and artificial intelligence (AI)–models for automatic PM identification, MM classification, and “Plus” lesion detection based on retinal fundus images. Materials and Methods: Consecutive 37,659 retinal fundus images from 32,419 patients were collected. After excluding 5,649 ungradable images, a total dataset of 32,010 color retinal fundus images was manually graded for training and cross-validation according to the META-PM classification. We also retrospectively recruited 1,000 images from 732 patients from the three other hospitals in Zhejiang Province, serving as the external validation dataset. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and quadratic-weighted kappa score were calculated to evaluate the classification algorithms. The precision, recall, and F1-score were calculated to evaluate the object detection algorithms. The performance of all the algorithms was compared with the experts’ performance. To better understand the algorithms and clarify the direction of optimization, misclassification and visualization heatmap analyses were performed. Results: In five-fold cross-validation, algorithm I achieved robust performance, with accuracy = 97.36% (95% CI: 0.9697, 0.9775), AUC = 0.995 (95% CI: 0.9933, 0.9967), sensitivity = 93.92% (95% CI: 0.9333, 0.9451), and specificity = 98.19% (95% CI: 0.9787, 0.9852). The macro-AUC, accuracy, and quadratic-weighted kappa were 0.979, 96.74% (95% CI: 0.963, 0.9718), and 0.988 (95% CI: 0.986, 0.990) for algorithm II. Algorithm III achieved an accuracy of 0.9703 to 0.9941 for classifying the “Plus” lesions and an F1-score of 0.6855 to 0.8890 for detecting and localizing lesions. 
The performance metrics in the external validation dataset were comparable to those of the experts and slightly inferior to those achieved in cross-validation. Conclusion: Our algorithms and AI-models were confirmed to achieve robust performance in real-world conditions. The application of our algorithms and AI-models holds promise for facilitating large-scale clinical diagnosis and healthcare screening for PM.
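The quadratic-weighted kappa quoted for algorithm II can be reproduced from the graded labels alone using the standard observed-versus-expected formulation. The following pure-Python sketch is an illustration, not the authors' code; the function name and the assumption of integer grades 0..n_classes-1 are hypothetical.

```python
# Quadratic-weighted kappa for ordinal grading agreement.
# Illustrative sketch; grades are assumed to be integers in 0..n_classes-1.
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    n = len(y_true)
    # Observed confusion matrix between the two sets of grades
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1.0
    # Marginal histograms and the matrix expected under chance agreement
    hist_t = [sum(obs[i]) for i in range(n_classes)]
    hist_p = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    exp = [[hist_t[i] * hist_p[j] / n for j in range(n_classes)]
           for i in range(n_classes)]
    # Quadratic disagreement weights: 0 on the diagonal, 1 at the extremes
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    num = sum(w[i][j] * obs[i][j] for i in range(n_classes) for j in range(n_classes))
    den = sum(w[i][j] * exp[i][j] for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den
```

Perfect agreement yields 1; systematic disagreement toward opposite grades drives the statistic negative, which is why heavily weighted kappa values near 0.99 indicate near-identical ordinal grading.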
Affiliation(s)
- Li Lu
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Peifang Ren
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xuyuan Tang
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Ming Yang
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Minjie Yuan
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Wangshu Yu
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Jiani Huang
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Enliang Zhou
- Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Lixian Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Qin He
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Miaomiao Zhu
- Department of Ophthalmology, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Genjie Ke
- Department of Ophthalmology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Wei Han
- Department of Ophthalmology, Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
|
31
|
Sun G, Liu X, Yu X. Multi-path cascaded U-net for vessel segmentation from fundus fluorescein angiography sequential images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106422. [PMID: 34598080 DOI: 10.1016/j.cmpb.2021.106422] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Accepted: 09/13/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE The fundus fluorescein angiography (FFA) technique is widely used in the examination of retinal diseases. In the analysis of FFA sequential images, accurate vessel segmentation is a prerequisite for quantifying vascular morphology. Current vessel segmentation methods concentrate mainly on color fundus images and are limited in processing FFA sequential images, whose background and vessel appearance vary over the sequence. METHODS We proposed a multi-path cascaded U-net (MCU-net) architecture for vessel segmentation in FFA sequential images, which is capable of integrating vessel features from different image modes to improve segmentation accuracy. First, two modes of synthetic FFA images that enhance the details of small vessels and large vessels, respectively, are prepared and then used together with the raw FFA image as inputs to the MCU-net. By fusing vessel features from the three modes of FFA images, a vascular probability map is generated as the output of MCU-net. RESULTS The proposed MCU-net was trained and tested on the public Duke dataset and our own dataset of FFA sequential images, as well as on the DRIVE dataset of color fundus images. Results show that MCU-net outperforms current state-of-the-art methods in terms of F1-score, sensitivity, and accuracy, and preserves details such as thin vessels and vascular connections. It also shows good robustness in processing FFA images captured at different perfusion stages. CONCLUSIONS The proposed method can segment vessels from FFA sequential images with high accuracy and shows good robustness to FFA images at different perfusion stages. It has potential applications in the quantitative analysis of vascular morphology in FFA sequential images.
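The overlap metrics reported above (F1-score, sensitivity, accuracy) all derive from pixel-wise confusion counts between a predicted and a ground-truth vessel mask. A minimal sketch under the assumption that masks are flat 0/1 lists; `segmentation_metrics` is a hypothetical helper for illustration, not the paper's evaluation code.

```python
# Pixel-wise metrics for binary segmentation masks.
# Hypothetical helper for illustration; masks are flat 0/1 lists.
def segmentation_metrics(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    f1 = 2 * tp / (2 * tp + fp + fn)  # for binary masks, F1 equals the Dice coefficient
    sensitivity = tp / (tp + fn)      # recall on vessel pixels
    accuracy = (tp + tn) / len(truth)
    return f1, sensitivity, accuracy
```

Note that for binary masks the F1-score coincides with the Dice coefficient used elsewhere in this bibliography, so the two are directly comparable across entries.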
Affiliation(s)
- Gang Sun
- College of Electrical & Information Engineering, Hunan University
- Xiaoyan Liu
- College of Electrical & Information Engineering, Hunan University
- Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing
- Xuefei Yu
- College of Electrical & Information Engineering, Hunan University
|
32
|
Shi X, Keenan TD, Chen Q, De Silva T, Thavikulwat AT, Broadhead G, Bhandari S, Cukras C, Chew EY, Lu Z. Improving Interpretability in Machine Diagnosis. OPHTHALMOLOGY SCIENCE 2021; 1:100038. [PMID: 36247813 PMCID: PMC9559084 DOI: 10.1016/j.xops.2021.100038] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 07/02/2021] [Accepted: 07/02/2021] [Indexed: 11/28/2022]
Abstract
Purpose Manually identifying geographic atrophy (GA) presence and location on OCT volume scans can be challenging and time consuming. This study developed a deep learning model simultaneously (1) to perform automated detection of GA presence or absence from OCT volume scans and (2) to provide interpretability by demonstrating which regions of which B-scans show GA. Design Med-XAI-Net, an interpretable deep learning model, was developed to detect GA presence or absence from OCT volume scans using only volume scan labels, as well as to identify the most relevant B-scans and B-scan regions. Participants One thousand two hundred eighty-four OCT volume scans (each containing 100 B-scans) from 311 participants, including 321 volumes with GA and 963 volumes without GA. Methods Med-XAI-Net simulates the human diagnostic process by using a region-attention module to locate the most relevant region in each B-scan, followed by an image-attention module to select the most relevant B-scans for classifying GA presence or absence in each OCT volume scan. Med-XAI-Net was trained and tested (80% and 20% of participants, respectively) using gold standard volume scan labels from human expert graders. Main Outcome Measures Accuracy, area under the receiver operating characteristic (ROC) curve, F1 score, sensitivity, and specificity. Results In the detection of GA presence or absence, Med-XAI-Net obtained performance (91.5%, 93.5%, 82.3%, 82.8%, and 94.6% on accuracy, area under the ROC curve, F1 score, sensitivity, and specificity, respectively) superior to that of 2 other state-of-the-art deep learning methods. The performance of ophthalmologists grading only the 5 B-scans selected by Med-XAI-Net as most relevant (95.7%, 95.4%, 91.2%, and 100%, respectively) was almost identical to that of ophthalmologists grading all volume scans (96.0%, 95.7%, 91.8%, and 100%, respectively).
Even grading only 1 region in 1 B-scan, the ophthalmologists demonstrated moderately high performance (89.0%, 87.4%, 77.6%, and 100%, respectively). Conclusions Despite using ground truth labels during training at the volume scan level only, Med-XAI-Net was effective in locating GA in B-scans and selecting relevant B-scans within each volume scan for GA diagnosis. These results illustrate the strengths of Med-XAI-Net in interpreting which regions and B-scans contribute to GA detection in the volume scan.
|
33
|
Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li JPO, Wei L, Zhu P, Liu Y, Chen W, Ting DSW, Wong TY, Chen Y, Lin H. Application of Comprehensive Artificial intelligence Retinal Expert (CARE) system: a national real-world evidence study. LANCET DIGITAL HEALTH 2021; 3:e486-e495. [PMID: 34325853 DOI: 10.1016/s2589-7500(21)00086-8] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 04/21/2021] [Accepted: 05/07/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. METHODS In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. FINDINGS The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957-0·964 in referable diabetic retinopathy). 
INTERPRETATION Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and could therefore be implemented and adopted for clinical care. FUNDING This study was funded by the National Key R&D Programme of China, the Science and Technology Planning Projects of Guangdong Province, the National Natural Science Foundation of China, the Natural Science Foundation of Guangdong Province, and the Fundamental Research Funds for the Central Universities. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
Affiliation(s)
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jianhao Xiong
- Beijing Eaglevision Technology Development, Beijing, China
- Congxin Liu
- Beijing Eaglevision Technology Development, Beijing, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Zongyuan Ge
- Department of Electrical and Computer Systems Engineering, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Xinyue Hu
- Beijing Eaglevision Technology Development, Beijing, China
- Bin Wang
- Beijing Eaglevision Technology Development, Beijing, China
- Meng Fu
- Beijing Eaglevision Technology Development, Beijing, China
- Xin Zhao
- Beijing Eaglevision Technology Development, Beijing, China
- Xin Wang
- Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
- Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Tao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yonghao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Wenbin Wei
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Mingwei Zhao
- Department of Ophthalmology, Ophthalmology and Optometry Centre, Peking University People's Hospital, Beijing, China
- Jianqiao Li
- Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
- Gang Tan
- Department of Ophthalmology, University of South China, Hengyang, Hunan, China
- Yi Xiang
- Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Yongcheng Hu
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
- Ping Zhang
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
- Yu Han
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Lai Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Pengzhi Zhu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, Guangdong, China
- Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
- Daniel S W Ting
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yuzhong Chen
- Beijing Eaglevision Technology Development, Beijing, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
|
34
|
Takhchidi K, Gliznitsa PV, Svetozarskiy SN, Bursov AI, Shusterzon KA. Labelling of data on fundus color pictures used to train a deep learning model enhances its macular pathology recognition capabilities. BULLETIN OF RUSSIAN STATE MEDICAL UNIVERSITY 2021. [DOI: 10.24075/brsmu.2021.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Retinal diseases remain one of the leading causes of visual impairment in the world. The development of automated diagnostic methods can improve the efficiency and availability of mass screening programs for macular pathology. The objective of this work was to develop and validate deep learning algorithms detecting macular pathology (age-related macular degeneration, AMD) based on the analysis of color fundus photographs with and without data labeling. We used 1,200 color fundus photographs from local databases, including 575 retinal images of AMD patients and 625 images of healthy retinas. The deep learning algorithm was deployed in a Faster R-CNN neural network with a ResNet50 convolutional backbone, using transfer learning. Without labeling, the accuracy of the model was unsatisfactory (79%) because the neural network selected the areas of attention incorrectly. Data labeling improved the efficacy of the developed method: on the test dataset, the model identified the areas with informative features adequately, and classification accuracy reached 96.6%. Thus, image data labeling significantly improves the accuracy with which a neural network recognizes color retinal images and enables the development and training of effective models on limited datasets.
Affiliation(s)
- KhP Takhchidi
- Pirogov Russian National Research Medical University, Moscow, Russia
- PV Gliznitsa
- OOO Innovatsioonniye Tekhnologii (Innovative Technologies, LLC), Nizhny Novgorod, Russia
- SN Svetozarskiy
- Volga District Medical Center under the Federal Medical-Biological Agency, Nizhny Novgorod, Russia
- AI Bursov
- Ivannikov Institute for System Programming of RAS, Moscow, Russia
- KA Shusterzon
- L.A. Melentiev Energy Systems Institute, Irkutsk, Russia
|
35
|
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye (Lond) 2021; 36:1433-1441. [PMID: 34211137 DOI: 10.1038/s41433-021-01552-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 03/24/2021] [Accepted: 04/13/2021] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVES To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, an ensemble of multiple improved Inception-v4 networks was developed. We measured the algorithm's performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and the number of input images used in training on the model's performance, and analyzed the trade-off between training/inference time budget and model performance. RESULTS On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989-0.995), corresponding to a sensitivity of 0.925 (95% CI, 0.916-0.936) and specificity of 0.961 (95% CI, 0.950-0.972) for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936 and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a sensitivity of 0.930 (95% CI, 0.919-0.941) and specificity of 0.971 (95% CI, 0.965-0.978), whereas ophthalmologists obtained sensitivities between 0.852 and 0.946 and specificities between 0.926 and 0.985. CONCLUSION This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, with good robustness and generalization, and could potentially help support and expand DR/DMO screening programs.
|
36
|
Chen Q, Keenan TD, Allot A, Peng Y, Agrón E, Domalpally A, Klaver CCW, Luttikhuizen DT, Colyer MH, Cukras CA, Wiley HE, Teresa Magone M, Cousineau-Krieger C, Wong WT, Zhu Y, Chew EY, Lu Z. Multimodal, multitask, multiattention (M3) deep learning detection of reticular pseudodrusen: Toward automated and accessible classification of age-related macular degeneration. J Am Med Inform Assoc 2021; 28:1135-1148. [PMID: 33792724 PMCID: PMC8200273 DOI: 10.1093/jamia/ocaa302] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Accepted: 11/16/2020] [Indexed: 02/06/2023] Open
Abstract
OBJECTIVE Reticular pseudodrusen (RPD), a key feature of age-related macular degeneration (AMD), are poorly detected by human experts on standard color fundus photography (CFP) and typically require advanced imaging modalities such as fundus autofluorescence (FAF). The objective was to develop and evaluate the performance of a novel multimodal, multitask, multiattention (M3) deep learning framework for RPD detection. MATERIALS AND METHODS A deep learning framework (M3) was developed to detect RPD presence accurately using CFP alone, FAF alone, or both, employing >8000 CFP-FAF image pairs obtained prospectively (Age-Related Eye Disease Study 2). The M3 framework combines multimodal operation (detection from single or multiple image modalities), multitask learning (training different tasks simultaneously to improve generalizability), and multiattention mechanisms (improving the ensembled feature representation). Performance on RPD detection was compared with state-of-the-art deep learning models and 13 ophthalmologists; performance on the detection of 2 other AMD features (geographic atrophy and pigmentary abnormalities) was also evaluated. RESULTS For RPD detection, M3 achieved an area under the receiver-operating characteristic curve (AUROC) of 0.832, 0.931, and 0.933 for CFP alone, FAF alone, and both, respectively. On CFP, M3 was substantially superior to human retinal specialists (median F1 score = 0.644 vs 0.350). External validation (the Rotterdam Study) demonstrated high accuracy on CFP alone (AUROC, 0.965). The M3 framework also accurately detected geographic atrophy and pigmentary abnormalities (AUROC, 0.909 and 0.912, respectively), demonstrating its generalizability. CONCLUSIONS This study demonstrates the successful development, robust evaluation, and external validation of a novel deep learning framework that enables accessible, accurate, and automated AMD diagnosis and prognosis.
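The AUROC values above have a useful probabilistic reading: the chance that a randomly chosen positive image is scored higher than a randomly chosen negative one (the Mann-Whitney formulation). A minimal pure-Python sketch, not the authors' code; the `auroc` name, binary labels, and real-valued scores are illustrative assumptions.

```python
# AUROC via the Mann-Whitney U statistic: fraction of positive/negative
# pairs in which the positive receives the higher score (ties count 0.5).
# Illustrative sketch; labels are 0/1, scores are real-valued.
def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form needs no thresholding or curve integration, which makes it convenient for spot-checking reported AUROC figures against raw model scores.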
37
Dong L, Yang Q, Zhang RH, Wei WB. Artificial intelligence for the detection of age-related macular degeneration in color fundus photographs: A systematic review and meta-analysis. EClinicalMedicine 2021; 35:100875. [PMID: 34027334 PMCID: PMC8129891 DOI: 10.1016/j.eclinm.2021.100875]
Abstract
BACKGROUND Age-related macular degeneration (AMD) is one of the leading causes of vision loss in the elderly population. The application of artificial intelligence (AI) provides convenience for the diagnosis of AMD. This systematic review and meta-analysis aimed to quantify the performance of AI in detecting AMD in fundus photographs. METHODS We searched PubMed, Embase, Web of Science and the Cochrane Library before December 31st, 2020 for studies reporting the application of AI in detecting AMD in color fundus photographs. Then, we pooled the data for analysis. PROSPERO registration number: CRD42020197532. FINDINGS 19 studies were finally selected for systematic review and 13 of them were included in the quantitative synthesis. All studies adopted human graders as reference standard. The pooled area under the receiver operating characteristic curve (AUROC) was 0.983 (95% confidence interval (CI):0.979-0.987). The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were 0.88 (95% CI:0.88-0.88), 0.90 (95% CI:0.90-0.91), and 275.27 (95% CI:158.43-478.27), respectively. Threshold analysis was performed and a potential threshold effect was detected among the studies (Spearman correlation coefficient: -0.600, P = 0.030), which was the main cause for the heterogeneity. For studies applying convolutional neural networks in the Age-Related Eye Disease Study database, the pooled AUROC, sensitivity, specificity, and DOR were 0.983 (95% CI:0.978-0.988), 0.88 (95% CI:0.88-0.88), 0.91 (95% CI:0.91-0.91), and 273.14 (95% CI:130.79-570.43), respectively. INTERPRETATION Our data indicated that AI was able to detect AMD in color fundus photographs. The application of AI-based automatic tools is beneficial for the diagnosis of AMD. FUNDING Capital Health Research and Development of Special (2020-1-2052).
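The pooled diagnostic odds ratio (DOR) above is estimated from each study's 2×2 table, not from the pooled sensitivity and specificity; as a sketch of the underlying single-study definition (the counts here are hypothetical, chosen to give sensitivity 0.88 and specificity 0.90):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN): the odds of a positive test among
    the diseased divided by the odds among the non-diseased."""
    return (tp / fn) / (fp / tn)

# Hypothetical 2x2 table: 100 diseased, 100 healthy eyes.
print(diagnostic_odds_ratio(tp=88, fp=10, fn=12, tn=90))  # ≈ 66
```

Because the DOR combines both error rates into one number, meta-analyses often report it alongside sensitivity and specificity rather than instead of them.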
38
Ghahramani G, Brendel M, Lin M, Chen Q, Keenan T, Chen K, Chew E, Lu Z, Peng Y, Wang F. Multi-task deep learning-based survival analysis on the prognosis of late AMD using the longitudinal data in AREDS. AMIA Annu Symp Proc 2021; 2021:506-515. [PMID: 35308963 PMCID: PMC8861665]
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss. Some patients experience vision loss over a delayed timeframe, others at a rapid pace. Physicians analyze time-of-visit fundus photographs to predict a patient's risk of developing late AMD, the most severe form of the disease. Our study hypothesizes that 1) incorporating historical data improves the predictive strength for developing late AMD and 2) state-of-the-art deep-learning techniques extract more predictive image features than clinicians do. We incorporate longitudinal data from the Age-Related Eye Disease Studies and deep-learning-extracted image features in survival settings to predict development of late AMD. To extract image features, we used multi-task learning frameworks to train convolutional neural networks. Our findings show 1) incorporating longitudinal data improves prediction of late AMD for clinical standard features, but only the current visit is informative when using complex features, and 2) "deep features" are more informative than clinician-derived features. We make our code publicly available at https://github.com/bionlplab/AMD_prognosis_amia2021.
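Survival-style prognosis models like the one described above are commonly evaluated with Harrell's concordance index; a minimal sketch under that assumption (the paper's exact metric and data are not reproduced here, and the cohort below is invented):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (one subject has an
    observed event strictly before the other's time), the fraction
    where the higher predicted risk belongs to the earlier event
    (ties in risk count as 0.5)."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i's event precedes time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: years to late AMD, event indicator
# (1 = progressed, 0 = censored), and model risk scores.
times, events, risks = [1, 2, 3, 4], [1, 1, 0, 1], [0.9, 0.7, 0.8, 0.2]
print(concordance_index(times, events, risks))  # 0.8
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, mirroring the interpretation of AUROC in the classification setting.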
39
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1]
40
Sun J, Huang X, Egwuagu C, Badr Y, Dryden SC, Fowler BT, Yousefi S. Identifying Mouse Autoimmune Uveitis from Fundus Photographs Using Deep Learning. Transl Vis Sci Technol 2020; 9:59. [PMID: 33294300 PMCID: PMC7718814 DOI: 10.1167/tvst.9.2.59]
Abstract
Purpose To develop a deep learning model for objective evaluation of experimental autoimmune uveitis (EAU), the animal model of posterior uveitis that reveals its essential pathological features via fundus photographs. Methods We developed a deep learning construct to identify uveitis using reference mouse fundus images and further categorized the severity levels of disease into mild and severe EAU. We evaluated the performance of the model using the area under the receiver operating characteristic curve (AUC) and confusion matrices. We further assessed the clinical relevance of the model by visualizing the principal components of features at different layers and through the use of gradient-weighted class activation maps, which presented retinal regions having the most significant influence on the model. Results Our model was trained, validated, and tested on 1500 fundus images (training, 1200; validation, 150; testing, 150) and achieved an average AUC of 0.98 for identifying the normal, trace (small and local lesions), and disease classes (large and spreading lesions). The AUCs of the model using an independent subset with 180 images were 1.00 (95% confidence interval [CI], 0.99-1.00), 0.97 (95% CI, 0.94-0.99), and 0.96 (95% CI, 0.90-1.00) for the normal, trace and disease classes, respectively. Conclusions The proposed deep learning model is able to identify three severity levels of EAU with high accuracy. The model also achieved high accuracy on independent validation subsets, reflecting a substantial degree of generalizability. Translational Relevance The proposed model represents an important new tool for use in animal medical research and provides a step toward clinical uveitis identification in clinical practice.
41
Keenan TDL, Chen Q, Peng Y, Domalpally A, Agrón E, Hwang CK, Thavikulwat AT, Lee DH, Li D, Wong WT, Lu Z, Chew EY. Deep Learning Automated Detection of Reticular Pseudodrusen from Fundus Autofluorescence Images or Color Fundus Photographs in AREDS2. Ophthalmology 2020; 127:1674-1687. [PMID: 32447042 PMCID: PMC11079794 DOI: 10.1016/j.ophtha.2020.05.036]
Abstract
PURPOSE To develop deep learning models for detecting reticular pseudodrusen (RPD) using fundus autofluorescence (FAF) images or, alternatively, color fundus photographs (CFP) in the context of age-related macular degeneration (AMD). DESIGN Application of deep learning models to the Age-Related Eye Disease Study 2 (AREDS2) dataset. PARTICIPANTS FAF and CFP images (n = 11 535) from 2450 AREDS2 participants. Gold standard labels from reading center grading of the FAF images were transferred to the corresponding CFP images. METHODS A deep learning model was trained to detect RPD in eyes with intermediate to late AMD using FAF images (FAF model). Using label transfer from FAF to CFP images, a deep learning model was trained to detect RPD from CFP (CFP model). Performance was compared with 4 ophthalmologists using a random subset from the full test set. MAIN OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC), κ value, accuracy, and F1 score. RESULTS The FAF model had an AUC of 0.939 (95% confidence interval [CI], 0.927-0.950), a κ value of 0.718 (95% CI, 0.685-0.751), and accuracy of 0.899 (95% CI, 0.887-0.911). The CFP model showed corresponding values of 0.832 (95% CI, 0.812-0.851), 0.470 (95% CI, 0.426-0.511), and 0.809 (95% CI, 0.793-0.825), respectively. The FAF model demonstrated superior performance to 4 ophthalmologists, showing a higher κ value of 0.789 (95% CI, 0.675-0.875) versus a range of 0.367 to 0.756 and higher accuracy of 0.937 (95% CI, 0.907-0.963) versus a range of 0.696 to 0.933. The CFP model demonstrated substantially superior performance to 4 ophthalmologists, showing a higher κ value of 0.471 (95% CI, 0.330-0.606) versus a range of 0.105 to 0.180 and higher accuracy of 0.844 (95% CI, 0.798-0.886) versus a range of 0.717 to 0.814. CONCLUSIONS Deep learning-enabled automated detection of RPD presence from FAF images achieved a high level of accuracy, equal or superior to that of ophthalmologists. Automated RPD detection using CFP achieved a lower accuracy that still surpassed that of ophthalmologists. Deep learning models can assist, and even augment, the detection of this clinically important AMD-associated lesion.
42
Optical coherence tomography angiography in diabetic retinopathy: an updated review. Eye (Lond) 2020; 35:149-161. [PMID: 33099579 DOI: 10.1038/s41433-020-01233-y]
Abstract
Diabetic retinopathy (DR) is a common microvascular complication of diabetes mellitus. Optical coherence tomography angiography (OCTA) has been developed to visualize the retinal microvasculature and choriocapillaris based on the motion contrast of circulating blood cells. The depth-resolved ability and non-invasive nature of OCTA allow for repeated examinations and visualization of microvasculature at the retinal capillary plexuses and choriocapillaris. OCTA enables quantification of microvascular alterations in the retinal capillary network, in addition to the detection of classical features associated with DR, including microaneurysms, intraretinal microvascular abnormalities, and neovascularization. OCTA has a promising role as an objective tool for quantifying the extent of microvascular damage and identifying eyes in which diabetic macular ischaemia has contributed to visual loss. Furthermore, OCTA can identify preclinical microvascular abnormalities preceding the onset of clinically detectable DR. In this review, we focus on the applications of OCTA-derived quantitative metrics relevant to the early detection, staging, and progression of DR. Advancement of OCTA technology in clinical research will ultimately enhance the individualised management of DR and the prevention of visual impairment in patients with diabetes.
43
Arslan J, Samarasinghe G, Benke KK, Sowmya A, Wu Z, Guymer RH, Baird PN. Artificial Intelligence Algorithms for Analysis of Geographic Atrophy: A Review and Evaluation. Transl Vis Sci Technol 2020; 9:57. [PMID: 33173613 PMCID: PMC7594588 DOI: 10.1167/tvst.9.2.57]
Abstract
Purpose The purpose of this study was to summarize and evaluate artificial intelligence (AI) algorithms used in geographic atrophy (GA) diagnostic processes (e.g. isolating lesions or disease progression). Methods The search strategy and selection of publications were both conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Web of Science were searched for relevant literature. The algorithms were summarized by objective, performance, and scope of coverage of GA diagnosis (e.g. lesion automation and GA progression). Results Twenty-seven studies were identified for this review. A total of 18 publications focused on lesion segmentation only, 2 were designed to detect and classify GA, 2 were designed to predict future overall GA progression, 3 focused on prediction of future spatial GA progression, and 2 focused on prediction of visual function in GA. GA-related algorithms reported sensitivities from 0.47 to 0.98, specificities from 0.73 to 0.99, accuracies from 0.42 to 0.995, and Dice coefficients from 0.66 to 0.89. Conclusions Current GA-AI publications have a predominant focus on lesion segmentation and a minor focus on classification and progression analysis. AI could be applied to other facets of GA diagnoses, such as understanding the role of hyperfluorescent areas in GA. Using AI for GA has several advantages, including improved diagnostic accuracy and faster processing speeds. Translational Relevance AI can be used to quantify GA lesions and therefore allows one to impute visual function and quality of life. However, there is a need for the development of reliable and objective models and software to predict the rate of GA progression and to quantify improvements due to interventions.
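The Dice coefficients quoted here (0.66 to 0.89) follow the standard overlap definition for segmentation masks; a minimal sketch on toy flattened binary masks (illustrative values only):

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flattened binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * intersection / (sum(mask_a) + sum(mask_b))

# Toy 2x2 "lesion masks", flattened row by row (illustrative only).
automated = [1, 1, 1, 0]
manual    = [0, 1, 1, 1]
print(dice(automated, manual))  # ≈ 0.667
```

Unlike pixel accuracy, Dice ignores the (usually vast) background that both masks agree on, which is why it is the preferred metric for small lesions such as early GA.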
44
Chew EY. Age-related Macular Degeneration: Nutrition, Genes and Deep Learning-The LXXVI Edward Jackson Memorial Lecture. Am J Ophthalmol 2020; 217:335-347. [PMID: 32574780 PMCID: PMC8324084 DOI: 10.1016/j.ajo.2020.05.042]
Abstract
PURPOSE To evaluate the importance of nutritional supplements, dietary pattern, and genetic associations in age-related macular degeneration (AMD); and to discuss the technique of artificial intelligence/deep learning to potentially enhance research in detecting and classifying AMD. DESIGN Retrospective literature review. METHODS To review the studies of both prospective and retrospective (post hoc) analyses of nutrition, genetic variants, and deep learning in AMD in both the Age-Related Eye Disease Study (AREDS) and AREDS2. RESULTS In addition to demonstrating the beneficial effects of the AREDS and AREDS2 supplements of antioxidant vitamins and zinc (plus copper) for reducing the risk of progression to late AMD, these 2 studies also confirmed the importance of high adherence to the Mediterranean diet in reducing progression of AMD in persons with varying severity of disease. In persons with the protective genetic alleles of complement factor H (CFH), the Mediterranean diet had a further beneficial effect. However, despite the genetic association with AMD progression, prediction models found genetic information added little to the high predictive value of baseline severity of AMD for disease progression. The technique of deep learning, an arm of artificial intelligence, using color fundus photographs from AREDS/AREDS2 was superior in some cases and noninferior in others to clinical human grading (retinal specialists) and to the gold standard of the certified reading center graders. CONCLUSIONS Counseling individuals affected with AMD regarding the use of the AREDS2 supplements and the beneficial association of the Mediterranean diet is an important public health message. Although genetic testing is important in research, it is not recommended for prediction of disease or to guide therapies and/or dietary interventions in AMD. Techniques in deep learning hold great promise, but further prospective research is required to validate the use of this technique to provide improvement in accuracy and sensitivity/specificity in clinical research and medical management of patients with AMD.
45
He M, Li Z, Liu C, Shi D, Tan Z. Deployment of Artificial Intelligence in Real-World Practice: Opportunity and Challenge. Asia Pac J Ophthalmol (Phila) 2020; 9:299-307. [PMID: 32694344 DOI: 10.1097/apo.0000000000000301]
Abstract
Artificial intelligence has rapidly evolved from the experimental phase to the implementation phase in many image-driven clinical disciplines, including ophthalmology. A combination of the increasing availability of large datasets and computing power with revolutionary progress in deep learning has created unprecedented opportunities for major breakthrough improvements in the performance and accuracy of automated diagnoses that primarily focus on image recognition and feature detection. Such automated disease classification would significantly improve the accessibility, efficiency, and cost-effectiveness of eye care systems by reducing dependence on human input, potentially making diagnosis cheaper, quicker, and more consistent. Although this technology will sooner or later have a profound impact on clinical workflow and practice patterns, translating it into clinical practice is challenging and requires the same levels of accountability and effectiveness as any new medication or medical device, given the potential problems of bias and the ethical, medical, and legal issues that might arise. The objective of this review is to summarize the opportunities and challenges of this transition and to facilitate the integration of artificial intelligence (AI) into routine clinical practice based on our best understanding and experience in this area.
46
Diving Deep into Deep Learning: An Update on Artificial Intelligence in Retina. Curr Ophthalmol Rep 2020; 8:121-128. [PMID: 33224635 DOI: 10.1007/s40135-020-00240-2]
Abstract
Purpose of Review In the present article, we will provide an understanding and review of artificial intelligence in the subspecialty of retina and its potential applications within the specialty. Recent Findings Given the significant use of diagnostic imaging within retina, this subspecialty is a fitting area for the incorporation of artificial intelligence. Researchers have aimed at creating models to assist in the diagnosis and management of retinal disease as well as in the prediction of disease course and treatment response. Most of this work thus far has focused on diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity, although other retinal diseases have started to be explored as well. Summary Artificial intelligence is well-suited to transform the practice of ophthalmology. A basic understanding of the technology is important for its effective implementation and growth.
47
Keenan TD, Chew EY. Study the past if you would define the future (Confucius). Br J Ophthalmol 2020; 104:449-450. [DOI: 10.1136/bjophthalmol-2020-315890]
48
Liefers B, Colijn JM, González-Gonzalo C, Verzijden T, Wang JJ, Joachim N, Mitchell P, Hoyng CB, van Ginneken B, Klaver CCW, Sánchez CI. A Deep Learning Model for Segmentation of Geographic Atrophy to Study Its Long-Term Natural History. Ophthalmology 2020; 127:1086-1096. [PMID: 32197912 DOI: 10.1016/j.ophtha.2020.02.009]
Abstract
PURPOSE To develop and validate a deep learning model for the automatic segmentation of geographic atrophy (GA) using color fundus images (CFIs) and its application to study the growth rate of GA. DESIGN Prospective, multicenter, natural history study with up to 15 years of follow-up. PARTICIPANTS Four hundred nine CFIs of 238 eyes with GA from the Rotterdam Study (RS) and Blue Mountain Eye Study (BMES) for model development, and 3589 CFIs of 376 eyes from the Age-Related Eye Disease Study (AREDS) for analysis of GA growth rate. METHODS A deep learning model based on an ensemble of encoder-decoder architectures was implemented and optimized for the segmentation of GA in CFIs. Four experienced graders delineated, in consensus, GA in CFIs from the RS and BMES. These manual delineations were used to evaluate the segmentation model using 5-fold cross-validation. The model was applied further to CFIs from the AREDS to study the growth rate of GA. Linear regression analysis was used to study associations between structural biomarkers at baseline and the GA growth rate. A general estimate of the progression of GA area over time was made by combining growth rates of all eyes with GA from the AREDS set. MAIN OUTCOME MEASURES Automatically segmented GA and GA growth rate. RESULTS The model obtained an average Dice coefficient of 0.72±0.26 on the BMES and RS set when comparing the automatically segmented GA area with the graders' manual delineations. An intraclass correlation coefficient of 0.83 was reached between the automatically estimated GA area and the graders' consensus measures. Nine automatically calculated structural biomarkers (area, filled area, convex area, convex solidity, eccentricity, roundness, foveal involvement, perimeter, and circularity) were significantly associated with growth rate. Combining all growth rates indicated that GA area grows quadratically up to an area of approximately 12 mm2, after which the growth rate stabilizes or decreases. CONCLUSIONS The deep learning model allowed for fully automatic and robust segmentation of GA on CFIs. These segmentations can be used to extract structural characteristics of GA that predict its growth rate.
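One simple way to obtain the per-eye growth rates that the study combines is an ordinary least-squares slope of GA area against follow-up time; a minimal sketch under that assumption (the visit data below are invented for illustration, not taken from AREDS):

```python
def growth_rate(times, areas):
    """Least-squares slope of GA area (mm^2) against time (years):
    a simple per-eye growth-rate estimate."""
    n = len(times)
    mean_t = sum(times) / n
    mean_a = sum(areas) / n
    num = sum((t - mean_t) * (a - mean_a) for t, a in zip(times, areas))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Hypothetical follow-up for one eye (illustrative values only).
years = [0, 1, 2, 3]
area_mm2 = [2.0, 3.4, 5.1, 6.5]
print(growth_rate(years, area_mm2))  # ≈ 1.52 mm^2/year
```

Fitting each eye separately and then pooling the slopes, as sketched here, is what allows a study to describe population-level behavior such as the quadratic growth up to roughly 12 mm2 noted above.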
Collapse
Affiliation(s)
- Bart Liefers
- Diagnostic Image Analysis Group, Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands.
- Johanna M Colijn
- Department of Ophthalmology, Erasmus University Medical Center, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus University Medical Center, Rotterdam, The Netherlands.
- Cristina González-Gonzalo
- Diagnostic Image Analysis Group, Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands.
- Timo Verzijden
- Department of Ophthalmology, Erasmus University Medical Center, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus University Medical Center, Rotterdam, The Netherlands.
- Jie Jin Wang
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, The University of Sydney, Sydney, Australia; Health Services and Systems Research, Duke-NUS Medical School, National University of Singapore, Singapore, Republic of Singapore.
- Nichole Joachim
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, The University of Sydney, Sydney, Australia.
- Paul Mitchell
- Centre for Vision Research, Department of Ophthalmology, The Westmead Institute for Medical Research, The University of Sydney, Sydney, Australia.
- Carel B Hoyng
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands.
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands.
- Caroline C W Klaver
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Erasmus University Medical Center, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus University Medical Center, Rotterdam, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands; Institute for Molecular and Clinical Ophthalmology, Basel, Switzerland.
- Clara I Sánchez
- Diagnostic Image Analysis Group, Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands.
|
49
|
Cai L, Hinkle JW, Arias D, Gorniak RJ, Lakhani PC, Flanders AE, Kuriyan AE. Applications of Artificial Intelligence for the Diagnosis, Prognosis, and Treatment of Age-related Macular Degeneration. Int Ophthalmol Clin 2020; 60:147-168. [PMID: 33093323 DOI: 10.1097/iio.0000000000000334] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
|