1. Chen D, Han Y, Duncan J, Jia L, Shan J. Generative Artificial Intelligence Enhancements for Reducing Image-based Training Data Requirements. Ophthalmology Science 2024;4:100531. PMID: 39071920; PMCID: PMC11283142; DOI: 10.1016/j.xops.2024.100531.
Abstract
Objective Training data fuel and shape the development of artificial intelligence (AI) models. Intensive data requirements are a major bottleneck limiting the success of AI tools in sectors with inherently scarce data. In health care, training data are difficult to curate, triggering growing concerns that the current lack of access to health care among under-privileged social groups will translate into future bias in health care AIs. In this report, we developed an autoencoder to grow and enhance inherently scarce datasets and alleviate our dependence on big data. Design Computational study with open-source data. Subjects The data were obtained from 6 open-source datasets comprising patients aged 40-80 years in Singapore, China, India, and Spain. Methods The reported framework generates synthetic images based on real-world patient imaging data. As a test case, we used an autoencoder to expand publicly available training sets of optic disc photos and evaluated the ability of the resultant datasets to train AI models in the detection of glaucomatous optic neuropathy. Main Outcome Measures The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the glaucoma detector; a higher AUC indicates better detection performance. Results Enhancing datasets with synthetic images generated by the autoencoder led to superior training sets that improved the performance of AI models. Conclusions Our findings help address the increasingly untenable data volume and quality requirements for AI model development and have implications beyond health care, toward empowering AI adoption for all similarly data-challenged fields. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
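The abstract describes expanding a scarce optic disc photo dataset with autoencoder-generated synthetic images. The sketch below illustrates the general idea only, assuming a small convolutional autoencoder and latent-space perturbation for synthesis; the architecture, sizes, and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal convolutional autoencoder sketch (PyTorch). Architecture, sizes, and the
# idea of perturbing latent codes to synthesize extra training images are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 224 -> 112
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 112 -> 56
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 56 -> 28
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 28 * 28), nn.ReLU(),
            nn.Unflatten(1, (128, 28, 28)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def synthesize(model, images, noise_scale=0.05):
    """Generate synthetic variants by perturbing latent codes of real images."""
    with torch.no_grad():
        z = model.encoder(images)
        z_noisy = z + noise_scale * torch.randn_like(z)
        return model.decoder(z_noisy)

model = ConvAutoencoder()
fake_batch = torch.rand(2, 3, 224, 224)       # stand-in for optic disc photos
recon, _ = model(fake_batch)                  # reconstruction pass (training target)
synthetic = synthesize(model, fake_batch)     # augmented images for the detector
print(recon.shape, synthetic.shape)
```

In practice the autoencoder would first be trained to reconstruct the real photos, and only then used to enlarge the training set for the downstream glaucoma detector.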
Affiliation(s)
- Dake Chen: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Ying Han: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Jacque Duncan: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Lin Jia: Digillect LLC, San Francisco, California
- Jing Shan: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
2. Kraus M, Anteby R, Konen E, Eshed I, Klang E. Artificial intelligence for X-ray scaphoid fracture detection: a systematic review and diagnostic test accuracy meta-analysis. Eur Radiol 2024;34:4341-4351. PMID: 38097728; PMCID: PMC11213739; DOI: 10.1007/s00330-023-10473-x.
Abstract
OBJECTIVES Scaphoid fractures are usually diagnosed using X-rays, a low-sensitivity modality. Artificial intelligence (AI) using Convolutional Neural Networks (CNNs) has been explored for diagnosing scaphoid fractures in X-rays. The aim of this systematic review and meta-analysis is to evaluate the use of AI for detecting scaphoid fractures on X-rays and analyze its accuracy and usefulness. MATERIALS AND METHODS This study followed the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) and PRISMA-Diagnostic Test Accuracy. A literature search was conducted in the PubMed database for original articles published until July 2023. The risk of bias and applicability were evaluated using the QUADAS-2 tool. A bivariate diagnostic random-effects meta-analysis was conducted, and the results were analyzed using the Summary Receiver Operating Characteristic (SROC) curve. RESULTS Ten studies met the inclusion criteria and were all retrospective. The AI's diagnostic performance for detecting scaphoid fractures ranged from AUC 0.77 to 0.96. Seven studies were included in the meta-analysis, with a total of 3373 images. The meta-analysis pooled sensitivity and specificity were 0.80 and 0.89, respectively. The meta-analysis overall AUC was 0.88. The QUADAS-2 tool found high risk of bias and concerns about applicability in 9 out of 10 studies. CONCLUSIONS The current results of AI's diagnostic performance for detecting scaphoid fractures in X-rays show promise. The results show high overall sensitivity and specificity and a high SROC result. Further research is needed to compare AI's diagnostic performance to human diagnostic performance in a clinical setting. CLINICAL RELEVANCE STATEMENT Scaphoid fractures are prone to be missed secondary to assessment with a low sensitivity modality and a high occult fracture rate. AI systems can be beneficial for clinicians and radiologists to facilitate early diagnosis, and avoid missed injuries. KEY POINTS • Scaphoid fractures are common and some can be easily missed in X-rays. • Artificial intelligence (AI) systems demonstrate high diagnostic performance for the diagnosis of scaphoid fractures in X-rays. • AI systems can be beneficial in diagnosing both obvious and occult scaphoid fractures.
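The pooled sensitivity, specificity, and SROC reported above come from a bivariate random-effects meta-analysis. As a much-simplified, hedged illustration of how per-study accuracy can be pooled, the snippet below uses fixed-effect inverse-variance pooling on the logit scale with made-up study counts; it is not the bivariate model used in the review.

```python
# Much-simplified illustration of pooling per-study sensitivity/specificity on the
# logit scale with inverse-variance weights. The review itself used a bivariate
# random-effects model; the study counts below are made up for demonstration only.
import numpy as np

# (true positives, false negatives, true negatives, false positives) per study
studies = [(40, 10, 90, 12), (55, 15, 120, 10), (30, 5, 60, 9)]

def pooled_logit(events, totals):
    # 0.5 continuity correction avoids division by zero for extreme counts
    p = (np.array(events) + 0.5) / (np.array(totals) + 1.0)
    logit = np.log(p / (1 - p))
    var = 1 / (np.array(events) + 0.5) + 1 / (np.array(totals) - np.array(events) + 0.5)
    w = 1 / var
    pooled = np.sum(w * logit) / np.sum(w)
    return 1 / (1 + np.exp(-pooled))          # back-transform to a proportion

tp, fn, tn, fp = map(np.array, zip(*studies))
print("pooled sensitivity:", round(pooled_logit(tp, tp + fn), 3))
print("pooled specificity:", round(pooled_logit(tn, tn + fp), 3))
```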
Affiliation(s)
- Matan Kraus: Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Roi Anteby: Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of General Surgery, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Eli Konen: Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Iris Eshed: Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Eyal Klang: Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
3. Rao DP, Shroff S, Savoy FM, S S, Hsu CK, Negiloni K, Pradhan ZS, P V J, Sivaraman A, Rao HL. Evaluation of an offline, artificial intelligence system for referable glaucoma screening using a smartphone-based fundus camera: a prospective study. Eye (Lond) 2024;38:1104-1111. PMID: 38092938; PMCID: PMC11009383; DOI: 10.1038/s41433-023-02826-z.
Abstract
BACKGROUND/OBJECTIVES An affordable and scalable screening model is critical for detecting undiagnosed glaucoma. The study evaluated the performance of an offline, smartphone-based AI system for the detection of referable glaucoma against two benchmarks: specialist diagnosis following a full glaucoma workup and consensus image grading. SUBJECTS/METHODS This prospective study (tertiary glaucoma centre, India) included 243 subjects with varying severity of glaucoma and a control group without glaucoma. Disc-centred images were captured using a validated smartphone-based fundus camera, analysed by the AI system, and graded by specialists. The diagnostic ability of the AI in detecting referable glaucoma (confirmed glaucoma) versus non-referable glaucoma (suspects and no glaucoma) was evaluated against both the final diagnosis (comprehensive glaucoma workup) and the majority image grading by glaucoma specialists (pre-defined criteria). RESULTS The AI system demonstrated a sensitivity and specificity of 93.7% (95% CI: 87.6-96.9%) and 85.6% (95% CI: 78.6-90.6%), respectively, in the detection of referable glaucoma when compared against the final diagnosis following full glaucoma workup. The true negative rate in definite non-glaucoma cases was 94.7% (95% CI: 87.2-97.9%). Among the false negatives were 4 early and 3 moderate glaucoma cases. When the same set of images provided to the AI was also provided to the specialists for image grading, specialists detected 60% (67/111) of true glaucoma cases versus a detection rate of 94% (104/111) by the AI. CONCLUSION The AI tool showed robust performance when compared against a stringent benchmark, with only modest over-referral of normal subjects despite being challenged with fundus images alone. The next step involves a population-level assessment.
Affiliation(s)
- Sujani Shroff: Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Shruthi S: Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Chao-Kai Hsu: Medios Technologies Pte Ltd, Singapore, Singapore
- Jayasree P V: Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
- Harsha L Rao: Narayana Nethralaya Eye Hospital, Glaucoma Services, Bangalore, India
4. Lee DK, Choi YJ, Lee SJ, Kang HG, Park YR. Development of a deep learning model to distinguish the cause of optic disc atrophy using retinal fundus photography. Sci Rep 2024;14:5079. PMID: 38429319; PMCID: PMC10907364; DOI: 10.1038/s41598-024-55054-0.
Abstract
The differential diagnosis of optic atrophy can be challenging and requires expensive, time-consuming ancillary testing to determine the cause. While Leber's hereditary optic neuropathy (LHON) and optic neuritis (ON) are both clinically significant causes of optic atrophy, both are relatively rare in the general population, which limits the availability of large imaging datasets. This study therefore aims to develop a deep learning (DL) model based on small datasets that can distinguish the cause of optic disc atrophy using only fundus photography. We retrospectively reviewed fundus photographs of 120 normal eyes, 30 eyes (15 patients) with genetically confirmed LHON, and 30 eyes (26 patients) with ON. Images were split into a training dataset and a test dataset and used for model training with ResNet-18. To visualize the critical regions in retinal photographs that are highly associated with disease prediction, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to generate image-level attention heat maps and to enhance the interpretability of the DL system. In the 3-class classification of normal, LHON, and ON, the area under the receiver operating characteristic curve (AUROC) was 1.0 for normal, 0.988 for LHON, and 0.990 for ON, clearly differentiating each class from the others with an overall accuracy of 0.93. Specifically, when distinguishing between normal and disease cases, the precision, recall, and F1 scores were perfect at 1.0. Furthermore, in the differentiation of LHON from other conditions, ON from others, and between LHON and ON, we consistently observed precision, recall, and F1 scores of 0.8. The model performance was maintained even when only 10% of the pixel values of the image, identified as important by Grad-CAM, were preserved and the rest were masked, followed by retraining and evaluation.
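As a hedged sketch of the pipeline described (ResNet-18 fine-tuned for a 3-class normal/LHON/ON problem, with Grad-CAM attention maps), the snippet below shows one common way to wire this up with PyTorch and torchvision; the preprocessing, class order, and hook placement are assumptions, not the authors' code.

```python
# Sketch of the general recipe (not the authors' code): fine-tune ResNet-18 for a
# 3-class problem (normal / LHON / ON) and compute a Grad-CAM heatmap using
# forward/backward hooks on the last convolutional block.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)                  # weights="IMAGENET1K_V1" would load ImageNet weights
model.fc = torch.nn.Linear(model.fc.in_features, 3)    # assumed order: normal, LHON, ON
model.eval()

feats, grads = {}, {}
def fwd_hook(_, __, out): feats["v"] = out
def bwd_hook(_, __, grad_out): grads["v"] = grad_out[0]
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)                         # stand-in fundus photograph
logits = model(x)
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()                             # gradient of the predicted class score

weights = grads["v"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the gradients
cam = F.relu((weights * feats["v"]).sum(dim=1))        # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print("predicted class:", pred, "heatmap shape:", cam.shape)
```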
Affiliation(s)
- Dong Kyu Lee: Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Young Jo Choi: Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Seung Jae Lee: Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Hyun Goo Kang: Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yu Rang Park: Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Republic of Korea
5. Christopher M, Gonzalez R, Huynh J, Walker E, Radha Saseendrakumar B, Bowd C, Belghith A, Goldbaum MH, Fazio MA, Girkin CA, De Moraes CG, Liebmann JM, Weinreb RN, Baxter SL, Zangwill LM. Proactive Decision Support for Glaucoma Treatment: Predicting Surgical Interventions with Clinically Available Data. Bioengineering (Basel) 2024;11:140. PMID: 38391627; PMCID: PMC10886033; DOI: 10.3390/bioengineering11020140.
Abstract
A longitudinal ophthalmic dataset was used to investigate multi-modal machine learning (ML) models incorporating patient demographics and history, clinical measurements, optical coherence tomography (OCT), and visual field (VF) testing in predicting glaucoma surgical interventions. The cohort included 369 patients who underwent glaucoma surgery and 592 patients who did not undergo surgery. The data types used for prediction included patient demographics, history of systemic conditions, medication history, ophthalmic measurements, 24-2 VF results, and thickness measurements from OCT imaging. The ML models were trained to predict surgical interventions and evaluated on independent data collected at a separate study site. The models were evaluated based on their ability to predict surgeries at varying lengths of time prior to surgical intervention. The highest-performing predictions achieved an AUC of 0.93, 0.92, and 0.93 in predicting surgical intervention at 1 year, 2 years, and 3 years, respectively. The models were also able to achieve high sensitivity (0.89, 0.77, 0.86 at 1, 2, and 3 years, respectively) and specificity (0.85, 0.90, and 0.91 at 1, 2, and 3 years, respectively) at a 0.80 level of precision. The multi-modal models trained on a combination of data types predicted surgical interventions with high accuracy up to three years prior to surgery and could provide an important tool to predict the need for glaucoma intervention.
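The evaluation above reports AUC plus sensitivity and specificity at a fixed 0.80 precision. The sketch below shows, under stated assumptions (synthetic tabular features and a gradient-boosting classifier standing in for the study's multi-modal models), how such an operating point can be read off a precision-recall curve.

```python
# Illustrative sketch only: a gradient-boosting classifier on concatenated tabular
# features, evaluated with AUC and with sensitivity/specificity read off at the
# first threshold reaching 0.80 precision. Data and model family are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=900, n_features=40, weights=[0.6, 0.4], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))

prec, rec, thr = precision_recall_curve(y_te, scores)
idx = np.argmax(prec[:-1] >= 0.80)       # first threshold reaching 0.80 precision (index 0 if never reached)
t = thr[idx]
pred = scores >= t
sens = (pred & (y_te == 1)).sum() / (y_te == 1).sum()
spec = (~pred & (y_te == 0)).sum() / (y_te == 0).sum()
print(f"at precision>=0.80: threshold={t:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```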
Affiliation(s)
- Mark Christopher, Ruben Gonzalez, Justin Huynh, Evan Walker, Bharanidharan Radha Saseendrakumar, Christopher Bowd, Akram Belghith, Michael H Goldbaum, Robert N Weinreb, Sally L Baxter, Linda M Zangwill: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Massimo A Fazio, Christopher A Girkin: Department of Ophthalmology and Vision Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Carlos Gustavo De Moraes, Jeffrey M Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, NY 10032, USA
6. Chuter B, Huynh J, Bowd C, Walker E, Rezapour J, Brye N, Belghith A, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM, Christopher M. Deep Learning Identifies High-Quality Fundus Photographs and Increases Accuracy in Automated Primary Open Angle Glaucoma Detection. Transl Vis Sci Technol 2024;13:23. PMID: 38285462; PMCID: PMC10829806; DOI: 10.1167/tvst.13.1.23.
Abstract
Purpose To develop and evaluate a deep learning (DL) model to assess fundus photograph quality and to quantitatively measure its impact on automated primary open-angle glaucoma (POAG) detection in independent study populations. Methods Image quality ground truth was determined by manual review of 2815 fundus photographs of healthy and POAG eyes from the Diagnostic Innovations in Glaucoma Study and African Descent and Glaucoma Evaluation Study (DIGS/ADAGES), as well as 11,350 from the Ocular Hypertension Treatment Study (OHTS). Human experts assessed a photograph as high quality if of sufficient quality to determine POAG status and poor quality if not. A DL quality model was trained on photographs from DIGS/ADAGES and tested on OHTS. The effect of DL quality assessment on DL POAG detection was measured using the area under the receiver operating characteristic curve (AUROC). Results The DL quality model yielded an AUROC of 0.97 for differentiating between high- and low-quality photographs; qualitative human review affirmed high model performance. Diagnostic accuracy of the DL POAG model was significantly greater (P < 0.001) in good-quality (AUROC, 0.87; 95% CI, 0.80-0.92) compared with poor-quality photographs (AUROC, 0.77; 95% CI, 0.67-0.88). Conclusions The DL quality model was able to accurately assess fundus photograph quality. Using automated quality assessment to filter out low-quality photographs increased the accuracy of a DL POAG detection model. Translational Relevance Incorporating DL quality assessment into automated review of fundus photographs can help to decrease the burden of manual review and improve accuracy for automated DL POAG detection.
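A hedged sketch of the overall idea, filtering on a quality score before evaluating the POAG detector on good- versus poor-quality subsets, is given below with simulated scores; the threshold and variable names are assumptions, not the study's pipeline.

```python
# Sketch with assumed names, threshold, and simulated data: score every photo with a
# quality model, then compare POAG-detection AUC on good- vs poor-quality subsets.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
quality_score = rng.uniform(0, 1, n)           # output of a DL quality model (assumed)
poag_label = rng.integers(0, 2, n)             # ground-truth POAG status
# Simulated detector scores that are more informative on higher-quality images
noise = (1.2 - quality_score) * rng.normal(0, 0.6, n)
poag_score = poag_label + noise

good = quality_score >= 0.5                    # quality cut-off is an assumption
print("AUC, good-quality :", round(roc_auc_score(poag_label[good], poag_score[good]), 3))
print("AUC, poor-quality :", round(roc_auc_score(poag_label[~good], poag_score[~good]), 3))
print("AUC, all images   :", round(roc_auc_score(poag_label, poag_score), 3))
```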
Affiliation(s)
- Benton Chuter, Justin Huynh, Christopher Bowd, Evan Walker, Nicole Brye, Akram Belghith, Robert N. Weinreb, Linda M. Zangwill, Mark Christopher: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States
- Jasmin Rezapour: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California, United States; Department of Ophthalmology, University Medical Center Mainz, Germany
- Massimo A. Fazio, Christopher A. Girkin: School of Medicine, Callahan Eye Hospital, University of Alabama-Birmingham, Birmingham, Alabama, United States
- Gustavo De Moraes, Jeffrey M. Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York, United States
7. Fan R, Bowd C, Brye N, Christopher M, Weinreb RN, Kriegman DJ, Zangwill LM. One-Vote Veto: Semi-Supervised Learning for Low-Shot Glaucoma Diagnosis. IEEE Transactions on Medical Imaging 2023;42:3764-3778. PMID: 37610903; PMCID: PMC11214580; DOI: 10.1109/tmi.2023.3307689.
Abstract
Convolutional neural networks (CNNs) are a promising technique for automated glaucoma diagnosis from images of the fundus, and these images are routinely acquired as part of an ophthalmic exam. Nevertheless, CNNs typically require a large amount of well-labeled data for training, which may not be available in many biomedical image classification applications, especially when diseases are rare and where labeling by experts is costly. This article makes two contributions to address this issue: 1) It extends the conventional Siamese network and introduces a training method for low-shot learning when labeled data are limited and imbalanced, and 2) it introduces a novel semi-supervised learning strategy that uses additional unlabeled training data to achieve greater accuracy. Our proposed multi-task Siamese network (MTSN) can employ any backbone CNN, and we demonstrate with four backbone CNNs that its accuracy with limited training data approaches the accuracy of backbone CNNs trained with a dataset that is 50 times larger. We also introduce One-Vote Veto (OVV) self-training, a semi-supervised learning strategy that is designed specifically for MTSNs. By taking both self-predictions and contrastive predictions of the unlabeled training data into account, OVV self-training provides additional pseudo labels for fine-tuning a pre-trained MTSN. Using a large (imbalanced) dataset with 66,715 fundus photographs acquired over 15 years, extensive experimental results demonstrate the effectiveness of low-shot learning with MTSN and semi-supervised learning with OVV self-training. Three additional, smaller clinical datasets of fundus images acquired under different conditions (cameras, instruments, locations, populations) are used to demonstrate the generalizability of the proposed methods.
8. Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023;18:e0295123. PMID: 38033010; PMCID: PMC10688721; DOI: 10.1371/journal.pone.0295123.
Abstract
PURPOSE In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters appear across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS This prospective observational longitudinal study investigated 109 right eyes of elementary school students over 4 years (age, 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as red/(red + green + blue) (R/[R+G+B]) using the mean values of red, green, and blue intensity at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to the data in our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy rate significantly increased with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years, and 73.1% at 11.5 years. CONCLUSIONS The accuracy of sex discrimination by fundus photography improved during a 3-year cohort study of elementary school students.
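The two quantitative steps named in the methods, the tessellation fundus index R/(R+G+B) and L2-regularized logistic regression on fundus parameters, can be sketched as below; the synthetic feature table and hyperparameters are placeholders, not the study's data or settings.

```python
# Sketch of the two quantitative steps described in the abstract: (1) a tessellation
# fundus index R/(R+G+B) from mean RGB intensities of an image region, and (2) an
# L2-regularised logistic regression predicting sex from fundus parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def tessellation_index(region_rgb):
    """region_rgb: H x W x 3 array; returns mean R / (R + G + B)."""
    r = region_rgb[..., 0].mean()
    g = region_rgb[..., 1].mean()
    b = region_rgb[..., 2].mean()
    return r / (r + g + b)

patch = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)
print("tessellation index:", round(tessellation_index(patch), 3))

# 54 fundus parameters per eye, binary sex label (synthetic stand-in data)
X = np.random.randn(109, 54)
y = np.random.randint(0, 2, 109)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)   # L2-regularised
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print("cross-validated discrimination accuracy:", round(acc, 3))
```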
Affiliation(s)
- Takehiro Yamashita, Hiroto Terasaki, Naoya Yoshihara, Naoko Kakiuchi, Taiji Sakamoto: Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
- Ryo Asaoka: Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan; School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan; Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan; The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
9. Hwang EE, Chen D, Han Y, Jia L, Shan J. Multi-Dataset Comparison of Vision Transformers and Convolutional Neural Networks for Detecting Glaucomatous Optic Neuropathy from Fundus Photographs. Bioengineering (Basel) 2023;10:1266. PMID: 38002390; PMCID: PMC10669064; DOI: 10.3390/bioengineering10111266.
Abstract
Glaucomatous optic neuropathy (GON) can be diagnosed and monitored using fundus photography, a widely available and low-cost approach already adopted for automated screening of ophthalmic diseases such as diabetic retinopathy. Despite this, the lack of validated early screening approaches remains a major obstacle in the prevention of glaucoma-related blindness. Deep learning models have gained significant interest as potential solutions, as these models offer objective and high-throughput methods for processing image-based medical data. While convolutional neural networks (CNN) have been widely utilized for these purposes, more recent advances in the application of Transformer architectures have led to new models, including the Vision Transformer (ViT), that have shown promise in many domains of image analysis. However, previous comparisons of these two architectures have not sufficiently compared models side-by-side with more than a single dataset, making it unclear which model is more generalizable or performs better in different clinical contexts. Our purpose is to investigate comparable ViT and CNN models tasked with GON detection from fundus photos and highlight their respective strengths and weaknesses. We train CNN and ViT models on six unrelated, publicly available databases and compare their performance using well-established statistics including AUC, sensitivity, and specificity. Our results indicate that ViT models often show superior performance when compared with a similarly trained CNN model, particularly when non-glaucomatous images are over-represented in a given dataset. We discuss the clinical implications of these findings and suggest that ViT can further the development of accurate and scalable GON detection for this leading cause of irreversible blindness worldwide.
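A minimal sketch of such a side-by-side comparison, assuming the timm library and toy data in place of the six public datasets, is shown below; the model names and evaluation loop are illustrative, not the authors' training setup.

```python
# Side-by-side sketch (not the authors' training code): build a ViT and a CNN with
# timm and compare AUCs for a binary GON-vs-normal task on a toy batch.
import torch
import timm
from sklearn.metrics import roc_auc_score

def build(name):
    # num_classes=1 -> single logit for the "glaucomatous" class;
    # pretrained=True would load ImageNet weights in practice.
    return timm.create_model(name, pretrained=False, num_classes=1)

candidates = {"ViT": build("vit_small_patch16_224"), "CNN": build("resnet50")}

images = torch.rand(8, 3, 224, 224)                          # stand-in fundus photos
labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1]).float()      # stand-in GON labels

for tag, model in candidates.items():
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(images)).squeeze(1)
    print(tag, "AUC on this toy batch:", round(roc_auc_score(labels.numpy(), probs.numpy()), 3))
```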
Affiliation(s)
- Elizabeth E. Hwang: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA; Medical Scientist Training Program, University of California, San Francisco, San Francisco, CA 94143, USA
- Dake Chen, Ying Han, Jing Shan: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Lin Jia: Digillect LLC, San Francisco, CA 94158, USA
10. Saha S, Vignarajan J, Frost S. A fast and fully automated system for glaucoma detection using color fundus photographs. Sci Rep 2023;13:18408. PMID: 37891238; PMCID: PMC10611813; DOI: 10.1038/s41598-023-44473-0.
Abstract
This paper presents a computationally light and memory-efficient convolutional neural network (CNN)-based fully automated system for the detection of glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is localized using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified version of the original YOLO net, specific to this context, is also proposed. Extensive experiments are conducted using seven state-of-the-art CNNs with varying computational intensity, namely MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, 18-Layer CNN, and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiments. The system achieves an accuracy and F1 score of 97.4% and 97.3%, with a sensitivity, specificity, and AUC of 97.5%, 97.2%, and 99.3%, respectively. These findings are comparable with the best reported methods in the literature. With comparable or better performance, the proposed system produces significantly faster decisions and drastically reduces the resource requirement. For example, the proposed system requires 12 times less memory than ResNet50 and produces decisions 2 times faster. With its significantly smaller memory footprint and faster processing, the proposed system can be directly embedded into resource-limited devices such as portable fundus cameras.
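The two-step pipeline (disc localization, then crop classification) can be sketched roughly as below; the placeholder detector and MobileNetV2 head are assumptions standing in for the paper's simplified YOLO and MobileNet models.

```python
# Pipeline sketch under stated assumptions: stage 1 localises the optic disc
# (a placeholder box here instead of the paper's YOLO-style detector), stage 2
# classifies the cropped disc region with a MobileNet head. Not the authors' code.
import torch
from torchvision import models, transforms
from PIL import Image

def detect_disc(image):
    """Placeholder for the disc detector: returns (left, top, right, bottom)."""
    w, h = image.size
    return (w // 3, h // 3, 2 * w // 3, 2 * h // 3)   # assumed central box

classifier = models.mobilenet_v2(weights=None)         # weights="IMAGENET1K_V1" would load pretrained weights
classifier.classifier[1] = torch.nn.Linear(classifier.last_channel, 2)  # glaucomatous / non-glaucomatous
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

fundus = Image.new("RGB", (512, 512))                  # stand-in fundus photograph
crop = fundus.crop(detect_disc(fundus))                # stage 1: disc region
logits = classifier(preprocess(crop).unsqueeze(0))     # stage 2: classification
print("P(glaucomatous) =", torch.softmax(logits, dim=1)[0, 1].item())
```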
Affiliation(s)
- Sajib Saha, Janardhan Vignarajan, Shaun Frost: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
11. Gao M, Jiang H, Zhu L, Jiang Z, Geng M, Ren Q, Lu Y. Discriminative ensemble meta-learning with co-regularization for rare fundus diseases diagnosis. Med Image Anal 2023;89:102884. PMID: 37459674; DOI: 10.1016/j.media.2023.102884.
Abstract
Deep neural networks (DNNs) have been widely applied in the medical imaging community, contributing to automatic ophthalmic screening systems for some common diseases. However, the incidence of fundus disease patterns exhibits a typical long-tailed distribution. In the clinic, a small number of common fundus diseases have sufficient observed cases for large-scale analysis, while most fundus diseases are infrequent. For these rare diseases with extremely low-data regimes, it is challenging to train DNNs for automatic diagnosis. In this work, we develop an automatic diagnosis system for rare fundus diseases based on the meta-learning framework. The system incorporates a co-regularization loss and an ensemble-learning strategy into the meta-learning framework, fully leveraging the advantage of multi-scale hierarchical feature embedding. We initially conduct comparative experiments on our newly constructed lightweight multi-disease fundus image dataset for the few-shot recognition task (namely, FundusData-FS). Moreover, we verify cross-domain transferability from miniImageNet to FundusData-FS and further confirm our method's good repeatability. Rigorous experiments demonstrate that our method can detect rare fundus diseases and is superior to state-of-the-art methods. These investigations demonstrate that our method holds promising potential for real clinical practice.
Affiliation(s)
- Mengdi Gao, Lei Zhu, Zhe Jiang, Mufeng Geng, Qiushi Ren: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
- Hongyang Jiang: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Yanye Lu: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
12. Lee J, Warner E, Shaikhouni S, Bitzer M, Kretzler M, Gipson D, Pennathur S, Bellovich K, Bhat Z, Gadegbeku C, Massengill S, Perumal K, Saha J, Yang Y, Luo J, Zhang X, Mariani L, Hodgin JB, Rao A. Clustering-based spatial analysis (CluSA) framework through graph neural network for chronic kidney disease prediction using histopathology images. Sci Rep 2023;13:12701. PMID: 37543648; PMCID: PMC10404289; DOI: 10.1038/s41598-023-39591-8.
Abstract
Machine learning applied to digital pathology has been increasingly used to assess kidney function and diagnose the underlying cause of chronic kidney disease (CKD). We developed a novel computational framework, clustering-based spatial analysis (CluSA), that leverages unsupervised learning to learn spatial relationships between local visual patterns in kidney tissue. This framework minimizes the need for time-consuming and impractical expert annotations. A total of 107,471 histopathology images obtained from 172 biopsy cores were used in the clustering and in the deep learning model. To incorporate spatial information over the clustered image patterns on the biopsy sample, we spatially encoded clustered patterns with colors and performed spatial analysis through a graph neural network. A random forest classifier with various groups of features was used to predict CKD. For predicting eGFR at the biopsy, we achieved a sensitivity of 0.97, a specificity of 0.90, and an accuracy of 0.95; the AUC was 0.96. For predicting eGFR changes in one year, we achieved a sensitivity of 0.83, a specificity of 0.85, and an accuracy of 0.84; the AUC was 0.85. This study presents the first spatial analysis based on unsupervised machine learning algorithms. Without expert annotation, the CluSA framework can not only accurately classify and predict the degree of kidney function at the biopsy and in one year, but also identify novel predictors of kidney function and renal prognosis.
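A rough sketch of the cluster-then-classify portion of CluSA, assuming k-means over patch features and a random forest over per-biopsy cluster histograms, is shown below; the graph-neural-network spatial encoding described in the abstract is omitted, and all data are synthetic.

```python
# Rough sketch under assumptions: cluster patch-level feature vectors with k-means,
# describe each biopsy core by its histogram of cluster assignments, and feed those
# histograms to a random forest. The spatial/GNN step of the paper is not shown.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_biopsies, patches_per_biopsy, feat_dim, n_clusters = 120, 50, 16, 8

patch_feats = rng.normal(size=(n_biopsies * patches_per_biopsy, feat_dim))
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(patch_feats)
assignments = kmeans.labels_.reshape(n_biopsies, patches_per_biopsy)

# One histogram of visual-pattern clusters per biopsy core
X = np.stack([np.bincount(row, minlength=n_clusters) for row in assignments])
y = rng.integers(0, 2, n_biopsies)            # stand-in for low vs preserved eGFR

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", round(cross_val_score(rf, X, y, cv=5).mean(), 3))
```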
Affiliation(s)
- Joonsang Lee, Elisa Warner, Xin Zhang: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Salma Shaikhouni, Markus Bitzer, Matthias Kretzler, Subramaniam Pennathur, Laura Mariani: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Debbie Gipson: Department of Pediatrics, Pediatric Nephrology, University of Michigan, Ann Arbor, MI, USA
- Keith Bellovich: Department of Internal Medicine, Nephrology, St. Clair Nephrology Research, Detroit, MI, USA
- Zeenat Bhat: Department of Internal Medicine, Nephrology, Wayne State University, Detroit, MI, USA
- Crystal Gadegbeku: Department of Internal Medicine, Nephrology, Cleveland Clinic, Cleveland, OH, USA
- Susan Massengill: Department of Pediatrics, Pediatric Nephrology, Levine Children's Hospital, Charlotte, NC, USA
- Kalyani Perumal: Department of Internal Medicine, Nephrology, JH Stroger Hospital, Chicago, IL, USA
- Jharna Saha, Yingbao Yang, Jinghui Luo, Jeffrey B Hodgin: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Arvind Rao: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA; Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA; Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
13. Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023;12:392-401. PMID: 37523431; DOI: 10.1097/apo.0000000000000619.
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
Affiliation(s)
- Byoungyoung Gu, Sophia Sidhu, Sally L Baxter: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Robert N Weinreb, Mark Christopher, Linda M Zangwill: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
14. Velpula VK, Sharma LD. Multi-stage glaucoma classification using pre-trained convolutional neural networks and voting-based classifier fusion. Front Physiol 2023;14:1175881. PMID: 37383146; PMCID: PMC10293617; DOI: 10.3389/fphys.2023.1175881.
Abstract
Aim: To design an automated glaucoma detection system for early detection of glaucoma using fundus images. Background: Glaucoma is a serious eye problem that can cause vision loss and even permanent blindness. Early detection and prevention are crucial for effective treatment. Traditional diagnostic approaches are time consuming, manual, and often inaccurate, thus making automated glaucoma diagnosis necessary. Objective: To propose an automated glaucoma stage classification model using pre-trained deep convolutional neural network (CNN) models and classifier fusion. Methods: The proposed model utilized five pre-trained CNN models: ResNet50, AlexNet, VGG19, DenseNet-201, and Inception-ResNet-v2. The model was tested using four public datasets: ACRIMA, RIM-ONE, Harvard Dataverse (HVD), and Drishti. Classifier fusion was created to merge the decisions of all CNN models using the maximum voting-based approach. Results: The proposed model achieved an area under the curve of 1 and an accuracy of 99.57% for the ACRIMA dataset. The HVD dataset had an area under the curve of 0.97 and an accuracy of 85.43%. The accuracy rates for Drishti and RIM-ONE were 90.55% and 94.95%, respectively. The experimental results showed that the proposed model performed better than the state-of-the-art methods in classifying glaucoma in its early stages. Model interpretation included both attribution-based methods, such as activations and gradient-weighted class activation maps, and perturbation-based methods, such as locally interpretable model-agnostic explanations and occlusion sensitivity, which generate heatmaps of various sections of an image for model prediction. Conclusion: The proposed automated glaucoma stage classification model using pre-trained CNN models and classifier fusion is an effective method for the early detection of glaucoma. The results indicate high accuracy rates and superior performance compared to the existing methods.
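Maximum-voting fusion of several CNN decisions reduces to taking the most frequent predicted label per image, as in the minimal sketch below with placeholder predictions.

```python
# Minimal sketch of decision fusion by maximum (majority) voting across several
# pre-trained CNNs: each model votes for a glaucoma stage and the most frequent
# label wins. The per-model predictions below are placeholders, not study outputs.
import numpy as np

# Rows: one model each (e.g. ResNet50, AlexNet, VGG19, DenseNet-201, Inception-ResNet-v2)
# Columns: predicted stage per image (0 = normal, 1 = early, 2 = advanced)
votes = np.array([
    [0, 1, 2, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 2, 2, 0],
    [0, 2, 2, 1, 0],
    [0, 1, 2, 1, 1],
])

fused = np.array([np.bincount(votes[:, i]).argmax() for i in range(votes.shape[1])])
agreement = np.array([np.bincount(votes[:, i]).max() for i in range(votes.shape[1])])
print("fused prediction per image:", fused)
print("agreeing models per image :", agreement)
```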
15. Shoukat A, Akbar S, Hassan SA, Iqbal S, Mehmood A, Ilyas QM. Automatic Diagnosis of Glaucoma from Retinal Images Using Deep Learning Approach. Diagnostics (Basel) 2023;13:1738. PMID: 37238222; DOI: 10.3390/diagnostics13101738.
Abstract
Glaucoma is characterized by increased intraocular pressure and damage to the optic nerve, which may result in irreversible blindness. The drastic effects of this disease can be avoided if it is detected at an early stage. However, the condition is frequently detected at an advanced stage in the elderly population. Therefore, early-stage detection may save patients from irreversible vision loss. The manual assessment of glaucoma by ophthalmologists includes various skill-oriented, costly, and time-consuming methods. Several techniques are in experimental stages to detect early-stage glaucoma, but a definite diagnostic technique remains elusive. We present an automatic method based on deep learning that can detect early-stage glaucoma with very high accuracy. The detection technique involves the identification of patterns from the retinal images that are often overlooked by clinicians. The proposed approach uses the gray channels of fundus images and applies the data augmentation technique to create a large dataset of versatile fundus images to train the convolutional neural network model. Using the ResNet-50 architecture, the proposed approach achieved excellent results for detecting glaucoma on the G1020, RIM-ONE, ORIGA, and DRISHTI-GS datasets. We obtained a detection accuracy of 98.48%, a sensitivity of 99.30%, a specificity of 96.52%, an AUC of 97%, and an F1-score of 98% by using the proposed model on the G1020 dataset. The proposed model may help clinicians to diagnose early-stage glaucoma with very high accuracy for timely interventions.
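A hedged sketch of the preprocessing-plus-transfer-learning recipe described (gray-channel input, augmentation, ResNet-50) is shown below; the specific transforms and classification head are illustrative assumptions, not the paper's exact settings.

```python
# Sketch with assumed preprocessing details: convert fundus photographs to grayscale,
# apply simple augmentations to enlarge the training set, and attach a 2-class head
# to ResNet-50. Transform choices are illustrative, not the paper's recipe.
import torch
from torchvision import models, transforms
from PIL import Image

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # gray channel, kept 3-channel for ResNet input
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.resnet50(weights=None)              # weights="IMAGENET1K_V1" would load ImageNet weights
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # glaucoma / normal
model.eval()

img = Image.new("RGB", (512, 512))                 # stand-in fundus photograph
batch = train_tf(img).unsqueeze(0)                 # augmented, gray-channel input
print(model(batch).shape)                          # torch.Size([1, 2])
```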
Affiliation(s)
- Ayesha Shoukat, Shahzad Akbar, Syed Ale Hassan: Department of Computer Science, Riphah International University, Faisalabad Campus, Faisalabad 44000, Pakistan
- Sajid Iqbal, Qazi Mudassar Ilyas: Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Abid Mehmood: Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
16. Zhang L, Tang L, Xia M, Cao G. The application of artificial intelligence in glaucoma diagnosis and prediction. Front Cell Dev Biol 2023;11:1173094. PMID: 37215077; PMCID: PMC10192631; DOI: 10.3389/fcell.2023.1173094.
Abstract
Artificial intelligence is a multidisciplinary and collaborative science, and the capacity of deep learning for image feature extraction and processing gives it a unique advantage in addressing problems in ophthalmology. Deep learning systems can assist ophthalmologists in diagnosing characteristic fundus lesions in glaucoma, such as retinal nerve fiber layer defects, optic nerve head damage, and optic disc hemorrhage. Early detection of these lesions can help delay structural damage, protect visual function, and reduce visual field loss. The development of deep learning has led to the emergence of deep convolutional neural networks, which are pushing the integration of artificial intelligence with testing devices such as visual field meters, fundus imaging, and optical coherence tomography, driving more rapid advances in clinical glaucoma diagnosis and prediction techniques. This article details advances in artificial intelligence combined with visual field testing, fundus photography, and optical coherence tomography in the field of glaucoma diagnosis and prediction, some of which are familiar and some not widely known. It then explores the challenges at this stage and the prospects for future clinical applications. In the future, deep cooperation between artificial intelligence and medical technology will make datasets and clinical application rules more standardized, and glaucoma diagnosis and prediction tools will become simpler and more unified, benefiting multiple ethnic groups.
Affiliation(s)
- Linyu Zhang, Min Xia, Guofan Cao: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China; The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Li Tang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
17. Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023;13:168-183. PMID: 37484617; PMCID: PMC10361424; DOI: 10.4103/tjo.tjo-d-23-00022.
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and forecast disease trajectory based on analysis of clinical and imaging data. Techniques such as machine learning, natural language processing, and deep learning have been employed for this purpose. The results from studies using AI to forecast glaucoma progression, however, vary considerably due to dataset constraints, the lack of a standard progression definition, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, in this narrative review we focus on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies with translational potential, and provide suggestions on how future research that addresses glaucoma progression can be improved.
Collapse
Affiliation(s)
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Linh Le Dinh
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
| | - Raghavan Lavanya
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Yong Liu
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| |
Collapse
|
18
|
Lee YJ, Sun S, Kim YK, Jeoung JW, Park KH. Diagnostic ability of macular microvasculature with swept-source OCT angiography for highly myopic glaucoma using deep learning. Sci Rep 2023; 13:5209. [PMID: 36997639 PMCID: PMC10063664 DOI: 10.1038/s41598-023-32164-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 03/23/2023] [Indexed: 04/01/2023] Open
Abstract
Macular OCT angiography (OCTA) measurements have been reported to be useful for glaucoma diagnostics. However, research on highly myopic glaucoma is lacking, and the diagnostic value of macular OCTA measurements versus OCT parameters remains inconclusive. We aimed to evaluate the diagnostic ability of the macular microvasculature assessed with OCTA for highly myopic glaucoma and to compare it with that of macular thickness parameters, using deep learning (DL). A DL model was trained, validated and tested using 260 pairs of macular OCTA and OCT images from 260 eyes (203 eyes with highly myopic glaucoma, 57 eyes with healthy high myopia). The DL model achieved an AUC of 0.946 with the OCTA superficial capillary plexus (SCP) images, which was comparable to that with the OCT GCL+ (ganglion cell layer + inner plexiform layer; AUC, 0.982; P = 0.268) or OCT GCL++ (retinal nerve fiber layer + ganglion cell layer + inner plexiform layer) images (AUC, 0.997; P = 0.101), and significantly superior to that with the OCTA deep capillary plexus images (AUC, 0.779; P = 0.028). The DL model with macular OCTA SCP images demonstrated excellent and comparable diagnostic ability to that with macular OCT images in highly myopic glaucoma, which suggests macular OCTA microvasculature could serve as a potential biomarker for glaucoma diagnosis in high myopia.
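To make the kind of AUC comparison reported above concrete, the following is a minimal, hedged sketch (not the authors' code): it scores two hypothetical models on the same test eyes with scikit-learn and gauges the AUC difference with a paired bootstrap, whereas the study may have used a DeLong-type comparison. All arrays are synthetic placeholders.

```python
# Minimal sketch (not the authors' code): AUC for two models on the same eyes,
# plus a paired bootstrap for the AUC difference. All arrays are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                                   # 1 = glaucoma, 0 = healthy
scores_scp = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)   # hypothetical OCTA SCP model
scores_gcl = np.clip(y_true * 0.7 + rng.normal(0.25, 0.20, 200), 0, 1)  # hypothetical OCT GCL+ model

print("AUC (SCP):", round(roc_auc_score(y_true, scores_scp), 3))
print("AUC (GCL+):", round(roc_auc_score(y_true, scores_gcl), 3))

# Paired bootstrap over eyes; the study may have used a DeLong-type test instead.
diffs = []
n = len(y_true)
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    if len(np.unique(y_true[idx])) < 2:   # both classes must appear in the resample
        continue
    diffs.append(roc_auc_score(y_true[idx], scores_gcl[idx])
                 - roc_auc_score(y_true[idx], scores_scp[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: ({lo:.3f}, {hi:.3f})")
```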
Collapse
Affiliation(s)
- Yun Jeong Lee
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
| | - Sukkyu Sun
- Biomedical Research Institute, Seoul National University Hospital, Seoul, Korea
| | - Young Kook Kim
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
| | - Jin Wook Jeoung
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
| | - Ki Ho Park
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea.
| |
Collapse
|
19
|
Jiang X, Xie M, Ma L, Dong L, Li D. International publication trends in the application of artificial intelligence in ophthalmology research: an updated bibliometric analysis. ANNALS OF TRANSLATIONAL MEDICINE 2023; 11:219. [PMID: 37007552 PMCID: PMC10061466 DOI: 10.21037/atm-22-3773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 12/18/2022] [Indexed: 03/17/2023]
Abstract
Background The literature on artificial intelligence (AI)-related topics has been expanding rapidly over the last two decades, showing that AI is a crucial force in advancing ophthalmology. This analysis aims to provide a dynamic and longitudinal bibliometric analysis of AI-related ophthalmic papers. Methods The Web of Science was searched to retrieve papers regarding the application of AI in ophthalmology published in the English language up to May 2022. The variables were analyzed using Microsoft Excel 2019 and GraphPad Prism 9. Data visualization was performed using VOSviewer and CiteSpace. Results In this study, a total of 1,686 publications were analyzed. Recently, AI-related ophthalmology research has increased exponentially. China was the most productive country in this research field, with 483 articles, but the United States of America (446 publications) contributed most to the sum of citations and the H-index. The League of European Research Universities and Daniel S. W. Ting (Ting DSW) were the most prolific institution and researcher, respectively. This field is primarily concerned with diabetic retinopathy (DR), glaucoma, optical coherence tomography, and the classification and diagnosis of fundus images. Current hotspots in AI research include deep learning, diagnosing and predicting systemic disorders from fundus images, incidence and progression of ocular diseases, and outcome prediction. Conclusions This analysis thoroughly reviews AI-related research in ophthalmology to help academics better comprehend the growth and possible practical consequences of AI. The association between ocular and systemic biomarkers, telemedicine, real-world studies, and the development and application of new AI algorithms, such as vision transformers, will continue to be research hotspots over the next few years.
Collapse
Affiliation(s)
- Xue Jiang
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
| | - Minyue Xie
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
| | - Lan Ma
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
| | - Dongmei Li
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
| |
Collapse
|
20
|
Lemij HG, de Vente C, Sánchez CI, Vermeer KA. Characteristics of a large, labeled dataset for the training of artificial intelligence for glaucoma screening with fundus photographs. OPHTHALMOLOGY SCIENCE 2023; 3:100300. [PMID: 37113471 PMCID: PMC10127130 DOI: 10.1016/j.xops.2023.100300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 02/12/2023] [Accepted: 03/13/2023] [Indexed: 03/19/2023]
Abstract
Purpose Significant visual impairment due to glaucoma is largely caused by the disease being detected too late. Objective To build a labeled data set for training artificial intelligence (AI) algorithms for glaucoma screening by fundus photography, to assess the accuracy of the graders, and to characterize the features of all eyes with referable glaucoma (RG). Design Cross-sectional study. Subjects Color fundus photographs (CFPs) of 113 893 eyes of 60 357 individuals were obtained from EyePACS, California, United States, from a population screening program for diabetic retinopathy. Methods Carefully selected graders (ophthalmologists and optometrists) graded the images. To qualify, they had to pass the European Optic Disc Assessment Trial optic disc assessment with ≥ 85% accuracy and 92% specificity. Of 90 candidates, 30 passed. Each image of the EyePACS set was then scored by varying random pairs of graders as "RG," "no referable glaucoma (NRG)," or "ungradable (UG)." In case of disagreement, a glaucoma specialist made the final grading. Referable glaucoma was scored if visual field damage was expected. In case of RG, graders were instructed to mark up to 10 relevant glaucomatous features. Main Outcome Measures Qualitative features in eyes with RG. Results The performance of each grader was monitored; if the sensitivity and specificity dropped below 80% and 95%, respectively (the final grade served as reference), they exited the study and their gradings were redone by other graders. In all, 20 graders qualified; their mean sensitivity and specificity (standard deviation [SD]) were 85.6% (5.7) and 96.1% (2.8), respectively. The 2 graders agreed in 92.45% of the images (Gwet's AC2, expressing the inter-rater reliability, was 0.917). Of all gradings, the sensitivity and specificity (95% confidence interval) were 86.0 (85.2-86.7)% and 96.4 (96.3-96.5)%, respectively. Of all gradable eyes (n = 111 183; 97.62%) the prevalence of RG was 4.38%. The most common features of RG were the appearance of the neuroretinal rim (NRR) inferiorly and superiorly. Conclusions A large data set of CFPs was put together of sufficient quality to develop AI screening solutions for glaucoma. The most common features of RG were the appearance of the NRR inferiorly and superiorly. Disc hemorrhages were a rare feature of RG. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
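As a small illustration of the grading quality control described above, the sketch below computes one grader's sensitivity and specificity against the adjudicated (final) grade and the raw agreement between a grader pair; Gwet's AC2, which the study reports, is not implemented here, and all labels are invented placeholders.

```python
# Minimal sketch: one grader's sensitivity/specificity against the adjudicated
# grade, and raw agreement for a grader pair. Labels are invented, not EyePACS data.
import numpy as np

final    = np.array(["RG", "NRG", "NRG", "RG", "NRG", "NRG", "RG", "NRG"])  # adjudicated grade
grader_a = np.array(["RG", "NRG", "RG",  "RG", "NRG", "NRG", "NRG", "NRG"])
grader_b = np.array(["RG", "NRG", "RG",  "RG", "NRG", "NRG", "RG",  "NRG"])

tp = np.sum((grader_a == "RG") & (final == "RG"))
fn = np.sum((grader_a == "NRG") & (final == "RG"))
tn = np.sum((grader_a == "NRG") & (final == "NRG"))
fp = np.sum((grader_a == "RG") & (final == "NRG"))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
agreement = np.mean(grader_a == grader_b)   # raw agreement; Gwet's AC2 additionally corrects for chance

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, pair agreement={agreement:.2f}")
# In the study, graders falling below 80% sensitivity or 95% specificity were replaced.
```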
Collapse
|
21
|
Kamalipour A, Moghimi S, Khosravi P, Mohammadzadeh V, Nishida T, Micheletti E, Wu JH, Mahmoudinezhad G, Li EHF, Christopher M, Zangwill L, Javidi T, Weinreb RN. Combining Optical Coherence Tomography and Optical Coherence Tomography Angiography Longitudinal Data for the Detection of Visual Field Progression in Glaucoma. Am J Ophthalmol 2023; 246:141-154. [PMID: 36328200 DOI: 10.1016/j.ajo.2022.10.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 10/14/2022] [Accepted: 10/15/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE To use longitudinal optical coherence tomography (OCT) and OCT angiography (OCTA) data to detect glaucomatous visual field (VF) progression with a supervised machine learning approach. DESIGN Prospective cohort study. METHODS One hundred ten eyes of patients with suspected glaucoma (33.6%) and patients with glaucoma (66.4%) with a minimum of 5 24-2 VF tests and 3 optic nerve head and macula images over an average follow-up duration of 4.1 years were included. VF progression was defined using a composite measure including either a "likely progression event" on Guided Progression Analysis, a statistically significant negative slope of VF mean deviation or VF index, or a positive pointwise linear regression event. Feature-based gradient boosting classifiers were developed using different subsets of baseline and longitudinal OCT and OCTA summary parameters. The area under the receiver operating characteristic curve (AUROC) was used to compare the classification performance of different models. RESULTS VF progression was detected in 28 eyes (25.5%). The model with combined baseline and longitudinal OCT and OCTA parameters at the global and hemifield levels had the best classification accuracy to detect VF progression (AUROC = 0.89). Models including combined OCT and OCTA parameters had higher classification accuracy compared with those with individual subsets of OCT or OCTA features alone. Including hemifield measurements significantly improved the models' classification accuracy compared with using global measurements alone. Including longitudinal rates of change of OCT and OCTA parameters (AUROCs = 0.80-0.89) considerably increased the classification accuracy of the models with baseline measurements alone (AUROCs = 0.60-0.63). CONCLUSIONS Longitudinal OCTA measurements complement OCT-derived structural metrics for the evaluation of functional VF loss in patients with glaucoma.
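The modelling approach described above (feature-based gradient boosting on tabular OCT/OCTA summary measures) can be sketched as follows; the feature names, synthetic values, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch: gradient boosting on tabular OCT/OCTA summary features
# (baseline values plus rates of change). All values are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_eyes = 110
# Hypothetical columns: baseline RNFL, baseline vessel density, and their slopes.
X = np.column_stack([
    rng.normal(85, 10, n_eyes),     # baseline RNFL thickness (um)
    rng.normal(45, 5, n_eyes),      # baseline capillary density (%)
    rng.normal(-0.5, 0.4, n_eyes),  # RNFL rate of change (um/yr)
    rng.normal(-0.3, 0.3, n_eyes),  # vessel-density rate of change (%/yr)
])
y = (X[:, 2] + X[:, 3] + rng.normal(0, 0.4, n_eyes) < -0.8).astype(int)  # 1 = VF progression

clf = GradientBoostingClassifier(n_estimators=200, max_depth=2, learning_rate=0.05)
aurocs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUROC = {aurocs.mean():.2f} +/- {aurocs.std():.2f}")
```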
Collapse
Affiliation(s)
- Alireza Kamalipour
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Sasan Moghimi
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Pooya Khosravi
- School of Medicine (P.K.), University of California, Irvine, Irvine, California, USA
| | - Vahid Mohammadzadeh
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Takashi Nishida
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Eleonora Micheletti
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Jo-Hsuan Wu
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Golnoush Mahmoudinezhad
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Elizabeth H F Li
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Mark Christopher
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Linda Zangwill
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology
| | - Tara Javidi
- Department of Electrical and Computer Engineering (T.J.), University of California San Diego, La Jolla
| | - Robert N Weinreb
- From the Hamilton Glaucoma (A.K., S.M., V.M., T.N., E.M., J-H.W., G.M., E.H.F.L., M.C., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology.
| |
Collapse
|
22
|
Precision Medicine in Glaucoma: Artificial Intelligence, Biomarkers, Genetics and Redox State. Int J Mol Sci 2023; 24:ijms24032814. [PMID: 36769127 PMCID: PMC9917798 DOI: 10.3390/ijms24032814] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/07/2023] [Accepted: 01/18/2023] [Indexed: 02/05/2023] Open
Abstract
Glaucoma is a multifactorial neurodegenerative illness requiring early diagnosis and strict monitoring of the disease progression. Current exams for diagnosis and prognosis are based on clinical examination, intraocular pressure (IOP) measurements, visual field tests, and optical coherence tomography (OCT). In this scenario, there is a critical unmet demand for glaucoma-related biomarkers to enhance clinical testing for early diagnosis and tracking of the disease's development. The introduction of validated biomarkers would allow for prompt intervention in the clinic to help with prognosis prediction and treatment response monitoring. This review aims to report the latest acquisitions on biomarkers in glaucoma, from imaging analysis to genetics and metabolic markers.
Collapse
|
23
|
Lin YC, Lin Y, Huang YL, Ho CY, Chiang HJ, Lu HY, Wang CC, Wang JJ, Ng SH, Lai CH, Lin G. Generalizable transfer learning of automated tumor segmentation from cervical cancers toward a universal model for uterine malignancies in diffusion-weighted MRI. Insights Imaging 2023; 14:14. [PMID: 36690870 PMCID: PMC9871146 DOI: 10.1186/s13244-022-01356-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 12/04/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE To investigate the generalizability of transfer learning (TL) of automated tumor segmentation from cervical cancers toward a universal model for cervical and uterine malignancies in diffusion-weighted magnetic resonance imaging (DWI). METHODS In this retrospective multicenter study, we analyzed pelvic DWI data from 169 and 320 patients with cervical and uterine malignancies and divided them into the training (144 and 256) and testing (25 and 64) datasets, respectively. A pretrained model was established using DeepLab V3+ on the cervical cancer dataset, followed by TL experiments adjusting the training data sizes and the fine-tuned layers. Model performance was evaluated using the Dice similarity coefficient (DSC). RESULTS In predicting tumor segmentation for all cervical and uterine malignancies, TL models improved on the pretrained cervical model (DSC 0.43) when 5, 13, 26, and 51 uterine cases were added for training (DSC improved to 0.57, 0.62, 0.68, and 0.70, respectively; p < 0.001). Following the crossover when 128 cases were added (DSC 0.71), the model trained on the combined data from all 256 uterine patients exhibited the highest DSCs for the combined cervical and uterine datasets (DSC 0.81) and the cervical-only dataset (DSC 0.91). CONCLUSIONS TL may improve the generalizability of automated tumor segmentation in DWI from a specific cancer type toward multiple types of uterine malignancies, especially when case numbers are limited.
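Two ingredients of the approach above are easy to illustrate: the Dice similarity coefficient used for evaluation, and the transfer-learning idea of freezing pretrained layers while fine-tuning the rest. The sketch below assumes binary masks and uses torchvision's DeepLabV3 (not the DeepLab V3+ of the paper) purely to show the freezing step; it is not the study's pipeline.

```python
# Minimal sketch: Dice similarity coefficient for binary masks, plus the
# transfer-learning step of freezing a backbone. torchvision's DeepLabV3 is
# used only to illustrate freezing; the paper used DeepLab V3+.
import torch
import torchvision

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """DSC = 2 * |A intersect B| / (|A| + |B|) for same-shaped binary masks."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().item()
    return (2.0 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)

pred = torch.zeros(128, 128); pred[40:90, 40:90] = 1
gt = torch.zeros(128, 128); gt[50:100, 50:100] = 1
print(f"DSC = {dice_coefficient(pred, gt):.3f}")

# Freeze the backbone so that only the segmentation head is updated during fine-tuning.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights=None, num_classes=2)
for p in model.backbone.parameters():
    p.requires_grad = False
```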
Collapse
Affiliation(s)
- Yu-Chun Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan 33302, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Yenpo Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Yen-Ling Huang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Chih-Yi Ho
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Hsin-Ju Chiang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Hsin-Ying Lu
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Chun-Chieh Wang
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan 33302, Taiwan; Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Jiun-Jie Wang
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan 33302, Taiwan
| | - Shu-Hang Ng
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Chyong-Huey Lai
- Gynecologic Cancer Research Center, Department of Obstetrics and Gynecology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| | - Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Keelung, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan; Gynecologic Cancer Research Center, Department of Obstetrics and Gynecology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 33382, Taiwan
| |
Collapse
|
24
|
Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023; 13:diagnostics13010160. [PMID: 36611452 PMCID: PMC9818957 DOI: 10.3390/diagnostics13010160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/27/2022] [Accepted: 12/28/2022] [Indexed: 01/05/2023] Open
Abstract
The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), either by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we have developed, trained, and tested a DLS, using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard in image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model, evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS identified with excellent performance "good" quality photographs (AUC = 0.93 (95% CI, 0.91-0.95), accuracy = 91.4% (95% CI, 90.0-92.9%), sensitivity = 93.8% (95% CI, 92.5-95.2%), specificity = 75.9% (95% CI, 69.7-82.1%)) and "poor" quality photographs (AUC = 1.00 (95% CI, 0.99-1.00), accuracy = 99.1% (95% CI, 98.6-99.6%), sensitivity = 81.5% (95% CI, 70.6-93.8%), specificity = 99.7% (95% CI, 99.6-100.0%)). "Borderline" quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88-0.93), accuracy = 90.6% (95% CI, 89.1-92.2%), sensitivity = 65.4% (95% CI, 56.6-72.9%), specificity = 93.4% (95% CI, 92.1-94.8%)). The overall accuracy in distinguishing among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
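A one-vs-rest evaluation of a three-class quality model, as described above, can be sketched as follows; the class probabilities are synthetic placeholders rather than outputs of the BONSAI system.

```python
# Minimal sketch: one-vs-rest AUC and overall accuracy for a 3-class
# image-quality model. Probabilities are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

classes = ["good", "borderline", "poor"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=300)

# Fake softmax outputs biased toward the true class.
logits = rng.normal(0, 1, (300, 3))
logits[np.arange(300), y_true] += 2.0
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

for k, name in enumerate(classes):
    auc_k = roc_auc_score((y_true == k).astype(int), probs[:, k])  # one class vs the rest
    print(f"{name}: AUC = {auc_k:.3f}")

y_pred = probs.argmax(axis=1)
print(f"overall accuracy = {accuracy_score(y_true, y_pred):.3f}")
```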
Collapse
Affiliation(s)
- Ebenezer Chan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
| | - Zhiqun Tang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
| | - Raymond P. Najjar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
| | - Arun Narayanaswamy
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
| | | | - Nancy J. Newman
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
| | - Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
| | - Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
- Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore
| | | |
Collapse
|
25
|
Ji Y, Liu S, Hong X, Lu Y, Wu X, Li K, Li K, Liu Y. Advances in artificial intelligence applications for ocular surface diseases diagnosis. Front Cell Dev Biol 2022; 10:1107689. [PMID: 36605721 PMCID: PMC9808405 DOI: 10.3389/fcell.2022.1107689] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 01/07/2023] Open
Abstract
In recent years, with the rapid development of computer technology, continual optimization of various learning algorithms and architectures, and establishment of numerous large databases, artificial intelligence (AI) has been unprecedentedly developed and applied in the field of ophthalmology. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucoma optic neuropathy. Meanwhile, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium. We discuss the limitations and challenges of AI in the diagnosis of ocular surface diseases, as well as prospects for the future.
Collapse
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Yi Lu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Xingyang Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
| |
Collapse
|
26
|
Hung KH, Kao YC, Tang YH, Chen YT, Wang CH, Wang YC, Lee OKS. Application of a deep learning system in glaucoma screening and further classification with colour fundus photographs: a case control study. BMC Ophthalmol 2022; 22:483. [PMID: 36510171 PMCID: PMC9743575 DOI: 10.1186/s12886-022-02730-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Accepted: 12/06/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND To verify the efficacy of automatic glaucoma screening and classification with a deep learning system. METHODS A cross-sectional, retrospective study in a tertiary referral hospital. Patients with a healthy optic disc, high-tension glaucoma, or normal-tension glaucoma were enrolled; complicated non-glaucomatous optic neuropathy was excluded. Colour and red-free fundus images were collected for development of the DLS and comparison of their efficacy. A convolutional neural network based on the pre-trained EfficientNet-b0 model was selected for machine learning. Glaucoma screening (binary) and ternary classification, with or without additional demographics (age, gender, high myopia), were evaluated, followed by construction of confusion matrices and heatmaps. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score served as the main outcome measures. RESULTS Two hundred and twenty-two cases (421 eyes) were enrolled, with 1851 images in total (1207 normal and 644 glaucomatous discs). The training and test sets comprised 1539 and 312 images, respectively. Without demographic data, the AUC, accuracy, precision, sensitivity, F1 score, and specificity of the deep learning system for eye-based glaucoma screening in the test set were 0.98, 0.91, 0.86, 0.86, 0.86, and 0.94, respectively. The same outcome measures for eye-based ternary classification without demographic data were 0.94, 0.87, 0.87, 0.87, 0.87, and 0.94, respectively. Adding demographics had no significant impact on efficacy, but establishing a linkage between eyes and images improved performance. The confusion matrices and heatmaps suggested that retinal lesions and photograph quality could affect classification. Colour fundus images played a greater role in glaucoma classification than red-free fundus images. CONCLUSIONS Promising results with high AUC and specificity were shown in distinguishing normal optic nerves from glaucomatous fundus images and in further classification.
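The transfer-learning setup described above (a pretrained EfficientNet-b0 adapted to a ternary fundus classification head) might look roughly like the sketch below; the torchvision weights API (version 0.13 or later) and the three-class head are assumptions, and this is not the authors' code.

```python
# Minimal sketch: pretrained EfficientNet-b0 with a 3-class head (normal /
# normal-tension glaucoma / high-tension glaucoma). Assumes torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
in_features = model.classifier[1].in_features          # 1280 for EfficientNet-b0
model.classifier[1] = nn.Linear(in_features, 3)        # replace the ImageNet head with a ternary head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dummy = torch.randn(4, 3, 224, 224)    # stand-in batch of preprocessed fundus photos
print(model(dummy).shape)              # torch.Size([4, 3])
```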
Collapse
Affiliation(s)
- Kuo-Hsuan Hung
- Department of Ophthalmology, Chang-Gung Memorial Hospital, Linkou, No.5, Fu-Hsing St., Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan; Chang-Gung University College of Medicine, No.259 Wen-Hwa 1st Road, Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan; Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan, ROC
| | - Yu-Ching Kao
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
| | - Yu-Hsuan Tang
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan, ROC
| | - Yi-Ting Chen
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
| | - Chuen-Heng Wang
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
| | - Yu-Chen Wang
- Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
| | - Oscar Kuang-Sheng Lee
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan, ROC; Stem Cell Research Centre, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Orthopedics, China Medical University Hospital, Taichung, Taiwan
| |
Collapse
|
27
|
Duong-Trung N, Born S, Kim JW, Schermeyer MT, Paulick K, Borisyak M, Cruz-Bournazou MN, Werner T, Scholz R, Schmidt-Thieme L, Neubauer P, Martinez E. When Bioprocess Engineering Meets Machine Learning: A Survey from the Perspective of Automated Bioprocess Development. Biochem Eng J 2022. [DOI: 10.1016/j.bej.2022.108764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
28
|
Kaothanthong N, Atsavasirilert K, Sarampakhul S, Chantangphol P, Songsaeng D, Makhanov S. Artificial intelligence for localization of the acute ischemic stroke by non-contrast computed tomography. PLoS One 2022; 17:e0277573. [PMID: 36454916 PMCID: PMC9714826 DOI: 10.1371/journal.pone.0277573] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Accepted: 10/29/2022] [Indexed: 12/03/2022] Open
Abstract
Non-contrast cranial computed tomography (ncCT) is often employed for the diagnosis of early-stage ischemic stroke, but the number of false negatives is high. More accurate results are obtained with MRI; however, MRI is not available in every hospital, and even where it is available for routine examinations, it is often not accessible in the emergency setting. Therefore, this paper proposes an end-to-end framework for detection and segmentation of the brain infarct on ncCT, with computed tomography perfusion (CTp) used as the ground truth. The proposed ensemble model employs three deep convolutional neural networks (CNNs) to process three end-to-end feature maps together with hand-crafted features characterized by specific contra-lateral comparisons. To improve the accuracy of the detected infarct area, the spatial dependencies between neighboring slices are exploited in a postprocessing step. Numerical experiments were performed on 18 ncCT-CTp paired stroke cases (804 image pairs), with the leave-one-out approach applied to evaluate the proposed method. The model achieves 91.16% accuracy, 65.15% precision, 77.44% recall, 69.97% F1 score, and 0.4536 IoU.
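The segmentation metrics reported above can be reproduced from binary masks with a few lines of NumPy; the masks below are synthetic stand-ins, not study data.

```python
# Minimal sketch: pixel-wise accuracy, precision, recall, F1 and IoU for a
# predicted infarct mask versus a ground-truth mask (both synthetic here).
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "iou": tp / (tp + fp + fn) if tp + fp + fn else 0.0,
    }

gt = np.zeros((256, 256)); gt[100:140, 90:150] = 1
pred = np.zeros((256, 256)); pred[105:150, 95:160] = 1
print({k: round(v, 3) for k, v in mask_metrics(pred, gt).items()})
```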
Collapse
Affiliation(s)
- Natsuda Kaothanthong
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
| | - Kamin Atsavasirilert
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
| | - Soawapot Sarampakhul
- Division of Diagnostic Radiology, Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
| | - Pantid Chantangphol
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
| | - Dittapong Songsaeng
- Division of Diagnostic Radiology, Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
| | - Stanislav Makhanov
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
| |
Collapse
|
29
|
Primary Open-Angle Glaucoma Diagnosis From Optic Disc Photographs Using a Siamese Network. OPHTHALMOLOGY SCIENCE 2022; 2:100209. [PMID: 36531584 PMCID: PMC9754976 DOI: 10.1016/j.xops.2022.100209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 08/01/2022] [Accepted: 08/05/2022] [Indexed: 11/20/2022]
Abstract
Purpose Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Although deep learning methods have been proposed to diagnose POAG, these methods all used a single image as input. Contrastingly, glaucoma specialists typically compare the follow-up image with the baseline image to diagnose incident glaucoma. To simulate this process, we proposed a Siamese neural network, POAGNet, to detect POAG from optic disc photographs. Design The POAGNet, an algorithm for glaucoma diagnosis, is developed using optic disc photographs. Participants The POAGNet was trained and evaluated on 2 data sets: (1) 37 339 optic disc photographs from 1636 Ocular Hypertension Treatment Study (OHTS) participants and (2) 3684 optic disc photographs from the Sequential fundus Images for Glaucoma (SIG) data set. Gold standard labels were obtained using reading center grades. Methods We proposed a Siamese network model, POAGNet, to simulate the clinical process of identifying POAG from optic disc photographs. The POAGNet consists of 2 side outputs for deep supervision and uses convolution to measure the similarity between 2 networks. Main Outcome Measures The main outcome measures are the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. Results In POAG diagnosis, extensive experiments show that POAGNet performed better than the best state-of-the-art model on the OHTS test set (area under the curve [AUC] 0.9587 versus 0.8750). It also outperformed the baseline models on the SIG test set (AUC 0.7518 versus 0.6434). To assess the transferability of POAGNet, we also validated the impact of cross-data set variability on our model. The model trained on OHTS achieved an AUC of 0.7490 on SIG, comparable to the previous model trained on the same data set. When using the combination of SIG and OHTS for training, our model achieved superior AUC to the single-data model (AUC 0.8165 versus 0.7518). These demonstrate the relative generalizability of POAGNet. Conclusions By simulating the clinical grading process, POAGNet demonstrated high accuracy in POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. The POAGNet is publicly available on https://github.com/bionlplab/poagnet.
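A Siamese classifier of the kind described above, with a shared backbone encoding the baseline and follow-up photographs and a convolutional comparison of the two encodings, could be sketched as follows; the backbone choice, fusion details, and omission of the side outputs are simplifications, so this is an illustration rather than the published POAGNet.

```python
# Minimal sketch of a Siamese disc-photo classifier: a shared ResNet-18 backbone
# encodes baseline and follow-up images, and a convolution over the fused feature
# maps yields the POAG logit. Details are simplified assumptions, not POAGNet.
import torch
import torch.nn as nn
from torchvision import models

class SiameseDiscNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, 7, 7)
        self.fuse = nn.Conv2d(512 * 2, 256, kernel_size=1)             # compare the two encodings
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1))

    def forward(self, baseline: torch.Tensor, follow_up: torch.Tensor) -> torch.Tensor:
        fa = self.encoder(baseline)          # shared weights for both time points
        fb = self.encoder(follow_up)
        fused = torch.relu(self.fuse(torch.cat([fa, fb], dim=1)))
        return self.head(fused)              # logit for POAG

model = SiameseDiscNet()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logit.shape)   # torch.Size([2, 1])
```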
Collapse
|
30
|
Lim WS, Ho HY, Ho HC, Chen YW, Lee CK, Chen PJ, Lai F, Jang JSR, Ko ML. Use of multimodal dataset in AI for detecting glaucoma based on fundus photographs assessed with OCT: focus group study on high prevalence of myopia. BMC Med Imaging 2022; 22:206. [PMID: 36434508 PMCID: PMC9700928 DOI: 10.1186/s12880-022-00933-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 11/10/2022] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Glaucoma is one of the major causes of blindness; it is estimated that over 110 million people worldwide will be affected by glaucoma by 2040. Research on glaucoma detection using deep learning has been increasing, but diagnosing glaucoma in a large population with a high incidence of myopia remains a challenge. This study aimed to provide a decision support system for the automatic detection of glaucoma from fundus images that can be applied for general screening, especially in areas with a high incidence of myopia. METHODS A total of 1,155 fundus images were acquired from 667 individuals with a mean axial length of 25.60 ± 2.0 mm at the National Taiwan University Hospital, Hsinchu Branch. Based on the findings of complete ophthalmologic examinations, visual field testing, and optical coherence tomography, the images were graded into three groups: normal (N, n = 596), pre-perimetric glaucoma (PPG, n = 66), and glaucoma (G, n = 493), and divided into training-validation (N: 476, PPG: 55, G: 373) and test (N: 120, PPG: 11, G: 120) sets. A multimodal model using the Xception model for image feature extraction and machine learning algorithms [random forest (RF), support vector machine (SVM), dense neural network (DNN), and others] was applied. RESULTS The Xception model classified the N, PPG, and G groups with a micro-average area under the receiver operating characteristic curve (AUROC) of 93.9% under tenfold cross-validation. Although the sensitivities for the normal and glaucoma groups reached 93.51% and 86.13%, respectively, the sensitivity for PPG was only 30.27%. The AUROC increased to 96.4% when the classes were grouped as N + PPG versus G. With this N + PPG versus G grouping, the multimodal model achieved AUROCs of 99.56%, 99.59%, and 99.10% for RF, SVM, and DNN, respectively; the N versus PPG + G grouping differed by less than 1%. AUROCs on the test set were overall 3%-5% lower than the validation results. CONCLUSION The multimodal model achieved a good AUROC for detecting glaucoma in a population with a high incidence of myopia and shows potential for general automatic screening and telemedicine, especially in Asia. TRIAL REGISTRATION The study was approved by the Institutional Review Board of the National Taiwan University Hospital, Hsinchu Branch (no. NTUHHCB 108-025-E).
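The "CNN feature extractor plus classical classifier" pattern used above can be sketched as follows. The study used Xception for feature extraction; a torchvision ResNet-18 stands in here purely for brevity, and the images and labels are random placeholders.

```python
# Minimal sketch of CNN feature extraction feeding a classical classifier.
# The study used Xception; ResNet-18 is a stand-in, and all data are synthetic.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())  # 512-d feature vectors
extractor.eval()

images = torch.randn(40, 3, 224, 224)   # stand-in preprocessed fundus photos
labels = np.tile([0, 1], 20)            # 0 = normal/PPG, 1 = glaucoma (placeholder)

with torch.no_grad():
    feats = extractor(images).numpy()

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats[:30], labels[:30])
probs = clf.predict_proba(feats[30:])[:, 1]
print("held-out AUROC:", round(roc_auc_score(labels[30:], probs), 3))
```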
Collapse
Affiliation(s)
- Wee Shin Lim
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei City 10617, Taiwan, ROC
| | - Heng-Yen Ho
- School of Medicine, National Taiwan University, Taipei City 10617, Taiwan, ROC
| | - Heng-Chen Ho
- School of Medicine, National Taiwan University, Taipei City 10617, Taiwan, ROC
| | - Yan-Wu Chen
- Department of Applied Mathematics, National Sun Yat-Sen University, Kaohsiung City 804201, Taiwan, ROC
| | - Chih-Kuo Lee
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu City 300, Taiwan, ROC
| | - Pao-Ju Chen
- Department of Ophthalmology, National Taiwan University Hospital Hsin-Chu Branch, No. 25, Lane 442, Sec.1, Jingguo Rd., Hsinchu City 300, Taiwan, ROC
| | - Feipei Lai
- Department of Electrical Engineering, National Taiwan University, Taipei City 10617, Taiwan, ROC
| | - Jyh-Shing Roger Jang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei City 10617, Taiwan, ROC
| | - Mei-Lan Ko
- Department of Ophthalmology, National Taiwan University Hospital Hsin-Chu Branch, No. 25, Lane 442, Sec.1, Jingguo Rd., Hsinchu City 300, Taiwan, ROC; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Taipei City 10617, Taiwan, ROC
| |
Collapse
|
31
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that endows machines with human-like intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn the representations of data at multiple levels of abstraction. The Inception-v3 algorithm and transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural images (non-medical images) to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects in ophthalmology.
Collapse
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
| | - Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
| | - Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
32
|
Fan R, Alipour K, Bowd C, Christopher M, Brye N, Proudfoot JA, Goldbaum MH, Belghith A, Girkin CA, Fazio MA, Liebmann JM, Weinreb RN, Pazzani M, Kriegman D, Zangwill LM. Detecting Glaucoma from Fundus Photographs Using Deep Learning without Convolutions: Transformer for Improved Generalization. OPHTHALMOLOGY SCIENCE 2022; 3:100233. [PMID: 36545260 PMCID: PMC9762193 DOI: 10.1016/j.xops.2022.100233] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 10/04/2022] [Accepted: 10/12/2022] [Indexed: 12/14/2022]
Abstract
Purpose To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design Evaluation of a diagnostic technology. Subjects Participants and Controls Overall 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3), and Reading Center determinations based on disc (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and 5 external datasets. Main Outcome Measures Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than in the ResNet-50 models. The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
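For orientation, the two architectures compared above can be instantiated through the timm library as in the sketch below; the model names ('deit_base_patch16_224', 'resnet50') and the single-logit head are assumptions, and this is not the OHTS training pipeline.

```python
# Minimal sketch: the two architectures compared above, built via timm with a
# single-logit head for POAG vs. no POAG. Not the authors' training code.
import timm
import torch

deit = timm.create_model("deit_base_patch16_224", pretrained=False, num_classes=1)
resnet = timm.create_model("resnet50", pretrained=False, num_classes=1)

x = torch.randn(2, 3, 224, 224)                 # dummy fundus batch
print("DeiT logits:", deit(x).shape)            # torch.Size([2, 1])
print("ResNet-50 logits:", resnet(x).shape)     # torch.Size([2, 1])

def n_params_m(model: torch.nn.Module) -> float:
    return sum(p.numel() for p in model.parameters()) / 1e6

print(f"params: DeiT ~{n_params_m(deit):.0f}M vs ResNet-50 ~{n_params_m(resnet):.0f}M")
```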
Collapse
Key Words
- AI, artificial intelligence
- AUROC, areas under the receiver operating characteristic curve
- CI, confidence interval
- CNN, convolutional neural network
- DL, deep learning
- Deep learning
- DeiT, Data-efficient image Transformer
- Fundus photographs
- Glaucoma detection
- LAG, Large-Scale Attention-Based Glaucoma
- OHTS, Ocular Hypertension Treatment Study
- POAG, primary open-angle glaucoma
- SoTA, state-of-the-art
- VF, visual field
- ViT, Vision Transformer
- Vision Transformers
Collapse
Affiliation(s)
- Rui Fan
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California; Department of Computer Science and Engineering, University of California San Diego, La Jolla, California; Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
| | - Kamran Alipour
- Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
| | - Christopher Bowd
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Mark Christopher
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Nicole Brye
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - James A. Proudfoot
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Michael H. Goldbaum
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Akram Belghith
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Christopher A. Girkin
- Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama
| | - Massimo A. Fazio
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California; Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama; Department of Biomedical Engineering, School of Engineering, The University of Alabama at Birmingham, Birmingham, Alabama
| | - Jeffrey M. Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York
| | - Robert N. Weinreb
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
| | - Michael Pazzani
- Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
| | - David Kriegman
- Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
| | - Linda M. Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California; Correspondence: Linda M. Zangwill, 9500 Gilman Dr., #0946, La Jolla, California 92093-0946.
| |
Collapse
|
33
|
Huang X, Sun J, Gupta K, Montesano G, Crabb DP, Garway-Heath DF, Brusini P, Lanzetta P, Oddone F, Turpin A, McKendrick AM, Johnson CA, Yousefi S. Detecting glaucoma from multi-modal data using probabilistic deep learning. Front Med (Lausanne) 2022; 9:923096. [PMID: 36250081 PMCID: PMC9556968 DOI: 10.3389/fmed.2022.923096] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Accepted: 08/10/2022] [Indexed: 11/13/2022] Open
Abstract
Objective To assess the accuracy of probabilistic deep learning models to discriminate normal eyes and eyes with glaucoma from fundus photographs and visual fields. Design Algorithm development for discriminating normal and glaucoma eyes using data from a multicenter, cross-sectional, case-control study. Subjects and participants Fundus photograph and visual field data from 1,655 eyes of 929 normal and glaucoma subjects to develop and test deep learning models and an independent group of 196 eyes of 98 normal and glaucoma patients to validate deep learning models. Main outcome measures Accuracy and area under the receiver-operating characteristic curve (AUC). Methods Fundus photographs and OCT images were carefully examined by clinicians to identify glaucomatous optic neuropathy (GON). When GON was detected by the reader, the finding was further evaluated by another clinician. Three probabilistic deep convolutional neural network (CNN) models were developed using 1,655 fundus photographs, 1,655 visual fields, and 1,655 pairs of fundus photographs and visual fields collected from Compass instruments. Deep learning models were trained and tested using 80% of fundus photographs and visual fields for the training set and 20% of the data for the testing set. Models were further validated using an independent validation dataset. The performance of the probabilistic deep learning model was compared with that of the corresponding deterministic CNN model. Results The AUC of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and combined modalities using the development dataset were 0.90 (95% confidence interval: 0.89-0.92), 0.89 (0.88-0.91), and 0.94 (0.92-0.96), respectively. The AUC of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and both modalities using the independent validation dataset were 0.94 (0.92-0.95), 0.98 (0.98-0.99), and 0.98 (0.98-0.99), respectively. The AUC of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and both modalities using an early glaucoma subset were 0.90 (0.88,0.91), 0.74 (0.73,0.75), and 0.91 (0.89,0.93), respectively. Eyes that were misclassified had significantly higher uncertainty in likelihood of diagnosis compared to eyes that were classified correctly. The uncertainty level of the correctly classified eyes was much lower in the combined model than in the model based on visual fields only. The AUCs of the deterministic CNN model using fundus images, visual field, and combined modalities based on the development dataset were 0.87 (0.85,0.90), 0.88 (0.84,0.91), and 0.91 (0.89,0.94), and the AUCs based on the independent validation dataset were 0.91 (0.89,0.93), 0.97 (0.95,0.99), and 0.97 (0.96,0.99), respectively, while the AUCs based on an early glaucoma subset were 0.88 (0.86,0.91), 0.75 (0.73,0.77), and 0.92 (0.89,0.95), respectively. Conclusion and relevance Probabilistic deep learning models can detect glaucoma from multi-modal data with high accuracy. Our findings suggest that models based on combined visual field and fundus photograph modalities detect glaucoma with higher accuracy. While probabilistic and deterministic CNN models provided similar performance, probabilistic models generate a certainty level for the outcome, thus providing another level of confidence in decision making.
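One common way to obtain a prediction together with an uncertainty estimate, in the spirit of the probabilistic models above, is Monte Carlo dropout: dropout is kept active at inference and the spread of repeated stochastic predictions is read as uncertainty. The paper's probabilistic formulation may differ; the tiny network below is illustrative only.

```python
# Minimal sketch: Monte Carlo dropout as one common way to attach an uncertainty
# estimate to a prediction. The paper's probabilistic models may use a different
# formulation; the tiny network and inputs here are illustrative only.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 1)
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

model = TinyClassifier()
model.train()                       # keep dropout active at inference time (MC dropout)

x = torch.randn(8, 64)              # stand-in feature vectors (e.g., fused VF + fundus features)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])   # 50 stochastic forward passes

mean_prob = samples.mean(dim=0).squeeze(-1)     # predicted probability of glaucoma
uncertainty = samples.std(dim=0).squeeze(-1)    # higher spread = less certain prediction
print(mean_prob, uncertainty)
```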
Collapse
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
| | - Jian Sun
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
- German Center for Neurodegenerative Diseases (DZNE), Tübingen, Germany
| | - Krati Gupta
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
| | - Giovanni Montesano
- ASST Santi Paolo e Carlo, University of Milan, Milan, Italy
- Department of Optometry and Visual Sciences, City University of London, London, United Kingdom
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - David P. Crabb
- Department of Optometry and Visual Sciences, City University of London, London, United Kingdom
| | - David F. Garway-Heath
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Paolo Brusini
- Department of Ophthalmology, “Città di Udine” Health Center, Udine, Italy
| | - Paolo Lanzetta
- Ophthalmology Unit, Department of Medical and Biological Sciences, University of Udine, Udine, Italy
| | | | - Andrew Turpin
- School of Computing and Information System, University of Melbourne, Melbourne, VIC, Australia
| | - Allison M. McKendrick
- Department of Optometry and Vision Sciences, University of Melbourne, Melbourne, VIC, Australia
| | - Chris A. Johnson
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, IA, United States
| | - Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, United States
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, United States
| |
Collapse
|
34
|
Lin M, Liu L, Gordon M, Kass M, Van Tassel S, Wang F, Peng Y. Multi-scale Multi-structure Siamese Network (MMSNet) for Primary Open-Angle Glaucoma Prediction. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2022; 13583:436-445. [PMID: 36656619 PMCID: PMC9844668 DOI: 10.1007/978-3-031-21014-3_45] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. POAG prediction before onset plays an important role in early treatment. Although deep learning methods have been proposed to predict POAG, these methods mainly focus on current-status prediction, and all of them use a single image as input. Glaucoma specialists, on the other hand, determine whether an eye is glaucomatous by comparing the follow-up optic nerve image with the baseline, along with supplementary clinical data. To simulate this process, we proposed a Multi-scale Multi-structure Siamese Network (MMSNet) to predict future POAG events from fundus photographs. MMSNet consists of two side-outputs for deep supervision and 2D blocks that utilize two-dimensional features to assist classification. The network was trained and evaluated on a large dataset: 37,339 fundus photographs from 1,636 Ocular Hypertension Treatment Study (OHTS) participants. Extensive experiments show that MMSNet outperforms the state-of-the-art on two "POAG prediction before onset" tasks, with AUCs of 0.9312 and 0.9507, which are 0.2204 and 0.1490 higher than the state-of-the-art, respectively. In addition, an ablation study was performed to assess the contribution of different components. These results highlight the potential of deep learning to assist and enhance the prediction of future POAG events. The proposed network will be publicly available on https://github.com/bionlplab/MMSNet.
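A minimal sketch of the Siamese idea described above, a shared-weight encoder applied to the baseline and follow-up photographs with the two embeddings fused for a single decision, is given below. The ResNet-18 backbone, embedding size, and single-logit head are assumptions; MMSNet itself additionally uses multi-scale side-outputs and 2D blocks.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseFundusNet(nn.Module):
    """Shared-weight encoder applied to the baseline and follow-up photographs;
    the two embeddings are concatenated and mapped to a POAG-event logit."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep the 512-d pooled features
        self.encoder = backbone
        self.head = nn.Linear(512 * 2, 1)

    def forward(self, baseline_img, followup_img):
        z_base = self.encoder(baseline_img)    # same weights used for both inputs
        z_follow = self.encoder(followup_img)
        return self.head(torch.cat([z_base, z_follow], dim=1))

net = SiameseFundusNet()
logit = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logit.shape)   # torch.Size([2, 1])
```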
Collapse
Affiliation(s)
| | - Lei Liu
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Mae Gordon
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
| | - Michael Kass
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
| | | | - Fei Wang
- Weill Cornell Medicine, New York, NY, USA
| | - Yifan Peng
- Weill Cornell Medicine, New York, NY, USA
| |
Collapse
|
35
|
Lin M, Hou B, Liu L, Gordon M, Kass M, Wang F, Van Tassel SH, Peng Y. Automated diagnosing primary open-angle glaucoma from fundus image by simulating human's grading with deep learning. Sci Rep 2022; 12:14080. [PMID: 35982106 PMCID: PMC9388536 DOI: 10.1038/s41598-022-17753-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 07/30/2022] [Indexed: 11/09/2022] Open
Abstract
Primary open-angle glaucoma (POAG) is a leading cause of irreversible blindness worldwide. Although deep learning methods have been proposed to diagnose POAG, it remains challenging to develop a robust and explainable algorithm to automatically facilitate the downstream diagnostic tasks. In this study, we present an automated classification algorithm, GlaucomaNet, to identify POAG using variable fundus photographs from different populations and settings. GlaucomaNet consists of two convolutional neural networks that simulate the human grading process: one learns the discriminative features and the other fuses the features for grading. We evaluated GlaucomaNet on two datasets: Ocular Hypertension Treatment Study (OHTS) participants and the Large-scale Attention-based Glaucoma (LAG) dataset. GlaucomaNet achieved the highest AUCs of 0.904 and 0.997 for POAG diagnosis on the OHTS and LAG datasets, respectively. An ensemble of network architectures further improved diagnostic accuracy. By simulating the human grading process, GlaucomaNet demonstrated high accuracy with increased transparency in POAG diagnosis (comprehensiveness scores of 97% and 36%). These methods also address two well-known challenges in the field: the need for increased image data diversity and the heavy reliance on perimetry for POAG diagnosis. These results highlight the potential of deep learning to assist and enhance clinical POAG diagnosis. GlaucomaNet is publicly available on https://github.com/bionlplab/GlaucomaNet.
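The abstract notes that an ensemble of architectures further improved accuracy. The sketch below shows the simplest form of such ensembling, averaging per-image probabilities from two trained networks and comparing AUCs; the simulated probabilities and resulting numbers are placeholders, not the study's results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                                     # reference labels (1 = POAG)
probs_a = np.clip(0.6 * y + rng.normal(0.2, 0.2, 200), 0, 1)    # model A outputs (simulated)
probs_b = np.clip(0.5 * y + rng.normal(0.25, 0.2, 200), 0, 1)   # model B outputs (simulated)

ensemble = (probs_a + probs_b) / 2                              # simple probability averaging
for name, p in [("model A", probs_a), ("model B", probs_b), ("ensemble", ensemble)]:
    print(name, round(roc_auc_score(y, p), 3))
```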
Collapse
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Bojian Hou
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
| | - Lei Liu
- Institute for Public Health, Washington University School of Medicine, St. Louis, MO, USA
| | - Mae Gordon
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Michael Kass
- Department of Ophthalmology and Visual Sciences, Washington University School of Medicine, St. Louis, MO, USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
| | | | - Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
| |
Collapse
|
36
|
Sun K, He M, Xu Y, Wu Q, He Z, Li W, Liu H, Pi X. Multi-label classification of fundus images with graph convolutional network and LightGBM. Comput Biol Med 2022; 149:105909. [PMID: 35998479 DOI: 10.1016/j.compbiomed.2022.105909] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 07/03/2022] [Accepted: 07/16/2022] [Indexed: 12/01/2022]
Abstract
Early detection and treatment of retinal disorders are critical for avoiding irreversible visual impairment. Given that patients in the clinical setting may have various types of retinal illness, the development of multi-label fundus disease detection models capable of screening for multiple diseases is more in line with clinical needs. This article presents a composite model based on hybrid graph convolution for patient-level multi-label fundus disease identification. The composite model comprises a backbone module, a hybrid graph convolution module, and a classifier module. The relationship between labels is established via graph convolution, and a self-attention mechanism is then employed to design a hybrid graph convolution structure. The backbone module extracts features using EfficientNet-B4, whereas the classifier module outputs multiple labels using LightGBM. Additionally, this work investigates the input pattern of binocular images and the influence of label correlation on the model's identification performance. The proposed model MCGL-Net outperformed all other state-of-the-art methods on the publicly available ODIR dataset, with F1 reaching 91.60% on the test set. Ablation experiments were also performed. They showed that the hybrid graph convolution structure and the composite model design proposed in this paper improve performance with any backbone CNN: adopting hybrid graph convolution increased F1 by 2.39% in trials using EfficientNet-B4 as the backbone, and the composite model achieved an F1 score 5.42% higher than the single EfficientNet-B4 model.
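To make the classifier stage concrete, the sketch below feeds backbone-style image features into a LightGBM one-vs-rest multi-label classifier on simulated data. It deliberately omits the hybrid graph convolution module that models label correlations, and the feature width, label count, and scores are assumptions rather than values from the paper.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1792))          # 1792 matches EfficientNet-B4's pooled feature width
Y = rng.integers(0, 2, size=(500, 8))     # 8 binary disease labels per patient (simulated)

clf = OneVsRestClassifier(LGBMClassifier(n_estimators=100))
clf.fit(X[:400], Y[:400])
pred = clf.predict(X[400:])
print("micro-F1:", round(f1_score(Y[400:], pred, average="micro"), 3))
```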
Collapse
Affiliation(s)
- Kai Sun
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Mengjia He
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Yao Xu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Qinying Wu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Zichun He
- Chongqing Red Cross Hospital (People's Hospital of Jiangbei District), Chongqing, China
| | - Wang Li
- School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China
| | - Hongying Liu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
| | - Xitian Pi
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
| |
Collapse
|
37
|
Duan S, Huang P, Chen M, Wang T, Sun X, Chen M, Dong X, Jiang Z, Li D. Semi-supervised classification of fundus images combined with CNN and GCN. J Appl Clin Med Phys 2022; 23:e13746. [PMID: 35946866 PMCID: PMC9797168 DOI: 10.1002/acm2.13746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2021] [Revised: 06/22/2022] [Accepted: 07/13/2022] [Indexed: 01/01/2023] Open
Abstract
PURPOSE Diabetic retinopathy (DR) is one of the most serious complications of diabetes and presents as a fundus lesion with specific changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Because DR lesions vary in type and morphology, automatic classification of fundus images in mass screening can greatly save clinicians' diagnosis time. To address these problems, we propose a novel framework, the graph attentional convolutional neural network (GACNN). METHODS AND MATERIALS The network consists of a convolutional neural network (CNN) and a graph convolutional network (GCN). The global and spatial features of fundus images are extracted by the CNN and GCN, and an attention mechanism is introduced to enhance the adaptability of the GCN to the topology map. We adopt a semi-supervised method for classification, which greatly improves the generalization ability of the network. RESULTS To verify the effectiveness of the network, we conducted comparative experiments and ablation experiments, using the confusion matrix, precision, recall, kappa score, and accuracy as evaluation indexes. Classification accuracy increases with the labeling rate; in particular, when the labeling rate is set to 100%, the classification accuracy of GACNN reaches 93.35%, an improvement of 6.24% over DenseNet121. CONCLUSIONS Semi-supervised classification based on an attention mechanism can effectively improve the classification performance of the model and attains favorable results on metrics such as accuracy and recall. GACNN provides a feasible classification scheme for fundus images, effectively reducing the human resources required for screening.
Collapse
Affiliation(s)
- Sixu Duan
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Provincial Engineering and Technical Center of Light Manipulation, School of Physics and Electronics, Shandong Normal University, Jinan, China
| | - Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Provincial Engineering and Technical Center of Light Manipulation, School of Physics and Electronics, Shandong Normal University, Jinan, China
| | - Min Chen
- The Second Hospital of Shandong University, Shandong University, Jinan, China
- Department of Medicine, The Second Hospital of Shandong University, Jinan, China
| | - Ting Wang
- Eye Hospital of Shandong First Medical University (Shandong Eye Hospital), Jinan, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- School of Ophthalmology, Shandong First Medical University, Jinan, China
| | - Xiaolei Sun
- Eye Hospital of Shandong First Medical University (Shandong Eye Hospital), Jinan, China
- State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- School of Ophthalmology, Shandong First Medical University, Jinan, China
| | - Meirong Chen
- Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
| | - Xueyuan Dong
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Provincial Engineering and Technical Center of Light Manipulation, School of Physics and Electronics, Shandong Normal University, Jinan, China
| | - Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Provincial Engineering and Technical Center of Light Manipulation, School of Physics and Electronics, Shandong Normal University, Jinan, China
| | - Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Provincial Engineering and Technical Center of Light Manipulation, School of Physics and Electronics, Shandong Normal University, Jinan, China
| |
Collapse
|
38
|
Balasubramanian K, Ramya K, Gayathri Devi K. Improved swarm optimization of deep features for glaucoma classification using SEGSO and VGGNet. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
39
|
Balasubramanian K, N.P. A. Correlation-based feature selection using bio-inspired algorithms and optimized KELM classifier for glaucoma diagnosis. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109432] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
40
|
Practical Application of Artificial Intelligence Technology in Glaucoma Diagnosis. J Ophthalmol 2022; 2022:5212128. [PMID: 35957747 PMCID: PMC9357716 DOI: 10.1155/2022/5212128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 06/29/2022] [Indexed: 11/18/2022] Open
Abstract
Purpose. By comparing the performance of artificial intelligence (AI) and doctors under different working models, we aim to evaluate and identify the optimal model for future use of AI. Methods. A total of 500 fundus images of glaucoma and 500 fundus images of normal eyes were collected and randomly divided into five groups, with each group corresponding to one round. The AI system provided diagnostic suggestions for each image. Four doctors provided diagnoses without the assistance of the AI in the first round and with the assistance of the AI in the second and third rounds. In the fourth round, doctor B and doctor D made diagnoses with the help of the AI and the other two doctors without it. In the last round, doctor A and doctor B made diagnoses with the help of the AI and the other two doctors without it. Results. Doctor A, doctor B, and doctor D had higher accuracy in the diagnosis of glaucoma with the assistance of AI in the second and third rounds than in the first round. The accuracy of at least one doctor was higher than that of the AI in the second and third rounds, although the differences were not statistically significant. The four doctors' overall accuracy and sensitivity as a whole were significantly improved in the second and third rounds. Conclusions. This “Doctor + AI” model can clarify the roles of doctors and AI in medical responsibility and ensure the safety of patients, and importantly, this model shows great potential and application prospects.
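For a paired reader comparison of this kind (the same images graded by the same doctor with and without AI assistance), one standard analysis is McNemar's test on per-image correctness, sketched below on simulated data; the test choice and all numbers are assumptions, not the study's.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
# Per-image correctness for the same reader without and with AI assistance
# (simulated; each round in the study used 200 fundus images).
correct_without = rng.random(200) < 0.80
correct_with = rng.random(200) < 0.88

# 2x2 table of concordant/discordant pairs for the paired test.
table = np.array([
    [np.sum(correct_without & correct_with),  np.sum(correct_without & ~correct_with)],
    [np.sum(~correct_without & correct_with), np.sum(~correct_without & ~correct_with)],
])
result = mcnemar(table, exact=True)
print("McNemar p value:", result.pvalue)
```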
Collapse
|
41
|
Multi-task deep learning for glaucoma detection from color fundus images. Sci Rep 2022; 12:12361. [PMID: 35858986 PMCID: PMC9300731 DOI: 10.1038/s41598-022-16262-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 07/07/2022] [Indexed: 11/11/2022] Open
Abstract
Glaucoma is an eye condition that leads to loss of vision and blindness if not diagnosed in time. Diagnosis requires human experts to estimate in a limited time subtle changes in the shape of the optic disc from retinal fundus images. Deep learning methods have been satisfactory in classifying and segmenting diseases in retinal fundus images, assisting in analyzing the increasing amount of images. Model training requires extensive annotations to achieve successful generalization, which can be highly problematic given the costly expert annotations. This work aims at designing and training a novel multi-task deep learning model that leverages the similarities of related eye-fundus tasks and measurements used in glaucoma diagnosis. The model simultaneously learns different segmentation and classification tasks, thus benefiting from their similarity. The evaluation of the method in a retinal fundus glaucoma challenge dataset, including 1200 retinal fundus images from different cameras and medical centers, obtained a 96.76 ± 0.96 AUC performance compared to 93.56 ± 1.48 obtained by the same backbone network trained to detect glaucoma. Our approach outperforms other multi-task learning models, and its performance pairs with trained experts using ∼3.5 times fewer parameters than training each task separately. The data and the code for reproducing our results are publicly available.
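A minimal sketch of the multi-task layout described above, a shared encoder feeding both a glaucoma-classification head and a related segmentation head so that most parameters are shared rather than duplicated per task, is shown below. The ResNet-18 encoder, head shapes, and loss weighting are assumptions for illustration and are not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskFundusNet(nn.Module):
    """Shared encoder feeding a glaucoma-classification head and an auxiliary
    optic-disc segmentation head (hypothetical layout)."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])   # 512-channel feature map
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1))
        self.seg_head = nn.Sequential(
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.cls_head(feat), self.seg_head(feat)

net = MultiTaskFundusNet()
cls_logit, mask_logits = net(torch.randn(2, 3, 224, 224))
print(cls_logit.shape, mask_logits.shape)   # [2, 1] and [2, 1, 224, 224]
# A training step would minimise a weighted sum of the task losses, e.g.
# loss = bce(cls_logit, y_glaucoma) + 0.5 * bce(mask_logits, y_disc_mask)
```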
Collapse
|
42
|
Young SL, Jain N, Tatham AJ. The application of advanced imaging techniques in glaucoma. EXPERT REVIEW OF OPHTHALMOLOGY 2022. [DOI: 10.1080/17469899.2022.2101449] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Su Ling Young
- Princess Alexandra Eye Pavilion, Edinburgh, UK
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
| | - Nikhil Jain
- Addenbrooke’s Hospital, Cambridge University Hospitals NHS trust, Cambridge, UK
| | - Andrew J Tatham
- Princess Alexandra Eye Pavilion, Edinburgh, UK
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
43
|
Basu S, Agarwal R, Srivastava V. Deep discriminative learning model with calibrated attention map for the automated diagnosis of diffuse large B-cell lymphoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
44
|
Ran AR, Wang X, Chan PP, Chan NC, Yip W, Young AL, Wong MOM, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Chen H, Li F, Zhang X, Heng PA, Tham CC, Cheung CY. Three-Dimensional Multi-Task Deep Learning Model to Detect Glaucomatous Optic Neuropathy and Myopic Features From Optical Coherence Tomography Scans: A Retrospective Multi-Centre Study. Front Med (Lausanne) 2022; 9:860574. [PMID: 35783623 PMCID: PMC9240220 DOI: 10.3389/fmed.2022.860574] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Accepted: 04/25/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose We aim to develop a multi-task three-dimensional (3D) deep learning (DL) model to detect glaucomatous optic neuropathy (GON) and myopic features (MF) simultaneously from spectral-domain optical coherence tomography (SDOCT) volumetric scans. Methods Each volumetric scan was labelled as GON according to the criteria of retinal nerve fibre layer (RNFL) thinning, with a structural defect that correlated in position with the visual field defect (i.e., reference standard). MF were graded by the SDOCT en face images, defined as presence of peripapillary atrophy (PPA), optic disc tilting, or fundus tessellation. The multi-task DL model was developed by ResNet with output of Yes/No GON and Yes/No MF. SDOCT scans were collected in a tertiary eye hospital (Hong Kong SAR, China) for training (80%), tuning (10%), and internal validation (10%). External testing was performed on five independent datasets from eye centres in Hong Kong, the United States, and Singapore, respectively. For GON detection, we compared the model to the average RNFL thickness measurement generated from the SDOCT device. To investigate whether MF can affect the model’s performance on GON detection, we conducted subgroup analyses in groups stratified by Yes/No MF. The area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy were reported. Results A total of 8,151 SDOCT volumetric scans from 3,609 eyes were collected. For detecting GON, in the internal validation, the proposed 3D model had significantly higher AUROC (0.949 vs. 0.913, p < 0.001) than average RNFL thickness in discriminating GON from normal. In the external testing, the two approaches had comparable performance. In the subgroup analysis, the multi-task DL model performed significantly better in the group of “no MF” (0.883 vs. 0.965, p-value < 0.001) in one external testing dataset, but no significant difference in internal validation and other external testing datasets. The multi-task DL model’s performance to detect MF was also generalizable in all datasets, with the AUROC values ranging from 0.855 to 0.896. Conclusion The proposed multi-task 3D DL model demonstrated high generalizability in all the datasets and the presence of MF did not affect the accuracy of GON detection generally.
Collapse
Affiliation(s)
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
| | - Xi Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, Palo Alto, CA, United States
| | - Poemen P. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Hong Kong Eye Hospital, Hong Kong, Hong Kong SAR, China
| | - Noel C. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Department of Ophthalmology, Prince of Wales Hospital, Hong Kong, Hong Kong SAR, China
- Department of Ophthalmology, Alice Ho Miu Ling Nethersole Hospital, Hong Kong, Hong Kong SAR, China
| | - Wilson Yip
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Department of Ophthalmology, Prince of Wales Hospital, Hong Kong, Hong Kong SAR, China
- Department of Ophthalmology, Alice Ho Miu Ling Nethersole Hospital, Hong Kong, Hong Kong SAR, China
| | - Alvin L. Young
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Department of Ophthalmology, Prince of Wales Hospital, Hong Kong, Hong Kong SAR, China
| | - Mandy O. M. Wong
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Hong Kong Eye Hospital, Hong Kong, Hong Kong SAR, China
| | - Hon-Wah Yung
- Tuen Mun Eye Centre, Hong Kong, Hong Kong SAR, China
| | - Robert T. Chang
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, United States
| | - Suria S. Mannil
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, United States
| | - Yih Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, Hong Kong SAR, China
| | - Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
| | - Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Hong Kong Eye Hospital, Hong Kong, Hong Kong SAR, China
| | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- *Correspondence: Carol Y. Cheung,
| |
Collapse
|
45
|
Widen the Applicability of a Convolutional Neural-Network-Assisted Glaucoma Detection Algorithm of Limited Training Images across Different Datasets. Biomedicines 2022; 10:biomedicines10061314. [PMID: 35740336 PMCID: PMC9219722 DOI: 10.3390/biomedicines10061314] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 05/22/2022] [Accepted: 05/30/2022] [Indexed: 02/04/2023] Open
Abstract
Automated glaucoma detection using deep learning may increase the diagnostic rate of glaucoma to prevent blindness, but generalizable models are currently unavailable despite the use of huge training datasets. This study aims to evaluate the performance of a convolutional neural network (CNN) classifier trained with a limited number of high-quality fundus images in detecting glaucoma and methods to improve its performance across different datasets. A CNN classifier was constructed using EfficientNet B3 and 944 images collected from one medical center (core model) and externally validated using three datasets. The performance of the core model was compared with (1) the integrated model constructed by using all training images from the four datasets and (2) the dataset-specific model built by fine-tuning the core model with training images from the external datasets. The diagnostic accuracy of the core model was 95.62% but dropped to ranges of 52.5–80.0% on the external datasets. Dataset-specific models exhibited superior diagnostic performance on the external datasets compared to other models, with a diagnostic accuracy of 87.50–92.5%. The findings suggest that dataset-specific tuning of the core CNN classifier effectively improves its applicability across different datasets when increasing training images fails to achieve generalization.
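Dataset-specific tuning of a core classifier, as described above, usually amounts to freezing part of the pretrained network and re-training the remainder on the external dataset's training images. The sketch below illustrates that recipe; the EfficientNet-B3 stand-in, the choice of frozen blocks, and the learning rate are assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical stand-in for the "core model": an EfficientNet-B3 classifier
# previously trained on the single-centre images (pretrained weights omitted here).
core = models.efficientnet_b3(weights=None)
core.classifier[1] = nn.Linear(core.classifier[1].in_features, 2)   # normal vs. glaucoma

# Dataset-specific tuning: freeze the early feature blocks and re-train the rest
# on the (much smaller) training split of the external dataset.
for p in core.features[:6].parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in core.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def tune_one_epoch(external_loader):
    core.train()
    for images, labels in external_loader:   # batches from the external dataset
        optimizer.zero_grad()
        loss = criterion(core(images), labels)
        loss.backward()
        optimizer.step()
```

The tuned copy would then be evaluated only on the corresponding external test split, while the unchanged core model remains available for other datasets.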
Collapse
|
46
|
Shin Y, Cho H, Shin YU, Seong M, Choi JW, Lee WJ. Comparison between Deep-Learning-Based Ultra-Wide-Field Fundus Imaging and True-Colour Confocal Scanning for Diagnosing Glaucoma. J Clin Med 2022; 11:jcm11113168. [PMID: 35683577 PMCID: PMC9181263 DOI: 10.3390/jcm11113168] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 02/05/2023] Open
Abstract
In this retrospective, comparative study, we evaluated and compared the performance of two confocal imaging modalities in detecting glaucoma based on a deep learning (DL) classifier: ultra-wide-field (UWF) fundus imaging and true-colour confocal scanning. A total of 777 eyes, including 273 normal control eyes and 504 glaucomatous eyes, were tested. A convolutional neural network was used for each true-colour confocal scan (Eidon AF™, CenterVue, Padova, Italy) and UWF fundus image (Optomap™, Optos PLC, Dunfermline, UK) to detect glaucoma. The diagnostic model was trained using 545 training and 232 test images. The presence of glaucoma was determined, and the accuracy and area under the receiver operating characteristic curve (AUC) metrics were assessed for diagnostic power comparison. DL-based UWF fundus imaging achieved an AUC of 0.904 (95% confidence interval (CI): 0.861−0.937) and accuracy of 83.62%. In contrast, DL-based true-colour confocal scanning achieved an AUC of 0.868 (95% CI: 0.824−0.912) and accuracy of 81.46%. Both DL-based confocal imaging modalities showed no significant differences in their ability to diagnose glaucoma (p = 0.135) and were comparable to the traditional optical coherence tomography parameter-based methods (all p > 0.005). Therefore, using a DL-based algorithm on true-colour confocal scanning and UWF fundus imaging, we confirmed that both confocal fundus imaging techniques had high value in diagnosing glaucoma.
Collapse
Affiliation(s)
- Younji Shin
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea;
| | - Hyunsoo Cho
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea; (H.C.); (Y.U.S.); (M.S.)
| | - Yong Un Shin
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea; (H.C.); (Y.U.S.); (M.S.)
| | - Mincheol Seong
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea; (H.C.); (Y.U.S.); (M.S.)
| | - Jun Won Choi
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea;
- Correspondence: (J.W.C.); (W.J.L.); Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
| | - Won June Lee
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea; (H.C.); (Y.U.S.); (M.S.)
- Correspondence: (J.W.C.); (W.J.L.); Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
| |
Collapse
|
47
|
|
48
|
Lazaridis G, Montesano G, Afgeh SS, Mohamed-Noriega J, Ourselin S, Lorenzi M, Garway-Heath DF. Predicting Visual Fields From Optical Coherence Tomography via an Ensemble of Deep Representation Learners. Am J Ophthalmol 2022; 238:52-65. [PMID: 34998718 DOI: 10.1016/j.ajo.2021.12.020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 12/23/2021] [Accepted: 12/27/2021] [Indexed: 02/04/2023]
Abstract
PURPOSE To develop and validate a deep learning method of predicting visual function from spectral domain optical coherence tomography (SD-OCT)-derived retinal nerve fiber layer thickness (RNFLT) measurements and corresponding SD-OCT images. DESIGN Development and evaluation of diagnostic technology. METHODS Two deep learning ensemble models to predict pointwise VF sensitivity from SD-OCT images (model 1: RNFLT profile only; model 2: RNFLT profile plus SD-OCT image) and 2 reference models were developed. All models were tested in an independent test-retest data set comprising 2181 SD-OCT/VF pairs; the median of ∼10 VFs per eye was taken as the best available estimate (BAE) of the true VF. The performance of single VFs predicting the BAE VF was also evaluated. The training data set comprised 954 eyes of 220 healthy and 332 glaucomatous participants, and the test data set, 144 eyes of 72 glaucomatous participants. The main outcome measures included the pointwise prediction mean error (ME), mean absolute error (MAE), and correlation of predictions with the BAE VF sensitivity. RESULTS The median mean deviation was -4.17 dB (-14.22 to 0.88). Model 2 had excellent accuracy (ME 0.5 dB, SD 0.8) and overall performance (MAE 2.3 dB, SD 3.1), and significantly (paired t test) outperformed the other methods. For single VFs predicting the BAE VF, the pointwise MAE was 1.5 dB (SD 0.7). The association between SD-OCT and single VF predictions of the BAE pointwise VF sensitivities was R2 = 0.78 and R2 = 0.88, respectively. CONCLUSIONS Our method outperformed standard statistical and deep learning approaches. Predictions of BAEs from OCT images approached the accuracy of single real VF estimates of the BAE.
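The pointwise error metrics reported above are straightforward to compute once predicted and best-available-estimate (BAE) sensitivities are arranged per eye and per test location; a small sketch on simulated values follows. The array shapes and error magnitudes are placeholders chosen to echo the abstract, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated pointwise sensitivities (dB): 144 test eyes x 52 visual field locations.
bae = rng.normal(28, 4, size=(144, 52))               # best available estimate of the true VF
pred = bae + rng.normal(0.5, 2.3, size=bae.shape)     # biased, noisy model predictions

err = pred - bae
print("pointwise ME :", round(err.mean(), 2), "dB")   # systematic bias
print("pointwise MAE:", round(np.abs(err).mean(), 2), "dB")
r = np.corrcoef(pred.ravel(), bae.ravel())[0, 1]
print("R^2          :", round(r ** 2, 2))
```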
Collapse
Affiliation(s)
- Georgios Lazaridis
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology (G.L., G.M., J.M.-N., D.F.G.-H.), London, United Kingdom; Centre for Medical Image Computing, University College London (G.L.), London, United Kingdom.
| | - Giovanni Montesano
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology (G.L., G.M., J.M.-N., D.F.G.-H.), London, United Kingdom; Optometry and Visual Sciences, City, University of London, London, United Kingdom
| | | | - Jibran Mohamed-Noriega
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology (G.L., G.M., J.M.-N., D.F.G.-H.), London, United Kingdom; Departamento de Oftalmología, Hospital Universitario (J.M.-N.), UANL, México
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London (S.O.), London, United Kingdom and
| | - Marco Lorenzi
- Université Côte d'Azur, Inria Sophia Antipolis, Epione Research Project (M.L.), Valbonne, France
| | - David F Garway-Heath
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology (G.L., G.M., J.M.-N., D.F.G.-H.), London, United Kingdom
| |
Collapse
|
49
|
Kaskar OG, Wells-Gray E, Fleischman D, Grace L. Evaluating machine learning classifiers for glaucoma referral decision support in primary care settings. Sci Rep 2022; 12:8518. [PMID: 35595794 PMCID: PMC9122936 DOI: 10.1038/s41598-022-12270-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Accepted: 04/18/2022] [Indexed: 11/09/2022] Open
Abstract
Several artificial intelligence algorithms have been proposed to help diagnose glaucoma by analyzing the functional and/or structural changes in the eye. These algorithms require carefully curated datasets with access to ocular images. In the current study, we have modeled and evaluated classifiers to predict self-reported glaucoma using a single, easily obtained ocular feature (intraocular pressure (IOP)) and non-ocular features (age, gender, race, body mass index, systolic and diastolic blood pressure, and comorbidities). The classifiers were trained on publicly available data of 3015 subjects without a glaucoma diagnosis at the time of enrollment. 337 subjects subsequently self-reported a glaucoma diagnosis in a span of 1–12 years after enrollment. The classifiers were evaluated on the ability to identify these subjects by only using their features recorded at the time of enrollment. Support vector machine, logistic regression, and adaptive boosting performed similarly on the dataset with F1 scores of 0.31, 0.30, and 0.28, respectively. Logistic regression had the highest sensitivity at 60% with a specificity of 69%. Predictive classifiers using primarily non-ocular features have the potential to be used for identifying suspected glaucoma in non-eye care settings, including primary care. Further research into finding additional features that improve the performance of predictive classifiers is warranted.
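As a rough illustration of the tabular-classifier setup described above, the sketch below trains a class-weighted logistic regression on simulated IOP and non-ocular features and reports F1, sensitivity, and specificity. The feature distributions, prevalence, and resulting scores are invented for illustration and do not reproduce the study's data or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
n = 3015
X = np.column_stack([
    rng.normal(16, 3, n),      # IOP (mmHg)
    rng.normal(60, 10, n),     # age (years)
    rng.integers(0, 2, n),     # gender
    rng.normal(28, 5, n),      # body mass index
    rng.normal(125, 15, n),    # systolic blood pressure
])
y = (rng.random(n) < 0.11).astype(int)   # later self-reported glaucoma (simulated labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("F1         :", round(f1_score(y_te, pred), 2))
print("sensitivity:", round(recall_score(y_te, pred), 2))
print("specificity:", round(recall_score(y_te, pred, pos_label=0), 2))
```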
Collapse
Affiliation(s)
- Omkar G Kaskar
- North Carolina State University, Raleigh, NC, 27695, USA
| | | | - David Fleischman
- University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
| | - Landon Grace
- North Carolina State University, Raleigh, NC, 27695, USA.
| |
Collapse
|
50
|
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538 DOI: 10.1097/ijg.0000000000002015] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 02/26/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Artificial intelligence (AI) has been shown as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting their implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed data set and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights that before such algorithms can be implemented into clinical care, a number of issues need to be addressed. With substantial heterogeneity across studies, many factors were found to affect the diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, implementing external data validation, and analysis across different ethnicity groups.
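Pooling per-study sensitivity and specificity, as in the meta-analysis above, is commonly done on the logit scale. The sketch below shows a deliberately simplified fixed-effect, inverse-variance version; the actual analysis used a bivariate random-effects model (typically fitted with dedicated meta-analysis software), and the per-study counts here are invented.

```python
import numpy as np

# Per-study 2x2 counts (TP, FN, TN, FP) -- illustrative values only.
studies = np.array([
    [90, 8, 120, 10],
    [45, 6,  60,  5],
    [70, 5,  95,  9],
])

def pooled_logit(events, nonevents):
    """Inverse-variance pooling on the logit scale: a fixed-effect simplification
    of the bivariate random-effects model used in the meta-analysis."""
    p = (events + 0.5) / (events + nonevents + 1.0)     # continuity correction
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (nonevents + 0.5)
    pooled = np.sum(logit / var) / np.sum(1.0 / var)
    return 1.0 / (1.0 + np.exp(-pooled))                # back-transform to a proportion

tp, fn, tn, fp = studies.T
print("pooled sensitivity:", round(pooled_logit(tp, fn), 3))
print("pooled specificity:", round(pooled_logit(tn, fp), 3))
```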
Collapse
Affiliation(s)
- Abadh K Chaurasia
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
| | - Connor J Greatbatch
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
| | - Alex W Hewitt
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
| |
Collapse
|