1. Chen D, Han Y, Duncan J, Jia L, Shan J. Generative Artificial Intelligence Enhancements for Reducing Image-based Training Data Requirements. Ophthalmology Science 2024;4:100531. PMID: 39071920; PMCID: PMC11283142; DOI: 10.1016/j.xops.2024.100531.
Abstract
Objective Training data fuel and shape the development of artificial intelligence (AI) models. Intensive data requirements are a major bottleneck limiting the success of AI tools in sectors with inherently scarce data. In health care, training data are difficult to curate, triggering growing concerns that the current lack of access to health care among under-privileged social groups will translate into future bias in health care AIs. In this report, we developed an autoencoder to grow and enhance inherently scarce datasets and thereby alleviate our dependence on big data. Design Computational study with open-source data. Subjects The data were obtained from 6 open-source datasets comprising patients aged 40-80 years in Singapore, China, India, and Spain. Methods The reported framework generates synthetic images based on real-world patient imaging data. As a test case, we used an autoencoder to expand publicly available training sets of optic disc photos and evaluated the ability of the resultant datasets to train AI models in the detection of glaucomatous optic neuropathy. Main Outcome Measures Area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the glaucoma detector; a higher AUC indicates better detection performance. Results Enhancing datasets with synthetic images generated by the autoencoder led to superior training sets that improved the performance of AI models. Conclusions Our findings help address the increasingly untenable data volume and quality requirements for AI model development and have implications beyond health care, toward empowering AI adoption in all similarly data-challenged fields. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
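The augmentation idea in this entry, generating synthetic training images by sampling around an autoencoder's learned latent representation, can be sketched in miniature. The toy below is an illustrative, numpy-only linear autoencoder on synthetic vectors (the dimensions, learning rate, and jitter scale are arbitrary choices, not the paper's); it trains by gradient descent on reconstruction error, then perturbs latent codes to synthesize additional samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened disc photos: two clusters of 8x8 "images".
X = np.vstack([rng.normal(-0.3, 0.05, (100, 64)),
               rng.normal(0.3, 0.05, (100, 64))])

d, k = X.shape[1], 8                  # input and latent dimensions
W_enc = rng.normal(0, 0.1, (d, k))    # encoder weights
W_dec = rng.normal(0, 0.1, (k, d))    # decoder weights
lr = 0.05

mse_before = np.mean((X @ W_enc @ W_dec - X) ** 2)
for _ in range(300):                  # gradient descent on reconstruction MSE
    Z = X @ W_enc                     # encode
    err = Z @ W_dec - X               # reconstruction residual
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
mse_after = np.mean((X @ W_enc @ W_dec - X) ** 2)

# Augment: jitter latent codes to synthesize new samples near the data.
Z_synth = X @ W_enc + rng.normal(0, 0.05, (len(X), k))
X_aug = np.vstack([X, Z_synth @ W_dec])
```

The augmented pool `X_aug` holds the originals plus one synthetic sample per real one; a real system would use a deep convolutional autoencoder on actual photos.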
Affiliation(s)
- Dake Chen: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Ying Han: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Jacque Duncan: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
- Lin Jia: Digillect LLC, San Francisco, California
- Jing Shan: Department of Ophthalmology, University of California, San Francisco, San Francisco, California
2. Luo Y, Tian Y, Shi M, Pasquale LR, Shen LQ, Zebardast N, Elze T, Wang M. Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization. IEEE Transactions on Medical Imaging 2024;43:2623-2633. PMID: 38478455; PMCID: PMC11251413; DOI: 10.1109/tmi.2024.3377552.
Abstract
Fairness (also termed equity) in machine learning is important for societal well-being, but limited public datasets hinder its progress. Currently, no dedicated public medical datasets with imaging data are available for fairness learning, even though underrepresented groups suffer from more health issues. To address this gap, we introduce Harvard Glaucoma Fairness (Harvard-GF), a retinal nerve disease dataset of 3,300 subjects with both 2D and 3D imaging data and balanced racial groups for glaucoma detection. Glaucoma is the leading cause of irreversible blindness globally, with Black individuals having double the glaucoma prevalence of other races. We also propose a fair identity normalization (FIN) approach to equalize the feature importance between different identity groups. Compared with various state-of-the-art fairness learning methods, our FIN approach achieves superior performance on racial, gender, and ethnicity fairness tasks with both 2D and 3D imaging data, demonstrating the utility of our Harvard-GF dataset for fairness learning. To facilitate fairness comparisons between different models, we propose an equity-scaled performance measure, which can be flexibly used to compare all kinds of performance metrics in the context of fairness. The dataset and code are publicly accessible via https://ophai.hms.harvard.edu/datasets/harvard-gf3300/.
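The equity-scaled performance measure described above can be illustrated with a small sketch. One plausible form, assumed here rather than taken verbatim from the paper, deflates an overall metric by its summed deviation across identity groups, so a model with large subgroup gaps scores lower even at equal overall AUC:

```python
import numpy as np

def auc(labels, scores):
    # Rank-based AUC (assumes untied scores): probability that a random
    # positive is scored above a random negative.
    labels, scores = np.asarray(labels), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def equity_scaled_auc(labels, scores, groups):
    # Overall AUC deflated by its deviation across identity groups:
    # larger subgroup gaps -> lower equity-scaled score.
    labels, scores, groups = map(np.asarray, (labels, scores, groups))
    overall = auc(labels, scores)
    gap = sum(abs(overall - auc(labels[groups == g], scores[groups == g]))
              for g in np.unique(groups))
    return overall / (1 + gap)
```

A perfectly calibrated model with identical subgroup AUCs keeps its overall score; any subgroup disparity shrinks it, which makes the measure usable with accuracy, sensitivity, or other metrics in the same way.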
3. Karimi A, Stanik A, Kozitza C, Chen A. Integrating Deep Learning with Electronic Health Records for Early Glaucoma Detection: A Multi-Dimensional Machine Learning Approach. Bioengineering (Basel) 2024;11:577. PMID: 38927813; PMCID: PMC11200568; DOI: 10.3390/bioengineering11060577.
Abstract
BACKGROUND Recent advancements in deep learning have significantly impacted ophthalmology, especially in glaucoma, a leading cause of irreversible blindness worldwide. In this study, we developed a reliable predictive model for glaucoma detection using deep learning models based on clinical data, social and behavioral risk factors, and demographic data from 1652 participants, split evenly between 826 control subjects and 826 glaucoma patients. METHODS We extracted structured data from control and glaucoma patients' electronic health records (EHR). Three distinct machine learning classifiers, the Random Forest and Gradient Boosting algorithms as well as the Sequential model from the Keras library of TensorFlow, were employed to conduct predictive analyses across our dataset. Key performance metrics such as accuracy, F1 score, precision, recall, and the area under the receiver operating characteristic curve (AUC) were computed to train and optimize these models. RESULTS The Random Forest model achieved an accuracy of 67.5%, with an ROC AUC of 0.67, outperforming the Gradient Boosting and Sequential models, which registered accuracies of 66.3% and 64.5%, respectively. Our results highlighted key predictive factors such as intraocular pressure, family history, and body mass index, substantiating their roles in glaucoma risk assessment. CONCLUSIONS This study demonstrates the potential of utilizing readily available clinical, lifestyle, and demographic data from EHRs for glaucoma detection through deep learning models. While our model, using EHR data alone, has lower accuracy than models incorporating imaging data, it still offers a promising avenue for early glaucoma risk assessment in primary care settings. The observed disparities in model performance and feature significance underscore the importance of tailoring detection strategies to individual patient characteristics, potentially leading to more effective and personalized glaucoma screening and intervention.
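A minimal sketch of the tabular-EHR modeling pipeline described above, using scikit-learn's Random Forest and Gradient Boosting classifiers. The features, coefficients, and labels are fabricated for illustration (loosely echoing the predictors the study highlights: intraocular pressure, family history, and BMI) and are not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 600
# Hypothetical EHR-style features (units in comments are illustrative).
X = np.column_stack([rng.normal(16, 3, n),    # intraocular pressure (mmHg)
                     rng.integers(0, 2, n),   # family history flag
                     rng.normal(27, 4, n),    # body mass index
                     rng.normal(60, 10, n)])  # age (years)
# Synthetic label: risk rises with IOP and family history.
logit = 0.5 * (X[:, 0] - 16) + 1.2 * X[:, 1] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
aucs = {}
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    aucs[type(model).__name__] = roc_auc_score(
        y_te, model.predict_proba(X_te)[:, 1])
```

On real EHR extracts the same `fit`/`predict_proba` pattern applies after the usual cleaning and encoding of categorical fields; the Keras Sequential model the study also used would consume the identical feature matrix.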
Affiliation(s)
- Alireza Karimi: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA; Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR 97239, USA
- Ansel Stanik: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Cooper Kozitza: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Aiyin Chen: Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
4. Aresta G, Araujo T, Reiter GS, Mai J, Riedl S, Grechenig C, Guymer RH, Wu Z, Schmidt-Erfurth U, Bogunovic H. Deep Neural Networks for Automated Outer Plexiform Layer Subsidence Detection on Retinal OCT of Patients With Intermediate AMD. Transl Vis Sci Technol 2024;13:7. PMID: 38874975; PMCID: PMC11182370; DOI: 10.1167/tvst.13.6.7.
Abstract
Purpose The subsidence of the outer plexiform layer (OPL) is an important imaging biomarker on optical coherence tomography (OCT) associated with early outer retinal atrophy and a risk factor for progression to geographic atrophy in patients with intermediate age-related macular degeneration (AMD). Deep neural networks (DNNs) for OCT can support automated detection and localization of this biomarker. Methods The method predicts potential OPL subsidence locations on retinal OCTs. A detection module (DM) infers bounding boxes around subsidences with a likelihood score, and a classification module (CM) assesses subsidence presence at the B-scan level. Overlapping boxes between B-scans are combined and scored by the product of the DM and CM predictions. The volume-wise score is the maximum prediction across all B-scans. One development data set (140 patients with AMD) and one independent external data set (26 patients) were used. Results The system detected more than 85% of OPL subsidences with fewer than one false positive (FP) per scan. The average area under the curve was 0.94 ± 0.03 for volume-level detection. Similar or better performance was achieved on the independent external data set. Conclusions DNN systems can efficiently perform automated retinal layer subsidence detection in retinal OCT images. In particular, the proposed DNN system detects OPL subsidence with high sensitivity and a very limited number of FP detections. Translational Relevance DNNs enable objective identification of early signs associated with high risk of progression to the atrophic late stage of AMD, ideally suited for screening and for assessing the efficacy of interventions aiming to slow disease progression.
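The scoring rule described in the Methods, candidate boxes scored by the product of the detection-module and classification-module outputs with the volume-wise score taken as the maximum across B-scans, reduces to a few lines. The helper below is an illustrative sketch of that combination step only (function and argument names are ours, not the paper's):

```python
def volume_score(dm_boxes, cm_probs):
    """Volume-level subsidence score from per-B-scan module outputs.

    dm_boxes[i] - likelihood scores of candidate boxes on B-scan i
                  (empty list when the detector proposes nothing)
    cm_probs[i] - B-scan-level probability that subsidence is present
    Each B-scan is scored by the product of its best detection-box
    likelihood and the classifier probability; the volume-wise score
    is the maximum across all B-scans.
    """
    per_scan = [max(boxes, default=0.0) * cm
                for boxes, cm in zip(dm_boxes, cm_probs)]
    return max(per_scan)
```

For example, a volume whose strongest B-scan carries a 0.9 box likelihood and a 0.8 classifier probability scores 0.72, regardless of weaker B-scans elsewhere in the volume.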
Affiliation(s)
- Guilherme Aresta: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Teresa Araujo: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S. Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Julia Mai: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Christoph Grechenig: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Robyn H. Guymer: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Zhichao Wu: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunovic: Christian Doppler Laboratory for Artificial Intelligence in Retina and Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
5. Mandal S, Jammal AA, Malek D, Medeiros FA. Progression or Aging? A Deep Learning Approach for Distinguishing Glaucoma Progression From Age-Related Changes in OCT Scans. Am J Ophthalmol 2024;266:46-55. PMID: 38703802; DOI: 10.1016/j.ajo.2024.04.030.
Abstract
PURPOSE To develop a deep learning (DL) algorithm to detect glaucoma progression using optical coherence tomography (OCT) images in the absence of a reference standard. DESIGN Retrospective cohort study. METHODS Glaucomatous and healthy eyes with ≥5 reliable peripapillary OCT (Spectralis, Heidelberg Engineering) circle scans were included. A weakly supervised time-series learning model, called noise positive-unlabeled (Noise-PU) DL, was developed to classify whether sequences of OCT B-scans showed glaucoma progression. The model used 2 learning schemes: one to identify age-related changes by differentiating test sequences from glaucoma vs healthy eyes, and the other to identify test-retest variability based on scrambled OCTs of glaucoma eyes. Both schemes were built on convolutional neural network (CNN) and long short-term memory (LSTM) networks, combined into a CNN-LSTM model. Model features were combined and jointly trained to identify glaucoma progression while accounting for age-related loss. The DL model's outcomes were compared with ordinary least squares (OLS) regression of retinal nerve fiber layer (RNFL) thickness over time, matched for specificity. The hit ratio was used as a proxy for sensitivity. RESULTS Eight thousand seven hundred eighty-five follow-up sequences of 5 consecutive OCT tests from 3253 eyes (1859 subjects) were included in the study. The mean follow-up time was 3.5 ± 1.6 years. In the test sample, the hit ratios of the DL and OLS methods were 0.498 (95% CI: 0.470-0.526) and 0.284 (95% CI: 0.258-0.309), respectively (P < .001), when the specificities were equalized to 95%. CONCLUSION A DL model was able to identify longitudinal glaucomatous structural changes in OCT B-scans using a surrogate reference standard for progression.
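The OLS comparator described above, fitting a least-squares trend to RNFL thickness over time and flagging eyes with significant thinning, is straightforward to sketch. The helper below is illustrative only: the progression cutoff is an arbitrary placeholder, not the operating point matched to 95% specificity in the study.

```python
import numpy as np

def rnfl_slope(years, thickness_um):
    # Ordinary least-squares trend of RNFL thickness (microns/year).
    slope, _intercept = np.polyfit(years, thickness_um, 1)
    return slope

def flags_progression(years, thickness_um, cutoff_um_per_year=-1.0):
    # Flag an eye when estimated thinning is faster than the cutoff.
    # The cutoff here is an assumed placeholder for illustration.
    return rnfl_slope(years, thickness_um) < cutoff_um_per_year
```

An eye thinning at 2 microns/year over five visits is flagged, while measurement jitter around a stable thickness is not; the paper's DL model aims to make this distinction from raw B-scans rather than from a single summary thickness.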
Affiliation(s)
- Sayan Mandal: Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina, USA
- Alessandro A Jammal: Duke Eye Center and Department of Ophthalmology, Duke University, Durham, North Carolina, USA; Bascom Palmer Eye Institute, University of Miami, Miami, Florida, USA
- Davina Malek: Bascom Palmer Eye Institute, University of Miami, Miami, Florida, USA
- Felipe A Medeiros: Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina, USA; Duke Eye Center and Department of Ophthalmology, Duke University, Durham, North Carolina, USA; Bascom Palmer Eye Institute, University of Miami, Miami, Florida, USA
6. Gunn PJG, Read S, Dickinson C, Fenerty CH, Harper RA. Providing capacity in glaucoma care using trained and accredited optometrists: A qualitative evaluation. Eye (Lond) 2024;38:994-1004. PMID: 38017099; DOI: 10.1038/s41433-023-02820-5.
Abstract
INTRODUCTION The role of optometrists in glaucoma care within primary and secondary settings has been well described. Whilst many studies have examined safety and clinical effectiveness, there is a paucity of qualitative research evaluating enablers and barriers for optometrists delivering glaucoma care. The aims of this study were to investigate qualitatively, and from a multi-stakeholder perspective, whether optometric glaucoma care is accepted as an effective alternative to traditional models and what contextual factors impact upon its success. METHODS Patients were recruited from clinics at Manchester Royal Eye Hospital and nationally via a Glaucoma UK registrant database. Optometrists, ophthalmologists, and other stakeholders involved in glaucoma services were recruited via direct contact and through an optometry educational event. Interviews and focus groups were recorded and transcribed anonymously, then analysed using the framework method and NVivo 12. RESULTS Interviews and focus groups were conducted with 38 participants, including 14 optometrists and 6 ophthalmologists (from all 4 UK nations), 15 patients, and 3 commissioners/other stakeholders. Emerging themes related to: enablers and drivers; challenges and barriers; training; laser; professional practice; the role of other health professionals; commissioning; COVID-19; and patient experience. CONCLUSION Success in developing glaucoma services with optometrists and other health professionals relies on multi-stakeholder input, investment in technology and training, inter-professional respect, and appropriate time and funding to set up and deliver services. The multi-stakeholder perspective affirms that there is notable support for developing glaucoma services delivered by optometrists in primary and secondary care, with caveats around training, appropriate case selection, and clinical responsibility.
Affiliation(s)
- Patrick J G Gunn: Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK; Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Simon Read: School of Health and Social Care, Swansea University, Swansea, SA2 8PP, UK
- Christine Dickinson: Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Cecilia H Fenerty: Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK; Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Robert A Harper: Manchester Royal Eye Hospital, Manchester University NHS Foundation Trust, Manchester, UK; Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK; Centre for Applied Vision Research, City, University of London, London, UK
7. McKendrick AM, Turpin A. Understanding and identifying visual field progression. Clin Exp Optom 2024;107:122-129. PMID: 38467126; DOI: 10.1080/08164622.2024.2316002.
Abstract
Detecting deterioration of visual field sensitivity measurements is important for the diagnosis and management of glaucoma. This review surveys the current methods for assessing progression that are implemented in clinical devices and have been used in clinical trials, alongside more recent advances proposed in the literature. Advice is also offered to clinicians on what they can do to improve the collection of perimetric data, helping analytical progression methods more accurately predict change. This advice includes a discussion of how frequently visual field testing should be undertaken, with a view towards future developments such as digital healthcare outside the standard clinical setting and more personalised approaches to perimetry.
Affiliation(s)
- Allison M McKendrick: Discipline of Optometry, School of Allied Health, University of Western Australia, Perth, Western Australia, Australia; Data Analytics, Lions Eye Institute, Perth, Western Australia, Australia; Department of Optometry & Vision Sciences, The University of Melbourne, Australia
- Andrew Turpin: Data Analytics, Lions Eye Institute, Perth, Western Australia, Australia; School of Population Health, Curtin University, Perth, Western Australia, Australia
8. Skevas C, de Olaguer NP, Lleó A, Thiwa D, Schroeter U, Lopes IV, Mautone L, Linke SJ, Spitzer MS, Yap D, Xiao D. Implementing and evaluating a fully functional AI-enabled model for chronic eye disease screening in a real clinical environment. BMC Ophthalmol 2024;24:51. PMID: 38302908; PMCID: PMC10832120; DOI: 10.1186/s12886-024-03306-y.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to increase the affordability and accessibility of eye disease screening, especially with the recent approval of AI-based diabetic retinopathy (DR) screening programs in several countries. METHODS This study investigated the performance, feasibility, and user experience of a seamless hardware and software solution for screening chronic eye diseases in a real-world clinical environment in Germany. The solution integrated AI grading for DR, age-related macular degeneration (AMD), and glaucoma, along with specialist auditing and patient referral decisions. The study comprised several components: (1) evaluating the entire system solution from recruitment to eye image capture and AI grading for DR, AMD, and glaucoma; (2) comparing specialists' grading results with AI grading results; (3) gathering user feedback on the solution. RESULTS A total of 231 patients were recruited and their consent forms obtained. The sensitivity, specificity, and area under the curve for DR grading were 100.00%, 80.10%, and 90.00%, respectively. For AMD grading, the values were 90.91%, 78.79%, and 85.00%, and for glaucoma grading, the values were 93.26%, 76.76%, and 85.00%. Analysis of all false-positive cases across the three diseases, compared against the final referral decisions, revealed that only 17 of the 231 patients were falsely referred. The efficacy analysis of the system demonstrated the effectiveness of the AI grading process in the study's testing environment. Clinical staff involved in using the system provided positive feedback on the disease screening process, particularly praising the seamless workflow from patient registration to image transmission and obtaining the final result. Results from a questionnaire completed by 12 participants indicated that most found the system easy, quick, and highly satisfactory. The study also revealed room for improvement in the AMD model, suggesting the need to enhance its training data. Furthermore, the performance of glaucoma model grading could be improved by incorporating additional measures such as intraocular pressure. CONCLUSIONS The implementation of the AI-based approach for screening three chronic eye diseases proved effective in real-world settings, earning positive feedback on the usability of the integrated platform from both the screening staff and auditors. The auditing function has proven valuable for obtaining efficient second opinions from experts, pointing to its potential for enhancing remote screening capabilities. TRIAL REGISTRATION Institutional Review Board of the Hamburg Medical Chamber (Ethik-Kommission der Ärztekammer Hamburg): 2021-10574-BO-ff.
Affiliation(s)
- Christos Skevas: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Albert Lleó: TeleMedC GmbH, Raboisen 32, 20095, Hamburg, Germany
- David Thiwa: Department of Otorhinolaryngology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Ulrike Schroeter: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Inês Valente Lopes: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Luca Mautone: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Stephan J Linke: Zentrum Sehestaerke, Martinistraße 64, 20251, Hamburg, Germany
- Martin Stephan Spitzer: Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20249, Hamburg, Germany
- Daniel Yap: TeleMedC Pty Ltd, 61 Ubi Avenue 1, #06-11 UBPoint, Singapore, 40894, Singapore
- Di Xiao: TeleMedC Pty Ltd, Brisbane Technology Park, Level 2, 1 Westlink Court, Darra, QLD 4076, Australia
9. Christopher M, Gonzalez R, Huynh J, Walker E, Radha Saseendrakumar B, Bowd C, Belghith A, Goldbaum MH, Fazio MA, Girkin CA, De Moraes CG, Liebmann JM, Weinreb RN, Baxter SL, Zangwill LM. Proactive Decision Support for Glaucoma Treatment: Predicting Surgical Interventions with Clinically Available Data. Bioengineering (Basel) 2024;11:140. PMID: 38391627; PMCID: PMC10886033; DOI: 10.3390/bioengineering11020140.
Abstract
A longitudinal ophthalmic dataset was used to investigate multi-modal machine learning (ML) models incorporating patient demographics and history, clinical measurements, optical coherence tomography (OCT), and visual field (VF) testing to predict glaucoma surgical interventions. The cohort included 369 patients who underwent glaucoma surgery and 592 patients who did not. The data types used for prediction included patient demographics, history of systemic conditions, medication history, ophthalmic measurements, 24-2 VF results, and thickness measurements from OCT imaging. The ML models were trained to predict surgical interventions and evaluated on independent data collected at a separate study site. The models were evaluated on their ability to predict surgery at varying lengths of time prior to surgical intervention. The highest-performing models achieved AUCs of 0.93, 0.92, and 0.93 in predicting surgical intervention at 1, 2, and 3 years, respectively. The models also achieved high sensitivity (0.89, 0.77, and 0.86 at 1, 2, and 3 years, respectively) and specificity (0.85, 0.90, and 0.91, respectively) at a 0.80 level of precision. The multi-modal models trained on a combination of data types predicted surgical interventions with high accuracy up to three years prior to surgery and could provide an important tool for predicting the need for glaucoma intervention.
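The paper reports sensitivity and specificity "at a 0.80 level of precision", that is, at the decision threshold where precision reaches 0.80. A small sketch of how such an operating point can be located on a score distribution (illustrative only, not the study's code; thresholds are scanned in ascending order so the most sensitive qualifying point is kept):

```python
import numpy as np

def sens_spec_at_precision(y_true, scores, target_precision=0.80):
    """Sensitivity and specificity at the most sensitive threshold
    whose precision reaches the target; None if no threshold does."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    for t in np.unique(scores):          # candidate thresholds, ascending
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        if tp + fp == 0 or tp / (tp + fp) < target_precision:
            continue
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        return tp / (tp + fn), tn / (tn + fp)
    return None
```

Fixing precision rather than the raw threshold makes operating points comparable across the 1-, 2-, and 3-year prediction horizons, since each horizon has a different score distribution.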
Affiliation(s)
- Mark Christopher: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Ruben Gonzalez: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Justin Huynh: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Evan Walker: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Bharanidharan Radha Saseendrakumar: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Christopher Bowd: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Akram Belghith: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Michael H Goldbaum: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Massimo A Fazio: Department of Ophthalmology and Vision Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Christopher A Girkin: Department of Ophthalmology and Vision Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35233, USA
- Carlos Gustavo De Moraes: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, NY 10032, USA
- Jeffrey M Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, NY 10032, USA
- Robert N Weinreb: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Sally L Baxter: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
- Linda M Zangwill: Hamilton Glaucoma Center and Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, CA 92037, USA
10. Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024;11:122. PMID: 38391608; PMCID: PMC10886285; DOI: 10.3390/bioengineering11020122.
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data and alert physicians to rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations for progressing from bench to bedside.
Affiliation(s)
- Yan Zhu: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li: Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani: Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA

11
Nam Y, Kim J, Kim K, Park KA, Kang M, Cho BH, Oh SY, Kee C, Han J, Lee GI, Kang MC, Lee D, Choi Y, Yun HJ, Park H, Kim J, Cho SJ, Chang DK. Deep learning-based optic disc classification is affected by optic-disc tilt. Sci Rep 2024; 14:498. PMID: 38177229; PMCID: PMC10767025; DOI: 10.1038/s41598-023-50256-4.
Abstract
We aimed to determine the effect of optic disc tilt on deep learning-based optic disc classification. A total of 2507 fundus photographs were acquired from 2236 eyes of 1809 subjects (mean age, 46 years; 53% men). Among all photographs, 1010 (40.3%) had tilted optic discs. Images were annotated to label pathologic changes of the optic disc (normal, glaucomatous optic disc changes, disc swelling, and disc pallor). Deep learning-based classification models of optic disc appearance were then developed using photographs from all subjects, from subjects with tilted discs, and from subjects without tilted discs. Regardless of the deep learning algorithm, the classification models showed better overall performance when developed on data from subjects with non-tilted discs (AUC, 0.988 ± 0.002, 0.991 ± 0.003, and 0.986 ± 0.003 for VGG16, VGG19, and DenseNet121, respectively) than when developed on data from subjects with tilted discs (AUC, 0.924 ± 0.046, 0.928 ± 0.017, and 0.935 ± 0.008, respectively). In classifying each pathologic change, the non-tilted disc models had better sensitivity and specificity than the tilted disc models. The classification models developed on all-subject data also demonstrated lower accuracy in eyes with tilted discs than in eyes with non-tilted discs. These findings suggest the need to identify and adjust for the effect of optic disc tilt in future optic disc classification algorithms.
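The subgroup gap reported above is easy to probe with a small evaluation harness. The sketch below is illustrative only, using synthetic scores rather than the authors' models or data; it computes ROC AUC separately within each subgroup via the rank-based Mann-Whitney formulation.

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC as the probability that a random positive case
    outranks a random negative case (Mann-Whitney formulation)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

def auc_by_subgroup(labels, scores, groups):
    """Evaluate one model separately within each image subgroup
    (e.g., tilted vs. non-tilted discs)."""
    labels, scores, groups = map(np.asarray, (labels, scores, groups))
    return {g: roc_auc(labels[groups == g], scores[groups == g])
            for g in np.unique(groups)}

# synthetic demo: the classifier separates classes less well in "tilted" eyes
rng = np.random.default_rng(0)
n = 400
tilted = rng.random(n) < 0.4
y = rng.random(n) < 0.5
margin = np.where(tilted, 0.8, 2.5)        # weaker separation when tilted
scores = rng.normal(size=n) + y * margin
print(auc_by_subgroup(y, scores, np.where(tilted, "tilted", "non-tilted")))
```

The pairwise formulation is O(n²) but exact (including ties), which is convenient for small per-subgroup test sets.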
Affiliation(s)
- Youngwoo Nam: Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Republic of Korea; Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Joonhyoung Kim: Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyunga Kim: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea; Department of Data Convergence & Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyung-Ah Park: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Mira Kang: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea; Digital Innovation Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Baek Hwan Cho: Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Department of Biomedical Informatics, CHA University School of Medicine, CHA University, Seongnam, Republic of Korea
- Sei Yeul Oh: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Changwon Kee: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jongchul Han: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea; Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Ga-In Lee: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Min Chae Kang: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dongyoung Lee: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Yeeun Choi: Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hee Jee Yun: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Hansol Park: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jiho Kim: Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Soo Jin Cho: Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dong Kyung Chang: Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Division of Gastroenterology, Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea

12
Su E, Mohammadzadeh V, Mohammadi M, Shi L, Law SK, Coleman AL, Caprioli J, Weiss RE, Nouri-Mahdavi K. A Bayesian Hierarchical Spatial Longitudinal Model Improves Estimation of Local Macular Rates of Change in Glaucomatous Eyes. Transl Vis Sci Technol 2024; 13:26. PMID: 38285459; PMCID: PMC10829804; DOI: 10.1167/tvst.13.1.26.
Abstract
Purpose To demonstrate that a novel Bayesian hierarchical spatial longitudinal (HSL) model improves estimation of local macular ganglion cell complex (GCC) rates of change compared with simple linear regression (SLR) and a conditional autoregressive (CAR) model. Methods We analyzed GCC thickness measurements within 49 macular superpixels in 111 eyes (111 patients) with four or more macular optical coherence tomography scans and two or more years of follow-up. We compared superpixel-patient-specific estimates and their posterior variances derived from the latest version of a recently developed Bayesian HSL model, CAR, and SLR, and performed a simulation study to compare the accuracy of intercept and slope estimates in individual superpixels. Results HSL identified a significantly higher proportion of significant negative slopes in 13/49 superpixels and a significantly lower proportion of significant positive slopes in 21/49 superpixels than SLR. In the simulation study, the median (tenth, ninetieth percentile) ratios of mean squared error of SLR [CAR] over HSL for intercepts and slopes were 1.91 (1.23, 2.75) [1.51 (1.05, 2.20)] and 3.25 (1.40, 10.14) [2.36 (1.17, 5.56)], respectively. Conclusions A novel Bayesian HSL model improves the estimation accuracy of patient-specific local GCC rates of change. The proposed model is more than twice as efficient as SLR at estimating superpixel-patient slopes and identifies a higher proportion of deteriorating superpixels than SLR while minimizing false-positive detection rates. Translational Relevance The proposed HSL model can be applied to macular structural measurements to detect individual glaucoma progression earlier and more efficiently in clinical and research settings.
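The advantage of hierarchical modeling over per-superpixel regression can be illustrated on simulated data. The sketch below is not the authors' HSL model; it uses a much simpler empirical-Bayes shrinkage of ordinary least-squares slopes toward the population mean, which captures the same intuition of borrowing strength across superpixels.

```python
import numpy as np

rng = np.random.default_rng(42)
n_px, n_visits = 49, 6
t = np.arange(n_visits, dtype=float)            # years of follow-up

# simulate true per-superpixel GCC slopes around a population mean
true_slope = rng.normal(-0.5, 0.3, n_px)        # um/year thinning
noise_sd = 1.5
y = 60 + true_slope[:, None] * t + rng.normal(0, noise_sd, (n_px, n_visits))

# per-superpixel OLS slope ("SLR" in the abstract)
tc = t - t.mean()
ols = (y - y.mean(axis=1, keepdims=True)) @ tc / (tc @ tc)

# empirical-Bayes shrinkage toward the grand mean slope (partial pooling)
se2 = noise_sd**2 / (tc @ tc)           # sampling variance of each OLS slope
tau2 = max(ols.var() - se2, 1e-9)       # between-superpixel variance estimate
w = tau2 / (tau2 + se2)                 # shrinkage weight in [0, 1]
shrunk = w * ols + (1 - w) * ols.mean()

mse_ols = np.mean((ols - true_slope) ** 2)
mse_eb = np.mean((shrunk - true_slope) ** 2)
print(f"MSE ratio SLR/shrinkage: {mse_ols / mse_eb:.2f}")
```

The full HSL model additionally exploits spatial structure between neighboring superpixels, which this one-dimensional shrinkage ignores.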
Affiliation(s)
- Erica Su: Department of Biostatistics, Fielding School of Public Health, University of California Los Angeles, Los Angeles, California, USA
- Vahid Mohammadzadeh: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Massood Mohammadi: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Lynn Shi: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Simon K Law: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Anne L Coleman: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Joseph Caprioli: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA
- Robert E Weiss: Department of Biostatistics, Fielding School of Public Health, University of California Los Angeles, Los Angeles, California, USA
- Kouros Nouri-Mahdavi: Glaucoma Division, Stein Eye Institute, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, USA

13
Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. PMID: 38088362; DOI: 10.1080/17434440.2023.2294364.
Abstract
INTRODUCTION The steadily growing and aging world population, together with the continuously increasing prevalence of vision-threatening retinal diseases, is placing a growing burden on the global healthcare system. The main challenges in retinology are identifying the comparatively few patients who require therapy within the large screened population, ensuring comprehensive screening for retinal disease, and individualizing therapy planning. To sustain high-quality ophthalmic care in the future, incorporating artificial intelligence (AI) technologies into clinical practice represents a potential solution. AREAS COVERED This review sheds light on already realized and promising future applications of AI techniques in retinal imaging, focusing on diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapeutic planning, and prediction of future developments are explained on the basis of the currently available literature. EXPERT OPINION The recent accomplishments of AI in retinal imaging indicate that its implementation into daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than to replace ophthalmologists.
Affiliation(s)
- Katharina A Heger: Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
- Sebastian M Waldstein: Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria

14
Hwang EE, Chen D, Han Y, Jia L, Shan J. Multi-Dataset Comparison of Vision Transformers and Convolutional Neural Networks for Detecting Glaucomatous Optic Neuropathy from Fundus Photographs. Bioengineering (Basel) 2023; 10:1266. PMID: 38002390; PMCID: PMC10669064; DOI: 10.3390/bioengineering10111266.
Abstract
Glaucomatous optic neuropathy (GON) can be diagnosed and monitored using fundus photography, a widely available and low-cost approach already adopted for automated screening of ophthalmic diseases such as diabetic retinopathy. Despite this, the lack of validated early screening approaches remains a major obstacle in the prevention of glaucoma-related blindness. Deep learning models have gained significant interest as potential solutions, as these models offer objective and high-throughput methods for processing image-based medical data. While convolutional neural networks (CNN) have been widely utilized for these purposes, more recent advances in the application of Transformer architectures have led to new models, including the Vision Transformer (ViT), that have shown promise in many domains of image analysis. However, previous comparisons of these two architectures have not compared models side-by-side on more than a single dataset, making it unclear which model is more generalizable or performs better in different clinical contexts. Our purpose is to investigate comparable ViT and CNN models tasked with GON detection from fundus photographs and to highlight their respective strengths and weaknesses. We train CNN and ViT models on six unrelated, publicly available databases and compare their performance using well-established statistics including AUC, sensitivity, and specificity. Our results indicate that ViT models often show superior performance compared with a similarly trained CNN model, particularly when non-glaucomatous images are over-represented in a given dataset. We discuss the clinical implications of these findings and suggest that ViT can further the development of accurate and scalable GON detection for this leading cause of irreversible blindness worldwide.
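Side-by-side AUC comparisons of the kind described can be given uncertainty bounds with a paired bootstrap over the shared test set. The sketch below is a generic recipe, not the authors' evaluation code.

```python
import numpy as np

def auc(y, s):
    """Tie-aware ROC AUC via pairwise comparisons."""
    y = np.asarray(y, bool); s = np.asarray(s, float)
    pos, neg = s[y], s[~y]
    return float((pos[:, None] > neg[None, :]).mean()
                 + 0.5 * (pos[:, None] == neg[None, :]).mean())

def paired_bootstrap_delta_auc(y, s_a, s_b, n_boot=2000, seed=0):
    """95% CI for AUC(model A) - AUC(model B), resampling the
    same eyes for both models so the comparison stays paired."""
    rng = np.random.default_rng(seed)
    y, s_a, s_b = (np.asarray(v) for v in (y, s_a, s_b))
    deltas = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():    # resample lacks one class; skip
            continue
        deltas.append(auc(y[idx], s_a[idx]) - auc(y[idx], s_b[idx]))
    lo, hi = np.percentile(deltas, [2.5, 97.5])
    return float(lo), float(hi)

# demo: model A ranks cases better than model B on the same labels
rng = np.random.default_rng(7)
y = np.tile([0, 1], 50)
s_a = y + rng.normal(0, 0.5, 100)    # stronger model
s_b = y + rng.normal(0, 2.0, 100)    # weaker model
lo, hi = paired_bootstrap_delta_auc(y, s_a, s_b)
print(f"95% CI for AUC difference: [{lo:.3f}, {hi:.3f}]")
```

A CI excluding zero suggests a real ranking difference between the two architectures on that dataset.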
Affiliation(s)
- Elizabeth E. Hwang: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA; Medical Scientist Training Program, University of California, San Francisco, San Francisco, CA 94143, USA
- Dake Chen: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Ying Han: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA
- Lin Jia: Digillect LLC, San Francisco, CA 94158, USA
- Jing Shan: Department of Ophthalmology, University of California, San Francisco, San Francisco, CA 94143, USA

15
Saha S, Vignarajan J, Frost S. A fast and fully automated system for glaucoma detection using color fundus photographs. Sci Rep 2023; 13:18408. PMID: 37891238; PMCID: PMC10611813; DOI: 10.1038/s41598-023-44473-0.
Abstract
This paper presents a computationally lightweight and memory-efficient convolutional neural network (CNN)-based fully automated system for detecting glaucoma, a leading cause of irreversible blindness worldwide. Using color fundus photographs, the system detects glaucoma in two steps. In the first step, the optic disc region is located using the You Only Look Once (YOLO) CNN architecture. In the second step, classification into 'glaucomatous' and 'non-glaucomatous' is performed using the MobileNet architecture. A simplified, context-specific version of the original YOLO net is also proposed. Extensive experiments are conducted with seven state-of-the-art CNNs of varying computational intensity: MobileNetV2, MobileNetV3, Custom ResNet, InceptionV3, ResNet50, an 18-layer CNN, and InceptionResNetV2. A total of 6671 fundus images collected from seven publicly available glaucoma datasets are used for the experiments. The system achieves an accuracy and F1 score of 97.4% and 97.3%, with sensitivity, specificity, and AUC of 97.5%, 97.2%, and 99.3%, respectively. These findings are comparable with the best reported methods in the literature. With comparable or better performance, the proposed system produces decisions significantly faster and drastically reduces resource requirements; for example, it requires 12 times less memory than ResNet50 and produces decisions 2 times faster. With its substantially lower memory footprint and faster processing, the proposed system can be embedded directly into resource-limited devices such as portable fundus cameras.
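The two-step design (localize the optic disc, then classify the crop) can be organized as a small pluggable pipeline. The detector and classifier below are toy stand-ins for illustration, not the YOLO and MobileNet models of the paper.

```python
from dataclasses import dataclass
from typing import Callable, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # x0, y0, x1, y1 in pixel coordinates

@dataclass
class TwoStageGlaucomaScreen:
    detect_disc: Callable[[np.ndarray], Box]   # step 1: localize optic disc
    classify: Callable[[np.ndarray], float]    # step 2: score the cropped disc

    def __call__(self, fundus: np.ndarray) -> float:
        x0, y0, x1, y1 = self.detect_disc(fundus)
        crop = fundus[y0:y1, x0:x1]            # restrict step 2 to the ROI
        return self.classify(crop)

# toy stand-ins: brightest-pixel "detector" and mean-intensity "classifier"
def toy_detector(img: np.ndarray) -> Box:
    y, x = np.unravel_index(np.argmax(img), img.shape)
    h, w = img.shape
    r = 8
    return (max(x - r, 0), max(y - r, 0), min(x + r, w), min(y + r, h))

def toy_classifier(crop: np.ndarray) -> float:
    return float(crop.mean() / 255.0)

screen = TwoStageGlaucomaScreen(toy_detector, toy_classifier)
img = np.zeros((64, 64))
img[30:34, 40:44] = 255.0                      # bright "disc"
print(f"score = {screen(img):.4f}")
```

Keeping the two stages behind callables makes it easy to swap in the real detector and classifier once trained, without touching the pipeline logic.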
Affiliation(s)
- Sajib Saha: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- Janardhan Vignarajan: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- Shaun Frost: Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia

16
Bowd C, Belghith A, Rezapour J, Christopher M, Jonas JB, Hyman L, Fazio MA, Weinreb RN, Zangwill LM. Multimodal Deep Learning Classifier for Primary Open Angle Glaucoma Diagnosis Using Wide-Field Optic Nerve Head Cube Scans in Eyes With and Without High Myopia. J Glaucoma 2023; 32:841-847. PMID: 37523623; DOI: 10.1097/ijg.0000000000002267.
Abstract
PRCIS An optical coherence tomography (OCT)-based multimodal deep learning (DL) classification model, including texture information, is introduced that outperforms single-modal models and multimodal models without texture information for glaucoma diagnosis in eyes with and without high myopia. BACKGROUND/AIMS To evaluate the diagnostic accuracy of a multimodal DL classifier using wide OCT optic nerve head cube scans in eyes with and without axial high myopia. MATERIALS AND METHODS Three hundred seventy-one primary open angle glaucoma (POAG) eyes and 86 healthy eyes, all without axial high myopia [axial length (AL) ≤ 26 mm], and 92 POAG eyes and 44 healthy eyes, all with axial high myopia (AL > 26 mm), were included. The multimodal DL classifier combined features of 3 individual VGG-16 models: (1) texture-based en face image, (2) retinal nerve fiber layer (RNFL) thickness map image, and (3) confocal scanning laser ophthalmoscope (cSLO) image. Age-, AL-, and disc area-adjusted areas under the receiver operating characteristic curve (AUROC) were used to compare model accuracy. RESULTS The adjusted AUROC for the multimodal DL model was 0.91 (95% CI = 0.87, 0.95). This value was significantly higher than the values of the individual models [0.83 (0.79, 0.86) for texture-based en face image; 0.84 (0.81, 0.87) for RNFL thickness map; and 0.68 (0.61, 0.74) for cSLO image; all P ≤ 0.05]. Using only highly myopic eyes, the multimodal DL model showed significantly higher diagnostic accuracy [0.89 (0.86, 0.92)] compared with the texture en face image [0.83 (0.78, 0.85)], RNFL [0.85 (0.81, 0.86)], and cSLO image models [0.69 (0.63, 0.76)] (all P ≤ 0.05). CONCLUSIONS Combining OCT-based RNFL thickness maps with texture-based en face images showed a better ability to discriminate between healthy and POAG eyes than thickness maps alone, particularly in highly axially myopic eyes.
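The multimodal combination described above amounts to fusing per-modality features into one joint representation before the final decision. A minimal late-fusion sketch follows, with toy feature vectors and a logistic head standing in for the study's three VGG-16 branches.

```python
import numpy as np

def fuse_and_score(feats, weights, bias):
    """Late fusion: concatenate per-modality feature vectors
    (e.g., texture en face, RNFL thickness map, cSLO) and apply
    a logistic head to obtain a POAG probability."""
    z = np.concatenate(feats)              # one joint representation
    logit = float(weights @ z + bias)
    return 1.0 / (1.0 + np.exp(-logit))

# toy 4-D embeddings for each of the three modalities
texture, rnfl, cslo = np.ones(4), np.zeros(4), np.full(4, 0.5)
w = np.zeros(12)
w[:4] = 1.0                                # this head weights only texture features
print(fuse_and_score([texture, rnfl, cslo], w, bias=-4.0))
```

In the actual study each branch is a trained network and the fusion weights are learned; the point of the sketch is only the shape of the computation.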
Affiliation(s)
- Christopher Bowd: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Akram Belghith: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Jasmin Rezapour: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center; Department of Ophthalmology, University Medical Center of the Johannes Gutenberg University Mainz
- Mark Christopher: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Jost B Jonas: Department of Ophthalmology, Heidelberg University, Mannheim, Germany
- Leslie Hyman: Vickie and Jack Farber Vision Research Center, Wills Eye Hospital, Thomas Jefferson University, Philadelphia, PA
- Massimo A Fazio: Department of Ophthalmology and Visual Sciences, The University of Alabama at Birmingham, Birmingham, AL
- Robert N Weinreb: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center
- Linda M Zangwill: Viterbi Family Department of Ophthalmology, Shiley Eye Institute, Hamilton Glaucoma Center

17
Feng HW, Chen JJ, Zhang ZC, Zhang SC, Yang WH. Bibliometric analysis of artificial intelligence and optical coherence tomography images: research hotspots and frontiers. Int J Ophthalmol 2023; 16:1431-1440. PMID: 37724282; PMCID: PMC10475613; DOI: 10.18240/ijo.2023.09.09.
Abstract
AIM To explore the latest applications of artificial intelligence (AI) to optical coherence tomography (OCT) images, analyze the current state of research on AI in OCT, and discuss future research trends. METHODS On June 1, 2023, a bibliometric analysis of the Web of Science Core Collection was performed to explore the use of AI on OCT imagery. Key parameters such as papers, countries/regions, citations, databases, organizations, keywords, journal names, and research hotspots were extracted and visualized using the VOSviewer and CiteSpace V bibliometric platforms. RESULTS Fifty-five nations reported studies on AI and its application to analyzing OCT images. The United States published the largest number of papers. Furthermore, 197 institutions worldwide contributed published articles, with the University of London having the most publications. The reference clusters in the study fell into four categories: thickness and eyes, diabetic retinopathy (DR), images and segmentation, and OCT classification. CONCLUSION The latest hot topics and future directions in this field are identified, and the dynamic evolution of AI-based OCT imaging is outlined. AI-based OCT imaging holds great potential for revolutionizing clinical care.
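The keyword cluster maps produced by tools such as VOSviewer are built from keyword co-occurrence counts across papers. A minimal sketch of that counting step, on toy records rather than the study's data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often keyword pairs appear together across papers;
    the resulting edge weights drive co-occurrence cluster maps."""
    pairs = Counter()
    for keywords in records:
        # sort so each unordered pair gets one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    {"deep learning", "OCT", "diabetic retinopathy"},
    {"deep learning", "OCT", "segmentation"},
    {"OCT", "glaucoma"},
]
edges = cooccurrence(papers)
print(edges.most_common(3))
```

Thresholding these edge weights and running a community-detection step is what turns the counts into the reference clusters the abstract describes.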
Affiliation(s)
- Hai-Wen Feng: Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Jun-Jie Chen: Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang 110870, Liaoning Province, China
- Zhi-Chang Zhang: Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang 110122, Liaoning Province, China
- Shao-Chong Zhang: Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China
- Wei-Hua Yang: Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China

18
Pucchio A, Krance S, Pur DR, Bassi A, Miranda R, Felfeli T. The role of artificial intelligence in analysis of biofluid markers for diagnosis and management of glaucoma: A systematic review. Eur J Ophthalmol 2023; 33:1816-1833. PMID: 36426575; PMCID: PMC10469503; DOI: 10.1177/11206721221140948.
Abstract
PURPOSE This review focuses on the utility of artificial intelligence (AI) in the analysis of biofluid markers in glaucoma. We detail the accuracy and validity of AI in the exploration of biomarkers to provide insight into glaucoma pathogenesis. METHODS A comprehensive search was conducted across five electronic databases including Embase, Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science. Studies pertaining to biofluid marker analysis using AI or bioinformatics in glaucoma were included. Identified studies were critically appraised and assessed for risk of bias using the Joanna Briggs Institute Critical Appraisal tools. RESULTS A total of 10,258 studies were screened and 39 studies met the inclusion criteria, including 23 cross-sectional studies (59%), nine prospective cohort studies (23%), six retrospective cohort studies (15%), and one case-control study (3%). Primary open angle glaucoma (POAG) was the most commonly studied subtype (55% of included studies). Twenty-four studies examined disease characteristics, 10 explored treatment decisions, and 5 provided diagnostic clarification. While studies examined entire metabolomic or proteomic profiles to determine changes in POAG, the data were heterogeneous, with over 175 unique differentially expressed biomarkers reported. Discriminant analysis and artificial neural network predictive models displayed strong ability to differentiate between glaucoma patients and controls, although these tools were untested in a clinical context. CONCLUSION The use of AI models could inform glaucoma diagnosis with high sensitivity and specificity. While insight into differentially expressed biomarkers is valuable for pathogenic exploration, no clear pathogenic mechanism in glaucoma has emerged.
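The discriminant models the review describes can be approximated in spirit by a very small classifier over standardized biomarker levels. The sketch below is a nearest-centroid stand-in with toy data, not any model from the reviewed studies.

```python
import numpy as np

class NearestCentroid:
    """Tiny discriminant classifier: standardize biomarker levels,
    then assign each sample to the nearest class centroid."""

    def fit(self, X, y):
        X = np.asarray(X, float)
        y = np.asarray(y)
        self.mu_, self.sd_ = X.mean(0), X.std(0) + 1e-12
        Z = (X - self.mu_) / self.sd_          # z-score each biomarker
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([Z[y == c].mean(0) for c in self.classes_])
        return self

    def predict(self, X):
        Z = (np.asarray(X, float) - self.mu_) / self.sd_
        d = ((Z[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(1)]

# toy biomarker panel: controls low, POAG elevated on marker 0
X = [[1.0, 5.0], [1.2, 4.8], [3.0, 5.1], [3.2, 5.0]]
y = ["control", "control", "POAG", "POAG"]
model = NearestCentroid().fit(X, y)
print(model.predict([[1.1, 5.0], [3.1, 4.9]]))
```

Real discriminant analyses additionally model covariance between biomarkers; this centroid rule is the simplest member of that family.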
Affiliation(s)
- Aidan Pucchio: School of Medicine, Queen's University, Kingston, Ontario, Canada
- Saffire Krance: Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Daiana R Pur: Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Arshpreet Bassi: Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Rafael Miranda: Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada; The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Tina Felfeli: Toronto Health Economics and Technology Assessment Collaborative, University of Toronto, Toronto, Ontario, Canada; Department of Ophthalmology and Visual Sciences, University of Toronto, Toronto, Ontario, Canada; The Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada

19
Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:392-401. PMID: 37523431; DOI: 10.1097/apo.0000000000000619.
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
Affiliation(s)
- Byoungyoung Gu: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Sophia Sidhu: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Robert N Weinreb: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Mark Christopher: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Linda M Zangwill: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Sally L Baxter: Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US; Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US

20
Sarossy M, Stepnicka K, Sarossy A, Wu Z. Using texture-based features from the continuous wavelet transform of the electroretinogram to predict glaucoma. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082633; DOI: 10.1109/embc40787.2023.10341019.
Abstract
Glaucoma is an optic neuropathy resulting in the progressive loss of retinal ganglion cells (RGCs). The photopic negative response (PhNR) of the electroretinogram (ERG) has been used to measure RGC function objectively. This study explored whether textural features extracted from the continuous wavelet transform of the ERG, combined with ERG amplitude markers, predict glaucoma severity more effectively than the ERG markers alone. One hundred three eyes of 55 participants who underwent ERG testing with a protocol targeted at the PhNR were included in this study. Predictive models for glaucoma severity based on the estimated RGC count were fitted using multivariate adaptive regression splines (MARS). Models informed by a combination of amplitude markers and texture analysis had better predictive performance (R² = 0.492) than models informed by markers alone (R² = 0.349; p = 0.009). Clinical Relevance: As a direct measure of retinal function, the ERG has the potential to determine the health of RGCs. This study demonstrates that there are additional data within the ERG available to clinicians, with the potential to improve the diagnosis and management of glaucoma.
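The R² values compared in this abstract measure the fraction of variance in estimated RGC count that each MARS model explains. As a minimal illustration (not the authors' code), the coefficient of determination can be computed as:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    A higher R^2 (e.g., 0.492 vs. 0.349 in the abstract) means the model
    explains a larger fraction of the variance in the outcome."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)   # total variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual variance
    return 1.0 - ss_res / ss_tot
```

A perfect model yields R² = 1, while a model no better than predicting the mean yields R² = 0.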
21
AlRyalat SA, Singh P, Kalpathy-Cramer J, Kahook MY. Artificial Intelligence and Glaucoma: Going Back to Basics. Clin Ophthalmol 2023; 17:1525-1530. [PMID: 37284059] [PMCID: PMC10239633] [DOI: 10.2147/opth.s410905]
Abstract
There has been a recent surge in the number of publications centered on the use of artificial intelligence (AI) to diagnose various systemic diseases, and the Food and Drug Administration has approved several algorithms for use in clinical practice. In ophthalmology, most advances in AI relate to diabetic retinopathy, a disease process with agreed-upon diagnostic and classification criteria. This is not the case for glaucoma, a relatively complex disease without agreed-upon diagnostic criteria. Moreover, currently available public datasets that focus on glaucoma have inconsistent label quality, further complicating attempts to train AI algorithms efficiently. In this perspective paper, we discuss specific details related to developing AI models for glaucoma and suggest potential steps to overcome current limitations.
Affiliation(s)
- Praveer Singh
- Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
- Jayashree Kalpathy-Cramer
- Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
- Malik Y Kahook
- Department of Ophthalmology, University of Colorado School of Medicine, Sue Anschutz-Rodgers Eye Center, Aurora, CO, USA
22
Kim JA, Yoon H, Lee D, Kim M, Choi J, Lee EJ, Kim TW. Development of a deep learning system to detect glaucoma using macular vertical optical coherence tomography scans of myopic eyes. Sci Rep 2023; 13:8040. [PMID: 37198215] [DOI: 10.1038/s41598-023-34794-5]
Abstract
Myopia is a risk factor for glaucoma, making accurate diagnosis of glaucoma in myopic eyes particularly important. Such diagnosis is challenging, however, because myopic eyes frequently show distortion of the optic disc and of the parapapillary and macular structures. The macular vertical scan has been suggested as a useful tool to detect glaucomatous retinal nerve fiber layer loss even in highly myopic eyes. The present study was performed to develop and validate a deep learning (DL) system to detect glaucoma in myopic eyes using macular vertical optical coherence tomography (OCT) scans and to compare its diagnostic power with that of circumpapillary OCT scans. The study included a training set of 1416 eyes, a validation set of 471 eyes, a test set of 471 eyes, and an external test set of 249 eyes. The ability to diagnose glaucoma in eyes with large myopic parapapillary atrophy was greater with the vertical than with the circumpapillary OCT scans, with areas under the receiver operating characteristic curves of 0.976 and 0.914, respectively. These findings suggest that DL artificial intelligence based on macular vertical scans may be a promising tool for the diagnosis of glaucoma in myopic eyes.
Affiliation(s)
- Ji-Ah Kim
- Department of Ophthalmology, Ewha Womans University College of Medicine, Ewha Womans University Seoul Hospital, Seoul, Korea
- Hanbit Yoon
- Department of Machine Learning and Computer Vision, Sungkyunkwan University, Suwon, Korea
- Dayun Lee
- Department of Computing, Sungkyunkwan University College of Computing and Informatics, Sungkyunkwan University, Suwon, Korea
- MoonHyun Kim
- Department of Computing, Sungkyunkwan University College of Computing and Informatics, Sungkyunkwan University, Suwon, Korea
- Hippo T&C, Suwon, Korea
- Eun Ji Lee
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 23347, Korea
- Tae-Woo Kim
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro, 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 23347, Korea
23
Zhang L, Tang L, Xia M, Cao G. The application of artificial intelligence in glaucoma diagnosis and prediction. Front Cell Dev Biol 2023; 11:1173094. [PMID: 37215077] [PMCID: PMC10192631] [DOI: 10.3389/fcell.2023.1173094]
Abstract
Artificial intelligence is a multidisciplinary and collaborative science, and the ability of deep learning to extract and process image features gives it a unique advantage in ophthalmology. Deep learning systems can assist ophthalmologists in diagnosing characteristic fundus lesions in glaucoma, such as retinal nerve fiber layer defects, optic nerve head damage, and optic disc hemorrhage. Early detection of these lesions can help delay structural damage, protect visual function, and reduce visual field loss. The development of deep learning led to the emergence of deep convolutional neural networks, which are pushing the integration of artificial intelligence with testing devices such as visual field analyzers, fundus imaging, and optical coherence tomography, driving more rapid advances in clinical glaucoma diagnosis and prediction. This article details advances in artificial intelligence combined with visual field testing, fundus photography, and optical coherence tomography in glaucoma diagnosis and prediction, some of which are familiar and some not widely known, and then explores the challenges at this stage and the prospects for future clinical applications. In the future, deeper cooperation between artificial intelligence and medical technology should make datasets and clinical application rules more standardized and glaucoma diagnosis and prediction tools simpler, benefiting multiple ethnic groups.
Affiliation(s)
- Linyu Zhang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Li Tang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Min Xia
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Guofan Cao
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
24
Le D, Abtahi M, Adejumo T, Ebrahimi B, Dadzie AK, Son T, Yao X. Deep learning for artery-vein classification in optical coherence tomography angiography. Exp Biol Med (Maywood) 2023; 248:747-761. [PMID: 37452729] [PMCID: PMC10468646] [DOI: 10.1177/15353702231181182]
Abstract
Major retinopathies can differentially impact the arteries and veins. Traditional fundus photography provides limited resolution for visualizing retinal vascular details. Optical coherence tomography (OCT) provides improved resolution for retinal imaging but cannot discern capillary-level structures because of its limited image contrast. As a functional extension of OCT, optical coherence tomography angiography (OCTA) is a noninvasive, label-free method for enhanced-contrast visualization of the retinal vasculature at the capillary level. Recently, differential artery-vein (AV) analysis in OCTA has been demonstrated to improve sensitivity for the staging of retinopathies, making AV classification an essential step for disease detection and diagnosis. However, current methods for AV classification in OCTA have employed multiple imagers (fundus photography and OCT) and complex algorithms, making them difficult to deploy clinically. In contrast, deep learning (DL) algorithms may reduce computational complexity and automate AV classification. In this article, we summarize traditional AV classification methods and recent DL methods for AV classification in OCTA, and we discuss methods for interpretability in DL models.
Affiliation(s)
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Tobiloba Adejumo
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Taeyoon Son
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
25
Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023; 13:168-183. [PMID: 37484617] [PMCID: PMC10361424] [DOI: 10.4103/tjo.tjo-d-23-00022]
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and for monitoring progression. In glaucoma research, AI has been used to understand progression patterns and to forecast disease trajectory based on analysis of clinical and imaging data. Techniques such as machine learning, natural language processing, and deep learning have been employed for this purpose. The results of studies using AI to forecast glaucoma progression, however, vary considerably owing to dataset constraints, the lack of a standard progression definition, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, in this narrative review we focus on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies with translational potential, and provide suggestions on how future research addressing glaucoma progression can be improved.
Affiliation(s)
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Linh Le Dinh
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Raghavan Lavanya
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
26
Arnould L, Meriaudeau F, Guenancia C, Germanese C, Delcourt C, Kawasaki R, Cheung CY, Creuzot-Garcher C, Grzybowski A. Using Artificial Intelligence to Analyse the Retinal Vascular Network: The Future of Cardiovascular Risk Assessment Based on Oculomics? A Narrative Review. Ophthalmol Ther 2023; 12:657-674. [PMID: 36562928] [PMCID: PMC10011267] [DOI: 10.1007/s40123-022-00641-5]
Abstract
The healthcare burden of cardiovascular diseases remains a major issue worldwide. Understanding the underlying mechanisms and improving the identification of people with a higher-risk profile for systemic vascular disease through noninvasive examinations is crucial. In ophthalmology, retinal vascular network imaging is simple and noninvasive and can provide in vivo information on the microstructure and health of the vasculature. For more than 10 years, different research teams have been developing software to enable automatic analysis of the retinal vascular network from different imaging techniques (retinal fundus photographs, OCT angiography, adaptive optics, etc.) and to describe the geometric characteristics of its arterial and venous components. The structure of the retinal vessels could thus be considered a witness to systemic vascular status. A new approach called "oculomics," which applies artificial intelligence algorithms to retinal image datasets, has recently increased interest in retinal microvascular biomarkers. Despite the large volume of associated research, the role of retinal biomarkers in the screening, monitoring, or prediction of systemic vascular disease remains uncertain. A PubMed search conducted through August 2022 yielded relevant peer-reviewed articles meeting a set of inclusion criteria. This literature review summarizes the state of the art in oculomics and cardiovascular disease research.
Affiliation(s)
- Louis Arnould
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France
- University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France
- Fabrice Meriaudeau
- Laboratory ImViA, IFTIM, Université Bourgogne Franche-Comté, 21078, Dijon, France
- Charles Guenancia
- Pathophysiology and Epidemiology of Cerebro-Cardiovascular Diseases (EA 7460), Faculty of Health Sciences, Université de Bourgogne Franche-Comté, Dijon, France
- Cardiology Department, Dijon University Hospital, Dijon, France
- Clément Germanese
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France
- Cécile Delcourt
- University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France
- Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Osaka, Japan
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Catherine Creuzot-Garcher
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France
- Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
- Institute for Research in Ophthalmology, Poznan, Poland
27
Lemij HG, de Vente C, Sánchez CI, Vermeer KA. Characteristics of a large, labeled dataset for the training of artificial intelligence for glaucoma screening with fundus photographs. Ophthalmol Sci 2023; 3:100300. [PMID: 37113471] [PMCID: PMC10127130] [DOI: 10.1016/j.xops.2023.100300]
Abstract
Purpose Significant visual impairment due to glaucoma is largely caused by the disease being detected too late. Objective To build a labeled dataset for training artificial intelligence (AI) algorithms for glaucoma screening by fundus photography, to assess the accuracy of the graders, and to characterize the features of all eyes with referable glaucoma (RG). Design Cross-sectional study. Subjects Color fundus photographs (CFPs) of 113 893 eyes of 60 357 individuals were obtained from EyePACS, California, United States, from a population screening program for diabetic retinopathy. Methods Carefully selected graders (ophthalmologists and optometrists) graded the images. To qualify, they had to pass the European Optic Disc Assessment Trial optic disc assessment with ≥ 85% accuracy and 92% specificity. Of 90 candidates, 30 passed. Each image of the EyePACS set was then scored by varying random pairs of graders as "RG," "no referable glaucoma (NRG)," or "ungradable (UG)." In case of disagreement, a glaucoma specialist made the final grading. Referable glaucoma was scored if visual field damage was expected. In case of RG, graders were instructed to mark up to 10 relevant glaucomatous features. Main Outcome Measures Qualitative features in eyes with RG. Results The performance of each grader was monitored; if their sensitivity and specificity dropped below 80% and 95%, respectively (the final grade served as reference), they exited the study and their gradings were redone by other graders. In all, 20 graders qualified; their mean sensitivity and specificity (standard deviation [SD]) were 85.6% (5.7) and 96.1% (2.8), respectively. The 2 graders agreed in 92.45% of the images (Gwet's AC2, expressing the inter-rater reliability, was 0.917). Across all gradings, the sensitivity and specificity (95% confidence interval) were 86.0 (85.2-86.7)% and 96.4 (96.3-96.5)%, respectively. Of all gradable eyes (n = 111 183; 97.62%), the prevalence of RG was 4.38%. The most common features of RG were the appearance of the neuroretinal rim (NRR) inferiorly and superiorly. Conclusions A large dataset of CFPs of sufficient quality to develop AI screening solutions for glaucoma was assembled. The most common features of RG were the appearance of the NRR inferiorly and superiorly. Disc hemorrhages were a rare feature of RG. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
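Grader monitoring in this study hinged on sensitivity and specificity computed against the final (reference) grade. A minimal sketch of that computation, using hypothetical labels rather than the study's data or code:

```python
def sensitivity_specificity(grader_labels, reference_labels):
    """Sensitivity: fraction of reference-RG eyes the grader flagged as RG.
    Specificity: fraction of reference-NRG eyes the grader cleared as NRG.
    Labels are 'RG' (referable glaucoma) or 'NRG' (no referable glaucoma)."""
    tp = fn = tn = fp = 0
    for g, r in zip(grader_labels, reference_labels):
        if r == "RG":            # reference says referable
            if g == "RG":
                tp += 1
            else:
                fn += 1
        elif r == "NRG":         # reference says not referable
            if g == "NRG":
                tn += 1
            else:
                fp += 1
    return tp / (tp + fn), tn / (tn + fp)
```

In the study's scheme, a grader whose running sensitivity fell below 80% or specificity below 95% on such a computation exited and was regraded by others.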
28
A deep learning model incorporating spatial and temporal information successfully detects visual field worsening using a consensus based approach. Sci Rep 2023; 13:1041. [PMID: 36658309] [PMCID: PMC9852268] [DOI: 10.1038/s41598-023-28003-6]
Abstract
Glaucoma is a leading cause of irreversible blindness, and its worsening is most often monitored with visual field (VF) testing. Deep learning models (DLM) may help identify VF worsening consistently and reproducibly. In this study, we developed and investigated the performance of a DLM on a large population of glaucoma patients. We included 5099 patients (8705 eyes) seen at one institution from June 1990 to June 2020 who had VF testing as well as clinician assessment of VF worsening. Since there is no gold standard for identifying VF worsening, we used a consensus of six commonly used algorithmic methods, including global regressions as well as point-wise change in the VFs, and used the consensus decision as a reference standard to train and test the DLM and to evaluate clinician performance. Eighty percent, 10%, and 10% of patients were included in the training, validation, and test sets, respectively. Of the 873 eyes in the test set, 309 (60.6%) were from females, and the median age was 62.4 years (IQR 54.8-68.9). The DLM achieved an AUC of 0.94 (95% CI 0.93-0.99). Even after removing the 6 most recent VFs, providing fewer data points to the model, the DLM successfully identified worsening with an AUC of 0.78 (95% CI 0.72-0.84). Clinician assessment of worsening (based on documentation from the health record at the time of the final VF in each eye) had an AUC of 0.64 (95% CI 0.63-0.66). Both the DLM and clinicians performed worse when the initial disease was more severe. These data show that a DLM trained on a consensus of methods to define worsening successfully identified VF worsening and could help guide clinicians during routine clinical care.
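The AUC figures reported above can be read as the probability that a randomly chosen worsening eye receives a higher model score than a randomly chosen stable eye (the Mann-Whitney interpretation). A minimal, illustrative computation, not the study's actual pipeline:

```python
def auc_from_scores(worsening_scores, stable_scores):
    """Empirical AUC: fraction of (worsening, stable) pairs in which the
    worsening eye scores higher; ties count as half a win. Equivalent to
    the normalized Mann-Whitney U statistic."""
    pairs = len(worsening_scores) * len(stable_scores)
    wins = 0.0
    for w in worsening_scores:
        for s in stable_scores:
            if w > s:
                wins += 1.0
            elif w == s:
                wins += 0.5
    return wins / pairs
```

An AUC of 0.94 thus means the model ranks a worsening eye above a stable eye about 94% of the time, while 0.5 corresponds to chance.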
29
Coan LJ, Williams BM, Krishna Adithya V, Upadhyaya S, Alkafri A, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. Surv Ophthalmol 2023; 68:17-41. [DOI: 10.1016/j.survophthal.2022.08.005]
Abstract
Glaucoma is a leading cause of irreversible vision impairment, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed at areas for future research.
Affiliation(s)
- Lauren J Coan
- School of Computer Science and Mathematics, Liverpool John Moores University, UK
- Bryan M Williams
- School of Computing and Communications, Lancaster University, UK
- Swati Upadhyaya
- Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Ala Alkafri
- School of Computing, Engineering & Digital Technologies, Teesside University, UK
- Silvester Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
- Rengaraj Venkatesh
- Department of Glaucoma and Chief Medical Officer, Aravind Eye Hospital, Pondicherry, India
- Gabriela Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
30
Aspberg J, Heijl A, Bengtsson B. Estimating the Length of the Preclinical Detectable Phase for Open-Angle Glaucoma. JAMA Ophthalmol 2023; 141:48-54. [PMID: 36416831] [PMCID: PMC9857634] [DOI: 10.1001/jamaophthalmol.2022.5056]
Abstract
Importance A 50% reduction in glaucoma-related blindness has previously been demonstrated in a population that was screened for open-angle glaucoma. Ongoing screening trials of high-risk populations and forthcoming low-cost screening methods suggest that such screening may become more common in the future. One would then need to estimate a key component of the natural history of chronic disease, the mean preclinical detectable phase (PCDP). Knowledge of the PCDP is essential for the planning and early evaluation of screening programs and has been estimated for several types of cancer that are screened for. Objective To estimate the mean PCDP for open-angle glaucoma. Design, Setting, and Participants A large population-based screening for open-angle glaucoma was conducted from October 1992 to January 1997 in Malmö, Sweden, including 32 918 participants aged 57 to 77 years. A retrospective medical record review was conducted to assess the prevalence of newly detected cases at the screening, the incidence of new cases after the screening, and the expected clinical incidence, ie, the number of new glaucoma cases expected to be detected without a screening. The latter was derived from incident cases in the screened age cohorts before the screening started and from older cohorts not invited to the screening. A total of 2029 patients were included in the current study. Data were analyzed from March 2020 to October 2021. Main Outcomes and Measures The length of the mean PCDP was calculated by 2 different methods: first, by dividing the prevalence of screen-detected glaucoma by the clinical incidence, assuming that the screening sensitivity was 100%; and second, by using a Markov chain Monte Carlo (MCMC) model simulation that simultaneously derived both the length of the mean PCDP and the sensitivity of the screening. Results Of 2029 included patients, 1352 (66.6%) were female. Among the 1420 screened patients, the mean age at screening was 67.4 years (95% CI, 67.2-67.7). The mean length of the PCDP of the whole study population was 10.7 years (95% CI, 8.7-13.0) by the prevalence/incidence method and 10.1 years (95% credible interval, 8.9-11.2) by the MCMC method. Conclusions and Relevance The mean PCDP was similar for both methods of analysis, approximately 10 years. A mean PCDP of 10 years allows for screening at reasonably long intervals, eg, 5 years.
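The first estimation method described above is a one-line calculation: the mean PCDP is the prevalence of screen-detected disease divided by the expected annual clinical incidence, under the assumption of 100% screening sensitivity. A sketch with purely hypothetical input values (not the Malmö study's figures), chosen only to show how a roughly 10.7-year PCDP arises:

```python
def mean_pcdp_years(screen_detected_prevalence, annual_clinical_incidence):
    """Prevalence/incidence estimate of the mean preclinical detectable
    phase (PCDP), valid only if screening sensitivity is assumed to be 100%."""
    return screen_detected_prevalence / annual_clinical_incidence

# Hypothetical inputs: 4.38% prevalence at screening, 0.41%/year clinical incidence.
pcdp = mean_pcdp_years(0.0438, 0.0041)  # about 10.7 years
```

The MCMC method in the abstract relaxes the 100%-sensitivity assumption by estimating sensitivity and PCDP jointly.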
Affiliation(s)
- Johan Aspberg
- Department of Clinical Sciences in Malmö, Ophthalmology, Lund University, Malmö, Sweden
- Department of Ophthalmology, Skåne University Hospital, Malmö, Sweden
- Anders Heijl
- Department of Clinical Sciences in Malmö, Ophthalmology, Lund University, Malmö, Sweden
- Department of Ophthalmology, Skåne University Hospital, Malmö, Sweden
- Boel Bengtsson
- Department of Clinical Sciences in Malmö, Ophthalmology, Lund University, Malmö, Sweden
31
Haider A, Arsalan M, Park C, Sultan H, Park KR. Exploring deep feature-blending capabilities to assist glaucoma screening. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109918]
32
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022; 4:e899-e905. [PMID: 36427951] [DOI: 10.1016/s2589-7500(22)00186-8]
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review on methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five affect population, one intervention, two comparator, one reference standard, and one both reference standard and comparator; four are applicable to outcome and one to study design. Principles from the literature were useful for addressing biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh: King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan: Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker: Department of Computing, Imperial College London, London, UK
- Peter Garrett: Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat: UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
33
Zang P, Hormel TT, Hwang TS, Bailey ST, Huang D, Jia Y. Deep-Learning-Aided Diagnosis of Diabetic Retinopathy, Age-Related Macular Degeneration, and Glaucoma Based on Structural and Angiographic OCT. OPHTHALMOLOGY SCIENCE 2022; 3:100245. [PMID: 36579336 PMCID: PMC9791595 DOI: 10.1016/j.xops.2022.100245] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 10/21/2022] [Accepted: 10/28/2022] [Indexed: 11/11/2022]
Abstract
Purpose Timely diagnosis of eye diseases is paramount to obtaining the best treatment outcomes. OCT and OCT angiography (OCTA) have several advantages that lend themselves to early detection of ocular pathology; furthermore, the techniques produce large, feature-rich data volumes. However, the full clinical potential of both OCT and OCTA is stymied when complex data acquired using the techniques must be manually processed. Here, we propose an automated diagnostic framework based on structural OCT and OCTA data volumes that could substantially support the clinical application of these technologies. Design Cross-sectional study. Participants Five hundred twenty-six OCT and OCTA volumes were scanned from the eyes of 91 healthy participants, 161 patients with diabetic retinopathy (DR), 95 patients with age-related macular degeneration (AMD), and 108 patients with glaucoma. Methods The diagnosis framework was constructed based on semisequential 3-dimensional (3D) convolutional neural networks. The trained framework classifies combined structural OCT and OCTA scans as normal, DR, AMD, or glaucoma. Fivefold cross-validation was performed, with 60% of the data reserved for training, 20% for validation, and 20% for testing. The training, validation, and test data sets were independent, with no shared patients. For scans diagnosed as DR, AMD, or glaucoma, 3D class activation maps were generated to highlight subregions that were considered important by the framework for automated diagnosis. Main Outcome Measures The area under the receiver operating characteristic curve (AUC) and quadratic-weighted kappa were used to quantify the diagnostic performance of the framework. Results For the diagnosis of DR, the framework achieved an AUC of 0.95 ± 0.01. For the diagnosis of AMD, the framework achieved an AUC of 0.98 ± 0.01. For the diagnosis of glaucoma, the framework achieved an AUC of 0.91 ± 0.02.
Conclusions Deep learning frameworks can provide reliable, sensitive, interpretable, and fully automated diagnosis of eye diseases. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
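The patient-independent fivefold cross-validation described in the Methods (60%/20%/20% with no shared patients across sets) can be sketched with grouped splitters. This is a minimal illustration on synthetic stand-in data, not the authors' code; the semisequential 3D CNN itself is omitted, and the patient counts are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, GroupShuffleSplit

# Synthetic stand-ins: 526 scans drawn from a smaller pool of patients,
# so some patients contribute multiple volumes.
rng = np.random.default_rng(0)
patient_ids = rng.integers(0, 360, size=526)
X = rng.normal(size=(526, 8))          # placeholder scan features
y = rng.integers(0, 4, size=526)       # 0=normal, 1=DR, 2=AMD, 3=glaucoma

# Fivefold cross-validation in which folds never share a patient.
gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=patient_ids):
    # Carve a validation set out of the training fold, again by patient,
    # approximating the paper's 60/20/20 split.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
    (sub_train_idx, val_idx), = gss.split(
        X[train_idx], y[train_idx], groups=patient_ids[train_idx])
    # No patient appears in more than one of train / val / test.
    assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
    assert set(patient_ids[train_idx][sub_train_idx]).isdisjoint(
        patient_ids[train_idx][val_idx])
```

Grouping by patient rather than by scan is what prevents leakage when one patient contributes several volumes.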
Affiliation(s)
- Pengxiao Zang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon
- Tristan T. Hormel: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Thomas S. Hwang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Steven T. Bailey: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- David Huang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Yali Jia: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon. Correspondence: Yali Jia, PhD, Casey Eye Institute & Department of Biomedical Engineering, Oregon Health & Science University, 515 SW Campus Dr., CEI 3154, Portland, OR 97239-4197.
34
Wu Z, Zangwill LM, Medeiros FA, Keenan TDL. Editorial: Clinical applications of artificial intelligence in retinal and optic nerve disease. Front Med (Lausanne) 2022; 9:1065603. [DOI: 10.3389/fmed.2022.1065603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 10/14/2022] [Indexed: 11/07/2022] Open
35
Fan R, Alipour K, Bowd C, Christopher M, Brye N, Proudfoot JA, Goldbaum MH, Belghith A, Girkin CA, Fazio MA, Liebmann JM, Weinreb RN, Pazzani M, Kriegman D, Zangwill LM. Detecting Glaucoma from Fundus Photographs Using Deep Learning without Convolutions: Transformer for Improved Generalization. OPHTHALMOLOGY SCIENCE 2022; 3:100233. [PMID: 36545260 PMCID: PMC9762193 DOI: 10.1016/j.xops.2022.100233] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 10/04/2022] [Accepted: 10/12/2022] [Indexed: 12/14/2022]
Abstract
Purpose To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design Evaluation of a diagnostic technology. Subjects, Participants, and Controls Overall, 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3), and Reading Center determinations based on disc (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and 5 external datasets. Main Outcome Measures Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than ResNet-50 AUROC on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than the ResNet-50 models.
The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
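The study's outcome measures, AUROC and sensitivity at fixed specificity, can be computed as follows. The scores here are synthetic stand-ins for the outputs of two models on a shared test set (the first is constructed to separate the classes more strongly); this is an illustrative sketch, not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=2000)        # 0 = healthy, 1 = POAG
# Hypothetical continuous scores from two models on the same test set.
scores_a = y_true + rng.normal(0, 0.8, size=2000)   # stronger model
scores_b = y_true + rng.normal(0, 1.6, size=2000)   # weaker model

def sensitivity_at_specificity(y, scores, target_specificity=0.95):
    """Sensitivity (TPR) at the operating point meeting the given specificity."""
    fpr, tpr, _ = roc_curve(y, scores)
    return tpr[fpr <= 1 - target_specificity].max()

auc_a = roc_auc_score(y_true, scores_a)
auc_b = roc_auc_score(y_true, scores_b)
```

Comparing models at a fixed specificity, rather than only by AUROC, is what makes the comparison clinically interpretable for screening.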
Key Words
- AI, artificial intelligence
- AUROC, areas under the receiver operating characteristic curve
- CI, confidence interval
- CNN, convolutional neural network
- DL, deep learning
- Deep learning
- DeiT, Data-efficient image Transformer
- Fundus photographs
- Glaucoma detection
- LAG, Large-Scale Attention-Based Glaucoma
- OHTS, Ocular Hypertension Treatment Study
- POAG, primary open-angle glaucoma
- SoTA, state-of-the-art
- VF, visual field
- ViT, Vision Transformer
- Vision Transformers
Affiliation(s)
- Rui Fan: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California; Department of Computer Science and Engineering, University of California San Diego, La Jolla, California; Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Kamran Alipour: Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- Christopher Bowd: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Mark Christopher: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Nicole Brye: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- James A. Proudfoot: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Michael H. Goldbaum: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Akram Belghith: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Christopher A. Girkin: Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama
- Massimo A. Fazio: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California; Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama; Department of Biomedical Engineering, School of Engineering, The University of Alabama at Birmingham, Birmingham, Alabama
- Jeffrey M. Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York
- Robert N. Weinreb: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Michael Pazzani: Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- David Kriegman: Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- Linda M. Zangwill: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California. Correspondence: Linda M. Zangwill, 9500 Gilman Dr., #0946, La Jolla, California 92093-0946.
36
Chen HSL, Chen GA, Syu JY, Chuang LH, Su WW, Wu WC, Liu JH, Chen JR, Huang SC, Kang EYC. Early Glaucoma Detection by Using Style Transfer to Predict Retinal Nerve Fiber Layer Thickness Distribution on the Fundus Photograph. OPHTHALMOLOGY SCIENCE 2022; 2:100180. [PMID: 36245759 PMCID: PMC9559108 DOI: 10.1016/j.xops.2022.100180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 05/16/2022] [Accepted: 06/06/2022] [Indexed: 12/03/2022]
Abstract
Objective We aimed to develop a deep learning (DL)–based algorithm for early glaucoma detection based on color fundus photographs that provides information on defects in the retinal nerve fiber layer (RNFL) and its thickness from the mapping and translating relations of spectral domain OCT (SD-OCT) thickness maps. Design Developing and evaluating an artificial intelligence detection tool. Subjects Pretraining paired data of color fundus photographs and SD-OCT images from 189 healthy participants and 371 patients with early glaucoma were used. Methods A variational autoencoder (VAE) network architecture was used for training, and the correlation between the fundus photographs and RNFL thickness distribution was determined through the deep neural network. The reference standard was defined as a vertical cup-to-disc ratio of ≥0.7, other typical changes in glaucomatous optic neuropathy, and RNFL defects. Convergence indicates that the VAE has learned a distribution that would enable us to produce corresponding synthetic OCT scans. Main Outcome Measures As with wide-field OCT scanning, the proposed model can extract the results of RNFL thickness analysis. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to assess signal strength and the similarity in the structure of the color fundus images converted to an RNFL thickness distribution model. The differences between the model-generated images and original images were quantified. Results We developed and validated a novel DL-based algorithm to extract thickness information from the color space of fundus images similarly to that from OCT images and to use this information to regenerate RNFL thickness distribution images. The generated thickness map was sufficient for clinical glaucoma detection, and the generated images were similar to ground truth (PSNR: 19.31 decibels; SSIM: 0.44).
The inference results were similar to the OCT-generated original images in terms of the ability to predict RNFL thickness distribution. Conclusions The proposed technique may aid clinicians in early glaucoma detection, especially when only color fundus photographs are available.
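The PSNR and SSIM figures reported above (19.31 dB; 0.44) compare a generated RNFL thickness map against its OCT-derived ground truth. A minimal sketch of both metrics follows; note that `global_ssim` here is a single-window simplification of the standard sliding-window SSIM, and the arrays are random stand-ins for thickness maps, not study data.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image. The standard metric
    averages SSIM over local sliding windows, so treat this as a
    rough global approximation."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Stand-ins for an OCT-derived thickness map and a regenerated version.
rng = np.random.default_rng(2)
original = rng.random((128, 128))
regenerated = np.clip(original + rng.normal(0, 0.1, original.shape), 0, 1)

quality = psnr(original, regenerated)          # higher = closer to the original
similarity = global_ssim(original, regenerated)
```

In practice the windowed SSIM (e.g. `skimage.metrics.structural_similarity`) is preferred, since it is sensitive to local structural distortions that a global statistic averages away.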
Affiliation(s)
- Henry Shen-Lih Chen: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan. Correspondence: Henry Shen-Lih Chen, MD, MBA, Department of Ophthalmology, Chang Gung Memorial Hospital, No. 5, Fu-Hsin Rd., Taoyuan 333, Taiwan.
- Guan-An Chen: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Jhen-Yang Syu: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Lan-Hsin Chuang: College of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Ophthalmology, Keelung Chang Gung Memorial Hospital, Keelung, Taiwan
- Wei-Wen Su: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wei-Chi Wu: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Jian-Hong Liu: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Jian-Ren Chen: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Su-Chen Huang: Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Eugene Yu-Chuan Kang: Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan; Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan. Correspondence: Eugene Yu-Chuan Kang, MD, Department of Ophthalmology, Chang Gung Memorial Hospital, No. 5, Fu-Hsin Rd., Taoyuan 333, Taiwan.
37
Bhartiya S. Glaucoma Screening: Is AI the Answer? J Curr Glaucoma Pract 2022; 16:71-73. [PMID: 36128081 PMCID: PMC9452706 DOI: 10.5005/jp-journals-10078-1380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Affiliation(s)
- Shibal Bhartiya: Department of Ophthalmology, Glaucoma Services, Fortis Memorial Research Institute, Gurugram, Haryana, India. Correspondence: Shibal Bhartiya, Department of Ophthalmology, Glaucoma Services, Fortis Memorial Research Institute, Gurugram, Haryana, India, e-mail:
38
Fukai K, Terauchi R, Noro T, Ogawa S, Watanabe T, Nakagawa T, Honda T, Watanabe Y, Furuya Y, Hayashi T, Tatemichi M, Nakano T. Real-Time Risk Score for Glaucoma Mass Screening by Spectral Domain Optical Coherence Tomography: Development and Validation. Transl Vis Sci Technol 2022; 11:8. [PMID: 35938880 PMCID: PMC9366724 DOI: 10.1167/tvst.11.8.8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Purpose To develop and validate a risk score assessable in real time, using only retinal thickness-related values measured by spectral domain optical coherence tomography, for use in population-based glaucoma mass screenings. Methods A total of 7572 participants (aged 35-74 years) underwent spectral domain optical coherence tomography examination annually between 2016 and 2021 in a population-based setting. We selected 284 glaucoma cases and 284 controls, matched by age and sex, from 11,487 scans in 2016. We conducted multivariable logistic regression with backward stepwise selection of retinal thickness-related variables to develop the diagnostic models. The developed risk scores were applied to all participants in 2018 (9720 eyes), and we randomly selected 723 scans for validation. Additional validation using the Humphrey field analyzer was conducted on 129 eyes in 2020. We assessed the models using sensitivity, specificity, the area under the receiver operating characteristic curve, and positive and negative predictive values. Results The best-predicting model achieved an area under the receiver operating characteristic curve of 0.97 (95% confidence interval, 0.96-0.98) with a sensitivity of 0.93 and specificity of 0.91. The validation dataset showed a positive predictive value of 90.8% for high-risk scorers, corresponding to 6.2% of the population, and a negative predictive value of 88.2% for low-risk scorers, corresponding to 85.2%. Sensitivity and specificity for glaucoma diagnosis were 0.85 and 0.91 when we set the risk score cut-off at 90 points out of 100. Conclusions This risk score could be used as a valid index for glaucoma screening in a population-based setting. Translational Relevance The score is feasible by installing a simple computer application on an existing spectral domain optical coherence tomography device and will help to improve the accuracy and efficiency of glaucoma screening.
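The scoring scheme described, a multivariable logistic regression whose predicted probability is mapped onto a 0-100 risk score with a cut-off of 90 points, can be sketched as below. The data are synthetic stand-ins (not the study's thickness measurements), and the backward stepwise variable selection step is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 568                                    # 284 cases + 284 matched controls
X = rng.normal(size=(n, 5))                # stand-in retinal-thickness variables
beta = np.array([1.4, -1.0, 0.7, 0.0, 0.0])  # only some variables informative
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

model = LogisticRegression().fit(X, y)

# Map predicted probability onto a 0-100 risk score; the paper applies
# a cut-off of 90 points to flag high-risk participants.
risk_score = 100 * model.predict_proba(X)[:, 1]
high_risk = risk_score >= 90
auc = roc_auc_score(y, risk_score)
```

Because the score is a monotone transform of the model probability, the cut-off can be tuned on a validation set to trade sensitivity against specificity without refitting.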
Affiliation(s)
- Kota Fukai: Department of Preventive Medicine, Tokai University School of Medicine, Kanagawa, Japan
- Ryo Terauchi: Department of Ophthalmology, The Jikei University School of Medicine, Tokyo, Japan
- Takahiko Noro: Department of Ophthalmology, The Jikei University School of Medicine, Tokyo, Japan
- Shumpei Ogawa: Department of Ophthalmology, The Jikei University School of Medicine, Tokyo, Japan
- Tomoyuki Watanabe: Department of Ophthalmology, The Jikei University School of Medicine, Tokyo, Japan
- Toru Honda: Hitachi Health Care Center, Ibaraki, Japan
- Yuko Furuya: Department of Preventive Medicine, Tokai University School of Medicine, Kanagawa, Japan
- Masayuki Tatemichi: Department of Preventive Medicine, Tokai University School of Medicine, Kanagawa, Japan
- Tadashi Nakano: Department of Ophthalmology, The Jikei University School of Medicine, Tokyo, Japan
39
Balasubramanian K, Ramya K, Gayathri Devi K. Improved swarm optimization of deep features for glaucoma classification using SEGSO and VGGNet. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
40
Balasubramanian K, N.P. A. Correlation-based feature selection using bio-inspired algorithms and optimized KELM classifier for glaucoma diagnosis. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109432] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
41
Liebmann JM, Hood DC, de Moraes CG, Blumberg DM, Harizman N, Kresch YS, Tsamis E, Cioffi GA. Rationale and Development of an OCT-Based Method for Detection of Glaucomatous Optic Neuropathy. J Glaucoma 2022; 31:375-381. [PMID: 35220387 PMCID: PMC9167228 DOI: 10.1097/ijg.0000000000002005] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 02/08/2022] [Indexed: 11/27/2022]
Abstract
A specific, sensitive, and intersubjectively verifiable definition of disease for clinical care and research remains an important unmet need in the field of glaucoma. Using an iterative, consensus-building approach and employing pilot data, an optical coherence tomography (OCT)-based method to aid in the detection of glaucomatous optic neuropathy was sought to address this challenge. To maximize the chance of success, we utilized all available information from the OCT circle and cube scans, applied both quantitative and semiquantitative data analysis methods, and aimed to limit the use of perimetry to cases where it is absolutely necessary. The outcome of this approach was an OCT-based method for the diagnosis of glaucomatous optic neuropathy that did not require the use of perimetry for initial diagnosis. A decision tree was devised for testing and implementation in clinical practice and research that can be used by reading centers, researchers, and clinicians. While initial pilot data were encouraging, future testing and validation will be needed to establish its utility in clinical practice, as well as for research.
Affiliation(s)
- Jeffrey M Liebmann: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
- Donald C Hood: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center; Department of Psychology, Columbia University, New York, NY
- Carlos Gustavo de Moraes: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
- Dana M Blumberg: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
- Noga Harizman: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
- Yocheved S Kresch: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
- George A Cioffi: Bernard and Shirlee Brown Glaucoma Research Laboratory, Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center
42
Glaucoma diagnosis using multi-feature analysis and a deep learning technique. Sci Rep 2022; 12:8064. [PMID: 35577876 PMCID: PMC9110703 DOI: 10.1038/s41598-022-12147-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 04/25/2022] [Indexed: 11/08/2022] Open
Abstract
In this study, we aimed to facilitate the current diagnostic assessment of glaucoma by analyzing multiple features and introducing a new cross-sectional optic nerve head (ONH) feature from optical coherence tomography (OCT) images. The data (n = 100 for both glaucoma and control) were collected based on structural, functional, demographic and risk factors. The features were statistically analyzed, and the most significant four features were used to train machine learning (ML) algorithms. Two ML algorithms, deep learning (DL) and logistic regression (LR), were compared in terms of classification accuracy for automated glaucoma detection. The performance of the ML models was evaluated on unseen test data, n = 55. An image segmentation pilot study was then performed on cross-sectional OCT scans. The ONH cup area was extracted and analyzed, and a new DL model was trained for glaucoma prediction. The DL model was estimated using five-fold cross-validation and compared with two pre-trained models. The DL model trained from the optimal features achieved significantly higher diagnostic performance (area under the receiver operating characteristic curve (AUC) of 0.98 and accuracy of 97% on validation data and 96% on test data) compared to previous studies for automated glaucoma detection. The second DL model used in the pilot study also showed promising outcomes (AUC 0.99 and accuracy of 98.6%) for detecting glaucoma compared to the two pre-trained models. In combination, the results of the two studies strongly suggest that the four features and the cross-sectional ONH cup area, trained using deep learning, have great potential for use as an initial screening tool for glaucoma that will assist clinicians in making a precise decision.
43
Xu S, Yuan H. A three-methylation-driven genes based deep learning model for tuberculosis diagnosis in people with and without human immunodeficiency virus co-infection. Microbiol Immunol 2022; 66:317-323. [PMID: 35510555 DOI: 10.1111/1348-0421.12983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 04/21/2022] [Accepted: 04/29/2022] [Indexed: 11/29/2022]
Abstract
Improved diagnostic tests for tuberculosis (TB) among people living with human immunodeficiency virus (HIV) are urgently required. We hypothesized that methylation-driven genes of host blood could be used to diagnose patients co-infected with HIV/TB. In this study, we identified three methylation-driven genes (MDGs) distinguishing patients with HIV mono-infection from those with HIV/TB co-infection using the R package MethylMix. We then developed a deep learning model based on the three screened MDGs, which distinguished HIV/TB co-infection from HIV mono-infection with a sensitivity of 95.2% and a specificity of 88.3%. On the two independent datasets, sensitivity was 80% and 92.8% and specificity was 72.7% and 87.5%, respectively. In addition, our deep learning model accurately classified TB versus healthy controls (75.0-100% sensitivity, 91.3-98.1% specificity) and versus other respiratory disorders (ORDs) (72.7-75.0% sensitivity, 70.9-72.7% specificity). This study will contribute to improved molecular diagnosis of HIV/TB co-infection.
Affiliation(s)
- Shaohua Xu: Drug Clinical Trial Center, Gansu Wuwei Tumor Hospital, Wuwei, Gansu, People's Republic of China
- Huicheng Yuan: Drug Clinical Trial Center, Gansu Wuwei Tumor Hospital, Wuwei, Gansu, People's Republic of China
44
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113 DOI: 10.1016/j.ajo.2021.12.008] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 11/24/2021] [Accepted: 12/03/2021] [Indexed: 11/01/2022]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performances for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly using all data and external data for fundus and the external test result of OCT was less robust (AUC = 0.87). When comparing different classifier categories, although support vector machine showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results by neural network and others were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, 0.95-0.97, respectively). When analyzed based on dataset types, ML demonstrated consistent performances on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% 0.93-0.97]). CONCLUSIONS Performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
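Pooling per-study sensitivities (or specificities) in a meta-analysis like this is typically done on the logit scale. The sketch below uses simple fixed-effect inverse-variance pooling as a deliberately simplified univariate stand-in for the bivariate random-effects model actually used, which additionally models between-study heterogeneity and the sensitivity-specificity correlation; the study values are hypothetical.

```python
import numpy as np

def pool_proportions_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the
    logit scale (simplified; no between-study heterogeneity term)."""
    props = np.asarray(props, dtype=float)
    ns = np.asarray(ns, dtype=float)
    logits = np.log(props / (1 - props))
    weights = ns * props * (1 - props)   # 1 / delta-method logit variance
    pooled_logit = np.sum(weights * logits) / np.sum(weights)
    return 1 / (1 + np.exp(-pooled_logit))  # back-transform to a proportion

# Hypothetical per-study sensitivities and test-set sizes.
sensitivities = [0.90, 0.94, 0.88, 0.93]
test_sizes = [500, 1200, 300, 800]
pooled_sens = pool_proportions_logit(sensitivities, test_sizes)
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and weights large, well-estimated studies more heavily.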
45
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538 DOI: 10.1097/ijg.0000000000002015] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 02/26/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Artificial intelligence (AI) has been shown as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting their implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed data set and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights that before such algorithms can be implemented into clinical care, a number of issues need to be addressed. With substantial heterogeneity across studies, many factors were found to affect the diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, implementing external data validation, and analysis across different ethnicity groups.
Affiliation(s)
- Abadh K Chaurasia
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Connor J Greatbatch
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Alex W Hewitt
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
46
Kako NA, Abdulazeez AM. Peripapillary Atrophy Segmentation and Classification Methodologies for Glaucoma Image Detection: A Review. Curr Med Imaging 2022; 18:1140-1159. [PMID: 35260060 DOI: 10.2174/1573405618666220308112732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/04/2021] [Accepted: 12/22/2021] [Indexed: 11/22/2022]
Abstract
Image processing and computer vision methods are used in many healthcare organizations to diagnose disease. Irregularities in the visual system are identified from fundus images captured with a fundus camera. Among ophthalmic diseases, glaucoma is regarded as one of the most common, and it can lead to neurodegenerative illness. Abnormal fluid pressure inside the eye is described as its major cause. Glaucoma has no symptoms in its early stages, and if left untreated it may result in total blindness; diagnosing glaucoma at an early stage may prevent permanent blindness. Manual inspection of the human eye is one option, but it depends on the skill of the individuals involved. Automated diagnosis of glaucoma, combining computer vision, artificial intelligence, and image processing, can aid in the prevention and detection of the disease. In this review article, we survey the numerous approaches based on peripapillary atrophy segmentation and classification that can detect glaucoma, together with details of the publicly available image benchmarks and datasets and the performance measures used. The review covers published research on models that objectively diagnose glaucoma via peripapillary atrophy, from low-level feature extraction to the current deep learning-based direction. The advantages and disadvantages of each method are addressed in detail, with tabular summaries highlighting the results in each category. Moreover, the frameworks of each approach and the fundus image datasets are provided. In conclusion, we outline possible directions for future work on glaucoma diagnosis.
Affiliation(s)
- Najdavan A Kako
- Duhok Polytechnic University, Technical Institute of Administration, MIS, Duhok, Iraq
47
Alawad M, Aljouie A, Alamri S, Alghamdi M, Alabdulkader B, Alkanhal N, Almazroa A. Machine Learning and Deep Learning Techniques for Optic Disc and Cup Segmentation – A Review. Clin Ophthalmol 2022; 16:747-764. [PMID: 35300031 PMCID: PMC8923700 DOI: 10.2147/opth.s348479] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Accepted: 02/11/2022] [Indexed: 12/12/2022] Open
Abstract
Background Globally, glaucoma is the second leading cause of blindness. Detecting glaucoma in its early stages is essential to avoid complications that lead to blindness. Computer-aided diagnosis systems are thus powerful tools to overcome the shortage of glaucoma screening programs. Methods A systematic search of public databases, including PubMed, Google Scholar, and other sources, was performed to identify relevant studies and to overview the publicly available fundus image datasets used to train, validate, and test machine learning and deep learning methods. Additionally, existing machine learning and deep learning methods for optic cup and disc segmentation were surveyed and critically reviewed. Results Eight publicly available fundus image datasets were found, comprising 15,445 images labeled as glaucoma or non-glaucoma with manually annotated optic disc and cup boundaries. Five metrics were identified for evaluating the developed models, and three main deep learning architectural designs were commonly used for optic disc and optic cup segmentation. Conclusion We provide future research directions for building robust optic cup and disc segmentation systems. Deep learning can be utilized in clinical settings for this task; however, many challenges need to be addressed before this strategy is used in clinical trials. Finally, two deep learning architectural designs, U-Net and its variants, have been most widely adopted.
Affiliation(s)
- Mohammed Alawad
- Department of Biostatistics and Bioinformatics, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Abdulrhman Aljouie
- Department of Biostatistics and Bioinformatics, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Suhailah Alamri
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Research Labs, National Center for Artificial Intelligence, Riyadh, Saudi Arabia
- Mansour Alghamdi
- Department of Optometry and Vision Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Balsam Alabdulkader
- Department of Optometry and Vision Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Norah Alkanhal
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Ahmed Almazroa
- Department of Imaging Research, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
- Correspondence: Ahmed Almazroa; Abdulrhman Aljouie, Email ;
48
AlRyalat SA, Ertel MK, Seibold LK, Kahook MY. Designs and Methodologies Used in Landmark Clinical Trials of Glaucoma: Implications for Future Big Data Mining and Actionable Disease Treatment. Front Med (Lausanne) 2022; 9:818568. [PMID: 35155501 PMCID: PMC8825364 DOI: 10.3389/fmed.2022.818568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 01/04/2022] [Indexed: 11/20/2022] Open
Affiliation(s)
- Monica K Ertel
- Department of Ophthalmology, Sue Anschutz-Rodgers Eye Center, University of Colorado School of Medicine, Aurora, CO, United States
- Leonard K Seibold
- Department of Ophthalmology, Sue Anschutz-Rodgers Eye Center, University of Colorado School of Medicine, Aurora, CO, United States
- Malik Y Kahook
- Department of Ophthalmology, Sue Anschutz-Rodgers Eye Center, University of Colorado School of Medicine, Aurora, CO, United States
49
Schreur V, Larsen MB, Sobrin L, Bhavsar AR, Hollander AI, Klevering BJ, Hoyng CB, Jong EK, Grauslund J, Peto T. Imaging diabetic retinal disease: clinical imaging requirements. Acta Ophthalmol 2022; 100:752-762. [PMID: 35142031 DOI: 10.1111/aos.15110] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 12/12/2021] [Accepted: 01/20/2022] [Indexed: 12/27/2022]
Abstract
Diabetic retinopathy (DR) is a sight-threatening complication of diabetes mellitus (DM), and it contributes substantially to the global burden of disease. Over recent decades, the development of multiple imaging modalities to evaluate DR, combined with emerging treatment possibilities, has led to the implementation of large-scale screening programmes, resulting in improved prevention of vision loss. However, not all patients are able to participate in such programmes, and not all are at equal risk of DR development and progression. In this review, we discuss the relevance of the currently available imaging modalities for the evaluation of DR: colour fundus photography (CFP), ultrawide-field photography (UWFP), fundus fluorescein angiography (FFA), optical coherence tomography (OCT), OCT angiography (OCTA) and functional testing. Furthermore, we suggest where each imaging technique may aid the evaluation of DR in different clinical settings. Combining information from various imaging modalities may enable the design of more personalized care, including the initiation of treatment and a more adequate understanding of disease progression.
Affiliation(s)
- Vivian Schreur
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Morten B. Larsen
- Research Unit of Ophthalmology, University of Southern Denmark, Odense, Denmark
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Lucia Sobrin
- Department of Ophthalmology, Harvard Medical School, Massachusetts Eye and Ear Infirmary, Boston, USA
- Anneke I. Hollander
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- B. Jeroen Klevering
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Carel B. Hoyng
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Eiko K. Jong
- Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands
- Jakob Grauslund
- Research Unit of Ophthalmology, University of Southern Denmark, Odense, Denmark
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Tunde Peto
- Research Unit of Ophthalmology, University of Southern Denmark, Odense, Denmark
- Centre for Public Health, Queen's University Belfast, Belfast, UK
50
End-to-end multi-task learning for simultaneous optic disc and cup segmentation and glaucoma classification in eye fundus images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108347] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]