1.
Chaurasia AK, Greatbatch CJ, Han X, Gharahkhani P, Mackey DA, MacGregor S, Craig JE, Hewitt AW. Highly Accurate and Precise Automated Cup-to-Disc Ratio Quantification for Glaucoma Screening. Ophthalmology Science 2024; 4:100540. PMID: 39051045; PMCID: PMC11268341; DOI: 10.1016/j.xops.2024.100540.
Abstract
Objective: An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated methods. Here, we sought to develop and validate a deep learning-based algorithm to automatically determine the CDR from fundus images.
Design: Algorithm development for estimating CDR using fundus data from a population-based observational study.
Participants: A total of 181,768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS.
Methods: The FastAI and PyTorch libraries were used to train a convolutional neural network-based model on fundus images from the UKBB. Models were constructed to determine image gradability (classification analysis) and to estimate CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening using a multiethnic dataset from EyePACS and Drishti_GS.
Main Outcome Measures: The area under the receiver operating characteristic curve (AUROC) and the coefficient of determination.
Results: Our gradability model, based on the VGG19 architecture with batch normalization (vgg19_bn), achieved an accuracy of 97.13% on a validation set of 16,045 images, with 99.26% precision and an AUROC of 96.56%. Using regression analysis, our best-performing model (also trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459-0.8568), a mean squared error of 0.0050 (95% CI: 0.0048-0.0051), and a mean absolute error of 0.0551 (95% CI: 0.0543-0.0559) on a validation set of 12,183 images for determining CDR. The regression output was converted into classification metrics using a tolerance of 0.2 for 20 classes; these classification metrics achieved an accuracy of 99.20%. The EyePACS dataset (98,172 healthy, 3,270 glaucoma) was then used to externally validate the model for glaucoma classification, yielding an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively.
Conclusions: Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence-derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.
Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
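The regression-to-classification conversion described in the abstract (binning continuous CDR estimates into 20 classes and counting a prediction as correct when it falls within a tolerance of 0.2 of the truth) can be sketched roughly as follows. This is a hypothetical illustration under assumed conventions (equal-width bins over [0, 1], bin-centre comparison); the function name `binned_accuracy` is invented, not the authors' code.

```python
# Hypothetical sketch of converting CDR regression estimates into a
# classification-style accuracy: values in [0, 1] are binned into
# n_classes equal-width classes, and a prediction counts as correct
# when its bin centre lies within `tolerance` of the true bin centre.

def binned_accuracy(y_true, y_pred, n_classes=20, tolerance=0.2):
    width = 1.0 / n_classes
    correct = 0
    for t, p in zip(y_true, y_pred):
        # clamp to the last bin so a value of exactly 1.0 stays in range
        t_bin = min(int(t * n_classes), n_classes - 1)
        p_bin = min(int(p * n_classes), n_classes - 1)
        if abs(p_bin - t_bin) * width <= tolerance:
            correct += 1
    return correct / len(y_true)

# Example: three near-misses within tolerance, one clear miss.
print(binned_accuracy([0.30, 0.55, 0.80, 0.80],
                      [0.32, 0.50, 0.99, 0.45]))  # 0.75
```

With a tolerance this wide relative to the bin width, most plausible CDR estimates land "correct", which is consistent with the very high classification accuracy reported alongside a modest mean absolute error.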
Affiliation(s)
- Abadh K. Chaurasia
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Connor J. Greatbatch
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Xikun Han
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Puya Gharahkhani
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Faculty of Health, School of Biomedical Sciences, Queensland University of Technology, Brisbane, Queensland, Australia
- David A. Mackey
- Lions Eye Institute, Centre for Vision Sciences, University of Western Australia, Nedlands, Australia
- Stuart MacGregor
- QIMR Berghofer Medical Research Institute, Brisbane, Australia
- School of Medicine, University of Queensland, Brisbane, Australia
- Jamie E. Craig
- Department of Ophthalmology, Flinders University, Flinders Medical Centre, Bedford Park, Australia
- Alex W. Hewitt
- Menzies Institute for Medical Research, University of Tasmania, Hobart, Australia
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
2.
Madadi Y, Abu-Serhan H, Yousefi S. Domain Adaptation-Based Deep Learning Model for Forecasting and Diagnosis of Glaucoma Disease. Biomed Signal Process Control 2024; 92:106061. PMID: 38463435; PMCID: PMC10922017; DOI: 10.1016/j.bspc.2024.106061.
Abstract
Glaucoma is the main cause of irreversible blindness, and early detection greatly reduces the risk of further vision loss. To address this problem, we developed a domain adaptation-based deep learning model called Glaucoma Domain Adaptation (GDA) based on 66,742 fundus photographs collected from 3,272 eyes of 1,636 subjects. GDA learns domain-invariant and domain-specific representations to extract both general and specific features. We also developed a progressive weighting mechanism to accurately transfer source-domain knowledge while mitigating the transfer of negative knowledge from the source to the target domain, and we employed low-rank coding to align the source and target distributions. We trained GDA under three different scenarios, in which eyes were annotated as glaucoma due to (1) optic disc abnormalities regardless of visual field abnormalities, (2) optic disc or visual field abnormalities, excluding eyes with both optic disc and visual field abnormalities at the same time, and (3) visual field abnormalities regardless of optic disc abnormalities. We then evaluated the generalizability of GDA on two independent datasets. The AUCs of GDA in forecasting glaucoma under the first, second, and third scenarios were 0.90, 0.88, and 0.80, and the accuracies were 0.82, 0.78, and 0.72, respectively. The AUCs of GDA in diagnosing glaucoma under the first, second, and third scenarios were 0.98, 0.96, and 0.93, and the accuracies were 0.93, 0.91, and 0.88, respectively. The proposed GDA model achieved high performance and generalizability for forecasting and diagnosing glaucoma from fundus photographs. GDA may augment glaucoma research and clinical practice by identifying patients with glaucoma and forecasting those who may develop it, thus helping to prevent future vision loss.
Affiliation(s)
- Yeganeh Madadi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
3.
Yap BP, Kelvin LZ, Toh EQ, Low KY, Rani SK, Goh EJH, Hui VYC, Ng BK, Lim TH. Generalizability of Deep Neural Networks for Vertical Cup-to-Disc Ratio Estimation in Ultra-Widefield and Smartphone-Based Fundus Images. Transl Vis Sci Technol 2024; 13:6. PMID: 38568608; PMCID: PMC10996969; DOI: 10.1167/tvst.13.4.6.
Abstract
Purpose: To develop and validate a deep learning system (DLS) for estimation of vertical cup-to-disc ratio (vCDR) in ultra-widefield (UWF) and smartphone-based fundus images.
Methods: A DLS consisting of two sequential convolutional neural networks (CNNs) to delineate optic disc (OD) and optic cup (OC) boundaries was developed using 800 standard fundus images from the public REFUGE data set. The CNNs were tested on 400 test images from the REFUGE data set and on 296 UWF and 300 smartphone-based images from a teleophthalmology clinic. vCDRs derived from the delineated OD/OC boundaries were compared with optometrists' annotations using the mean absolute error (MAE). A subgroup analysis was conducted to study the impact of peripapillary atrophy (PPA), and a correlation study was performed to investigate potential correlations between sectoral CDR (sCDR) and retinal nerve fiber layer (RNFL) thickness.
Results: The system achieved MAEs of 0.040 (95% CI, 0.037-0.043) in the REFUGE test images, 0.068 (95% CI, 0.061-0.075) in the UWF images, and 0.084 (95% CI, 0.075-0.092) in the smartphone-based images. Differences between PPA and non-PPA images were not statistically significant. A weak correlation (r = -0.4046, P < 0.05) between sCDR and RNFL thickness was found only in the superior sector.
Conclusions: We developed a deep learning system that estimates vCDR from standard, UWF, and smartphone-based images. We also described the anatomic peripapillary adversarial lesion and its potential impact on OD/OC delineation.
Translational Relevance: Artificial intelligence can estimate vCDR from different types of fundus images and may be used as a general and interpretable screening tool to improve community reach for the diagnosis and management of glaucoma.
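The quantity at the heart of the study above, the vertical cup-to-disc ratio derived from delineated OD/OC boundaries, is the vertical cup diameter divided by the vertical disc diameter, and model quality is scored by the mean absolute error against annotations. A minimal sketch of both computations follows; the helper names and toy masks are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): vertical cup-to-disc
# ratio from binary optic-disc (OD) and optic-cup (OC) masks, plus the
# mean absolute error used to compare estimates against annotations.

def vertical_extent(mask):
    """Number of image rows in which a binary mask (rows of 0/1) is present."""
    return sum(1 for row in mask if any(row))

def vcdr(disc_mask, cup_mask):
    """Vertical CDR: vertical cup diameter over vertical disc diameter."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

def mean_absolute_error(truths, preds):
    return sum(abs(t - p) for t, p in zip(truths, preds)) / len(truths)

# Toy 6-row masks: the disc spans 5 rows and the cup 3 rows -> vCDR = 0.6.
disc = [[0, 0, 0], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]]
cup  = [[0, 0, 0], [0, 0, 0], [1, 1, 1], [1, 1, 1], [1, 1, 1], [0, 0, 0]]
print(vcdr(disc, cup))  # 0.6
```

An MAE of 0.040, as reported for the REFUGE test set, would thus mean the predicted vCDR differs from the annotated one by about 0.04 on average.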
Affiliation(s)
- Boon Peng Yap
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Li Zhenghao Kelvin
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- En Qi Toh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Kok Yao Low
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Sumaya Khan Rani
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Eunice Jin Hui Goh
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Vivien Yip Cherng Hui
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Beng Koon Ng
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Tock Han Lim
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
4.
Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. PMID: 37674264; DOI: 10.1080/08164622.2023.2235346.
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning, the two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence, have created considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature from the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities, and their combinations, for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, outperforming typical human performance of 71.5% accuracy and 0.86 AUC. This indicates that artificial intelligence-based tools can provide clinicians with useful information to support improved diagnosis. The review suggests that existing artificial intelligence-based models using retinal imaging modalities still have room for improvement before they are incorporated into clinical practice.
Affiliation(s)
- Md Mahmudul Hasan
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
5.
Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024; 11:122. PMID: 38391608; PMCID: PMC10886285; DOI: 10.3390/bioengineering11020122.
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data alerting physicians about rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations to progress from bench to bedside.
Affiliation(s)
- Yan Zhu
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani
- Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
6.
Nam Y, Kim J, Kim K, Park KA, Kang M, Cho BH, Oh SY, Kee C, Han J, Lee GI, Kang MC, Lee D, Choi Y, Yun HJ, Park H, Kim J, Cho SJ, Chang DK. Deep learning-based optic disc classification is affected by optic-disc tilt. Sci Rep 2024; 14:498. PMID: 38177229; PMCID: PMC10767025; DOI: 10.1038/s41598-023-50256-4.
Abstract
We aimed to determine the effect of optic disc tilt on deep learning-based optic disc classification. A total of 2,507 fundus photographs were acquired from 2,236 eyes of 1,809 subjects (mean age, 46 years; 53% men). Among all photographs, 1,010 (40.3%) had tilted optic discs. Image annotation was performed to label pathologic changes of the optic disc (normal, glaucomatous optic disc changes, disc swelling, and disc pallor). Deep learning-based classification models of optic disc appearance were developed using the photographs of all subjects and of those with and without tilted optic discs. Regardless of the deep learning algorithm, the classification models showed better overall performance when developed on data from subjects with non-tilted discs (AUC, 0.988 ± 0.002, 0.991 ± 0.003, and 0.986 ± 0.003 for VGG16, VGG19, and DenseNet121, respectively) than when developed on data with tilted discs (AUC, 0.924 ± 0.046, 0.928 ± 0.017, and 0.935 ± 0.008). In classifying each pathologic change, the non-tilted disc models had better sensitivity and specificity than the tilted disc models. The classification models developed on all-subject data demonstrated lower accuracy in eyes with tilted discs than in those with non-tilted discs. Our findings suggest the need to identify and adjust for the effect of optic disc tilt when developing optic disc classification algorithms.
Affiliation(s)
- Youngwoo Nam
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Republic of Korea
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Joonhyoung Kim
- Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyunga Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea
- Department of Data Convergence & Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Kyung-Ah Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.
- Mira Kang
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea.
- Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.
- Digital Innovation Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
- Baek Hwan Cho
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Department of Biomedical Informatics, CHA University School of Medicine, CHA University, Seongnam, Republic of Korea
- Sei Yeul Oh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Changwon Kee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jongchul Han
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Ga-In Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dongyoung Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Yeeun Choi
- Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hee Jee Yun
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Hansol Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Jiho Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Soo Jin Cho
- Health Promotion Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
- Dong Kyung Chang
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Division of Gastroenterology, Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
7.
D’Souza G, Siddalingaswamy PC, Pandya MA. AlterNet-K: a small and compact model for the detection of glaucoma. Biomed Eng Lett 2024; 14:23-33. PMID: 38186944; PMCID: PMC10770015; DOI: 10.1007/s13534-023-00307-6.
Abstract
Glaucoma is one of the leading causes of permanent blindness in the world. It is caused by an increase in intraocular pressure that harms the optic nerve. People suffering from glaucoma often do not notice any changes in their vision in the early stages; however, as it progresses, glaucoma usually leads to vision loss that is in many cases irreversible, so early diagnosis of this eye disease is critically important. The fundus image is one of the most used diagnostic tools for glaucoma detection, but drawing accurate insights from these images requires them to be manually analyzed by medical experts, which is a time-consuming process. In this work, we propose a parameter-efficient AlterNet-K model based on an alternating design pattern, which combines ResNets and multi-head self-attention (MSA) to leverage their complementary properties and improve the generalizability of the overall model. The model was trained on the Rotterdam EyePACS AIROGS dataset, comprising 113,893 colour fundus images from 60,357 subjects. AlterNet-K outperformed transformer models such as ViT, DeiT-S, and the Swin transformer, as well as standard DCNN models including ResNet, EfficientNet, MobileNet, and VGG, with an accuracy of 0.916, an AUROC of 0.968, and an F1 score of 0.915. The results indicate that smaller CNN models combined with self-attention mechanisms can achieve high classification accuracy, and that small, compact ResNet models combined with MSA outperform their larger counterparts. The models in this work can be extended to handle classification tasks in other medical imaging domains.
Affiliation(s)
- Gavin D’Souza
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104 India
- P. C. Siddalingaswamy
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104 India
- Mayur Anand Pandya
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104 India
8.
Sigut J, Fumero F, Estévez J, Alayón S, Díaz-Alemán T. In-Depth Evaluation of Saliency Maps for Interpreting Convolutional Neural Network Decisions in the Diagnosis of Glaucoma Based on Fundus Imaging. Sensors (Basel) 2023; 24:239. PMID: 38203101; PMCID: PMC10781365; DOI: 10.3390/s24010239.
Abstract
Glaucoma, a leading cause of blindness, damages the optic nerve, and early diagnosis is challenging because the disease initially has no symptoms. Fundus images taken with a non-mydriatic retinograph help diagnose glaucoma by revealing structural changes, including in the optic disc and cup. This research aims to thoroughly analyze saliency maps for interpreting convolutional neural network decisions when diagnosing glaucoma from fundus images. These maps highlight the image regions that most influence the network's decisions. Various network architectures were trained and tested on 739 optic nerve head images using nine saliency methods, and several other popular datasets were used for further validation. The results reveal disparities among saliency maps, with some consensus between the folds corresponding to the same architecture. Concerning the significance of optic disc sectors, there is generally a lack of agreement with standard medical criteria: the background, nasal, and temporal sectors emerge as particularly influential for neural network decisions, with a likelihood of being the most relevant ranging from 14.55% to 28.16% on average across all evaluated datasets. We conclude that saliency maps are usually difficult to interpret, and even the areas indicated as most relevant can be very unintuitive. Their usefulness as an explanatory tool may therefore be compromised, at least in problems such as the one addressed in this study, where the features defining the model prediction are not consistently reflected in the regions the saliency maps highlight, nor can those regions always be related to standard medical criteria.
Affiliation(s)
- Jose Sigut
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain; (F.F.); (J.E.); (S.A.)
- Francisco Fumero
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain; (F.F.); (J.E.); (S.A.)
- José Estévez
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain; (F.F.); (J.E.); (S.A.)
- Silvia Alayón
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain; (F.F.); (J.E.); (S.A.)
- Tinguaro Díaz-Alemán
- Department of Ophthalmology, Hospital Universitario de Canarias, Carretera Ofra S/N, La Laguna, 38320 Santa Cruz de Tenerife, Spain;
9.
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597; PMCID: PMC10725017; DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems in many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma, as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma and determine the necessary steps and requirements for successfully developing AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA.
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA.
10.
Rashidisabet H, Sethi A, Jindarak P, Edmonds J, Chan RVP, Leiderman YI, Vajaranant TS, Yi D. Validating the Generalizability of Ophthalmic Artificial Intelligence Models on Real-World Clinical Data. Transl Vis Sci Technol 2023; 12:8. PMID: 37922149; PMCID: PMC10629532; DOI: 10.1167/tvst.12.11.8.
Abstract
Purpose This study aims to investigate generalizability of deep learning (DL) models trained on commonly used public fundus images to an instance of real-world data (RWD) for glaucoma diagnosis. Methods We used Illinois Eye and Ear Infirmary fundus data set as an instance of RWD in addition to six publicly available fundus data sets. We compared the performance of DL-trained models on public data and RWD for glaucoma classification and optic disc (OD) segmentation tasks. For each task, we created models trained on each data set, respectively, and each model was tested on both data sets. We further examined each model's decision-making process and learned embeddings for the glaucoma classification task. Results Using public data for the test set, public-trained models outperformed RWD-trained models in OD segmentation and glaucoma classification with a mean intersection over union of 96.3% and mean area under the receiver operating characteristic curve of 95.0%, respectively. Using the RWD test set, the performance of public models decreased by 8.0% and 18.4% to 85.6% and 76.6% for OD segmentation and glaucoma classification tasks, respectively. RWD models outperformed public models on RWD test sets by 2.0% and 9.5%, respectively, in OD segmentation and glaucoma classification tasks. Conclusions DL models trained on commonly used public data have limited ability to generalize to RWD for classifying glaucoma. They perform similarly to RWD models for OD segmentation. Translational Relevance RWD is a potential solution for improving generalizability of DL models and enabling clinical translations in the care of prevalent blinding ophthalmic conditions, such as glaucoma.
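The cross-dataset evaluation this study describes (train on one source, score models on both the internal and the external test set) can be sketched with a small, self-contained AUROC computation. The rank-sum identity used below is standard, but the label and score vectors are invented purely for illustration, not the study's data.

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the fraction of positive/negative pairs ranked correctly, ties counting half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs: confident on internal data, degraded on external data.
internal_labels = [0, 0, 0, 0, 1, 1, 1, 1]
internal_scores = [0.1, 0.2, 0.15, 0.3, 0.8, 0.9, 0.85, 0.7]
external_labels = [0, 0, 0, 0, 1, 1, 1, 1]
external_scores = [0.4, 0.6, 0.2, 0.55, 0.5, 0.9, 0.45, 0.7]

# The generalization gap is the drop in AUROC between the two test sets.
gap = auroc(internal_labels, internal_scores) - auroc(external_labels, external_scores)
print(round(auroc(internal_labels, internal_scores), 3), round(gap, 3))  # prints: 1.0 0.25
```

Reporting the gap itself, rather than each AUROC in isolation, is what makes the kind of comparison above (public-trained vs. RWD-trained) directly interpretable.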
Affiliation(s)
- Homa Rashidisabet
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
- Abhishek Sethi
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Ponpawee Jindarak
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- James Edmonds
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- R V Paul Chan
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Yannek I Leiderman
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Thasarat Sutabutr Vajaranant
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
- Darvin Yi
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, USA
  - Artificial Intelligence in Ophthalmology (Ai-O) Center, University of Illinois Chicago, Chicago, IL, USA
  - Illinois Eye and Ear Infirmary, Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL, USA
11
Sangchocanonta S, Ingpochai S, Puangarom S, Munthuli A, Phienphanich P, Itthipanichpong R, Chansangpetch S, Manassakorn A, Ratanawongphaibul K, Tantisevi V, Rojanapongpun P, Tantibundhit C. Donut: Augmentation Technique for Enhancing The Efficacy of Glaucoma Suspect Screening. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-5. [PMID: 38083547 DOI: 10.1109/embc40787.2023.10341115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Glaucoma is the second most common cause of blindness. A glaucoma suspect has risk factors that increase the possibility of developing glaucoma, and evaluating a patient with suspected glaucoma is challenging. The "donut method" was developed in this study as an augmentation technique for obtaining high-quality fundus images for training a ConvNeXt-Small model. Fundus images from GlauCUTU-DATA, labelled by at least 3 randomly assigned well-trained ophthalmologists (with a fourth consulted when there was no majority), were used in the experiment, split into unanimous-agreement (3/3) and majority-agreement (2/3) subsets. Training with the "donut method" increased the sensitivity for glaucoma suspects from 52.94% to 70.59% on the 3/3 data and from 37.78% to 42.22% on the 2/3 data, improving classification of glaucoma suspects while keeping sensitivity and specificity balanced. Furthermore, three well-trained ophthalmologists agreed that the Grad-CAM++ heatmaps obtained from the model trained with the proposed method highlighted the clinical criteria. Clinical relevance- The donut method augments fundus images by focusing on the optic nerve head region to enhance the efficacy of glaucoma suspect screening, and uses Grad-CAM++ to highlight the clinical criteria.
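The abstract does not specify how the "donut method" transforms the image beyond its focus on the optic nerve head region. The sketch below is one plausible reading only, an annular ("donut") mask around an assumed disc center; the function name, the parameters, and the disc coordinates are all illustrative and not the authors' implementation.

```python
import numpy as np

def donut_mask(image, center, r_inner, r_outer):
    """Keep only an annulus ('donut') around the optic nerve head; zero elsewhere.
    An illustrative guess at the augmentation, not the paper's exact method."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]                      # broadcastable row/col grids
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    keep = (dist >= r_inner) & (dist <= r_outer)   # boolean annulus
    out = np.zeros_like(image)
    out[keep] = image[keep]
    return out

fundus = np.ones((128, 128), dtype=np.uint8)       # stand-in for a fundus image
aug = donut_mask(fundus, center=(64, 64), r_inner=12, r_outer=40)
```

In practice the center would come from an optic-disc localization step, and the augmented images would be fed to the classifier alongside the originals.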
12
Zedan MJM, Zulkifley MA, Ibrahim AA, Moubark AM, Kamari NAM, Abdani SR. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics (Basel) 2023; 13:2180. [PMID: 37443574 DOI: 10.3390/diagnostics13132180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Revised: 06/16/2023] [Accepted: 06/17/2023] [Indexed: 07/15/2023] Open
Abstract
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from an irregular behavior in the drainage flow of the eye that eventually leads to an increase in intraocular pressure, which in the severe stage of the disease deteriorates the optic nerve head and leads to vision loss. Medical follow-ups to observe the retinal area are needed periodically by ophthalmologists, who require an extensive degree of skill and experience to interpret the results appropriately. To improve on this issue, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus image input and to analyze images of the optic nerve and retinal structures. Therefore, the objective of this paper is to provide a systematic analysis of 52 state-of-the-art relevant studies on the screening and diagnosis of glaucoma, covering the datasets used to develop the algorithms, the performance metrics, and the modalities employed in each article. Furthermore, this review analyzes and evaluates the methods used and compares their strengths and weaknesses in an organized manner. It also explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis more accurate and efficient.
Affiliation(s)
- Mohammad J M Zedan
  - Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
  - Computer and Information Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
  - Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Ahmad Asrul Ibrahim
  - Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Asraf Mohamed Moubark
  - Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Nor Azwan Mohamed Kamari
  - Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Raihanah Abdani
  - School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
13
Hemelings R, Elen B, Schuster AK, Blaschko MB, Barbosa-Breda J, Hujanen P, Junglas A, Nickels S, White A, Pfeiffer N, Mitchell P, De Boever P, Tuulonen A, Stalmans I. A generalizable deep learning regression model for automated glaucoma screening from fundus images. NPJ Digit Med 2023; 6:112. [PMID: 37311940 DOI: 10.1038/s41746-023-00857-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Accepted: 06/01/2023] [Indexed: 06/15/2023] Open
Abstract
A plethora of classification models for the detection of glaucoma from fundus images have been proposed in recent years. Often trained with data from a single glaucoma clinic, they report impressive performance on internal test sets, but tend to struggle in generalizing to external sets. This performance drop can be attributed to data shifts in glaucoma prevalence, fundus camera, and the definition of glaucoma ground truth. In this study, we confirm that a previously described regression network for glaucoma referral (G-RISK) obtains excellent results in a variety of challenging settings. Thirteen different data sources of labeled fundus images were utilized. The data sources include two large population cohorts (Australian Blue Mountains Eye Study, BMES and German Gutenberg Health Study, GHS) and 11 publicly available datasets (AIROGS, ORIGA, REFUGE1, LAG, ODIR, REFUGE2, GAMMA, RIM-ONEr3, RIM-ONE DL, ACRIMA, PAPILA). To minimize data shifts in input data, a standardized image processing strategy was developed to obtain 30° disc-centered images from the original data. A total of 149,455 images were included for model testing. Areas under the receiver operating characteristic curve (AUC) for the BMES and GHS population cohorts were 0.976 [95% CI: 0.967-0.986] and 0.984 [95% CI: 0.980-0.991] at the participant level, respectively. At a fixed specificity of 95%, sensitivities were 87.3% and 90.3%, respectively, surpassing the minimum criterion of 85% sensitivity recommended by Prevent Blindness America. AUC values on the eleven publicly available data sets ranged from 0.854 to 0.988. These results confirm the excellent generalizability of a glaucoma risk regression model trained with homogeneous data from a single tertiary referral center. Further validation using prospective cohort studies is warranted.
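The "sensitivity at a fixed 95% specificity" operating point reported in this study can be computed from raw labels and scores by sweeping thresholds along the ROC curve. A minimal sketch, using made-up toy data rather than the study's:

```python
def sensitivity_at_specificity(labels, scores, target_spec=0.95):
    """Highest sensitivity achievable while specificity stays >= target.
    Mirrors the 'sensitivity at 95% specificity' operating point."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = 0.0
    for t in sorted(set(scores)):                 # candidate decision thresholds
        tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= t)
        fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= t)
        if (neg - fp) / neg >= target_spec:       # specificity constraint met
            best = max(best, tp / pos)
    return best

# Toy cohort: 40 healthy eyes with low-to-moderate scores, 10 glaucomatous eyes.
labels = [0] * 40 + [1] * 10
scores = [0.02 * i for i in range(40)] + [0.7, 0.75, 0.8, 0.85, 0.9,
                                          0.92, 0.94, 0.96, 0.98, 0.99]
print(sensitivity_at_specificity(labels, scores, 0.95))
```

Fixing specificity before reading off sensitivity is what makes screening models comparable against a recommendation like the 85%-sensitivity criterion cited above.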
Affiliation(s)
- Ruben Hemelings
  - Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
  - Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Bart Elen
  - Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Alexander K Schuster
  - Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- João Barbosa-Breda
  - Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
  - Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
  - Department of Ophthalmology, Centro Hospitalar e Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Pekko Hujanen
  - Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Annika Junglas
  - Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Stefan Nickels
  - Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Andrew White
  - Department of Ophthalmology, The University of Sydney, Sydney, NSW, Australia
- Norbert Pfeiffer
  - Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Paul Mitchell
  - Department of Ophthalmology, The University of Sydney, Sydney, NSW, Australia
- Patrick De Boever
  - Centre for Environmental Sciences, Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
  - University of Antwerp, Department of Biology, 2610, Wilrijk, Belgium
- Anja Tuulonen
  - Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Ingeborg Stalmans
  - Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
  - Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
14
Teo ZL, Ting DSW. AI telemedicine screening in ophthalmology: health economic considerations. Lancet Glob Health 2023; 11:e318-e320. [PMID: 36702140 DOI: 10.1016/s2214-109x(23)00037-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 01/09/2023] [Indexed: 01/25/2023]
Affiliation(s)
- Zhen Ling Teo
  - Singapore National Eye Center, Singapore Eye Research Institute, Singapore
- Daniel Shu Wei Ting
  - Singapore National Eye Center, Singapore Eye Research Institute, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
15
Chen D, Ran Ran A, Fang Tan T, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. [PMID: 36706335 DOI: 10.1097/apo.0000000000000596] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 12/06/2022] [Indexed: 01/28/2023] Open
Abstract
Diagnosis and detection of progression of glaucoma remain challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma, but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms are focused on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction, and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus in diagnostic criteria. Although research in the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Affiliation(s)
- Dinah Chen
  - Department of Ophthalmology, NYU Langone Health, New York City, NY
  - Genentech Inc, South San Francisco, CA
- An Ran Ran
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Center, Singapore
- Fei Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
  - Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Center, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
16
Coan LJ, Williams BM, Krishna Adithya V, Upadhyaya S, Alkafri A, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. Surv Ophthalmol 2023; 68:17-41. [PMID: 35985360 DOI: 10.1016/j.survophthal.2022.08.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 08/04/2022] [Accepted: 08/08/2022] [Indexed: 02/01/2023]
Abstract
Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review on artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
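Segmentation-then-measurement frameworks of the kind this review covers typically derive a cup-to-disc ratio from the segmented cup and disc boundaries as the final, rule-based step. A minimal sketch of that step for the vertical ratio, with an illustrative function name and toy masks (not any specific paper's method):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from boolean segmentation masks:
    the cup's vertical extent divided by the disc's vertical extent."""
    def vertical_extent(mask):
        rows = np.flatnonzero(mask.any(axis=1))   # rows containing any mask pixel
        return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

# Toy rectangular masks standing in for segmentation output.
disc = np.zeros((64, 64), dtype=bool)
disc[10:50, 8:52] = True      # disc spans 40 rows
cup = np.zeros((64, 64), dtype=bool)
cup[20:40, 18:42] = True      # cup spans 20 rows
print(vertical_cdr(cup, disc))  # prints: 0.5
```

A logical rule-based framework would then compare this ratio against a clinical cutoff, whereas the machine-learning frameworks reviewed here feed such measurements (or the raw segmented image) into a classifier.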
Affiliation(s)
- Lauren J Coan
  - School of Computer Science and Mathematics, Liverpool John Moores University, UK
- Bryan M Williams
  - School of Computing and Communications, Lancaster University, UK
- Swati Upadhyaya
  - Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Ala Alkafri
  - School of Computing, Engineering & Digital Technologies, Teesside University, UK
- Silvester Czanner
  - School of Computer Science and Mathematics, Liverpool John Moores University, UK
  - Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
- Rengaraj Venkatesh
  - Department of Glaucoma and Chief Medical Officer, Aravind Eye Hospital, Pondicherry, India
- Gabriela Czanner
  - School of Computer Science and Mathematics, Liverpool John Moores University, UK
  - Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
17
Hung KH, Kao YC, Tang YH, Chen YT, Wang CH, Wang YC, Lee OKS. Application of a deep learning system in glaucoma screening and further classification with colour fundus photographs: a case control study. BMC Ophthalmol 2022; 22:483. [PMID: 36510171 PMCID: PMC9743575 DOI: 10.1186/s12886-022-02730-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Accepted: 12/06/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND To verify the efficacy of automatic screening and classification of glaucoma with a deep learning system (DLS). METHODS A cross-sectional, retrospective study in a tertiary referral hospital. Patients with a healthy optic disc, high-tension glaucoma, or normal-tension glaucoma were enrolled; complicated non-glaucomatous optic neuropathy was excluded. Colour and red-free fundus images were collected for development of the DLS and comparison of their efficacy. A convolutional neural network with the pre-trained EfficientNet-b0 model was selected for machine learning. Glaucoma screening (binary) and ternary classification, with or without additional demographics (age, gender, high myopia), were evaluated, followed by creating confusion matrices and heatmaps. Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score were the main outcome measures. RESULTS Two hundred and twenty-two cases (421 eyes) were enrolled, with 1851 images in total (1207 normal and 644 glaucomatous discs). The training and test sets comprised 1539 and 312 images, respectively. Without demographics, the AUC, accuracy, precision, sensitivity, F1 score, and specificity of the DLS in eye-based glaucoma screening were 0.98, 0.91, 0.86, 0.86, 0.86, and 0.94 on the test set. The same outcome measures in eye-based ternary classification without demographic data were 0.94, 0.87, 0.87, 0.87, 0.87, and 0.94, respectively. Adding demographics had no significant impact on efficacy, but establishing a linkage between eyes and images was helpful for better performance. Confusion matrices and heatmaps suggested that retinal lesions and the quality of photographs could affect classification. Colour fundus images played a major role in glaucoma classification compared with red-free fundus images.
CONCLUSIONS Promising results with high AUC and specificity were shown in distinguishing normal optic nerves from glaucomatous fundus images and in further classification.
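The accuracy, precision, sensitivity, specificity, and F1 figures this study reports all derive from confusion-matrix counts. A small helper shows the arithmetic; the counts below are invented for illustration and are not the study's actual confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)               # also called recall
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# Illustrative counts for a 312-image test set (hypothetical, not the paper's).
m = binary_metrics(tp=90, fp=10, tn=190, fn=22)
```

Reporting precision alongside sensitivity and specificity, as the study does, matters because class imbalance (here, more normal than glaucomatous discs) can make accuracy alone misleading.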
Affiliation(s)
- Kuo-Hsuan Hung
  - Department of Ophthalmology, Chang-Gung Memorial Hospital, Linkou, No.5, Fu-Hsing St., Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan
  - Chang-Gung University College of Medicine, No.259 Wen-Hwa 1st Road, Kuei Shan Hsiang, Tao Yuan Hsien, Taiwan
  - Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan
- Yu-Ching Kao
  - Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Yu-Hsuan Tang
  - Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan
- Yi-Ting Chen
  - Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Chuen-Heng Wang
  - Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Yu-Chen Wang
  - Muen Biomedical and Optoelectronics Technologies Inc., Taipei, Taiwan
- Oscar Kuang-Sheng Lee
  - Institute of Clinical Medicine, National Yang Ming Chiao Tung University, No.201, Sec.2, Shih-Pai Rd., Peitou, Taipei 112, Taiwan
  - Stem Cell Research Centre, National Yang Ming Chiao Tung University, Taipei, Taiwan
  - Department of Orthopedics, China Medical University Hospital, Taichung, Taiwan
18
Detecting multiple retinal diseases in ultra-widefield fundus imaging and data-driven identification of informative regions with deep learning. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00566-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
19
Fan R, Alipour K, Bowd C, Christopher M, Brye N, Proudfoot JA, Goldbaum MH, Belghith A, Girkin CA, Fazio MA, Liebmann JM, Weinreb RN, Pazzani M, Kriegman D, Zangwill LM. Detecting Glaucoma from Fundus Photographs Using Deep Learning without Convolutions: Transformer for Improved Generalization. OPHTHALMOLOGY SCIENCE 2022; 3:100233. [PMID: 36545260 PMCID: PMC9762193 DOI: 10.1016/j.xops.2022.100233] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 10/04/2022] [Accepted: 10/12/2022] [Indexed: 12/14/2022]
Abstract
Purpose To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS) to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design Evaluation of a diagnostic technology. Subjects, Participants, and Controls Overall, 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3) and Reading Center determinations based on disc (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and 5 external datasets. Main Outcome Measures Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT models than in the ResNet-50 models.
The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
Key Words
- AI, artificial intelligence
- AUROC, areas under the receiver operating characteristic curve
- CI, confidence interval
- CNN, convolutional neural network
- DL, deep learning
- Deep learning
- DeiT, Data-efficient image Transformer
- Fundus photographs
- Glaucoma detection
- LAG, Large-Scale Attention-Based Glaucoma
- OHTS, Ocular Hypertension Treatment Study
- POAG, primary open-angle glaucoma
- SoTA, state-of-the-art
- VF, visual field
- ViT, Vision Transformer
- Vision Transformers
Affiliation(s)
- Rui Fan
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
  - Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Kamran Alipour
  - Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- Christopher Bowd
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Mark Christopher
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Nicole Brye
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- James A. Proudfoot
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Michael H. Goldbaum
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Akram Belghith
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Christopher A. Girkin
  - Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama
- Massimo A. Fazio
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - Department of Ophthalmology, School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama
  - Department of Biomedical Engineering, School of Engineering, The University of Alabama at Birmingham, Birmingham, Alabama
- Jeffrey M. Liebmann
  - Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York
- Robert N. Weinreb
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
- Michael Pazzani
  - Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- David Kriegman
  - Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- Linda M. Zangwill
  - Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, California
  - Correspondence: Linda M. Zangwill, 9500 Gilman Dr., #0946, La Jolla, California 92093-0946
20
Al-Aswad LA, Ramachandran R, Schuman JS, Medeiros F, Eydelman MB. Artificial Intelligence for Glaucoma: Creating and Implementing Artificial Intelligence for Disease Detection and Progression. Ophthalmol Glaucoma 2022; 5:e16-e25. [PMID: 35218987 PMCID: PMC9399304 DOI: 10.1016/j.ogla.2022.02.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 02/14/2022] [Accepted: 02/17/2022] [Indexed: 12/15/2022]
Abstract
On September 3, 2020, the Collaborative Community on Ophthalmic Imaging conducted its first 2-day virtual workshop on the role of artificial intelligence (AI) and related machine learning techniques in the diagnosis and treatment of various ophthalmic conditions. In a session entitled "Artificial Intelligence for Glaucoma," a panel of glaucoma specialists, researchers, industry experts, and patients convened to share current research on the application of AI to commonly used diagnostic modalities, including fundus photography, OCT imaging, standard automated perimetry, and gonioscopy. The conference participants focused on the use of AI as a tool for disease prediction, highlighted its ability to address inequalities, and presented the limitations of and challenges to its clinical application. The panelists' discussion addressed AI and health equities from clinical, societal, and regulatory perspectives.
Affiliation(s)
- Lama A Al-Aswad
  - Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
  - Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Rithambara Ramachandran
  - Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Joel S Schuman
  - Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
  - Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York
  - Department of Electrical and Computer Engineering, New York University Tandon School of Engineering, Brooklyn, New York
  - Center for Neural Science, NYU, New York, New York
  - Neuroscience Institute, NYU Langone Health, New York, New York
- Felipe Medeiros
  - Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina
  - Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
21
Practical Application of Artificial Intelligence Technology in Glaucoma Diagnosis. J Ophthalmol 2022; 2022:5212128. [PMID: 35957747 PMCID: PMC9357716 DOI: 10.1155/2022/5212128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 06/29/2022] [Indexed: 11/18/2022] Open
Abstract
Purpose. By comparing the diagnostic performance of artificial intelligence (AI) and doctors under different collaboration models, we aimed to identify the optimal model for the future use of AI. Methods. A total of 500 fundus images of glaucoma and 500 fundus images of normal eyes were collected and randomly divided into five groups, with each group corresponding to one round. The AI system provided diagnostic suggestions for each image. Four doctors provided diagnoses without the assistance of the AI in the first round and with the assistance of the AI in the second and third rounds. In the fourth round, doctors B and D made diagnoses with the help of the AI and the other two doctors without it; in the last round, doctors A and B made diagnoses with the help of the AI and the other two doctors without it. Results. Doctors A, B, and D diagnosed glaucoma more accurately with the assistance of the AI in the second and third rounds than in the first round. In the second and third rounds, the accuracy of at least one doctor was higher than that of the AI, although the difference was not statistically significant. The four doctors' overall accuracy and sensitivity as a whole improved significantly in the second and third rounds. Conclusions. This "Doctor + AI" model can clarify the respective roles of doctors and AI in medical responsibility and ensure the safety of patients, and importantly, this model shows great potential and broad application prospects.
22
Multi-task deep learning for glaucoma detection from color fundus images. Sci Rep 2022; 12:12361. [PMID: 35858986 PMCID: PMC9300731 DOI: 10.1038/s41598-022-16262-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 07/07/2022] [Indexed: 11/11/2022] Open
Abstract
Glaucoma is an eye condition that leads to loss of vision and blindness if not diagnosed in time. Diagnosis requires human experts to estimate, in a limited time, subtle changes in the shape of the optic disc from retinal fundus images. Deep learning methods have performed well in classifying and segmenting diseases in retinal fundus images, helping to analyze the growing volume of images. However, model training requires extensive annotations to achieve successful generalization, which can be highly problematic given the cost of expert annotations. This work aims at designing and training a novel multi-task deep learning model that leverages the similarities of related eye-fundus tasks and measurements used in glaucoma diagnosis. The model simultaneously learns different segmentation and classification tasks, thus benefiting from their similarity. Evaluated on a retinal fundus glaucoma challenge dataset including 1200 retinal fundus images from different cameras and medical centers, the method obtained an AUC of 96.76 ± 0.96, compared with 93.56 ± 1.48 for the same backbone network trained only to detect glaucoma. Our approach outperforms other multi-task learning models, and its performance matches that of trained experts while using ∼3.5 times fewer parameters than training each task separately. The data and the code for reproducing our results are publicly available.
23
Shin Y, Cho H, Shin YU, Seong M, Choi JW, Lee WJ. Comparison between Deep-Learning-Based Ultra-Wide-Field Fundus Imaging and True-Colour Confocal Scanning for Diagnosing Glaucoma. J Clin Med 2022; 11:jcm11113168. [PMID: 35683577 PMCID: PMC9181263 DOI: 10.3390/jcm11113168] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 02/05/2023] Open
Abstract
In this retrospective, comparative study, we evaluated and compared the performance of two confocal imaging modalities in detecting glaucoma based on a deep learning (DL) classifier: ultra-wide-field (UWF) fundus imaging and true-colour confocal scanning. A total of 777 eyes, including 273 normal control eyes and 504 glaucomatous eyes, were tested. A convolutional neural network was used for each true-colour confocal scan (Eidon AF™, CenterVue, Padova, Italy) and UWF fundus image (Optomap™, Optos PLC, Dunfermline, UK) to detect glaucoma. The diagnostic model was trained on 545 images and tested on 232 images. The presence of glaucoma was determined, and accuracy and area under the receiver operating characteristic curve (AUC) were assessed to compare diagnostic power. DL-based UWF fundus imaging achieved an AUC of 0.904 (95% confidence interval (CI): 0.861–0.937) and an accuracy of 83.62%. In contrast, DL-based true-colour confocal scanning achieved an AUC of 0.868 (95% CI: 0.824–0.912) and an accuracy of 81.46%. The two DL-based confocal imaging modalities showed no significant difference in their ability to diagnose glaucoma (p = 0.135) and were comparable to traditional optical coherence tomography parameter-based methods (all p > 0.005). Therefore, using a DL-based algorithm on true-colour confocal scanning and UWF fundus imaging, we confirmed that both confocal fundus imaging techniques had high value in diagnosing glaucoma.
Affiliation(s)
- Younji Shin
  - Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Hyunsoo Cho
  - Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Yong Un Shin
  - Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Mincheol Seong
  - Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Jun Won Choi
  - Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
  - Correspondence: Tel.: +82-2-2290-2316 (J.W.C.)
- Won June Lee
  - Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
  - Correspondence: Tel.: +82-2-2290-8570 (W.J.L.)
24
Camara J, Neto A, Pires IM, Villasana MV, Zdravevski E, Cunha A. Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification. J Imaging 2022; 8:jimaging8020019. [PMID: 35200722 PMCID: PMC8878383 DOI: 10.3390/jimaging8020019] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 01/11/2022] [Accepted: 01/17/2022] [Indexed: 12/20/2022] Open
Abstract
Artificial intelligence techniques are now being applied in medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. Combining computer science methods with medical knowledge facilitates these processes and improves the accuracy of the resulting tools. Inspired by these advances, this paper reviews the state of the art in glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have shown high sensitivity and specificity in glaucoma screening based on papilla and excavation images. Automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the progression of glaucomatous disease. Finally, we assess whether deep learning techniques can deliver accurate, low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors monitor patients more effectively.
Affiliation(s)
- José Camara
  - R. Escola Politécnica, Universidade Aberta, 1250-100 Lisboa, Portugal
  - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Alexandre Neto
  - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
  - Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Ivan Miguel Pires
  - Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
  - Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- María Vanessa Villasana
  - Centro Hospitalar Universitário Cova da Beira, 6200-251 Covilhã, Portugal
  - UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
- Eftim Zdravevski
  - Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
- António Cunha
  - Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
  - Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
  - Correspondence: ; Tel.: +351-931-636-373
25
Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. EXPERT REVIEW OF PRECISION MEDICINE AND DRUG DEVELOPMENT 2022. [DOI: 10.1080/23808993.2022.2154146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
- Loay Rahman
  - Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
  - The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Ammaarah Hafejee
  - Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
  - The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Rajeevan Anantharanjit
  - Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
  - The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Wei Wei
  - Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
  - The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK