1. Jin Y, Liang L, Li J, Xu K, Zhou W, Li Y. Artificial intelligence and glaucoma: a lucid and comprehensive review. Front Med (Lausanne) 2024; 11:1423813. PMID: 39736974; PMCID: PMC11682886; DOI: 10.3389/fmed.2024.1423813.
Abstract
Glaucoma causes pathologically irreversible damage and is among the most challenging ophthalmic diseases to manage. Because its concealed, slowly progressive changes are difficult to detect, clinical diagnosis and treatment are demanding, and screening and monitoring of disease progression are crucial. Artificial intelligence has advanced rapidly in many fields, particularly medicine, thanks to ongoing in-depth study and algorithm development, and research on machine learning and deep learning in glaucoma is evolving quickly. With its numerous advantages, artificial intelligence can raise the accuracy and efficiency of glaucoma screening and diagnosis to new heights and substantially reduce the cost of diagnosis and treatment for most patients. This review summarizes the applications of artificial intelligence in the screening and diagnosis of glaucoma, reflects on the limitations and difficulties of its current use in the field, and outlines prospects for applying artificial intelligence to glaucoma and other eye diseases.
Affiliation(s)
- Lina Liang
- Department of Eye Function Laboratory, Eye Hospital, China Academy of Chinese Medical Sciences, Beijing, China

2. Hsu TK, Lai IP, Tsai MJ, Lee PJ, Hung KC, Yang S, Chan LW, Lin IC, Chang WH, Huang YJ, Cheng MC, Hsieh YT. A deep learning approach for the screening of referable age-related macular degeneration - Model development and external validation. J Formos Med Assoc 2024:S0929-6646(24)00567-9. PMID: 39675993; DOI: 10.1016/j.jfma.2024.12.008.
Abstract
PURPOSE To develop a deep learning image assessment software, VeriSee™ AMD, and to validate its accuracy in diagnosing referable age-related macular degeneration (AMD). METHODS For model development, a total of 6801 judgable 45-degree color fundus images from patients, aged 50 years and over, were collected. These images were assessed for AMD severity by ophthalmologists, according to the Age-Related Eye Disease Studies (AREDS) AMD category. Referable AMD was defined as category three (intermediate) or four (advanced). Of these images, 6123 were used for model training and validation. The other 678 images were used for testing the accuracy of VeriSee™ AMD relative to the ophthalmologists. Area under the receiver operating characteristic curve (AUC) for VeriSee™ AMD, and the sensitivities and specificities for VeriSee™ AMD and ophthalmologists were calculated. For external validation, another 937 color fundus images were used to test the accuracy of VeriSee™ AMD. RESULTS During model development, the AUC for VeriSee™ AMD in diagnosing referable AMD was 0.961. The accuracy for VeriSee™ AMD for testing was 92.04% (sensitivity 90.0% and specificity 92.43%). The mean accuracy of the ophthalmologists in diagnosing referable AMD was 85.8% (range: 75.93%-97.31%). During external validation, VeriSee AMD achieved a sensitivity of 90.03%, a specificity of 96.44%, and an accuracy of 92.04%. CONCLUSIONS VeriSee™ AMD demonstrated good sensitivity and specificity in diagnosing referable AMD from color fundus images. The findings of this study support the use of VeriSee™ AMD in assisting with the clinical screening of intermediate and advanced AMD using color fundus photography.
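For orientation, the kind of test-set evaluation reported above can be reproduced with a short, hedged sketch. This is not the VeriSee™ AMD code; the function name and the dummy labels below are illustrative assumptions.

```python
# Minimal sketch: AUC, sensitivity, specificity and accuracy for a binary
# "referable AMD" classifier, computed from per-image probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def referable_amd_metrics(y_true, y_prob, threshold=0.5):
    """y_true: 1 = referable AMD (AREDS category 3 or 4), 0 = non-referable.
    y_prob: model probability of referable AMD for each image."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_prob),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Dummy usage standing in for the 678-image test set:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=678)
    y_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, size=678), 0, 1)
    print(referable_amd_metrics(y_true, y_prob))
```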
Affiliation(s)
- Tsui-Kang Hsu
- Cheng-Hsin General Hospital, Taipei, Taiwan; Department of Ophthalmology, School of medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ivan Pochou Lai
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Meng-Ju Tsai
- Taoyuan General Hospital, Ministry of Health and Welfare, Taoyuan, Taiwan
- Pei-Jung Lee
- Department of Ophthalmology, New Taipei City Hospital, New Taipei, Taiwan
- Kuo-Chi Hung
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan; Universal Eye Center, Xinnan branch, Taipei, Taiwan
- Shihyi Yang
- Landseed International Hospital, Taoyuan, Taiwan
- Li-Wei Chan
- Department of Ophthalmology, Taipei Tzu Chi Hospital, The Buddhist Tzu Chi Medical Foundation, New Taipei, Taiwan
- I-Chan Lin
- Department of Ophthalmology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan; Department of Ophthalmology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan; Department of Ophthalmology, School of Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Ophthalmology, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan.

3. Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. PMID: 39186968; DOI: 10.1016/j.preteyeres.2024.101291.
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make doctors wary or skeptical. When it comes to using these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will not only achieve improved algorithmic accuracy and optimized data integration but also move towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.

4. Zeppieri M, Gardini L, Culiersi C, Fontana L, Musa M, D’Esposito F, Surico PL, Gagliano C, Sorrentino FS. Novel Approaches for the Early Detection of Glaucoma Using Artificial Intelligence. Life (Basel) 2024; 14:1386. PMID: 39598184; PMCID: PMC11595922; DOI: 10.3390/life14111386.
Abstract
BACKGROUND If left untreated, glaucoma-the second most common cause of blindness worldwide-causes irreversible visual loss due to a gradual neurodegeneration of the retinal ganglion cells. Conventional techniques for identifying glaucoma, like optical coherence tomography (OCT) and visual field exams, are frequently laborious and dependent on subjective interpretation. Through the fast and accurate analysis of massive amounts of imaging data, artificial intelligence (AI), in particular machine learning (ML) and deep learning (DL), has emerged as a promising method to improve the early detection and management of glaucoma. AIMS The purpose of this study is to examine the current uses of AI in the early diagnosis, treatment, and detection of glaucoma while highlighting the advantages and drawbacks of different AI models and algorithms. In addition, it aims to determine how AI technologies might transform glaucoma treatment and suggest future lines of inquiry for this area of study. METHODS A thorough search of databases, including Web of Science, PubMed, and Scopus, was carried out to find pertinent papers released until August 2024. The inclusion criteria were limited to research published in English in peer-reviewed publications that used AI, ML, or DL to diagnose or treat glaucoma in human subjects. Articles were chosen and vetted according to their quality, contribution to the field, and relevancy. RESULTS Convolutional neural networks (CNNs) and other deep learning algorithms are among the AI models included in this paper that have been shown to have excellent sensitivity and specificity in identifying glaucomatous alterations in fundus photos, OCT scans, and visual field tests. By automating standard screening procedures, these models have demonstrated promise in distinguishing between glaucomatous and healthy eyes, forecasting the course of the disease, and possibly lessening the workload of physicians. Nonetheless, several significant obstacles remain, such as the requirement for various training datasets, outside validation, decision-making transparency, and handling moral and legal issues. CONCLUSIONS Artificial intelligence (AI) holds great promise for improving the diagnosis and treatment of glaucoma by facilitating prompt and precise interpretation of imaging data and assisting in clinical decision making. To guarantee wider accessibility and better patient results, future research should create strong generalizable AI models validated in various populations, address ethical and legal matters, and incorporate AI into clinical practice.
Affiliation(s)
- Marco Zeppieri
- Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
- Lorenzo Gardini
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy (F.S.S.)
- Carola Culiersi
- Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy (F.S.S.)
- Luigi Fontana
- Ophthalmology Unit, Department of Surgical Sciences, IRCCS Azienda Ospedaliero, Alma Mater Studiorum University of Bologna, 40100 Bologna, Italy
- Mutali Musa
- Department of Optometry, University of Benin, Benin City 300238, Nigeria
- Africa Eye Laser Centre, Km 7, Benin City 300105, Nigeria
- Fabiana D’Esposito
- Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW15QH, UK
- Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
- Pier Luigi Surico
- Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Caterina Gagliano
- Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Mediterranean Foundation “G.B. Morgagni”, 95125 Catania, Italy

5. Savoy FM, Rao DP, Toh JK, Ong B, Sivaraman A, Sharma A, Das T. Empowering Portable Age-Related Macular Degeneration Screening: Evaluation of a Deep Learning Algorithm for a Smartphone Fundus Camera. BMJ Open 2024; 14:e081398. PMID: 39237272; PMCID: PMC11381639; DOI: 10.1136/bmjopen-2023-081398.
Abstract
OBJECTIVES Despite global research on early detection of age-related macular degeneration (AMD), not enough is being done for large-scale screening. Automated analysis of retinal images captured via smartphone presents a potential solution; however, to our knowledge, such an artificial intelligence (AI) system has not been evaluated. The study aimed to assess the performance of an AI algorithm in detecting referable AMD on images captured on a portable fundus camera. DESIGN, SETTING A retrospective image database from the Age-Related Eye Disease Study (AREDS) and target device was used. PARTICIPANTS The algorithm was trained on two distinct data sets with macula-centric images: initially on 108,251 images (55% referable AMD) from AREDS and then fine-tuned on 1108 images (33% referable AMD) captured on Asian eyes using the target device. The model was designed to indicate the presence of referable AMD (intermediate and advanced AMD). Following the first training step, the test set consisted of 909 images (49% referable AMD). For the fine-tuning step, the test set consisted of 238 (34% referable AMD) images. The reference standard for the AREDS data set was fundus image grading by the central reading centre, and for the target device, it was consensus image grading by specialists. OUTCOME MEASURES Area under receiver operating curve (AUC), sensitivity and specificity of algorithm. RESULTS Before fine-tuning, the deep learning (DL) algorithm exhibited a test set (from AREDS) sensitivity of 93.48% (95% CI: 90.8% to 95.6%), specificity of 82.33% (95% CI: 78.6% to 85.7%) and AUC of 0.965 (95% CI:0.95 to 0.98). After fine-tuning, the DL algorithm displayed a test set (from the target device) sensitivity of 91.25% (95% CI: 82.8% to 96.4%), specificity of 84.18% (95% CI: 77.5% to 89.5%) and AUC 0.947 (95% CI: 0.911 to 0.982). CONCLUSION The DL algorithm shows promising results in detecting referable AMD from a portable smartphone-based imaging system. This approach can potentially bring effective and affordable AMD screening to underserved areas.
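The two-stage training described above (initial training on AREDS images, then fine-tuning on images from the target smartphone-based camera) can be illustrated with a hedged sketch. The paper does not disclose its framework or backbone, so PyTorch, a ResNet-18 stand-in, and the choice to update only the classification head are assumptions made here for brevity.

```python
# Illustrative two-step workflow: build a classifier, then adapt it to
# target-device images by fine-tuning only the final layer.
import torch
import torch.nn as nn
from torchvision import models

def build_referable_amd_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def fine_tune(model, target_loader, epochs=5, lr=1e-4, device="cpu"):
    """target_loader yields (image, label) pairs from the target fundus camera."""
    for p in model.parameters():          # freeze the backbone
        p.requires_grad = False
    for p in model.fc.parameters():       # train only the classification head
        p.requires_grad = True
    model.to(device).train()
    opt = torch.optim.Adam(model.fc.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in target_loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model
```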
Affiliation(s)
- Jun Kai Toh
- Medios Technologies, Remidio Innovative Solutions, Singapore
- Bryan Ong
- Medios Technologies, Remidio Innovative Solutions, Singapore
- Anand Sivaraman
- Remidio Innovative Solutions Pvt Ltd, Bangalore, Karnataka, India
- Ashish Sharma
- Lotus Eye Care Hospital and Institute, Coimbatore, Tamil Nadu, India

6. Christopher M, Hallaj S, Jiravarnsirikul A, Baxter SL, Zangwill LM. Novel Technologies in Artificial Intelligence and Telemedicine for Glaucoma Screening. J Glaucoma 2024; 33:S26-S32. PMID: 38506792; DOI: 10.1097/ijg.0000000000002367.
Abstract
PURPOSE To provide an overview of novel technologies in telemedicine and artificial intelligence (AI) approaches for cost-effective glaucoma screening. METHODS/RESULTS A narrative review was performed by summarizing research results, recent developments in glaucoma detection and care, and considerations related to telemedicine and AI in glaucoma screening. Telemedicine and AI approaches provide the opportunity for novel glaucoma screening programs in primary care, optometry, portable, and home-based settings. These approaches offer several advantages for glaucoma screening, including increasing access to care, lowering costs, identifying patients in need of urgent treatment, and enabling timely diagnosis and early intervention. However, challenges remain in implementing these systems, including integration into existing clinical workflows, ensuring equity for patients, and meeting ethical and regulatory requirements. Leveraging recent work towards standardized data acquisition, as well as tools and techniques developed for automated diabetic retinopathy screening programs, may provide a model for a cost-effective approach to glaucoma screening. CONCLUSION Leveraging novel technologies and advances in telemedicine and AI-based approaches to glaucoma detection shows promise for improving our ability to detect moderate and advanced glaucoma in primary care settings and to target individuals at high risk of having the disease.
Affiliation(s)
- Mark Christopher
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Shahin Hallaj
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Anuwat Jiravarnsirikul
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Sally L Baxter
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute
- Department of Medicine, Division of Biomedical Informatics, University of California San Diego, La Jolla, CA
- Linda M Zangwill
- Viterbi Family Department of Ophthalmology, Hamilton Glaucoma Center
- Viterbi Family Department of Ophthalmology, Division of Ophthalmology Informatics and Data Science, Shiley Eye Institute

7. Abd El-Khalek AA, Balaha HM, Sewelam A, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A Comprehensive Review of AI Diagnosis Strategies for Age-Related Macular Degeneration (AMD). Bioengineering (Basel) 2024; 11:711. PMID: 39061793; PMCID: PMC11273790; DOI: 10.3390/bioengineering11070711.
Abstract
The rapid advancement of computational infrastructure has led to unprecedented growth in machine learning, deep learning, and computer vision, fundamentally transforming the analysis of retinal images. By utilizing a wide array of visual cues extracted from retinal fundus images, sophisticated artificial intelligence models have been developed to diagnose various retinal disorders. This paper concentrates on the detection of Age-Related Macular Degeneration (AMD), a significant retinal condition, by offering an exhaustive examination of recent machine learning and deep learning methodologies. Additionally, it discusses potential obstacles and constraints associated with implementing this technology in the field of ophthalmology. Through a systematic review, this research aims to assess the efficacy of machine learning and deep learning techniques in discerning AMD from different modalities as they have shown promise in the field of AMD and retinal disorders diagnosis. Organized around prevalent datasets and imaging techniques, the paper initially outlines assessment criteria, image preprocessing methodologies, and learning frameworks before conducting a thorough investigation of diverse approaches for AMD detection. Drawing insights from the analysis of more than 30 selected studies, the conclusion underscores current research trajectories, major challenges, and future prospects in AMD diagnosis, providing a valuable resource for both scholars and practitioners in the domain.
Affiliation(s)
- Aya A. Abd El-Khalek
- Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura 35511, Egypt;
- Hossam Magdy Balaha
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Ashraf Sewelam
- Ophthalmology Department, Faculty of Medicine, Mansoura University, Mansoura 35511, Egypt
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Abeer T. Khalil
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohy Eldin A. Abo-Elsoud
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Ayman El-Baz
- Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA

8. Pandey PU, Ballios BG, Christakis PG, Kaplan AJ, Mathew DJ, Ong Tone S, Wan MJ, Micieli JA, Wong JCY. Ensemble of deep convolutional neural networks is more accurate and reliable than board-certified ophthalmologists at detecting multiple diseases in retinal fundus photographs. Br J Ophthalmol 2024; 108:417-423. PMID: 36720585; PMCID: PMC10894841; DOI: 10.1136/bjo-2022-322183.
Abstract
AIMS To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts. METHODS We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43,055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images. RESULTS Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a statistically significantly higher mean F1-score for DR classification compared with the ophthalmologists (76.8% vs 57.5%; p=0.01) and greater but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE also showed greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001). DISCUSSION We developed a deep learning model and found that it could more accurately and reliably classify four categories of fundus images compared with board-certified ophthalmologists. This work provides proof-of-principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
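The ensembling step can be sketched as follows (an illustration, not the authors' implementation): five InceptionV3 members, each with the final layer replaced for the four classes, have their softmax outputs averaged at inference time.

```python
# Hedged sketch of probability-averaging over an ensemble of InceptionV3 CNNs.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["DR", "glaucoma", "AMD", "normal"]

def make_member(num_classes: int = 4) -> nn.Module:
    m = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
    m.fc = nn.Linear(m.fc.in_features, num_classes)   # 4-class head
    return m

@torch.no_grad()
def ensemble_predict(members, images):
    """images: normalized tensor of shape (N, 3, 299, 299)."""
    probs = []
    for m in members:
        m.eval()                                       # aux logits are skipped in eval mode
        probs.append(torch.softmax(m(images), dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)        # average over the ensemble
    return mean_probs.argmax(dim=1), mean_probs

# Example: five members, one dummy batch.
if __name__ == "__main__":
    members = [make_member() for _ in range(5)]
    labels, _ = ensemble_predict(members, torch.randn(2, 3, 299, 299))
    print([CLASSES[i] for i in labels])
```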
Affiliation(s)
- Prashant U Pandey
- School of Biomedical Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
- Brian G Ballios
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Panos G Christakis
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Alexander J Kaplan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David J Mathew
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Krembil Research Institute, University Health Network, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Stephan Ong Tone
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Sunnybrook Research Institute, Toronto, Ontario, Canada
- Michael J Wan
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jonathan A Micieli
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Department of Ophthalmology, St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
- Jovi C Y Wong
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada

9. Abd El-Khalek AA, Balaha HM, Alghamdi NS, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A concentrated machine learning-based classification system for age-related macular degeneration (AMD) diagnosis using fundus images. Sci Rep 2024; 14:2434. PMID: 38287062; PMCID: PMC10825213; DOI: 10.1038/s41598-024-52131-2.
Abstract
The increase in eye disorders among older individuals has raised concerns, necessitating early detection through regular eye examinations. Age-related macular degeneration (AMD), a prevalent condition in individuals over 45, is a leading cause of vision impairment in the elderly. This paper presents a comprehensive computer-aided diagnosis (CAD) framework to categorize fundus images into geographic atrophy (GA), intermediate AMD, normal, and wet AMD categories. This is crucial for early detection and precise diagnosis of age-related macular degeneration (AMD), enabling timely intervention and personalized treatment strategies. We have developed a novel system that extracts both local and global appearance markers from fundus images. These markers are obtained from the entire retina and iso-regions aligned with the optical disc. Applying weighted majority voting on the best classifiers improves performance, resulting in an accuracy of 96.85%, sensitivity of 93.72%, specificity of 97.89%, precision of 93.86%, F1 of 93.72%, ROC of 95.85%, balanced accuracy of 95.81%, and weighted sum of 95.38%. This system not only achieves high accuracy but also provides a detailed assessment of the severity of each retinal region. This approach ensures that the final diagnosis aligns with the physician's understanding of AMD, aiding them in ongoing treatment and follow-up for AMD patients.
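The weighted-majority-voting step can be sketched as below. The base classifiers, weights, and synthetic features are assumptions for illustration; in the study the inputs are local and global appearance markers extracted from the whole retina and from iso-regions aligned with the optic disc.

```python
# Hedged sketch: soft voting over several classifiers, weighted (for example)
# by each classifier's validation accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

AMD_CLASSES = ["geographic atrophy", "intermediate AMD", "normal", "wet AMD"]

def build_weighted_voter(weights):
    return VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True)),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",        # weighted average of predicted class probabilities
        weights=weights,
    )

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 32))          # stand-in for appearance markers
    y = rng.integers(0, 4, size=400)        # stand-in for AMD categories
    voter = build_weighted_voter(weights=[0.95, 0.93, 0.90]).fit(X, y)
    print(AMD_CLASSES[voter.predict(X[:1])[0]])
```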
Affiliation(s)
- Aya A Abd El-Khalek
- Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
- Hossam Magdy Balaha
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Abeer T Khalil
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mohy Eldin A Abo-Elsoud
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA.

10. Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597; PMCID: PMC10725017; DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields for glaucoma detection, progression assessment, staging and so on. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight some key themes from the existing literature that may help to explore future studies. Our goal in this systematic review is to help readers and researchers to understand critical aspects of AI related to glaucoma as well as determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA.
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA.

11. Duong R, Abou-Samra A, Bogaard JD, Shildkrot Y. Asteroid Hyalosis: An Update on Prevalence, Risk Factors, Emerging Clinical Impact and Management Strategies. Clin Ophthalmol 2023; 17:1739-1754. PMID: 37361691; PMCID: PMC10290459; DOI: 10.2147/opth.s389111.
Abstract
Asteroid hyalosis (AH) is a benign clinical entity characterized by the presence of multiple refractile spherical calcium and phospholipids within the vitreous body. First described by Benson in 1894, this entity has been well documented in the clinical literature and is named due to the resemblance of asteroid bodies on clinical examination to a starry night sky. Today, a growing body of epidemiologic data estimates the global prevalence of asteroid hyalosis to be around 1%, and there is a strong established association between AH and older age. While pathophysiology remains unclear, a variety of systemic and ocular risk factors for AH have recently been suggested in the literature and may provide insight into possible mechanisms for asteroid body (AB) development. As vision is rarely affected, clinical management is focused on differentiation of asteroid hyalosis from mimicking conditions, evaluation of the underlying retina for other pathology and consideration of vitrectomy in rare cases with visual impairment. Taking into account the recent technologic advances in large-scale medical databases, improving imaging modalities, and the popularity of telemedicine, this review summarizes the growing body of literature of AH epidemiology and pathophysiology and provides updates on the clinical diagnosis and management of AH.
Affiliation(s)
- Ryan Duong
- Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Abdullah Abou-Samra
- Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Joseph D Bogaard
- Department of Ophthalmology, University of Virginia, Charlottesville, VA, USA
- Yevgeniy Shildkrot
- RetinaCare of Virginia, Augusta Eye Associates PLC, Fishersville, VA, USA
- Virginia Commonwealth University, Richmond, VA, USA

12. Abramovich O, Pizem H, Van Eijgen J, Oren I, Melamed J, Stalmans I, Blumenthal EZ, Behar JA. FundusQ-Net: A regression quality assessment deep learning algorithm for fundus images quality grading. Comput Methods Programs Biomed 2023; 239:107522. PMID: 37285697; DOI: 10.1016/j.cmpb.2023.107522.
Abstract
OBJECTIVE Ophthalmological pathologies such as glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment. There is a need for novel decision support tools that can simplify and speed up the diagnosis of these pathologies. A key step in this process is to automatically estimate the quality of the fundus images to make sure these are interpretable by a human operator or a machine learning model. We present a novel fundus image quality scale and deep learning (DL) model that can estimate fundus image quality relative to this new scale. METHODS A total of 1245 images were graded for quality by two ophthalmologists within the range 1-10, with a resolution of 0.5. A DL regression model was trained for fundus image quality assessment. The architecture used was Inception-V3. The model was developed using a total of 89,947 images from 6 databases, of which 1245 were labeled by the specialists and the remaining 88,702 images were used for pre-training and semi-supervised learning. The final DL model was evaluated on an internal test set (n=209) as well as an external test set (n=194). RESULTS The final DL model, denoted FundusQ-Net, achieved a mean absolute error of 0.61 (0.54-0.68) on the internal test set. When evaluated as a binary classification model on the public DRIMDB database as an external test set the model obtained an accuracy of 99%. SIGNIFICANCE the proposed algorithm provides a new robust tool for automated quality grading of fundus images.
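A minimal sketch of this kind of quality-regression setup is given below, assuming a PyTorch implementation: an Inception-V3 backbone with a single continuous output on the 1-10 scale, evaluated by mean absolute error. The full FundusQ-Net recipe (pre-training and semi-supervised learning on the 88,702 unlabeled images) is not reproduced here.

```python
# Hedged sketch: fundus image quality as a regression target on a 1-10 scale.
import torch
import torch.nn as nn
from torchvision import models

def build_quality_regressor() -> nn.Module:
    m = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
    m.fc = nn.Linear(m.fc.in_features, 1)      # single continuous quality score
    return m

@torch.no_grad()
def mean_absolute_error(model, loader, device="cpu"):
    """loader yields (images, grades); grades are expert scores in 0.5 steps."""
    model.to(device).eval()
    total_err, n = 0.0, 0
    for images, grades in loader:
        preds = model(images.to(device)).squeeze(1).cpu()
        total_err += (preds - grades.float()).abs().sum().item()
        n += grades.numel()
    return total_err / n
```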
Affiliation(s)
- Or Abramovich
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Hadas Pizem
- Rambam Medical Center: Rambam Health Care Campus, Israel
- Jan Van Eijgen
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven; Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Ilan Oren
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Joshua Melamed
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Ingeborg Stalmans
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven; Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Joachim A Behar
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel.

13. Wawer Matos PA, Reimer RP, Rokohl AC, Caldeira L, Heindl LM, Große Hokamp N. Artificial Intelligence in Ophthalmology - Status Quo and Future Perspectives. Semin Ophthalmol 2023; 38:226-237. PMID: 36356300; DOI: 10.1080/08820538.2022.2139625.
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many arms in medical care. In particular, disciplines using medical imaging modalities, including e.g. radiology but ophthalmology as well, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results limited to specific diseases and imaging tools, respectively. Yet, implementation of AI in clinical routine is not widely spread due to availability, heterogeneity in imaging techniques and AI methods. In order to describe the status quo, this narrational review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion on future challenges.Abbreviations: Age-related macular degeneration, AMD; Artificial intelligence, AI; Anterior segment OCT, AS-OCT; Coronary artery calcium score, CACS; Convolutional neural network, CNN; Deep convolutional neural network, DCNN; Diabetic retinopathy, DR; Machine learning, ML; Optical coherence tomography, OCT; Retinopathy of prematurity, ROP; Support vector machine, SVM; Thyroid-associated ophthalmopathy, TAO.
Affiliation(s)
- Robert P Reimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Alexander C Rokohl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Liliana Caldeira
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Ludwig M Heindl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Nils Große Hokamp
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany

14. Harikiran J, Chandana BS, Rao BS, Raviteja B. Ocular disease examination of fundus images by hybriding SFCNN and rule mining algorithms. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2023.2183456.
Affiliation(s)
- J. Harikiran
- School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Sai Chandana
- School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Srinivasa Rao
- School of Computer Science and Engineering, VIT-AP University, Amaravathi, India
- B. Raviteja
- Department of Computer Science and Engineering, GITAM Deemed to be University, Visakhapatnam, India

15. A Neural Network for Automated Image Quality Assessment of Optic Disc Photographs. J Clin Med 2023; 12:1217. PMID: 36769865; PMCID: PMC9917571; DOI: 10.3390/jcm12031217.
Abstract
This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment for optic disc photographs achieves high accuracy in both ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with potential applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.

16. Domínguez C, Heras J, Mata E, Pascual V, Royo D, Zapata MÁ. Binary and multi-class automated detection of age-related macular degeneration using convolutional- and transformer-based architectures. Comput Methods Programs Biomed 2023; 229:107302. PMID: 36528999; DOI: 10.1016/j.cmpb.2022.107302.
Abstract
BACKGROUND AND OBJECTIVE Age-related macular degeneration (AMD) is an eye disease that happens when ageing causes damage to the macula, and it is the leading cause of blindness in developed countries. Screening retinal fundus images allows ophthalmologists to early detect, diagnose and treat this disease; however, the manual interpretation of images is a time-consuming task. In this paper, we aim to study different deep learning methods to diagnose AMD. METHODS We have conducted a thorough study of two families of deep learning models based on convolutional neural networks (CNN) and transformer architectures to automatically diagnose referable/non-referable AMD, and grade AMD severity scales (no AMD, early AMD, intermediate AMD, and advanced AMD). In addition, we have analysed several progressive resizing strategies and ensemble methods for convolutional-based architectures to further improve the performance of the models. RESULTS As a first result, we have shown that transformer-based architectures obtain considerably worse results than convolutional-based architectures for diagnosing AMD. Moreover, we have built a model for diagnosing referable AMD that yielded a mean F1-score (SD) of 92.60% (0.47), a mean AUROC (SD) of 97.53% (0.40), and a mean weighted kappa coefficient (SD) of 85.28% (0.91); and an ensemble of models for grading AMD severity scales with a mean accuracy (SD) of 82.55% (2.92), and a mean weighted kappa coefficient (SD) of 84.76% (2.45). CONCLUSIONS This work shows that working with convolutional based architectures is more suitable than using transformer based models for classifying and grading AMD from retinal fundus images. Furthermore, convolutional models can be improved by means of progressive resizing strategies and ensemble methods.
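Progressive resizing, one of the strategies the authors report helps convolutional models, can be sketched as follows. The backbone, learning rate, schedule, and input sizes are assumptions, and `make_loader` is a hypothetical helper returning a DataLoader at a given resolution.

```python
# Hedged sketch: train the same network at successively larger input sizes,
# carrying the weights forward from one resolution to the next.
import torch
import torch.nn as nn
from torchvision import models

def progressive_resizing(make_loader, sizes=(224, 299, 384),
                         epochs_per_size=3, num_classes=4, device="cpu"):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # AMD severity grades
    model.to(device)
    loss_fn = nn.CrossEntropyLoss()
    for size in sizes:                       # small images first, large last
        loader = make_loader(size)           # DataLoader yielding size x size inputs
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        model.train()
        for _ in range(epochs_per_size):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                opt.zero_grad()
                loss_fn(model(images), labels).backward()
                opt.step()
    return model
```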
Affiliation(s)
- César Domínguez
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Jónathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, Spain.
- Eloy Mata
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Vico Pascual
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Miguel Ángel Zapata
- UPRetina, Barcelona, Spain; Hospital Vall Hebron, Passeig Roser 126, Sant Cugat del Vallés, 08195 Barcelona, Spain

17. Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023; 13:160. PMID: 36611452; PMCID: PMC9818957; DOI: 10.3390/diagnostics13010160.
Abstract
The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), either by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we have developed, trained, and tested a DLS, using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating to the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard in image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model, evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS could identify with excellent performance "good" quality photographs (AUC = 0.93 (95% CI, 0.91-0.95), accuracy = 91.4% (95% CI, 90.0-92.9%), sensitivity = 93.8% (95% CI, 92.5-95.2%), specificity = 75.9% (95% CI, 69.7-82.1%) and "poor" quality photographs (AUC = 1.00 (95% CI, 0.99-1.00), accuracy = 99.1% (95% CI, 98.6-99.6%), sensitivity = 81.5% (95% CI, 70.6-93.8%), specificity = 99.7% (95% CI, 99.6-100.0%). "Borderline" quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88-0.93), accuracy = 90.6% (95% CI, 89.1-92.2%), sensitivity = 65.4% (95% CI, 56.6-72.9%), specificity = 93.4% (95% CI, 92.1-94.8%). The overall accuracy to distinguish among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
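The one-vs-rest evaluation described above can be illustrated with a short sketch; the dummy predictions stand in for the 807-image external test set.

```python
# Hedged sketch: per-class AUC with a one-vs-rest strategy over the three
# quality classes ("good", "borderline", "poor").
import numpy as np
from sklearn.metrics import roc_auc_score

QUALITY_CLASSES = ["good", "borderline", "poor"]

def one_vs_rest_auc(y_true, y_prob):
    """y_true: integer class indices (N,); y_prob: softmax scores (N, 3)."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    return {name: roc_auc_score((y_true == k).astype(int), y_prob[:, k])
            for k, name in enumerate(QUALITY_CLASSES)}

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    y_true = rng.integers(0, 3, size=807)
    logits = rng.normal(size=(807, 3)) + 2.0 * np.eye(3)[y_true]
    y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(one_vs_rest_auc(y_true, y_prob))
```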
Affiliation(s)
- Ebenezer Chan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Zhiqun Tang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Raymond P. Najjar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
- Arun Narayanaswamy
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Nancy J. Newman
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
- Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore

18. Ibragimova RR, Gilmanov II, Lopukhova EA, Lakman IA, Bilyalov AR, Mukhamadeev TR, Kutluyarov RV, Idrisova GM. Algorithm of segmentation of OCT macular images to analyze the results in patients with age-related macular degeneration. Bulletin of Russian State Medical University 2022. DOI: 10.24075/brsmu.2022.062.
Abstract
Age-related macular degeneration (AMD) is one of the main causes of loss of sight and hypovision in people over working age. Results of optical coherence tomography (OCT) are essential for diagnostics of the disease. Developing the recommendation system to analyze OCT images will reduce the time to process visual data and decrease the probability of errors while working as a doctor. The purpose of the study was to develop an algorithm of segmentation to analyze the results of macular OCT in patients with AMD. It allows to provide a correct prediction of an AMD stage based on the form of discovered pathologies. A program has been developed in the Python programming language using the Pytorch and TensorFlow libraries. Its quality was estimated using OCT macular images of 51 patients with early, intermediate, late AMD. A segmentation algorithm of OCT images was developed based on convolutional neural network. UNet network was selected as architecture of high-accuracy neural net. The neural net is trained on macular OCT images of 125 patients (197 eyes). The author algorithm displayed 98.1% of properly segmented areas on OCT images, which are the most essential for diagnostics and determination of an AMD stage. Weighted sensitivity and specificity of AMD stage classifier amounted to 83.8% and 84.9% respectively. The developed algorithm is promising as a recommendation system that implements the AMD classification based on data that promote taking decisions regarding the treatment strategy.
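Since the authors report a U-Net implemented in Python with the PyTorch and TensorFlow libraries, a compact PyTorch U-Net is sketched below for orientation. The depth, channel widths, and the four-class output head are brevity-driven assumptions, not the study's configuration.

```python
# Hedged sketch: a two-level U-Net for per-pixel segmentation of OCT B-scans.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=4):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)      # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                              # encoder with skip connections
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

if __name__ == "__main__":
    out = TinyUNet()(torch.randn(1, 1, 256, 256))      # dummy single-channel B-scan
    print(out.shape)                                   # torch.Size([1, 4, 256, 256])
```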
Affiliation(s)
- II Gilmanov
- Ufa State Aviation Technical University, Ufa, Russia
- EA Lopukhova
- Ufa State Aviation Technical University, Ufa, Russia
- IA Lakman
- Bashkir State Medical University, Ufa, Russia
- AR Bilyalov
- Bashkir State Medical University, Ufa, Russia
- RV Kutluyarov
- Ufa State Aviation Technical University, Ufa, Russia
- GM Idrisova
- Bashkir State Medical University, Ufa, Russia

19. Gojić G, Petrović VB, Dragan D, Gajić DB, Mišković D, Džinić V, Grgić Z, Pantelić J, Oros A. Comparing the Clinical Viability of Automated Fundus Image Segmentation Methods. Sensors (Basel) 2022; 22:9101. PMID: 36501801; PMCID: PMC9735987; DOI: 10.3390/s22239101.
Abstract
Recent methods for automatic blood vessel segmentation from fundus images have been commonly implemented as convolutional neural networks. While these networks report high values for objective metrics, the clinical viability of recovered segmentation masks remains unexplored. In this paper, we perform a pilot study to assess the clinical viability of automatically generated segmentation masks in the diagnosis of diseases affecting retinal vascularization. Five ophthalmologists with clinical experience were asked to participate in the study. The results demonstrate low classification accuracy, inferring that generated segmentation masks cannot be used as a standalone resource in general clinical practice. The results also hint at possible clinical infeasibility in experimental design. In the follow-up experiment, we evaluate the clinical quality of masks by having ophthalmologists rank generation methods. The ranking is established with high intra-observer consistency, indicating better subjective performance for a subset of tested networks. The study also demonstrates that objective metrics are not correlated with subjective metrics in retinal segmentation tasks for the methods involved, suggesting that objective metrics commonly used in scientific papers to measure the method's performance are not plausible criteria for choosing clinically robust solutions.
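The reported lack of correlation between objective and subjective metrics is the kind of relationship usually quantified with a rank correlation; a minimal sketch follows, with placeholder values rather than study data.

```python
# Hedged sketch: Spearman rank correlation between an objective segmentation
# score per method (e.g., mean Dice) and clinicians' subjective rankings.
from scipy.stats import spearmanr

objective_scores = [0.81, 0.79, 0.83, 0.78, 0.80]   # placeholder objective metric
clinician_ranks = [3, 1, 4, 2, 5]                    # placeholder subjective ranking

rho, p_value = spearmanr(objective_scores, clinician_ranks)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```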
Collapse
Affiliation(s)
- Gorana Gojić
- The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Veljko B. Petrović
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dinu Dragan
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dušan B. Gajić
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dragiša Mišković
- The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
| | | | | | - Jelica Pantelić
- Institute of Eye Diseases, University Clinical Center of Serbia, 11000 Belgrade, Serbia
| | - Ana Oros
- Eye Clinic Džinić, 21107 Novi Sad, Serbia
- Institute of Neonatology, 11000 Belgrade, Serbia
| |
Collapse
|
20
|
Font O, Torrents-Barrena J, Royo D, García SB, Zarranz-Ventura J, Bures A, Salinas C, Zapata MÁ. Validation of an autonomous artificial intelligence-based diagnostic system for holistic maculopathy screening in a routine occupational health checkup context. Graefes Arch Clin Exp Ophthalmol 2022; 260:3255-3265. [PMID: 35567610 PMCID: PMC9477940 DOI: 10.1007/s00417-022-05653-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 03/15/2022] [Accepted: 03/31/2022] [Indexed: 02/08/2023] Open
Abstract
PURPOSE This study aims to evaluate the ability of an autonomous artificial intelligence (AI) system to detect the most common central retinal pathologies in fundus photography. METHODS Retrospective diagnostic test evaluation on a raw dataset of 5918 images (2839 individuals) evaluated with non-mydriatic cameras during routine occupational health checkups. Three camera models were employed: Optomed Aurora (field of view - FOV 50º, 88% of the dataset), ZEISS VISUSCOUT 100 (FOV 40º, 9%), and Optomed SmartScope M5 (FOV 40º, 3%). Image acquisition took 2 min per patient. Ground truth for each image of the dataset was determined by 2 masked retina specialists, and disagreements were resolved by a 3rd retina specialist. The specific pathologies considered for evaluation were "diabetic retinopathy" (DR), "age-related macular degeneration" (AMD), "glaucomatous optic neuropathy" (GON), and "nevus." Images with maculopathy signs that did not match the described taxonomy were classified as "other." RESULTS The combination of algorithms to detect any abnormality had an area under the curve (AUC) of 0.963, with a sensitivity of 92.9% and a specificity of 86.8%. The individual algorithms obtained the following results: AMD AUC 0.980 (sensitivity 93.8%; specificity 95.7%), DR AUC 0.950 (sensitivity 81.1%; specificity 94.8%), GON AUC 0.889 (sensitivity 53.6%; specificity 95.7%), nevus AUC 0.931 (sensitivity 86.7%; specificity 90.7%). CONCLUSION Our holistic AI approach reaches high diagnostic accuracy in the simultaneous detection of DR, AMD, and nevus. The integration of pathology-specific algorithms permits higher sensitivities with minimal impact on specificity. It also reduces the risk of missing incidental findings. Deep learning may facilitate wider screening for eye diseases.
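The reported operating metrics can be derived from per-image model outputs along the lines of the following sketch; the label and score arrays and the 0.5 decision threshold are synthetic placeholders, not outputs of the validated system.

```python
# Sketch of how AUC, sensitivity and specificity are computed from per-image
# probabilities; the arrays below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = referable pathology present
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.3, 0.6])   # model probabilities

auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)                           # illustrative decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```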
Collapse
Affiliation(s)
- Octavi Font
- Optretina Image Reading Team, Barcelona, Spain
| | - Jordina Torrents-Barrena
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Dídac Royo
- Optretina Image Reading Team, Barcelona, Spain
| | - Sandra Banderas García
- Facultat de Cirurgia i Ciències Morfològiques, Universitat Autònoma de Barcelona (UAB), Barcelona, Spain.
- Ophthalmology Department Hospital Vall d'Hebron, Barcelona, Spain.
| | - Javier Zarranz-Ventura
- Institut Clinic of Ophthalmology (ICOF), Hospital Clinic, Barcelona, Spain
- Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
| | - Anniken Bures
- Optretina Image Reading Team, Barcelona, Spain
- Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
| | - Cecilia Salinas
- Optretina Image Reading Team, Barcelona, Spain
- Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
| | - Miguel Ángel Zapata
- Optretina Image Reading Team, Barcelona, Spain
- Ophthalmology Department Hospital Vall d'Hebron, Barcelona, Spain
| |
Collapse
|
21
|
Li Q, Wang N, Liu Z, Li L, Liu Z, Long X, Yang H, Song H. Approach to glaucoma diagnosis and prediction based on multiparameter neural network. Int Ophthalmol 2022; 43:837-845. [PMID: 36083563 DOI: 10.1007/s10792-022-02485-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 08/20/2022] [Indexed: 11/28/2022]
Abstract
PURPOSE To investigate how combining glaucoma assessment data with derived parameters, including the trans-laminar cribrosa pressure difference (TLCPD) and the fractional pressure reserve (FPR), affects neural-network-based diagnosis and prediction. METHODS Clinical data were collected for 1029 patients with 15 indicators from the medical records of Beijing Tongren Hospital and for 600 cases with 1322 indicators from Beijing Eye Research. The doc2vec method was used for vectorization, and multivariate imputation by chained equations (MICE) was used to interpolate missing values. The original data, the data combined with TLCPD, and the data combined with FPR were each used to train neural networks based on VGG16 and an autoencoder to predict glaucoma and to evaluate the effect of the combined parameters. RESULTS With the original data, the glaucoma classification accuracy for the two datasets exceeded 0.90, with precision of 0.70 and 0.80, respectively. After adding TLCPD and FPR to the autoencoder model, accuracy for both datasets was close to 1.0, with precision of 0.90 and 0.70, respectively. CONCLUSION Combining FPR and TLCPD with the clinical data can effectively improve the diagnosis and prediction of glaucoma; compared with TLCPD, FPR is the more suitable parameter for improving neural-network-based glaucoma classification.
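Two of the preprocessing steps named in this abstract, chained-equation imputation of missing indicators and training a classifier on the completed data, can be sketched as below; the feature names, values, and the logistic-regression stand-in for the VGG16/autoencoder models are assumptions made only for illustration.

```python
# Minimal sketch: MICE-style imputation of missing clinical indicators followed by a
# simple classifier. Feature names and values are hypothetical placeholders.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

# Rows: patients; columns could represent e.g. IOP, an intracranial pressure estimate,
# TLCPD and FPR (NaN = missing). All values are invented.
X = np.array([[21.0, 10.0, 11.0, 0.52],
              [17.0, np.nan, 7.5, 0.61],
              [24.0, 9.0, np.nan, np.nan],
              [15.0, 11.0, 4.0, 0.70]])
y = np.array([1, 0, 1, 0])  # 1 = glaucoma, 0 = control (placeholder labels)

X_imputed = IterativeImputer(random_state=0).fit_transform(X)
clf = LogisticRegression().fit(X_imputed, y)   # stand-in for the VGG16/autoencoder models
print(clf.predict_proba(X_imputed)[:, 1])
```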
Collapse
Affiliation(s)
- Qi Li
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, 100083, China
| | - Ningli Wang
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
| | - Zhicheng Liu
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| | - Lin Li
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| | - Zhicheng Liu
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| | - Xiaoxue Long
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| | - Hongyu Yang
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| | - Hongfang Song
- School of Biomedical Engineering, Capital Medical University, Beijing, 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, 100069, China
| |
Collapse
|
22
|
Patil AD, Biousse V, Newman NJ. Artificial intelligence in ophthalmology: an insight into neurodegenerative disease. Curr Opin Ophthalmol 2022; 33:432-439. [PMID: 35819902 DOI: 10.1097/icu.0000000000000877] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW The aging world population accounts for the increasing prevalence of neurodegenerative diseases such as Alzheimer's and Parkinson's which carry a significant health and economic burden. There is therefore a need for sensitive and specific noninvasive biomarkers for early diagnosis and monitoring. Advances in retinal and optic nerve multimodal imaging as well as the development of artificial intelligence deep learning systems (AI-DLS) have heralded a number of promising advances of which ophthalmologists are at the forefront. RECENT FINDINGS The association among retinal vascular, nerve fiber layer, and macular findings in neurodegenerative disease is well established. In order to optimize the use of these ophthalmic parameters as biomarkers, validated AI-DLS are required to ensure clinical efficacy and reliability. Varied image acquisition methods and protocols as well as variability in neurogenerative disease diagnosis compromise the robustness of ground truths that are paramount to developing high-quality training datasets. SUMMARY In order to produce effective AI-DLS for the diagnosis and monitoring of neurodegenerative disease, multicenter international collaboration is required to prospectively produce large inclusive datasets, acquired through standardized methods and protocols. With a uniform approach, the efficacy of resultant clinical applications will be maximized.
Collapse
Affiliation(s)
| | | | - Nancy J Newman
- Department of Ophthalmology
- Department of Neurology
- Department of Neurological Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
| |
Collapse
|
23
|
Tan Y, Zhu W, Zou Y, Zhang B, Yu Y, Li W, Jin G, Liu Z. Hotspots and trends in ophthalmology in recent 5 years: Bibliometric analysis in 2017–2021. Front Med (Lausanne) 2022; 9:988133. [PMID: 36091704 PMCID: PMC9462464 DOI: 10.3389/fmed.2022.988133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 07/27/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose The purpose of this study was to investigate the hotspots and research trends of ophthalmology research. Method Ophthalmology research literature published between 2017 and 2021 was obtained in the Web of Science Core Collection database. The bibliometric analysis and network visualization were performed with the VOSviewer and CiteSpace. Publication-related information, including publication volume, citation counts, countries, journals, keywords, subject categories, and publication time, was analyzed. Results A total of 10,469 included ophthalmology publications had been cited a total of 7,995 times during the past 5 years. The top countries and journals for the number of publications were the United States and the Ophthalmology. The top 25 global high-impact documents had been identified using the citation ranking. Keyword co-occurrence analysis showed that the hotspots in ophthalmology research were epidemiological characteristics and treatment modalities of ocular diseases, artificial intelligence and fundus imaging technology, COVID-19-related telemedicine, and screening and prevention of ocular diseases. Keyword burst analysis revealed that “neural network,” “pharmacokinetics,” “geographic atrophy,” “implementation,” “variability,” “adverse events,” “automated detection,” and “retinal images” were the research trends of research in the field of ophthalmology through 2021. The analysis of the subject categories demonstrated the close cooperation relationships that existed between different subject categories, and collaborations with non-ophthalmology-related subject categories were increasing over time in the field of ophthalmology research. Conclusions The hotspots in ophthalmology research were epidemiology, prevention, screening, and treatment of ocular diseases, as well as artificial intelligence and fundus imaging technology and telemedicine. Research trends in ophthalmology research were artificial intelligence, drug development, and fundus diseases. Knowledge from non-ophthalmology fields is likely to be more involved in ophthalmology research.
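The keyword co-occurrence analysis that tools such as VOSviewer perform on bibliographic records essentially reduces to counting keyword pairs, as in the toy sketch below; the example records are invented and not drawn from the analysed corpus.

```python
# Toy illustration of keyword co-occurrence counting on bibliographic records;
# the keyword lists are invented examples.
from itertools import combinations
from collections import Counter

records = [
    ["artificial intelligence", "fundus imaging", "glaucoma"],
    ["artificial intelligence", "deep learning", "fundus imaging"],
    ["telemedicine", "covid-19", "screening"],
]

co_occurrence = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        co_occurrence[(a, b)] += 1

for pair, count in co_occurrence.most_common(3):
    print(pair, count)
```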
Collapse
Affiliation(s)
- Yuan Tan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Weining Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongshan Medical School, Sun Yat-sen University, Guangzhou, China
| | - Yingshi Zou
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Bowen Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongshan Medical School, Sun Yat-sen University, Guangzhou, China
| | - Yinglin Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongshan Medical School, Sun Yat-sen University, Guangzhou, China
| | - Guangming Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- *Correspondence: Guangming Jin
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhen Liu
| |
Collapse
|
24
|
Pucchio A, Krance SH, Pur DR, Miranda RN, Felfeli T. Artificial Intelligence Analysis of Biofluid Markers in Age-Related Macular Degeneration: A Systematic Review. Clin Ophthalmol 2022; 16:2463-2476. [PMID: 35968055 PMCID: PMC9369085 DOI: 10.2147/opth.s377262] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 07/26/2022] [Indexed: 11/23/2022] Open
Abstract
This systematic review explores the use of artificial intelligence (AI) in the analysis of biofluid markers in age-related macular degeneration (AMD). We detail the accuracy and validity of AI in diagnostic and prognostic models and biofluid markers that provide insight into AMD pathogenesis and progression. This review was conducted in accordance with the Preferred Reporting Items for a Systematic Review and Meta-analysis guidelines. A comprehensive search was conducted across 5 electronic databases including Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, EMBASE, Medline, and Web of Science from inception to July 14, 2021. Studies pertaining to biofluid marker analysis using AI or bioinformatics in AMD were included. Identified studies were assessed for risk of bias and critically appraised using the Joanna Briggs Institute Critical Appraisal tools. A total of 10,264 articles were retrieved from all databases and 37 studies met the inclusion criteria, including 15 cross-sectional studies, 15 prospective cohort studies, five retrospective cohort studies, one randomized controlled trial, and one case–control study. The majority of studies had a general focus on AMD (58%), while neovascular AMD (nAMD) was the focus in 11 studies (30%), and geographic atrophy (GA) was highlighted by three studies. Fifteen studies examined disease characteristics, 15 studied risk factors, and seven guided treatment decisions. Altered lipid metabolism (HDL-cholesterol, total serum triglycerides), inflammation (c-reactive protein), oxidative stress, and protein digestion were implicated in AMD development and progression. AI tools were able to both accurately differentiate controls and AMD patients with accuracies as high as 87% and predict responsiveness to anti-VEGF therapy in nAMD patients. Use of AI models such as discriminant analysis could inform prognostic and diagnostic decision-making in a clinical setting. The identified pathways provide opportunity for future studies of AMD development and could be valuable in the advancement of novel treatments.
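As a hedged illustration of the discriminant-analysis approach the review highlights, the sketch below fits a linear discriminant model to synthetic biofluid-marker data; the marker names, distributions, and resulting accuracy are placeholders, not results from the included studies.

```python
# Sketch of discriminant analysis separating AMD patients from controls using
# biofluid markers; the feature matrix is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns could represent e.g. HDL cholesterol, triglycerides, CRP (placeholder units).
controls = rng.normal(loc=[1.4, 1.1, 2.0], scale=0.3, size=(40, 3))
amd = rng.normal(loc=[1.2, 1.4, 3.0], scale=0.3, size=(40, 3))
X = np.vstack([controls, amd])
y = np.array([0] * 40 + [1] * 40)  # 1 = AMD

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```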
Collapse
Affiliation(s)
- Aidan Pucchio
- School of Medicine, Queen’s University, Kingston, ON, Canada
| | - Saffire H Krance
- Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
| | - Daiana R Pur
- Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
| | - Rafael N Miranda
- Toronto Health Economics and Technology Assessment Collaborative, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
| | - Tina Felfeli
- Toronto Health Economics and Technology Assessment Collaborative, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Correspondence: Tina Felfeli, Department of Ophthalmology and Vision Sciences, University of Toronto, 340 College Street, Suite 400, Toronto, ON, M5T 3A9, Canada, Fax +416-978-4590, Email
| |
Collapse
|
25
|
Laurik-Feuerstein KL, Sapahia R, Cabrera DeBuc D, Somfai GM. The assessment of fundus image quality labeling reliability among graders with different backgrounds. PLoS One 2022; 17:e0271156. [PMID: 35881576 PMCID: PMC9321443 DOI: 10.1371/journal.pone.0271156] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Accepted: 06/27/2022] [Indexed: 11/25/2022] Open
Abstract
PURPOSE For the training of machine learning (ML) algorithms, correctly labeled ground truth data are indispensable. In this pilot study, we assessed the performance of graders with different backgrounds in the labeling of retinal fundus image quality. METHODS Color fundus photographs were labeled with a Python-based tool using four image categories: excellent (E), good (G), adequate (A) and insufficient for grading (I). We enrolled 8 subjects (4 with and 4 without medical background, groups M and NM, respectively) who were given a tutorial on image quality requirements. We randomly selected 200 images from a pool of 18,145 expert-labeled images (50/E, 50/G, 50/A, 50/I). The grading was timed and the agreement was assessed. An additional grading round was performed with 14 labels for a more objective analysis. RESULTS The median time (interquartile range) for the labeling task with 4 categories was 987.8 sec (418.6) for all graders, and 872.9 sec (621.0) vs. 1019.8 sec (479.5) in the M vs. NM groups, respectively. Cohen's weighted kappa showed moderate agreement (0.564) when using four categories, which increased to substantial (0.637) when only three categories were used by merging the E and G groups. When 14 labels were used, the weighted kappa values were 0.594 and 0.667 when assigning four or three categories, respectively. CONCLUSION Image grading with a Python-based tool seems to be a simple yet possibly efficient solution for labeling fundus images according to image quality that does not necessarily require a medical background. Such grading can be subject to variability but could still effectively serve the robust identification of images with insufficient quality. This emphasizes the opportunity for the democratization of ML applications among persons with both medical and non-medical backgrounds. However, simplicity of the grading system is key to successful categorization.
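The agreement statistic used in this study can be computed as in the sketch below; the example labels are invented, and the quadratic weighting is an assumption, since the abstract does not state which weighting scheme was applied.

```python
# Sketch of Cohen's weighted kappa between an expert label and a grader's label over
# the four quality categories. Labels are invented.
from sklearn.metrics import cohen_kappa_score

categories = {"E": 3, "G": 2, "A": 1, "I": 0}  # excellent, good, adequate, insufficient
expert = ["E", "G", "A", "I", "G", "A", "E", "I"]
grader = ["G", "G", "A", "I", "A", "A", "E", "A"]

kappa = cohen_kappa_score(
    [categories[x] for x in expert],
    [categories[x] for x in grader],
    weights="quadratic",   # weighted kappa penalises distant disagreements more strongly
)
print(f"weighted kappa = {kappa:.3f}")
```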
Collapse
Affiliation(s)
| | - Rishav Sapahia
- Miller School of Medicine, Bascom Palmer Eye Institute, University of Miami, Miami, Florida, United States of America
| | - Delia Cabrera DeBuc
- Miller School of Medicine, Bascom Palmer Eye Institute, University of Miami, Miami, Florida, United States of America
| | - Gábor Márk Somfai
- Department of Ophthalmology, Stadtspital Zürich, Zurich, Switzerland
- Spross Research Institute, Zurich, Switzerland
- Department of Ophthalmology, Semmelweis University, Budapest, Hungary
| |
Collapse
|
26
|
Nderitu P, Nunez do Rio JM, Webster ML, Mann SS, Hopkins D, Cardoso MJ, Modat M, Bergeles C, Jackson TL. Automated image curation in diabetic retinopathy screening using deep learning. Sci Rep 2022; 12:11196. [PMID: 35778615 PMCID: PMC9249740 DOI: 10.1038/s41598-022-15491-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 06/24/2022] [Indexed: 11/20/2022] Open
Abstract
Diabetic retinopathy (DR) screening images are heterogeneous and contain undesirable non-retinal, incorrect-field and ungradable samples which require curation, a laborious task to perform manually. We developed and validated single- and multi-output laterality, retinal presence, retinal field and gradability classification deep learning (DL) models for automated curation. The internal dataset comprised 7743 images from DR screening (UK), with 1479 external test images (Portugal and Paraguay). Internal vs external multi-output laterality AUROC values were right (0.994 vs 0.905), left (0.994 vs 0.911) and unidentifiable (0.996 vs 0.680). Retinal presence AUROC values were 1.000 vs 1.000. Retinal field AUROC values were macula (0.994 vs 0.955), nasal (0.995 vs 0.962) and other retinal field (0.997 vs 0.944). Gradability AUROC values were 0.985 vs 0.918. DL effectively detects laterality, retinal presence, retinal field and gradability of DR screening images, with generalisation between centres and populations. DL models could be used for automated image curation within DR screening.
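Per-task discrimination of a multi-output curation model is typically summarised with one AUROC per output, as sketched below on synthetic labels and probabilities; the task names mirror the abstract, but the data are placeholders.

```python
# Sketch of per-task AUROC evaluation for a multi-output curation model (laterality,
# retinal presence, field, gradability). Labels and predictions are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

tasks = ["right_eye", "retina_present", "macula_centred", "gradable"]
y_true = np.random.default_rng(1).integers(0, 2, size=(100, len(tasks)))
y_prob = np.clip(y_true + np.random.default_rng(2).normal(0, 0.4, y_true.shape), 0, 1)

for i, task in enumerate(tasks):
    print(task, f"AUROC = {roc_auc_score(y_true[:, i], y_prob[:, i]):.3f}")
```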
Collapse
Affiliation(s)
- Paul Nderitu
- Section of Ophthalmology, King's College London, London, UK.
- King's Ophthalmology Research Unit, King's College Hospital, London, UK.
| | | | - Ms Laura Webster
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
| | - Samantha S Mann
- South East London Diabetic Eye Screening Programme, Guy's and St Thomas' Foundation Trust, London, UK
- Department of Ophthalmology, Guy's and St Thomas' Foundation Trust, London, UK
| | - David Hopkins
- Department of Diabetes, School of Life Course Sciences, King's College London, London, UK
- Institute of Diabetes, Endocrinology and Obesity, King's Health Partners, London, UK
| | - M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Timothy L Jackson
- Section of Ophthalmology, King's College London, London, UK
- King's Ophthalmology Research Unit, King's College Hospital, London, UK
| |
Collapse
|
27
|
Chaurasia AK, Greatbatch CJ, Hewitt AW. Diagnostic Accuracy of Artificial Intelligence in Glaucoma Screening and Clinical Practice. J Glaucoma 2022; 31:285-299. [PMID: 35302538 DOI: 10.1097/ijg.0000000000002015] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 02/26/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Artificial intelligence (AI) has been shown as a diagnostic tool for glaucoma detection through imaging modalities. However, these tools are yet to be deployed into clinical practice. This meta-analysis determined overall AI performance for glaucoma diagnosis and identified potential factors affecting their implementation. METHODS We searched databases (Embase, Medline, Web of Science, and Scopus) for studies that developed or investigated the use of AI for glaucoma detection using fundus and optical coherence tomography (OCT) images. A bivariate random-effects model was used to determine the summary estimates for diagnostic outcomes. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy (PRISMA-DTA) extension was followed, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used for bias and applicability assessment. RESULTS Seventy-nine articles met inclusion criteria, with a subset of 66 containing adequate data for quantitative analysis. The pooled area under receiver operating characteristic curve across all studies for glaucoma detection was 96.3%, with a sensitivity of 92.0% (95% confidence interval: 89.0-94.0) and specificity of 94.0% (95% confidence interval: 92.0-95.0). The pooled area under receiver operating characteristic curve on fundus and OCT images was 96.2% and 96.0%, respectively. Mixed data set and external data validation had unsatisfactory diagnostic outcomes. CONCLUSION Although AI has the potential to revolutionize glaucoma care, this meta-analysis highlights that before such algorithms can be implemented into clinical care, a number of issues need to be addressed. With substantial heterogeneity across studies, many factors were found to affect the diagnostic performance. We recommend implementing a standard diagnostic protocol for grading, implementing external data validation, and analysis across different ethnicity groups.
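The pooled estimates above come from a bivariate random-effects model; as a simplified, univariate stand-in, the sketch below pools per-study sensitivities on the logit scale with a DerSimonian-Laird estimator, using invented study counts.

```python
# Simplified illustration of random-effects pooling of per-study sensitivities on the
# logit scale (DerSimonian-Laird). This is a univariate stand-in for the bivariate model
# used in the meta-analysis, and the study data are invented.
import numpy as np

tp = np.array([90, 45, 160])   # hypothetical true positives per study
fn = np.array([10, 8, 20])     # hypothetical false negatives per study

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                      # approximate variance of the logit sensitivity

w = 1 / var                                # fixed-effect weights
q = np.sum(w * (logit - np.sum(w * logit) / np.sum(w)) ** 2)
tau2 = max(0.0, (q - (len(tp) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (var + tau2)                  # random-effects weights
pooled_logit = np.sum(w_star * logit) / np.sum(w_star)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.3f}")
```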
Collapse
Affiliation(s)
- Abadh K Chaurasia
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
| | - Connor J Greatbatch
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
| | - Alex W Hewitt
- Menzies Institute for Medical Research, School of Medicine, University of Tasmania, Tasmania
- Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
| |
Collapse
|
28
|
Yuksel Elgin C, Chen D, Al‐Aswad LA. Ophthalmic imaging for the diagnosis and monitoring of glaucoma: A review. Clin Exp Ophthalmol 2022; 50:183-197. [DOI: 10.1111/ceo.14044] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 12/16/2021] [Accepted: 01/03/2022] [Indexed: 12/21/2022]
Affiliation(s)
- Cansu Yuksel Elgin
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
| | - Dinah Chen
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
| | - Lama A. Al‐Aswad
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
- Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, New York, USA
| |
Collapse
|
29
|
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:diagnostics12010134. [PMID: 35054301 PMCID: PMC8774893 DOI: 10.3390/diagnostics12010134] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 02/04/2023] Open
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chances of misdiagnosis while saving on labor and cost for physicians. With the feasibility and development of deep learning methods, machines are now able to interpret complex features in medical data, which leads to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and build frameworks based on analysis for the identification of retinopathy and the assessment of its severity. This paper reviews recent state-of-the-art works utilizing the color fundus image taken from one of the imaging modalities used in ophthalmology. Specifically, the deep learning methods of automated screening and diagnosis for diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, the machine learning techniques applied to the retinal vasculature extraction from the fundus image are covered. The challenges in developing these systems are also discussed.
Collapse
|
30
|
Cívico Vallejos Y, Hernández Dacruz B, Cívico Vallejos S. Selección de embriones en los tratamientos de fecundación in vitro [Embryo selection in in vitro fertilization treatments]. Clinica e Investigacion en Ginecologia y Obstetricia 2022. [DOI: 10.1016/j.gine.2021.100709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
31
|
Jahangir S, Khan HA. Artificial intelligence in ophthalmology and visual sciences: Current implications and future directions. Artif Intell Med Imaging 2021; 2:95-103. [DOI: 10.35711/aimi.v2.i5.95] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/30/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Since its inception in 1959, artificial intelligence (AI) has evolved at an unprecedented rate and has revolutionized the world of medicine. Ophthalmology, being an image-driven field of medicine, is well-suited for the implementation of AI. Machine learning (ML) and deep learning (DL) models are being utilized for screening of vision threatening ocular conditions of the eye. These models have proven to be accurate and reliable for diagnosing anterior and posterior segment diseases, screening large populations, and even predicting the natural course of various ocular morbidities. With the increase in population and global burden of managing irreversible blindness, AI offers a unique solution when implemented in clinical practice. In this review, we discuss what are AI, ML, and DL, their uses, future direction for AI, and its limitations in ophthalmology.
Collapse
Affiliation(s)
- Smaha Jahangir
- School of Optometry, The University of Faisalabad, Faisalabad, Punjab 38000, Pakistan
| | - Hashim Ali Khan
- Department of Ophthalmology, SEHHAT Foundation, Gilgit 15100, Gilgit-Baltistan, Pakistan
| |
Collapse
|
32
|
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack in decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a certain cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistant spaced range from 10-60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Trained models using original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) equal to 77% [95% CI 0.77-0.79] for VCDR estimation. Models that were trained on images with absence of the ONH are still able to obtain significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
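The cropping policy described above can be implemented as a circular mask centred on the ONH with a radius defined as a fraction of image size, as in the sketch below; the ONH centre and radius values are placeholders, and the paper's exact preprocessing may differ.

```python
# Sketch of an ONH-centred cropping policy: keep (or remove) a circular region around
# the optic nerve head whose radius is a fixed percentage of image size.
import numpy as np

def onh_crop(image, centre, radius_pct, keep_onh=True):
    """Mask a fundus image around the ONH; radius_pct is a fraction of image width."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx = centre
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_pct * w) ** 2
    mask = inside if keep_onh else ~inside      # the 'periphery crop' uses the inverse mask
    return image * mask[..., None] if image.ndim == 3 else image * mask

fundus = np.random.rand(512, 512, 3)            # placeholder image
onh_only = onh_crop(fundus, centre=(256, 300), radius_pct=0.30, keep_onh=True)
no_onh = onh_crop(fundus, centre=(256, 300), radius_pct=0.30, keep_onh=False)
```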
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
| | | | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
| |
Collapse
|
33
|
Yuen V, Ran A, Shi J, Sham K, Yang D, Chan VTT, Chan R, Yam JC, Tham CC, McKay GJ, Williams MA, Schmetterer L, Cheng CY, Mok V, Chen CL, Wong TY, Cheung CY. Deep-Learning-Based Pre-Diagnosis Assessment Module for Retinal Photographs: A Multicenter Study. Transl Vis Sci Technol 2021; 10:16. [PMID: 34524409 PMCID: PMC8444486 DOI: 10.1167/tvst.10.11.16] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 08/12/2021] [Indexed: 12/23/2022] Open
Abstract
Purpose Artificial intelligence (AI) deep learning (DL) has been shown to have significant potential for eye disease detection and screening on retinal photographs in different clinical settings, particular in primary care. However, an automated pre-diagnosis image assessment is essential to streamline the application of the developed AI-DL algorithms. In this study, we developed and validated a DL-based pre-diagnosis assessment module for retinal photographs, targeting image quality (gradable vs. ungradable), field of view (macula-centered vs. optic-disc-centered), and laterality of the eye (right vs. left). Methods A total of 21,348 retinal photographs from 1914 subjects from various clinical settings in Hong Kong, Singapore, and the United Kingdom were used for training, internal validation, and external testing for the DL module, developed by two DL-based algorithms (EfficientNet-B0 and MobileNet-V2). Results For image-quality assessment, the pre-diagnosis module achieved area under the receiver operating characteristic curve (AUROC) values of 0.975, 0.999, and 0.987 in the internal validation dataset and the two external testing datasets, respectively. For field-of-view assessment, the module had an AUROC value of 1.000 in all of the datasets. For laterality-of-the-eye assessment, the module had AUROC values of 1.000, 0.999, and 0.985 in the internal validation dataset and the two external testing datasets, respectively. Conclusions Our study showed that this three-in-one DL module for assessing image quality, field of view, and laterality of the eye of retinal photographs achieved excellent performance and generalizability across different centers and ethnicities. Translational Relevance The proposed DL-based pre-diagnosis module realized accurate and automated assessments of image quality, field of view, and laterality of the eye of retinal photographs, which could be further integrated into AI-based models to improve operational flow for enhancing disease screening and diagnosis.
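A three-in-one pre-diagnosis module of the kind described can be sketched as a shared backbone with one binary head per task, as below using torchvision's EfficientNet-B0; the head design, input size, and use of untrained weights are illustrative assumptions, not the authors' released model.

```python
# Sketch of a pre-diagnosis module: shared EfficientNet-B0 backbone with three binary
# heads (gradability, field of view, laterality). Sizes and heads are assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class PreDiagnosisModule(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = efficientnet_b0(weights=None)     # pretrained weights would be used in practice
        backbone.classifier = nn.Identity()          # keep the 1280-d pooled features
        self.backbone = backbone
        self.quality_head = nn.Linear(1280, 1)       # gradable vs ungradable
        self.field_head = nn.Linear(1280, 1)         # macula- vs optic-disc-centred
        self.laterality_head = nn.Linear(1280, 1)    # right vs left eye

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "gradable": torch.sigmoid(self.quality_head(feats)),
            "macula_centred": torch.sigmoid(self.field_head(feats)),
            "right_eye": torch.sigmoid(self.laterality_head(feats)),
        }

out = PreDiagnosisModule()(torch.randn(2, 3, 224, 224))  # each output has shape (2, 1)
```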
Collapse
Affiliation(s)
- Vincent Yuen
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Anran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Jian Shi
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Kaiser Sham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Victor T. T. Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Raymond Chan
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Jason C. Yam
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
| | - Clement C. Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Hong Kong Eye Hospital, Hong Kong
| | - Gareth J. McKay
- Center for Public Health, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
| | - Michael A. Williams
- Center for Medical Education, Royal Victoria Hospital, Queen's University Belfast, Belfast, UK
| | - Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
| | - Vincent Mok
- Gerald Choa Neuroscience Center, Therese Pei Fong Chow Research Center for Prevention of Dementia, Lui Che Woo Institute of Innovative Medicine, Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong
| | - Christopher L. Chen
- Memory, Aging and Cognition Center, Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tien Y. Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore
| | - Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
34
|
Saeed AQ, Sheikh Abdullah SNH, Che-Hamzah J, Abdul Ghani AT. Accuracy of Using Generative Adversarial Networks for Glaucoma Detection During the COVID-19 Pandemic: A Systematic Review and Bibliometric Analysis. J Med Internet Res 2021; 23:e27414. [PMID: 34236992 PMCID: PMC8493455 DOI: 10.2196/27414] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 05/11/2021] [Accepted: 07/05/2021] [Indexed: 01/19/2023] Open
Abstract
Background Glaucoma leads to irreversible blindness. Globally, it is the second most common retinal disease that leads to blindness, slightly less common than cataracts. Therefore, there is a great need to avoid the silent growth of this disease using recently developed generative adversarial networks (GANs). Objective This paper aims to introduce a GAN technology for the diagnosis of eye disorders, particularly glaucoma. This paper illustrates deep adversarial learning as a potential diagnostic tool and the challenges involved in its implementation. This study describes and analyzes many of the pitfalls and problems that researchers will need to overcome to implement this kind of technology. Methods To organize this review comprehensively, articles and reviews were collected using the following keywords: (“Glaucoma,” “optic disc,” “blood vessels”) and (“receptive field,” “loss function,” “GAN,” “Generative Adversarial Network,” “Deep learning,” “CNN,” “convolutional neural network” OR encoder). The records were identified from 5 highly reputed databases: IEEE Xplore, Web of Science, Scopus, ScienceDirect, and PubMed. These libraries broadly cover the technical and medical literature. Publications within the last 5 years, specifically 2015-2020, were included because the target GAN technique was invented only in 2014 and the publishing date of the collected papers was not earlier than 2016. Duplicate records were removed, and irrelevant titles and abstracts were excluded. In addition, we excluded papers that used optical coherence tomography and visual field images, except for those with 2D images. A large-scale systematic analysis was performed, and then a summarized taxonomy was generated. Furthermore, the results of the collected articles were summarized and a visual representation of the results was presented on a T-shaped matrix diagram. This study was conducted between March 2020 and November 2020. Results We found 59 articles after conducting a comprehensive survey of the literature. Among the 59 articles, 30 present actual attempts to synthesize images and provide accurate segmentation/classification using single/multiple landmarks or share certain experiences. The other 29 articles discuss the recent advances in GANs, do practical experiments, and contain analytical studies of retinal disease. Conclusions Recent deep learning techniques, namely GANs, have shown encouraging performance in retinal disease detection. Although this methodology involves an extensive computing budget and optimization process, it saturates the greedy nature of deep learning techniques by synthesizing images and solves major medical issues. This paper contributes to this research field by offering a thorough analysis of existing works, highlighting current limitations, and suggesting alternatives to support other researchers and participants in further improving and strengthening future work. Finally, new directions for this research have been identified.
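For orientation, the sketch below defines a compact generator/discriminator pair of the kind the review surveys for synthesising retinal images; the layer sizes and the 16x16 output are deliberately tiny and purely illustrative, far smaller than any practical model.

```python
# Compact GAN modules in PyTorch, illustrating the generator/discriminator structure
# used for retinal image synthesis. All sizes are toy values for illustration only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 8x8
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                # 16x16 RGB
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 8x8
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 4x4
            nn.Conv2d(128, 1, 4, 1, 0),                            # real/fake logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

fake = Generator()(torch.randn(2, 64, 1, 1))   # -> (2, 3, 16, 16)
score = Discriminator()(fake)                  # -> (2,)
```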
Collapse
Affiliation(s)
- Ali Q Saeed
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
- Computer Center, Northern Technical University, Ninevah, Iraq
| | - Siti Norul Huda Sheikh Abdullah
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
| | - Jemaima Che-Hamzah
- Department of Ophthalmology, Faculty of Medicine, Universiti Kebangsaan Malaysia (UKM), Cheras, Kuala Lumpur, Malaysia
| | - Ahmad Tarmizi Abdul Ghani
- Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
| |
Collapse
|
35
|
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
ABSTRACT Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity, as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Collapse
Affiliation(s)
| | - Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
| | - Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark
| |
Collapse
|
36
|
Dong L, Yang Q, Zhang RH, Wei WB. Artificial intelligence for the detection of age-related macular degeneration in color fundus photographs: A systematic review and meta-analysis. EClinicalMedicine 2021; 35:100875. [PMID: 34027334 PMCID: PMC8129891 DOI: 10.1016/j.eclinm.2021.100875] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 04/14/2021] [Accepted: 04/15/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Age-related macular degeneration (AMD) is one of the leading causes of vision loss in the elderly population. The application of artificial intelligence (AI) provides convenience for the diagnosis of AMD. This systematic review and meta-analysis aimed to quantify the performance of AI in detecting AMD in fundus photographs. METHODS We searched PubMed, Embase, Web of Science and the Cochrane Library before December 31st, 2020 for studies reporting the application of AI in detecting AMD in color fundus photographs. Then, we pooled the data for analysis. PROSPERO registration number: CRD42020197532. FINDINGS 19 studies were finally selected for systematic review and 13 of them were included in the quantitative synthesis. All studies adopted human graders as reference standard. The pooled area under the receiver operating characteristic curve (AUROC) was 0.983 (95% confidence interval (CI):0.979-0.987). The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were 0.88 (95% CI:0.88-0.88), 0.90 (95% CI:0.90-0.91), and 275.27 (95% CI:158.43-478.27), respectively. Threshold analysis was performed and a potential threshold effect was detected among the studies (Spearman correlation coefficient: -0.600, P = 0.030), which was the main cause for the heterogeneity. For studies applying convolutional neural networks in the Age-Related Eye Disease Study database, the pooled AUROC, sensitivity, specificity, and DOR were 0.983 (95% CI:0.978-0.988), 0.88 (95% CI:0.88-0.88), 0.91 (95% CI:0.91-0.91), and 273.14 (95% CI:130.79-570.43), respectively. INTERPRETATION Our data indicated that AI was able to detect AMD in color fundus photographs. The application of AI-based automatic tools is beneficial for the diagnosis of AMD. FUNDING Capital Health Research and Development of Special (2020-1-2052).
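The diagnostic odds ratio pooled in this meta-analysis follows from a single 2x2 table, as in the worked example below; the counts are hypothetical and chosen only to land near the pooled sensitivity and specificity reported above.

```python
# Worked example of the diagnostic odds ratio (DOR) from one hypothetical 2x2 table.
tp, fp, fn, tn = 88, 9, 12, 91          # placeholder counts

sensitivity = tp / (tp + fn)            # 0.88
specificity = tn / (tn + fp)            # 0.91
dor = (tp * tn) / (fp * fn)             # equivalently (sens/(1-sens)) / ((1-spec)/spec)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} DOR={dor:.1f}")
```

Note that a pooled DOR from a meta-analysis is estimated across studies and, under heterogeneity, need not equal the DOR implied by the pooled sensitivity and specificity, which is why this single-table value is much smaller than the pooled DOR reported in the abstract.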
Collapse
|
37
|
Orellanos-Rios J, Yokoyama S, Bhuiyan A, Gao L, Otero-Marquez O, Smith RT. Translational Retinal Imaging. Asia Pac J Ophthalmol (Phila) 2020; 9:269-277. [PMID: 32487917 PMCID: PMC7299229 DOI: 10.1097/apo.0000000000000292] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 04/03/2020] [Indexed: 11/25/2022] Open
Abstract
The diagnosis and treatment of medical retinal disease is now inseparable from retinal imaging in all its multimodal incarnations. The purpose of this article is to present a selection of very different retinal imaging techniques that are truly translational, in the sense that they are not only new, but can guide us to new understandings of disease processes or interventions that are not accessible by present methods. Quantitative autofluorescence imaging, now available for clinical investigation, has already fundamentally changed our understanding of the role of lipofuscin in age-related macular degeneration. Hyperspectral autofluorescence imaging is bench science poised not only to unravel the molecular basis of retinal pigment epithelium fluorescence, but also to be translated into a clinical camera for earliest detection of age-related macular degeneration. The ophthalmic endoscope for vitreous surgery is a radically new retinal imaging system that enables surgical approaches heretofore impossible while it captures subretinal images of living tissue. Remote retinal imaging coupled with deep learning artificial intelligence will transform the very fabric of future medical care.
Collapse
Affiliation(s)
| | - Sho Yokoyama
- Department of Ophthalmology, Japan Community Healthcare Organization, Chukyo Hospital, Nagoya, Aichi, Japan
| | | | - Liang Gao
- Department of Biomedical Engineering, UCLA, LA, Los Angeles, CA, USA
| | - Oscar Otero-Marquez
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - R. Theodore Smith
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| |
Collapse
|
38
|
Cai L, Hinkle JW, Arias D, Gorniak RJ, Lakhani PC, Flanders AE, Kuriyan AE. Applications of Artificial Intelligence for the Diagnosis, Prognosis, and Treatment of Age-related Macular Degeneration. Int Ophthalmol Clin 2020; 60:147-168. [PMID: 33093323 DOI: 10.1097/iio.0000000000000334] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
|