1
Taki Y, Ueno Y, Oda M, Kitaguchi Y, Ibrahim OMA, Aketa N, Yamaguchi T. Analysis of the performance of the CorneAI for iOS in the classification of corneal diseases and cataracts based on journal photographs. Sci Rep 2024; 14:15517. [PMID: 38969757] [PMCID: PMC11226423] [DOI: 10.1038/s41598-024-66296-3]
Abstract
CorneAI for iOS is an artificial intelligence (AI) application that classifies the condition of the cornea and cataract into nine categories: normal, infectious keratitis, non-infection keratitis, scar, tumor, deposit, acute primary angle closure, lens opacity, and bullous keratopathy. We evaluated its performance in classifying multiple conditions of the cornea and cataract across various ethnicities in images published in the Cornea journal. The positive predictive value (PPV) of the top classification with the highest predictive score was 0.75, and the PPV for the top three classifications exceeded 0.80. For individual diseases, the highest PPVs were 0.91, 0.73, 0.42, 0.72, 0.77, and 0.55 for infectious keratitis, normal, non-infection keratitis, scar, tumor, and deposit, respectively. CorneAI for iOS achieved an area under the receiver operating characteristic curve of 0.78 (95% confidence interval [CI] 0.5-1.0) for normal, 0.76 (95% CI 0.67-0.85) for infectious keratitis, 0.81 (95% CI 0.64-0.97) for non-infection keratitis, 0.55 (95% CI 0.41-0.69) for scar, 0.62 (95% CI 0.27-0.97) for tumor, and 0.71 (95% CI 0.53-0.89) for deposit. CorneAI performed well in classifying various conditions of the cornea and cataract when used to diagnose journal images, including those with variable imaging conditions, ethnicities, and rare cases.
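The top-classification and top-three PPVs reported here are, in effect, top-k hit rates over each image's ranked category predictions. A minimal sketch of that computation, using hypothetical rankings and labels rather than the study's data:

```python
def top_k_ppv(rankings, labels, k):
    """Fraction of cases whose true label appears among the top-k predicted categories."""
    hits = sum(1 for ranked, true in zip(rankings, labels) if true in ranked[:k])
    return hits / len(labels)

# Hypothetical per-image category rankings (highest predictive score first).
rankings = [
    ["infectious keratitis", "scar", "normal"],
    ["normal", "deposit", "tumor"],
    ["scar", "infectious keratitis", "deposit"],
    ["tumor", "scar", "normal"],
]
labels = ["infectious keratitis", "deposit", "normal", "tumor"]

top1 = top_k_ppv(rankings, labels, 1)  # 2 of 4 top predictions correct -> 0.5
top3 = top_k_ppv(rankings, labels, 3)  # 3 of 4 true labels within top three -> 0.75
```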
Affiliation(s)
- Yosuke Taki
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13, Sugano, Ichikawa, Chiba, 272-8513, Japan
- Yuta Ueno
- Department of Ophthalmology, Faculty of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Masahiro Oda
- Scholarly Information Division, Information Technology Center, Nagoya University, Nagoya, Aichi, Japan
- Graduate School of Informatics, Nagoya University, Nagoya, Aichi, Japan
- Yoshiyuki Kitaguchi
- Department of Ophthalmology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Osama M A Ibrahim
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13, Sugano, Ichikawa, Chiba, 272-8513, Japan
- Naohiko Aketa
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13, Sugano, Ichikawa, Chiba, 272-8513, Japan
- Clinical and Translational Research Center, Keio University Hospital, Shinjuku, Tokyo, Japan
- Takefumi Yamaguchi
- Department of Ophthalmology, Tokyo Dental College Ichikawa General Hospital, 5-11-13, Sugano, Ichikawa, Chiba, 272-8513, Japan
2
Wu H, Jin K, Yip CC, Koh V, Ye J. A systematic review of economic evaluation of artificial intelligence-based screening for eye diseases: From possibility to reality. Surv Ophthalmol 2024; 69:499-507. [PMID: 38492584] [DOI: 10.1016/j.survophthal.2024.03.008]
Abstract
Artificial Intelligence (AI) has become a focus of research in the rapidly evolving field of ophthalmology. Nevertheless, there is a lack of systematic studies on the health economics of AI in this field. We examine studies from the PubMed, Google Scholar, and Web of Science databases that employed quantitative analysis, retrieved up to July 2023. Most of the studies indicate that AI leads to cost savings and improved efficiency in ophthalmology. On the other hand, some studies suggest that using AI in healthcare may raise costs for patients, especially when taking into account factors such as labor costs, infrastructure, and patient adherence. Future research should cover a wider range of ophthalmic diseases beyond common eye conditions. Moreover, conducting extensive health economic research, designed to collect data relevant to its own context, is imperative.
Affiliation(s)
- Hongkang Wu
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Chee Chew Yip
- Department of Ophthalmology & Visual Sciences, Khoo Teck Puat Hospital, Singapore, Singapore
- Victor Koh
- Department of Ophthalmology, National University Hospital, National University of Singapore, Singapore
- Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
3
Wang Y, Wei R, Yang D, Song K, Shen Y, Niu L, Li M, Zhou X. Development and validation of a deep learning model to predict axial length from ultra-wide field images. Eye (Lond) 2024; 38:1296-1300. [PMID: 38102471] [PMCID: PMC11076502] [DOI: 10.1038/s41433-023-02885-2]
Abstract
BACKGROUND To validate the feasibility of building a deep learning model to predict axial length (AL) for moderate to high myopic patients from ultra-wide field (UWF) images. METHODS This study included 6174 UWF images from 3134 myopic patients seen between 2014 and 2020 at the Eye and ENT Hospital of Fudan University. Of the 6174 images, 4939 were used for training, 617 for validation, and 618 for testing. The coefficient of determination (R2), mean absolute error (MAE), and mean squared error (MSE) were used to evaluate model performance. RESULTS The model predicted AL with high accuracy: R2, MSE, and MAE were 0.579, 1.419, and 0.9043, respectively. The prediction error was under 1 mm in 64.88% of test cases, within 5% of the true value in 76.90%, and within 10% in 97.57%. The prediction bias had a strong negative correlation with true AL values and differed significantly between males and females (P < 0.001). Generated heatmaps demonstrated that the model focused on posterior atrophic changes in pathological fundi and on the peri-optic zone in normal fundi. In sex-specific models, the R2, MSE, and MAE of the female AL model were 0.411, 1.357, and 0.911 on the female dataset and 0.343, 2.428, and 1.264 on the male dataset. The corresponding metrics of the male AL model were 0.216, 2.900, and 1.352 on the male dataset and 0.083, 2.112, and 1.154 on the female dataset. CONCLUSIONS It is feasible to use deep learning models to predict AL for moderate to high myopic patients from UWF images.
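The R2, MSE, and MAE used for evaluation above are standard regression metrics. A self-contained sketch of how they are computed, with hypothetical axial-length values rather than the study's data:

```python
def regression_metrics(y_true, y_pred):
    """Return (R2, MSE, MAE) for paired true/predicted values."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total sum of squares
    r2 = 1 - ss_res / ss_tot
    mse = ss_res / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, mse, mae

# Hypothetical axial lengths in mm (true vs. model-predicted).
y_true = [26.0, 27.5, 28.1, 29.0]
y_pred = [26.4, 27.0, 28.6, 28.4]
r2, mse, mae = regression_metrics(y_true, y_pred)  # r2 ~ 0.786, mse ~ 0.255, mae ~ 0.5
```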
Affiliation(s)
- Yunzhe Wang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Ruoyan Wei
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Shanghai Medical College and Zhongshan Hospital Immunotherapy Translational Research Center, Shanghai, China
- Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Kaimin Song
- Beijing Airdoc Technology Co., Ltd, Beijing, China
- Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
4
Yap BP, Kelvin LZ, Toh EQ, Low KY, Rani SK, Goh EJH, Hui VYC, Ng BK, Lim TH. Generalizability of Deep Neural Networks for Vertical Cup-to-Disc Ratio Estimation in Ultra-Widefield and Smartphone-Based Fundus Images. Transl Vis Sci Technol 2024; 13:6. [PMID: 38568608] [PMCID: PMC10996969] [DOI: 10.1167/tvst.13.4.6]
Abstract
Purpose To develop and validate a deep learning system (DLS) for estimation of vertical cup-to-disc ratio (vCDR) in ultra-widefield (UWF) and smartphone-based fundus images. Methods A DLS consisting of two sequential convolutional neural networks (CNNs) to delineate optic disc (OD) and optic cup (OC) boundaries was developed using 800 standard fundus images from the public REFUGE data set. The CNNs were tested on 400 test images from the REFUGE data set and on 296 UWF and 300 smartphone-based images from a teleophthalmology clinic. vCDRs derived from the delineated OD/OC boundaries were compared with optometrists' annotations using mean absolute error (MAE). Subgroup analysis was conducted to study the impact of peripapillary atrophy (PPA), and a correlation study was performed to investigate potential correlations between sectoral CDR (sCDR) and retinal nerve fiber layer (RNFL) thickness. Results The system achieved MAEs of 0.040 (95% CI, 0.037-0.043) in the REFUGE test images, 0.068 (95% CI, 0.061-0.075) in the UWF images, and 0.084 (95% CI, 0.075-0.092) in the smartphone-based images. Differences between PPA and non-PPA images were not statistically significant. A weak correlation (r = -0.4046, P < 0.05) between sCDR and RNFL thickness was found only in the superior sector. Conclusions We developed a deep learning system that estimates vCDR from standard, UWF, and smartphone-based images. We also described the anatomic peripapillary adversarial lesion and its potential impact on OD/OC delineation. Translational Relevance Artificial intelligence can estimate vCDR from different types of fundus images and may be used as a general and interpretable screening tool to improve community reach for the diagnosis and management of glaucoma.
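The vCDR itself is simply the ratio of the vertical extents of the delineated cup and disc. A toy sketch under that definition (the binary masks and their dimensions are hypothetical, not the study's segmentations):

```python
def vertical_extent(mask):
    """Height in pixels of the True region of a binary mask (list of rows)."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return rows[-1] - rows[0] + 1 if rows else 0

def vcdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from delineated cup/disc masks."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy 6x6 masks: disc spans rows 1-4 (height 4), cup spans rows 2-3 (height 2).
disc = [[False] * 6, [True] * 6, [True] * 6, [True] * 6, [True] * 6, [False] * 6]
cup = [[False] * 6, [False] * 6, [True] * 6, [True] * 6, [False] * 6, [False] * 6]
ratio = vcdr(cup, disc)  # 2 / 4 = 0.5
```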
Affiliation(s)
- Boon Peng Yap
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Li Zhenghao Kelvin
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- En Qi Toh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Kok Yao Low
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Sumaya Khan Rani
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Eunice Jin Hui Goh
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Vivien Yip Cherng Hui
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
- Beng Koon Ng
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Tock Han Lim
- Department of Ophthalmology, Tan Tock Seng Hospital, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- National Healthcare Group Eye Institute, Singapore, Singapore
5
Zhang X, Jiang J, Kong K, Li F, Chen S, Wang P, Song Y, Lin F, Lin TPH, Zangwill LM, Ohno-Matsui K, Jonas JB, Weinreb RN, Lam DSC. Optic neuropathy in high myopia: Glaucoma or high myopia or both? Prog Retin Eye Res 2024; 99:101246. [PMID: 38262557] [DOI: 10.1016/j.preteyeres.2024.101246]
Abstract
With the increasing prevalence of high myopia around the world, structural and functional damage to the optic nerve in high myopia has recently attracted much attention. Evidence has shown that high myopia is related to the development of glaucomatous or glaucoma-like optic neuropathy, and that the two share many common features. These similarities often pose a diagnostic challenge that affects the management of glaucoma suspects with high myopia. In this review, we summarize similarities and differences in optic neuropathy arising from non-pathologic high myopia and glaucoma by considering their respective structural and functional characteristics on fundus photography, optical coherence tomography scanning, and visual field tests. These features may also help to distinguish the underlying mechanisms of the optic neuropathies and to determine management strategies for patients with high myopia and glaucoma.
Affiliation(s)
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Jingwen Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Shida Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Peiyuan Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Yunhe Song
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Fengbin Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, 510060, China
- Timothy P H Lin
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Kyoko Ohno-Matsui
- Department of Ophthalmology and Visual Science, Tokyo Medical and Dental University, Tokyo, Japan
- Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Robert N Weinreb
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, USA
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
6
Wu T, Ju L, Fu X, Wang B, Ge Z, Liu Y. Deep Learning Detection of Early Retinal Peripheral Degeneration From Ultra-Widefield Fundus Photographs of Asymptomatic Young Adult (17-19 Years) Candidates to Airforce Cadets. Transl Vis Sci Technol 2024; 13:1. [PMID: 38300623] [PMCID: PMC10851781] [DOI: 10.1167/tvst.13.2.1]
Abstract
Purpose Artificial intelligence (AI)-assisted ultra-widefield (UWF) fundus photographic interpretation can improve the screening of fundus abnormalities. We therefore constructed an AI machine-learning approach and performed preliminary training and validation. Methods We proposed a two-stage deep learning-based framework to detect early retinal peripheral degeneration using UWF images from the Chinese Air Force cadets' medical selection between February 2016 and June 2022. We developed a detection model for localization of the optic disc and macula, which are used to find the peripheral areas. We then developed six classification models for the screening of various retinal cases, and compared our proposed framework with two baseline models reported in the literature. The performance of the screening models was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals. Results A total of 3911 UWF fundus images were used to develop the deep learning model. The external validation included 760 UWF fundus images. The comparison study revealed that our proposed framework achieved competitive performance relative to existing baselines while demonstrating significantly faster inference. The classification models achieved an average AUC of 0.879 on six different retinal cases in the external validation dataset. Conclusions Our two-stage deep learning-based framework improved the machine-learning efficiency of the AI model for high-resolution fundus images with many interference factors by maximizing the retention of valid information and compressing the image file size. Translational Relevance This machine learning model may become a new paradigm for developing AI-assisted diagnosis of UWF fundus photography.
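The abstract does not give the exact cropping rule, but the first-stage idea of locating the disc and macula and then treating everything sufficiently far from them as "peripheral" can be sketched as follows. The distance criterion, the `scale` factor, and the landmark coordinates are all hypothetical illustrations, not the authors' method:

```python
def peripheral_mask(shape, disc, macula, scale=3.0):
    """Flag pixels farther from the disc-macula midpoint than `scale` times
    the disc-macula distance as peripheral (a hypothetical criterion)."""
    h, w = shape
    cx, cy = (disc[0] + macula[0]) / 2, (disc[1] + macula[1]) / 2
    radius = scale * ((disc[0] - macula[0]) ** 2 + (disc[1] - macula[1]) ** 2) ** 0.5
    return [[((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 > radius for x in range(w)]
            for y in range(h)]

# Hypothetical (x, y) landmark positions in a 10x10 image.
mask = peripheral_mask((10, 10), disc=(4, 5), macula=(6, 5))
# Corner pixels lie outside the central zone; the midpoint itself does not.
```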
Affiliation(s)
- Tengyun Wu
- Air Force Medical Center of Chinese PLA, Beijing, China
- Lie Ju
- Beijing Airdoc Technology Co. Ltd., Beijing, China
- Faculty of Engineering, Monash University, Clayton, Australia
- Xuefei Fu
- Beijing Airdoc Technology Co. Ltd., Beijing, China
- Bin Wang
- Beijing Airdoc Technology Co. Ltd., Beijing, China
- Zongyuan Ge
- Beijing Airdoc Technology Co. Ltd., Beijing, China
- Faculty of Engineering, Monash University, Clayton, Australia
- Yong Liu
- Air Force Medical Center of Chinese PLA, Beijing, China
7
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280] [PMCID: PMC10212230] [DOI: 10.1007/s00417-023-06101-5]
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a burden worsened by home quarantine during the COVID-19 pandemic. The utilization of artificial intelligence (AI) in ophthalmology is thriving, yet its application to myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation and determine the upper limit of performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for analysis. In this review, we comprehensively review the current application status of AI in myopia with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
- Juzhao Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haidong Zou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
8
Kurysheva NI, Rodionova OY, Pomerantsev AL, Sharova GA. [Application of artificial intelligence in glaucoma. Part 1. Neural networks and deep learning in glaucoma screening and diagnosis]. Vestn Oftalmol 2024; 140:82-87. [PMID: 38962983] [DOI: 10.17116/oftalma202414003182]
Abstract
This article reviews the literature on the use of artificial intelligence (AI) for the screening, diagnosis, monitoring, and treatment of glaucoma. The first part of the review describes how AI methods improve the effectiveness of glaucoma screening and presents technologies that use deep learning, including neural networks, to analyze the big data obtained by ocular imaging (fundus imaging, optical coherence tomography of the anterior and posterior eye segments, digital gonioscopy, ultrasound biomicroscopy, etc.), including multimodal approaches. The results found in the reviewed literature are contradictory, indicating that improvement of the AI models requires further research and a standardized approach. The use of neural networks for timely detection of glaucoma based on multimodal imaging could reduce the risk of glaucoma-associated blindness.
Affiliation(s)
- N I Kurysheva
- Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- Ophthalmological Center of the Federal Medical-Biological Agency at the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- O Ye Rodionova
- N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- A L Pomerantsev
- N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- G A Sharova
- Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- OOO Glaznaya Klinika Doktora Belikovoy, Moscow, Russia
9
Lin YT, Zhou Q, Tan J, Tao Y. Multimodal and multi-omics-based deep learning model for screening of optic neuropathy. Heliyon 2023; 9:e22244. [PMID: 38046141] [PMCID: PMC10686864] [DOI: 10.1016/j.heliyon.2023.e22244]
Abstract
Purpose To examine the use of multimodal data and multi-omics strategies for optic nerve disease screening. Methods This was a single-center retrospective study. A deep learning model was created from fundus photography and infrared reflectance (IR) images of patients with diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis. Patients seen at the Ophthalmology Department of the First Affiliated Hospital of Nanchang University in Jiangxi Province from November 2019 to April 2023 were included in this study. The data were analyzed in single-modal and multimodal modes using the traditional omics, Resnet101, and fusion models, and the accuracy and area under the curve (AUC) of each model were compared. Results A total of 312 images (fundus photographs and infrared fundus images) were collected from 156 patients. With multimodal data, the accuracies of the traditional omics, Resnet101, and fusion models on the training set were 0.97, 0.98, and 0.99, respectively; on the test set they were 0.72, 0.87, and 0.88, respectively. We compared single-modal and multimodal states by applying the data to the different groups in the learning model. In the traditional omics model, the macro-average AUCs of the features extracted from fundus photography, IR images, and multimodal data were 0.94, 0.90, and 0.96, respectively. When the same data were processed in the Resnet101 model, the scores were all 0.97. With multimodal data, the macro-average AUCs of the traditional omics, Resnet101, and fusion models were 0.96, 0.97, and 0.99, respectively. Conclusion A deep learning model based on multimodal data and multi-omics strategies can improve the accuracy of screening and diagnosing diabetic optic neuropathy, glaucomatous optic neuropathy, and optic neuritis.
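The macro-average AUC reported here is the one-vs-rest AUC averaged over the disease classes. A minimal sketch of that computation with hypothetical class-probability outputs (not the study's):

```python
def auc_binary(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation; labels are 0/1."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(prob_matrix, labels, n_classes):
    """One-vs-rest AUC averaged over classes (macro average)."""
    aucs = []
    for c in range(n_classes):
        scores = [row[c] for row in prob_matrix]
        binary = [1 if l == c else 0 for l in labels]
        aucs.append(auc_binary(scores, binary))
    return sum(aucs) / n_classes

# Hypothetical class-probability rows for six images; true labels are 0, 1, or 2.
probs = [
    [0.8, 0.1, 0.1],
    [0.1, 0.6, 0.3],
    [0.1, 0.8, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.0, 0.1, 0.9],
]
labels = [0, 0, 1, 1, 2, 2]
macro = macro_auc(probs, labels, 3)  # (0.75 + 0.9375 + 1.0) / 3 ~ 0.896
```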
Affiliation(s)
- Ye-ting Lin
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
- Qiong Zhou
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
- Jian Tan
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
- Yulin Tao
- Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, China
10
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107] [PMCID: PMC10587822] [DOI: 10.1001/jamaophthalmol.2023.4650]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Pengzhi Zhu
- Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
- Yuzhe Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Hongxin Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Changming Hu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
11
Pham VN, Le DT, Bum J, Kim SH, Song SJ, Choo H. Discriminative-Region Multi-Label Classification of Ultra-Widefield Fundus Images. Bioengineering (Basel) 2023; 10:1048. [PMID: 37760150 PMCID: PMC10525847 DOI: 10.3390/bioengineering10091048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 09/01/2023] [Accepted: 09/04/2023] [Indexed: 09/29/2023] Open
Abstract
Ultra-widefield fundus image (UFI) has become a crucial tool for ophthalmologists in diagnosing ocular diseases because of its ability to capture a wide field of the retina. Nevertheless, detecting and classifying multiple diseases within this imaging modality continues to pose a significant challenge for ophthalmologists. An automated disease classification system for UFI can support ophthalmologists in making faster and more precise diagnoses. However, existing works for UFI classification often focus on a single disease or assume each image only contains one disease when tackling multi-disease issues. Furthermore, the distinctive characteristics of each disease are typically not utilized to improve the performance of the classification systems. To address these limitations, we propose a novel approach that leverages disease-specific regions of interest for the multi-label classification of UFI. Our method uses three regions, including the optic disc area, the macula area, and the entire UFI, which serve as the most informative regions for diagnosing one or multiple ocular diseases. Experimental results on a dataset comprising 5930 UFIs with six common ocular diseases showcase that our proposed approach attains exceptional performance, with the area under the receiver operating characteristic curve scores for each class spanning from 95.07% to 99.14%. These results not only surpass existing state-of-the-art methods but also exhibit significant enhancements, with improvements of up to 5.29%. These results demonstrate the potential of our method to provide ophthalmologists with valuable information for early and accurate diagnosis of ocular diseases, ultimately leading to improved patient outcomes.
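The region-based idea above can be pictured as score-level fusion: each informative region (optic disc, macula, whole image) is classified separately and the per-class evidence is combined. The max-fusion rule and every number below are illustrative assumptions, not the paper's actual aggregation scheme:

```python
# Hypothetical sketch of multi-region score fusion for multi-label
# classification of ultra-widefield fundus images. Region names and
# probabilities are invented for demonstration.

def fuse_region_scores(region_probs):
    """region_probs: dict mapping region name -> per-class probability list."""
    regions = list(region_probs.values())
    n_classes = len(regions[0])
    assert all(len(r) == n_classes for r in regions)
    # take, per class, the strongest evidence across regions
    return [max(r[c] for r in regions) for c in range(n_classes)]

probs = {
    "optic_disc":  [0.92, 0.10, 0.05],   # e.g. disc-centred disease signal
    "macula":      [0.20, 0.81, 0.07],   # e.g. macular lesion signal
    "whole_image": [0.55, 0.60, 0.12],
}
fused = fuse_region_scores(probs)
predicted = [c for c, p in enumerate(fused) if p >= 0.5]  # multi-label threshold
print(fused, predicted)  # → [0.92, 0.81, 0.12] [0, 1]
```

Because each image may carry several diseases, the output is thresholded per class rather than argmax-ed, which is what distinguishes multi-label from single-disease classification.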
Affiliation(s)
- Van-Nguyen Pham
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea;
- Duc-Tai Le
- College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Junghyun Bum
- Sungkyun AI Research Institute, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Seong Ho Kim
- Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Seoul 03181, Republic of Korea
- Su Jeong Song
- Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Seoul 03181, Republic of Korea
- Biomedical Institute for Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunseung Choo
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Superintelligence Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
12
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with, or even better than, that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls their true value into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that must be overcome before clinical implementation of AI systems, and discusses strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
- Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
13
Hubbard DC, Cox P, Redd TK. Assistive applications of artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:261-266. [PMID: 36728651 PMCID: PMC10065924 DOI: 10.1097/icu.0000000000000939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
PURPOSE OF REVIEW Assistive (nonautonomous) artificial intelligence (AI) models designed to support (rather than function independently of) clinicians have received increasing attention in medicine. This review aims to highlight several recent developments in these models over the past year and their ophthalmic implications. RECENT FINDINGS Artificial intelligence models with a diverse range of applications in ophthalmology have been reported in the literature over the past year. Many of these systems have reported high performance in detection, classification, prognostication, and/or monitoring of retinal, glaucomatous, anterior segment, and other ocular pathologies. SUMMARY Over the past year, developments in AI have been made that have implications affecting ophthalmic surgical training and refractive outcomes after cataract surgery, therapeutic monitoring of disease, disease classification, and prognostication. Many of these recently developed models have obtained encouraging results and have the potential to serve as powerful clinical decision-making tools pending further external validation and evaluation of their generalizability.
Affiliation(s)
- Donald C Hubbard
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Parker Cox
- Spencer Fox Eccles School of Medicine, University of Utah, Salt Lake City, Utah, USA
- Travis K Redd
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
14
Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep Learning for the Detection of Multiple Fundus Diseases Using Ultra-widefield Images. Ophthalmol Ther 2023; 12:895-907. [PMID: 36565376 PMCID: PMC10011259 DOI: 10.1007/s40123-022-00627-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Accepted: 11/27/2022] [Indexed: 12/25/2022] Open
Abstract
INTRODUCTION To design and evaluate a deep learning model based on ultra-widefield images (UWFIs) that can detect several common fundus diseases. METHODS Based on 4574 UWFIs, a deep learning model was trained and validated that can identify the normal fundus and eight common fundus diseases, namely referable diabetic retinopathy, retinal vein occlusion, pathologic myopia, retinal detachment, retinitis pigmentosa, age-related macular degeneration, vitreous opacity, and optic neuropathy. The model was tested on three test sets with data volumes of 465, 979, and 525 images. The performance of three deep learning networks, EfficientNet-B7, DenseNet, and ResNet-101, was evaluated on the internal test set. Additionally, we compared the performance of the deep learning model with that of doctors in a tertiary referral hospital. RESULTS Compared to the other two deep learning models, EfficientNet-B7 achieved the best performance. The areas under the receiver operating characteristic curves of the EfficientNet-B7 model on the internal test set, external test set A, and external test set B were 0.9708 (0.8772, 0.9849) to 1.0000 (1.0000, 1.0000), 0.9683 (0.8829, 0.9770) to 1.0000 (0.9975, 1.0000), and 0.8919 (0.7150, 0.9055) to 0.9977 (0.9165, 1.0000), respectively. On a data set of 100 images, the total accuracy of the deep learning model was 93.00%, while the average accuracies of three ophthalmologists who had been working for 2 years and three ophthalmologists who had worked in fundus imaging for more than 5 years were 88.00% and 94.00%, respectively. CONCLUSION High performance was achieved on all three test sets using our UWFI multidisease classification model, with a small sample size and fast model inference. The performance of the artificial intelligence model was comparable to that of physicians with 2-5 years of experience in fundus diseases at a tertiary referral hospital. The model is expected to be used as an effective aid for fundus disease screening.
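The reader-comparison figure above reduces to a simple per-image accuracy computation: each rater (model or ophthalmologist) assigns one class label per image, and accuracy is the fraction matching the reference standard. A minimal sketch, with invented labels and abbreviated disease codes standing in for the study's nine categories:

```python
# Illustrative accuracy comparison on a shared image set (all data invented).

def accuracy(pred, truth):
    """Fraction of images where the assigned label matches the reference."""
    assert len(pred) == len(truth)
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

truth       = ["RD", "DR", "normal", "RVO", "DR", "PM", "normal", "AMD", "RP", "DR"]
model_pred  = ["RD", "DR", "normal", "RVO", "DR", "PM", "normal", "AMD", "RP", "RVO"]
junior_pred = ["RD", "DR", "normal", "DR",  "DR", "PM", "AMD",    "AMD", "RP", "RVO"]

print(accuracy(model_pred, truth), accuracy(junior_pred, truth))  # → 0.9 0.7
```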
Affiliation(s)
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Lizhang Xu
- Wuhan Aiyanbang Technology Co., Ltd, Wuhan, 430073, China
- Chang Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Huijuan Luo
- The People's Hospital of Yidu, Yidu, 443300, China
- Yu Su
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Jian Zheng
- School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Zhiqing Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Zhen Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
15
Wen J, Liu D, Wu Q, Zhao L, Iao WC, Lin H. Retinal image‐based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. [DOI: 10.1002/viw.20220070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/22/2023] Open
Affiliation(s)
- Jingyi Wen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Dong Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Qianni Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
16
Chen D, Ran Ran A, Fang Tan T, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. [PMID: 36706335 DOI: 10.1097/apo.0000000000000596] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 12/06/2022] [Indexed: 01/28/2023] Open
Abstract
Diagnosis and detection of progression of glaucoma remains challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms are focused on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus in diagnostic criteria. Although research in the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Collapse
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York City, NY
- Genentech Inc, South San Francisco, CA
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
- Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
17
LAC-GAN: Lesion attention conditional GAN for Ultra-widefield image synthesis. Neural Netw 2023; 158:89-98. [PMID: 36446158 DOI: 10.1016/j.neunet.2022.11.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 08/30/2022] [Accepted: 11/03/2022] [Indexed: 11/13/2022]
Abstract
Automatic detection of retinal diseases based on deep learning technology and ultra-widefield (UWF) images has played an important role in clinical practice in recent years. However, due to small lesions and limited data samples, it is not easy to train a detection-accurate model with strong generalization ability. In this paper, we propose a lesion attention conditional generative adversarial network (LAC-GAN) to synthesize retinal images with realistic lesion details to improve the training of the disease detection model. Specifically, the generator takes the vessel mask and class label as the conditional inputs, and processes random Gaussian noise through a series of residual blocks to generate the synthetic images. To focus on pathological information, we propose a lesion feature attention mechanism based on the random forest (RF) method, which constructs a reverse activation network to activate the lesion features. For the discriminator, a weight-sharing multi-discriminator is designed to improve the performance of the model through affine transformations. Experimental results on multi-center UWF image datasets demonstrate that the proposed method can generate retinal images with reasonable details, which helps to enhance the performance of the disease detection model.
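As a toy, framework-free sketch of the conditioning idea described above — a vessel mask and a class label combined with Gaussian noise at the generator input, and a residual block computing y = x + f(x) — where the shapes and the "network" are stand-ins for illustration, not LAC-GAN's actual architecture:

```python
# Toy sketch of conditional-generator input construction and a residual block.
# Mask size, class count, and noise dimension are invented for demonstration.
import random

def one_hot(label, n_classes):
    return [1.0 if i == label else 0.0 for i in range(n_classes)]

def make_generator_input(vessel_mask, label, n_classes, noise_dim, seed=0):
    """Concatenate mask, one-hot class label, and Gaussian noise (flattened)."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return vessel_mask + one_hot(label, n_classes) + noise

def residual_block(x, weight=0.1):
    # stand-in for conv layers: f(x) is a scaled copy, output is x + f(x)
    return [xi + weight * xi for xi in x]

mask = [0.0, 1.0, 1.0, 0.0]          # a 4-pixel "vessel mask"
x = make_generator_input(mask, label=2, n_classes=3, noise_dim=2)
print(len(x))                         # 4 mask + 3 label + 2 noise → 9
y = residual_block(x)
print(len(y) == len(x))               # residual path preserves shape → True
```

The point of the residual form is that each block only has to learn a correction f(x), which stabilizes training of deep generators.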
18
Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. [PMID: 35857087 DOI: 10.1007/s00417-022-05741-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 05/16/2022] [Accepted: 06/22/2022] [Indexed: 11/04/2022] Open
Abstract
PURPOSE This article is a scoping review of published and peer-reviewed articles using deep-learning (DL) applied to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied in the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu
- Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada
- Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
19
Shin Y, Cho H, Shin YU, Seong M, Choi JW, Lee WJ. Comparison between Deep-Learning-Based Ultra-Wide-Field Fundus Imaging and True-Colour Confocal Scanning for Diagnosing Glaucoma. J Clin Med 2022; 11:jcm11113168. [PMID: 35683577 PMCID: PMC9181263 DOI: 10.3390/jcm11113168] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 02/05/2023] Open
Abstract
In this retrospective, comparative study, we evaluated and compared the performance of two confocal imaging modalities in detecting glaucoma based on a deep learning (DL) classifier: ultra-wide-field (UWF) fundus imaging and true-colour confocal scanning. A total of 777 eyes, including 273 normal control eyes and 504 glaucomatous eyes, were tested. A convolutional neural network was used for each true-colour confocal scan (Eidon AF™, CenterVue, Padova, Italy) and UWF fundus image (Optomap™, Optos PLC, Dunfermline, UK) to detect glaucoma. The diagnostic model was trained using 545 training and 232 test images. The presence of glaucoma was determined, and the accuracy and area under the receiver operating characteristic curve (AUC) metrics were assessed for diagnostic power comparison. DL-based UWF fundus imaging achieved an AUC of 0.904 (95% confidence interval (CI): 0.861−0.937) and accuracy of 83.62%. In contrast, DL-based true-colour confocal scanning achieved an AUC of 0.868 (95% CI: 0.824−0.912) and accuracy of 81.46%. Both DL-based confocal imaging modalities showed no significant differences in their ability to diagnose glaucoma (p = 0.135) and were comparable to the traditional optical coherence tomography parameter-based methods (all p > 0.005). Therefore, using a DL-based algorithm on true-colour confocal scanning and UWF fundus imaging, we confirmed that both confocal fundus imaging techniques had high value in diagnosing glaucoma.
Affiliation(s)
- Younji Shin
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Hyunsoo Cho
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Yong Un Shin
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Mincheol Seong
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Jun Won Choi
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Correspondence: Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
- Won June Lee
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Correspondence: Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
20
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113 DOI: 10.1016/j.ajo.2021.12.008] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 11/24/2021] [Accepted: 12/03/2021] [Indexed: 11/01/2022]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performances for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly using all data and external data for fundus and the external test result of OCT was less robust (AUC = 0.87). When comparing different classifier categories, although support vector machine showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results by neural network and others were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, 0.95-0.97, respectively). When analyzed based on dataset types, ML demonstrated consistent performances on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% 0.93-0.97]). CONCLUSIONS Performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
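The pooled sensitivities and specificities above come from a bivariate random-effects model that pools both quantities jointly across studies. As a deliberately simplified stand-in, the sketch below shows a univariate, fixed-effect inverse-variance average of logit-transformed sensitivities, with made-up per-study numbers; the real bivariate model additionally accounts for between-study heterogeneity and the sensitivity-specificity correlation:

```python
# Simplified meta-analytic pooling of proportions on the logit scale
# (fixed-effect, inverse-variance). Per-study values are invented.
import math

def pool_logit(props, ns):
    """Inverse-variance pooled proportion, averaged on the logit scale."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = math.log(p / (1.0 - p))
        var = 1.0 / (n * p * (1.0 - p))   # delta-method variance of the logit
        w = 1.0 / var
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform

sens = [0.94, 0.90, 0.88]   # per-study sensitivities (made up)
sizes = [500, 800, 300]     # per-study test-set sizes (made up)
print(round(pool_logit(sens, sizes), 3))  # → 0.906
```

Larger studies get proportionally more weight, which is why the pooled value sits closest to the estimate from the biggest study.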
21
|
Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit Med 2022; 5:23. [PMID: 35236921 PMCID: PMC8891262 DOI: 10.1038/s41746-022-00571-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/04/2022] [Indexed: 11/23/2022] Open
Abstract
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
22
|
Li Z, Jiang J, Qiang W, Guo L, Liu X, Weng H, Wu S, Zheng Q, Chen W. Comparison of deep learning systems and cornea specialists in detecting corneal diseases from low-quality images. iScience 2021; 24:103317. [PMID: 34778732 PMCID: PMC8577078 DOI: 10.1016/j.isci.2021.103317] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/11/2021] [Accepted: 10/15/2021] [Indexed: 01/01/2023] Open
Abstract
The performance of deep learning in disease detection from high-quality clinical images matches, and can even exceed, that of human doctors. In low-quality images, however, deep learning performs poorly, and whether human doctors also perform poorly on such images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit-lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS), which was trained on only high-quality images. The performance of the system trained on both high- and low-quality images was superior to that of the PEDLS but inferior to that of a senior cornea specialist. This study highlights that cornea specialists perform better on low-quality images than a system trained on high-quality images, and that adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap. Highlights:
- Deep learning performs poorly in low-quality images for detecting corneal diseases
- Cornea specialists perform better than the PEDLS in low-quality images
- The performance of the NDLS is better than that of the PEDLS in low-quality images
- Adding low-quality images to the training set can improve the system's performance
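The comparison above is essentially a stratified evaluation: the same test set is split by image quality and each model or reader is scored within each stratum. A minimal sketch, using hypothetical records rather than the study's data:

```python
from collections import defaultdict

# Hypothetical (image_quality, true_label, predicted_label) records;
# not data from the study.
records = [
    ("high", "keratitis", "keratitis"),
    ("high", "normal", "normal"),
    ("high", "scar", "scar"),
    ("low", "keratitis", "normal"),
    ("low", "keratitis", "keratitis"),
    ("low", "normal", "normal"),
    ("low", "scar", "keratitis"),
]

def accuracy_by_quality(records):
    """Per-stratum accuracy: fraction of correct predictions per quality level."""
    hit, total = defaultdict(int), defaultdict(int)
    for quality, truth, pred in records:
        total[quality] += 1
        hit[quality] += int(truth == pred)
    return {q: hit[q] / total[q] for q in total}

acc = accuracy_by_quality(records)
```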
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Liufei Guo
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
23
|
Wong SH, Tsai JC. Telehealth and Screening Strategies in the Diagnosis and Management of Glaucoma. J Clin Med 2021; 10:jcm10163452. [PMID: 34441748 PMCID: PMC8396962 DOI: 10.3390/jcm10163452] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 07/31/2021] [Accepted: 08/02/2021] [Indexed: 11/16/2022] Open
Abstract
Telehealth has become a viable option for glaucoma screening and monitoring due to advances in technology. The ability to measure intraocular pressure without an anesthetic and to take optic nerve photographs without pharmacologic pupillary dilation using portable equipment has allowed glaucoma screening programs to generate enough data for assessment. At home, patients can perform visual acuity testing, web-based visual field testing, and rebound tonometry, and can hold video visits with the physician to monitor for glaucomatous progression. Artificial intelligence will enhance the accuracy of data interpretation and inspire confidence in popularizing telehealth for glaucoma.
24
|
Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, Chen C, Zhao L, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye (Lond) 2021; 36:1681-1686. [PMID: 34345030 PMCID: PMC9307785 DOI: 10.1038/s41433-021-01715-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2021] [Revised: 07/14/2021] [Accepted: 07/22/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Retinal exudates and/or drusen (RED) can be signs of many fundus diseases that can lead to irreversible vision loss. Early detection and treatment of these diseases are critical for improving vision prognosis. However, manual RED screening on a large scale is time-consuming and labour-intensive. Here, we aim to develop and assess a deep learning system for automated detection of RED using ultra-widefield fundus (UWF) images. METHODS A total of 26,409 UWF images from 14,994 subjects were used to develop and evaluate the deep learning system. The Zhongshan Ophthalmic Center (ZOC) dataset was selected to compare the performance of the system to that of retina specialists in RED detection. The saliency map visualization technique was used to understand which areas in the UWF image had the most influence on our deep learning system when detecting RED. RESULTS The system for RED detection achieved areas under the receiver operating characteristic curve of 0.994 (95% confidence interval [CI]: 0.991-0.996), 0.972 (95% CI: 0.957-0.984), and 0.988 (95% CI: 0.983-0.992) in three independent datasets. The performance of the system in the ZOC dataset was comparable to that of an experienced retina specialist. Regions of RED were highlighted by saliency maps in UWF images. CONCLUSIONS Our deep learning system is reliable in the automated detection of RED in UWF images. As a screening tool, our system may promote the early diagnosis and management of RED-related fundus diseases.
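The AUC-with-95%-CI figures reported above can be computed generically; one common approach, sketched below with made-up labels and scores, is the Mann-Whitney AUC with a percentile-bootstrap confidence interval (the abstract does not state which CI method the authors used).

```python
import random

def auc(labels, scores):
    """Mann-Whitney AUC: probability a positive outscores a negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lb = [labels[i] for i in idx]
        if len(set(lb)) < 2:          # a resample must contain both classes
            continue
        stats.append(auc(lb, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return auc(labels, scores), (lo, hi)

# Made-up labels (1 = RED present) and model scores, for illustration only
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.2, 0.65, 0.85, 0.1]
point, (lo, hi) = bootstrap_auc_ci(labels, scores)
```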
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida, USA
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xulin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
- Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
25
|
Preventing corneal blindness caused by keratitis using artificial intelligence. Nat Commun 2021; 12:3738. [PMID: 34145294 PMCID: PMC8213803 DOI: 10.1038/s41467-021-24116-6] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Accepted: 06/01/2021] [Indexed: 12/14/2022] Open
Abstract
Keratitis is the main cause of corneal blindness worldwide. Most vision loss caused by keratitis can be avoided through early detection and treatment. The diagnosis of keratitis often requires skilled ophthalmologists, but the world is short of ophthalmologists, especially in resource-limited settings, making the early diagnosis of keratitis challenging. Here, we develop a deep learning system for the automated classification of keratitis, other cornea abnormalities, and normal cornea based on 6,567 slit-lamp images. Our system exhibits remarkable performance on cornea images captured by different types of digital slit-lamp cameras and by a smartphone in super-macro mode (all AUCs > 0.96). The system's sensitivity and specificity in keratitis detection are comparable to those of experienced cornea specialists. Our system has the potential to be applied to both digital slit-lamp cameras and smartphones to promote the early diagnosis and treatment of keratitis, preventing the corneal blindness it causes.
26
|
Xiao X, Xue L, Ye L, Li H, He Y. Health care cost and benefits of artificial intelligence-assisted population-based glaucoma screening for the elderly in remote areas of China: a cost-offset analysis. BMC Public Health 2021; 21:1065. [PMID: 34088286 PMCID: PMC8178835 DOI: 10.1186/s12889-021-11097-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/17/2021] [Indexed: 12/04/2022] Open
Abstract
Background Population-based screening is essential for glaucoma management. Although various studies have investigated the cost-effectiveness of glaucoma screening, policymakers faced with uncontrollably growing total health expenses are deeply concerned about the potential financial consequences of glaucoma screening. The present study aimed to explore the impact of glaucoma screening with artificial intelligence (AI) automated diagnosis from a budgetary standpoint in Changjiang county, China. Methods A Markov model based on the health care system's perspective was adapted from previously published studies to predict disease progression and healthcare costs. A cohort of 19,395 individuals aged 65 and above was simulated over a 15-year timeframe. For illustrative purposes, we considered only primary angle-closure glaucoma (PACG) in this study. Prevalence, disease progression risks between stages, and compliance rates were obtained from published studies. We performed a meta-analysis to estimate the diagnostic performance of the AI automated diagnosis system on fundus images. Screening costs were provided by the Changjiang screening programme, whereas treatment costs were derived from electronic medical records from two county hospitals. Main outcomes included the number of PACG patients and health care costs. Cost-offset analysis was employed to compare projected health outcomes and medical care costs under the screening with what they would have been without screening. One-way sensitivity analysis was conducted to quantify uncertainties around model results. Results Among people aged 65 and above in Changjiang county, the model predicted 1940 PACG patients under the AI-assisted screening scenario, compared with 2104 patients without screening over 15 years. Specifically, the screening would reduce patients with primary angle closure suspect by 7.7%, primary angle closure by 8.8%, PACG by 16.7%, and visual blindness by 33.3%. Due to early diagnosis and treatment under the screening, healthcare costs surged to $107,761.4 in the first year and then declined steadily over time, whereas without screening costs grew from $14,759.8 in the second year until peaking at $17,900.9 in the 9th year. However, cost-offset analysis revealed that the additional healthcare costs resulting from the screening could not be offset by decreased disease progression. The 5-, 10-, and 15-year accumulated incremental costs of screening versus no screening were estimated to be $396,362.8, $424,907.9, and $434,903.2, respectively. As a result, the incremental cost per case of PACG of any stage prevented was $1464.3. Conclusions This study represents the first attempt to address decision-makers' budgetary concerns when adopting glaucoma screening by developing a Markov prediction model to project health outcomes and costs. Population screening combined with AI automated diagnosis for PACG in China can reduce disease progression risks. However, the excess costs of screening could never be offset by the reduction in disease progression. Further studies examining the cost-effectiveness or cost-utility of AI-assisted glaucoma screening are needed. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-021-11097-w.
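The projection method described above can be sketched as a simple Markov cohort model: a cohort distribution over disease states is multiplied by an annual transition matrix, and per-state costs are accumulated each cycle. The states, transition probabilities, and annual costs below are illustrative placeholders, not the paper's calibrated inputs.

```python
# Minimal Markov cohort sketch for PACG progression (illustrative inputs only).
STATES = ["healthy", "suspect", "pac", "pacg", "blind"]

# Annual transition probabilities (each row sums to 1); placeholder values.
P = {
    "healthy": {"healthy": 0.990, "suspect": 0.010},
    "suspect": {"suspect": 0.95, "pac": 0.05},
    "pac":     {"pac": 0.90, "pacg": 0.10},
    "pacg":    {"pacg": 0.93, "blind": 0.07},
    "blind":   {"blind": 1.0},
}

# Placeholder annual cost per person in each state (USD)
ANNUAL_COST = {"healthy": 0, "suspect": 20, "pac": 120, "pacg": 400, "blind": 900}

def run_cohort(n_people, years):
    """Advance the cohort one year at a time, accumulating state-occupancy costs."""
    cohort = {s: 0.0 for s in STATES}
    cohort["healthy"] = float(n_people)
    total_cost = 0.0
    for _ in range(years):
        nxt = {s: 0.0 for s in STATES}
        for s, count in cohort.items():
            for t, p in P[s].items():
                nxt[t] += count * p
        cohort = nxt
        total_cost += sum(cohort[s] * ANNUAL_COST[s] for s in STATES)
    return cohort, total_cost

cohort, cost = run_cohort(19395, 15)
```

A cost-offset analysis then runs two such models (screened vs. unscreened transition probabilities) and compares their accumulated costs.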
Affiliation(s)
- Xuan Xiao
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060, China
- Long Xue
- School of Public Health, Fudan University, Shanghai, 200433, China
- Lin Ye
- Department of Eye Plastic and Lacrimal Disease, Shenzhen Eye Hospital of Jinan University, Shenzhen, 518040, China
- Hongzheng Li
- School of Public Health, Fudan University, Shanghai, 200433, China
- Yunzhen He
- School of Public Health, Fudan University, Shanghai, 200433, China
27
|
Zhai R, Wang Z, Sheng Q, Fan X, Kong X, Sun X. Polymorphisms of the cytomegalovirus glycoprotein B genotype in patients with Posner-Schlossman syndrome. Br J Ophthalmol 2021; 106:1240-1244. [PMID: 33753409 PMCID: PMC9411906 DOI: 10.1136/bjophthalmol-2020-318284] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 02/16/2021] [Accepted: 03/07/2021] [Indexed: 02/05/2023]
Abstract
Aims The aim of this observational study was to report the distribution of glycoprotein B (gB) genotypes in the eyes of cytomegalovirus (CMV) positive patients with Posner-Schlossman syndrome (PSS), and to investigate their clinical characteristics and outcomes. Methods We collected aqueous humour samples from 165 patients clinically diagnosed with PSS between 2017 and 2019. PCR was performed to analyse the CMV DNA and identify the gB genotypes in the samples. Clinical characteristics and responses to antiviral treatment were compared among patients with different gB genotypes. Results CMV DNA was detected in 94 (56.97%) of the 165 aqueous humour specimens analysed. Owing to the quantity requirement for CMV gB genotype analysis, results could be obtained from only 14 specimens. CMV gB type 1 was detected in 11 samples (78.6%), whereas CMV gB type 3 was detected in three samples (21.4%). No other gB genotypes or mixed genotypes were detected. Overall, 9.1% (1/11) of the patients in the gB type 1 group and 66.7% (2/3) of the patients in the gB type 3 group had bilateral attacks (p=0.093). The concentration of anti-CMV immunoglobulin G (IgG) in the type 1 group was 0.94±0.79 s/co (ratio of aqueous humour CMV IgG/serum CMV IgG to aqueous humour albumin concentration/serum albumin concentration), whereas that in the type 3 group was 0.67±0.71 s/co. Conclusion Genotype 1 was the most prevalent genotype in the aqueous humour of CMV-infected patients with PSS. Bilateral attack was predominant among patients with gB genotype 3. CMV gB gene may be related to the pathogenicity of CMV virus strain in patients with PSS.
Affiliation(s)
- Ruyi Zhai
- Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China; Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China
- Zhujian Wang
- Department of Clinical Laboratory, Eye, Ear, Nose and Throat Hospital, Fudan University, Shanghai, China
- Qilian Sheng
- Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China; Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China
- Xintong Fan
- Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China; Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China
- Xiangmei Kong
- Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China; Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China
- Xinghuai Sun
- Department of Ophthalmology and Vision Science, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China; Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China; State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
28
|
Wang X, Ji Z, Ma X, Zhang Z, Yi Z, Zheng H, Fan W, Chen C. Automated Grading of Diabetic Retinopathy with Ultra-Widefield Fluorescein Angiography and Deep Learning. J Diabetes Res 2021; 2021:2611250. [PMID: 34541004 PMCID: PMC8445732 DOI: 10.1155/2021/2611250] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/05/2021] [Revised: 08/07/2021] [Accepted: 08/19/2021] [Indexed: 11/17/2022] Open
Abstract
PURPOSE The objective of this study was to establish diagnostic technology to automatically grade the severity of diabetic retinopathy (DR) according to the ischemic index and leakage index with ultra-widefield fluorescein angiography (UWFA) and the Early Treatment Diabetic Retinopathy Study (ETDRS) 7-standard field (7-SF). METHODS This is a cross-sectional study. UWFA samples from 280 diabetic patients and 119 normal patients were used to train and test an artificial intelligence model to differentiate proliferative DR (PDR) and non-proliferative DR (NPDR) based on the ischemic index and leakage index with UWFA. A panel of retinal specialists determined the ground truth for our data set before experimentation. A confusion matrix was used as a metric to measure the precision of our algorithm, and a simple linear regression function was implemented to explore the discrimination of the indexes across DR grades. In addition, the model was tested with simulated 7-SF. RESULTS The model's classification of DR achieved 88.50% accuracy in the original UWFA images and 73.68% accuracy in the simulated 7-SF images. A simple linear regression function demonstrated a significant relationship between the ischemic index, the leakage index, and the severity of DR. Thresholds on these two indexes were set to classify the grade of DR, which achieved 76.8% accuracy. CONCLUSIONS The optimized cycle generative adversarial network (CycleGAN) and convolutional neural network (CNN) model classifier achieved DR grading based on the ischemic index and leakage index with UWFA and simulated 7-SF and provided accurate inference results. The classification accuracy with UWFA is slightly higher than that with simulated 7-SF.
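The thresholding step described above amounts to a simple decision rule on the two indexes. A hedged sketch follows; the cutoff values and the exact combination rule are hypothetical, since the abstract does not report the fitted thresholds.

```python
# Hypothetical cutoffs on the two UWFA indexes; the paper's fitted
# thresholds are not reported in the abstract.
ISCHEMIC_CUTOFF = 0.25
LEAKAGE_CUTOFF = 0.15

def grade_dr(ischemic_index, leakage_index):
    """Classify PDR vs NPDR with a simple OR-rule on the two indexes (illustrative)."""
    if ischemic_index >= ISCHEMIC_CUTOFF or leakage_index >= LEAKAGE_CUTOFF:
        return "PDR"
    return "NPDR"

def accuracy(pairs):
    """pairs: iterable of (true_grade, (ischemic_index, leakage_index))."""
    pairs = list(pairs)
    hits = sum(truth == grade_dr(*idx) for truth, idx in pairs)
    return hits / len(pairs)

# Made-up evaluation sample
sample = [
    ("PDR", (0.40, 0.30)),
    ("PDR", (0.10, 0.20)),
    ("NPDR", (0.05, 0.02)),
    ("NPDR", (0.30, 0.01)),  # misclassified by the illustrative rule
]
acc = accuracy(sample)
```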
Affiliation(s)
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Xiao Ma
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Ziyue Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan, China