1
Tang QQ, Yang XG, Wang HQ, Wu DW, Zhang MX. Applications of deep learning for detecting ophthalmic diseases with ultrawide-field fundus images. Int J Ophthalmol 2024; 17:188-200. [PMID: 38239939] [PMCID: PMC10754665] [DOI: 10.18240/ijo.2024.01.24]
Abstract
AIM To summarize the application of deep learning in detecting ophthalmic diseases with ultrawide-field fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS We searched three academic databases (PubMed, Web of Science, and Ovid) through August 2022. Matching and screening by the target keywords and publication year retrieved a total of 4358 research papers, of which 23 studies applied deep learning to diagnosing ophthalmic diseases with ultrawide-field images. RESULTS Deep learning on ultrawide-field images can detect various ophthalmic diseases with strong performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared with standard fundus photography, ultrawide-field scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION The combination of ultrawide-field fundus images and artificial intelligence is expected to achieve strong performance in diagnosing multiple ophthalmic diseases in the future.
Affiliation(s)
- Qing-Qing Tang
- Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Xiang-Gang Yang
- Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hong-Qiu Wang
- Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511400, Guangdong Province, China
- Da-Wen Wu
- Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Mei-Xia Zhang
- Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
2
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107] [PMCID: PMC10587822] [DOI: 10.1001/jamaophthalmol.2023.4650]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). 
Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors should be taken into account during model development for targeted screening scenarios to ensure good performance.
Affiliation(s)
- Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Pengzhi Zhu
- Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
- Yuzhe Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Hongxin Huang
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Changming Hu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Wenyong Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
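The study above summarizes screening performance as a mean area under the ROC curve (AUC). As a schematic reference for what that number means, here is a minimal NumPy sketch using the rank-based (Mann-Whitney U) formulation of AUC. This is illustrative only, not code from the cited paper; the labels and scores are hypothetical.

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U formulation.

    labels: 0/1 ground-truth lesion labels
    scores: model probabilities/confidences
    Returns the probability that a random positive outscores a random
    negative (ties count as half).
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Toy check: perfectly separated scores give AUC = 1.0
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

An AUC of 0.918 as reported above means the model ranks a randomly drawn lesion-positive eye above a randomly drawn negative eye about 92% of the time.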
3
Azzopardi M, Chong YJ, Ng B, Recchioni A, Logeswaran A, Ting DSJ. Diagnosis of Acanthamoeba Keratitis: Past, Present and Future. Diagnostics (Basel) 2023; 13:2655. [PMID: 37627913] [PMCID: PMC10453105] [DOI: 10.3390/diagnostics13162655]
Abstract
Acanthamoeba keratitis (AK) is a painful and sight-threatening parasitic corneal infection. In recent years, the incidence of AK has increased. Timely and accurate diagnosis is crucial during the management of AK, as delayed diagnosis often results in poor clinical outcomes. Currently, AK diagnosis is primarily achieved through a combination of clinical suspicion, microbiological investigations and corneal imaging. Historically, corneal scraping for microbiological culture has been considered to be the gold standard. Despite its technical ease, accessibility and cost-effectiveness, the long diagnostic turnaround time and variably low sensitivity of microbiological culture limit its use as a sole diagnostic test for AK in clinical practice. In this review, we aim to provide a comprehensive overview of the diagnostic modalities that are currently used to diagnose AK, including microscopy with staining, culture, corneal biopsy, in vivo confocal microscopy, polymerase chain reaction and anterior segment optical coherence tomography. We also highlight emerging techniques, such as next-generation sequencing and artificial intelligence-assisted models, which have the potential to transform the diagnostic landscape of AK.
Affiliation(s)
- Matthew Azzopardi
- Department of Ophthalmology, Royal London Hospital, London E1 1BB, UK
- Yu Jeat Chong
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK
- Benjamin Ng
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK
- Alberto Recchioni
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham B15 2TT, UK
- Darren S. J. Ting
- Birmingham and Midland Eye Centre, Birmingham B18 7QH, UK
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham B15 2TT, UK
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham NG7 2RD, UK
4
Zhang J, Sha D, Ma Y, Zhang D, Tan T, Xu X, Yi Q, Zhao Y. Joint conditional generative adversarial networks for eyelash artifact removal in ultra-wide-field fundus images. Front Cell Dev Biol 2023; 11:1181305. [PMID: 37215081] [PMCID: PMC10196374] [DOI: 10.3389/fcell.2023.1181305]
Abstract
Background: Ultra-Wide-Field (UWF) fundus imaging is an essential diagnostic tool for identifying ophthalmologic diseases, as it captures detailed retinal structures within a wider field of view (FOV). However, eyelashes along the edge of the eyelids can cast shadows and obscure the view of fundus imaging, which hinders reliable interpretation and subsequent screening of fundus diseases, and no effective methods or datasets are currently available for removing eyelash artifacts from UWF fundus images. This research aims to develop an effective approach for eyelash artifact removal and thus improve the visual quality of UWF fundus images for accurate analysis and diagnosis. Methods: To address this issue, we first constructed two UWF fundus datasets: the paired synthetic eyelashes (PSE) dataset and the unpaired real eyelashes (uPRE) dataset. We then proposed a deep learning architecture called Joint Conditional Generative Adversarial Networks (JcGAN) to remove eyelash artifacts from UWF fundus images. JcGAN employs a shared generator with two discriminators for joint learning of both real and synthetic eyelash artifacts. Furthermore, we designed a background refinement module that refines background information and is trained with the generator in an end-to-end manner. Results: Experimental results on both the PSE and uPRE datasets demonstrate the superiority of the proposed JcGAN over several state-of-the-art deep learning approaches. Compared with the best existing method, JcGAN improves PSNR and SSIM by 4.82% and 0.23%, respectively. We also verified that eyelash artifact removal via JcGAN significantly improves vessel segmentation in UWF fundus images: after artifact removal, the sensitivity, Dice coefficient, and area under the curve (AUC) of ResU-Net increased by 3.64%, 1.54%, and 1.43%, respectively.
Conclusion: The proposed JcGAN effectively removes eyelash artifacts in UWF images, resulting in improved visibility of retinal vessels. Our method can facilitate better processing and analysis of retinal vessels and has the potential to improve diagnostic outcomes.
Affiliation(s)
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
- Dengfeng Sha
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
- Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Dan Zhang
- School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi’an Jiaotong University, Xi’an, China
- Zhejiang Research Institute of Xi’an Jiaotong University, Hangzhou, China
- Quanyong Yi
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
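The JcGAN study above reports restoration quality as PSNR and SSIM improvements. As an illustrative reminder of what PSNR measures (a minimal NumPy sketch with a toy image, not the paper's evaluation pipeline), PSNR is the log-scaled inverse of the mean squared error between a clean reference and the restored image:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference image
    and a restored image; higher means the restoration is closer."""
    reference = np.asarray(reference, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A restoration off by 1 gray level everywhere: MSE = 1, PSNR ≈ 48.13 dB
ref = np.full((8, 8), 128.0)
print(round(psnr(ref, ref + 1.0), 2))
```

SSIM, the second metric reported, additionally compares local luminance, contrast, and structure rather than raw pixel error, which is why the two are usually reported together.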
5
Dolar-Szczasny J, Barańska A, Rejdak R. Evaluating the Efficacy of Teleophthalmology in Delivering Ophthalmic Care to Underserved Populations: A Literature Review. J Clin Med 2023; 12:3161. [PMID: 37176602] [PMCID: PMC10179149] [DOI: 10.3390/jcm12093161]
Abstract
Technological advancement has brought commendable changes in medicine, advancing diagnosis, treatment, and interventions. Telemedicine has been adopted by various subspecialties, including ophthalmology, and over the years teleophthalmology has been implemented in various countries, with continuous progress being made in this area. In underserved populations, due to socioeconomic factors, there is little or no access to healthcare facilities, and people are at higher risk of eye diseases and vision impairment. Transportation is the major hurdle for these people in obtaining access to eye care in the main hospitals. There is a dire need for accessible eye care for such populations, and teleophthalmology is a ray of hope for providing eye care facilities to underserved people. Numerous studies have reported the advantages of teleophthalmology for rural populations, such as being cost-effective, timesaving, reliable, efficient, and satisfactory for patients. Although it is also practiced in urban populations, its benefits are amplified in rural ones. However, there are obstacles as well, such as the cost of equipment, the lack of steady electricity and internet supply in rural areas, and attitudes toward the acceptance of teleophthalmology in certain regions. In this review, we discuss in detail eye health in rural populations, teleophthalmology, and its effectiveness in rural populations of different countries.
Affiliation(s)
- Joanna Dolar-Szczasny
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
- Agnieszka Barańska
- Department of Medical Informatics and Statistics with E-Learning Laboratory, Medical University of Lublin, 20-090 Lublin, Poland
- Robert Rejdak
- Chair and Department of General and Pediatric Ophthalmology, Medical University of Lublin, 20-079 Lublin, Poland
6
Intelligent Diagnosis of Multiple Peripheral Retinal Lesions in Ultra-widefield Fundus Images Based on Deep Learning. Ophthalmol Ther 2023; 12:1081-1095. [PMID: 36692813] [PMCID: PMC9872743] [DOI: 10.1007/s40123-023-00651-x]
Abstract
INTRODUCTION Compared with traditional fundus examination techniques, ultra-widefield fundus (UWF) imaging provides 200° panoramic images of the retina, which allows better detection of peripheral retinal lesions. UWF imaging offers an effective solution for detection but still lacks efficient diagnostic support. This study proposed a retinal lesion detection model that automatically locates and identifies six relatively typical, high-incidence peripheral retinal lesions from UWF images, enabling early screening and rapid diagnosis. METHODS A total of 24,602 augmented ultra-widefield fundus images, labelled by 5 ophthalmologists with 6 peripheral retinal lesions and normal manifestation, were included in this study. An object detection model named You Only Look Once X (YOLOX) was modified and trained to locate and classify the six peripheral retinal lesions: rhegmatogenous retinal detachment (RRD), retinal breaks (RB), white without pressure (WWOP), cystic retinal tuft (CRT), lattice degeneration (LD), and paving-stone degeneration (PSD). We applied a coordinate attention block and a generalized intersection over union (GIoU) loss to YOLOX and evaluated the model for accuracy, sensitivity, specificity, precision, F1 score, and average precision (AP). The model shows the exact location and a saliency map of each detected lesion, contributing to efficient screening and diagnosis. RESULTS The model reached an average accuracy of 96.64%, sensitivity of 87.97%, specificity of 98.04%, precision of 87.01%, F1 score of 87.39%, and mAP of 86.03% on test dataset 1 (248 UWF images) and an average accuracy of 95.04%, sensitivity of 83.90%, specificity of 96.70%, precision of 78.73%, F1 score of 81.96%, and mAP of 80.59% on external test dataset 2 (586 UWF images), showing that this system performs well in distinguishing the six peripheral retinal lesions.
CONCLUSION Focusing on peripheral retinal lesions, this work proposed a deep learning model, which automatically recognized multiple peripheral retinal lesions from UWF images and localized exact positions of lesions. Therefore, it has certain potential for early screening and intelligent diagnosis of peripheral retinal lesions.
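The YOLOX model above is trained with a generalized intersection over union (GIoU) loss. For reference, GIoU for a pair of axis-aligned boxes can be sketched as follows (illustrative only, with toy boxes; detection losses typically use 1 − GIoU, and the paper's exact implementation is not shown here):

```python
def giou(box_a, box_b):
    """Generalized IoU for boxes given as (x1, y1, x2, y2).

    GIoU = IoU - |C \ (A ∪ B)| / |C|, where C is the smallest box
    enclosing both; unlike IoU it stays informative (negative) even
    when the boxes do not overlap.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (area_c - union) / area_c

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes → 1.0
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes → negative
```

The negative value for disjoint boxes is what gives the loss a useful gradient when a predicted lesion box does not yet overlap the ground truth.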
7
Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep Learning for the Detection of Multiple Fundus Diseases Using Ultra-widefield Images. Ophthalmol Ther 2023; 12:895-907. [PMID: 36565376] [PMCID: PMC10011259] [DOI: 10.1007/s40123-022-00627-3]
Abstract
INTRODUCTION To design and evaluate a deep learning model based on ultra-widefield images (UWFIs) that can detect several common fundus diseases. METHODS Based on 4574 UWFIs, a deep learning model was trained and validated to identify normal fundus and eight common fundus diseases, namely referable diabetic retinopathy, retinal vein occlusion, pathologic myopia, retinal detachment, retinitis pigmentosa, age-related macular degeneration, vitreous opacity, and optic neuropathy. The model was tested on three test sets with data volumes of 465, 979, and 525. The performance of three deep learning networks, EfficientNet-B7, DenseNet, and ResNet-101, was evaluated on the internal test set. Additionally, we compared the performance of the deep learning model with that of doctors in a tertiary referral hospital. RESULTS Compared to the other two deep learning models, EfficientNet-B7 achieved the best performance. The areas under the receiver operating characteristic curves of the EfficientNet-B7 model on the internal test set, external test set A, and external test set B were 0.9708 (0.8772, 0.9849) to 1.0000 (1.0000, 1.0000), 0.9683 (0.8829, 0.9770) to 1.0000 (0.9975, 1.0000), and 0.8919 (0.7150, 0.9055) to 0.9977 (0.9165, 1.0000), respectively. On a data set of 100 images, the total accuracy of the deep learning model was 93.00%, while the average accuracies of three ophthalmologists with 2 years of experience and three ophthalmologists with more than 5 years of experience in fundus imaging were 88.00% and 94.00%, respectively. CONCLUSION High performance was achieved on all three test sets using our UWFI multidisease classification model with a small sample size and fast model inference. The performance of the artificial intelligence model was comparable to that of a physician with 2-5 years of experience in fundus diseases at a tertiary referral hospital. The model is expected to serve as an effective aid for fundus disease screening.
Affiliation(s)
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Lizhang Xu
- Wuhan Aiyanbang Technology Co., Ltd, Wuhan, 430073, China
- Chang Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Huijuan Luo
- The People's Hospital of Yidu, Yidu, 443300, China
- Yu Su
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Jian Zheng
- School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Zhiqing Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Zhen Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
8
Wang R, He J, Chen Q, Ye L, Sun D, Yin L, Zhou H, Zhao L, Zhu J, Zou H, Tan Q, Huang D, Liang B, He L, Wang W, Fan Y, Xu X. Efficacy of a Deep Learning System for Screening Myopic Maculopathy Based on Color Fundus Photographs. Ophthalmol Ther 2022; 12:469-484. [PMID: 36495394] [PMCID: PMC9735275] [DOI: 10.1007/s40123-022-00621-9]
Abstract
INTRODUCTION Maculopathy in highly myopic eyes is complex, and its clinical diagnosis is labor-intensive and subjective. To classify pathologic myopia (PM) simply and quickly, a deep learning algorithm was developed and assessed to screen myopic maculopathy lesions based on color fundus photographs. METHODS This study included 10,347 ocular fundus photographs from 7606 participants. Of these photographs, 8210 were used for training and validation, and 2137 for external testing. A deep learning algorithm was trained, validated, and externally tested to screen myopic maculopathy, which was classified into four categories: normal or mild tessellated fundus, severe tessellated fundus, early-stage PM, and advanced-stage PM. The area under the precision-recall curve, the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and Cohen's kappa were calculated and compared with those of retina specialists. RESULTS In the validation data set, the model detected normal or mild tessellated fundus, severe tessellated fundus, early-stage PM, and advanced-stage PM with AUCs of 0.98, 0.95, 0.99, and 1.00, respectively; in the external-testing data set of 2137 photographs, the model had AUCs of 0.99, 0.96, 0.98, and 1.00, respectively. CONCLUSIONS We developed a deep learning model for detection and classification of myopic maculopathy based on fundus photographs. Our model achieved high sensitivities, specificities, and reliable Cohen's kappa compared with those of attending ophthalmologists.
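The study above reports Cohen's kappa to compare model grades with retina specialists. As a reference sketch of the statistic (with hypothetical labels, not the study's data), kappa corrects raw agreement for the agreement expected by chance given each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two label sequences,
    corrected for the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement from the marginal label frequencies of each rater
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Four-category grades as in the maculopathy task (hypothetical data)
model  = ["normal", "severe", "early_pm", "advanced_pm", "normal", "severe"]
expert = ["normal", "severe", "early_pm", "advanced_pm", "normal", "early_pm"]
print(round(cohens_kappa(model, expert), 3))
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is preferred over raw accuracy for multi-grader comparisons like this one.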
Affiliation(s)
- Ruonan Wang
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Jiangnan He
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.24516.340000000123704535School of Medicine, Tongji University, Shanghai, China
| | - Qiuying Chen
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Luyao Ye
- grid.452752.30000 0004 8501 948XDepartment of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China ,grid.16821.3c0000 0004 0368 8293Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628National Clinical Research Center for Eye Diseases, Shanghai, 200080 China ,grid.16821.3c0000 0004 0368 8293Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China ,Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China ,grid.412478.c0000 0004 1760 4628Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Dandan Sun
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Lili Yin
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Hao Zhou
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Lijun Zhao
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Jianfeng Zhu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China
| | - Haidong Zou
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
| | - Qichao Tan
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Difeng Huang
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Bo Liang
- School of Biology and Food Engineering, Changshu Institute of Technology, Changshu, China
| | - Lin He
- Suzhou Life Intelligence Industry Research Institute, Suzhou, 215124 China
| | - Weijun Wang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China; No. 100 Haining Road, Shanghai, 200080 China
| | - Ying Fan
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China; No. 380 Kangding Road, Shanghai, 200080 China
| | - Xun Xu
- Department of Preventative Ophthalmology, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, 200040 China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200080 China; National Clinical Research Center for Eye Diseases, Shanghai, 200080 China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Jiao Tong University, Shanghai, 200080 China; Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, 200080 China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, 200080 China
9. Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. PMID: 35857087; DOI: 10.1007/s00417-022-05741-3.
Abstract
PURPOSE This article is a scoping review of published and peer-reviewed articles using deep-learning (DL) applied to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied in the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
| | - Fares Antaki
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
| | - Farida El Malt
- Faculty of Medicine, McGill University, Montréal, Québec, Canada
| | - AnQi Xu
- Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
| | - Renaud Duval
- Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
10. Han R, Yu W, Chen H, Chen Y. Using artificial intelligence reading label system in diabetic retinopathy grading training of junior ophthalmology residents and medical students. BMC Med Educ 2022; 22:258. PMID: 35397598; PMCID: PMC8994224; DOI: 10.1186/s12909-022-03272-3.
Abstract
PURPOSE To evaluate the efficiency of using an artificial intelligence reading label system in the diabetic retinopathy grading training of junior ophthalmology resident doctors and medical students. METHODS We loaded 520 diabetic retinopathy patients' colour fundus images into the artificial intelligence reading label system. Thirteen participants, including six junior ophthalmology residents and seven medical students, read the images randomly for eight rounds. They evaluated the grading of the images and labeled the typical lesions. Sensitivity, specificity, and kappa scores were determined by comparing the participants' results with the diagnostic gold standard. RESULTS Through eight rounds of reading, the average kappa score rose from 0.67 to 0.81. The average kappa score for rounds 1 to 4 was 0.77, and the average kappa score for rounds 5 to 8 was 0.81. The participants were divided into two groups: Group 1 comprised the junior ophthalmology resident doctors, and Group 2 comprised the medical students. The average kappa score of Group 1 rose from 0.71 to 0.76, and that of Group 2 rose from 0.63 to 0.84. CONCLUSION The artificial intelligence reading label system is a valuable tool for training resident doctors and medical students in performing diabetic retinopathy grading.
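As an aside on the metric used above: the agreement scores (e.g., 0.67 rising to 0.81) are kappa statistics, which correct raw agreement for chance. The study does not publish its computation, but a minimal sketch of Cohen's kappa on invented grades might look like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of items given the same grade.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater graded independently by their marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diabetic retinopathy grades (0-4 scale), trainee vs. gold standard.
gold    = [0, 0, 1, 2, 2, 3, 4, 1]
trainee = [0, 0, 1, 2, 3, 3, 4, 0]
kappa = cohens_kappa(gold, trainee)
```

Kappa is 1.0 for perfect agreement and 0 for chance-level agreement; values around 0.8, as reported in the later rounds, are conventionally read as substantial agreement.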
Affiliation(s)
- Ruoan Han
- Department of Ophthalmology, Peking Union Medical College Hospital, Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
| | - Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Beijing, 100730, China.
11. Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit Med 2022; 5:23. PMID: 35236921; PMCID: PMC8891262; DOI: 10.1038/s41746-022-00571-3.
Abstract
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
12. Li Z, Jiang J, Qiang W, Guo L, Liu X, Weng H, Wu S, Zheng Q, Chen W. Comparison of deep learning systems and cornea specialists in detecting corneal diseases from low-quality images. iScience 2021; 24:103317. PMID: 34778732; PMCID: PMC8577078; DOI: 10.1016/j.isci.2021.103317.
Abstract
The performance of deep learning in disease detection from high-quality clinical images is comparable to, and sometimes greater than, that of human doctors. However, deep learning performs poorly on low-quality images. Whether human doctors also perform poorly on low-quality images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS), which was trained on only high-quality images. The performance of a system trained on both high- and low-quality images was superior to that of the PEDLS, while inferior to that of a senior corneal specialist. This study highlights that cornea specialists perform better on low-quality images than a system trained on high-quality images, and that adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Liufei Guo
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
13. Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, Chen C, Zhao L, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye (Lond) 2021; 36:1681-1686. PMID: 34345030; PMCID: PMC9307785; DOI: 10.1038/s41433-021-01715-7.
Abstract
BACKGROUND Retinal exudates and/or drusen (RED) can be signs of many fundus diseases that can lead to irreversible vision loss. Early detection and treatment of these diseases are critical for improving vision prognosis. However, manual RED screening on a large scale is time-consuming and labour-intensive. Here, we aim to develop and assess a deep learning system for automated detection of RED using ultra-widefield fundus (UWF) images. METHODS A total of 26,409 UWF images from 14,994 subjects were used to develop and evaluate the deep learning system. The Zhongshan Ophthalmic Center (ZOC) dataset was selected to compare the performance of the system to that of retina specialists in RED detection. The saliency map visualization technique was used to understand which areas in the UWF image had the most influence on our deep learning system when detecting RED. RESULTS The system for RED detection achieved areas under the receiver operating characteristic curve of 0.994 (95% confidence interval [CI]: 0.991-0.996), 0.972 (95% CI: 0.957-0.984), and 0.988 (95% CI: 0.983-0.992) in three independent datasets. The performance of the system in the ZOC dataset was comparable to that of an experienced retina specialist. Regions of RED were highlighted by saliency maps in UWF images. CONCLUSIONS Our deep learning system is reliable in the automated detection of RED in UWF images. As a screening tool, our system may promote the early diagnosis and management of RED-related fundus diseases.
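AUCs with 95% confidence intervals, such as the 0.994 (95% CI: 0.991-0.996) reported above, are commonly obtained by computing the rank-based (Mann-Whitney) AUC and then resampling patients with a percentile bootstrap. The paper does not publish its exact procedure; a self-contained sketch on entirely made-up labels and scores:

```python
import random

def auc(labels, scores):
    """Rank-based AUC: probability a positive outscores a negative (ties = 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the AUC, resampling cases with replacement."""
    rng = random.Random(seed)
    stats = []
    while len(stats) < n_boot:
        sample = [rng.randrange(len(labels)) for _ in labels]
        ls = [labels[i] for i in sample]
        if 0 < sum(ls) < len(ls):  # resample must contain both classes
            stats.append(auc(ls, [scores[i] for i in sample]))
    stats.sort()
    lo = stats[round(alpha / 2 * (n_boot - 1))]
    hi = stats[round((1 - alpha / 2) * (n_boot - 1))]
    return lo, hi

labels = [0, 0, 0, 0, 1, 1, 1, 1]            # hypothetical ground truth
scores = [0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9]  # hypothetical model outputs
point = auc(labels, scores)
lo, hi = bootstrap_ci(labels, scores)
```

With real datasets of thousands of images, the bootstrap interval narrows accordingly, which is why the reported CIs are so tight.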
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Tingxin Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida, USA
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xulin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, Inner Mongolia, China
| | - Yu Han
- EYE & ENT Hospital of Fudan University, Shanghai, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
14. Preventing corneal blindness caused by keratitis using artificial intelligence. Nat Commun 2021; 12:3738. PMID: 34145294; PMCID: PMC8213803; DOI: 10.1038/s41467-021-24116-6.
Abstract
Keratitis is the main cause of corneal blindness worldwide. Most vision loss caused by keratitis can be avoided via early detection and treatment. The diagnosis of keratitis often requires skilled ophthalmologists. However, the world is short of ophthalmologists, especially in resource-limited settings, making the early diagnosis of keratitis challenging. Here, we develop a deep learning system for the automated classification of keratitis, other cornea abnormalities, and normal cornea based on 6,567 slit-lamp images. Our system exhibits remarkable performance on cornea images captured by different types of digital slit lamp cameras and by a smartphone with the super macro mode (all AUCs > 0.96). Comparable sensitivity and specificity in keratitis detection are observed between the system and experienced cornea specialists. Our system has the potential to be applied to both digital slit lamp cameras and smartphones to promote the early diagnosis and treatment of keratitis, preventing corneal blindness caused by keratitis.
15. Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: a global perspective. Prog Retin Eye Res 2021; 82:100900. PMID: 32898686; PMCID: PMC7474840; DOI: 10.1016/j.preteyeres.2020.100900.
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using telehealth supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th-generation (5G) telecommunication networks, and the Internet of Things (IoT), creating an interdependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Telehealth and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract, and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovation during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Hanruo Liu
- Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
| | - Darren S J Ting
- Academic Ophthalmology, University of Nottingham, United Kingdom
| | - Sohee Jeon
- Keye Eye Center, Seoul, Republic of Korea
| | | | - Judy E Kim
- Medical College of Wisconsin, Milwaukee, WI, USA
| | - Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Haotian Lin
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
| | - Youxin Chen
- Peking Union Medical College Hospital, Beijing, China
| | - Taiji Sakomoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
| | | | - Dennis S C Lam
- C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
| | - Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
| | - Tien Y Wong
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
| | - Linda A Lam
- USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
| | - Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore.
16. Li Z, Jiang J, Chen K, Zheng Q, Liu X, Weng H, Wu S, Chen W. Development of a deep learning-based image quality control system to detect and filter out ineligible slit-lamp images: a multicenter study. Comput Methods Programs Biomed 2021; 203:106048. PMID: 33765481; DOI: 10.1016/j.cmpb.2021.106048.
Abstract
BACKGROUND AND OBJECTIVE Previous studies developed artificial intelligence (AI) diagnostic systems only using eligible slit-lamp images for detecting corneal diseases. However, images of ineligible quality (including poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause diagnostic information loss and thus affect downstream AI-based image analysis. Manual evaluation of the eligibility of slit-lamp images often requires an ophthalmologist, and this procedure can be time-consuming and labor-intensive when applied on a large scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images). METHODS We developed and externally evaluated the DLIQCS based on 48,530 slit-lamp images (19,890 individuals) that were derived from 4 independent institutions using different types of digital slit lamp cameras. To find the best deep learning model for the DLIQCS, we used 3 algorithms (AlexNet, DenseNet121, and InceptionV3) to train models. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were used to assess the performance of each algorithm for the classification of poor-field, defocused, poor-location, and eligible images. RESULTS In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 in the detection of poor-field, defocused, poor-location, and eligible images, respectively. In external test datasets, the AUCs of DenseNet121 for identifying poor-field, defocused, poor-location, and eligible images ranged from 0.997 to 0.997, 0.983 to 0.995, 0.995 to 0.998, and 0.999 to 0.999, respectively. CONCLUSIONS Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images, so that only eligible images are transferred to subsequent AI diagnostic systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Jiewei Jiang
- School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
| | - Kuan Chen
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Shanjun Wu
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
17. Cai S, Parker F, Urias MG, Goldberg MF, Hager GD, Scott AW. Deep Learning Detection of Sea Fan Neovascularization From Ultra-Widefield Color Fundus Photographs of Patients With Sickle Cell Hemoglobinopathy. JAMA Ophthalmol 2021; 139:206-213. PMID: 33377944; DOI: 10.1001/jamaophthalmol.2020.5900.
Abstract
Importance Adherence to screening for vision-threatening proliferative sickle cell retinopathy is limited among patients with sickle cell hemoglobinopathy despite guidelines recommending dilated fundus examinations beginning in childhood. An automated algorithm for detecting sea fan neovascularization from ultra-widefield color fundus photographs could expand access to rapid retinal evaluations to identify patients at risk of vision loss from proliferative sickle cell retinopathy. Objective To develop a deep learning system for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy. Design, Setting, and Participants In a cross-sectional study conducted at a single-institution, tertiary academic referral center, deidentified, retrospectively collected, ultra-widefield color fundus photographs from 190 adults with sickle cell hemoglobinopathy were independently graded by 2 masked retinal specialists for presence or absence of sea fan neovascularization. A third masked retinal specialist regraded images with discordant or indeterminate grades. Consensus retinal specialist reference standard grades were used to train a convolutional neural network to classify images for presence or absence of sea fan neovascularization. Participants included nondiabetic adults with sickle cell hemoglobinopathy receiving care from a Wilmer Eye Institute retinal specialist; the patients had received no previous laser or surgical treatment for sickle cell retinopathy and underwent imaging with ultra-widefield color fundus photographs between January 1, 2012, and January 30, 2019. Interventions Deidentified ultra-widefield color fundus photographs were retrospectively collected. Main Outcomes and Measures Sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network for sea fan detection. Results A total of 1182 images from 190 patients were included. 
Of the 190 patients, 101 were women (53.2%), and the mean (SD) age at baseline was 36.2 (12.3) years; 119 patients (62.6%) had hemoglobin SS disease and 46 (24.2%) had hemoglobin SC disease. One hundred seventy-nine patients (94.2%) were of Black or African descent. Images with sea fan neovascularization were obtained in 57 patients (30.0%). The convolutional neural network had an area under the curve of 0.988 (95% CI, 0.969-0.999), with sensitivity of 97.4% (95% CI, 86.5%-99.9%) and specificity of 97.0% (95% CI, 93.5%-98.9%) for detecting sea fan neovascularization from ultra-widefield color fundus photographs. Conclusions and Relevance This study reports an automated system with high sensitivity and specificity for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy, with potential applications for improving screening for vision-threatening proliferative sickle cell retinopathy.
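The abstract above reports sensitivity, specificity, and area under the receiver operating characteristic curve for the sea fan classifier. As a minimal, illustrative sketch (the labels and scores below are made up, not the study's data), these metrics can be computed from reference-standard grades and model scores in plain Python:

```python
# Hedged sketch: computing sensitivity, specificity, and AUC from binary
# reference-standard grades (1 = sea fan present) and model scores.
# Pure Python; the example data are illustrative only.

def sensitivity_specificity(labels, scores, threshold=0.5):
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    # AUC equals the probability that a random positive outranks a random
    # negative (the Mann-Whitney U formulation); ties count as half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1]
sens, spec = sensitivity_specificity(labels, scores)
```

In practice the study also reports 95% confidence intervals for each metric; those would typically be obtained by bootstrap resampling or exact binomial methods, which this sketch omits.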
Collapse
Affiliation(s)
- Sophie Cai
- Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland; Retina Division, Duke Eye Center, Durham, North Carolina
| | - Felix Parker
- Center for Systems Science and Engineering, The Johns Hopkins University, Baltimore, Maryland; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland
| | - Muller G Urias
- Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland; Retina Division, Ophthalmology and Vision Sciences Department, Federal University of São Paulo, São Paulo, Brazil
| | - Morton F Goldberg
- Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland
| | - Gregory D Hager
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland
| | - Adrienne W Scott
- Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland
| |
Collapse
|
18
|
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439 PMCID: PMC7603327 DOI: 10.1038/s41746-020-00350-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 09/24/2020] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
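The "selective eating" workflow described above amounts to gating each image through a quality classifier before the diagnostic model ever sees it. The sketch below illustrates only that control flow; the two predictors are stand-in stubs, not the paper's DLIFS or its neural networks:

```python
# Hedged sketch of the "selective eating" gating idea: a quality filter
# decides whether each image is passed to the diagnostic model or flagged
# for recapture. Both models here are illustrative stubs.

def quality_score(image):
    # Stand-in for the DLIFS quality classifier: just mean intensity in [0, 1].
    return sum(image) / len(image)

def diagnose(image):
    # Stand-in for a downstream AI diagnostic model.
    return "lesion" if max(image) > 0.9 else "normal"

def screen(images, quality_threshold=0.35):
    results = []
    for img in images:
        if quality_score(img) < quality_threshold:
            results.append("recapture")    # poor quality: filtered out
        else:
            results.append(diagnose(img))  # good quality: passed downstream
    return results

images = [[0.1, 0.2, 0.1], [0.5, 0.95, 0.6], [0.4, 0.5, 0.4]]
results = screen(images)
```

The design point the paper makes is that the filter and the diagnostic model are trained and evaluated separately, so the gate can be prepended to any existing diagnostic system without retraining it.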
Collapse
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
| | - Yu Han
- Eye and ENT Hospital of Fudan University, 200031 Shanghai, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
| |
Collapse
|
19
|
Li Z, Guo C, Lin D, Nie D, Zhu Y, Chen C, Zhao L, Wang J, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Han Y, Lin H. Deep learning for automated glaucomatous optic neuropathy detection from ultra-widefield fundus images. Br J Ophthalmol 2020; 105:1548-1554. [DOI: 10.1136/bjophthalmol-2020-317327] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 08/22/2020] [Accepted: 08/25/2020] [Indexed: 12/28/2022]
Abstract
Background/Aims To develop a deep learning system for automated glaucomatous optic neuropathy (GON) detection using ultra-widefield fundus (UWF) images. Methods We trained, validated and externally evaluated a deep learning system for GON detection based on 22 972 UWF images from 10 590 subjects that were collected at 4 different institutions in China and Japan. The InceptionResNetV2 neural network architecture was used to develop the system. The area under the receiver operating characteristic curve (AUC), sensitivity and specificity were used to assess the performance of detecting GON by the system. The data set from the Zhongshan Ophthalmic Center (ZOC) was selected to compare the performance of the system to that of ophthalmologists who mainly conducted UWF image analysis in clinics. Results The system for GON detection achieved AUCs of 0.983–0.999 with sensitivities of 97.5–98.2% and specificities of 94.3–98.4% in four independent data sets. The most common reason for false-negative results was confounding optic disc characteristics caused by high myopia or pathological myopia (n=39 (53%)). The leading cause of false-positive results was the presence of other fundus lesions (n=401 (96%)). The performance of the system in the ZOC data set was comparable to that of an experienced ophthalmologist (p>0.05). Conclusion Our deep learning system can accurately detect GON from UWF images in an automated fashion. It may be used as a screening tool to improve the accessibility of screening and promote the early diagnosis and management of glaucoma.
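The abstract above reports that the system was comparable to an experienced ophthalmologist (p>0.05) on paired image-level judgments. The abstract does not name the statistical test used; one standard choice for paired classifier-versus-reader comparisons is McNemar's exact test on the discordant images, sketched here purely as an illustration:

```python
from math import comb

# Hedged sketch: McNemar's exact (binomial) test on discordant pairs.
# b = images only the model graded correctly, c = images only the reader
# graded correctly. This is an assumed method, not necessarily the paper's.

def mcnemar_exact(b, c):
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact p-value: double the smaller binomial tail under p=0.5.
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

# E.g., 6 vs 4 discordant images out of a shared test set:
p = mcnemar_exact(6, 4)
```

With so few discordant images the test has little power to detect a difference, which is consistent with a non-significant result when two raters agree on nearly every image.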
Collapse
|