51. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022;12(7):973. PMID: 35888063; PMCID: PMC9321111; DOI: 10.3390/life12070973.
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images carry information about the three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing literature is surveyed extensively to explore which color channel is most commonly used for automatically detecting four leading causes of blindness and one retinal abnormality, and for segmenting three retinal landmarks. The survey shows that neural network-based systems typically use all channels together, whereas non-neural network-based systems most commonly use the green channel. However, no conclusion about the relative importance of the different channels can be drawn from previous work. Therefore, systematic experiments are conducted to analyze this question: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
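To make the channel comparison concrete, here is a minimal sketch (not from the paper; the image representation and function name are illustrative) of extracting one channel from an RGB image, the preprocessing step a channel-wise pipeline would start from:

```python
# Hypothetical sketch of channel extraction: an RGB image is modeled as a
# nested list of (R, G, B) tuples, and one channel is kept as a grayscale
# image. Surveys such as this one report that non-neural pipelines most
# often work on the green channel, which tends to show the strongest
# vessel/background contrast.

RED, GREEN, BLUE = 0, 1, 2

def extract_channel(image, channel):
    """Return a 2-D grayscale image holding one color channel of `image`."""
    if channel not in (RED, GREEN, BLUE):
        raise ValueError("channel must be 0 (R), 1 (G), or 2 (B)")
    return [[pixel[channel] for pixel in row] for row in image]

# Toy 1x2 "fundus image": one row, two pixels.
toy = [[(120, 200, 30), (10, 180, 60)]]
green_only = extract_channel(toy, GREEN)  # [[200, 180]]
```

In a real pipeline the same operation is usually a single array slice (e.g. `img[:, :, 1]` on a NumPy array), but the list form keeps the sketch dependency-free.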
Affiliation(s)
- Sangeeta Biswas (corresponding author): Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas: CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai: Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin: Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
52. Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Programs Biomed 2022;219:106735. PMID: 35305492; DOI: 10.1016/j.cmpb.2022.106735.
Abstract
BACKGROUND AND OBJECTIVES Patients with angle-closure glaucoma (ACG) are asymptomatic until they experience a painful attack. Shallow anterior chamber depth (ACD) is considered a significant risk factor for ACG. We propose a deep learning approach to detect shallow ACD using fundus photographs and to identify the hidden features of shallow ACD. METHODS This retrospective study assigned healthy subjects to training (n = 1188 eyes) and test (n = 594 eyes) datasets (prospective validation design). We used a deep learning approach to estimate ACD and built a classification model to identify eyes with a shallow ACD. To visualize the characteristic features of fundus photographs with a shallow ACD, we subtracted the input and output images of a CycleGAN and applied a thresholding algorithm. RESULTS The deep learning model integrating fundus photographs and clinical variables achieved areas under the receiver operating characteristic curve of 0.978 (95% confidence interval [CI], 0.963-0.988) for an ACD ≤ 2.60 mm and 0.895 (95% CI, 0.868-0.919) for an ACD ≤ 2.80 mm, outperforming the regression model that used only clinical variables. However, the difference between the shallow and deep ACD classes on fundus photographs was difficult to detect with the naked eye, and we were unable to identify the features of shallow ACD using Grad-CAM. The CycleGAN-based feature images showed that the areas around the macula and optic disc contributed significantly to the classification of fundus photographs with a shallow ACD. CONCLUSIONS We demonstrated the feasibility of a novel deep learning model to detect a shallow ACD as a screening tool for ACG using fundus photographs. The CycleGAN-based feature map revealed hidden characteristic features of shallow ACD that were previously undetectable by conventional techniques and ophthalmologists. This framework will facilitate the early detection of shallow ACD, helping to avoid overlooking the risks associated with ACG.
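The subtraction-plus-thresholding step described in the abstract can be sketched as follows (a simplified illustration, not the authors' code; grayscale images are assumed as lists of rows, whereas the actual pipeline works on CycleGAN input/output image pairs):

```python
# Illustrative sketch: given an image and its CycleGAN translation to the
# opposite class, pixels that changed by more than a threshold are marked,
# yielding a binary map of the regions the generator had to alter, i.e. the
# class-discriminative regions.

def difference_map(original, translated, threshold):
    """Binary mask: 1 where |original - translated| exceeds `threshold`."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(original, translated)
    ]

orig = [[10, 50], [200, 90]]
trans = [[12, 80], [100, 91]]
mask = difference_map(orig, trans, threshold=20)  # [[0, 1], [1, 0]]
```

The threshold trades off sensitivity against noise: a low value highlights every generator artifact, while a high value keeps only the strongest class-specific changes.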
Affiliation(s)
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea
- Ik Hee Ryu: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim: Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
53. Dong L, He W, Zhang R, Ge Z, Wang YX, Zhou J, Xu J, Shao L, Wang Q, Yan Y, Xie Y, Fang L, Wang H, Wang Y, Zhu X, Wang J, Zhang C, Wang H, Wang Y, Chen R, Wan Q, Yang J, Zhou W, Li H, Yao X, Yang Z, Xiong J, Wang X, Huang Y, Chen Y, Wang Z, Rong C, Gao J, Zhang H, Wu S, Jonas JB, Wei WB. Artificial Intelligence for Screening of Multiple Retinal and Optic Nerve Diseases. JAMA Netw Open 2022;5:e229960. PMID: 35503220; PMCID: PMC9066285; DOI: 10.1001/jamanetworkopen.2022.9960.
Abstract
IMPORTANCE The lack of experienced ophthalmologists limits the early diagnosis of retinal diseases. Artificial intelligence can provide efficient, real-time screening for retinal diseases. OBJECTIVE To develop and prospectively validate a deep learning (DL) algorithm that recognizes numerous retinal diseases simultaneously from ocular fundus images in clinical practice. DESIGN, SETTING, AND PARTICIPANTS This multicenter diagnostic study at 65 public medical screening centers and hospitals in 19 Chinese provinces included individuals attending annual routine medical examinations and participants of population-based and community-based studies. EXPOSURES Based on 120 002 ocular fundus photographs, the Retinal Artificial Intelligence Diagnosis System (RAIDS) was developed to identify 10 retinal diseases. RAIDS was validated on a prospectively collected dataset, and its performance was compared with that of ophthalmologists on the datasets of the population-based Beijing Eye Study and the community-based Kailuan Eye Study. MAIN OUTCOMES AND MEASURES The performance of each classifier was measured by sensitivity, specificity, accuracy, F1 score, and Cohen κ score. RESULTS In the prospective validation dataset of 208 758 images collected from 110 784 individuals (median [range] age, 42 [8-87] years; 115 443 [55.3%] female), RAIDS achieved a sensitivity of 89.8% (95% CI, 89.5%-90.1%) for detecting any of the 10 retinal diseases. RAIDS differentiated the 10 retinal diseases with accuracies ranging from 95.3% to 99.9%, without marked differences between medical screening centers or geographical regions in China. Compared with retinal specialists, RAIDS achieved a higher sensitivity for detecting any retinal abnormality (RAIDS, 91.7% [95% CI, 90.6%-92.8%]; certified ophthalmologists, 83.7% [95% CI, 82.1%-85.1%]; junior retinal specialists, 86.4% [95% CI, 84.9%-87.7%]; and senior retinal specialists, 88.5% [95% CI, 87.1%-89.8%]). RAIDS reached a superior or similar diagnostic sensitivity compared with senior retinal specialists in detecting 7 of the 10 retinal diseases (ie, referral diabetic retinopathy, referral possible glaucoma, macular hole, epiretinal macular membrane, hypertensive retinopathy, myelinated fibers, and retinitis pigmentosa). Its performance was comparable with that of certified ophthalmologists for 2 diseases (ie, age-related macular degeneration and retinal vein occlusion). Compared with ophthalmologists, RAIDS needed 96% to 97% less time for image assessment. CONCLUSIONS AND RELEVANCE In this diagnostic study, the DL system accurately distinguished 10 retinal diseases in real time. This technology may help compensate for the lack of experienced ophthalmologists in underdeveloped areas.
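The outcome measures named above (sensitivity, specificity, accuracy, F1 score, Cohen κ) can all be derived from a 2x2 confusion matrix. A minimal sketch (function name and counts are illustrative, not study data):

```python
# Illustrative computation of the reported outcome measures from confusion
# matrix counts (tp, fp, fn, tn). Cohen's kappa compares observed agreement
# with the agreement expected by chance alone.

def classification_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Chance agreement: product of the marginal "positive" rates plus the
    # product of the marginal "negative" rates.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1, "kappa": kappa}

m = classification_metrics(tp=40, fp=10, fn=10, tn=40)
# sensitivity = specificity = accuracy = f1 = 0.8, kappa = 0.6
```

Note how κ (0.6) is lower than accuracy (0.8) here: half of the agreement in this balanced toy example would already be expected by chance.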
Affiliation(s)
- Li Dong, Ruiheng Zhang, Jinqiong Zhou, Lei Shao, Qian Wang, Yanni Yan, Jinyuan Wang, Chuan Zhang, Heng Wang, Yining Wang, Rongtian Chen, Jingyan Yang, Wenda Zhou, Heyan Li, Wen Bin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ying Xie: Beijing Tongren Eye Center (as above); Department of Ophthalmology, Shanxi Provincial People's Hospital, Taiyuan, China
- Lijian Fang: Beijing Tongren Eye Center (as above); Department of Ophthalmology, Beijing Liangxiang Hospital, Capital Medical University, Beijing, China
- Haiwei Wang: Beijing Tongren Eye Center (as above); Department of Ophthalmology, Fuxing Hospital, Capital Medical University, Beijing, China
- Yenan Wang: Beijing Tongren Eye Center (as above); Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaobo Zhu: Beijing Tongren Eye Center (as above); Department of Ophthalmology, Dongfang Hospital, Beijing University of Chinese Medicine, Beijing, China
- Wanji He, Xuan Yao, Zhiwen Yang, Xin Wang, Yelin Huang, Yuzhong Chen: Beijing Airdoc Technology Co, Ltd, Beijing, China
- Zongyuan Ge: eResearch Centre and ECSE, Faculty of Engineering, Monash University, Melbourne, Victoria, Australia
- Ya Xing Wang, Jie Xu: Beijing Institute of Ophthalmology, Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qianqian Wan: Department of Ophthalmology, the Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Zhaohui Wang, Ce Rong, Jianxiong Gao: iKang Guobin Healthcare Group Co, Ltd, Beijing, China
- Shouling Wu: Department of Cardiology, Kailuan General Hospital, Tangshan, Hebei, China
- Jost B Jonas: Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
54. Yun JS, Kim J, Jung SH, Cha SA, Ko SH, Ahn YB, Won HH, Sohn KA, Kim D. A deep learning model for screening type 2 diabetes from retinal photographs. Nutr Metab Cardiovasc Dis 2022;32:1218-1226. PMID: 35197214; PMCID: PMC9018521; DOI: 10.1016/j.numecd.2022.01.010.
Abstract
BACKGROUND AND AIMS We aimed to develop and evaluate a non-invasive deep learning algorithm for screening type 2 diabetes in UK Biobank participants using retinal images. METHODS AND RESULTS The deep learning model for prediction of type 2 diabetes was trained on retinal images from 50,077 UK Biobank participants and tested on 12,185 participants. We evaluated its performance in predicting traditional risk factors (TRFs) and genetic risk for diabetes. Next, we compared the performance of three models in predicting type 2 diabetes: 1) an image-only deep learning algorithm, 2) TRFs alone, and 3) the combination of the algorithm and TRFs. Net reclassification improvement (NRI) was assessed to quantify the improvement afforded by adding the algorithm to the TRF model. When predicting TRFs with the deep learning algorithm, the areas under the curve (AUCs) obtained with the validation set for age, sex, and HbA1c status were 0.931 (0.928-0.934), 0.933 (0.929-0.936), and 0.734 (0.715-0.752), respectively. When predicting type 2 diabetes, the AUC of the composite logistic model using non-invasive TRFs was 0.810 (0.790-0.830), and that of the deep learning model using only fundus images was 0.731 (0.707-0.756). Adding TRFs to the deep learning algorithm improved discriminative performance to 0.844 (0.826-0.861), and adding the algorithm to the TRF model improved risk stratification with an overall NRI of 50.8%. CONCLUSION Our results demonstrate that this deep learning algorithm can be a useful tool for stratifying individuals at high risk of type 2 diabetes in the general population.
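The NRI used above to quantify the gain from adding the algorithm to the TRF model can be illustrated with a small sketch (this shows the category-free, continuous form of the NRI; the variable names and toy data are hypothetical, and the study may have used risk categories instead):

```python
# Illustrative continuous (category-free) net reclassification improvement:
# for people who developed the outcome, predicted risk should move up under
# the new model; for those who did not, it should move down. NRI sums the
# two net proportions, so it ranges from -2 to +2.

def continuous_nri(old_risk, new_risk, outcome):
    n_event = sum(outcome)
    n_nonevent = len(outcome) - n_event
    up_e = sum(1 for o, n, y in zip(old_risk, new_risk, outcome) if y and n > o)
    down_e = sum(1 for o, n, y in zip(old_risk, new_risk, outcome) if y and n < o)
    up_ne = sum(1 for o, n, y in zip(old_risk, new_risk, outcome) if not y and n > o)
    down_ne = sum(1 for o, n, y in zip(old_risk, new_risk, outcome) if not y and n < o)
    return (up_e - down_e) / n_event + (down_ne - up_ne) / n_nonevent

# Toy cohort: two events whose risk rises, two non-events whose risk falls.
old = [0.20, 0.30, 0.60, 0.70]
new = [0.40, 0.50, 0.40, 0.50]
y = [1, 1, 0, 0]
nri = continuous_nri(old, new, y)  # 2.0 (perfect reclassification)
```

A reported NRI of 50.8% means the net proportions summed to about 0.508, i.e. the new model moved risk estimates in the right direction substantially more often than in the wrong one.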
Affiliation(s)
- Jae-Seung Yun: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jaesik Kim: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Sang-Hyuk Jung: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA; Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
- Seon-Ah Cha, Seung-Hyun Ko, Yu-Bae Ahn: Division of Endocrinology and Metabolism, Department of Internal Medicine, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Hong-Hee Won: Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul, Republic of Korea
- Kyung-Ah Sohn: Department of Computer Engineering, Ajou University, Suwon, Republic of Korea; Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea
- Dokyoon Kim: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
55. Wang TY, Chen YH, Chen JT, Liu JT, Wu PY, Chang SY, Lee YW, Su KC, Chen CL. Diabetic Macular Edema Detection Using End-to-End Deep Fusion Model and Anatomical Landmark Visualization on an Edge Computing Device. Front Med (Lausanne) 2022;9:851644. PMID: 35445051; PMCID: PMC9014123; DOI: 10.3389/fmed.2022.851644.
Abstract
Purpose Diabetic macular edema (DME) is a common cause of vision impairment and blindness in patients with diabetes. However, vision loss can be prevented by regular eye examinations during primary care. This study aimed to design an artificial intelligence (AI) system to facilitate ophthalmology referrals by physicians. Methods We developed an end-to-end deep fusion model for DME classification and hard exudate (HE) detection. Based on the architecture of the fusion model, we also applied a dual model that included an independent classifier and object detector to perform these two tasks separately. We used 35,001 annotated fundus images from three hospitals in Taiwan between 2007 and 2018 to create a private dataset. The private dataset, Messidor-1, and Messidor-2 were used to assess the performance of the fusion model for DME classification and HE detection. A second object detector was trained to identify anatomical landmarks (optic disc and macula). We integrated the fusion model and the anatomical landmark detector and evaluated their performance on an edge device, i.e., a device with limited compute resources. Results For DME classification on our private testing dataset, Messidor-1, and Messidor-2, the areas under the receiver operating characteristic curve (AUC) for the fusion model were 98.1%, 95.2%, and 95.8%, the sensitivities were 96.4%, 88.7%, and 87.4%, the specificities were 90.1%, 90.2%, and 90.2%, and the accuracies were 90.8%, 90.0%, and 89.9%, respectively. In addition, the AUCs of the fusion and dual models did not differ significantly on the three datasets (p = 0.743, 0.942, and 0.114, respectively). For HE detection, the fusion model achieved a sensitivity of 79.5%, a specificity of 87.7%, and an accuracy of 86.3% on our private testing dataset; its sensitivity was higher than that of the dual model (p = 0.048). For optic disc and macula detection, the second object detector achieved accuracies of 98.4% (optic disc) and 99.3% (macula). The fusion model and the anatomical landmark detector can be deployed on a portable edge device. Conclusion This portable AI system exhibited excellent performance for DME classification and for the visualization of HE and anatomical locations. It facilitates interpretability and can serve as a clinical reference for physicians. Clinically, this system could be applied in diabetic eye screening to improve the interpretation of fundus imaging in patients with DME.
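Object detectors like the landmark detector above are commonly scored by bounding-box overlap. A minimal intersection-over-union sketch (illustrative only; the abstract reports accuracies and does not state the matching criterion the authors used; boxes are assumed as (x1, y1, x2, y2)):

```python
# Illustrative bounding-box IoU, the standard overlap score for judging
# whether a detected landmark box (e.g. optic disc or macula) matches the
# annotated box.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1/7, about 0.143
```

A detection is then typically counted as correct when its IoU with the ground-truth box exceeds a fixed threshold (0.5 is a common convention).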
Affiliation(s)
- Ting-Yuan Wang, Jung-Tzu Liu, Po-Yi Wu, Sung-Yen Chang, Ya-Wen Lee: Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Yi-Hao Chen, Jiann-Torng Chen, Ching-Long Chen: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Kuo-Chen Su: Department of Optometry, Chung Shan Medical University, Taichung, Taiwan
56. Yang D, Li M, Li W, Wang Y, Niu L, Shen Y, Zhang X, Fu B, Zhou X. Prediction of Refractive Error Based on Ultrawide Field Images With Deep Learning Models in Myopia Patients. Front Med (Lausanne) 2022;9:834281. PMID: 35433763; PMCID: PMC9007166; DOI: 10.3389/fmed.2022.834281.
Abstract
Summary Ultrawide-field fundus images can be used in deep learning models to predict the refractive error of myopic patients. The prediction error was associated with older age and greater spherical power. Purpose To explore the possibility of predicting the refractive error of myopic patients by applying deep learning models trained with ultrawide-field (UWF) images. Methods UWF fundus images were collected from the left eyes of 987 myopia patients of the Eye and ENT Hospital, Fudan University, between November 2015 and January 2019. The fundus images were all captured with the Optomap Daytona, a 200° UWF imaging device. Three deep learning models (ResNet-50, Inception-v3, Inception-ResNet-v2) were trained with the UWF images to predict refractive error. An additional 133 UWF fundus images collected after January 2021 served as an external validation dataset. The predicted refractive error was compared with the "true value" measured by subjective refraction. The mean absolute error (MAE), mean absolute percentage error (MAPE), and coefficient of determination (R2) were calculated on the test set. The Spearman rank correlation test was applied for univariate analysis, and multivariate linear regression analysis was performed on variables affecting MAE. A weighted heat map was generated by averaging the predicted weight of each pixel. Results The ResNet-50, Inception-v3, and Inception-ResNet-v2 models predicted refractive error with R2 values of 0.9562, 0.9555, and 0.9563 and MAEs of 1.72 (95% CI: 1.62-1.82), 1.75 (95% CI: 1.65-1.86), and 1.76 (95% CI: 1.66-1.86), respectively. In the three models, 29.95%, 31.47%, and 29.44% of the test set were within a prediction error of 0.75 D, and 64.97%, 64.97%, and 64.47% were within a prediction error of 2.00 D. The predicted MAE was associated with older age (P < 0.01) and greater spherical power (P < 0.01). The optic papilla and macular region had significant predictive power in the weighted heat map. Conclusions It is feasible to predict refractive error in myopic patients with deep learning models trained on UWF images, although the accuracy remains to be improved.
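The error metrics reported above can be computed as follows (an illustrative sketch with made-up refraction values in diopters, not study data):

```python
# Illustrative MAE, MAPE, and R^2 between subjective refraction ("truth")
# and model-predicted spherical equivalents, in diopters.

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot  # fraction of variance explained
    return mae, mape, r2

truth = [-2.0, -4.0, -6.0]   # hypothetical subjective refractions
pred = [-2.5, -4.0, -5.5]    # hypothetical model outputs
mae, mape, r2 = regression_metrics(truth, pred)  # mae = 1/3 D, r2 = 0.9375
```

Note that MAPE is undefined when a true value is 0 D (emmetropia), one reason MAE in diopters is the more natural headline metric for refraction.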
Affiliation(s)
- Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Weizhen Li
- School of Data Science, Fudan University, Shanghai, China
| | - Yunzhe Wang
- Shanghai Medical College, Fudan University, Shanghai, China
| | - Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Xiaoyu Zhang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
| | - Bo Fu
- School of Data Science, Fudan University, Shanghai, China
| | - Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- *Correspondence: Xingtao Zhou
| |
Collapse
|
57
|
Li Y, Zhu M, Sun G, Chen J, Zhu X, Yang J. Weakly supervised training for eye fundus lesion segmentation in patients with diabetic retinopathy. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:5293-5311. [PMID: 35430865 DOI: 10.3934/mbe.2022248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
OBJECTIVE Diabetic retinopathy is the leading cause of vision loss in working-age adults. Early screening and diagnosis can help to facilitate subsequent treatment and prevent vision loss. Deep learning has been applied in various fields of medical identification. However, current deep learning-based lesion segmentation techniques rely on a large amount of pixel-level labeled ground truth data, which limits their performance and application. In this work, we present a weakly supervised deep learning framework for eye fundus lesion segmentation in patients with diabetic retinopathy. METHODS First, an efficient segmentation algorithm based on grayscale and morphological features is proposed for rapid coarse segmentation of lesions. Then, a deep learning model named Residual-Attention Unet (RAUNet) is proposed for eye fundus lesion segmentation. Finally, a data sample of fundus images with labeled lesions and unlabeled images with coarse segmentation results is jointly used to train RAUNet to broaden the diversity of lesion samples and increase the robustness of the segmentation model. RESULTS A dataset containing 582 fundus images with labels verified by doctors, including hemorrhage (HE), microaneurysm (MA), hard exudate (EX) and soft exudate (SE), and 903 images without labels was used to evaluate the model. In the ablation test, the proposed RAUNet achieved the highest intersection over union (IOU) on the labeled dataset, and the proposed attention and residual modules both improved the IOU of the UNet benchmark. Using both the images labeled by doctors and the proposed coarse segmentation method, the weakly supervised framework based on the RAUNet architecture significantly improved the mean segmentation accuracy on the lesions by over 7%.
SIGNIFICANCE This study demonstrates that combining unlabeled medical images with coarse segmentation results can effectively improve the robustness of the lesion segmentation model and proposes a practical framework for improving the performance of medical image segmentation given limited labeled data samples.
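The intersection over union (IoU) metric used in the ablation study above is simple to compute; the sketch below is a generic binary-mask IoU in NumPy, not the authors' code.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (IoU/Jaccard) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:          # both masks empty: define as perfect agreement
        return 1.0
    inter = np.logical_and(pred, target).sum()
    return float(inter) / float(union)
```

The empty-union convention (returning 1.0 when neither mask marks any pixel) is one common choice; implementations differ on this edge case.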
Collapse
Affiliation(s)
- Yu Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Meilong Zhu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Guangmin Sun
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Jiayang Chen
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- School of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Xiaorong Zhu
- Beijing Tongren Hospital, Beijing 100730, China
- Beijing Institute of Diabetes Research, Beijing 100730, China
| | - Jinkui Yang
- Beijing Tongren Hospital, Beijing 100730, China
- Beijing Institute of Diabetes Research, Beijing 100730, China
| |
Collapse
|
58
|
Woo JH, Kim EC, Kim SM. The Current Status of Breakthrough Devices Designation in the United States and Innovative Medical Devices Designation in Korea for Digital Health Software. Expert Rev Med Devices 2022; 19:213-228. [PMID: 35255755 DOI: 10.1080/17434440.2022.2051479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION Artificial Intelligence (AI) is becoming increasingly utilized in the medical device industry as it can address unmet demands in clinical sites and provide more patient treatment options. This study aims to analyze the FDA's Breakthrough Devices Program and the MFDS's Innovative Medical Device Program, which today support regulatory science for innovative medical devices, and thereby to enable prediction of current development trends in Software as a Medical Device (SaMD) and Digital Therapeutics (DTx), which combine AI with technologies expected to reach the clinical field soon. AREAS COVERED A systematic search was conducted on the broad topic of SaMD and DTx under the FDA and MFDS programs. A parallel review of PubMed and the agencies' official websites was conducted to investigate the regulators' databases, review official press releases, and provide detailed descriptions for researchers. EXPERT OPINION The efforts of related stakeholders are needed to expand AI technology to diagnosis, prevention, and treatment technologies for diseases that are difficult to diagnose early or are classified as clinical challenges. It is important to prepare regulatory policies suited to the rapid pace of technological development and to create an environment where regulatory science can be realized by developers.
Collapse
Affiliation(s)
- Jae Hyun Woo
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea; Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea; National Institute of Medical Device Safety Information, Seoul, Republic of Korea; Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| | - Eun Cheol Kim
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea; Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea; National Institute of Medical Device Safety Information, Seoul, Republic of Korea; Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| | - Sung Min Kim
- Research Institute for Commercialization of Biomedical Convergence Technology, Seoul, Republic of Korea; Medical Device Industry Program in Graduate School, Dongguk University, Seoul, Republic of Korea; National Institute of Medical Device Safety Information, Seoul, Republic of Korea; Department of Medical Biotechnology, Dongguk University-Seoul, Seoul, Korea
| |
Collapse
|
59
|
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. [PMID: 34897234 DOI: 10.1097/opx.0000000000001845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With the growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant to scale up screening and address the shortage of ophthalmic expertise. PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations. METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. Our patient cohorts consist of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network, in Le Mans, France. Each data set was divided into a development subset and a test subset of more than 4000 examinations each. For ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset. RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the receiver operating characteristic curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. For the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248. CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography.
Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
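Comparing an algorithm to a human reader "for the same specificity," as in the study above, amounts to choosing the score threshold whose specificity matches the reader's and reporting sensitivity there. The sketch below is a generic NumPy version of that idea; the function name and the toy scores in the usage are invented for illustration, not the study's data.

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_specificity):
    """Threshold the scores so that specificity matches a target
    (e.g. a human reader's), then report sensitivity at that point."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    neg = scores[labels == 0]
    # the target-specificity quantile of the negative scores is the
    # threshold below which that fraction of negatives is rejected
    thr = np.quantile(neg, target_specificity)
    sensitivity = float((scores[labels == 1] > thr).mean())
    return sensitivity, thr
```

With finite data the achievable specificities are discrete, so the quantile-based threshold only approximates the target; ROC tooling typically interpolates between operating points instead.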
Collapse
Affiliation(s)
| | | | | | | | | | | | - Pascale Massin
- Ophtalmology Department, Lariboisière Hospital, APHP, Paris, France
| | | | | | | |
Collapse
|
60
|
Abitbol E, Miere A, Excoffier JB, Mehanna CJ, Amoroso F, Kerr S, Ortala M, Souied EH. Deep learning-based classification of retinal vascular diseases using ultra-widefield colour fundus photographs. BMJ Open Ophthalmol 2022; 7:e000924. [PMID: 35141420 PMCID: PMC8819815 DOI: 10.1136/bmjophth-2021-000924] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 01/18/2022] [Indexed: 01/01/2023] Open
Abstract
Objective To assess the ability of a deep learning model to distinguish between diabetic retinopathy (DR), sickle cell retinopathy (SCR), retinal vein occlusions (RVOs) and healthy eyes using ultra-widefield colour fundus photography (UWF-CFP). Methods and Analysis In this retrospective study, UWF-CFP images of patients with retinal vascular disease (DR, RVO, and SCR) and healthy controls were included. The images were used to train a multilayer deep convolutional neural network to differentiate on UWF-CFP between different vascular diseases and healthy controls. A total of 224 UWF-CFP images were included, of which 169 images were of retinal vascular diseases and 55 were healthy controls. A cross-validation technique was used to ensure that every image from the dataset was tested once. Established augmentation techniques were applied to enhance performance, along with an Adam optimiser for training. The visualisation method was integrated gradient visualisation. Results The best performance of the model was obtained using 10 epochs, with an overall accuracy of 88.4%. For DR, the area under the receiver operating characteristic (ROC) curve (AUC) was 90.5% and the accuracy was 85.2%. For RVO, the AUC was 91.2% and the accuracy 88.4%. For SCR, the AUC was 96.7% and the accuracy 93.8%. For healthy controls, the AUC was 88.5% with an accuracy that reached 86.2%. Conclusion Deep learning algorithms can classify several retinal vascular diseases on UWF-CFP with good accuracy. This technology may be a useful tool for telemedicine and areas with a shortage of ophthalmic care.
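The cross-validation scheme that ensures "every image from the dataset was tested once" is standard k-fold splitting; the index-level sketch below is generic, not the authors' pipeline.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs so that every sample appears in
    exactly one test fold, i.e. each image is tested exactly once."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]
```

In each of the k iterations a model is trained on the train indices and evaluated on the held-out fold; pooling the k fold-level predictions yields one test result per image.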
Collapse
Affiliation(s)
- Elie Abitbol
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | - Alexandra Miere
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | | | - Carl-Joe Mehanna
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | - Francesca Amoroso
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| | | | | | - Eric H Souied
- Department of Ophthalmology, Centre Hospitalier Intercommunal de Créteil, Creteil, France
| |
Collapse
|
61
|
End-to-end diabetic retinopathy grading based on fundus fluorescein angiography images using deep learning. Graefes Arch Clin Exp Ophthalmol 2022; 260:1663-1673. [DOI: 10.1007/s00417-021-05503-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 10/11/2021] [Accepted: 11/14/2021] [Indexed: 12/14/2022] Open
|
62
|
Zhang RH, Liu YM, Dong L, Li HY, Li YF, Zhou WD, Wu HT, Wang YX, Wei WB. Prevalence, Years Lived With Disability, and Time Trends for 16 Causes of Blindness and Vision Impairment: Findings Highlight Retinopathy of Prematurity. Front Pediatr 2022; 10:735335. [PMID: 35359888 PMCID: PMC8962664 DOI: 10.3389/fped.2022.735335] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 01/25/2022] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Cause-specific prevalence data of vision loss and blindness is fundamental for making public health policies and is essential for prioritizing scientific advances and industry research. METHODS Cause-specific vision loss data from the Global Health Data Exchange was used. The burden of vision loss was measured by prevalence and years lived with disability (YLDs). FINDINGS In 2019, uncorrected refractive error and cataract were the most common causes of vision loss and blindness globally. Women have higher rates of cataract, age-related macular degeneration (AMD), and diabetic retinopathy (DR) than men. In the past 30 years, the prevalence of moderate/severe vision loss and blindness due to neonatal disorders has increased by 13.73 and 33.53%, respectively. Retinopathy of prematurity (ROP) is the major cause of neonatal disorder-related vision loss. In 2019, ROP caused 101.6 thousand [95% uncertainty interval (UI) 77.5-128.2] cases of vision impairment, including 49.1 thousand (95% UI 28.1-75.1) moderate vision loss, 27.5 thousand (95% UI 19.3-36.6) severe vision loss, and 25.0 thousand (95% UI 14.6-35.8) blindness. The prevalence of new-onset ROP in Africa and East Asia was significantly higher than in other regions. Variation in preterm birth prevalence can explain 49.8% of the geographic variation in ROP-related vision loss burden among 204 countries and territories. After adjusting for preterm prevalence, government health spending per total health spending (%), rather than total health spending per person, was associated with a reduced burden of ROP-related vision loss in 2019 (-0.19 YLDs per 10% increment). By 2050, the prevalence of moderate vision loss, severe vision loss, and blindness due to ROP is expected to reach 43.6 (95% UI 35.1-52.0), 23.2 (95% UI 19.4-27.1), and 31.9 (95% UI 29.7-34.1) per 100,000 population, respectively. CONCLUSION The global burden of vision loss and blindness highlights the prevalence of ROP, a major and avoidable cause of childhood vision loss. Advanced screening techniques and treatments have been shown to be effective in preventing ROP-related vision loss and are urgently needed in regions with high ROP-related blindness rates, including Africa and East Asia.
Collapse
Affiliation(s)
- Rui-Heng Zhang
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yue-Ming Liu
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - He-Yan Li
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Yi-Fan Li
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen-Da Zhou
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Hao-Tian Wu
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Ya-Xing Wang
- Beijing Institute of Ophthalmology and Beijing Ophthalmology and Visual Science Key Lab, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen-Bin Wei
- Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
63
|
Nancy W, Celine Kavida A. Optimized Ensemble Machine Learning-Based Diabetic Retinopathy Grading Using Multiple Region of Interest Analysis and Bayesian Approach. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2022. [DOI: 10.1166/jmihi.2022.3923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Diabetic Retinopathy (DR) is a critical abnormality in the retina mainly caused by diabetes. The early diagnosis of DR is essential to avoid painless blindness. The conventional DR diagnosis is manual and requires skilled ophthalmologists, and their analyses are subject to inconsistency and record-maintenance issues. Hence, there is a need for other DR diagnosis methods. In this paper, we proposed an AdaBoost algorithm-based ensemble classification approach to classify DR grades. The major objective of the proposed approach is to enhance DR classification performance by using optimized features and ensemble machine learning techniques. The proposed method classifies different grades of DR using the Meyer wavelet and retinal vessel-based features extracted from multiple regions of interest of the retina. To improve the predictive accuracy, we used a Bayesian algorithm to optimize the hyper-parameters of the proposed ensemble classifier. The proposed DR grading model was constructed and evaluated using the MESSIDOR fundus image dataset. In the evaluation experiments, the classification outcome of the proposed approach was assessed by confusion matrix and receiver operating characteristic (ROC) based metrics. The experiments show that the proposed approach attained 99.2% precision, 98.2% recall, 99% accuracy, and 0.99 AUC. The findings also indicate that the proposed approach's classification outcome is significantly better than that of state-of-the-art DR classification methods.
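The AdaBoost ensemble idea behind the approach above can be illustrated with a minimal decision-stump version in NumPy. This is textbook AdaBoost on an invented 1-D toy dataset, not the authors' wavelet-feature, Bayesian-tuned classifier; the hyper-parameters their Bayesian search would tune (e.g. the number of boosting rounds) appear here as plain arguments.

```python
import numpy as np

def fit_stump(x, y, w):
    """Best weighted decision stump on 1-D features (labels in {-1, +1})."""
    best = None
    for thr in np.unique(x):
        for sign in (1, -1):
            pred = np.where(x > thr, sign, -sign)
            err = w[pred != y].sum()
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(x, y, rounds=10):
    """Plain AdaBoost: reweight samples toward those the last stump missed."""
    w = np.full(len(x), 1.0 / len(x))
    model = []
    for _ in range(rounds):
        err, thr, sign = fit_stump(x, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this stump
        pred = np.where(x > thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
        w /= w.sum()
        model.append((alpha, thr, sign))
    return model

def predict(model, x):
    return np.sign(sum(a * np.where(x > t, s, -s) for a, t, s in model))
```

Even on labels no single stump can fit (an interval pattern such as `[1, 1, -1, -1, 1, 1]`), a few boosting rounds combine stumps into a correct classifier.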
Collapse
Affiliation(s)
- W. Nancy
- Department of Electronics and Communication Engineering, Jeppiaar Institute of Technology, Chennai 631604, India
| | - A. Celine Kavida
- Department of Physics, Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Chennai 600062, India
| |
Collapse
|
64
|
AIM in Endocrinology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
|
65
|
Ma Y, Xiong J, Zhu Y, Ge Z, Hua R, Fu M, Li C, Wang B, Dong L, Zhao X, Chen J, Rong C, He C, Chen Y, Wang Z, Wei W, Xie W, Wu Y. Deep learning algorithm using fundus photographs for 10-year risk assessment of ischemic cardiovascular diseases in China. Sci Bull (Beijing) 2022; 67:17-20. [PMID: 36545953 DOI: 10.1016/j.scib.2021.08.016] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/13/2021] [Accepted: 08/23/2021] [Indexed: 01/06/2023]
Affiliation(s)
- Yanjun Ma
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Jianhao Xiong
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Yidan Zhu
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Zongyuan Ge
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Rong Hua
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Meng Fu
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Chenglong Li
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing 100005, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Jili Chen
- Shibei Hospital, Shanghai 200435, China
| | - Ce Rong
- iKang Guobin Healthcare Group Co., Ltd., Beijing 100022, China
| | - Chao He
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing 100081, China
| | - Zhaohui Wang
- iKang Guobin Healthcare Group Co., Ltd., Beijing 100022, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing 100005, China
| | - Wuxiang Xie
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China.
| | - Yangfeng Wu
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China; PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing 100191, China; Key Laboratory of Molecular Cardiovascular Sciences (Peking University), Ministry of Education, Beijing 100191, China.
| |
Collapse
|
66
|
Shah PM, Ullah F, Shah D, Gani A, Maple C, Wang Y, Abrar M, Islam SU. Deep GRU-CNN Model for COVID-19 Detection From Chest X-Rays Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2022; 10:35094-35105. [PMID: 35582498 PMCID: PMC9088790 DOI: 10.1109/access.2021.3077592] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2021] [Accepted: 04/20/2021] [Indexed: 05/03/2023]
Abstract
In the current era, data is growing exponentially due to advancements in smart devices. Data scientists apply a variety of learning-based techniques to identify underlying patterns in the medical data to address various health-related issues. In this context, automated disease detection has now become a central concern in medical science. Such approaches can reduce the mortality rate through accurate and timely diagnosis. COVID-19 is a modern virus that has spread all over the world and is affecting millions of people. Many countries are facing a shortage of testing kits, vaccines, and other resources due to significant and rapid growth in cases. In order to accelerate the testing process, scientists around the world have sought to create novel methods for the detection of the virus. In this paper, we propose a hybrid deep learning model based on a convolutional neural network (CNN) and gated recurrent unit (GRU) to detect the viral disease from chest X-rays (CXRs). In the proposed model, a CNN is used to extract features, and a GRU is used as a classifier. The model has been trained on 424 CXR images with 3 classes (COVID-19, Pneumonia, and Normal). The proposed model achieves encouraging results of 0.96, 0.96, and 0.95 in terms of precision, recall, and f1-score, respectively. These findings indicate how deep learning can significantly contribute to the early detection of COVID-19 in patients through the analysis of X-ray scans. Such indications can pave the way to mitigate the impact of the disease. We believe that this model can be an effective tool for medical practitioners for early diagnosis.
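The GRU half of the hybrid model above is a gated recurrent cell that summarizes a sequence of feature vectors (here, CNN features) into a hidden state. The sketch below is a textbook GRU forward pass in NumPy with random untrained weights and no biases, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: forward pass only, untrained random weights."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # row 0: update gate z, row 1: reset gate r, row 2: candidate state
        self.W = rng.uniform(-s, s, (3, hidden_size, input_size))
        self.U = rng.uniform(-s, s, (3, hidden_size, hidden_size))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h)              # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h)              # reset gate
        h_cand = np.tanh(self.W[2] @ x + self.U[2] @ (r * h))   # candidate
        return (1 - z) * h + z * h_cand   # convex mix of old and candidate
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden activations stay in (-1, 1); a classifier head over the final state would complete the GRU-as-classifier design the abstract describes.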
Collapse
Affiliation(s)
- Pir Masoom Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Faizan Ullah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Dilawar Shah
- Department of Computer Science, Bacha Khan University, Charsadda 24000, Pakistan
| | - Abdullah Gani
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Faculty of Computing and Informatics, University Malaysia Sabah, Labuan 88400, Malaysia
| | - Carsten Maple
- Secure Cyber Systems Research Group, WMG, University of Warwick, Coventry CV4 7AL, U.K.
- Alan Turing Institute, London NW1 2DB, U.K.
| | - Yulin Wang
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Mohammad Abrar
- Department of Computer Science, Mohi-ud-Din Islamic University, Nerian Sharif 12080, Pakistan
| | - Saif Ul Islam
- Department of Computer Science, Institute of Space Technology, Islamabad 44000, Pakistan
| |
Collapse
|
67
|
Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2021; 90:101034. [PMID: 34902546 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder: there is a pressing need for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Collapse
|
68
|
Choi KJ, Choi JE, Roh HC, Eun JS, Kim JM, Shin YK, Kang MC, Chung JK, Lee C, Lee D, Kang SW, Cho BH, Kim SJ. Deep learning models for screening of high myopia using optical coherence tomography. Sci Rep 2021; 11:21663. [PMID: 34737335 PMCID: PMC8568935 DOI: 10.1038/s41598-021-00622-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 10/13/2021] [Indexed: 12/02/2022] Open
Abstract
This study aimed to validate and evaluate deep learning (DL) models for screening of high myopia using spectral-domain optical coherence tomography (OCT). This retrospective cross-sectional study included 690 eyes in 492 patients with OCT images and axial length measurement. Eyes were divided into three groups based on axial length: a “normal group,” a “high myopia group,” and an “other retinal disease” group. The researchers trained and validated three DL models to classify the three groups based on horizontal and vertical OCT images of the 600 eyes. For evaluation, OCT images of 90 eyes were used. Diagnostic agreements of human doctors and DL models were analyzed. The area under the receiver operating characteristic curve of the three DL models was evaluated. Absolute agreement of retina specialists was 99.11% (range: 97.78–100%). Absolute agreement of the DL models with multiple-column model was 100.0% (ResNet 50), 90.0% (Inception V3), and 72.22% (VGG 16). Areas under the receiver operating characteristic curves of the DL models with multiple-column model were 0.99 (ResNet 50), 0.97 (Inception V3), and 0.86 (VGG 16). The DL model based on ResNet 50 showed comparable diagnostic performance with retinal specialists. The DL model using OCT images demonstrated reliable diagnostic performance to identify high myopia.
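The AUC values compared above can be computed without tracing an ROC curve, via the rank (Mann-Whitney) formulation; the sketch below is a generic NumPy version, not the study's evaluation code.

```python
import numpy as np

def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive example is scored higher than a random negative one
    (ties count as half)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

The pairwise comparison is O(n_pos * n_neg); production metrics libraries use a sort-based O(n log n) equivalent, but the result is the same.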
Affiliation(s)
- Kyung Jun Choi
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Jung Eun Choi
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Hyeon Cheol Roh
- Department of Ophthalmology, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Republic of Korea
| | - Jun Soo Eun
- Department of Ophthalmology, Gil Medical Center, Gachon University, Incheon, Republic of Korea
| | | | - Yong Kyun Shin
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Joon Kyo Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Chaeyeon Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Dongyoung Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Se Woong Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea
| | - Baek Hwan Cho
- Medical AI Research Center, Samsung Medical Center, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea. .,Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, 06351, Republic of Korea.
| | - Sang Jin Kim
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, #81 Irwon-ro, Gangnam-gu, Seoul, 06351, Republic of Korea.
| |
|
69
|
Jahangir S, Khan HA. Artificial intelligence in ophthalmology and visual sciences: Current implications and future directions. Artif Intell Med Imaging 2021; 2:95-103. [DOI: 10.35711/aimi.v2.i5.95] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/30/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Since its inception in 1959, artificial intelligence (AI) has evolved at an unprecedented rate and has revolutionized the world of medicine. Ophthalmology, being an image-driven field of medicine, is well-suited for the implementation of AI. Machine learning (ML) and deep learning (DL) models are being utilized for screening of vision-threatening ocular conditions. These models have proven to be accurate and reliable for diagnosing anterior and posterior segment diseases, screening large populations, and even predicting the natural course of various ocular morbidities. With the increase in population and the global burden of managing irreversible blindness, AI offers a unique solution when implemented in clinical practice. In this review, we discuss what AI, ML, and DL are, how they are used, future directions for AI, and its limitations in ophthalmology.
Affiliation(s)
- Smaha Jahangir
- School of Optometry, The University of Faisalabad, Faisalabad, Punjab 38000, Pakistan
| | - Hashim Ali Khan
- Department of Ophthalmology, SEHHAT Foundation, Gilgit 15100, Gilgit-Baltistan, Pakistan
| |
|
70
|
Sharmila C, Shanthi N. An Effective Approach Based on Deep Residual Google Net Convolutional Neural Network Classifier for the Detection of Glaucoma. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Glaucoma is a disease caused by fluid pressure build-up in the inner eye. Early detection of glaucoma is critical, as an estimated 111.8 million people worldwide will suffer from the disease by 2040, and machine learning methods are highly promising for its diagnosis. This paper presents one such machine learning method. First, human retinal fundus images are preprocessed by histogram equalization to enhance them. Segmentation is performed with a semantic segmentation method, and features are extracted using a density- and correlation-based feature extraction approach. Principal component analysis (PCA) is used to select the most informative features. Finally, a Deep Residual GoogLeNet CNN classifier, designed to recognize visual patterns from pixel images with minimal preprocessing, classifies the retinal image as normal or abnormal. The ORIGA and STARE datasets are used in this work. The findings are analyzed and contrasted with alternative current techniques to illustrate the efficacy of the new method: a test accuracy of 99%, specificity of 98.9%, and sensitivity of 100% were achieved. The quantitative results are analyzed for sensitivity, specificity, accuracy, positive predictive rate, and false predictive rate, and provide excellent outcomes compared with traditional methods.
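The pipeline summarized above opens with histogram equalization of the fundus image. A minimal grayscale sketch of that preprocessing step (a generic textbook implementation, not the authors' code) might look like:

```python
def equalize_histogram(pixels, levels=256):
    """Map grayscale values through the normalized cumulative
    histogram so intensities spread over the full dynamic range."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    # cumulative distribution function over intensity levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied level
    n = len(pixels)
    scale = (levels - 1) / max(n - cdf_min, 1)
    return [round((cdf[v] - cdf_min) * scale) for v in pixels]

# A low-contrast strip of values is stretched toward 0..255
print(equalize_histogram([100, 100, 101, 102]))  # -> [0, 0, 128, 255]
```

In practice a library routine (e.g. OpenCV's `equalizeHist`) would be applied per channel or to the luminance plane of the fundus photograph; the list-based version here just makes the CDF remapping explicit.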
Affiliation(s)
- C. Sharmila
- Information Technology, Excel Engineering College, Komarapalayam, Namakkal 637303, India
| | - N. Shanthi
- Computer Science Engineering, Kongu Engineering College, Perundurai, Erode 638060, India
| |
|
71
|
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched for studies reporting a comparison between the glaucoma-diagnosis performance of deep learning and of ophthalmologists on fundus examinations on the same datasets, until 10 December 2020. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence intervals [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning performed similarly to ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
| | - Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France.,Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.,Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
| | - Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia.,Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
| | - Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
| | - Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
| |
|
72
|
Abstract
PURPOSE OF REVIEW Systemic retinal biomarkers are biomarkers identified in the retina and related to evaluation and management of systemic disease. This review summarizes the background, categories and key findings from this body of research as well as potential applications to clinical care. RECENT FINDINGS Potential systemic retinal biomarkers for cardiovascular disease, kidney disease and neurodegenerative disease were identified using regression analysis as well as more sophisticated image processing techniques. Deep learning techniques were used in a number of studies predicting diseases including anaemia and chronic kidney disease. A virtual coronary artery calcium score performed well against other competing traditional models of event prediction. SUMMARY Systemic retinal biomarker research has progressed rapidly using regression studies with clearly identified biomarkers such as retinal microvascular patterns, as well as using deep learning models. Future systemic retinal biomarker research may be able to boost performance using larger data sets, the addition of meta-data and higher resolution image inputs.
|
73
|
Lin D, Xiong J, Liu C, Zhao L, Li Z, Yu S, Wu X, Ge Z, Hu X, Wang B, Fu M, Zhao X, Wang X, Zhu Y, Chen C, Li T, Li Y, Wei W, Zhao M, Li J, Xu F, Ding L, Tan G, Xiang Y, Hu Y, Zhang P, Han Y, Li JPO, Wei L, Zhu P, Liu Y, Chen W, Ting DSW, Wong TY, Chen Y, Lin H. Application of Comprehensive Artificial intelligence Retinal Expert (CARE) system: a national real-world evidence study. LANCET DIGITAL HEALTH 2021; 3:e486-e495. [PMID: 34325853 DOI: 10.1016/s2589-7500(21)00086-8] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 04/21/2021] [Accepted: 05/07/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Medical artificial intelligence (AI) has entered the clinical implementation phase, although real-world performance of deep-learning systems (DLSs) for screening fundus disease remains unsatisfactory. Our study aimed to train a clinically applicable DLS for fundus diseases using data derived from the real world, and externally test the model using fundus photographs collected prospectively from the settings in which the model would most likely be adopted. METHODS In this national real-world evidence study, we trained a DLS, the Comprehensive AI Retinal Expert (CARE) system, to identify the 14 most common retinal abnormalities using 207 228 colour fundus photographs derived from 16 clinical settings with different disease distributions. CARE was internally validated using 21 867 photographs and externally tested using 18 136 photographs prospectively collected from 35 real-world settings across China where CARE might be adopted, including eight tertiary hospitals, six community hospitals, and 21 physical examination centres. The performance of CARE was further compared with that of 16 ophthalmologists and tested using datasets with non-Chinese ethnicities and previously unused camera types. This study was registered with ClinicalTrials.gov, NCT04213430, and is currently closed. FINDINGS The area under the receiver operating characteristic curve (AUC) in the internal validation set was 0·955 (SD 0·046). AUC values in the external test set were 0·965 (0·035) in tertiary hospitals, 0·983 (0·031) in community hospitals, and 0·953 (0·042) in physical examination centres. The performance of CARE was similar to that of ophthalmologists. Large variations in sensitivity were observed among the ophthalmologists in different regions and with varying experience. The system retained strong identification performance when tested using the non-Chinese dataset (AUC 0·960, 95% CI 0·957-0·964 in referable diabetic retinopathy). 
INTERPRETATION Our DLS (CARE) showed satisfactory performance for screening multiple retinal abnormalities in real-world settings using prospectively collected fundus photographs, and so could allow the system to be implemented and adopted for clinical care. FUNDING This study was funded by the National Key R&D Programme of China, the Science and Technology Planning Projects of Guangdong Province, the National Natural Science Foundation of China, the Natural Science Foundation of Guangdong Province, and the Fundamental Research Funds for the Central Universities. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
Affiliation(s)
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Jianhao Xiong
- Beijing Eaglevision Technology Development, Beijing, China
| | - Congxin Liu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zongyuan Ge
- Department of Electrical and Computer Systems Engineering, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
| | - Xinyue Hu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Bin Wang
- Beijing Eaglevision Technology Development, Beijing, China
| | - Meng Fu
- Beijing Eaglevision Technology Development, Beijing, China
| | - Xin Zhao
- Beijing Eaglevision Technology Development, Beijing, China
| | - Xin Wang
- Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
| | - Tao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yonghao Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Wenbin Wei
- Beijing Tongren Eye Centre, Beijing Key Laboratory of Intraocular Tumour Diagnosis and Treatment, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Mingwei Zhao
- Department of Ophthalmology, Ophthalmology and Optometry Centre, Peking University People's Hospital, Beijing, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan, Shandong, China
| | - Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
| | - Gang Tan
- Department of Ophthalmology, University of South China, Hengyang, Hunan, China
| | - Yi Xiang
- Department of Ophthalmology, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Yongcheng Hu
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
| | - Ping Zhang
- Bayannur Paralympic Eye Hospital, Bayannur, Inner Mongolia, China
| | - Yu Han
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | | | - Lai Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Pengzhi Zhu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, Guangdong, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Daniel S W Ting
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Yuzhong Chen
- Beijing Eaglevision Technology Development, Beijing, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, Guangdong, China; Centre for Precision Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
|
74
|
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 07/23/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We present a clinical overview of the basic principles of AI that are fundamental to appreciating its application in ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology in ophthalmology.
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy
| | | | | | | |
|
75
|
Cen LP, Ji J, Lin JW, Ju ST, Lin HJ, Li TP, Wang Y, Yang JF, Liu YF, Tan S, Tan L, Li D, Wang Y, Zheng D, Xiong Y, Wu H, Jiang J, Wu Z, Huang D, Shi T, Chen B, Yang J, Zhang X, Luo L, Huang C, Zhang G, Huang Y, Ng TK, Chen H, Chen W, Pang CP, Zhang M. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun 2021; 12:4828. [PMID: 34376678 PMCID: PMC8355164 DOI: 10.1038/s41467-021-25138-w] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 07/22/2021] [Indexed: 02/05/2023] Open
Abstract
Retinal fundus diseases can lead to irreversible visual impairment without timely diagnosis and appropriate treatment. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996 and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset, reaching the average level of retina specialists. External multi-hospital testing, public-data testing and a tele-reading application also showed high efficiency in detecting multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
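The frequency-weighted average F1 score quoted above weights each class's F1 by its share of the labels, so common conditions dominate the average. An illustrative sketch with hypothetical counts (not the study's data):

```python
def f1(tp, fp, fn):
    """Harmonic mean of precision and recall for a single class."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def weighted_f1(per_class):
    """per_class: list of (support, tp, fp, fn) tuples; each class's
    F1 is weighted by its share of the total label count."""
    total = sum(s for s, *_ in per_class)
    return sum(s * f1(tp, fp, fn) for s, tp, fp, fn in per_class) / total

# Two hypothetical classes: a common one detected well, a rare one poorly.
classes = [(90, 85, 5, 5), (10, 5, 2, 5)]
print(round(weighted_f1(classes), 3))  # -> 0.909
```

The weighting explains why a platform can score highly overall while still missing rare classes; a macro (unweighted) average would penalize the rare class's 0.59 F1 much more.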
Affiliation(s)
- Ling-Ping Cen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jie Ji
- Network & Information Centre, Shantou University, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
| | - Jian-Wei Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Si-Tong Ju
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Hong-Jie Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tai-Ping Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yun Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jian-Feng Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yu-Fen Liu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Shaoying Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Li Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dongjie Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yifan Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dezhi Zheng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yongqun Xiong
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Hanfu Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jingjing Jiang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Zhenggen Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Dingguo Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tingkun Shi
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Binyao Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Jianling Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Xiaoling Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Li Luo
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Chukai Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Guihua Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Yuqiang Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Tsz Kin Ng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Haoyu Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Weiqi Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
| | - Chi Pui Pang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Mingzhi Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China.
| |
|
76
|
Avilés-Rodríguez GJ, Nieto-Hipólito JI, Cosío-León MDLÁ, Romo-Cárdenas GS, Sánchez-López JDD, Radilla-Chávez P, Vázquez-Briseño M. Topological Data Analysis for Eye Fundus Image Quality Assessment. Diagnostics (Basel) 2021; 11:1322. [PMID: 34441257 PMCID: PMC8394537 DOI: 10.3390/diagnostics11081322] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 11/29/2022] Open
Abstract
The objective of this work is to perform image quality assessment (IQA) of eye fundus images in the context of digital fundoscopy with topological data analysis (TDA) and machine learning methods. Eye care remains inaccessible to a large portion of the global population, and digital tools that automate the eye exam could help address this issue. IQA is a fundamental step in digital fundoscopy for clinical applications; it is one of the first steps in the preprocessing stages of computer-aided diagnosis (CAD) systems using eye fundus images. Images from the EyePACS dataset were used, with quality labels drawn from previous works in the literature. Cubical complexes were used to represent the images; persistent homology was then computed on the grayscale version of each image and represented with persistence diagrams. From each image, 30 vectorized topological descriptors were calculated and used as input to a classification algorithm. Six algorithms were tested for this study (SVM, decision tree, k-NN, random forest, logistic regression (LoGit), MLP); LoGit was selected and used for the classification of all images, given its low computational cost. Performance results on the validation subset showed a global accuracy of 0.932, precision of 0.912 for the label "quality" and 0.952 for the label "no quality", recall of 0.932 for the label "quality" and 0.912 for the label "no quality", AUC of 0.980, F1 score of 0.932, and a Matthews correlation coefficient of 0.864. This work offers evidence for the use of topological methods in the quality assessment of eye fundus images, where a relatively small vector of characteristics (30 in this case) can carry enough information for an algorithm to yield classification results useful in the clinical settings of a digital fundoscopy pipeline for CAD.
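The performance figures above (accuracy, precision, recall, and the Matthews correlation coefficient) all derive from the binary confusion matrix. A compact sketch computing them for illustrative counts (not the study's actual confusion matrix):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and Matthews correlation
    coefficient (MCC) from raw confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return acc, prec, rec, mcc

# Balanced hypothetical counts: 45 true positives, 5 errors each way
acc, prec, rec, mcc = binary_metrics(tp=45, fp=5, fn=5, tn=45)
print(round(acc, 2), round(mcc, 2))  # -> 0.9 0.8
```

MCC is the strictest of the four: it only approaches 1 when both classes are predicted well, which is why it is a useful companion to accuracy on imbalanced "quality" vs. "no quality" labels.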
Affiliation(s)
- Gener José Avilés-Rodríguez
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan Iván Nieto-Hipólito
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - María de los Ángeles Cosío-León
- Dirección de Investigación, Innovación y Posgrado, Universidad Politécnica de Pachuca, Carretera Ciudad Sahagún-Pachuca Km. 20, Ex-Hacienda de Santa Bárbara, Hidalgo 43830, Mexico;
| | - Gerardo Salvador Romo-Cárdenas
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Juan de Dios Sánchez-López
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| | - Patricia Radilla-Chávez
- Escuela de Ciencias de la Salud, Universidad Autónoma de Baja California, Carretera Transpeninsular S/N, Valle Dorado, Ensenada 22890, Mexico;
| | - Mabel Vázquez-Briseño
- Facultad de Ingeniería Arquitectura y Diseño, Universidad Autónoma de Baja California, Carretera Transpeninsular Ensenada-Tijuana #3917, Playitas, Ensenada 22860, Mexico; (G.S.R.-C.); (J.d.D.S.-L.); (M.V.-B.)
| |
|
77
|
Nakahara K, Asaoka R, Tanito M, Shibata N, Mitsuhashi K, Fujino Y, Matsuura M, Inoue T, Azuma K, Obata R, Murata H. Deep learning-assisted (automatic) diagnosis of glaucoma using a smartphone. Br J Ophthalmol 2021; 106:587-592. [PMID: 34261663 DOI: 10.1136/bjophthalmol-2020-318107] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 01/07/2021] [Indexed: 11/04/2022]
Abstract
BACKGROUND/AIMS To validate a deep learning algorithm for diagnosing glaucoma from fundus photography obtained with a smartphone. METHODS A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects. In the testing dataset, fundus photographs were acquired using both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset. The trained neural network was then evaluated on its ability to classify the test images, from both camera types, as glaucomatous or normal. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC). RESULTS The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < -12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. There were significant differences between the AROC values obtained with the two cameras. CONCLUSION The usefulness of a deep learning algorithm to automatically screen for glaucoma from smartphone-based fundus photographs was validated. The algorithm had considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
Affiliation(s)
- Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan; Seirei Christopher University, Shizuoka, Hamamatsu, Japan; Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan; The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Japan; Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan; Department of Ophthalmology, University of Tokyo, Tokyo, Japan; Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Masato Matsuura
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Tatsuya Inoue
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan; Department of Ophthalmology and Microtechnology, Yokohama City University School of Medicine, Kanagawa, Japan
- Keiko Azuma
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Ryo Obata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
- Hiroshi Murata
- Department of Ophthalmology, University of Tokyo, Tokyo, Japan
78
Wu JH, Liu TYA, Hsu WT, Ho JHC, Lee CC. Performance and Limitation of Machine Learning Algorithms for Diabetic Retinopathy Screening: Meta-analysis. J Med Internet Res 2021; 23:e23863. [PMID: 34407500 PMCID: PMC8406115 DOI: 10.2196/23863] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 11/19/2020] [Accepted: 04/30/2021] [Indexed: 12/23/2022] Open
Abstract
Background Diabetic retinopathy (DR), whose standard diagnosis is performed by human experts, has high prevalence and requires a more efficient screening method. Although machine learning (ML)–based automated DR diagnosis has gained attention due to recent approval of IDx-DR, performance of this tool has not been examined systematically, and the best ML technique for use in a real-world setting has not been discussed. Objective The aim of this study was to systematically examine the overall diagnostic accuracy of ML in diagnosing DR of different categories based on color fundus photographs and to determine the state-of-the-art ML approach. Methods Published studies in PubMed and EMBASE were searched from inception to June 2020. Studies were screened for relevant outcomes, publication types, and data sufficiency, and a total of 60 out of 2128 (2.82%) studies were retrieved after study selection. Extraction of data was performed by 2 authors according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), and the quality assessment was performed according to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). Meta-analysis of diagnostic accuracy was pooled using a bivariate random effects model. The main outcomes included diagnostic accuracy, sensitivity, and specificity of ML in diagnosing DR based on color fundus photographs, as well as the performances of different major types of ML algorithms. Results The primary meta-analysis included 60 color fundus photograph studies (445,175 interpretations). Overall, ML demonstrated high accuracy in diagnosing DR of various categories, with a pooled area under the receiver operating characteristic curve (AUROC) ranging from 0.97 (95% CI 0.96-0.99) to 0.99 (95% CI 0.98-1.00).
The performance of ML in detecting more-than-mild DR was robust (sensitivity 0.95; AUROC 0.97), and by subgroup analyses, we observed that robust performance of ML was not limited to benchmark data sets (sensitivity 0.92; AUROC 0.96) but could be generalized to images collected in clinical practice (sensitivity 0.97; AUROC 0.97). Neural networks were the most widely used method, and the subgroup analysis revealed a pooled AUROC of 0.98 (95% CI 0.96-0.99) for studies that used neural networks to diagnose more-than-mild DR. Conclusions This meta-analysis demonstrated high diagnostic accuracy of ML algorithms in detecting DR on color fundus photographs, suggesting that state-of-the-art, ML-based DR screening algorithms are likely ready for clinical applications. However, a significant portion of the earlier published studies had methodology flaws, such as the lack of external validation and presence of spectrum bias. The results of these studies should be interpreted with caution.
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, United States
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, The Johns Hopkins Medicine, Baltimore, MD, United States
- Wan-Ting Hsu
- Harvard TH Chan School of Public Health, Boston, MA, United States
- Chien-Chang Lee
- Health Data Science Research Group, National Taiwan University Hospital, Taipei, Taiwan; The Centre for Intelligent Healthcare, National Taiwan University Hospital, Taipei, Taiwan; Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
79
Yellapragada B, Hornauer S, Snyder K, Yu S, Yiu G. Self-Supervised Feature Learning and Phenotyping for Assessing Age-Related Macular Degeneration Using Retinal Fundus Images. Ophthalmol Retina 2021; 6:116-129. [PMID: 34217854 DOI: 10.1016/j.oret.2021.06.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 06/24/2021] [Accepted: 06/25/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Diseases such as age-related macular degeneration (AMD) are classified based on human rubrics that are prone to bias. Supervised neural networks trained using human-generated labels require labor-intensive annotations and are restricted to specific trained tasks. Here, we trained a self-supervised deep learning network using unlabeled fundus images, enabling data-driven feature classification of AMD severity and discovery of ocular phenotypes. DESIGN Development of a self-supervised training pipeline to evaluate fundus photographs from the Age-Related Eye Disease Study (AREDS). PARTICIPANTS One hundred thousand eight hundred forty-eight human-graded fundus images from 4757 AREDS participants between 55 and 80 years of age. METHODS We trained a deep neural network with self-supervised Non-Parametric Instance Discrimination (NPID) using AREDS fundus images without labels, then evaluated its performance in grading AMD severity under 2-step, 4-step, and 9-step classification schemes with a supervised classifier. We compared balanced and unbalanced accuracies of NPID against supervised-trained networks and ophthalmologists, explored network behavior using hierarchical learning of image subsets and spherical k-means clustering of feature vectors, then searched for ocular features that can be identified without labels. MAIN OUTCOME MEASURES Accuracy and kappa statistics. RESULTS NPID demonstrated versatility across different AMD classification schemes without re-training and achieved balanced accuracies comparable with those of supervised-trained networks or human ophthalmologists in classifying advanced AMD (82% vs. 81-92% or 89%), referable AMD (87% vs. 90-92% or 96%), or on the 4-step AMD severity scale (65% vs. 63-75% or 67%), despite never directly using these labels during self-supervised feature learning. Drusen area drove network predictions on the 4-step scale, while depigmentation and geographic atrophy (GA) areas correlated with advanced AMD classes.
Self-supervised learning revealed grader-mislabeled images and susceptibility of some classes within more granular AMD scales to misclassification by both ophthalmologists and neural networks. Importantly, self-supervised learning enabled data-driven discovery of AMD features such as GA and other ocular phenotypes of the choroid (e.g., tessellated or blonde fundi), vitreous (e.g., asteroid hyalosis), and lens (e.g., nuclear cataracts) that were not predefined by human labels. CONCLUSIONS Self-supervised learning enables AMD severity grading comparable with that of ophthalmologists and supervised networks, reveals biases of human-defined AMD classification systems, and allows unbiased, data-driven discovery of AMD and non-AMD ocular phenotypes.
Affiliation(s)
- Baladitya Yellapragada
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California; Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Sascha Hornauer
- International Computer Science Institute, Berkeley, California
- Kiersten Snyder
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
- Stella Yu
- Department of Vision Science, University of California, Berkeley, Berkeley, California; International Computer Science Institute, Berkeley, California
- Glenn Yiu
- Department of Ophthalmology & Vision Science, University of California, Davis, Sacramento, California
80
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye (Lond) 2021; 36:1433-1441. [PMID: 34211137 DOI: 10.1038/s41433-021-01552-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Revised: 03/24/2021] [Accepted: 04/13/2021] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVES To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, a multiple improved Inception-v4 ensembling approach was developed. We measured the algorithm's performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on the model's performance. Further, the trade-off between the training/inference time budget and model performance was analyzed. RESULTS On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989-0.995), corresponding to a sensitivity of 0.925 (95% CI, 0.916-0.936) and a specificity of 0.961 (95% CI, 0.950-0.972) for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936, and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a 0.930 (95% CI, 0.919-0.941) sensitivity and 0.971 (95% CI, 0.965-0.978) specificity, whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946, and specificities ranging between 0.926 and 0.985. CONCLUSION This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, and had good robustness and generalization, which could potentially help support and expand DR/DMO screening programs.
81
Soans RS, Grillini A, Saxena R, Renken RJ, Gandhi TK, Cornelissen FW. Eye-Movement-Based Assessment of the Perceptual Consequences of Glaucomatous and Neuro-Ophthalmological Visual Field Defects. Transl Vis Sci Technol 2021; 10:1. [PMID: 34003886 PMCID: PMC7873497 DOI: 10.1167/tvst.10.2.1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Assessing the presence of visual field defects (VFD) through procedures such as perimetry is an essential aspect of the management and diagnosis of ocular disorders. However, even the latest perimetric methods have shortcomings: they are cognitively demanding and require prolonged stable fixation as well as feedback through a button response. Consequently, an approach using eye movements (EM), as a natural response, has been proposed as an alternative way to evaluate the presence of VFD. This approach has given good results for computer-simulated VFD, but its use in patients is not well documented yet. Here we use this new approach to quantify the spatiotemporal properties (STP) of EM in patients with glaucomatous and neuro-ophthalmological VFD and in controls. Methods In total, 15 glaucoma patients, 37 patients with a neuro-ophthalmological disorder, and 21 controls performed a visual tracking task while their EM were recorded. Subsequently, the STP of EM were quantified using a cross-correlogram analysis. Decision trees were used to identify the relevant STP and classify the populations. Results We achieved a classification accuracy of 94.5% (TPR/sensitivity = 96%, TNR/specificity = 90%) between patients and controls. Individually, the algorithm achieved an accuracy of 86.3% (TPR for neuro-ophthalmology [97%], glaucoma [60%], and controls [86%]). The STP of EM were highly similar across two different control cohorts. Conclusions In an ocular tracking task, patients with VFD due to different underlying pathology make EM with distinctive STP. These properties are interpretable based on different clinical characteristics of the patients and can be used for patient classification. Translational Relevance Our EM-based screening tool may complement existing perimetric techniques in clinical practice.
Affiliation(s)
- Rijul Saurabh Soans
- Department of Electrical Engineering, Indian Institute of Technology - Delhi, New Delhi, India; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
- Alessandro Grillini
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
- Rohit Saxena
- Department of Ophthalmology, Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Remco J Renken
- Cognitive Neuroscience Center, Department of Biomedical Sciences of Cells and Systems, University Medical Center Groningen, University of Groningen, The Netherlands
- Tapan Kumar Gandhi
- Department of Electrical Engineering, Indian Institute of Technology - Delhi, New Delhi, India
- Frans W Cornelissen
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, The Netherlands
82
Shi Z, Wang T, Huang Z, Xie F, Song G. A method for the automatic detection of myopia in Optos fundus images based on deep learning. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2021; 37:e3460. [PMID: 33773080 DOI: 10.1002/cnm.3460] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 03/08/2021] [Accepted: 03/20/2021] [Indexed: 06/12/2023]
Abstract
Myopia detection is significant for preventing irreversible visual impairment and diagnosing myopic retinopathy. To improve the detection efficiency and accuracy, a Myopia Detection Network (MDNet) that combines the advantages of dense connection and Residual Squeeze-and-Excitation attention is proposed in this paper to automatically detect myopia in Optos fundus images. First, an automatic optic disc recognition method is applied to extract the Regions of Interest and remove the noise disturbances; then, data augmentation techniques are implemented to enlarge the data set and prevent overfitting; moreover, an MDNet composed of Attention Dense blocks is constructed to detect myopia in Optos fundus images. The results show that the Mean Absolute Error of the Spherical Equivalent detected by this network can reach 1.1150 D (diopter), which verifies the feasibility and applicability of this method for the automatic detection of myopia in Optos fundus images.
Affiliation(s)
- Zhengjin Shi
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Tianyu Wang
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Zheng Huang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China; University of Chinese Academy of Sciences, Beijing, China
- Feng Xie
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
83
Wang Y, Yu M, Hu B, Jin X, Li Y, Zhang X, Zhang Y, Gong D, Wu C, Zhang B, Yang J, Li B, Yuan M, Mo B, Wei Q, Zhao J, Ding D, Yang J, Li X, Yu W, Chen Y. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes Metab Res Rev 2021; 37:e3445. [PMID: 33713564 DOI: 10.1002/dmrr.3445] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 11/07/2022]
Abstract
AIMS To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licenced ophthalmologists and was randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were programmed based on whether two factors were included: DR-related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) of the model for identifying referable DR in the internal test set. Similar trends were also seen in the external test set. DR lesion types with high precision results were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation. CONCLUSIONS The herein described automated model employed DR lesions and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
Affiliation(s)
- Yuelin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Miao Yu
- Department of Endocrinology, Key Laboratory of Endocrinology, National Health Commission, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Bojie Hu
- Department of Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin, China
- Xuemin Jin
- Department of Ophthalmology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yibin Li
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Yongpeng Zhang
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Di Gong
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bin Mo
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qijie Wei
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jianchun Zhao
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jingyun Yang
- Department of Neurological Sciences, Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Xirong Li
- Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
84
Li B, Chen H, Zhang B, Yuan M, Jin X, Lei B, Xu J, Gu W, Wong DCS, He X, Wang H, Ding D, Li X, Chen Y, Yu W. Development and evaluation of a deep learning model for the detection of multiple fundus diseases based on colour fundus photography. Br J Ophthalmol 2021; 106:1079-1086. [PMID: 33785508 DOI: 10.1136/bjophthalmol-2020-316290] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 01/24/2021] [Accepted: 02/16/2021] [Indexed: 12/24/2022]
Abstract
AIM To explore and evaluate an appropriate deep learning system (DLS) for the detection of 12 major fundus diseases using colour fundus photography. METHODS Diagnostic performance of a DLS was tested on the detection of normal fundus and 12 major fundus diseases including referable diabetic retinopathy, pathologic myopic retinal degeneration, retinal vein occlusion, retinitis pigmentosa, retinal detachment, wet and dry age-related macular degeneration, epiretinal membrane, macular hole, possible glaucomatous optic neuropathy, papilledema and optic nerve atrophy. The DLS was developed with 56 738 images and tested with 8176 images from one internal test set and two external test sets. A comparison with human doctors was also conducted. RESULTS The area under the receiver operating characteristic curves of the DLS on the internal test set and the two external test sets were 0.950 (95% CI 0.942 to 0.957) to 0.996 (95% CI 0.994 to 0.998), 0.931 (95% CI 0.923 to 0.939) to 1.000 (95% CI 0.999 to 1.000) and 0.934 (95% CI 0.929 to 0.938) to 1.000 (95% CI 0.999 to 1.000), with sensitivities of 80.4% (95% CI 79.1% to 81.6%) to 97.3% (95% CI 96.7% to 97.8%), 64.6% (95% CI 63.0% to 66.1%) to 100% (95% CI 100% to 100%) and 68.0% (95% CI 67.1% to 68.9%) to 100% (95% CI 100% to 100%), respectively, and specificities of 89.7% (95% CI 88.8% to 90.7%) to 98.1% (95% CI 97.7% to 98.6%), 78.7% (95% CI 77.4% to 80.0%) to 99.6% (95% CI 99.4% to 99.8%) and 88.1% (95% CI 87.4% to 88.7%) to 98.7% (95% CI 98.5% to 99.0%), respectively. When compared with human doctors, the DLS obtained a higher diagnostic sensitivity but lower specificity. CONCLUSION The proposed DLS is effective in diagnosing normal fundus and 12 major fundus diseases, and thus has much potential for fundus disease screening in the real world.
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xuemin Jin
- Department of Ophthalmology, Zhengzhou University First Affiliated Hospital, Zhengzhou, Henan, China
- Bo Lei
- Clinical Research Center, Henan Eye Institute, Henan Eye Hospital, Henan Provincial People's Hospital, Zhengzhou, Henan, China
- Jie Xu
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Wei Gu
- Department of Ophthalmology, Beijing Aier Intech Eye Hospital, Beijing, China
- Xixi He
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Hao Wang
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd, Beijing, China
- Xirong Li
- Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
85
Ishii K, Asaoka R, Omoto T, Mitaki S, Fujino Y, Murata H, Onoda K, Nagai A, Yamaguchi S, Obana A, Tanito M. Predicting intraocular pressure using systemic variables or fundus photography with deep learning in a health examination cohort. Sci Rep 2021; 11:3687. [PMID: 33574359 PMCID: PMC7878799 DOI: 10.1038/s41598-020-80839-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 12/21/2020] [Indexed: 12/17/2022] Open
Abstract
The purpose of the current study was to predict intraocular pressure (IOP) using color fundus photography with a deep learning (DL) model, or systemic variables with a multivariate linear regression model (MLM), along with least absolute shrinkage and selection operator (LASSO) regression, a support vector machine (SVM), and a Random Forest (RF). The training dataset included 3883 examinations from 3883 eyes of 1945 subjects, and the testing dataset 289 examinations from 289 eyes of 146 subjects. With the training dataset, the MLM was constructed to predict IOP using 35 systemic variables and 25 blood measurements. A DL model was developed to predict IOP from color fundus photographs. The prediction accuracy of each model was evaluated through the absolute error and the marginal R-squared (mR2), using the testing dataset. The mean absolute error with MLM was 2.29 mmHg, which was significantly smaller than that with DL (2.70 mmHg). The mR2 with MLM was 0.15, whereas that with DL was 0.0066. The mean absolute error (between 2.24 and 2.30 mmHg) and mR2 (between 0.11 and 0.15) with LASSO, SVM and RF were similar to or poorer than those of MLM. A DL model to predict IOP using color fundus photography proved far less accurate than MLM using systemic variables.
Collapse
Affiliation(s)
- Kaori Ishii
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
| | - Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan.
- Seirei Christopher University, Hamamatsu, Shizuoka, Japan.
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan.
| | - Takashi Omoto
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
| | - Shingo Mitaki
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Yuri Fujino
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Hiroshi Murata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
| | - Keiichi Onoda
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
- Faculty of Psychology, Outemon Gakuin University, Osaka, Japan
| | - Atsushi Nagai
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Shuhei Yamaguchi
- Department of Neurology, Shimane University Faculty of Medicine, Izumo, Japan
| | - Akira Obana
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- Hamamatsu BioPhotonics Innovation Chair, Institute for Medical Photonics Research, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, Japan
| | - Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
| |
|
86
|
Yu Y, Chen X, Zhu X, Zhang P, Hou Y, Zhang R, Wu C. Performance of Deep Transfer Learning for Detecting Abnormal Fundus Images. J Curr Ophthalmol 2021; 32:368-374. [PMID: 33553839 PMCID: PMC7861106 DOI: 10.4103/joco.joco_123_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 07/22/2020] [Accepted: 07/27/2020] [Indexed: 11/04/2022] Open
Abstract
Purpose To develop and validate a deep transfer learning (DTL) algorithm for detecting abnormalities in fundus images from non-mydriatic fundus photography examinations. Methods A total of 1295 fundus images were collected to develop and validate a DTL algorithm for detecting abnormal fundus images. After removing 366 poor-quality images, the DTL model was developed using 929 (370 normal and 559 abnormal) fundus images. Data preprocessing was performed to normalize the images. The Inception-ResNet-v2 architecture was applied to achieve transfer learning. We tested our model using a subset of the publicly available Messidor dataset (366 images) and evaluated the testing performance of the DTL model for detecting abnormal fundus images. Results In the internal validation dataset (n = 273 images), the area under the curve (AUC), sensitivity, accuracy, and specificity of the DTL model for correctly classifying fundus images were 0.997, 97.41%, 97.07%, and 96.82%, respectively. For the test dataset (n = 273 images), the AUC, sensitivity, accuracy, and specificity of the DTL model for correctly classifying fundus images were 0.926, 88.17%, 87.18%, and 86.67%, respectively. Conclusion DTL showed high sensitivity and specificity for detecting abnormal fundus images. Further research is necessary to improve this method and evaluate the applicability of DTL in community health-care centers.
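The sensitivity, specificity, and accuracy figures quoted in this and several of the following abstracts all derive from the counts of a binary confusion matrix. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on abnormal), specificity (recall on normal), and accuracy
    from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 273-image test set (illustration only).
sens, spec, acc = binary_metrics(tp=150, fp=13, tn=100, fn=10)
print(f"{sens:.3f} {spec:.3f} {acc:.3f}")  # 0.938 0.885 0.916
```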
Affiliation(s)
- Yan Yu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - Xiao Chen
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
| | - XiangBing Zhu
- Optoelectronic Technology Research Center, Anhui Normal University, Wuhu, China
| | - PengFei Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - YinFen Hou
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - RongRong Zhang
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| | - ChangFan Wu
- Department of Ophthalmology, Yijishan Hospital of Wannan Medical College, Wuhu, China
| |
|
87
|
Gunasekeran DV, Tham YC, Ting DSW, Tan GSW, Wong TY. Digital health during COVID-19: lessons from operationalising new models of care in ophthalmology. Lancet Digit Health 2021; 3:e124-e134. [PMID: 33509383 DOI: 10.1016/s2589-7500(20)30287-9] [Citation(s) in RCA: 74] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 11/11/2020] [Accepted: 11/18/2020] [Indexed: 12/13/2022]
Abstract
The COVID-19 pandemic has resulted in massive disruptions within health care, both directly as a result of the infectious disease outbreak, and indirectly because of public health measures to mitigate transmission. This disruption has caused rapid dynamic fluctuations in demand, capacity, and even contextual aspects of health care. Therefore, the traditional face-to-face patient-physician care model has had to be re-examined in many countries, with digital technology and new models of care being rapidly deployed to meet the various challenges of the pandemic. This Viewpoint highlights new models in ophthalmology that have adapted to incorporate digital health solutions such as telehealth, artificial intelligence decision support for triaging and clinical care, and home monitoring. These models can be operationalised for different clinical applications based on the technology, clinical need, demand from patients, and manpower availability, ranging from out-of-hospital models including the hub-and-spoke pre-hospital model, to front-line models such as the inflow funnel model and monitoring models such as the so-called lighthouse model for provider-led monitoring. Lessons learnt from operationalising these models for ophthalmology in the context of COVID-19 are discussed, along with their relevance for other specialty domains.
Affiliation(s)
- Dinesh V Gunasekeran
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
| | - Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
| | - Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
| | - Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Duke-NUS Medical School, Singapore.
| |
|
88
|
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268 DOI: 10.1016/j.jfo.2020.08.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 08/24/2020] [Indexed: 12/13/2022]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential to restoring vision and providing timely treatment. DR can be detected manually by an ophthalmologist, examining the retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhage, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system can easily perform this function by using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes the current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhage, and microaneurysms. In addition, the authors point out future directions in overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China.
| | - G Sun
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
| | - S Mazhar
- Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
| |
|
89
|
Mun Y, Kim J, Noh KJ, Lee S, Kim S, Yi S, Park KH, Yoo S, Chang DJ, Park SJ. An innovative strategy for standardized, structured, and interoperable results in ophthalmic examinations. BMC Med Inform Decis Mak 2021; 21:9. [PMID: 33407448 PMCID: PMC7789748 DOI: 10.1186/s12911-020-01370-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 12/09/2020] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND Although ophthalmic devices have made remarkable progress and are widely used, most lack standardization of both image review and results reporting systems, making interoperability unachievable. We developed and validated new software for extracting, transforming, and storing information from report images produced by ophthalmic examination devices to generate standardized, structured, and interoperable information to assist ophthalmologists in eye clinics. RESULTS We selected report images derived from optical coherence tomography (OCT). The new software consists of three parts: (1) The Area Explorer, which determines whether the designated area in the configuration file contains numeric values or tomographic images; (2) The Value Reader, which converts images to text according to ophthalmic measurements; and (3) The Finding Classifier, which classifies pathologic findings from tomographic images included in the report. After assessment of Value Reader accuracy by human experts, all report images were converted and stored in a database. We applied the Value Reader, which achieved 99.67% accuracy, to a total of 433,175 OCT report images acquired in a single tertiary hospital from 07/04/2006 to 08/31/2019. The Finding Classifier provided pathologic findings (e.g., macular edema and subretinal fluid) and disease activity. Patient longitudinal data could be easily reviewed to document changes in measurements over time. The final results were loaded into a common data model (CDM), and the cropped tomographic images were loaded into the Picture Archive Communication System. CONCLUSIONS The newly developed software extracts valuable information from OCT images and may be extended to other types of report image files produced by medical devices. Furthermore, powerful databases such as the CDM may be implemented or augmented by adding the information captured through our program.
Affiliation(s)
- Yongseok Mun
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
| | - Jooyoung Kim
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
| | - Kyoung Jin Noh
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
| | - Soochahn Lee
- School of Electrical Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul, Republic of Korea
| | - Seok Kim
- Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, 13605, Gyunggi-do, Republic of Korea
| | - Soyoung Yi
- Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, 13605, Gyunggi-do, Republic of Korea
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea
| | - Sooyoung Yoo
- Healthcare ICT Research Center, Office of eHealth Research and Businesses, Seoul National University Bundang Hospital, 172, Dolma-ro, Bundang-gu, Seongnam-si, 13605, Gyunggi-do, Republic of Korea
| | - Dong Jin Chang
- Department of Ophthalmology, College of medicine, The Catholic University of Korea, Yeouido St. Mary's Hospital, 10, 63-ro, Seoul, 07345, Yeongdeungpo-gu, Republic of Korea
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyunggi-do, 13620, Republic of Korea.
| |
|
90
|
Jiang Y, Pan J, Yuan M, Shen Y, Zhu J, Wang Y, Li Y, Zhang K, Yu Q, Xie H, Li H, Wang X, Luo Y. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U-Net. J Diabetes Res 2021; 2021:8766517. [PMID: 34712739 PMCID: PMC8548126 DOI: 10.1155/2021/8766517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Revised: 09/03/2021] [Accepted: 09/24/2021] [Indexed: 11/17/2022] Open
Abstract
Diabetic retinopathy (DR) is a prevalent vision-threatening disease worldwide. Laser marks are the scars left after panretinal photocoagulation, a treatment to prevent patients with severe DR from losing vision. In this study, we develop a deep learning algorithm based on the lightweight U-Net to segment laser marks from color fundus photos, which could help indicate the disease stage or provide valuable auxiliary information for the care of DR patients. We made our training and testing data, manually annotated by trained and experienced graders from the Image Reading Center, Zhongshan Ophthalmic Center, publicly available, filling the vacancy of public image datasets dedicated to the segmentation of laser marks. The lightweight U-Net, along with two postprocessing procedures, achieved an AUC of 0.9824, an optimal sensitivity of 94.16%, and an optimal specificity of 92.82% on the segmentation of laser marks in fundus photographs. With accurate segmentation and high numeric metrics, the lightweight U-Net method showed reliable performance in automatically segmenting laser marks in fundus photographs, which could help AI-assisted diagnosis of DR in the severe stage.
Affiliation(s)
- Yukang Jiang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Jianying Pan
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
| | - Ming Yuan
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Yanhe Shen
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Jin Zhu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Yishen Wang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
| | - Yewei Li
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Ke Zhang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Qingyun Yu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
| | - Huirui Xie
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
| | - Huiting Li
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
| | - Xueqin Wang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xinhua College, Sun Yat-Sen University, Guangzhou 510520, China
| | - Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
| |
|
91
|
Hong N, Park Y, You SC, Rhee Y. AIM in Endocrinology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_328-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
|
92
|
Cao B, Zhang N, Zhang Y, Fu Y, Zhao D. Plasma cytokines for predicting diabetic retinopathy among type 2 diabetic patients via machine learning algorithms. Aging (Albany NY) 2020; 13:1972-1988. [PMID: 33323553 PMCID: PMC7880388 DOI: 10.18632/aging.202168] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 10/09/2020] [Indexed: 11/25/2022]
Abstract
AIMS This study aimed to investigate changes in plasma cytokines and to develop machine learning classifiers for predicting non-proliferative diabetic retinopathy (NPDR) among type 2 diabetes mellitus patients. RESULTS Twelve plasma cytokines were significantly higher in the NPDR group in the pilot cohort. The validation cohort showed that angiopoietin 1, platelet-derived growth factor-BB, tissue inhibitors of metalloproteinase 2, and vascular endothelial growth factor receptor 2 were significantly higher in the NPDR group. Among the machine learning algorithms, the random forest yielded the best performance, with a sensitivity of 92.3%, specificity of 75%, PPV of 82.8%, NPV of 88.2%, and area under the curve of 0.84. CONCLUSIONS Plasma angiopoietin 1, platelet-derived growth factor-BB, and vascular endothelial growth factor receptor 2 were associated with the presence of NPDR and may be good biomarkers that play important roles in the pathophysiology of diabetic retinopathy. MATERIALS AND METHODS In the pilot cohort, 60 plasma cytokines were simultaneously measured. In the validation cohort, angiopoietin 1, CXC-chemokine ligand 16, platelet-derived growth factor-BB, tissue inhibitors of metalloproteinase 1, tissue inhibitors of metalloproteinase 2, and vascular endothelial growth factor receptor 2 were validated using ELISA kits. Machine learning algorithms were developed to build a prediction model for NPDR.
Affiliation(s)
- Bin Cao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China.,Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
| | - Ning Zhang
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China.,Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
| | - Yuanyuan Zhang
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China.,Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
| | - Ying Fu
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China.,Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
| | - Dong Zhao
- Center for Endocrine Metabolism and Immune Diseases, Beijing Luhe Hospital, Capital Medical University, Beijing 101149, China.,Beijing Key Laboratory of Diabetes Research and Care, Beijing 101149, China
| |
|
93
|
Sun J, Huang X, Egwuagu C, Badr Y, Dryden SC, Fowler BT, Yousefi S. Identifying Mouse Autoimmune Uveitis from Fundus Photographs Using Deep Learning. Transl Vis Sci Technol 2020; 9:59. [PMID: 33294300 PMCID: PMC7718814 DOI: 10.1167/tvst.9.2.59] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Accepted: 09/25/2020] [Indexed: 01/09/2023] Open
Abstract
Purpose To develop a deep learning model for objective evaluation of experimental autoimmune uveitis (EAU), the animal model of posterior uveitis that reveals its essential pathological features via fundus photographs. Methods We developed a deep learning construct to identify uveitis using reference mouse fundus images and further categorized the severity levels of disease into mild and severe EAU. We evaluated the performance of the model using the area under the receiver operating characteristic curve (AUC) and confusion matrices. We further assessed the clinical relevance of the model by visualizing the principal components of features at different layers and through the use of gradient-weighted class activation maps, which presented retinal regions having the most significant influence on the model. Results Our model was trained, validated, and tested on 1500 fundus images (training, 1200; validation, 150; testing, 150) and achieved an average AUC of 0.98 for identifying the normal, trace (small and local lesions), and disease classes (large and spreading lesions). The AUCs of the model using an independent subset with 180 images were 1.00 (95% confidence interval [CI], 0.99-1.00), 0.97 (95% CI, 0.94-0.99), and 0.96 (95% CI, 0.90-1.00) for the normal, trace and disease classes, respectively. Conclusions The proposed deep learning model is able to identify three severity levels of EAU with high accuracy. The model also achieved high accuracy on independent validation subsets, reflecting a substantial degree of generalizability. Translational Relevance The proposed model represents an important new tool for use in animal medical research and provides a step toward clinical uveitis identification in clinical practice.
Affiliation(s)
- Jian Sun
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Xiaoqin Huang
- The Pennsylvania State University Great Valley, Malvern, PA, USA
| | - Charles Egwuagu
- Molecular Immunology Section, Laboratory of Immunology, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
| | - Youakim Badr
- The Pennsylvania State University Great Valley, Malvern, PA, USA
| | | | | | - Siamak Yousefi
- University of Tennessee Health Science Center, Memphis, TN, USA
| |
|
94
|
Tian Y, Fu S. A descriptive framework for the field of deep learning applications in medical images. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106445] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
95
|
Son J, Shin JY, Chun EJ, Jung KH, Park KH, Park SJ. Predicting High Coronary Artery Calcium Score From Retinal Fundus Images With Deep Learning Algorithms. Transl Vis Sci Technol 2020; 9:28. [PMID: 33184590 PMCID: PMC7410115 DOI: 10.1167/tvst.9.2.28] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2019] [Accepted: 03/06/2020] [Indexed: 01/04/2023] Open
Abstract
Purpose To evaluate high accumulation of coronary artery calcium (CAC) from retinal fundus images with deep learning technologies as an inexpensive and radiation-free screening method. Methods Individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day were identified. With this database, the performance of deep learning algorithms (inception-v3) in distinguishing high CACS from a CACS of 0 was evaluated at various thresholds for high CACS. Vessel-inpainted and fovea-inpainted images were also used as input to investigate areas of interest in determining CACS. Results A total of 44,184 images from 20,130 individuals were included. A deep learning algorithm for discriminating no CAC from CACS >100 achieved an area under the receiver operating characteristic curve (AUROC) of 82.3% (79.5%-85.0%) and 83.2% (80.2%-86.3%) using unilateral and bilateral fundus images, respectively, under a 5-fold cross-validation setting. AUROC increased as the criterion for high CACS was raised, plateauing at 100 with no significant improvement thereafter. AUROC decreased when the fovea was inpainted and decreased further when vessels were inpainted, whereas AUROC increased when bilateral images were used as input. Conclusions Visual patterns of retinal fundus images in subjects with CACS >100 could be recognized by deep learning algorithms compared with those with no CAC. Exploiting bilateral images improves discrimination performance, and ablation studies removing the retinal vasculature or fovea suggest that recognizable patterns reside mainly in these areas. Translational Relevance Retinal fundus images can be used by deep learning algorithms for the prediction of high CACS.
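The AUROC reported here (and throughout these studies) equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal pairwise (Mann-Whitney) sketch of that estimate, with hypothetical scores rather than the study's data:

```python
def auroc(scores_pos, scores_neg):
    """Pairwise (Mann-Whitney) estimate of the area under the ROC curve:
    fraction of positive/negative pairs ranked correctly, ties counted as half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores: positives = eyes with CACS > 100, negatives = CACS of 0.
positives = [0.9, 0.8, 0.6, 0.75]
negatives = [0.3, 0.55, 0.7, 0.2]
print(auroc(positives, negatives))  # 0.9375
```

This exhaustive pairwise form is fine for small samples; for the tens of thousands of images used above, a rank-based or sorted-sweep implementation would be preferred.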
Affiliation(s)
| | - Joo Young Shin
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
| | - Eun Ju Chun
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | | | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
| |
|
96
|
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439 PMCID: PMC7603327 DOI: 10.1038/s41746-020-00350-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 09/24/2020] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136 USA
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
| | - Yu Han
- EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
| |
|
97
|
Sex judgment using color fundus parameters in elementary school students. Graefes Arch Clin Exp Ophthalmol 2020; 258:2781-2789. [PMID: 33064194 DOI: 10.1007/s00417-020-04969-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 09/28/2020] [Accepted: 10/05/2020] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Recently, artificial intelligence has been used to determine sex using fundus photographs alone. We had earlier reported that sex can be distinguished using known factors obtained from color fundus photography (CFP) in adult eyes. However, it is not clear when the sex difference in fundus parameters begins. Therefore, we conducted this study to investigate sex determination based on fundus parameters using binomial logistic regression in elementary school students. METHODS This prospective observational cross-sectional study was conducted on 119 right eyes of elementary school students (aged 8 or 9 years, 59 boys and 60 girls). Through CFP, the tessellation fundus index was calculated as R/(R + G + B) using the mean value of red-green-blue intensity in the eight locations around the optic disc. Optic disc ovality ratio, papillomacular angle, retinal artery trajectory, and retinal vessel were quantified based on our earlier reports. Regularized binomial logistic regression was applied to these variables to select the decisive factors. Furthermore, its discriminative performance was evaluated using the leave-one-out cross-validation method. Sex difference in the parameters was assessed using the Mann-Whitney U test. RESULTS The optimal model yielded by the Ridge binomial logistic regression suggested that the ovality ratio of girls was significantly smaller, whereas their nasal green and blue intensities were significantly higher, than those of boys. Using this approach, the area under the receiver-operating characteristic curve was 63.2%. CONCLUSIONS Although sex can be distinguished using CFP even in elementary school students, the discrimination accuracy was relatively low. Some sex difference in the ocular fundus may begin after the age of 10 years.
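The tessellation fundus index defined in this abstract is a simple channel-intensity ratio: the red channel's share of the total mean RGB intensity. A sketch of the computation, with hypothetical intensity values:

```python
def tessellation_index(r_mean, g_mean, b_mean):
    """Tessellation fundus index R / (R + G + B): the red channel's share
    of the summed mean channel intensities (0-255 scale)."""
    return r_mean / (r_mean + g_mean + b_mean)

# Hypothetical mean channel intensities around the optic disc (illustration only).
print(tessellation_index(r_mean=180.0, g_mean=90.0, b_mean=30.0))  # 0.6
```

A more tessellated (redder) fundus pushes the index toward 1; in the study this value is computed at eight peripapillary locations and fed, with the other fundus parameters, into the regularized logistic regression.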
Collapse
|
98
|
Cho BH, Lee DY, Park KA, Oh SY, Moon JH, Lee GI, Noh H, Chung JK, Kang MC, Chung MJ. Computer-aided recognition of myopic tilted optic disc using deep learning algorithms in fundus photography. BMC Ophthalmol 2020; 20:407. [PMID: 33036582 PMCID: PMC7547463 DOI: 10.1186/s12886-020-01657-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Accepted: 09/23/2020] [Indexed: 12/27/2022] Open
Abstract
Background It is necessary to consider myopic optic disc tilt as it seriously impacts normal ocular parameters. However, ophthalmologic measurements are subject to inter-observer variability and time-consuming to obtain. This study aimed to develop and evaluate deep learning models that automatically recognize a myopic tilted optic disc in fundus photography. Methods This study used 937 fundus photographs of patients with normal or myopic tilted disc, collected from Samsung Medical Center between April 2016 and December 2018. We developed an automated computer-aided recognition system for optic disc tilt on color fundus photographs via a deep learning algorithm. We preprocessed all images with two image-resizing techniques. The GoogleNet Inception-v3 architecture was implemented. The performances of the models were compared with the human examiner's results. Activation maps were qualitatively analyzed using a generalized visualization technique based on gradient-weighted class activation mapping (Grad-CAM++). Results Nine hundred thirty-seven fundus images were collected and annotated from 509 subjects. In total, 397 images from eyes with tilted optic discs and 540 images from eyes with non-tilted optic discs were analyzed. We included data from both eyes for most patients and analyzed the eyes separately in this study. For comparison, we conducted training with two resizing strategies, a simply resized dataset and an original-aspect-ratio (AR)-preserving dataset, and evaluated the impact of augmentation for both. The constructed deep learning models for myopic optic disc tilt achieved the best results when simple image-resizing and augmentation were used. The results were associated with an area under the receiver operating characteristic curve (AUC) of 0.978 ± 0.008, an accuracy of 0.960 ± 0.010, a sensitivity of 0.937 ± 0.023, and a specificity of 0.963 ± 0.015.
The heatmaps revealed that the model could effectively identify the locations of the optic discs, the superior retinal vascular arcades, and the retinal maculae. Conclusions We developed an automated deep learning-based system to detect optic disc tilt. The model demonstrated excellent agreement with the previous clinical criteria, and the results are promising for developing future programs to adjust and identify the effect of optic disc tilt on ophthalmic measurements.
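The two preprocessing strategies compared above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea (nearest-neighbour resampling, hypothetical function names), not the authors' preprocessing code:

```python
import numpy as np

def simple_resize(img, size):
    # Naive nearest-neighbour resize to size x size; the aspect ratio is
    # ignored, so a non-square fundus crop is stretched.
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def ar_preserving_resize(img, size, pad_value=0):
    # Pad the shorter side to make the image square (letterboxing) before
    # resizing, so the optic disc keeps its original proportions.
    h, w = img.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side) + img.shape[2:], pad_value, dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = img
    return simple_resize(canvas, size)
```

In the study the simply resized dataset, combined with augmentation, gave the best classification performance.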
Collapse
Affiliation(s)
- Baek Hwan Cho
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea; Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea
| | - Da Young Lee
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea; Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
| | - Kyung-Ah Park
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea.
| | - Sei Yeul Oh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea.
| | - Jong Hak Moon
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea; Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Korea
| | - Ga-In Lee
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
| | - Hoon Noh
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
| | - Joon Kyo Chung
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
| | - Min Chae Kang
- Department of Ophthalmology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-ro, Gangnam-gu, Seoul, 06351, Korea
| | - Myung Jin Chung
- Medical AI Research Center, Institute of Smart Healthcare, Samsung Medical Center, Seoul, Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
| |
Collapse
|
99
|
Artificial intelligence for diabetic retinopathy screening, prediction and management. Curr Opin Ophthalmol 2020; 31:357-365. [PMID: 32740069 DOI: 10.1097/icu.0000000000000693] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
PURPOSE OF REVIEW Diabetic retinopathy is the most common specific complication of diabetes mellitus. Traditional care for patients with diabetes and diabetic retinopathy is fragmented, uncoordinated and delivered in a piecemeal nature, often in the most expensive and high-resource tertiary settings. Transformative new models incorporating digital technology are needed to address these gaps in clinical care. RECENT FINDINGS Artificial intelligence and telehealth may improve access, financial sustainability and coverage of diabetic retinopathy screening programs. They enable risk stratification of patients based on individual risk of vision-threatening diabetic retinopathy, including diabetic macular edema (DME), and prediction of which patients with DME respond best to antivascular endothelial growth factor therapy. SUMMARY Progress in artificial intelligence and tele-ophthalmology for diabetic retinopathy screening, including artificial intelligence applications in 'real-world settings' and cost-effectiveness studies, is summarized. Furthermore, initial research on the use of artificial intelligence models for diabetic retinopathy risk stratification and management of DME is outlined, along with potential future directions. Finally, the need for artificial intelligence adoption within ophthalmology in response to coronavirus disease 2019 is discussed. Digital health solutions such as artificial intelligence and telehealth can facilitate the integration of community, primary and specialist eye care services, optimize the flow of patients within healthcare networks, and improve the efficiency of diabetic retinopathy management.
Collapse
|
100
|
Odaibo SG. Re: Wang et al.: Machine learning models for diagnosing glaucoma from retinal nerve fiber layer thickness maps (Ophthalmology Glaucoma. 2019;2:422-428). Ophthalmol Glaucoma 2020; 3:e3. [PMID: 32672624 DOI: 10.1016/j.ogla.2020.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2020] [Accepted: 03/04/2020] [Indexed: 11/29/2022]
|