1
Wu T, Ju L, Fu X, Wang B, Ge Z, Liu Y. Deep Learning Detection of Early Retinal Peripheral Degeneration From Ultra-Widefield Fundus Photographs of Asymptomatic Young Adult (17-19 Years) Candidates to Airforce Cadets. Transl Vis Sci Technol 2024; 13:1. [PMID: 38300623] [PMCID: PMC10851781] [DOI: 10.1167/tvst.13.2.1]
Abstract
Purpose Artificial intelligence (AI)-assisted ultra-widefield (UWF) fundus photographic interpretation can improve the screening of fundus abnormalities. We therefore constructed an AI machine-learning approach and performed preliminary training and validation. Methods We proposed a two-stage deep learning-based framework to detect early retinal peripheral degeneration using UWF images from the Chinese Air Force cadets' medical selection between February 2016 and June 2022. We developed a detection model that localizes the optic disc and macula, which are then used to locate the peripheral areas, and six classification models for screening various retinal cases. We also compared our proposed framework with two baseline models reported in the literature. The performance of the screening models was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals. Results A total of 3911 UWF fundus images were used to develop the deep learning model, and the external validation included 760 UWF fundus images. The comparison study revealed that our proposed framework achieved performance competitive with existing baselines while demonstrating significantly faster inference. The classification models achieved an average AUC of 0.879 across six retinal cases in the external validation dataset. Conclusions Our two-stage deep learning-based framework improved the machine-learning efficiency of the AI model for high-resolution fundus images with many interference factors by maximizing the retention of valid information while compressing image file size. Translational Relevance This machine-learning model may become a new paradigm for developing AI-assisted diagnosis from UWF fundus photography.
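A minimal sketch of a two-stage pipeline of this kind (stage 1 localizes landmarks, stage 2 classifies the periphery) is shown below; the detector/classifier interfaces and the crop geometry are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a two-stage UWF screening pipeline: localize the optic disc
# and macula, mask out the posterior pole, then run per-lesion classifiers on
# the peripheral retina. All interfaces and the crop radius are assumptions.
import numpy as np

def crop_periphery(image: np.ndarray, disc_xy, macula_xy, margin: float = 1.5):
    """Keep only the peripheral retina outside a circle around the posterior pole."""
    h, w = image.shape[:2]
    cx = (disc_xy[0] + macula_xy[0]) / 2
    cy = (disc_xy[1] + macula_xy[1]) / 2
    radius = margin * np.hypot(disc_xy[0] - macula_xy[0], disc_xy[1] - macula_xy[1])
    yy, xx = np.mgrid[0:h, 0:w]
    peripheral_mask = np.hypot(xx - cx, yy - cy) > radius
    return image * peripheral_mask[..., None]

def screen(image, detector, classifiers):
    """detector -> (disc_xy, macula_xy); classifiers -> {lesion_name: callable}."""
    disc_xy, macula_xy = detector(image)                 # stage 1: localization
    periphery = crop_periphery(image, disc_xy, macula_xy)
    return {name: clf(periphery) for name, clf in classifiers.items()}  # stage 2
```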
Affiliation(s)
- Tengyun Wu: Air Force Medical Center of Chinese PLA, Beijing, China
- Lie Ju: Beijing Airdoc Technology Co. Ltd., Beijing, China; Faculty of Engineering, Monash University, Clayton, Australia
- Xuefei Fu: Beijing Airdoc Technology Co. Ltd., Beijing, China
- Bin Wang: Beijing Airdoc Technology Co. Ltd., Beijing, China
- Zongyuan Ge: Beijing Airdoc Technology Co. Ltd., Beijing, China; Faculty of Engineering, Monash University, Clayton, Australia
- Yong Liu: Air Force Medical Center of Chinese PLA, Beijing, China
2
Valentim CCS, Wu AK, Yu S, Manivannan N, Zhang Q, Cao J, Song W, Wang V, Kang H, Kalur A, Iyer AI, Conti T, Singh RP, Talcott KE. Deep learning-based algorithm for the detection of idiopathic full thickness macular holes in spectral domain optical coherence tomography. Int J Retina Vitreous 2024; 10:9. [PMID: 38263402] [PMCID: PMC10804727] [DOI: 10.1186/s40942-024-00526-8]
Abstract
BACKGROUND Automated identification of spectral domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency, as it can detect pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of idiopathic full thickness macular hole (IFTMH) features and stages of severity in SD-OCT B-scans. METHODS In this cross-sectional study, subjects diagnosed solely with either IFTMH or posterior vitreous detachment (PVD) were identified, excluding secondary causes of macular holes, any concurrent maculopathies, or incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. To establish a ground-truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when applicable. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity and accuracy of the algorithm in identifying IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was run to examine whether the algorithm's probability score was associated with the severity stages of IFTMH. RESULTS Six hundred and one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm and yielded an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 IFTMH cubes studied (47 [15.7%] stage 2, 56 [18.7%] stage 3 and 196 [65.6%] stage 4). CONCLUSIONS The DL-based algorithm was able to accurately detect IFTMH features on individual SD-OCT B-scans in both test sets. However, there was a low correlation between the algorithm's probability score and IFTMH severity stages. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify stages of IFTMHs.
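The evaluation described here (accuracy, sensitivity, specificity from graded B-scans, plus Spearman's correlation between probability scores and severity stage) can be reproduced generically as in the sketch below; the arrays are placeholders, not study data.

```python
# Hedged sketch of the reported evaluation: confusion-matrix metrics plus
# Spearman correlation between classifier probability scores and IFTMH stage.
import numpy as np
from sklearn.metrics import confusion_matrix
from scipy.stats import spearmanr

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # placeholder labels
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])          # placeholder predictions
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

prob_scores = np.array([0.91, 0.40, 0.75, 0.62, 0.55])  # placeholder scores
stages = np.array([4, 2, 3, 2, 4])                       # placeholder stages
rho, p_value = spearmanr(prob_scores, stages)
print(sensitivity, specificity, accuracy, rho, p_value)
```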
Affiliation(s)
- Carolina C S Valentim: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Anna K Wu: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA; Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Sophia Yu: Carl Zeiss Meditec, Inc, Dublin, CA, USA
- Jessica Cao: Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Weilin Song: Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Victoria Wang: Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Hannah Kang: Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Aneesha Kalur: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Amogh I Iyer: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Thais Conti: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Rishi P Singh: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Katherine E Talcott: Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
3
Tang QQ, Yang XG, Wang HQ, Wu DW, Zhang MX. Applications of deep learning for detecting ophthalmic diseases with ultrawide-field fundus images. Int J Ophthalmol 2024; 17:188-200. [PMID: 38239939] [PMCID: PMC10754665] [DOI: 10.18240/ijo.2024.01.24]
Abstract
AIM To summarize the application of deep learning in detecting ophthalmic diseases with ultrawide-field fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS We searched three academic databases, PubMed, Web of Science, and Ovid, through August 2022. We matched and screened studies according to the target keywords and publication year, retrieving a total of 4358 research papers, of which 23 studies applied deep learning to the diagnosis of ophthalmic disease with ultrawide-field images. RESULTS Deep learning applied to ultrawide-field images can detect various ophthalmic diseases with strong performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared with conventional fundus photography, ultrawide-field scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION The combination of ultrawide-field fundus images and artificial intelligence will achieve great performance in diagnosing multiple ophthalmic diseases in the future.
Affiliation(s)
- Qing-Qing Tang: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Xiang-Gang Yang: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hong-Qiu Wang: Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511400, Guangdong Province, China
- Da-Wen Wu: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Mei-Xia Zhang: Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
4
Nguyen TD, Le DT, Bum J, Kim S, Song SJ, Choo H. Retinal Disease Diagnosis Using Deep Learning on Ultra-Wide-Field Fundus Images. Diagnostics (Basel) 2024; 14:105. [PMID: 38201414] [PMCID: PMC10804390] [DOI: 10.3390/diagnostics14010105]
Abstract
Ultra-wide-field fundus imaging (UFI) provides comprehensive visualization of crucial eye components, including the optic disk, fovea, and macula. This comprehensive view helps doctors accurately diagnose diseases and recommend suitable treatments. This study investigated the application of various deep learning models for detecting eye diseases using UFI. We developed an automated system that processes and enhances a dataset of 4697 images. Our approach involves brightness and contrast enhancement, followed by feature extraction, data augmentation, and image classification with convolutional neural networks. These networks use layer-wise feature extraction and transfer learning from pre-trained models to accurately represent and analyze medical images. Among the five evaluated models (ResNet152, Vision Transformer, InceptionResNetV2, RegNet, and ConvNeXt), ResNet152 is the most effective, achieving a testing area under the curve (AUC) of 96.47% (95% confidence interval [CI], 0.931-0.974). Additionally, the paper presents visualizations of the model's predictions, including confidence scores and heatmaps that highlight the model's focal points, particularly where lesions are evident. By streamlining the diagnosis process and providing detailed predictions without human intervention, our system serves as a pivotal tool for ophthalmologists. This research underscores the compatibility and potential of utilizing ultra-wide-field images in conjunction with deep learning.
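A transfer-learning setup of the kind summarized here (a pre-trained backbone with its classifier head replaced, plus simple brightness/contrast enhancement) can be approximated with a standard torchvision recipe; the class count, enhancement parameters, and optimizer settings below are assumptions, not the study's configuration.

```python
# Hedged sketch (recent torchvision >= 0.13): pre-trained ResNet152 fine-tuned
# on fundus images, with brightness/contrast jitter standing in for enhancement.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # hypothetical number of disease categories

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # enhancement stand-in
    transforms.RandomHorizontalFlip(),                       # augmentation
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)      # new classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```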
Affiliation(s)
- Toan Duc Nguyen: Department of AI Systems Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Duc-Tai Le: College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Junghyun Bum: Sungkyun AI Research Institute, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Seongho Kim: Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Su Jeong Song: Department of Ophthalmology, Kangbuk Samsung Hospital, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea; Biomedical Institute for Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunseung Choo: Department of AI Systems Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea; College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, Republic of Korea; Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
5
Zhang J, Zou H. Insights into artificial intelligence in myopia management: from a data perspective. Graefes Arch Clin Exp Ophthalmol 2024; 262:3-17. [PMID: 37231280] [PMCID: PMC10212230] [DOI: 10.1007/s00417-023-06101-5]
Abstract
Given the high incidence and prevalence of myopia, the current healthcare system is struggling to handle the task of myopia management, a burden worsened by home quarantine during the ongoing COVID-19 pandemic. The use of artificial intelligence (AI) in ophthalmology is thriving, yet its application to myopia remains limited. AI can serve as a solution for the myopia pandemic, with application potential in early identification, risk stratification, progression prediction, and timely intervention. The datasets used for developing AI models are the foundation of such work and determine the upper limit of performance. Data generated from clinical practice in managing myopia can be categorized into clinical data and imaging data, and different AI methods can be used for their analysis. In this review, we comprehensively review the current application status of AI in myopia, with an emphasis on the data modalities used for developing AI models. We propose that establishing large, high-quality public datasets, enhancing models' capability to handle multimodal input, and exploring novel data modalities could be of great significance for the further application of AI in myopia.
Affiliation(s)
- Juzhao Zhang: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haidong Zou: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Eye Diseases Prevention & Treatment Center, Shanghai Eye Hospital, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
6
Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep Learning for the Detection of Multiple Fundus Diseases Using Ultra-widefield Images. Ophthalmol Ther 2023; 12:895-907. [PMID: 36565376] [PMCID: PMC10011259] [DOI: 10.1007/s40123-022-00627-3]
Abstract
INTRODUCTION To design and evaluate a deep learning model based on ultra-widefield images (UWFIs) that can detect several common fundus diseases. METHODS Based on 4574 UWFIs, a deep learning model was trained and validated to identify normal fundus and eight common fundus diseases, namely referable diabetic retinopathy, retinal vein occlusion, pathologic myopia, retinal detachment, retinitis pigmentosa, age-related macular degeneration, vitreous opacity, and optic neuropathy. The model was tested on three test sets comprising 465, 979, and 525 images, respectively. The performance of three deep learning networks, EfficientNet-B7, DenseNet, and ResNet-101, was evaluated on the internal test set. Additionally, we compared the performance of the deep learning model with that of doctors in a tertiary referral hospital. RESULTS Compared to the other two deep learning models, EfficientNet-B7 achieved the best performance. The areas under the receiver operating characteristic curves of the EfficientNet-B7 model ranged from 0.9708 (0.8772, 0.9849) to 1.0000 (1.0000, 1.0000) on the internal test set, from 0.9683 (0.8829, 0.9770) to 1.0000 (0.9975, 1.0000) on external test set A, and from 0.8919 (0.7150, 0.9055) to 0.9977 (0.9165, 1.0000) on external test set B. On a data set of 100 images, the total accuracy of the deep learning model was 93.00%; the average accuracy of three ophthalmologists with 2 years of experience was 88.00%, and that of three ophthalmologists with more than 5 years of experience in fundus imaging was 94.00%. CONCLUSION Our UWFI multidisease classification model achieved high performance on all three test sets with a small sample size and fast model inference. The performance of the artificial intelligence model was comparable to that of physicians with 2-5 years of experience in fundus diseases at a tertiary referral hospital. The model is expected to be used as an effective aid for fundus disease screening.
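Per-disease AUCs for a multi-class classifier like the one above are commonly computed one-vs-rest over softmax outputs; the sketch below uses synthetic arrays and is not tied to the study's data or model.

```python
# Hedged sketch: one-vs-rest AUC per fundus-disease class from softmax scores.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

n_classes = 9                                   # normal + eight diseases, as above
rng = np.random.default_rng(0)
y_true = rng.integers(0, n_classes, size=200)             # placeholder labels
y_score = rng.dirichlet(np.ones(n_classes), size=200)     # placeholder softmax output

y_true_bin = label_binarize(y_true, classes=list(range(n_classes)))
per_class_auc = [roc_auc_score(y_true_bin[:, c], y_score[:, c])
                 for c in range(n_classes)]
macro_auc = float(np.mean(per_class_auc))
print(per_class_auc, macro_auc)
```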
Affiliation(s)
- Gongpeng Sun: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Xiaoling Wang: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Lizhang Xu: Wuhan Aiyanbang Technology Co., Ltd, Wuhan, 430073, China
- Chang Li: Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Wenyu Wang: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Zuohuizi Yi: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Huijuan Luo: The People's Hospital of Yidu, Yidu, 443300, China
- Yu Su: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Jian Zheng: School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Zhiqing Li: Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Zhen Chen: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Hongmei Zheng: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Changzheng Chen: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
7
Nagasato D, Sogawa T, Tanabe M, Tabuchi H, Numa S, Oishi A, Ohashi Ikeda H, Tsujikawa A, Maeda T, Takahashi M, Ito N, Miura G, Shinohara T, Egawa M, Mitamura Y. Estimation of Visual Function Using Deep Learning From Ultra-Widefield Fundus Images of Eyes With Retinitis Pigmentosa. JAMA Ophthalmol 2023; 141:305-313. [PMID: 36821134] [PMCID: PMC9951103] [DOI: 10.1001/jamaophthalmol.2022.6393]
Abstract
Importance There is no widespread effective treatment to halt the progression of retinitis pigmentosa. Consequently, adequate assessment and estimation of residual visual function are clinically important. Objective To examine whether deep learning can accurately estimate the visual function of patients with retinitis pigmentosa by using ultra-widefield fundus images obtained on concurrent visits. Design, Setting, and Participants Data for this multicenter, retrospective, cross-sectional study were collected between January 1, 2012, and December 31, 2018. This study included 695 consecutive patients with retinitis pigmentosa who were examined at 5 institutions. Each of the 3 types of input images (ultra-widefield pseudocolor images, ultra-widefield fundus autofluorescence images, and both ultra-widefield pseudocolor and fundus autofluorescence images) was paired with 1 of the 31 types of ensemble models constructed from 5 deep learning models (Visual Geometry Group-16, Residual Network-50, InceptionV3, DenseNet121, and EfficientNetB0). We used 848, 212, and 214 images for the training, validation, and testing data, respectively. All data from 1 institution were used as the independent testing data. Data analysis was performed from June 7, 2021, to December 5, 2022. Main Outcomes and Measures The mean deviation on the Humphrey field analyzer, central retinal sensitivity, and best-corrected visual acuity were estimated. The image type-ensemble model combination that yielded the smallest mean absolute error was defined as the model with the best estimation accuracy. After removing the bias of including both eyes with a generalized linear mixed model, correlations between the actual values of the testing data and the values estimated by the best-accuracy model were examined by calculating standardized regression coefficients and P values. Results The study included 1274 eyes of 695 patients. A total of 385 patients were female (55.4%), and the mean (SD) age was 53.9 (17.2) years. Among the 3 types of images, the model using ultra-widefield fundus autofluorescence images alone provided the best estimation accuracy for mean deviation, central sensitivity, and visual acuity. Standardized regression coefficients were 0.684 (95% CI, 0.567-0.802) for the mean deviation estimation, 0.697 (95% CI, 0.590-0.804) for the central sensitivity estimation, and 0.309 (95% CI, 0.187-0.430) for the visual acuity estimation (all P < .001). Conclusions and Relevance Results of this study suggest that visual function estimation in patients with retinitis pigmentosa from ultra-widefield fundus autofluorescence images using deep learning might help assess disease progression objectively. Findings also suggest that deep learning models might monitor the progression of retinitis pigmentosa efficiently during follow-up.
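Selecting an ensemble by smallest mean absolute error, as described above, amounts to averaging predictions over every non-empty subset of the base models and keeping the best subset on validation data; the sketch below uses random placeholder predictions, with the five architecture names listed only as labels.

```python
# Hedged sketch: evaluate the 31 possible averaging ensembles of 5 base
# regression models and keep the one with the lowest validation MAE.
from itertools import combinations
import numpy as np
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
y_val = rng.normal(-10, 5, size=50)                      # placeholder mean-deviation values
base_models = ["vgg16", "resnet50", "inceptionv3", "densenet121", "efficientnetb0"]
preds = {m: y_val + rng.normal(0, 2, size=50) for m in base_models}  # fake model outputs

best = None
for k in range(1, len(base_models) + 1):
    for subset in combinations(base_models, k):          # 31 non-empty subsets
        ensemble_pred = np.mean([preds[m] for m in subset], axis=0)
        mae = mean_absolute_error(y_val, ensemble_pred)
        if best is None or mae < best[0]:
            best = (mae, subset)
print(best)
```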
Affiliation(s)
- Daisuke Nagasato: Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Takahiro Sogawa: Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Mao Tanabe: Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi: Department of Ophthalmology, Saneikai Tsukazaki Hospital, Himeji, Japan; Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Shogo Numa: Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akio Oishi: Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Ophthalmology and Visual Sciences, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Hanako Ohashi Ikeda: Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Akitaka Tsujikawa: Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Tadao Maeda: Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
- Masayo Takahashi: Research Center, Kobe City Eye Hospital, Kobe, Japan; Laboratory for Retinal Regeneration, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Vision Care Inc, Kobe, Japan
- Nana Ito: Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Gen Miura: Department of Ophthalmology and Visual Science, Chiba University Graduate School of Medicine, Chiba, Japan
- Terumi Shinohara: Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Mariko Egawa: Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Yoshinori Mitamura: Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
8
Antaki F, Coussa RG, Kahwati G, Hammamji K, Sebag M, Duval R. Accuracy of automated machine learning in classifying retinal pathologies from ultra-widefield pseudocolour fundus images. Br J Ophthalmol 2023; 107:90-95. [PMID: 34344669] [DOI: 10.1136/bjophthalmol-2021-319030]
Abstract
AIMS Automated machine learning (AutoML) is a novel tool in artificial intelligence (AI). This study assessed the discriminative performance of AutoML in differentiating retinal vein occlusion (RVO), retinitis pigmentosa (RP) and retinal detachment (RD) from normal fundi using ultra-widefield (UWF) pseudocolour fundus images. METHODS Two ophthalmologists without coding experience carried out AutoML model design using a publicly available image data set (2137 labelled images). The data set was reviewed for low-quality and mislabelled images and then uploaded to the Google Cloud AutoML Vision platform for training and testing. We designed multiple binary models to differentiate RVO, RP and RD from normal fundi and compared them with bespoke models obtained from the literature. We then devised a multiclass model to detect RVO, RP and RD. Saliency maps were generated to assess the interpretability of the model. RESULTS The AutoML models demonstrated high diagnostic properties in the binary classification tasks that were generally comparable to bespoke deep-learning models (area under the precision-recall curve (AUPRC) 0.921-1, sensitivity 84.91%-89.77%, specificity 78.72%-100%). The multiclass AutoML model had an AUPRC of 0.876, a sensitivity of 77.93% and a positive predictive value of 82.59%. The per-label sensitivity and specificity, respectively, were normal fundi (91.49%, 86.75%), RVO (83.02%, 92.50%), RP (72.00%, 100%) and RD (79.55%, 96.80%). CONCLUSION AutoML models created by ophthalmologists without coding experience can detect RVO, RP and RD in UWF images with very good diagnostic accuracy. The performance was comparable to bespoke deep-learning models derived by AI experts for RVO and RP but not for RD.
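The area under the precision-recall curve reported above can be computed with scikit-learn as in the sketch below; the vectors are illustrative placeholders, and the Google Cloud AutoML Vision training itself is not reproduced here.

```python
# Hedged sketch: AUPRC plus sensitivity/specificity at a 0.5 threshold for a
# binary retinal-disease classifier (placeholder ground truth and scores).
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4, 0.95, 0.15])

auprc = average_precision_score(y_true, y_score)            # AUPRC estimate
tn, fp, fn, tp = confusion_matrix(y_true, y_score >= 0.5).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(auprc, sensitivity, specificity)
```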
Affiliation(s)
- Fares Antaki: Department of Ophthalmology, Centre Hospitalier de l'Universite de Montreal (CHUM), Montreal, Quebec, Canada; Department of Ophthalmology, Hopital Maisonneuve-Rosemont (CUO-HMR), Montreal, Quebec, Canada
- Razek Georges Coussa: Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Ghofril Kahwati: Department of Electrical Engineering, Ecole de technologie superieure (ETS), Montreal, Quebec, Canada
- Karim Hammamji: Department of Ophthalmology, Centre Hospitalier de l'Universite de Montreal (CHUM), Montreal, Quebec, Canada
- Mikael Sebag: Department of Ophthalmology, Centre Hospitalier de l'Universite de Montreal (CHUM), Montreal, Quebec, Canada
- Renaud Duval: Department of Ophthalmology, Hopital Maisonneuve-Rosemont (CUO-HMR), Montreal, Quebec, Canada
9
Xiao Y, Hu Y, Quan W, Yang Y, Lai W, Wang X, Zhang X, Zhang B, Wu Y, Wu Q, Liu B, Zeng X, Lin Z, Fang Y, Hu Y, Feng S, Yuan L, Cai H, Li T, Lin H, Yu H. Development and validation of a deep learning system to classify aetiology and predict anatomical outcomes of macular hole. Br J Ophthalmol 2023; 107:109-115. [PMID: 34348922] [PMCID: PMC9763201] [DOI: 10.1136/bjophthalmol-2021-318844]
Abstract
AIMS To develop a deep learning (DL) model for automatic classification of macular hole (MH) aetiology (idiopathic or secondary), and a multimodal deep fusion network (MDFN) model for reliable prediction of MH status (closed or open) at 1 month after vitrectomy and internal limiting membrane peeling (VILMP). METHODS In this multicentre retrospective cohort study, a total of 330 MH eyes with 1082 optical coherence tomography (OCT) images and 3300 clinical data records enrolled from four ophthalmic centres were used to train, validate and externally test the DL and MDFN models. 266 eyes from three centres were randomly split at the eye level into a training set (80%) and a validation set (20%). The external testing dataset comprised 64 eyes from the remaining centre. All eyes underwent macular OCT scanning at baseline and 1 month after VILMP. The area under the receiver operating characteristic curve (AUC), accuracy, specificity and sensitivity were used to evaluate the performance of the models. RESULTS In the external testing set, the AUC, accuracy, specificity and sensitivity of the MH aetiology classification model were 0.965, 0.950, 0.870 and 0.938, respectively; the AUC, accuracy, specificity and sensitivity of the postoperative MH status prediction model were 0.904, 0.825, 0.977 and 0.766, respectively; the AUC, accuracy, specificity and sensitivity of the postoperative idiopathic MH status prediction model were 0.947, 0.875, 0.815 and 0.979, respectively. CONCLUSION Our DL-based models can accurately classify MH aetiology and predict MH status after VILMP. These models would help ophthalmologists in the diagnosis and surgical planning of MH.
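A multimodal deep fusion network of the kind described above typically concatenates image features with an encoding of clinical variables before a joint classifier head; the layout below is a generic PyTorch sketch with assumed dimensions, not the authors' MDFN architecture.

```python
# Hedged sketch of a multimodal fusion classifier: OCT image features from a
# CNN backbone concatenated with an MLP encoding of clinical variables.
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)       # randomly initialized stand-in
        backbone.fc = nn.Identity()                     # expose 512-d image features
        self.image_branch = backbone
        self.clinical_branch = nn.Sequential(           # tabular branch
            nn.Linear(n_clinical, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)      # fused classifier head

    def forward(self, image, clinical):
        fused = torch.cat([self.image_branch(image),
                           self.clinical_branch(clinical)], dim=1)
        return self.head(fused)

model = FusionNet(n_clinical=10)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 10))  # smoke test
```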
Affiliation(s)
- Yu Xiao: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Yijun Hu: Aier Institute of Refractive Surgery, Refractive Surgery Center, Guangzhou Aier Eye Hospital, Guangzhou, China; Aier School of Ophthalmology, Central South University, Changsha, China
- Wuxiu Quan: School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Yahan Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Weiyi Lai: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xun Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Bin Zhang: School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Yuqing Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Qiaowei Wu: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Baoyi Liu: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Xiaomin Zeng: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Zhanjie Lin: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Ying Fang: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yu Hu: Department of Ophthalmology, the First Affiliated Hospital of Kunming Medical University, Kunming, China
- Songfu Feng: Department of Ophthalmology, Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Ling Yuan: Department of Ophthalmology, the First Affiliated Hospital of Kunming Medical University, Kunming, China
- Hongmin Cai: School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Tao Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center of Precision Medicine, Sun Yat-sen University, Guangzhou, China
- Honghua Yu: Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
10
Detecting multiple retinal diseases in ultra-widefield fundus imaging and data-driven identification of informative regions with deep learning. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00566-5]
11
Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022; 260:3737-3778. [PMID: 35857087] [DOI: 10.1007/s00417-022-05741-3]
Abstract
PURPOSE This article is a scoping review of published, peer-reviewed articles applying deep learning (DL) to ultra-widefield (UWF) imaging. This study provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features. METHODS A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were as follows: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were as follows: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration. RESULTS A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging. CONCLUSION The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases, including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied to the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu: Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
12
Li Z, Guo C, Nie D, Lin D, Cui T, Zhu Y, Chen C, Zhao L, Zhang X, Dongye M, Wang D, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Automated detection of retinal exudates and drusen in ultra-widefield fundus images based on deep learning. Eye (Lond) 2021; 36:1681-1686. [PMID: 34345030] [PMCID: PMC9307785] [DOI: 10.1038/s41433-021-01715-7]
Abstract
BACKGROUND Retinal exudates and/or drusen (RED) can be signs of many fundus diseases that can lead to irreversible vision loss. Early detection and treatment of these diseases are critical for improving vision prognosis. However, manual RED screening on a large scale is time-consuming and labour-intensive. Here, we aim to develop and assess a deep learning system for automated detection of RED using ultra-widefield fundus (UWF) images. METHODS A total of 26,409 UWF images from 14,994 subjects were used to develop and evaluate the deep learning system. The Zhongshan Ophthalmic Center (ZOC) dataset was selected to compare the performance of the system to that of retina specialists in RED detection. The saliency map visualization technique was used to understand which areas in the UWF image had the most influence on our deep learning system when detecting RED. RESULTS The system for RED detection achieved areas under the receiver operating characteristic curve of 0.994 (95% confidence interval [CI]: 0.991-0.996), 0.972 (95% CI: 0.957-0.984), and 0.988 (95% CI: 0.983-0.992) in three independent datasets. The performance of the system in the ZOC dataset was comparable to that of an experienced retina specialist. Regions of RED were highlighted by saliency maps in UWF images. CONCLUSIONS Our deep learning system is reliable in the automated detection of RED in UWF images. As a screening tool, our system may promote the early diagnosis and management of RED-related fundus diseases.
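Saliency maps like those mentioned above are often produced from the gradient of the predicted class score with respect to the input image; the vanilla-gradient sketch below is independent of the authors' exact visualization technique, and the untrained ResNet-18 is only a stand-in for the trained RED model.

```python
# Hedged sketch: vanilla-gradient saliency map for a fundus classifier.
import torch
from torchvision import models

model = models.resnet18(weights=None)        # stand-in for the trained RED model
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder UWF input
score = model(image)[0].max()                 # score of the top class
score.backward()                              # d(score) / d(pixel)
saliency = image.grad.abs().max(dim=1)[0]     # (1, 224, 224) pixel-importance map
```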
Affiliation(s)
- Zhongwen Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chong Guo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Danyao Nie: Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, Shenzhen, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Tingxin Cui: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yi Zhu: Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen: Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, Florida, USA
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xulin Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Meimei Dongye: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chenjin Jin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Ping Zhang: Xudong Ophthalmic Hospital, Inner Mongolia, China
- Yu Han: EYE & ENT Hospital of Fudan University, Shanghai, China
- Pisong Yan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
13
Cai S, Parker F, Urias MG, Goldberg MF, Hager GD, Scott AW. Deep Learning Detection of Sea Fan Neovascularization From Ultra-Widefield Color Fundus Photographs of Patients With Sickle Cell Hemoglobinopathy. JAMA Ophthalmol 2021; 139:206-213. [PMID: 33377944] [DOI: 10.1001/jamaophthalmol.2020.5900]
Abstract
Importance Adherence to screening for vision-threatening proliferative sickle cell retinopathy is limited among patients with sickle cell hemoglobinopathy despite guidelines recommending dilated fundus examinations beginning in childhood. An automated algorithm for detecting sea fan neovascularization from ultra-widefield color fundus photographs could expand access to rapid retinal evaluations to identify patients at risk of vision loss from proliferative sickle cell retinopathy. Objective To develop a deep learning system for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy. Design, Setting, and Participants In a cross-sectional study conducted at a single-institution, tertiary academic referral center, deidentified, retrospectively collected, ultra-widefield color fundus photographs from 190 adults with sickle cell hemoglobinopathy were independently graded by 2 masked retinal specialists for presence or absence of sea fan neovascularization. A third masked retinal specialist regraded images with discordant or indeterminate grades. Consensus retinal specialist reference standard grades were used to train a convolutional neural network to classify images for presence or absence of sea fan neovascularization. Participants included nondiabetic adults with sickle cell hemoglobinopathy receiving care from a Wilmer Eye Institute retinal specialist; the patients had received no previous laser or surgical treatment for sickle cell retinopathy and underwent imaging with ultra-widefield color fundus photographs between January 1, 2012, and January 30, 2019. Interventions Deidentified ultra-widefield color fundus photographs were retrospectively collected. Main Outcomes and Measures Sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network for sea fan detection. Results A total of 1182 images from 190 patients were included. Of the 190 patients, 101 were women (53.2%), and the mean (SD) age at baseline was 36.2 (12.3) years; 119 patients (62.6%) had hemoglobin SS disease and 46 (24.2%) had hemoglobin SC disease. One hundred seventy-nine patients (94.2%) were of Black or African descent. Images with sea fan neovascularization were obtained in 57 patients (30.0%). The convolutional neural network had an area under the curve of 0.988 (95% CI, 0.969-0.999), with sensitivity of 97.4% (95% CI, 86.5%-99.9%) and specificity of 97.0% (95% CI, 93.5%-98.9%) for detecting sea fan neovascularization from ultra-widefield color fundus photographs. Conclusions and Relevance This study reports an automated system with high sensitivity and specificity for detecting sea fan neovascularization from ultra-widefield color fundus photographs from patients with sickle cell hemoglobinopathy, with potential applications for improving screening for vision-threatening proliferative sickle cell retinopathy.
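The 95% CIs quoted above for AUC, sensitivity, and specificity are commonly obtained by bootstrap resampling of the test set; the sketch below uses synthetic labels and scores, not the study's data, and shows only a percentile bootstrap for the AUC.

```python
# Hedged sketch: percentile-bootstrap 95% CI for the AUC of a binary classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=300)                    # placeholder labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 300), 0, 1)

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))      # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                   # need both classes present
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
low, high = np.percentile(aucs, [2.5, 97.5])
print(roc_auc_score(y_true, y_score), low, high)
```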
Affiliation(s)
- Sophie Cai: Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland; Retina Division, Duke Eye Center, Durham, North Carolina
- Felix Parker: Center for Systems Science and Engineering, The Johns Hopkins University, Baltimore, Maryland; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland
- Muller G Urias: Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland; Retina Division, Ophthalmology and Vision Sciences Department, Federal University of São Paulo, São Paulo, Brazil
- Morton F Goldberg: Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland
- Gregory D Hager: Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland
- Adrienne W Scott: Retina Division, Wilmer Eye Institute, The Johns Hopkins University School of Medicine and Hospital, Baltimore, Maryland
14
Imamura H, Tabuchi H, Nagasato D, Masumoto H, Baba H, Furukawa H, Maruoka S. Automatic screening of tear meniscus from lacrimal duct obstructions using anterior segment optical coherence tomography images by deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:1569-1577. [PMID: 33576859] [DOI: 10.1007/s00417-021-05078-3]
Abstract
PURPOSE We assessed the ability of deep learning (DL) models to distinguish the tear meniscus of lacrimal duct obstruction (LDO) patients from that of normal subjects using anterior segment optical coherence tomography (ASOCT) images. METHODS The study included 117 ASOCT images (19 men and 98 women; mean age, 66.6 ± 13.6 years) from 101 LDO patients and 113 ASOCT images (29 men and 84 women; mean age, 38.3 ± 19.9 years) from 71 normal subjects. We constructed and trained 9 single DL models with 9 different network structures and 502 ensemble DL models, and calculated the area under the curve (AUC), sensitivity, and specificity to compare the discriminative abilities of these single and ensemble DL models. RESULTS For the best single DL model (DenseNet169), the AUC, sensitivity, and specificity for distinguishing LDO were 0.778, 64.6%, and 72.1%, respectively. For the best ensemble DL model (VGG16, ResNet50, DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, and Xception), the AUC, sensitivity, and specificity for distinguishing LDO were 0.824, 84.8%, and 58.8%, respectively. The heat maps indicated that these DL models focused on the tear meniscus region of the ASOCT images. CONCLUSION The combination of DL and ASOCT images could distinguish the tear meniscus of LDO patients from that of normal subjects with a high level of accuracy. These results suggest that DL might be useful for the automatic screening of patients for LDO.
Affiliation(s)
- Hitoshi Imamura: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hitoshi Tabuchi: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Daisuke Nagasato: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroki Masumoto: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan; Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroaki Baba: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Hiroki Furukawa: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
- Sachiko Maruoka: Department of Ophthalmology, Tsukazaki Hospital, 68-1 Waku, Aboshi-ku, Himeji City, Hyogo, 671-1227, Japan
15
Tsuiki S, Nagaoka T, Fukuda T, Sakamoto Y, Almeida FR, Nakayama H, Inoue Y, Enno H. Machine learning for image-based detection of patients with obstructive sleep apnea: an exploratory study. Sleep Breath 2021; 25:2297-2305. [PMID: 33559004] [PMCID: PMC8590647] [DOI: 10.1007/s11325-021-02301-7]
Abstract
PURPOSE In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx in comparison with non-OSA subjects. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images. METHODS A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. RESULTS The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), followed by the full image (0.89), manual cephalometric analysis (0.75), and head only (0.70). CONCLUSIONS A deep convolutional neural network identified individuals with severe OSA with high accuracy. These findings encourage further research on AI-based image analysis, particularly for the triage of OSA.
Affiliation(s)
- Satoru Tsuiki: Institute of Neuropsychiatry, 91, Bentencho, Shinjuku-ku, Tokyo, 162-0851, Japan; Yoyogi Sleep Disorder Center, Tokyo, Japan; Aging and Geriatric Dentistry, Tohoku University Graduate School of Dentistry, Sendai, Japan; Department of Oral Health Sciences, Faculty of Dentistry, The University of British Columbia, Vancouver, Canada
- Tatsuya Fukuda: Institute of Neuropsychiatry, 91, Bentencho, Shinjuku-ku, Tokyo, 162-0851, Japan
- Yuki Sakamoto: Rist Inc., Kyoto, Japan; Research Institute for Sustainable Humanosphere, Kyoto University, Kyoto, Japan
- Fernanda R Almeida: Department of Oral Health Sciences, Faculty of Dentistry, The University of British Columbia, Vancouver, Canada
- Hideaki Nakayama: Institute of Neuropsychiatry, 91, Bentencho, Shinjuku-ku, Tokyo, 162-0851, Japan; Yoyogi Sleep Disorder Center, Tokyo, Japan; Department of Somnology, Tokyo Medical University, Tokyo, Japan
- Yuichi Inoue: Institute of Neuropsychiatry, 91, Bentencho, Shinjuku-ku, Tokyo, 162-0851, Japan; Yoyogi Sleep Disorder Center, Tokyo, Japan; Department of Somnology, Tokyo Medical University, Tokyo, Japan
- Hiroki Enno: Rist Inc., Kyoto, Japan; Plasma Inc., Tokyo, Japan
16
Development of a deep-learning system for detection of lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using ultra-wide-field fundus images: a pilot study. Graefes Arch Clin Exp Ophthalmol 2021; 259:2225-2234. [PMID: 33538890] [DOI: 10.1007/s00417-021-05105-3]
Abstract
PURPOSE To investigate the detection of lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using an ultra-wide-field fundus imaging system (Optos) with convolutional neural network technology. METHODS This study included 1500 Optos color images for tessellated fundus confirmation and peripheral retinal lesion (lattice degeneration, retinal breaks, and retinal detachment) assessment. Three retinal specialists evaluated all images and established the reference standard when agreement was achieved. Then, 722 images were used to train and verify a combined deep-learning system of 3 optimal binary classification models trained using the seResNext50 algorithm with 2 preprocessing methods (original resizing and cropping), and a test set of 189 images was used to verify the performance against the reference standard. RESULTS With the optimal preprocessing approach (original resizing for lattice degeneration and retinal detachment, cropping for retinal breaks), the combined deep-learning system achieved areas under the curve of 0.888, 0.953, and 1.000 for the detection of lattice degeneration, retinal breaks, and retinal detachment, respectively, in tessellated eyes. The referral accuracy of this system was 79.8% compared with the reference standard. CONCLUSION A deep-learning system can feasibly detect lattice degeneration, retinal breaks, and retinal detachment in tessellated eyes using ultra-wide-field images, and may be considered for screening and telemedicine.
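A combined system of this kind routes each lesion's binary classifier through whichever preprocessing (plain resizing or cropping) performed best on validation data; the routing sketch below assumes hypothetical transforms, crop size, and classifier callables rather than the study's implementation.

```python
# Hedged sketch: per-lesion choice of preprocessing (resize vs crop) feeding
# three binary classifiers, as in the combined system described above.
from torchvision import transforms

resize = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
crop = transforms.Compose([transforms.CenterCrop(1500),       # illustrative crop size
                           transforms.Resize((512, 512)), transforms.ToTensor()])

PREPROCESS = {                 # lesion -> preprocessing chosen on validation data
    "lattice_degeneration": resize,
    "retinal_break": crop,
    "retinal_detachment": resize,
}

def combined_screen(pil_image, classifiers):
    """classifiers: {lesion: model returning a probability for one image tensor}."""
    return {lesion: classifiers[lesion](PREPROCESS[lesion](pil_image).unsqueeze(0))
            for lesion in PREPROCESS}
```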
17
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput 2021; 59:401-415. [PMID: 33492598] [PMCID: PMC7829497] [DOI: 10.1007/s11517-021-02321-1]
Abstract
Deep learning (DL) has been successfully applied to the diagnosis of ophthalmic diseases. However, rare diseases are commonly neglected due to insufficient data. Here, we demonstrate that few-shot learning (FSL) using a generative adversarial network (GAN) can improve the applicability of DL in the optical coherence tomography (OCT) diagnosis of rare diseases. Four major classes with abundant data and five rare disease classes with only a few samples each were included in this study. Before training the classifier, we constructed GAN models to generate pathological OCT images of each rare disease from normal OCT images. The Inception-v3 architecture was trained using the augmented training dataset, and the final model was validated using an independent test dataset. The synthetic images helped in the extraction of the characteristic features of each rare disease. The proposed DL model demonstrated a significant improvement in the accuracy of the OCT diagnosis of rare retinal diseases and outperformed traditional DL models, a Siamese network, and a prototypical network. By increasing the accuracy of diagnosing rare retinal diseases through FSL, clinicians can avoid neglecting rare diseases with DL assistance, thereby reducing diagnosis delay and patient burden.
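A minimal sketch of the augment-then-classify idea described above: synthetic images for a rare class are produced by a separately trained image-to-image GAN and mixed into the training set before fine-tuning Inception-v3. The generator interface, class count, and optimizer settings below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: enlarge rare-disease classes with GAN-generated OCT images,
# then fine-tune Inception-v3 on the augmented set.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9   # 4 common classes + 5 rare classes, as in the abstract

def augment_rare_class(generator: nn.Module, normal_batch: torch.Tensor) -> torch.Tensor:
    """Translate normal OCT scans into synthetic pathological scans for one rare class.
    `generator` stands in for a trained image-to-image GAN (architecture not specified here)."""
    with torch.no_grad():
        return generator(normal_batch)

# Inception-v3 expects 299x299 inputs and, with aux_logits=True, returns two outputs in train mode.
model = models.inception_v3(weights=None, aux_logits=True, init_weights=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a mixed batch of real and synthetic images."""
    model.train()
    optimizer.zero_grad()
    main_out, aux_out = model(images)                 # images: [N, 3, 299, 299]
    loss = criterion(main_out, labels) + 0.4 * criterion(aux_out, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```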
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Sangdang-gu, Cheongju, South Korea
- Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
18
Prediction of age and brachial-ankle pulse-wave velocity using ultra-wide-field pseudo-color images by deep learning. Sci Rep 2020; 10:19369. [PMID: 33168888] [PMCID: PMC7652944] [DOI: 10.1038/s41598-020-76513-4]
Abstract
This study examined whether age and brachial-ankle pulse-wave velocity (baPWV) can be predicted from ultra-wide-field pseudo-color (UWPC) images using deep learning (DL). We examined 170 UWPC images of both eyes of 85 participants (40 men and 45 women, mean age: 57.5 ± 20.9 years). Three types of images were included (total, central, and peripheral) and analyzed by k-fold cross-validation (k = 5) using the Visual Geometry Group-16 (VGG-16) network. After bias was eliminated using a generalized linear mixed model, the standard regression coefficients (SRCs) between actual and predicted age and between actual and predicted baPWV from the UWPC images were calculated, and the prediction accuracies of the DL model for age and baPWV were examined. The SRC between actual age and predicted age was 0.833 for total images, 0.818 for central images, and 0.649 for peripheral images (all P < 0.001), and that between actual baPWV and predicted baPWV was 0.390 for total images, 0.419 for central images, and 0.312 for peripheral images (all P < 0.001). These results show the potential of DL to predict age and vascular aging, which could be useful for disease prevention and early treatment.
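For a single predictor, a standard(ized) regression coefficient can be obtained as the regression slope after z-scoring both variables, which is numerically equal to Pearson's r; the study's additional bias adjustment with a generalized linear mixed model is not reproduced here. A small sketch, with placeholder arrays standing in for the pooled 5-fold predictions (not the study's data):

```python
# Sketch: standardized regression coefficient (SRC) between measured values
# and values predicted by the network, pooled over k-fold cross-validation.
import numpy as np

def standardized_regression_coefficient(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Slope of the least-squares fit after z-scoring both variables.
    With a single predictor this equals the Pearson correlation coefficient."""
    a = (actual - actual.mean()) / actual.std(ddof=1)
    p = (predicted - predicted.mean()) / predicted.std(ddof=1)
    slope, _ = np.polyfit(p, a, deg=1)      # regress (standardized) actual on predicted
    return float(slope)

# Example with placeholder arrays; a hypothetical model's output, not the study's data.
rng = np.random.default_rng(0)
actual_age = rng.uniform(20, 90, size=170)
predicted_age = actual_age + rng.normal(0, 10, size=170)
print(standardized_regression_coefficient(actual_age, predicted_age))
```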
19
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439] [PMCID: PMC7603327] [DOI: 10.1038/s41746-020-00350-y]
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield (UWF) fundus images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
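The "selective eating" idea amounts to a gatekeeper model placed in front of the diagnostic model, so that only images judged gradable are diagnosed. A minimal sketch, assuming both stages are ordinary PyTorch classifiers and a 0.5 gradability threshold; all names and thresholds are illustrative, not the DLIFS implementation.

```python
# Sketch: a quality-filter ("selective eating") stage in front of a diagnostic model.
from dataclasses import dataclass
from typing import Optional

import torch
import torch.nn as nn

@dataclass
class ScreeningResult:
    gradable: bool
    diagnosis: Optional[str]   # None when the image is rejected for poor quality

DISEASES = ["normal", "referable_disease"]          # placeholder label set

@torch.no_grad()
def screen_image(image: torch.Tensor,
                 quality_model: nn.Module,
                 diagnostic_model: nn.Module,
                 quality_threshold: float = 0.5) -> ScreeningResult:
    """Run the gatekeeper first; only gradable images reach the diagnostic model.
    `quality_model` is assumed to output a single logit for 'good quality'."""
    x = image.unsqueeze(0)                          # [1, 3, H, W]
    p_good = torch.sigmoid(quality_model(x))[0, 0].item()
    if p_good < quality_threshold:
        return ScreeningResult(gradable=False, diagnosis=None)   # request a re-capture instead
    pred = diagnostic_model(x).argmax(dim=1).item()
    return ScreeningResult(gradable=True, diagnosis=DISEASES[pred])
```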
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136 USA
- Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136 USA
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
- Yu Han
- EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
20
Du KF, Chen C, Huang XJ, Xie LY, Kong WJ, Dong HW, Wei WB. Utility of Ultra-Wide-Field Imaging for Screening of AIDS-Related Cytomegalovirus Retinitis. Ophthalmologica 2020; 244:334-338. [PMID: 33120392] [DOI: 10.1159/000512634]
Abstract
PURPOSE To explore the potential use of ultra-wide-field (UWF) imaging for screening of cytomegalovirus retinitis (CMVR) in AIDS patients. METHODS Ninety-four patients whose CD4 count was below 200 cells/μL were enrolled in a prospective study. Each patient underwent UWF imaging and indirect ophthalmoscopy. The main outcome measures were the concordance and detection rates of the two approaches and the sensitivity and specificity of UWF imaging. RESULTS Twenty-seven eyes in 18 patients were diagnosed with CMVR by indirect ophthalmoscopy. UWF imaging missed the diagnosis in one eye because of a zone 3 CMVR lesion. UWF images showed several CMVR patterns and locations: hemorrhagic necrotizing lesions, granular lesions, frosted branch angiitis, and optic neuropathy lesions. The concordance of the two approaches was excellent for the diagnosis of CMVR, classification of CMVR pattern, and location of CMVR. The detection rates of UWF imaging and indirect ophthalmoscopy were 14.0% (26/186; 95% CI 0.089-0.190) and 14.5% (27/186; 95% CI 0.094-0.196), respectively (p = 1.000). The sensitivity and specificity of UWF imaging were 96.3% and 100%, respectively. CONCLUSIONS UWF imaging can document different CMVR lesions and can be used for AIDS-related CMVR screening when examination by an ophthalmologist is not available.
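The detection rates quoted above are simple binomial proportions; a normal-approximation (Wald) 95% interval closely reproduces the reported ranges. A small sketch (the helper function is illustrative, not from the study):

```python
# Sketch: detection rate with a 95% normal-approximation (Wald) confidence interval.
import math

def proportion_with_ci(events: int, total: int, z: float = 1.96):
    """Return (proportion, lower bound, upper bound), clipped to [0, 1]."""
    p = events / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# UWF imaging: 26 of 186 eyes -> ~0.140 (0.090-0.190), close to the interval in the abstract.
print(proportion_with_ci(26, 186))
# Indirect ophthalmoscopy: 27 of 186 eyes -> ~0.145 (0.094-0.196).
print(proportion_with_ci(27, 186))
```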
Affiliation(s)
- Kui-Fang Du
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Chao Chen
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Xiao-Jie Huang
- Department of Infectious Diseases, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Lian-Yong Xie
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Wen-Jun Kong
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Hong-Wei Dong
- Department of Ophthalmology, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Wen-Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology and Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
21
Tan TE, Ting DSW, Wong TY, Sim DA. Deep learning for identification of peripheral retinal degeneration using ultra-wide-field fundus images: is it sufficient for clinical translation? Ann Transl Med 2020; 8:611. [PMID: 32566548] [PMCID: PMC7290643] [DOI: 10.21037/atm.2020.03.142]
Affiliation(s)
- Tien-En Tan
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Centre, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Centre, Singapore
- Duke-National University of Singapore Medical School, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Centre, Singapore
- Duke-National University of Singapore Medical School, Singapore
- Dawn A Sim
- Moorfields Eye Hospital, London, UK
- National Institute for Health and Research Biomedical Centre, Moorfields Eye Hospital, London, UK
- Institute of Ophthalmology, University College London, London, UK
22
Sogawa T, Tabuchi H, Nagasato D, Masumoto H, Ikuno Y, Ohsugi H, Ishitobi N, Mitamura Y. Accuracy of a deep convolutional neural network in the detection of myopic macular diseases using swept-source optical coherence tomography. PLoS One 2020; 15:e0227240. [PMID: 32298265] [PMCID: PMC7161961] [DOI: 10.1371/journal.pone.0227240]
Abstract
This study examined and compared outcomes of deep learning (DL) in identifying swept-source optical coherence tomography (OCT) images without myopic macular lesions [i.e., no high myopia (nHM) vs. high myopia (HM)] and OCT images with myopic macular lesions [e.g., myopic choroidal neovascularization (mCNV) and retinoschisis (RS)]. A total of 910 swept-source OCT images were included and analyzed by k-fold cross-validation (k = 5) using the Visual Geometry Group-16 (VGG-16) model: nHM, 146 images; HM, 531 images; mCNV, 122 images; and RS, 111 images. The binary classification of OCT images with or without myopic macular lesions, the binary classification of HM images and images with myopic macular lesions (i.e., mCNV and RS images), and the ternary classification of HM, mCNV, and RS images were examined. Additionally, sensitivity, specificity, and the area under the curve (AUC) for the binary classifications, as well as the correct answer rate for the ternary classification, were examined. The classification results for OCT images with or without myopic macular lesions were as follows: AUC, 0.970; sensitivity, 90.6%; specificity, 94.2%. The classification results for HM images and images with myopic macular lesions were as follows: AUC, 1.000; sensitivity, 100.0%; specificity, 100.0%. The correct answer rates in the ternary classification of HM, mCNV, and RS images were as follows: HM, 96.5%; mCNV, 77.9%; and RS, 67.6% (mean, 88.9%). Using noninvasive, easy-to-obtain swept-source OCT images, the DL model was able to classify OCT images without myopic macular lesions and OCT images with myopic macular lesions such as mCNV and RS with high accuracy. The study results suggest the possibility of conducting highly accurate screening of ocular diseases using artificial intelligence, which may improve the prevention of blindness and reduce workloads for ophthalmologists.
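The per-class "correct answer rates" in the ternary task correspond to the diagonal of a row-normalized confusion matrix (per-class recall). A short sketch with scikit-learn, using placeholder arrays in place of the pooled 5-fold predictions:

```python
# Sketch: per-class correct answer rate (recall) for the HM / mCNV / RS ternary task.
import numpy as np
from sklearn.metrics import confusion_matrix

CLASSES = ["HM", "mCNV", "RS"]

def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    cm = confusion_matrix(y_true, y_pred, labels=list(range(len(CLASSES))))
    rates = cm.diagonal() / cm.sum(axis=1)          # row-normalized diagonal = recall per class
    return dict(zip(CLASSES, rates.round(3)))

# Placeholder predictions standing in for the pooled cross-validation results.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=300)
y_pred = np.where(rng.random(300) < 0.85, y_true, rng.integers(0, 3, size=300))
print(per_class_accuracy(y_true, y_pred))
```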
Affiliation(s)
- Takahiro Sogawa
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Daisuke Nagasato
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School, Hiroshima, Japan
- Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
23
Deep Neural Network-Based Method for Detecting Obstructive Meibomian Gland Dysfunction With in Vivo Laser Confocal Microscopy. Cornea 2020; 39:720-725. [PMID: 32040007] [DOI: 10.1097/ico.0000000000002279]
24
Cho M, Kim JH, Hong KS, Kim JS, Kong HJ, Kim S. Identification of cecum time-location in a colonoscopy video by deep learning analysis of colonoscope movement. PeerJ 2019; 7:e7256. [PMID: 31392088] [PMCID: PMC6673422] [DOI: 10.7717/peerj.7256]
Abstract
Background Cecal intubation time is an important component of quality colonoscopy. The cecum is the turning point that separates the insertion and withdrawal phases of the colonoscope, so information on the location of the cecum during the procedure is very useful, and it is necessary to detect the direction of the colonoscope's movement and the time-location of the cecum. Methods To analyze the direction of the scope's movement, the Horn-Schunck algorithm was used to compute pixel-level motion between consecutive frames. Images processed with the Horn-Schunck algorithm were trained and tested with convolutional neural network deep learning methods and classified into insertion, withdrawal, and stop movements. Based on the scope's movement, a graph was drawn with a value of +1 for insertion, -1 for withdrawal, and 0 for stop. We regarded the turning point as a cecum candidate point when the total graph area summed over a given section was lowest. Results A total of 328,927 frame images were obtained from 112 patients. The overall accuracy, obtained from 5-fold cross-validation, was 95.6%. When the value of "t" was 30 s, the accuracy of cecum detection was 96.7%. To increase visibility, the scope's movement was added to the summary report of the colonoscopy video. Insertion, withdrawal, and stop movements were each mapped to a color and displayed at various scales; as the scale increased, the distinction between the insertion and withdrawal phases became clearer. Conclusion The information obtained in this study can be used as metadata for proficiency assessment. Because insertion and withdrawal are technically different movements, data on the scope's movement and phase can be quantified and used to express patterns unique to each colonoscopist and to assess proficiency. We also hope that these findings can contribute to medical-record informatics, so that colonoscopy charts can be conveyed graphically and effectively.
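One reading of the turning-point rule above: map each classified frame to +1 (insertion), -1 (withdrawal), or 0 (stop) and take the start of the window of length t whose summed value is lowest. The sketch below implements that reading; the frame rate, window handling, and tie-breaking are assumptions, not the authors' code.

```python
# Sketch: locate the cecum candidate from per-frame movement labels.
# Each frame is classified as insertion (+1), withdrawal (-1) or stop (0);
# the candidate is taken where the summed "graph area" over a window of t seconds is lowest.
import numpy as np

FPS = 30                      # assumed frame rate
LABEL_TO_VALUE = {"insertion": 1, "withdrawal": -1, "stop": 0}

def cecum_candidate(frame_labels: list, t_seconds: float = 30.0) -> int:
    """Return the frame index starting the window with the lowest movement sum."""
    values = np.array([LABEL_TO_VALUE[label] for label in frame_labels], dtype=float)
    window = int(t_seconds * FPS)
    # Sliding-window sums of the +1/-1/0 signal; argmin takes the earliest minimum.
    sums = np.convolve(values, np.ones(window), mode="valid")
    return int(np.argmin(sums))

# Toy example: 2 minutes of insertion, a brief stop, then withdrawal.
labels = ["insertion"] * (120 * FPS) + ["stop"] * (5 * FPS) + ["withdrawal"] * (120 * FPS)
print(cecum_candidate(labels))   # index near the insertion-to-withdrawal transition
```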
Affiliation(s)
- Minwoo Cho
- Interdisciplinary Program for Bioengineering, Graduate School, Seoul National University, Seoul, South Korea
- Jee Hyun Kim
- Department of Gastroenterology, Seoul National University Boramae Medical Center, Seoul, South Korea
- Kyoung Sup Hong
- Department of Gastroenterology, Mediplex Sejong Hospital, Incheon, South Korea
- Joo Sung Kim
- Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea
- Hyoun-Joong Kong
- Department of Biomedical Engineering, Chungnam National University College of Medicine, Daejeon, South Korea
- Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, South Korea
25
Masumoto H, Tabuchi H, Nakakura S, Ohsugi H, Enno H, Ishitobi N, Ohsugi E, Mitamura Y. Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images. PeerJ 2019; 7:e6900. [PMID: 31119087] [PMCID: PMC6510218] [DOI: 10.7717/peerj.6900]
Abstract
We evaluated the ability of a deep convolutional neural network to discriminate retinitis pigmentosa on ultrawide-field pseudocolor and ultrawide-field autofluorescence images. In total, 373 ultrawide-field pseudocolor and ultrawide-field autofluorescence images (150 retinitis pigmentosa; 223 normal) obtained from patients who visited the Department of Ophthalmology, Tsukazaki Hospital, were used. A convolutional neural network was trained on these data and evaluated by K-fold cross-validation (K = 5). The mean area under the curve of the ultrawide-field pseudocolor group was 0.998 (95% confidence interval (CI) [0.9953-1.0]) and that of the ultrawide-field autofluorescence group was 1.0 (95% CI [0.9994-1.0]). The sensitivity and specificity of the ultrawide-field pseudocolor group were 99.3% (95% CI [96.3%-100.0%]) and 99.1% (95% CI [96.1%-99.7%]), and those of the ultrawide-field autofluorescence group were 100% (95% CI [97.6%-100%]) and 99.5% (95% CI [96.8%-99.9%]), respectively. Heatmaps were in accordance with the clinicians' observations. Using the proposed deep neural network model, retinitis pigmentosa can be distinguished from healthy eyes with high sensitivity and specificity on ultrawide-field pseudocolor and ultrawide-field autofluorescence images.
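AUC confidence intervals such as those above can be estimated from pooled per-image scores; a percentile bootstrap is one common choice (the abstract does not state the exact method, so this sketch is illustrative only, with placeholder arrays in place of the pooled K-fold outputs):

```python
# Sketch: AUC with a percentile-bootstrap 95% CI, computed from pooled per-image scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true: np.ndarray, y_score: np.ndarray,
                          n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample images with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                                      # skip degenerate resamples
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return auc, lo, hi

# Placeholder labels and scores (1 = retinitis pigmentosa), not the study's data.
rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(150), np.zeros(223)])
y_score = np.clip(y_true * 0.8 + rng.normal(0.1, 0.15, y_true.shape), 0, 1)
print(auc_with_bootstrap_ci(y_true, y_score))
```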
Affiliation(s)
- Hiroki Masumoto
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Hitoshi Tabuchi
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Hideharu Ohsugi
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Eiko Ohsugi
- Department of Ophthalmology, Tsukazaki Hospital, Himeji, Japan
- Yoshinori Mitamura
- Department of Ophthalmology, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan