1. Feng L, Zhang Y, Wei W, Qiu H, Shi M. Applying deep learning to recognize the properties of vitreous opacity in ophthalmic ultrasound images. Eye (Lond) 2024; 38:380-385. [PMID: 37596401; PMCID: PMC10810903; DOI: 10.1038/s41433-023-02705-7]
Abstract
BACKGROUND To explore the feasibility of artificial intelligence technology based on deep learning to automatically recognize the properties of vitreous opacities in ophthalmic ultrasound images. METHODS A total of 2000 greyscale Doppler ultrasound images containing non-pathological eyes and three typical vitreous opacities, confirmed as physiological vitreous opacity (VO), asteroid hyalosis (AH), and vitreous haemorrhage (VH), were selected and labelled for each lesion type. Five residual network (ResNet) and two GoogLeNet models were trained to recognize vitreous lesions. Seventy-five percent of the images were randomly selected as the training set, and the remaining 25% served as the test set. The accuracy and parameter counts were recorded and compared among these seven deep learning (DL) models. The precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC) for recognizing vitreous lesions were calculated for the most accurate DL model. RESULTS The seven DL models differed significantly in accuracy and parameter count. GoogLeNet Inception V1 achieved the highest accuracy (95.5%) with the fewest parameters (10,315,580) in vitreous lesion recognition. It achieved precision values of 0.94, 0.94, 0.96, and 0.96, recall values of 0.94, 0.93, 0.97 and 0.98, and F1 scores of 0.94, 0.93, 0.96 and 0.97 for normal, VO, AH, and VH recognition, respectively. The AUC values for these four categories were 0.99, 1.0, 0.99, and 0.99, respectively. CONCLUSIONS GoogLeNet Inception V1 has shown promising results in ophthalmic ultrasound image recognition. As ultrasound image data accumulate, a wide variety of diagnostic information on eye diseases can be detected automatically by artificial intelligence technology based on deep learning.
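The per-class precision, recall, and F1 figures reported above follow directly from a 4x4 confusion matrix over the normal/VO/AH/VH classes. A minimal sketch of that bookkeeping (the example matrix is illustrative, not the paper's data):

```python
def per_class_metrics(confusion):
    """Precision, recall and F1 for each class of a square confusion matrix.

    confusion[i][j] = number of samples with true class i predicted as class j.
    Returns a list of (precision, recall, f1) tuples, one per class.
    """
    n = len(confusion)
    metrics = []
    for k in range(n):
        tp = confusion[k][k]
        fp = sum(confusion[i][k] for i in range(n)) - tp  # predicted k, true != k
        fn = sum(confusion[k][j] for j in range(n)) - tp  # true k, predicted != k
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics.append((precision, recall, f1))
    return metrics

# Illustrative 4-class matrix (rows: true normal/VO/AH/VH; columns: predicted).
example = [
    [47, 2, 1, 0],
    [3, 46, 1, 0],
    [0, 1, 49, 0],
    [0, 0, 1, 49],
]
scores = per_class_metrics(example)
```

AUC per class would additionally require the model's class probabilities, not just hard predictions.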
Affiliation(s)
- Li Feng
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Wei Wei
- Hebei Eye Hospital, Xingtai, China
- Hui Qiu
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
- Mingyu Shi
- Department of Ophthalmology, The Fourth Affiliated Hospital of China Medical University, Eye Hospital of China Medical University, The Key Laboratory of Lens in Liaoning Province, Shenyang, China
2. Soh ZD, Tan M, Nongpiur ME, Xu BY, Friedman D, Zhang X, Leung C, Liu Y, Koh V, Aung T, Cheng CY. Assessment of angle closure disease in the age of artificial intelligence: A review. Prog Retin Eye Res 2024; 98:101227. [PMID: 37926242; DOI: 10.1016/j.preteyeres.2023.101227]
Abstract
Primary angle closure glaucoma is a visually debilitating disease that is under-detected worldwide. Many of the challenges in managing primary angle closure disease (PACD) are related to the lack of convenient and precise tools for clinic-based disease assessment and monitoring. Artificial intelligence (AI)-assisted tools to detect and assess PACD have proliferated in recent years with encouraging results. Machine learning (ML) algorithms that utilize clinical data have been developed to categorize angle closure eyes by disease mechanism. Other ML algorithms that utilize image data have demonstrated good performance in detecting angle closure. Nonetheless, deep learning (DL) algorithms trained directly on image data generally outperformed traditional ML algorithms in detecting PACD, were able to accurately differentiate between angle status (open, narrow, closed), and automated the measurement of quantitative parameters. However, more work is required to expand the capabilities of these AI algorithms and for deployment into real-world practice settings. This includes the need for real-world evaluation, establishing the use case for different algorithms, and evaluating the feasibility of deployment while considering other clinical, economic, social, and policy-related factors.
Affiliation(s)
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore.
- Mingrui Tan
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore
- Monisha Esther Nongpiur
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore
- Benjamin Yixing Xu
- Roski Eye Institute, Keck School of Medicine, University of Southern California, 1450 San Pablo St #4400, Los Angeles, CA, 90033, USA
- David Friedman
- Department of Ophthalmology, Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA; Massachusetts Eye and Ear, Mass General Brigham, 243 Charles Street, Boston, MA, 02114, USA
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat Sen University, No. 54 Xianlie South Road, Yuexiu District, Guangzhou, China
- Christopher Leung
- Department of Ophthalmology, School of Clinical Medicine, The University of Hong Kong, Cyberport 4, 100 Cyberport Road, Hong Kong; Department of Ophthalmology, Queen Mary Hospital, 102 Pok Fu Lam Road, Hong Kong
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*Star), 1 Fusionopolis Way, 138632, Singapore
- Victor Koh
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, 20 College Road, 169856, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore; Ophthalmology & Visual Sciences Academic Clinical Programme, Academic Medicine, Duke-NUS Medical School, 8 College Road, 169857, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, 1E Kent Ridge Road, NUHS Tower Block, Level 7, 119228, Singapore
3. Li Z, Yang J, Wang X, Zhou S. Establishment and Evaluation of Intelligent Diagnostic Model for Ophthalmic Ultrasound Images Based on Deep Learning. Ultrasound Med Biol 2023; 49:1760-1767. [PMID: 37137742; DOI: 10.1016/j.ultrasmedbio.2023.03.022]
Abstract
OBJECTIVE The goal of the work described here was to construct a deep learning-based intelligent diagnostic model for ophthalmic ultrasound images to provide auxiliary analysis for the intelligent clinical diagnosis of posterior ocular segment diseases. METHODS The InceptionV3-Xception fusion model was established by using two pre-trained network models-InceptionV3 and Xception-in series to achieve multilevel feature extraction and fusion, and a classifier better suited to the multiclass recognition task for ophthalmic ultrasound images was designed to classify 3402 ophthalmic ultrasound images. Accuracy, macro-average precision, macro-average sensitivity, macro-average F1 score, receiver operating characteristic (ROC) curves and area under the curve (AUC) were used as model evaluation metrics, and the credibility of the model was assessed by testing its decision basis with a gradient-weighted class activation mapping (Grad-CAM) method. RESULTS The accuracy, precision, sensitivity and AUC of the InceptionV3-Xception fusion model on the test set reached 0.9673, 0.9521, 0.9528 and 0.9988, respectively. The model's decision basis was consistent with the clinical diagnostic criteria used by ophthalmologists, indicating good reliability. CONCLUSION The deep learning-based intelligent diagnostic model for ophthalmic ultrasound images can accurately screen and identify five posterior ocular segment diseases, which benefits the intelligent development of ophthalmic clinical diagnosis.
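The fusion idea above — extract features with two pretrained backbones and classify on the combined representation — can be sketched in a framework-agnostic way. Here the two "backbones" are stand-in functions and the classifier is a bare linear head with random weights; the real model uses InceptionV3 and Xception feature maps and a trained classification head:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_a(image):
    # Stand-in for InceptionV3 feature extraction: any fixed-length embedding.
    return image.mean(axis=0)  # shape (width,)

def backbone_b(image):
    # Stand-in for Xception feature extraction: a second, different embedding.
    return image.mean(axis=1)  # shape (height,)

def fused_features(image):
    # Feature fusion: concatenate both embeddings into one vector.
    return np.concatenate([backbone_a(image), backbone_b(image)])

def classify(image, class_weights):
    # Linear classification head over the fused feature vector.
    scores = class_weights @ fused_features(image)
    return int(np.argmax(scores))

image = rng.random((32, 32))    # toy stand-in for an ultrasound image
weights = rng.random((5, 64))   # 5 disease classes over 32+32 fused features
label = classify(image, weights)
```

The shapes, backbone stand-ins, and random classifier here are all illustrative assumptions; only the concatenate-then-classify structure reflects the abstract.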
Affiliation(s)
- Zemeng Li
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Jun Yang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Xiaochun Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Sheng Zhou
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
4. Xu Z, Xu J, Shi C, Xu W, Jin X, Han W, Jin K, Grzybowski A, Yao K. Artificial Intelligence for Anterior Segment Diseases: A Review of Potential Developments and Clinical Applications. Ophthalmol Ther 2023; 12:1439-1455. [PMID: 36884203; PMCID: PMC10164195; DOI: 10.1007/s40123-023-00690-4]
Abstract
Artificial intelligence (AI) technology is promising in the field of healthcare. With the developments of big data and image-based analysis, AI shows potential value in ophthalmology applications. Recently, machine learning and deep learning algorithms have made significant progress. Emerging evidence has demonstrated the capability of AI in the diagnosis and management of anterior segment diseases. In this review, we provide an overview of AI applications and potential future applications in anterior segment diseases, focusing on cornea, refractive surgery, cataract, anterior chamber angle detection, and refractive error prediction.
Affiliation(s)
- Zhe Xu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Jia Xu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Ce Shi
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Wen Xu
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Xiuming Jin
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Wei Han
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Kai Jin
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
- Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Ke Yao
- Eye Center of the Second Affiliated Hospital, School of Medicine, Zhejiang University, No. 88 Jiefang Road, Hangzhou, 310009, Zhejiang, China
5. Yoo TK, Ryu IH, Kim JK, Lee IS, Kim HK. A deep learning approach for detection of shallow anterior chamber depth based on the hidden features of fundus photographs. Comput Methods Programs Biomed 2022; 219:106735. [PMID: 35305492; DOI: 10.1016/j.cmpb.2022.106735]
Abstract
BACKGROUND AND OBJECTIVES Patients with angle-closure glaucoma (ACG) are asymptomatic until they experience a painful attack. Shallow anterior chamber depth (ACD) is considered a significant risk factor for ACG. We propose a deep learning approach to detect shallow ACD using fundus photographs and to identify the hidden features of shallow ACD. METHODS This retrospective study assigned healthy subjects to the training (n = 1188 eyes) and test (n = 594 eyes) datasets (prospective validation design). We used a deep learning approach to estimate ACD and built a classification model to identify eyes with a shallow ACD. The proposed method, involving subtraction of the input and output images of CycleGAN followed by a thresholding algorithm, was adopted to visualize the characteristic features of fundus photographs with a shallow ACD. RESULTS The deep learning model integrating fundus photographs and clinical variables achieved areas under the receiver operating characteristic curve of 0.978 (95% confidence interval [CI], 0.963-0.988) for an ACD ≤ 2.60 mm and 0.895 (95% CI, 0.868-0.919) for an ACD ≤ 2.80 mm, outperforming the regression model that used only clinical variables. However, the difference between shallow and deep ACD classes on fundus photographs was difficult to detect with the naked eye, and we were unable to identify the features of shallow ACD using Grad-CAM. The CycleGAN-based feature images showed that the areas around the macula and optic disc contributed significantly to the classification of fundus photographs with a shallow ACD. CONCLUSIONS We demonstrated the feasibility of a novel deep learning model to detect a shallow ACD as a screening tool for ACG using fundus photographs. The CycleGAN-based feature map revealed hidden characteristic features of shallow ACD that were previously undetectable by conventional techniques and ophthalmologists. This framework will facilitate the early detection of shallow ACD so that the risks associated with ACG are not overlooked.
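The visualization step described above — subtract the CycleGAN output from its input and threshold the residual — is simple image arithmetic. A hedged numpy sketch (the threshold value and toy arrays are illustrative; the abstract does not specify the paper's exact thresholding algorithm):

```python
import numpy as np

def residual_mask(input_img, generated_img, threshold=0.2):
    """Binary map of where the generator changed the image most.

    Pixels whose absolute input-output difference exceeds `threshold`
    are flagged as contributing to the class decision.
    """
    diff = np.abs(input_img.astype(float) - generated_img.astype(float))
    return diff > threshold

# Toy example: the "generator" alters only a small patch near the centre.
original = np.zeros((8, 8))
generated = original.copy()
generated[3:5, 3:5] = 0.9
mask = residual_mask(original, generated)
```

In the study's setting, `original` would be a real fundus photograph and `generated` the CycleGAN translation to the opposite ACD class; the mask then highlights the hidden discriminative regions.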
Affiliation(s)
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, Cheongju, South Korea.
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
6. Wang W, Wang L, Wang X, Zhou S, Lin S, Yang J. A Deep Learning System for Automatic Assessment of Anterior Chamber Angle in Ultrasound Biomicroscopy Images. Transl Vis Sci Technol 2021; 10:21. [PMID: 34570190; PMCID: PMC8479575; DOI: 10.1167/tvst.10.11.21]
Abstract
Purpose To develop and assess a deep learning system that automatically detects angle closure and quantitatively measures angle parameters from ultrasound biomicroscopy (UBM) images. Methods A total of 3788 UBM images (2146 open angle and 1642 angle closure) from 1483 patients were collected. We developed a convolutional neural network (CNN) based on the InceptionV3 network for automatic classification of angle closure and open angle. For non-closed images, we developed a CNN based on the EfficientNetB3 network for automatic localization of the scleral spur and the angle recess; a U-Net was then used to segment the anterior chamber angle (ACA) tissue automatically. Based on the results of the latter two processes, we developed an algorithm to automatically measure the trabecular-iris angle (TIA500 and TIA750), angle-opening distance (AOD500 and AOD750), and angle recess area (ARA500 and ARA750) for quantitative evaluation of angle width. Results Using manual labeling as the reference standard, the ACA classification network's accuracy reached 98.18%, and the sensitivity and specificity for angle closure reached 98.74% and 97.44%, respectively. The deep learning system automated the measurement of the angle parameters, and the mean differences between automatic and manual measurements were generally small. The coefficients of variation of TIA500, TIA750, AOD500, AOD750, ARA500, and ARA750 measured by the deep learning system were 5.77%, 4.67%, 10.76%, 7.71%, 16.77%, and 12.70%, respectively. The within-subject standard deviations were 5.77 degrees, 4.56 degrees, 155.92 µm, 147.51 µm, 0.10 mm2, and 0.12 mm2, respectively. The intraclass correlation coefficients of all the angle parameters were greater than 0.935. Conclusions The deep learning system can effectively and accurately evaluate the ACA automatically based on fully automated analysis of a UBM image. Translational Relevance The system could automatically detect angle closure and quantitatively measure angle parameters from UBM images, enhancing the intelligent diagnosis and management of primary angle-closure glaucoma.
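Once the scleral spur, angle recess, and the corneal/iris points 500 µm (or 750 µm) from the spur are located, TIA and AOD reduce to plain 2-D geometry. A simplified sketch under that assumption (locating those points on the segmented tissue is the hard part the networks solve, and is not shown; the coordinates below are toy values):

```python
import math

def tia_degrees(recess, corneal_point, iris_point):
    """Trabecular-iris angle: the angle at the recess between the two arms."""
    ax, ay = corneal_point[0] - recess[0], corneal_point[1] - recess[1]
    bx, by = iris_point[0] - recess[0], iris_point[1] - recess[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def aod(corneal_point, iris_point):
    """Angle-opening distance: corneal endothelium point to the iris point."""
    return math.dist(corneal_point, iris_point)

# Toy coordinates in micrometres: recess at the origin, iris along the x-axis.
tia500 = tia_degrees((0, 0), (500, 200), (500, 0))
aod500 = aod((500, 200), (500, 0))
```

ARA would additionally integrate the area enclosed between the two surfaces out to the 500/750 µm mark.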
Affiliation(s)
- Wensai Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Lingxiao Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Xiaochun Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Sheng Zhou
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Song Lin
- Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Jun Yang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
7. Wang W, Wang L, Wang T, Wang X, Zhou S, Yang J, Lin S. Automatic Localization of the Scleral Spur Using Deep Learning and Ultrasound Biomicroscopy. Transl Vis Sci Technol 2021; 10:28. [PMID: 34427626; PMCID: PMC8399238; DOI: 10.1167/tvst.10.9.28]
Abstract
Purpose The purpose of this study was to develop a convolutional neural network (CNN) for automated localization of the scleral spur in ultrasound biomicroscopy (UBM) images of open-angle eyes. Methods UBM images were acquired, and one glaucoma specialist provided reference coordinates of scleral spur locations in all images. A CNN model based on the EfficientNetB3 architecture was developed to detect the scleral spur in each image. Prediction errors and Euclidean distances were used to evaluate the localization performance of the CNN model. Trabecular-iris angle 500 (TIA500) and angle-opening distance 500 (AOD500) were measured and analyzed using the scleral spur locations provided by the specialist and those predicted by the CNN model. Results The CNN was developed using a training dataset of 2328 images and tested on an independent dataset of 258 images. The mean absolute prediction errors of the CNN model were 48.06 ± 45.40 µm for X-coordinates and 30.84 ± 27.03 µm for Y-coordinates; the mean absolute intraobserver variability was 47.80 ± 44.45 µm and 29.50 ± 25.77 µm, respectively. The mean Euclidean distance of the CNN was 60.41 ± 49.02 µm, and the intraobserver mean Euclidean distance was 59.78 ± 47.12 µm. The mean absolute error was 1.26 ± 1.38 degrees for TIA500 and 0.039 ± 0.051 mm for AOD500 across all test images. Conclusions A CNN can detect the scleral spur on UBM images of open-angle eyes with performance similar to that of a glaucoma specialist. Translational Relevance Deep learning algorithms for automating scleral spur localization would facilitate quantitative assessment of angle opening and the risk of angle closure.
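The localization metrics above are mean absolute per-axis errors plus the mean Euclidean distance between predicted and reference spur coordinates. A minimal sketch with made-up coordinates:

```python
import math

def localization_errors(predicted, reference):
    """Mean absolute X error, mean absolute Y error, mean Euclidean distance.

    `predicted` and `reference` are equal-length lists of (x, y) points
    in micrometres.
    """
    n = len(predicted)
    mae_x = sum(abs(p[0] - r[0]) for p, r in zip(predicted, reference)) / n
    mae_y = sum(abs(p[1] - r[1]) for p, r in zip(predicted, reference)) / n
    mean_euclid = sum(math.dist(p, r) for p, r in zip(predicted, reference)) / n
    return mae_x, mae_y, mean_euclid

# Illustrative points, not the paper's data.
pred = [(100, 205), (310, 95)]
ref = [(103, 201), (306, 98)]
mae_x, mae_y, euclid = localization_errors(pred, ref)
```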
Affiliation(s)
- Wensai Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Lingxiao Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Tao Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Xiaochun Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Sheng Zhou
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Jun Yang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, China
- Song Lin
- Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
8. Li W, Chen Q, Jiang C, Shi G, Deng G, Sun X. Automatic Anterior Chamber Angle Classification Using Deep Learning System and Anterior Segment Optical Coherence Tomography Images. Transl Vis Sci Technol 2021; 10:19. [PMID: 34111263; PMCID: PMC8142723; DOI: 10.1167/tvst.10.6.19]
Abstract
Purpose The purpose of this study was to develop a software package for automatic classification of the anterior chamber angle using anterior segment optical coherence tomography (AS-OCT). Methods AS-OCT images were collected from subjects with open, narrow, and closed anterior chamber angles, graded based on ultrasound biomicroscopy (UBM) results. The InceptionV3 network and the transfer learning technique were applied in the design of an algorithm for anterior chamber angle classification. Classification performance was evaluated by fivefold cross-validation and on an independent test dataset. Results The proposed algorithm reached a sensitivity of 0.999 and a specificity of 1.000 in distinguishing closed from non-closed angles. In the three-way classification into open angle, narrow angle, and angle closure, the method reached an overall sensitivity of 0.989 and specificity of 0.995; per class, sensitivity and specificity were 1.000 and 1.000 for angle closure, 0.983 and 0.993 for narrow angle, and 0.985 and 0.991 for open angle. Conclusions The experimental results showed that the proposed method can classify the anterior chamber angle from AS-OCT images with high accuracy and could be of value in future practice. Translational Relevance The proposed deep learning-based method that automates anterior chamber angle classification can facilitate clinical assessment of glaucoma.
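The per-class sensitivities and specificities reported above are one-vs-rest counts over the three angle grades. A minimal sketch (the gradings below are illustrative, not the study's data):

```python
def sensitivity_specificity(true_labels, predicted_labels, positive_class):
    """One-vs-rest sensitivity and specificity for one class.

    Assumes both the positive and negative groups are non-empty.
    """
    tp = fn = tn = fp = 0
    for t, p in zip(true_labels, predicted_labels):
        if t == positive_class:
            if p == positive_class:
                tp += 1        # positive case found
            else:
                fn += 1        # positive case missed
        else:
            if p == positive_class:
                fp += 1        # negative case falsely flagged
            else:
                tn += 1        # negative case correctly passed
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative gradings: 0 = open, 1 = narrow, 2 = angle closure.
truth = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
preds = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
sens_closure, spec_closure = sensitivity_specificity(truth, preds, 2)
```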
Affiliation(s)
- Wanyue Li
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Qian Chen
- Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia (Fudan University) and Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Chunhui Jiang
- Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia (Fudan University) and Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Guohua Shi
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
- Guohua Deng
- Department of Ophthalmology, the Third People's Hospital of Changzhou, Changzhou, Jiangsu, China
- Xinghuai Sun
- Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia (Fudan University) and Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
9. Porporato N, Tun TA, Baskaran M, Wong DWK, Husain R, Fu H, Sultana R, Perera S, Schmetterer L, Aung T. Towards 'automated gonioscopy': a deep learning algorithm for 360° angle assessment by swept-source optical coherence tomography. Br J Ophthalmol 2021; 106:1387-1392. [PMID: 33846160; DOI: 10.1136/bjophthalmol-2020-318275]
Abstract
AIMS To validate a deep learning (DL) algorithm (DLA) for 360° angle assessment on swept-source optical coherence tomography (SS-OCT) (CASIA SS-1000, Tomey Corporation, Nagoya, Japan). METHODS This was a reliability analysis from a cross-sectional study. An independent test set of 39 936 SS-OCT scans from 312 phakic subjects (128 SS-OCT meridional scans per eye) was analysed. Participants above 50 years with no previous history of intraocular surgery were consecutively recruited from glaucoma clinics. Indentation gonioscopy and dark-room SS-OCT were performed. Gonioscopic angle closure was defined as non-visibility of the posterior trabecular meshwork in ≥180° of the angle. For each subject, all images were analysed for gonioscopic angle-closure detection by a DL network based on the VGG-16 architecture. Areas under the receiver operating characteristic curve (AUCs) and other diagnostic performance indicators were calculated for the DLA (index test) against gonioscopy (reference standard). RESULTS Approximately 80% of the participants were Chinese, and more than half were women (57.4%). The prevalence of gonioscopic angle closure in this hospital-based sample was 20.2%. Across the 39 936 SS-OCT scans, the AUC of the DLA for classifying gonioscopic angle closure was 0.85 (95% CI: 0.80 to 0.90), with a sensitivity of 83% and a specificity of 87% at the optimal cut-off of >35% circumferential angle closure. CONCLUSIONS The DLA exhibited good diagnostic performance for detection of gonioscopic angle closure on 360° SS-OCT scans in a glaucoma clinic setting. Such an algorithm, which does not depend on identification of the scleral spur, may be the foundation for non-contact, fast and reproducible 'automated gonioscopy' in future.
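An operating point like the ">35% of circumferential angle closure" cut-off above is typically chosen by maximizing Youden's index (sensitivity + specificity - 1) over candidate thresholds. A hedged sketch on made-up per-eye data (the abstract does not state which criterion the authors used):

```python
def best_cutoff(percent_closed, gonioscopy_closed):
    """Threshold on circumferential closure (%) maximizing Youden's
    J = sensitivity + specificity - 1 against the gonioscopy reference.
    Returns (threshold, J); an eye is called positive when its score > threshold."""
    best = (None, -1.0)
    for thr in sorted(set(percent_closed)):
        pairs = list(zip(percent_closed, gonioscopy_closed))
        tp = sum(1 for s, y in pairs if s > thr and y)
        fn = sum(1 for s, y in pairs if s <= thr and y)
        tn = sum(1 for s, y in pairs if s <= thr and not y)
        fp = sum(1 for s, y in pairs if s > thr and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (thr, j)
    return best

# Illustrative per-eye circumferential closure (%) and gonioscopy labels.
scores = [5, 10, 20, 30, 40, 60, 70, 90]
closed = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, youden = best_cutoff(scores, closed)
```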
Affiliation(s)
- Natalia Porporato
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore
- Tin A Tun
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore
- Mani Baskaran
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore
- Damon W K Wong
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore
- Rahat Husain
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
- Shamira Perera
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore
- Leopold Schmetterer
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tin Aung
- Singapore Eye Research Institute/Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
10. Alexander JL, Wei L, Palmer J, Darras A, Levin MR, Berry JL, Ludeman E. A systematic review of ultrasound biomicroscopy use in pediatric ophthalmology. Eye (Lond) 2021; 35:265-276. [PMID: 32963311; PMCID: PMC7853121; DOI: 10.1038/s41433-020-01184-4]
Abstract
Ultrasound biomicroscopy (UBM) is the only available option for noninvasive, high-resolution imaging of the intricate iridociliary complex, and for anterior segment imaging in the presence of corneal haze or opacity. While these unique features render UBM essential for specific types of trauma, congenital anomalies, and anterior segment tumors, UBM imaging has found clinical utility in a broad spectrum of diseases for structural assessments not limited to the anterior intraocular anatomy, but extending to eyelid and orbit anatomy. This imaging tool has a very specific niche in the pediatric population, where anterior segment disease can be accompanied by corneal opacity or clouding, and anomalies posterior to the iris may be present. Pediatric patients present additional diagnostic challenges: they are often unable to offer detailed histories or fully cooperate with examination, thus amplifying the need for high-resolution imaging. The purpose of this systematic review is to identify and synthesize the body of literature on the use of UBM to describe, evaluate, diagnose, or optimize treatment of pediatric ocular disease. The collated peer-reviewed research details the utility of this imaging modality, clarifies the structures and diseases most relevant for this tool, and describes quantitative and qualitative features of UBM imaging among pediatric subjects. The summary includes the specific applications available to enhance clinical care for pediatric eye disease.
Affiliation(s)
- Janet L Alexander
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, 419 West Redwood Street, Suite 479, Baltimore, MD, 21201, USA
- Libby Wei
- University of Maryland School of Medicine, 419 West Redwood Street, Suite 479, Baltimore, MD, 21201, USA
- Jamie Palmer
- University of Maryland School of Medicine, 419 West Redwood Street, Suite 479, Baltimore, MD, 21201, USA
- Alex Darras
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, 419 West Redwood Street, Suite 479, Baltimore, MD, 21201, USA
- Moran R Levin
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, 419 West Redwood Street, Suite 479, Baltimore, MD, 21201, USA
- Jesse L Berry
- Children's Hospital Los Angeles & The USC Roski Eye Institute, USC Keck School of Medicine, 4650 Sunset Blvd., Mailstop #88, Los Angeles, CA, 90027, USA
- Emilie Ludeman
- Health Sciences and Human Services Library, University of Maryland, 601 W Lombard Street, Baltimore, MD, 21201-1512, USA
11
Le C, Baroni M, Vinnett A, Levin MR, Martinez C, Jaafar M, Madigan WP, Alexander JL. Deep Learning Model for Accurate Automatic Determination of Phakic Status in Pediatric and Adult Ultrasound Biomicroscopy Images. Transl Vis Sci Technol 2020; 9:63. [PMID: 33409005 PMCID: PMC7779873 DOI: 10.1167/tvst.9.2.63] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 11/06/2020] [Indexed: 12/31/2022] Open
Abstract
Purpose Ultrasound biomicroscopy (UBM) is a noninvasive method for assessing anterior segment anatomy. Previous studies were prone to intergrader variability, lacked assessment of the lens-iris diaphragm, and excluded pediatric subjects. Lens status classification is an objective task applicable in pediatric and adult populations. We developed and validated a neural network to classify lens status from UBM images. Methods Two hundred eighty-five UBM images were collected in the Pediatric Anterior Segment Imaging Innovation Study (PASIIS) from 80 eyes of 51 pediatric and adult subjects (median age = 4.6 years, range = 3 weeks to 90 years) with lens status phakic, aphakic, or pseudophakic (n = 33, 7, and 21 subjects, respectively). Using transfer learning, a pretrained DenseNet-121 model was fine-tuned on these images. Metrics were calculated for testing dataset results aggregated from fivefold cross-validation. For each fold, 20% of total subjects were partitioned for testing, and the remaining subjects were used for training and validation (80:20 split). Results Our neural network, trained across 60 epochs, achieved recall 96.15%, precision 96.14%, F1-score 96.14%, false positive rate 3.74%, and area under the curve (AUC) 0.992. Feature saliency heatmaps consistently involved the lens. Algorithm performance was compared using two image sets, one from subjects of all ages and a second restricted to subjects under age 10 years, with similar performance in both circumstances. Conclusions A neural network trained on a relatively small UBM image set classified lens status with satisfactory recall and precision. Adult and pediatric image sets offered roughly equivalent performance. Future studies will explore automated UBM image classification for complex anterior segment pathology. Translational Relevance Deep learning models can evaluate lens status from UBM images in adult and pediatric subjects using a limited image set.
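The fivefold cross-validation described in the Methods partitions by subject rather than by image, so images from the same eye never straddle the train/test boundary. A minimal sketch of that subject-level grouping (the function name and toy data are illustrative, not taken from the paper):

```python
import random

def subject_level_kfold(image_subjects, k=5, seed=0):
    """Assign image indices to k folds so that all images from one
    subject land in the same fold (prevents train/test leakage)."""
    subjects = sorted(set(image_subjects))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    # Round-robin subjects into folds after shuffling.
    fold_of = {s: i % k for i, s in enumerate(subjects)}
    folds = [[] for _ in range(k)]
    for idx, s in enumerate(image_subjects):
        folds[fold_of[s]].append(idx)
    return folds

# Toy example: 10 images from 4 subjects, split into 2 folds.
imgs = ["s1", "s1", "s2", "s2", "s2", "s3", "s3", "s4", "s4", "s4"]
folds = subject_level_kfold(imgs, k=2)
```

Each fold can then serve in turn as the held-out test set while the remaining subjects are split again for training and validation.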
Affiliation(s)
- Christopher Le
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Mariana Baroni
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Alfred Vinnett
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Moran R Levin
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
- Camilo Martinez
- Department of Ophthalmology, Children's National Medical System, Washington, DC, USA
- Mohamad Jaafar
- Department of Ophthalmology, Children's National Medical System, Washington, DC, USA
- William P Madigan
- Department of Ophthalmology, Children's National Medical System, Washington, DC, USA
- Janet L Alexander
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD, USA
12
Yu J, Li W, Chen Q, Deng G, Jiang C, Liu G, Shi G, Sun X. Automatic Classification of Anterior Chamber Angle Based on Ultrasound Biomicroscopy Images. Ophthalmic Res 2020; 64:732-739. [PMID: 32810851 DOI: 10.1159/000510924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 08/14/2020] [Indexed: 11/19/2022]
Abstract
INTRODUCTION Evaluating the anterior chamber angle (ACA) is important for the early diagnosis and treatment of primary angle-closure glaucoma. The assessment of ultrasound biomicroscopy (UBM) images usually requires well-trained ophthalmologists, and screening patients with a narrow ACA is usually time- and labor-intensive. Therefore, automatic assessment of UBM images could be cost-effective and valuable in daily practice. OBJECTIVE The objective of this study was to develop an automatic method for localizing and classifying the ACA based on UBM images. METHODS UBM images were collected, and a coarse-to-fine method was used to localize the apex of the angle recess. By analyzing the grayscale features around the angle recess, closed angles were identified, and the rest were then classified as open or narrow angles based on the degree of the ACA. Using manual classification as the reference standard, the overall accuracy (OAcc), sensitivity (Sen), specificity (Spe), and balanced accuracy of the automatic classification method were evaluated. RESULTS A total of 540 UBM images from 290 participants were analyzed. Using these UBM images and the proposed method, the ACA was classified as open, narrow, or closed. During processing, the method localized the angle recess with 95% accuracy. The OAcc of the ACA classification was 77.8%, and the Spe and Sen of our method were 85.8% and 81.7% for angle closure, 88.9% and 75.6% for open angles, and 91.9% and 76.1% for narrow angles, respectively. CONCLUSIONS Our method of automatic angle localization and classification based on UBM images is feasible and reliable. The automatic classification of the ACA provides a basis and reference for future studies.
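The per-class sensitivity and specificity figures reported above are one-vs-rest statistics derived from a three-way classification. A minimal sketch of that calculation (the function name and toy labels are illustrative, not the study's data):

```python
def one_vs_rest_sen_spe(y_true, y_pred, cls):
    """Sensitivity and specificity for one class treated as positive,
    with all other classes pooled as negative."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    sen = tp / (tp + fn) if tp + fn else 0.0
    spe = tn / (tn + fp) if tn + fp else 0.0
    return sen, spe

# Toy three-way labels: open / narrow / closed.
y_true = ["open", "open", "narrow", "closed", "closed", "narrow"]
y_pred = ["open", "narrow", "narrow", "closed", "open", "narrow"]
sen, spe = one_vs_rest_sen_spe(y_true, y_pred, "closed")
# tp=1, fn=1, tn=4, fp=0 -> sen = 0.5, spe = 1.0
```

Repeating this for each of the three classes yields the per-class Sen/Spe pairs quoted in the Results.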
Affiliation(s)
- Jian Yu
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, China; Key Laboratory of Myopia of State Health Ministry, Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Wanyue Li
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Qian Chen
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, China; Key Laboratory of Myopia of State Health Ministry, Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Guohua Deng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Shanghai, China
- Chunhui Jiang
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, China; Key Laboratory of Myopia of State Health Ministry, Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- Guangxing Liu
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Guohua Shi
- Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Xinghuai Sun
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, China; Key Laboratory of Myopia of State Health Ministry, Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, China; NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
13
Heisler M, Karst S, Lo J, Mammo Z, Yu T, Warner S, Maberley D, Beg MF, Navajas EV, Sarunic MV. Ensemble Deep Learning for Diabetic Retinopathy Detection Using Optical Coherence Tomography Angiography. Transl Vis Sci Technol 2020; 9:20. [PMID: 32818081 PMCID: PMC7396168 DOI: 10.1167/tvst.9.2.20] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Accepted: 01/23/2020] [Indexed: 02/06/2023] Open
Abstract
Purpose To evaluate the role of ensemble learning techniques with deep learning in classifying diabetic retinopathy (DR) in optical coherence tomography angiography (OCTA) images and their corresponding co-registered structural images. Methods A total of 463 volumes from 380 eyes were acquired using the 3 × 3-mm OCTA protocol on the Zeiss Plex Elite system. En face images of the superficial and deep capillary plexus were exported from both the optical coherence tomography and OCTA data. Component neural networks were constructed using single data types and fine-tuned using VGG19, ResNet50, and DenseNet architectures pretrained on ImageNet weights. These networks were then ensembled using majority soft voting and stacking techniques. Results were compared with a classifier using manually engineered features. Class activation maps (CAMs) were created using the original CAM algorithm and Grad-CAM. Results The networks trained with the VGG19 architecture outperformed the networks trained on deeper architectures. Ensemble networks constructed from the four fine-tuned VGG19 architectures achieved accuracies of 0.92 and 0.90 for the majority soft voting and stacking methods, respectively. Both ensemble methods outperformed the highest single data-type network and the network trained on hand-crafted features. Grad-CAM was shown to more accurately highlight areas of disease. Conclusions Ensemble learning increases the predictive accuracy of CNNs for classifying referable DR on OCTA datasets. Translational Relevance Because the diagnostic accuracy of OCTA images is shown to be greater than that of the manually extracted features currently used in the literature, the proposed methods may be beneficial toward developing clinically valuable solutions for DR diagnosis.
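Majority soft voting, as used above, averages the per-class probabilities produced by the component networks and predicts the class with the highest average. A minimal sketch (the function name and probability values are illustrative, not taken from the paper):

```python
def soft_vote(prob_lists):
    """Average per-model class-probability vectors for one sample,
    then return the index of the highest average probability."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Three hypothetical models scoring one image for (no-DR, referable-DR):
model_probs = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]
pred = soft_vote(model_probs)  # average = [0.367, 0.633] -> class 1
```

Stacking differs in that, instead of a fixed average, a second-stage learner is trained on the component networks' outputs.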
Affiliation(s)
- Morgan Heisler
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Sonja Karst
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Julian Lo
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Zaid Mammo
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Timothy Yu
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Simon Warner
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- David Maberley
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
- Eduardo V Navajas
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Marinko V Sarunic
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada