1
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and clinical implementation. During algorithm development, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make clinicians wary or skeptical. During implementation, challenges include dealing with lower-quality images in real-world settings and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
2
Chen Z, Ishikawa H, Wang Y, Wollstein G, Schuman JS. Deep-Learning-Based Group Pointwise Spatial Mapping of Structure to Function in Glaucoma. Ophthalmol Sci 2024; 4:100523. [PMID: 38881610] [PMCID: PMC11179402] [DOI: 10.1016/j.xops.2024.100523]
Abstract
Purpose To establish a generalizable pointwise spatial relationship between structure and function through occlusion analysis of a deep-learning (DL) model for predicting visual field (VF) sensitivities from 3-dimensional (3D) OCT scans. Design Retrospective cross-sectional study. Participants A total of 2151 eyes from 1129 patients. Methods A DL model was trained to predict 52 VF sensitivities of 24-2 standard automated perimetry from 3D spectral-domain OCT images of the optic nerve head (ONH) using 12 915 OCT-VF pairs. Using occlusion analysis, the contribution of each individual cube covering a 240 × 240 × 31.25 μm region of the ONH to the model's prediction was systematically evaluated for each OCT-VF pair in a separate test set of 996 OCT-VF pairs. After simple translation (shifting along the x- and y-axes to match the ONH center), group t-statistic maps were derived to visualize statistically significant ONH regions for each VF test point within a group. This analysis quantified the importance of each super voxel (240 × 240 × 31.25 μm, together covering the entire 4.32 × 4.32 × 1.125 mm ONH cube) in predicting VF test points for specific patient groups. Main Outcome Measures The region of the ONH corresponding to each VF test point and the effect of the former on the latter. Results The test set was divided into 2 groups, a healthy-to-early-glaucoma group (792 OCT-VF pairs, VF mean deviation [MD]: -1.32 ± 1.90 decibels [dB]) and a moderate-to-advanced-glaucoma group (204 OCT-VF pairs, VF MD: -17.93 ± 7.68 dB). Two-dimensional group t-statistic maps (x, y projection) were generated for both groups, assigning related ONH regions to VF test points. The influential structural locations identified for VF sensitivity prediction at each test point aligned well with existing knowledge of structure-function spatial relationships. Conclusions This study visualized the global trend of point-by-point spatial relationships between OCT-based structure and VF-based function without prior knowledge or segmentation of the OCTs. The revealed spatial correlations were consistent with previously published mappings, showing that trained machine learning models can be interrogated without applying any prior knowledge, in a way that is potentially robust and free from bias. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
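Occlusion analysis of this kind is model-agnostic and can be outlined in a few lines. The Python sketch below shows how a per-super-voxel importance map might be computed for a volumetric OCT-to-VF regression model; the `model.predict` interface, array shapes, and patch size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def occlusion_sensitivity(model, volume, patch=(8, 8, 4), fill=0.0):
    """Occlusion analysis for a 3D-OCT -> 52-point VF regression model.

    Assumptions for illustration: `model.predict` maps a (1, X, Y, Z)
    volume to a (1, 52) vector of VF sensitivities, and `patch` is the
    size of one "super voxel" in voxels. Returns an (nx, ny, nz, 52)
    array holding the drop in each predicted sensitivity when the
    corresponding region is masked out.
    """
    baseline = model.predict(volume[None])[0]              # (52,)
    X, Y, Z = volume.shape
    px, py, pz = patch
    nx, ny, nz = X // px, Y // py, Z // pz
    impact = np.zeros((nx, ny, nz, baseline.size))
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                occluded = volume.copy()
                occluded[i*px:(i+1)*px, j*py:(j+1)*py, k*pz:(k+1)*pz] = fill
                pred = model.predict(occluded[None])[0]
                impact[i, j, k] = baseline - pred          # positive = region supports that VF point
    return impact
```

Group-level t-statistic maps could then be formed by aligning the per-eye impact maps on the ONH center and applying a one-sample t-test per super voxel (e.g. scipy.stats.ttest_1samp), mirroring the grouping by disease severity described above.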
Affiliation(s)
- Zhiqi Chen
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Hiroshi Ishikawa
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Yao Wang
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Center for Neural Science, NYU College of Arts and Sciences, New York, New York
- Joel S. Schuman
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Center for Neural Science, NYU College of Arts and Sciences, New York, New York
- Glaucoma Service, Eye Hospital, Philadelphia, Pennsylvania
- Department of Ophthalmology, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania
- Drexel University School of Biomedical Engineering, Sciences and Health Studies
3
Pham AT, Pan AA, Yohannan J. Big data in visual field testing for glaucoma. Taiwan J Ophthalmol 2024; 14:289-298. [PMID: 39430358] [PMCID: PMC11488814] [DOI: 10.4103/tjo.tjo-d-24-00059]
Abstract
Recent technological advancements and the advent of ever-growing databases in health care have fueled the emergence of "big data" analytics. Big data has the potential to revolutionize health care, particularly ophthalmology, given the data-intensive nature of the medical specialty. As one of the leading causes of irreversible blindness worldwide, glaucoma is an ocular disease that receives significant interest for developing innovations in eye care. Among the most vital sources of data in glaucoma is visual field (VF) testing, which stands as a cornerstone for diagnosing and managing the disease. The expanding accessibility of large VF databases has led to a surge in studies investigating various applications of big data analytics in glaucoma. In this study, we review the use of big data for evaluating the reliability of VF tests, gaining insights into real-world clinical practices and outcomes, understanding new disease associations and risk factors, characterizing the patterns of VF loss, defining the structure-function relationship of glaucoma, enhancing early diagnosis or earlier detection of progression, informing clinical decisions, and improving clinical trials. Equally important, we discuss current challenges in big data analytics and future directions for improvement.
Affiliation(s)
- Alex T. Pham
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Annabelle A. Pan
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jithin Yohannan
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
4
Wu JH, Lin S, Moghimi S. Application of artificial intelligence in glaucoma care: An updated review. Taiwan J Ophthalmol 2024; 14:340-351. [PMID: 39430354] [PMCID: PMC11488804] [DOI: 10.4103/tjo.tjo-d-24-00044]
Abstract
The application of artificial intelligence (AI) in ophthalmology has been increasingly explored in the past decade. Numerous studies have shown promising results supporting the utility of AI to improve the management of ophthalmic diseases, and glaucoma is no exception. Glaucoma causes irreversible vision loss and is characterized by an insidious onset, complex pathophysiology, and the need for chronic treatment. Since various challenges remain in the clinical management of glaucoma, the potential role of AI in facilitating glaucoma care has garnered significant attention. In this study, we reviewed the relevant literature published in recent years that investigated the application of AI in glaucoma management. The main aspects of AI application discussed include glaucoma risk prediction, glaucoma detection and diagnosis, visual field estimation and pattern analysis, glaucoma progression detection, and other applications.
Affiliation(s)
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, California
- Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York
- Shan Lin
- Glaucoma Center of San Francisco, San Francisco, CA, United States
- Sasan Moghimi
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, California
5
Zhang L, Tang L, Xia M, Cao G. The application of artificial intelligence in glaucoma diagnosis and prediction. Front Cell Dev Biol 2023; 11:1173094. [PMID: 37215077] [PMCID: PMC10192631] [DOI: 10.3389/fcell.2023.1173094]
Abstract
Artificial intelligence is a multidisciplinary and collaborative science; the capacity of deep learning for image feature extraction and processing gives it a unique advantage in addressing problems in ophthalmology. Deep learning systems can assist ophthalmologists in diagnosing characteristic fundus lesions in glaucoma, such as retinal nerve fiber layer defects, optic nerve head damage, and optic disc hemorrhage. Early detection of these lesions can help delay structural damage, protect visual function, and reduce visual field loss. The development of deep learning led to the emergence of deep convolutional neural networks, which are driving the integration of artificial intelligence with testing devices such as visual field analyzers, fundus imaging, and optical coherence tomography, accelerating advances in clinical glaucoma diagnosis and prediction. This article details advances in artificial intelligence combined with visual field testing, fundus photography, and optical coherence tomography for glaucoma diagnosis and prediction, some of which are familiar and some not widely known. It then explores the challenges at this stage and the prospects for future clinical applications. In the future, closer cooperation between artificial intelligence and medical technology should make datasets and clinical application rules more standardized and simplify glaucoma diagnosis and prediction tools, to the benefit of multiple ethnic groups.
Affiliation(s)
- Linyu Zhang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Li Tang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Min Xia
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Guofan Cao
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
6
Moon S, Lee JH, Choi H, Lee SY, Lee J. Deep learning approaches to predict 10-2 visual field from wide-field swept-source optical coherence tomography en face images in glaucoma. Sci Rep 2022; 12:21041. [PMID: 36471039] [PMCID: PMC9722778] [DOI: 10.1038/s41598-022-25660-x]
Abstract
Close monitoring of central visual field (VF) defects with 10-2 VF helps prevent blindness in glaucoma. We aimed to develop a deep learning model to predict 10-2 VF from wide-field swept-source optical coherence tomography (SS-OCT) images. Macular ganglion cell/inner plexiform layer thickness maps with either wide-field en face images (en face model) or retinal nerve fiber layer thickness maps (RNFLT model) were extracted, combined, and preprocessed. Inception-ResNet-V2 was trained to predict 10-2 VF from the combined images. Estimation performance was evaluated using the mean absolute error (MAE) between actual and predicted threshold values, and the two models were compared with different input data. The training dataset comprised paired 10-2 VF and SS-OCT images of 3,025 eyes of 1,612 participants and the test dataset of 337 eyes of 186 participants. Global prediction errors (point-wise MAE) were 3.10 and 3.17 dB for the en face and RNFLT models, respectively. The en face model performed better than the RNFLT model in the superonasal and inferonasal sectors (P = 0.011 and P = 0.030). Prediction errors were smaller in the inferior versus superior hemifields for both models. The deep learning model effectively predicted 10-2 VF from wide-field SS-OCT images and might help clinicians efficiently individualize the frequency of 10-2 VF testing in clinical practice.
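For readers who want a concrete starting point, a minimal Keras sketch of an Inception-ResNet-V2 regression head of the kind described above follows. The input size, the channel layout, the use of ImageNet weights, and the dense head are assumptions for illustration; the abstract does not specify these details.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

def build_vf_regressor(input_shape=(299, 299, 3), n_points=68):
    """Sketch of an Inception-ResNet-V2 regressor for 10-2 VF thresholds.

    The 299x299, 3-channel input, ImageNet initialization, and dense head
    are illustrative assumptions; 3 channels are used here only so the
    pretrained weights can be reused (a 2-channel GCIPL + en face stack
    would require weights=None or channel duplication).
    """
    base = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=input_shape, pooling="avg")
    x = layers.Dense(256, activation="relu")(base.output)
    out = layers.Dense(n_points, activation="linear")(x)  # dB thresholds, 68 points in the 10-2 grid
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="mae")  # optimizing MAE matches the reported error metric
    return model
```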
Affiliation(s)
- Sangwoo Moon
- Department of Ophthalmology, Pusan National University College of Medicine, Busan 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan 49241, Korea
- Jae Hyeok Lee
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Hyunju Choi
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Sun Yeop Lee
- Department of Medical AI, Deepnoid Inc, Seoul 08376, Korea
- Jiwoong Lee
- Department of Ophthalmology, Pusan National University College of Medicine, Busan 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan 49241, Korea
7
Hashimoto Y, Kiwaki T, Sugiura H, Asano S, Murata H, Fujino Y, Matsuura M, Miki A, Mori K, Ikeda Y, Kanamoto T, Yamagami J, Inoue K, Tanito M, Yamanishi K, Asaoka R. Predicting 10-2 Visual Field From Optical Coherence Tomography in Glaucoma Using Deep Learning Corrected With 24-2/30-2 Visual Field. Transl Vis Sci Technol 2021; 10:28. [PMID: 34812893] [PMCID: PMC8626848] [DOI: 10.1167/tvst.10.13.28]
Abstract
Purpose To investigate whether a correction based on the Humphrey field analyzer (HFA) 24-2/30-2 visual field (VF) can improve the prediction performance of a deep learning (DL) model that predicts the HFA 10-2 VF test from macular optical coherence tomography (OCT) measurements. Methods This is a multicenter, cross-sectional study. The training dataset comprised 493 eyes of 285 subjects (407 with open-angle glaucoma [OAG]; 86 normative) who underwent HFA 10-2 testing and macular OCT. The independent testing dataset comprised 104 OAG eyes of 82 subjects who had undergone the HFA 10-2 test, the HFA 24-2/30-2 test, and macular OCT. A convolutional neural network (CNN) DL model was trained to predict threshold sensitivity (TH) values in HFA 10-2 from retinal thickness measured by macular OCT. The predicted TH values were modified by pattern-based regularization (PBR) and corrected with HFA 24-2/30-2. The absolute error (AE) of mean TH values and the mean absolute error (MAE) of TH values were compared between the CNN-PBR model alone and the CNN-PBR model corrected with HFA 24-2/30-2. Results The AE of mean TH values was lower in the CNN-PBR with HFA 24-2/30-2 correction than in the CNN-PBR alone (1.9 dB vs. 2.6 dB; P = 0.006). The MAE of TH values was lower in the CNN-PBR with correction than in the CNN-PBR alone (4.2 dB vs. 5.3 dB; P < 0.001). The inferior temporal quadrant showed lower prediction errors than the other quadrants. Conclusions The performance of a DL model to predict the 10-2 VF from macular OCT was improved by correction with HFA 24-2/30-2. Translational Relevance This model can reduce the burden of additional HFA 10-2 testing by making the best use of routinely performed HFA 24-2/30-2 and macular OCT.
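The two error metrics compared above are easy to conflate; the short sketch below (hypothetical array names, 68-point 10-2 grids assumed) makes the distinction explicit. The statistical test used for the paired comparison is not named in the abstract, so the Wilcoxon call shown in the comment is only one possible choice.

```python
import numpy as np
from scipy.stats import wilcoxon  # one possible paired test; the abstract does not name the test used

def per_eye_errors(actual_db, predicted_db):
    """Per-eye error metrics for predicted HFA 10-2 thresholds.

    actual_db, predicted_db: (n_eyes, 68) arrays of threshold sensitivities
    in dB (68 points in the 10-2 grid; shapes are illustrative assumptions).
    Returns two (n_eyes,) vectors:
      ae_mean_th -- absolute error of the mean threshold per eye
      mae_th     -- mean absolute error of the point-wise thresholds per eye
    """
    ae_mean_th = np.abs(actual_db.mean(axis=1) - predicted_db.mean(axis=1))
    mae_th = np.abs(actual_db - predicted_db).mean(axis=1)
    return ae_mean_th, mae_th

# Comparing the corrected and uncorrected models on the same eyes:
# _, p = wilcoxon(mae_corrected, mae_uncorrected)
```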
Affiliation(s)
- Yohei Hashimoto
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Taichi Kiwaki
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Hiroki Sugiura
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Shotaro Asano
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Hiroshi Murata
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, National Center for Global Health and Medicine, Tokyo, Japan
- Yuri Fujino
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Shimane University Faculty of Medicine, Izumo, Japan
- Masato Matsuura
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Atsuya Miki
- Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Kazuhiko Mori
- Department of Ophthalmology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yoko Ikeda
- Department of Ophthalmology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Oike-Ganka Ikeda Clinic, Kyoto, Japan
- Masaki Tanito
- Department of Ophthalmology, Shimane University Faculty of Medicine, Shimane, Japan
- Kenji Yamanishi
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Ryo Asaoka
- Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Shizuoka, Japan
- Seirei Christopher University, Shizuoka, Japan
- Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Shizuoka, Japan
- The Graduate School for the Creation of New Photonics Industries, Shizuoka, Japan
8
Xiang Y, Chen J, Xu F, Lin Z, Xiao J, Lin Z, Lin H. Longtime Vision Function Prediction in Childhood Cataract Patients Based on Optical Coherence Tomography Images. Front Bioeng Biotechnol 2021; 9:646479. [PMID: 33748090] [PMCID: PMC7973224] [DOI: 10.3389/fbioe.2021.646479]
Abstract
Visual prediction results indicate the expected trajectory and speed of visual development over a future period; based on them, ophthalmologists and guardians can anticipate the visual prognosis, decide on an intervention plan, and support visual development. In our study, we developed an intelligent system based on features of optical coherence tomography images for long-term prediction of best corrected visual acuity (BCVA) 3 and 5 years in advance. Two hundred eyes of 132 patients were included. Six machine learning algorithms were applied. The BCVA predictions achieved small errors, within two lines of the visual chart. The mean absolute errors (MAEs) between the prediction results and ground truth were 0.1482–0.2117 logMAR for 3-year predictions and 0.1198–0.1845 logMAR for 5-year predictions; the root mean square errors (RMSEs) were 0.1916–0.2942 logMAR for 3-year predictions and 0.1692–0.2537 logMAR for 5-year predictions. This is the first study to predict post-therapeutic BCVA in young children, and it establishes a reliable method for predicting prognosis 5 years in advance. Applying this system can contribute to the design of visual intervention plans and the assessment of visual prognosis.
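The reported error ranges are in logMAR units, where 0.1 corresponds to one line on the visual acuity chart; a minimal sketch of the two metrics (with illustrative array names) shows how the "within two lines" claim follows from the MAE values.

```python
import numpy as np

def bcva_prediction_errors(true_logmar, pred_logmar):
    """MAE and RMSE between measured and predicted BCVA in logMAR units.

    Inputs are (n_eyes,) vectors; the names are illustrative. On the
    logMAR scale 0.1 equals one chart line, so an MAE of about 0.15-0.21
    (the 3-year range quoted above) stays within two lines on average.
    """
    err = pred_logmar - true_logmar
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, rmse
```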
Affiliation(s)
- Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Center of Precision Medicine, Sun Yat-sen University, Guangzhou, China
9
Kang NY, Ra H, Lee K, Lee JH, Lee WK, Baek J. Classification of pachychoroid on optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:1803-1809. [PMID: 33616757] [DOI: 10.1007/s00417-021-05104-4]
Abstract
PURPOSE Pachychoroid is characterized by dilated Haller vessels and choriocapillaris attenuation seen on optical coherence tomography (OCT) B-scans. This study investigated the feasibility of using deep learning (DL) models to classify pachychoroid and non-pachychoroid eyes from OCT B-scan images. METHODS In total, 1898 OCT B-scan images were collected from eyes with macular diseases. Images were labeled as pachychoroid or non-pachychoroid by two retina specialists based on multimodal imaging analysis using strict quantitative and qualitative criteria. DL models were trained (80%) and validated (20%) using pretrained convolutional neural networks (CNNs). Model performance was assessed using an independent test set of 50 non-pachychoroid and 50 pachychoroid images. RESULTS The final accuracy of AlexNet and VGG-16 was 57.52% for both models. ResNet50, Inception-v3, Inception-ResNet-v2, and Xception showed final accuracies of 96.31%, 95.25%, 93.40%, and 92.61%, respectively, on the validation set. These models achieved accuracies on the independent test set of 78.00%, 86.00%, 90.00%, and 92.00%, and F1 scores of 0.718, 0.841, 0.894, and 0.920, respectively. CONCLUSION DL models classified pachychoroid and non-pachychoroid images with good performance. Accurate classification can be achieved using CNN models with deep rather than shallow neural networks.
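The test-set accuracy and F1 scores quoted above are standard binary-classification metrics; a minimal sketch (with hypothetical labels, since the actual predictions are not available here) shows how they would be computed for a 50/50 pachychoroid vs. non-pachychoroid test set.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def evaluate_binary_classifier(y_true, y_pred):
    """Accuracy and F1 for pachychoroid (1) vs. non-pachychoroid (0) labels."""
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred)

# Hypothetical balanced test set of 50 + 50 images, with 8 errors in each class:
y_true = np.array([1] * 50 + [0] * 50)
y_pred = np.array([1] * 42 + [0] * 8 + [0] * 42 + [1] * 8)
print(evaluate_binary_classifier(y_true, y_pred))  # accuracy 0.84, F1 0.84
```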
Affiliation(s)
- Nam Yeo Kang
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Ho Ra
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Kook Lee
- Department of Ophthalmology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jun Hyuk Lee
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Won Ki Lee
- Retina Division, Nune Eye Center, Seoul, Republic of Korea
- Jiwon Baek
- Department of Ophthalmology, Bucheon St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea