1. Chen L, Tseng VS, Tsung TH, Lu DW. A multi-label transformer-based deep learning approach to predict focal visual field progression. Graefes Arch Clin Exp Ophthalmol 2024; 262:2227-2235. [PMID: 38334809] [DOI: 10.1007/s00417-024-06393-1]
Abstract
PURPOSE Tracking functional changes in visual fields (VFs) through standard automated perimetry remains a clinical standard for glaucoma diagnosis. This study aims to develop and evaluate a deep learning (DL) model to predict regional VF progression, which has not been explored in prior studies. METHODS The study included 2430 eyes of 1283 patients with four or more consecutive VF examinations from the baseline. A multi-label transformer-based network (MTN) using longitudinal VF data was developed to predict progression in six VF regions mapped to the optic disc. Progression was defined using the mean deviation (MD) slope and calculated for all six VF regions, referred to as clusters. Separate MTN models, trained for focal progression detection and forecasting on various numbers of VFs as model input, were tested on a held-out test set. RESULTS The MTNs overall demonstrated excellent macro-average AUCs above 0.884 in detecting focal VF progression given five or more VFs. With a minimum of six VFs, the model demonstrated superior and more stable overall and per-cluster performance compared with five VFs. The MTN given six VFs achieved a macro-average AUC of 0.848 for forecasting progression across eight VF tests. The MTN also achieved excellent performance (AUCs ≥ 0.86, 1.0 sensitivity, and specificity ≥ 0.70) in four out of six clusters for eyes that already had severe VF loss (baseline MD ≤ -12 dB). CONCLUSION The high prediction accuracy suggests that multi-label DL networks trained with longitudinal VF results may assist in identifying and forecasting progression in VF regions.
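The headline metric here is a macro-average AUC over the six VF clusters. A minimal sketch of how such a multi-label metric can be computed, using the standard rank-sum (Mann-Whitney) identity for AUC; the labels and scores below are synthetic and nothing here comes from the paper's data or code:

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity; assumes no tied scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic per-cluster progression labels and partially informative scores
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=(100, 6))   # 100 eyes, 6 VF clusters
scores = labels * 0.3 + rng.random((100, 6))
# Macro-average: compute AUC per cluster, then average across clusters
macro_auc = np.mean([auc(labels[:, k], scores[:, k]) for k in range(6)])
print(round(macro_auc, 3))
```

Macro-averaging weights each cluster equally regardless of how many progressing eyes it contains, which is why it is a common summary for multi-label problems with imbalanced labels.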
Affiliation(s)
- Ling Chen: Institute of Hospital and Health Care Administration, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Vincent S Tseng: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ta-Hsin Tsung: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, No.325, Sec.2, Chenggong Rd., Neihu District, Taipei, Taiwan
- Da-Wen Lu: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, No.325, Sec.2, Chenggong Rd., Neihu District, Taipei, Taiwan
2. Montesano G, Crabb DP, Wright DM, Rabiolo A, Ometto G, Garway-Heath DF. Estimating the Distribution of True Rates of Visual Field Progression in Glaucoma. Transl Vis Sci Technol 2024; 13:15. [PMID: 38591945] [PMCID: PMC11008752] [DOI: 10.1167/tvst.13.4.15]
Abstract
Purpose The purpose of this study was to estimate the distribution of the true rates of progression (RoP) of visual field (VF) loss. Methods We analyzed the progression of mean deviation over time in series of ≥10 tests from 3352 eyes (one per patient) from 5 glaucoma clinics, using a novel Bayesian hierarchical linear mixed model (LMM); this modeled the random-effect distribution of RoPs as the sum of 2 independent processes following, respectively, a negative exponential distribution (the "true" distribution of RoPs) and a Gaussian distribution (the "noise"), resulting in a skewed exGaussian distribution. The exGaussian-LMM was compared with a standard Gaussian-LMM using the Watanabe-Akaike Information Criterion (WAIC). The random-effect distributions were compared with the empirical cumulative distribution function (eCDF) of linear regression RoPs using a Kolmogorov-Smirnov test. Results The WAIC indicated a better fit with the exGaussian-LMM (estimate [standard error]: 192174.4 [721.2]) than with the Gaussian-LMM (192595 [697.4]); the difference was 157.2 [22.6]. There was a significant difference between the eCDF and the Gaussian-LMM distribution (P < 0.0001), but not between the eCDF and the exGaussian-LMM distribution (P = 0.108). The estimated mean (95% credible interval, CI) "true" RoP (-0.377, 95% CI = -0.396 to -0.359 dB/year) was more negative than the observed mean RoP (-0.283, 95% CI = -0.299 to -0.268 dB/year), indicating a bias, likely due to learning effects, in standard LMMs. Conclusions The distribution of "true" RoPs can be estimated with an exGaussian-LMM, improving model accuracy. Translational Relevance We used these results to develop a fast and accurate analytical approximation for sample-size calculations in clinical trials using standard LMMs, which was integrated into a freely available web application.
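The generative model described above writes each observed RoP as a negative-exponential "true" rate plus Gaussian noise. A minimal simulation sketch of that decomposition: the exponential mean matches the reported true mean RoP, the Gaussian mean is set to the reported bias (-0.283 minus -0.377 = 0.094 dB/year), and the 0.3 dB/year noise SD is an arbitrary assumption, not a fitted value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3352  # same number of eyes as the study; all values below are synthetic

# "True" rates of progression: negative exponential with mean -0.377 dB/year
true_rop = -rng.exponential(scale=0.377, size=n)
# Gaussian component; its +0.094 dB/year mean reproduces the reported gap
# between observed (-0.283) and true (-0.377) mean RoP; the 0.3 SD is assumed
noise = rng.normal(loc=0.094, scale=0.3, size=n)
# Observed rates follow the skewed exGaussian distribution
observed_rop = true_rop + noise
print(true_rop.mean(), observed_rop.mean())
```

The simulation makes the paper's point concrete: averaging observed rates understates the true (more negative) mean rate whenever the additive Gaussian component has a positive mean.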
Affiliation(s)
- Giovanni Montesano: City, University of London, Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- David P. Crabb: City, University of London, Optometry and Visual Sciences, London, UK
- David M. Wright: Centre for Public Health, Queen's University Belfast, ICSA, Royal Victoria Hospital, Belfast, Northern Ireland, UK
- Alessandro Rabiolo: Department of Health Sciences, University of Eastern Piedmont “A. Avogadro,” Novara, Italy; Ophthalmology Unit, University Hospital Maggiore della Carità, Novara, Italy
- Giovanni Ometto: City, University of London, Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- David F. Garway-Heath: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
3. Mahmoudinezhad G, Moghimi S, Cheng J, Ru L, Yang D, Agrawal K, Dixit R, Beheshtaein S, Du KH, Latif K, Gunasegaran G, Micheletti E, Nishida T, Kamalipour A, Walker E, Christopher M, Zangwill L, Vasconcelos N, Weinreb RN. Deep Learning Estimation of 10-2 Visual Field Map Based on Macular Optical Coherence Tomography Angiography Measurements. Am J Ophthalmol 2024; 257:187-200. [PMID: 37734638] [DOI: 10.1016/j.ajo.2023.09.014]
Abstract
PURPOSE To develop deep learning (DL) models estimating the central visual field (VF) from optical coherence tomography angiography (OCTA) vessel density (VD) measurements. DESIGN Development and validation of a deep learning model. METHODS A total of 1051 10-2 VF-OCTA pairs from healthy, glaucoma suspect, and glaucoma eyes were included. DL models were trained on en face macula VD images from OCTA to estimate 10-2 mean deviation (MD), pattern standard deviation (PSD), and 68 total deviation (TD) and pattern deviation (PD) values, and were compared with a linear regression (LR) model given the same input. Accuracy of the models was evaluated by calculating the average mean absolute error (MAE) and the R2 (squared Pearson correlation coefficient) of the estimated and actual VF values. RESULTS The DL model achieved an R2 of 0.85 (95% confidence interval [CI], 0.74-0.92) and an MAE of 1.76 dB (95% CI, 1.39-2.17 dB) for 10-2 MD, significantly better than the mean linear estimates. The DL model outperformed the LR model for the estimation of pointwise TD values, with an average MAE of 2.48 dB (95% CI, 1.99-3.02) and R2 of 0.69 (95% CI, 0.57-0.76) over all test points, and for the estimation of all sectors. CONCLUSIONS DL models enable the estimation of VF loss from OCTA images with high accuracy. Applying DL to OCTA images may enhance clinical decision making. It also may improve individualized patient care and risk stratification of patients who are at risk for central VF damage.
Affiliation(s)
- Golnoush Mahmoudinezhad, Sasan Moghimi, Kelvin H Du, Kareem Latif, Gopikasree Gunasegaran, Eleonora Micheletti, Takashi Nishida, Alireza Kamalipour, Evan Walker, Mark Christopher, Linda Zangwill, Robert N Weinreb: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Jiacheng Cheng, Liyang Ru, Kushagra Agrawal, Rajeev Dixit, Nuno Vasconcelos: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Dongchen Yang: Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
4. Hussain S, Chua J, Wong D, Lo J, Kadziauskiene A, Asoklis R, Barbastathis G, Schmetterer L, Yong L. Predicting glaucoma progression using deep learning framework guided by generative algorithm. Sci Rep 2023; 13:19960. [PMID: 37968437] [PMCID: PMC10651936] [DOI: 10.1038/s41598-023-46253-2]
Abstract
Glaucoma is a slowly progressing optic neuropathy that may eventually lead to blindness. To help patients receive customized treatment, predicting how quickly the disease will progress is important. Structural assessment using optical coherence tomography (OCT) can be used to visualize glaucomatous optic nerve and retinal damage, while functional visual field (VF) tests can be used to measure the extent of vision loss. However, VF testing is patient-dependent and highly inconsistent, making it difficult to track glaucoma progression. In this work, we developed a multimodal deep learning model comprising a convolutional neural network (CNN) and a long short-term memory (LSTM) network for glaucoma progression prediction. We used OCT images, VF values, and demographic and clinical data of 86 glaucoma patients with five visits over 12 months. The proposed method was used to predict VF changes 12 months after the first visit by combining past multimodal inputs with synthesized future images generated using a generative adversarial network (GAN). The patients were classified into two classes based on their VF mean deviation (MD) decline: slow progressors (< 3 dB) and fast progressors (> 3 dB). We showed that our generative model-based approach can achieve an AUC of 0.83 when predicting progression 6 months in advance. Further, the use of synthetic future images enabled the model to accurately predict vision loss even earlier (9 months in advance), with an AUC of 0.81, compared with using only structural (AUC = 0.68) or only functional measures (AUC = 0.72). This study provides valuable insights into the potential of using synthetic follow-up OCT images for early detection of glaucoma progression.
Affiliation(s)
- Shaista Hussain: Institute of High Performance Computing, A*STAR, Singapore, Singapore
- Jacqueline Chua: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Singapore, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aiste Kadziauskiene: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- Rimvydas Asoklis: Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania; Department of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
- George Barbastathis: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Singapore-MIT Alliance for Research and Technology (SMART) Centre, Singapore, Singapore
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Liu Yong: Institute of High Performance Computing, A*STAR, Singapore, Singapore
5. Montesano G, Garway-Heath DF, Rabiolo A, De Moraes CG, Ometto G, Crabb DP. Validating Trend-Based End Points for Neuroprotection Trials in Glaucoma. Transl Vis Sci Technol 2023; 12:20. [PMID: 37906055] [PMCID: PMC10619697] [DOI: 10.1167/tvst.12.10.20]
Abstract
Purpose The purpose of this study was to evaluate the power of trend-based visual field (VF) progression end points against long-term development of event-based end points accepted by the US Food and Drug Administration (FDA). Methods One eye each from 3352 patients with ≥10 24-2 VFs (median follow-up = 11 years) was analyzed. Two FDA-compatible criteria were applied to these series to label "true-progressed" eyes: ≥5 locations changing from baseline by more than 7 dB (FDA-7) or by more than the expected test-retest variability (GPA-like) in 2 consecutive tests. Observed rates of progression (RoP) were used to simulate trial-like series (2 years) randomly assigned (1000 times) to a "placebo" or a "treatment" arm. We simulated neuroprotective "treatment" effects by changing the proportion of "true-progressed" eyes in the two arms. Two trend-based methods for mean deviation (MD) were assessed: (1) a linear mixed model (LMM), testing the average difference in RoP between the two arms, and (2) time-to-progression (TTP), calculated by linear regression as the time needed for MD to decline by predefined cutoffs from baseline. Power curves with 95% confidence intervals were calculated for the trend- and event-based methods on the simulated series. Results FDA-7 and GPA-like progression was achieved by 45% and 55% of the eyes in the clinical database, respectively. LMM and TTP had similar power, significantly superior to the event-based methods, none of which reached 80% power. All methods had a 5% false-positive rate. Conclusions Trend-based methods can efficiently detect treatment effects defined by long-term FDA-compatible progression. Translational Relevance This study assessed the power of trend-based methods to detect clinically relevant progression end points.
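TTP as described is the extrapolated time for MD to fall by a predefined cutoff from its fitted baseline under a linear fit. A minimal sketch with synthetic data; the 2 dB cutoff is an illustrative assumption, not necessarily one of the paper's predefined cutoffs:

```python
import numpy as np
from scipy import stats

def time_to_progression(years, md, decline_cutoff=2.0):
    """Time (years) for MD to fall `decline_cutoff` dB below its fitted
    baseline, extrapolated from a linear fit of MD over time.
    The 2 dB default is an illustrative choice."""
    slope, intercept, *_ = stats.linregress(years, md)
    if slope >= 0:
        return np.inf  # no measured decline within the series
    return decline_cutoff / -slope

# Five synthetic MD measurements over two years of follow-up
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
md = np.array([-3.0, -3.6, -4.1, -4.4, -5.0])
print(time_to_progression(years, md))
```

Because the decline is measured from the fitted baseline rather than the first raw measurement, the estimate reduces the influence of a noisy first test; a non-declining series yields an infinite (i.e., never-reached) time to progression.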
Affiliation(s)
- Giovanni Montesano: City, University of London, Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- David F Garway-Heath: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Alessandro Rabiolo: Department of Health Sciences, Università del Piemonte Orientale "A. Avogadro," Novara, Italy; Eye Clinic, University Hospital Maggiore della Carità, Novara, Italy
- Carlos Gustavo De Moraes: Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, NY, USA
- Giovanni Ometto: City, University of London, Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- David P Crabb: City, University of London, Optometry and Visual Sciences, London, UK
6. Tirsi A, Gliagias V, Sheha H, Patel B, Moehringer J, Tsai J, Gupta R, Obstbaum SA, Tello C. Retinal Ganglion Cell Functional Recovery after Intraocular Pressure Lowering Treatment Using Prostaglandin Analogs in Glaucoma Suspects: A Prospective Pilot Study. J Curr Glaucoma Pract 2023; 17:178-190. [PMID: 38269268] [PMCID: PMC10803274] [DOI: 10.5005/jp-journals-10078-1423]
Abstract
Aim and background To evaluate the ability of pattern electroretinogram (PERG) to detect improvement of retinal ganglion cell (RGC) function in glaucoma suspects (GS) after medically reducing intraocular pressure (IOP) using prostaglandin analog drops. Materials and methods Six subjects (eight eyes) received topical IOP-lowering treatment based on their clinical examination and were observed at Manhattan Eye, Ear & Throat Hospital over an average of 3.1 ± 2.2 months. During this time, participants underwent a full ophthalmologic exam and were evaluated with a Humphrey visual field analyzer (HFA) 24-2 [24-2 mean deviation (MD), 24-2 pattern standard deviation (PSD), and 24-2 visual field index (VFI)], Diopsys NOVA PERG optimized for glaucoma [magnitude (Mag), magnitudeD (MagD), and magnitudeD/magnitude (MagD/Mag) ratio], and optical coherence tomography (OCT)-derived average retinal nerve fiber layer thickness (avRNFLT) and average ganglion cell layer + inner plexiform layer (avGCL + IPL) thickness at the baseline visit (pretreatment) and 3 months later (posttreatment). Goldmann applanation tonometry was used to measure IOP at each visit. Paired-sample t-tests were conducted to determine the statistical significance of the change in IOP, HFA indices, PERG parameters, and OCT thickness measurements between the two visits. Results Lowering IOP by 22.29% resulted in significant increases of 32.98% in MagD [t(7) = -3.174, 95% confidence interval (CI) = -0.53, -0.08, p = 0.016] and 15.49% in the MagD/Mag ratio [t(7) = -3.233, 95% CI = -0.20, -0.03, p = 0.014]. There was a positive percentage change for all variables of interest; however, 24-2 MD, Mag, avRNFLT, and avGCL + IPL thickness did not reach statistical significance. Conclusion After reducing IOP by 22.29% for an average duration of 3.1 months, the PERG parameters MagD and MagD/Mag ratio significantly improved by 32.98% and 15.49%, respectively.
Clinical significance Pattern electroretinogram (PERG) may be a crucial tool for clinicians to locate a window of opportunity in which degenerating yet viable RGCs could be rescued from irreversible damage. We suggest consideration of PERG as a tool for early detection of retinal ganglion cell (RGC) dysfunction as well as for monitoring IOP-lowering treatment.
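The statistical analysis rests on paired-sample t-tests between pretreatment and posttreatment measurements. A minimal sketch with synthetic MagD values for eight eyes (not the study's data); an improvement appears as a negative t statistic because posttreatment values exceed pretreatment values, making the paired differences negative:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post-treatment MagD values for eight eyes (synthetic)
pre = np.array([1.10, 0.95, 1.20, 0.88, 1.05, 0.99, 1.15, 0.92])
post = np.array([1.45, 1.20, 1.50, 1.30, 1.35, 1.25, 1.60, 1.10])

# Paired-sample t-test on the per-eye differences (pre - post)
t_stat, p_value = stats.ttest_rel(pre, post)
print(t_stat, p_value)
```

A paired test is appropriate here because the two measurements come from the same eyes, so per-eye differences remove between-eye variability that an unpaired test would treat as noise.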
Affiliation(s)
- Andrew Tirsi: Manhattan Eye, Ear and Throat Hospital; Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Vasiliki Gliagias: Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Hosam Sheha: Manhattan Eye, Ear and Throat Hospital; Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Bhakti Patel: Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Julie Moehringer: Sanford H. Calhoun High School, Merrick, New York, United States
- Joby Tsai: Broward Health Medical Center, Fort Lauderdale, United States
- Rohun Gupta: Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Stephen A Obstbaum: Manhattan Eye, Ear and Throat Hospital; Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
- Celso Tello: Manhattan Eye, Ear and Throat Hospital; Donald and Barbara Zucker School of Medicine at Hofstra University/Northwell Health, Hempstead, New York, United States
7. Knapp AN, Leng T, Rahimy E. Ophthalmology at the Forefront of Big Data Integration in Medicine: Insights from the IRIS Registry Database. Yale J Biol Med 2023; 96:421-426. [PMID: 37780991] [PMCID: PMC10524808] [DOI: 10.59249/vupm2510]
Abstract
Ophthalmology stands at the vanguard of incorporating big data into medicine, as exemplified by the integration of the Intelligent Research in Sight (IRIS) Registry. This synergy cultivates patient-centered care, demonstrates real-world efficacy and safety data for new therapies, and facilitates comprehensive population health insights. By evaluating the creation and utilization of the world's largest specialty clinical data registry, we underscore the transformative capacity of data-driven medical paradigms, current shortcomings, and future directions. We aim to provide a scaffold for other specialties to adopt big data integration into medicine.
Affiliation(s)
- Austen N. Knapp: Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Theodore Leng: Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA
- Ehsan Rahimy: Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Palo Alto, CA, USA; Department of Ophthalmology, Palo Alto Medical Foundation, Palo Alto, CA, USA
8. Herbert P, Hou K, Bradley C, Hager G, Boland MV, Ramulu P, Unberath M, Yohannan J. Forecasting Risk of Future Rapid Glaucoma Worsening Using Early Visual Field, OCT, and Clinical Data. Ophthalmol Glaucoma 2023; 6:466-473. [PMID: 36944385] [PMCID: PMC10509314] [DOI: 10.1016/j.ogla.2023.03.005]
Abstract
PURPOSE To assess whether we can forecast future rapid visual field (VF) worsening using deep learning models (DLMs) trained on early VF, OCT, and clinical data. DESIGN A retrospective cohort study. SUBJECTS In total, 4536 eyes from 2962 patients; 263 (5.80%) eyes underwent rapid VF worsening (mean deviation slope less than -1 dB/year across all VFs). METHODS We included eyes that met the following criteria: (1) followed for glaucoma or suspect status; (2) had at least 5 longitudinal reliable VFs (VF1, VF2, VF3, VF4, and VF5); and (3) had 1 reliable baseline OCT scan (OCT1) and 1 set of baseline clinical measurements (Clinical1) at the time of VF1. We designed a DLM to forecast future rapid VF worsening. The input consisted of spatially oriented total deviation values from VF1 (with VF2 and VF3 also included in some models) and retinal nerve fiber layer thickness values from the baseline OCT. We passed this VF/OCT stack into a vision transformer feature extractor, the output of which was concatenated with baseline clinical data before passing it through a linear classifier to predict the eye's risk of rapid VF worsening across the 5 VFs. We compared the performance of models with differing inputs by computing the area under the curve (AUC) in the test set. Specifically, we trained models with the following inputs: (1) model V: VF1; (2) VC: VF1 + Clinical1; (3) VO: VF1 + OCT1; (4) VOC: VF1 + Clinical1 + OCT1; (5) V2: VF1 + VF2; (6) V2OC: VF1 + VF2 + Clinical1 + OCT1; (7) V3: VF1 + VF2 + VF3; and (8) V3OC: VF1 + VF2 + VF3 + Clinical1 + OCT1. MAIN OUTCOME MEASURES The AUC of DLMs when forecasting rapidly worsening eyes. RESULTS Model V3OC best forecasted rapid worsening, with an AUC (95% confidence interval [CI]) of 0.87 (0.77-0.97). The remaining models, in descending order of performance with their respective AUC (95% CI), were: model V3 (0.84 [0.74-0.95]), model V2OC (0.81 [0.70-0.92]), model V2 (0.81 [0.70-0.82]), model VOC (0.77 [0.65-0.88]), model VO (0.75 [0.64-0.88]), model VC (0.75 [0.63-0.87]), and model V (0.74 [0.62-0.86]). CONCLUSIONS Deep learning models can forecast future rapid glaucoma worsening with modest to high performance when trained using data from early in the disease course. Including baseline data from multiple modalities and subsequent visits improves performance beyond using VF data alone. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
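The outcome being forecast is defined as an MD slope of less than -1 dB/year across an eye's VFs. A minimal sketch of that labeling rule on two synthetic MD series (the values are made up):

```python
import numpy as np
from scipy import stats

def is_rapid_worsening(years, md_values):
    """Label an eye as rapidly worsening when the MD slope across its
    VFs is less than -1 dB/year, matching the study's outcome definition."""
    slope, *_ = stats.linregress(years, md_values)
    return bool(slope < -1.0)

# Five synthetic exams over two years for two example eyes
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rapid = np.array([-4.0, -4.8, -5.5, -6.1, -7.0])  # steep decline
slow = np.array([-4.0, -4.1, -4.3, -4.4, -4.5])   # shallow decline
print(is_rapid_worsening(years, rapid), is_rapid_worsening(years, slow))
```

Because the label is a slope over the whole series while the model sees only the earliest visits, the task is genuinely a forecast rather than a description of already-observed worsening.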
Affiliation(s)
- Patrick Herbert: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Kaihua Hou: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Chris Bradley: Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland
- Greg Hager: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Michael V Boland: Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts
- Pradeep Ramulu: Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland
- Mathias Unberath: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Maryland
- Jithin Yohannan: Malone Center for Engineering in Healthcare and Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland
9. Kim H, Lee J, Moon S, Kim S, Kim T, Jin SW, Kim JL, Shin J, Lee SU, Jang G, Hu Y, Park JR. Visual field prediction using a deep bidirectional gated recurrent unit network model. Sci Rep 2023; 13:11154. [PMID: 37429862] [DOI: 10.1038/s41598-023-37360-1]
Abstract
Although deep learning architectures have been used to process sequential data, only a few studies have explored the usefulness of deep learning algorithms to detect glaucoma progression. Here, we proposed a bidirectional gated recurrent unit (Bi-GRU) algorithm to predict visual field loss. In total, 5413 eyes from 3321 patients were included in the training set, whereas 1272 eyes from 1272 patients were included in the test set. Data from five consecutive visual field examinations were used as input; the sixth visual field examination was compared with the prediction by the Bi-GRU. The performance of the Bi-GRU was compared with that of conventional linear regression (LR) and long short-term memory (LSTM) algorithms. Overall prediction error was significantly lower for the Bi-GRU than for the LR and LSTM algorithms. In pointwise prediction, the Bi-GRU showed the lowest prediction error among the three models in most test locations. Furthermore, the Bi-GRU was the model least affected by worsening reliability indices and glaucoma severity. Accurate prediction of visual field loss using the Bi-GRU algorithm may facilitate decision-making regarding the treatment of patients with glaucoma.
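The conventional LR comparator fits each test location's values over the five input exams and extrapolates to the sixth. A minimal pointwise sketch of that baseline with synthetic sensitivities for two locations (the exam times and values are made up, and this is only the LR comparator, not the Bi-GRU):

```python
import numpy as np

def lr_forecast(exam_times, exam_vals, t_next):
    """Pointwise linear-regression baseline: fit each VF location's values
    over the input exams and extrapolate to the next exam time.
    exam_vals has shape (n_exams, n_locations)."""
    t = np.asarray(exam_times, dtype=float)
    X = np.vstack([t, np.ones_like(t)]).T  # design matrix: [time, intercept]
    coef, *_ = np.linalg.lstsq(X, np.asarray(exam_vals, dtype=float), rcond=None)
    slope, intercept = coef  # per-location slopes and intercepts
    return slope * t_next + intercept

times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # five exams (years)
vals = np.array([[30.0, 28.0], [29.5, 27.0], [29.0, 26.2],
                 [28.4, 25.1], [28.0, 24.0]])       # two example locations (dB)
print(lr_forecast(times, vals, 2.5))                # forecast sixth exam
```

Fitting all locations in one least-squares solve (a 2D right-hand side) is what makes the pointwise baseline cheap; a sequence model like the Bi-GRU instead learns shared temporal structure across locations and eyes.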
Grants
- HI19C0481 Ministry of Health & Welfare, Republic of Korea
- HC19C0276 Ministry of Health & Welfare, Republic of Korea
- NRF-2021R1I1A1A01057767 Korean government
- NRF-2021R1A2B5B03087097 Korean government
- NRF-2017R1A5A1015722M Korean government
- NRF-2022R1A5A1033624 Korean government
Affiliation(s)
- Hwayeong Kim
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea
- Jiwoong Lee
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan, Korea
- Sangwoo Moon
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, Korea
- Sangil Kim
- Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Taehyeong Kim
- Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Sang Wook Jin
- Department of Ophthalmology, Dong-A University College of Medicine, Busan, Korea
- Jung Lim Kim
- Department of Ophthalmology, Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
- Jonghoon Shin
- Department of Ophthalmology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, Korea
- Seung Uk Lee
- Department of Ophthalmology, Kosin University College of Medicine, Busan, Korea
- Geunsoo Jang
- Nonlinear Dynamics and Mathematical Application Center, Kyungpook National University, Daegu, Korea
- Yuanmeng Hu
- Department of Mathematics, Pusan National University, Busan, Republic of Korea
- Jeong Rye Park
- Department of Mathematics, Kyungpook National University, 80, Daehak-ro, Buk-gu, Daegu, 41566, Republic of Korea.
|
10
|
Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:392-401. [PMID: 37523431 DOI: 10.1097/apo.0000000000000619] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 05/11/2023] [Indexed: 08/02/2023] Open
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
Affiliation(s)
- Byoungyoung Gu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Sophia Sidhu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Robert N Weinreb
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Mark Christopher
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Linda M Zangwill
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Sally L Baxter
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
|
11
|
Park JR, Kim S, Kim T, Jin SW, Kim JL, Shin J, Lee SU, Jang G, Hu Y, Lee JW. Data Preprocessing and Augmentation Improved Visual Field Prediction of Recurrent Neural Network with Multi-Central Datasets. Ophthalmic Res 2023; 66:978-991. [PMID: 37231880 PMCID: PMC10357387 DOI: 10.1159/000531144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 05/15/2023] [Indexed: 05/27/2023]
Abstract
INTRODUCTION The purpose of this study was to determine whether data preprocessing and augmentation could improve visual field (VF) prediction of a recurrent neural network (RNN) with multi-central datasets. METHODS This retrospective study collected data from five glaucoma services between June 2004 and January 2021. From an initial dataset of 331,691 VFs, we considered reliable VF tests with fixed intervals. Since the VF monitoring interval is highly variable, we applied data augmentation using multiple sets of data for patients with more than eight VFs. We obtained 5,430 VFs from 463 patients and 13,747 VFs from 1,076 patients by setting the fixed test interval to 365 ± 60 days (D = 365) and 180 ± 60 days (D = 180), respectively. Five consecutive VFs were provided to the constructed RNN as input, and the 6th VF was compared with the output of the RNN. The performance of the periodic RNN (D = 365) was compared to that of an aperiodic RNN. The performance of the RNN with 6 long short-term memory (LSTM) cells (D = 180) was compared with that of the RNN with 5 LSTM cells. To compare prediction performance, the root mean square error (RMSE) and mean absolute error (MAE) of the total deviation value (TDV) were calculated as accuracy metrics. RESULTS The performance of the periodic model (D = 365) improved significantly over that of the aperiodic model. Overall prediction error (MAE) was 2.56 ± 0.46 dB versus 3.26 ± 0.41 dB (periodic vs. aperiodic) (p < 0.001). A higher perimetric frequency was better for predicting future VF. The overall prediction error (RMSE) was 3.15 ± 2.29 dB versus 3.42 ± 2.25 dB (D = 180 vs. D = 365). Increasing the number of input VFs improved the performance of VF prediction in the D = 180 periodic model (3.15 ± 2.29 dB vs. 3.18 ± 2.34 dB, p < 0.001). The 6-LSTM in the D = 180 periodic model was more robust to worsening VF reliability and disease severity. The prediction accuracy worsened as the false-negative rate increased and the mean deviation decreased. CONCLUSION Data preprocessing with augmentation improved the VF prediction of the RNN model using multi-center datasets. The periodic RNN model predicted the future VF significantly better than the aperiodic RNN model.
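The RMSE and MAE accuracy metrics reported above are straightforward to compute over the total deviation values; a minimal sketch follows (the numeric values are illustrative, not taken from the study):

```python
import math

def rmse(pred, actual):
    """Root mean square error of total deviation values (dB)."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred))

def mae(pred, actual):
    """Mean absolute error of total deviation values (dB)."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

# Hypothetical predicted vs. measured TDVs at four locations:
predicted = [-2.1, -5.0, -0.4, -7.9]
measured  = [-1.6, -6.2, -0.9, -7.1]
print(round(mae(predicted, measured), 3))   # 0.75
print(round(rmse(predicted, measured), 3))  # 0.803
```

RMSE weights large pointwise misses more heavily than MAE, which is why the study reports both.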
Affiliation(s)
- Jeong Rye Park
- Finance Fishery Manufacture Industrial Center on Big Data, Pusan National University, Busan, South Korea
- Sangil Kim
- Department of Mathematics, Pusan National University, Busan, South Korea
- Taehyeong Kim
- Department of Mathematics, Pusan National University, Busan, South Korea
- Sang Wook Jin
- Department of Ophthalmology, Dong-A University College of Medicine, Busan, South Korea
- Jung Lim Kim
- Department of Ophthalmology, Busan Paik Hospital, Inje University College of Medicine, Busan, South Korea
- Jonghoon Shin
- Department of Ophthalmology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, South Korea
- Seung Uk Lee
- Department of Ophthalmology, Kosin University College of Medicine, Busan, South Korea
- Geunsoo Jang
- Department of Mathematics, Pusan National University, Busan, South Korea
- Yuanmeng Hu
- Department of Mathematics, Pusan National University, Busan, South Korea
- Ji Woong Lee
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, South Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan, South Korea
|
12
|
Thakur S, Dinh LL, Lavanya R, Quek TC, Liu Y, Cheng CY. Use of artificial intelligence in forecasting glaucoma progression. Taiwan J Ophthalmol 2023; 13:168-183. [PMID: 37484617 PMCID: PMC10361424 DOI: 10.4103/tjo.tjo-d-23-00022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Accepted: 03/03/2023] [Indexed: 07/25/2023] Open
Abstract
Artificial intelligence (AI) has been widely used in ophthalmology for disease detection and monitoring progression. For glaucoma research, AI has been used to understand progression patterns and forecast disease trajectory based on analysis of clinical and imaging data. Techniques such as machine learning, natural language processing, and deep learning have been employed for this purpose. The results from studies using AI for forecasting glaucoma progression, however, vary considerably due to dataset constraints, the lack of a standard progression definition, and differences in methodology and approach. While glaucoma detection and screening have been the focus of most research published in the last few years, in this narrative review we focus on studies that specifically address glaucoma progression. We also summarize the current evidence, highlight studies that have translational potential, and provide suggestions on how future research that addresses glaucoma progression can be improved.
Affiliation(s)
- Sahil Thakur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Linh Le Dinh
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Raghavan Lavanya
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ten Cheer Quek
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, The Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Department of Ophthalmology, Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
|
13
|
Precision Medicine in Glaucoma: Artificial Intelligence, Biomarkers, Genetics and Redox State. Int J Mol Sci 2023; 24:ijms24032814. [PMID: 36769127 PMCID: PMC9917798 DOI: 10.3390/ijms24032814] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/07/2023] [Accepted: 01/18/2023] [Indexed: 02/05/2023] Open
Abstract
Glaucoma is a multifactorial neurodegenerative illness requiring early diagnosis and strict monitoring of disease progression. Current exams for diagnosis and prognosis are based on clinical examination, intraocular pressure (IOP) measurements, visual field tests, and optical coherence tomography (OCT). In this scenario, there is a critical unmet demand for glaucoma-related biomarkers to enhance clinical testing for early diagnosis and tracking of the disease's development. The introduction of validated biomarkers would allow for prompt intervention in the clinic to help with prognosis prediction and treatment response monitoring. This review aims to report the latest findings on biomarkers in glaucoma, from imaging analysis to genetic and metabolic markers.
|
14
|
Fluorescence Angiography with Dual Fluorescence for the Early Detection and Longitudinal Quantitation of Vascular Leakage in Retinopathy. Biomedicines 2023; 11:biomedicines11020293. [PMID: 36830829 PMCID: PMC9953145 DOI: 10.3390/biomedicines11020293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 01/03/2023] [Accepted: 01/18/2023] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Diabetic retinopathy (DR) afflicts more than 93 million people worldwide and is a leading cause of vision loss in working adults. While DR therapies are available, early DR development may go undetected without treatment due to the lack of sufficiently sensitive tools. Therefore, early detection is critically important to enable efficient treatment before progression to vision-threatening complications. A major clinical manifestation of early DR is retinal vascular leakage that may progress from diffuse to more localized focal leakage, leading to increased retinal thickness and diabetic macular edema (DME). In preclinical research, a hallmark of DR in mouse models is diffuse retinal leakage without increased thickness or DME, which limits the utility of optical coherence tomography and fluorescein angiography (FA) for early detection. The Evans blue assay detects diffuse leakage but requires euthanasia, which precludes longitudinal studies in the same animals. METHODS We developed a new modality of ratiometric fluorescence angiography with dual fluorescence (FA-DF) to reliably detect and longitudinally quantify diffuse retinal vascular leakage in mouse models of induced and spontaneous DR. RESULTS These studies demonstrated the feasibility and sensitivity of FA-DF in detecting and quantifying retinal vascular leakage in the same mice over time during DR progression in association with chronic hyperglycemia and age. CONCLUSIONS These proof-of-concept studies demonstrated the promise of FA-DF as a minimally invasive method to quantify DR leakage in preclinical mouse models longitudinally.
|
15
|
A deep learning model incorporating spatial and temporal information successfully detects visual field worsening using a consensus based approach. Sci Rep 2023; 13:1041. [PMID: 36658309 PMCID: PMC9852268 DOI: 10.1038/s41598-023-28003-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 01/11/2023] [Indexed: 01/20/2023] Open
Abstract
Glaucoma is a leading cause of irreversible blindness, and its worsening is most often monitored with visual field (VF) testing. Deep learning models (DLM) may help identify VF worsening consistently and reproducibly. In this study, we developed and investigated the performance of a DLM on a large population of glaucoma patients. We included 5099 patients (8705 eyes) seen at one institute from June 1990 to June 2020 that had VF testing as well as clinician assessment of VF worsening. Since there is no gold standard to identify VF worsening, we used a consensus of six commonly used algorithmic methods, which include global regressions as well as point-wise change in the VFs. We used the consensus decision as a reference standard to train/test the DLM and evaluate clinician performance. 80%, 10%, and 10% of patients were included in training, validation, and test sets, respectively. Of the 873 eyes in the test set, 309 (60.6%) were from females, and the median age was 62.4 years (IQR 54.8-68.9). The DLM achieved an AUC of 0.94 (95% CI 0.93-0.99). Even after removing the 6 most recent VFs, providing fewer data points to the model, the DLM successfully identified worsening with an AUC of 0.78 (95% CI 0.72-0.84). Clinician assessment of worsening (based on documentation from the health record at the time of the final VF in each eye) had an AUC of 0.64 (95% CI 0.63-0.66). Both the DLM and clinicians performed worse when the initial disease was more severe. These data show that a DLM trained on a consensus of methods to define worsening successfully identified VF worsening and could help guide clinicians during routine clinical care.
|
16
|
Jaumandreu L, Antón A, Pazos M, Rodriguez-Uña I, Rodriguez Agirretxe I, Martinez de la Casa JM, Ayala ME, Parrilla-Vallejo M, Dyrda A, Díez-Álvarez L, Rebolleda G, Muñoz-Negrete FJ. Glaucoma progression. Clinical practice guide. ARCHIVOS DE LA SOCIEDAD ESPANOLA DE OFTALMOLOGIA 2023; 98:40-57. [PMID: 36089479 DOI: 10.1016/j.oftale.2022.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Accepted: 05/19/2022] [Indexed: 01/11/2023]
Abstract
OBJECTIVE To provide general recommendations that serve as a guide for the evaluation and management of glaucomatous progression in daily clinical practice, based on the existing quality of clinical evidence. METHODS After defining the objectives and scope of the guide, the working group was formed and structured clinical questions were formulated following the PICO (Patient, Intervention, Comparison, Outcomes) format. Once all the existing clinical evidence had been independently evaluated with the AMSTAR 2 (Assessment of Multiple Systematic Reviews) and Cochrane "Risk of bias" tools by at least two reviewers, recommendations were formulated following the Scottish Intercollegiate Guidelines Network (SIGN) methodology. RESULTS Recommendations with their corresponding levels of evidence that may be useful in the interpretation and decision-making related to the different methods for the detection of glaucomatous progression are presented. CONCLUSIONS Although the level of scientific evidence available for many of the questions is not very high, this clinical practice guideline offers an updated review of the different existing aspects related to the evaluation and management of glaucomatous progression.
Affiliation(s)
- L Jaumandreu
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain.
- A Antón
- Institut Català de la Retina (ICR), Barcelona, Spain; Universitat Internacional de Catalunya (UIC), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Pazos
- Institut Clínic d'Oftalmologia, Hospital Clínic de Barcelona, IDIBAPS, Universitat de Barcelona, Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez-Uña
- Instituto Oftalmológico Fernández-Vega, Universidad de Oviedo, Oviedo, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- I Rodriguez Agirretxe
- Servicio de Oftalmología, Hospital Universitario Donostia, San Sebastián, Gipuzkoa, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- J M Martinez de la Casa
- Servicio de Oftalmología, Hospital Clinico San Carlos, Instituto de investigación sanitaria del Hospital Clínico San Carlos (IsISSC), IIORC, Universidad Complutense de Madrid, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M E Ayala
- Institut Català de la Retina (ICR), Barcelona, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- M Parrilla-Vallejo
- Servicio de Oftalmología, Hospital Universitario Virgen Macarena, Sevilla, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- A Dyrda
- Institut Català de la Retina (ICR), Barcelona, Spain
- L Díez-Álvarez
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- G Rebolleda
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
- F J Muñoz-Negrete
- Servicio de Oftalmología, Hospital Universitario Ramón y Cajal, IRYCIS, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain; Red de Oftalmología RETICS OFTARED del Instituto de Salud Carlos III (ISCIII), Madrid, Spain
|
17
|
Chen D, Ran Ran A, Fang Tan T, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. [PMID: 36706335 DOI: 10.1097/apo.0000000000000596] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 12/06/2022] [Indexed: 01/28/2023] Open
Abstract
Diagnosis and detection of progression of glaucoma remains challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms are focused on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus in diagnostic criteria. Although research in the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York City, NY
- Genentech Inc, South San Francisco, CA
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
- Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
|
18
|
Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202 PMCID: PMC10020779 DOI: 10.18502/jovr.v18i1.12730] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/05/2022] [Indexed: 02/25/2023] Open
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
|
19
|
Chen A, Montesano G, Lu R, Lee CS, Crabb DP, Lee AY. Visual Field Endpoints for Neuroprotective Trials: A Case for AI-Driven Patient Enrichment. Am J Ophthalmol 2022; 243:118-124. [PMID: 35907473 PMCID: PMC9837863 DOI: 10.1016/j.ajo.2022.07.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/27/2022] [Accepted: 07/18/2022] [Indexed: 01/18/2023]
Abstract
PURPOSE To evaluate whether an artificial intelligence (AI) model can better select candidates that would demonstrate visual field (VF) progression, in order to shorten the duration or the number of patients needed for a clinical trial. DESIGN Retrospective cohort study. METHODS 7428 eyes of 3871 patients from the University of Washington Department of Ophthalmology VF Dataset were included. Progression was defined as at least 5 locations with >7 dB of change compared with baseline on 2 consecutive tests. Progression for all patients, a subgroup of the fastest progressing based on survival curves, and patients selected based on an elastic net Cox regression model were compared. The model was trained on pointwise threshold deviation values of the first VF, age, gender, laterality, and the mean total deviation (MD) at baseline. RESULTS A total of 13% of all patients met the criteria for progression at 5 years. Differences in survival were observed when stratified by MD and age (P < .0001). Those at risk of progression included patients aged 60 to 80 years with an initial MD < -5.0. This subgroup decreased the sample size required to detect progression compared with the entire cohort. The AI model-selected patients required the lowest number of patients for all effect sizes and trial lengths. For a trial length of 3 years and effect size of 30%, the number of patients required was 1656 (95% CI, 1638-1674), 903 (95% CI, 884-922), and 636 (95% CI, 625-646) for the entire cohort, the subgroup, and the model-selected patients, respectively. CONCLUSION An AI model can identify high-risk patients to substantially reduce the number of patients needed or study duration required to meet clinical trial endpoints.
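The pointwise progression criterion used in this study (at least 5 locations with >7 dB of change from baseline on 2 consecutive tests) can be sketched as a simple check. This is illustrative code, not the authors' implementation; the function and parameter names are assumptions:

```python
def meets_progression(baseline, followups, n_locations=5, threshold_db=7.0,
                      consecutive=2):
    """Return True once `consecutive` consecutive follow-up tests each show
    at least `n_locations` points worsened by more than `threshold_db` dB
    relative to baseline sensitivities."""
    streak = 0
    for vf in followups:
        worsened = sum(1 for b, v in zip(baseline, vf) if (b - v) > threshold_db)
        streak = streak + 1 if worsened >= n_locations else 0
        if streak >= consecutive:
            return True
    return False

# Hypothetical 54-point fields: one stable visit, then two with >7 dB loss.
baseline = [30.0] * 54
stable   = [29.0] * 54
worse    = [20.0] * 54
print(meets_progression(baseline, [stable, worse, worse]))  # True
```

Requiring the deficit on consecutive tests guards against flagging progression from a single unreliable field, which is why such confirmation rules are common endpoint definitions.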
Affiliation(s)
- Andrew Chen: Department of Ophthalmology, University of Washington, Seattle, Washington, United States
- Giovanni Montesano: Optometry and Visual Sciences, City, University of London, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Randy Lu: Department of Ophthalmology, University of Washington, Seattle, Washington, United States
- Cecilia S. Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, United States
- David P. Crabb: Optometry and Visual Sciences, City, University of London, London, UK
- Aaron Y. Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, United States
20. Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Advances in Ophthalmology Practice and Research 2022; 2:100078. [PMID: 37846285] [PMCID: PMC10577833] [DOI: 10.1016/j.aopr.2022.100078]
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data has made deep learning (DL) a popular topic. Main text At the moment, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making for ophthalmic diseases such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract, and other anterior segment diseases. However, most of the AI systems developed to date are still in the experimental stages, with only a few having achieved clinical application. There are a number of reasons for this, including concerns about security, privacy, limited pervasiveness, trust, and explainability. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing potential challenges and future directions.
Affiliation(s)
- Kai Jin: Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Juan Ye: Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
21. Nunez R, Harris A, Ibrahim O, Keller J, Wikle CK, Robinson E, Zukerman R, Siesky B, Verticchio A, Rowe L, Guidoboni G. Artificial Intelligence to Aid Glaucoma Diagnosis and Monitoring: State of the Art and New Directions. Photonics 2022; 9:810. [PMID: 36816462] [PMCID: PMC9934292] [DOI: 10.3390/photonics9110810]
Abstract
Recent developments in the use of artificial intelligence in the diagnosis and monitoring of glaucoma are discussed. To set the context and fix terminology, a brief historical overview of artificial intelligence is provided, along with some fundamentals of statistical modeling. Next, recent applications of artificial intelligence techniques in glaucoma diagnosis and the monitoring of glaucoma progression are reviewed, including the classification of visual field images and the detection of glaucomatous change in retinal nerve fiber layer thickness. Current challenges in the direct application of artificial intelligence to further our understanding of this disease are also outlined. The article concludes by discussing how the combined use of mathematical modeling and artificial intelligence, together with stronger communication between data scientists and clinicians, may help to address these challenges.
Affiliation(s)
- Roberto Nunez: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Alon Harris: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Omar Ibrahim: Department of Electrical Engineering, Tikrit University, Tikrit P.O. Box 42, Iraq
- James Keller: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Erin Robinson: Department of Social Work, University of Missouri, Columbia, MO 65211, USA
- Ryan Zukerman: Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY 10034, USA
- Brent Siesky: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Alice Verticchio: Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Lucas Rowe: Department of Ophthalmology, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Giovanna Guidoboni: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA; Department of Mathematics, University of Missouri, Columbia, MO 65211, USA
22. Eslami M, Kim JA, Zhang M, Boland MV, Wang M, Chang DS, Elze T. Visual Field Prediction: Evaluating the Clinical Relevance of Deep Learning Models. Ophthalmology Science 2022; 3:100222. [PMID: 36325476] [PMCID: PMC9619031] [DOI: 10.1016/j.xops.2022.100222]
Abstract
Purpose Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design Retrospective observational cohort study. Participants All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods' respective needs, the CNN data set included 54 373 samples from 7472 patients, and the RNN data set included 24 430 samples from 1809 patients. Methods The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models' performances on predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) and the last-observed VF (for the RNN) for its prediction. Main Outcome Measures PMAE in predictions. Results The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, which were close to the original studies' reported values. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model. 
Pointwise mean absolute error values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration is explicitly taken into account when developing and evaluating future deep learning models.
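The PMAE metric and the no-change comparison described above can be sketched in a few lines. The fields below are toy 4-point examples for illustration; a real 24-2 visual field has over 50 test locations.

```python
def pmae(predicted, actual):
    """Pointwise mean absolute error between two visual fields (dB), location by location."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Toy 4-point sensitivity fields in dB.
baseline  = [28.0, 27.0, 25.0, 10.0]
followup  = [27.0, 26.0, 20.0,  2.0]   # a worsening eye
model_out = [27.5, 26.5, 24.0,  9.0]   # a model that barely moves off baseline

err_model     = pmae(model_out, followup)
err_no_change = pmae(baseline, followup)  # "no-change" baseline: predict the input VF itself
print(err_model, err_no_change)
```

On a worsening eye, a model whose output stays close to the input scores almost the same PMAE as the trivial no-change prediction — which is exactly the failure mode the evaluation above exposes.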
Key Words
- Artificial intelligence
- CI, confidence interval
- CNN, convolutional neural network
- DL, deep learning
- Deep learning
- Glaucoma
- MD, mean deviation
- MPark, recurrent neural network method from Park et al
- MWen, convolutional neural network method from Wen et al
- PMAE, pointwise mean absolute error
- Prediction
- RNN, recurrent neural network
- ROP, rate of progression
- TD, total deviation
- VF, visual field
- Visual fields
- dB, decibel
Affiliation(s)
- Mohammad Eslami: Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts (Correspondence: Mohammad Eslami, PhD, Schepens Eye Research Institute of Massachusetts Eye and Ear, 20 Staniford Street, Boston, MA 02114)
- Julia A. Kim: Early Clinical Development, Genentech, Inc, South San Francisco, California
- Miao Zhang: Early Clinical Development, Genentech, Inc, South San Francisco, California
- Michael V. Boland: Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Mengyu Wang: Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Dolly S. Chang: Early Clinical Development, Genentech, Inc, South San Francisco, California; Byers Eye Institute, Stanford University, Palo Alto, California
- Tobias Elze: Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
23. Al-Aswad LA, Ramachandran R, Schuman JS, Medeiros F, Eydelman MB. Artificial Intelligence for Glaucoma: Creating and Implementing Artificial Intelligence for Disease Detection and Progression. Ophthalmol Glaucoma 2022; 5:e16-e25. [PMID: 35218987] [PMCID: PMC9399304] [DOI: 10.1016/j.ogla.2022.02.010]
Abstract
On September 3, 2020, the Collaborative Community on Ophthalmic Imaging conducted its first 2-day virtual workshop on the role of artificial intelligence (AI) and related machine learning techniques in the diagnosis and treatment of various ophthalmic conditions. In a session entitled "Artificial Intelligence for Glaucoma," a panel of glaucoma specialists, researchers, industry experts, and patients convened to share current research on the application of AI to commonly used diagnostic modalities, including fundus photography, OCT imaging, standard automated perimetry, and gonioscopy. The conference participants focused on the use of AI as a tool for disease prediction, highlighted its ability to address inequalities, and presented the limitations of and challenges to its clinical application. The panelists' discussion addressed AI and health equities from clinical, societal, and regulatory perspectives.
Affiliation(s)
- Lama A Al-Aswad: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York; Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Rithambara Ramachandran: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Joel S Schuman: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York; Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York; Department of Electrical and Computer Engineering, New York University Tandon School of Engineering, Brooklyn, New York; Center for Neural Science, NYU, New York, New York; Neuroscience Institute, NYU Langone Health, New York, New York
- Felipe Medeiros: Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina; Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina
24. Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058] [DOI: 10.1080/08164622.2022.2111201]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e., learn) by adjusting the parameters inside the model (network) during a training process, in order to complete the task on its own. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been used to automate data analysis and thus to assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng: Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam: Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz: Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman: School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro: Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey: Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen: Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia; Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia; Department of Ophthalmology, Royal Perth Hospital, Perth, Western Australia, Australia
25. Young SL, Jain N, Tatham AJ. The application of advanced imaging techniques in glaucoma. Expert Review of Ophthalmology 2022. [DOI: 10.1080/17469899.2022.2101449]
Affiliation(s)
- Su Ling Young: Princess Alexandra Eye Pavilion, Edinburgh, UK; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Nikhil Jain: Addenbrooke’s Hospital, Cambridge University Hospitals NHS Trust, Cambridge, UK
- Andrew J Tatham: Princess Alexandra Eye Pavilion, Edinburgh, UK; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
26. Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095] [PMCID: PMC9339697] [DOI: 10.1167/tvst.11.7.19]
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and to use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. In longitudinal follow-up, a >2.21% change in the visible retinal area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change relative to the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance with ground truth in identifying vascular leakage but was able to aid in identifying changes in FA leakage over time. Translational Relevance This is a proof-of-concept study showing that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians more objectively compare vascular leakage between FAs.
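The Dice Similarity Coefficient used as the concordance measure above has a compact definition, 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks (the masks here are illustrative, not study data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Toy flattened masks: 1 = pixel labeled as leakage.
algorithm = [1, 1, 0, 0, 1, 0]
clinician = [0, 1, 0, 0, 1, 1]
d = dice_coefficient(algorithm, clinician)
print(d)
```

DSC ranges from 0 (no overlap) to 1 (identical segmentations); the study's ~0.56-0.57 values sit in the "modest concordance" range the authors describe.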
Affiliation(s)
- LeAnne H Young: National Eye Institute, Bethesda, MD, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Jongwoo Kim: National Library of Medicine, Bethesda, MD, USA
- Henry Lin: National Eye Institute, Bethesda, MD, USA
- Sumit Sharma: Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- H Nida Sen: National Eye Institute, Bethesda, MD, USA
27. Zarranz-Ventura J, Bernal-Morales C, Saenz de Viteri M, Castro Alonso FJ, Urcola JA. Reply to comment "Neglect what is near", related to the Editorial "Artificial Intelligence and Ophthalmology: Current status". Archivos de la Sociedad Española de Oftalmología 2022; 97:418-419. [PMID: 35292223] [DOI: 10.1016/j.oftale.2022.03.003]
Affiliation(s)
- J Zarranz-Ventura: Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain; Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- C Bernal-Morales: Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain; Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Moorfields Eye Hospital, London, United Kingdom
- J A Urcola: Oftalmología, Hospital Universitario de Araba, Vitoria, Spain
28. Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772] [PMCID: PMC9279625] [DOI: 10.3389/fmed.2022.891369]
Abstract
The field of ophthalmic imaging has grown substantially in recent years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging and has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography and visible-light OCT, as well as imaging modalities such as fluorescence lifetime imaging ophthalmoscopy, allow a more thorough evaluation of patients and provide additional information on disease processes. In addition, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones have emerged. Finally, incorporating artificial intelligence (AI) into eye imaging has the potential to become a new milestone for the field while also contributing to the social aspects of eye care.
Affiliation(s)
- Palaiologos Alexopoulos: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Chisom Madu: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Gadi Wollstein: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States; Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States; Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Joel S. Schuman: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States; Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States; Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States; Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
29. Li F, Su Y, Lin F, Li Z, Song Y, Nie S, Xu J, Chen L, Chen S, Li H, Xue K, Che H, Chen Z, Yang B, Zhang H, Ge M, Zhong W, Yang C, Chen L, Wang F, Jia Y, Li W, Wu Y, Li Y, Gao Y, Zhou Y, Zhang K, Zhang X. A deep-learning system predicts glaucoma incidence and progression using retinal photographs. J Clin Invest 2022; 132:157968. [PMID: 35642636] [PMCID: PMC9151694] [DOI: 10.1172/jci157968]
Abstract
Background Deep learning has been widely used for glaucoma diagnosis. However, there is no clinically validated algorithm for glaucoma incidence and progression prediction. This study aims to develop a clinically feasible deep-learning system for predicting and stratifying the risk of glaucoma onset and progression based on color fundus photographs (CFPs), with clinical validation of performance in external population cohorts. Methods We established data sets of CFPs and visual fields collected from longitudinal cohorts. The mean follow-up duration was 3 to 5 years across the data sets. Artificial intelligence (AI) models were developed to predict future glaucoma incidence and progression based on the CFPs of 17,497 eyes in 9346 patients. The area under the receiver operating characteristic (AUROC) curve, sensitivity, and specificity of the AI models were calculated with reference to the labels provided by experienced ophthalmologists. Incidence and progression of glaucoma were determined based on longitudinal CFP images or visual fields, respectively. Results The AI model to predict glaucoma incidence achieved an AUROC of 0.90 (0.81-0.99) in the validation set and demonstrated good generalizability, with AUROCs of 0.89 (0.83-0.95) and 0.88 (0.79-0.97) in external test sets 1 and 2, respectively. The AI model to predict glaucoma progression achieved an AUROC of 0.91 (0.88-0.94) in the validation set, and also demonstrated outstanding predictive performance with AUROCs of 0.87 (0.81-0.92) and 0.88 (0.83-0.94) in external test sets 1 and 2, respectively. Conclusion Our study demonstrates the feasibility of deep-learning algorithms in the early detection and prediction of glaucoma progression. Funding National Natural Science Foundation of China (NSFC); the High-level Hospital Construction Project, Zhongshan Ophthalmic Center, Sun Yat-sen University; the Science and Technology Program of Guangzhou, China (2021); the Science and Technology Development Fund (FDCT) of Macau; and FDCT-NSFC.
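AUROC, the headline metric above, is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as half. A minimal sketch (the scores and labels are illustrative, not study data):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: fraction of positive-negative
    pairs in which the positive case outranks the negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model risk scores and whether each eye later developed glaucoma.
risk   = [0.9, 0.8, 0.6, 0.4, 0.2]
onset  = [1,   1,   0,   1,   0]
auc = auroc(risk, onset)
print(auc)
```

An AUROC of 0.90, as reported for the incidence model, means a future-glaucoma eye outranks a non-glaucoma eye 90% of the time — a ranking property, independent of any single decision threshold.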
Affiliation(s)
- Fei Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Yuandong Su: State Key Laboratory of Biotherapy and Center for Translational Innovations, West China Hospital and Sichuan University, Chengdu, China; PKU-MUST Center for Future Technology, Faculty of Medicine, Macao University of Science and Technology, Macau, China
- Fengbin Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Zhihuan Li: PKU-MUST Center for Future Technology, Faculty of Medicine, Macao University of Science and Technology, Macau, China
- Yunhe Song: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Sheng Nie: State Key Laboratory of Organ Failure Research, National Clinical Research Center for Kidney Disease and Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jie Xu: Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
- Linjiang Chen: Department of Ophthalmology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Shiyan Chen: Department of Ophthalmology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Hao Li: Department of Ophthalmology, Guizhou Provincial People's Hospital, Guiyang, China
- Kanmin Xue: Nuffield Laboratory of Ophthalmology, Department of Clinical Neurosciences, University of Oxford and Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Huixin Che: He Eye Specialist Hospital, Shenyang, Liaoning Province, China
- Zhengui Chen: Jiangmen Xinhui Aier New Hope Eye Hospital, Jiangmen, Guangdong, China
- Bin Yang: Department of Ophthalmology, Zigong Third People's Hospital, Zigong, China
- Huiying Zhang: Department of Ophthalmology, Fujian Provincial Hospital, Fuzhou, China
- Ming Ge: Department of Ophthalmology and Optometry, Guizhou Nursing Vocational College, Guiyang, China
- Weihui Zhong: Department of Ophthalmology, Guangzhou Development District Hospital, Guangzhou, China
- Chunman Yang: Department of Ophthalmology, The Second Affiliated Hospital of Guizhou Medical University, Kaili, China
- Lina Chen: Department of Ophthalmology, The Third People's Hospital of Dalian, Dalian, Liaoning Province, China
- Fanyin Wang: Department of Ophthalmology, Shenzhen Qianhai Shekou Free Trade Zone Hospital, Shenzhen, China
- Yunqin Jia: Department of Ophthalmology, Dali Bai Autonomous Prefecture People's Hospital, Dali, China
- Wanlin Li: Department of Ophthalmology, Wuwei People's Hospital, Wuwei, Gansu Province, China
- Yuqing Wu: Department of Ophthalmology, Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China
- Yingjie Li: Department of Ophthalmology, The First Hospital of Nanchang City, Nanchang, China
- Yuanxu Gao: PKU-MUST Center for Future Technology, Faculty of Medicine, Macao University of Science and Technology, Macau, China; State Key Laboratory of Lunar and Planetary Sciences, Macao University of Science and Technology, Taipa, Macau, China
- Yong Zhou: Clinical Research Institute, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Kang Zhang: PKU-MUST Center for Future Technology, Faculty of Medicine, Macao University of Science and Technology, Macau, China
- Xiulan Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
30. Wang SY, Tseng B, Hernandez-Boussard T. Deep Learning Approaches for Predicting Glaucoma Progression Using Electronic Health Records and Natural Language Processing. Ophthalmology Science 2022; 2:100127. [PMID: 36249690] [PMCID: PMC9559076] [DOI: 10.1016/j.xops.2022.100127]
Abstract
Purpose Advances in artificial intelligence have produced a few predictive models in glaucoma, including a logistic regression model predicting glaucoma progression to surgery. However, uncertainty exists regarding how to integrate the wealth of information in free-text clinical notes. The purpose of this study was to predict glaucoma progression requiring surgery using deep learning (DL) approaches on data from electronic health records (EHRs), including features from structured clinical data and from natural language processing of clinical free-text notes. Design Development of a DL predictive model in an observational cohort. Participants Adult patients with glaucoma at a single center treated from 2008 through 2020. Methods Ophthalmology clinical notes of patients with glaucoma were identified from EHRs. Available structured data included patient demographic information, diagnosis codes, prior surgeries, and clinical information including intraocular pressure, visual acuity, and central corneal thickness. In addition, words from patients’ first 120 days of notes were mapped to ophthalmology domain-specific neural word embeddings trained on PubMed ophthalmology abstracts. Word embeddings and structured clinical data were used as inputs to DL models to predict subsequent glaucoma surgery. Main Outcome Measures Evaluation metrics included area under the receiver operating characteristic curve (AUC) and F1 score (the harmonic mean of positive predictive value and sensitivity) on a held-out test set. Results Seven hundred forty-eight of 4512 patients with glaucoma underwent surgery. The model that incorporated both structured clinical features and input features from clinical notes achieved an AUC of 73% and an F1 of 40%, compared with only structured clinical features (AUC, 66%; F1, 34%) and only clinical free-text features (AUC, 70%; F1, 42%). All models outperformed predictions from a glaucoma specialist's review of clinical notes (F1, 29.5%).
Conclusions We can successfully predict which patients with glaucoma will need surgery using DL models on unstructured EHR text. Models incorporating free-text data outperformed those using only structured inputs. Future predictive models using EHRs should make use of information within clinical free-text notes to improve predictive performance. Additional research is needed to investigate optimal methods of incorporating imaging data into future predictive models.
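The F1 score used as an outcome measure above is the harmonic mean of positive predictive value (precision) and sensitivity (recall). A minimal sketch of that computation from confusion-matrix counts (the function name and the example counts are illustrative, not taken from the study):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of positive predictive value (PPV) and sensitivity."""
    ppv = tp / (tp + fp)          # precision: fraction of predicted positives that are real
    sensitivity = tp / (tp + fn)  # recall: fraction of real positives that are caught
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Example: 40 true positives, 60 false positives, 60 false negatives
print(round(f1_score(40, 60, 60), 2))  # 0.4
```

Note that, unlike AUC, F1 depends on the chosen decision threshold, which is why the abstract reports both.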
31
Villasana GA, Bradley C, Elze T, Myers JS, Pasquale L, De Moraes CG, Wellik S, Boland MV, Ramulu P, Hager G, Unberath M, Yohannan J. Improving Visual Field Forecasting by Correcting for the Effects of Poor Visual Field Reliability. Transl Vis Sci Technol 2022; 11:27. [PMID: 35616923] [PMCID: PMC9145029] [DOI: 10.1167/tvst.11.5.27]
Abstract
Purpose The purpose of this study was to accurately forecast future reliable visual field (VF) mean deviation (MD) values by correcting for poor reliability. Methods Four linear regression techniques (standard, unfiltered, corrected, and weighted) were fit to VF data from 5939 eyes with a final reliable VF. For each eye, all VFs, except the final one, were used to fit the models. Then, the difference between the final VF MD value and each model's estimate for the final VF MD value was used to calculate model error. We aggregated the error for each model across all eyes to compare model performance. The results were further broken down into eye-level reliability subgroups to track performance as reliability levels fluctuate. Results The standard method, used in the Humphrey Field Analyzer (HFA), was the worst performing model with an average residual that was 0.69 dB higher than the average from the unfiltered method, and 0.79 dB higher than that of the weighted and corrected methods. The weighted method was the best performing model, beating the standard model by as much as 1.75 dB in the 40% to 50% eye-level reliability subgroup. However, its average 95% prediction interval was relatively large at 7.67 dB. Conclusions Including all VFs in the trend estimation has more predictive power for future reliable VFs than excluding unreliable VFs. Correcting for VF reliability further improves model accuracy. Translational Relevance The VF correction methods described in this paper may allow clinicians to catch VF worsening at an earlier stage.
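The weighted method described above fits a trend line in which each visual field contributes in proportion to a weight. A minimal sketch of a weighted least-squares MD trend (the weighting scheme shown here is generic and illustrative, not the paper's specific reliability correction):

```python
def weighted_md_slope(years, md_values, weights):
    """Closed-form weighted least-squares fit of MD (dB) against time (years).

    Returns (intercept, slope). Weights would reflect per-test reliability
    (e.g., fixation losses, false positives); here they are arbitrary inputs.
    """
    w_sum = sum(weights)
    t_bar = sum(w * t for w, t in zip(weights, years)) / w_sum
    y_bar = sum(w * y for w, y in zip(weights, md_values)) / w_sum
    num = sum(w * (t - t_bar) * (y - y_bar)
              for w, t, y in zip(weights, years, md_values))
    den = sum(w * (t - t_bar) ** 2 for w, t in zip(weights, years))
    slope = num / den
    intercept = y_bar - slope * t_bar
    return intercept, slope

# Down-weighting an unreliable outlier test pulls the trend back
# toward the reliable points instead of discarding the test outright.
intercept, slope = weighted_md_slope([0, 1, 2, 3],
                                     [-2.0, -2.5, -6.0, -3.5],
                                     [1.0, 1.0, 0.2, 1.0])
```

Setting all weights to 1 recovers the ordinary (unfiltered) regression, which is one way to see why including all VFs with appropriate weights can outperform simply excluding unreliable ones.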
Affiliation(s)
- Gabriel A. Villasana: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Chris Bradley: Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Tobias Elze: Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Louis Pasquale: Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sarah Wellik: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Michael V. Boland: Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Pradeep Ramulu: Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Greg Hager: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
- Jithin Yohannan: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA; Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
32
Ittoop SM, Jaccard N, Lanouette G, Kahook MY. The Role of Artificial Intelligence in the Diagnosis and Management of Glaucoma. J Glaucoma 2022; 31:137-146. [PMID: 34930873] [DOI: 10.1097/ijg.0000000000001972]
Abstract
Glaucomatous optic neuropathy is the leading cause of irreversible blindness worldwide. Diagnosis and monitoring of the disease involve integrating information from the clinical examination with subjective data from visual field testing and objective biometric data including pachymetry, corneal hysteresis, and optic nerve and retinal imaging. This intricate process is further complicated by the lack of clear definitions for the presence and progression of glaucomatous optic neuropathy, which makes it vulnerable to clinician interpretation error. Artificial intelligence (AI) and AI-enabled workflows have been proposed as a plausible solution. Applications derived from this field of computer science can improve the quality and robustness of insights obtained from clinical data and enhance the clinician's approach to patient care. This review clarifies key terms and concepts used in the AI literature, discusses the current advances of AI in glaucoma, elucidates the clinical advantages of and challenges to implementing this technology, and highlights potential future applications.
Affiliation(s)
- Sabita M Ittoop: The George Washington University Medical Faculty Associates, Washington, DC
- Malik Y Kahook: Sue Anschutz-Rodgers Eye Center, The University of Colorado School of Medicine, Aurora, CO
33
Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022; 90:101052. [PMID: 35216894] [DOI: 10.1016/j.preteyeres.2022.101052]
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
34
Shamsi F, Liu R, Owsley C, Kwon M. Identifying the Retinal Layers Linked to Human Contrast Sensitivity Via Deep Learning. Invest Ophthalmol Vis Sci 2022; 63:27. [PMID: 35179554] [PMCID: PMC8859491] [DOI: 10.1167/iovs.63.2.27]
Abstract
Purpose Luminance contrast is the fundamental building block of human spatial vision; contrast sensitivity, the reciprocal of the contrast threshold required for target detection, has therefore been a barometer of human visual function. Although retinal ganglion cells (RGCs) are known to be involved in contrast coding, it remains unknown whether the retinal layers containing RGCs are linked to a person's contrast sensitivity (e.g., Pelli-Robson contrast sensitivity) and, if so, to what extent. Thus, the current study aims to identify the retinal layers and features critical for predicting a person's contrast sensitivity via deep learning. Methods Data were collected from 225 subjects, including individuals with glaucoma, age-related macular degeneration, or normal vision. A deep convolutional neural network was trained to predict a person's Pelli-Robson contrast sensitivity from structural retinal images measured with optical coherence tomography. Then, activation maps representing the critical features learned by the network for the output prediction were computed. Results The thicknesses of both the ganglion cell and inner plexiform layers, reflecting RGC counts, were significantly correlated with contrast sensitivity (r = 0.26-0.58, all Ps < 0.001 across eccentricities). Importantly, the results showed that retinal layers containing RGCs were the critical features the network used to predict a person's contrast sensitivity (average R2 = 0.36 ± 0.10). Conclusions The findings confirm the structure-function relationship for contrast sensitivity while highlighting the role of RGC density in human contrast sensitivity.
Affiliation(s)
- Foroogh Shamsi: Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Rong Liu: Department of Psychology, Northeastern University, Boston, Massachusetts, United States; Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States; Department of Life Science and Medicine, University of Science and Technology of China, Hefei, China
- Cynthia Owsley: Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- MiYoung Kwon: Department of Psychology, Northeastern University, Boston, Massachusetts, United States; Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
35
Bunod R, Augstburger E, Brasnu E, Labbe A, Baudouin C. [Artificial intelligence and glaucoma: A literature review]. J Fr Ophtalmol 2022; 45:216-232. [PMID: 34991909] [DOI: 10.1016/j.jfo.2021.11.002]
Abstract
In recent years, research in artificial intelligence (AI) has experienced an unprecedented surge in the field of ophthalmology, in particular glaucoma. The diagnosis and follow-up of glaucoma are complex and rely on a body of clinical evidence and ancillary tests. This large amount of information from structural and functional testing of the optic nerve and macula makes glaucoma a particularly appropriate field for the application of AI. In this paper, we review work using AI in the field of glaucoma, whether for screening, diagnosis or detection of progression. Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry. Combining these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. We discuss potential applications as well as obstacles and limitations to the deployment and validation of such models. While there is no doubt that AI has the potential to revolutionize glaucoma management and screening, research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explainability of the predictions.
Affiliation(s)
- R Bunod: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Augstburger: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Brasnu: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France
- A Labbe: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
- C Baudouin: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
36
Montesano G, Chen A, Lu R, Lee CS, Lee AY. UWHVF: A Real-World, Open Source Dataset of Perimetry Tests From the Humphrey Field Analyzer at the University of Washington. Transl Vis Sci Technol 2022; 11:2. [PMID: 34978561] [PMCID: PMC8742531] [DOI: 10.1167/tvst.11.1.1]
Abstract
Purpose This article describes the Humphrey field analyzer (HFA) dataset from the Department of Ophthalmology at the University of Washington. Methods Pointwise sensitivities were extracted from HFA 24-2, stimulus III visual fields (VFs). Total deviation (TD), mean TD (MTD), pattern deviation, and pattern standard deviation (PSD) were calculated. Progression analysis was performed with simple linear regression on global, regional, and pointwise values for VF series with more than four tests spanning at least four months. VF data were extracted independently of clinical information except for patient age, gender, and laterality. Results This dataset includes 28,943 VFs from 7248 eyes of 3871 patients. Progression was calculated for 2985 eyes from 1579 patients. Median [interquartile range] age was 64 years [54, 73], and follow-up was 2.49 years [1.11, 5.03]. Baseline MTD was −4.51 dB [−8.01, −2.65], and baseline PSD was 2.41 dB [1.7, 5.34]. MTD was found to decrease by −0.10 dB/yr [−0.40, 0.11] in eyes for which progression analysis could be performed. Conclusions VFs with deep localized defects (PSD > 12 dB and MTD of −15 dB to −25 dB) were plotted, visually inspected, and found to be consistent with neurologic or glaucomatous VFs from patients. For a small number of tests, extracted sensitivity values were compared to the corresponding printouts and confirmed to match. Translational Relevance This open-access pointwise VF dataset serves as a source of raw data for investigations such as VF behavior, clinical comparisons to trials, and development of new machine learning algorithms.
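The progression analysis described above fits a simple linear regression to a VF series once it clears an eligibility filter. A minimal sketch of that per-eye trend estimate (the function name and the exact thresholds are illustrative; the dataset's filter is described as more than four tests spanning at least four months):

```python
def md_progression_slope(years, mtd_values, min_tests=5, min_span_years=4 / 12):
    """Per-eye progression as the OLS slope of mean total deviation over time.

    Returns the slope in dB per year, or None when the series is too short
    to support a trend estimate.
    """
    if len(years) < min_tests or (max(years) - min(years)) < min_span_years:
        return None  # series fails the eligibility filter
    n = len(years)
    t_bar = sum(years) / n
    y_bar = sum(mtd_values) / n
    num = sum((t - t_bar) * (y - y_bar) for t, y in zip(years, mtd_values))
    den = sum((t - t_bar) ** 2 for t in years)
    return num / den  # dB/yr; negative values indicate worsening
```

Applied pointwise instead of to the global MTD, the same fit yields the regional and pointwise slopes the dataset reports.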
Affiliation(s)
- Giovanni Montesano: City, University of London, Optometry and Visual Sciences, London, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Andrew Chen: Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Randy Lu: Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Cecilia S Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, USA
- Aaron Y Lee: Department of Ophthalmology, University of Washington, Seattle, Washington, USA
37
Labkovich M, Paul M, Kim E, Serafini RA, Lakhtakia S, Valliani AA, Warburton AJ, Patel A, Zhou D, Sklar B, Chelnis J, Elahi E. Portable hardware & software technologies for addressing ophthalmic health disparities: A systematic review. Digit Health 2022; 8:20552076221090042. [PMID: 35558637] [PMCID: PMC9087242] [DOI: 10.1177/20552076221090042]
Abstract
Vision impairment continues to be a major global problem: the WHO estimates that 2.2 billion people live with vision loss or blindness, one billion cases of which could be prevented by expanding diagnostic capabilities. Direct global healthcare costs associated with these conditions totaled $255 billion in 2010, with a rapid upward projection to $294 billion in 2020. Accordingly, the WHO proposed 2030 targets to enhance integration and patient-centered vision care by expanding worldwide coverage of refractive error and cataract services. Given the cost and portability limitations of adapted vision screening models, there is a clear need for new, more accessible vision testing tools. This comparative, systematic review highlights the need for new ophthalmic equipment and approaches while examining existing and emerging technologies that could expand the capacity for disease identification and access to diagnostic tools. Specifically, the review focuses on portable hardware- and software-centered strategies that can be deployed in remote locations to detect ophthalmic conditions and refractive error. Advancements in portable hardware, automated software screening tools, and big data-centric analytics, including machine learning, may provide an avenue for improving ophthalmic healthcare.
Affiliation(s)
- Margarita Labkovich: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Megan Paul: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Eliott Kim: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Randal A. Serafini: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Nash Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Aly A Valliani: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Andrew J Warburton: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Aashay Patel: Department of Medical Education, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Davis Zhou: Department of Ophthalmology, New York Eye and Ear Infirmary of Mount Sinai, New York, NY, USA
- Bonnie Sklar: Department of Ophthalmology, Wills Eye Hospital, Philadelphia, PA, USA
- James Chelnis: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ebrahim Elahi: Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
38
Shon K, Sung KR, Shin JW. Can Artificial Intelligence Predict Glaucomatous Visual Field Progression? A Spatial-Ordinal Convolutional Neural Network Model. Am J Ophthalmol 2022; 233:124-134. [PMID: 34283982] [DOI: 10.1016/j.ajo.2021.06.025]
Abstract
PURPOSE To develop an artificial neural network model incorporating both spatial and ordinal approaches to predict glaucomatous visual field (VF) progression. DESIGN Cohort study. METHODS From a cohort of primary open-angle glaucoma patients, 9212 eyes of 6047 patients who underwent regular, reliable VF examinations for >4 years were included. We constructed all possible spatial-ordinal tensors by stacking 3 consecutive VF tests (VF-blocks) with at least 3 years of follow-up. Trend-based, event-based, and combined criteria were defined to determine progression. VF-blocks were considered "progressed" if progression occurred within 3 years; the progression was further confirmed after 3 years. We constructed 6 convolutional neural network (NN) models and 2 linear models: regression on global indices and pointwise linear regression (PLR). We compared the area under the receiver operating characteristic curve (AUROC) of each model for the prediction of glaucomatous VF progression. RESULTS Among 43,260 VF-blocks, 4406 (10.2%), 4376 (10.1%), and 2394 (5.5%) were classified as progression based on the trend-based, event-based, and combined criteria, respectively. For all 3 criteria, the progression group was significantly older and had worse initial mean deviation (MD) and VF index (VFI) than the nonprogression group (P < .001 for all). The best-performing NN model had an AUROC of 0.864, with a sensitivity of 0.42 at a specificity of 0.95. In contrast, an AUROC of 0.611 was estimated from a sensitivity of 0.28 at a specificity of 0.84 for PLR. CONCLUSIONS The NN models incorporating spatial-ordinal characteristics demonstrated significantly better performance than the linear models in the prediction of glaucomatous VF progression.
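The "VF-block" construction above stacks three consecutive tests into one spatial-ordinal tensor, with the time dimension playing the role of channels. A minimal sketch of that stacking, assuming a 24-2 field is stored as an 8 × 9 grid of sensitivities (the grid shape and helper name are illustrative; the paper's exact encoding may differ):

```python
import numpy as np

def make_vf_blocks(series):
    """series: list of 2-D arrays (one per VF test), oldest first.

    Returns every window of 3 consecutive tests as a (3, H, W) tensor,
    suitable as multi-channel CNN input.
    """
    return [np.stack(series[i:i + 3], axis=0) for i in range(len(series) - 2)]

# Six tests yield four overlapping VF-blocks, each shaped (3, 8, 9).
series = [np.random.rand(8, 9) for _ in range(6)]
blocks = make_vf_blocks(series)
```

Treating the ordinal (time) axis as channels is what lets an ordinary 2-D convolution see both the spatial defect pattern and its short-term evolution at once.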
Affiliation(s)
- Kilhwan Shon: Department of Ophthalmology, Gangneung Asan Hospital, Gangneung, Korea
- Kyung Rim Sung: Department of Ophthalmology, College of Medicine, University of Ulsan, Asan Medical Center, Seoul, Korea
- Joong Won Shin: Department of Ophthalmology, College of Medicine, University of Ulsan, Asan Medical Center, Seoul, Korea
39
Tan Z, Zhu Z, He Z, He M. Artificial Intelligence in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_7]
40
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200]
41
López-Dorado A, Ortiz M, Satue M, Rodrigo MJ, Barea R, Sánchez-Morla EM, Cavaliere C, Rodríguez-Ascariz JM, Orduna-Hospital E, Boquete L, Garcia-Martin E. Early Diagnosis of Multiple Sclerosis Using Swept-Source Optical Coherence Tomography and Convolutional Neural Networks Trained with Data Augmentation. Sensors (Basel) 2021; 22:167. [PMID: 35009710] [PMCID: PMC8747672] [DOI: 10.3390/s22010167]
Abstract
BACKGROUND The aim of this paper is to implement a system to facilitate the diagnosis of multiple sclerosis (MS) in its initial stages. It does so using a convolutional neural network (CNN) to classify images captured with swept-source optical coherence tomography (SS-OCT). METHODS SS-OCT images from 48 control subjects and 48 recently diagnosed MS patients were used. These images show the thicknesses (45 × 60 points) of the following structures: complete retina, retinal nerve fiber layer, two ganglion cell layers (GCL+, GCL++) and choroid. The Cohen distance is used to identify the structures, and the regions within them, with the greatest discriminant capacity. The original database of OCT images is augmented by a deep convolutional generative adversarial network to expand the CNN's training set. RESULTS The retinal structures with the greatest discriminant capacity are the GCL++ (44.99% of image points), complete retina (26.71%) and GCL+ (22.93%). Thresholding these images and using them as inputs to a CNN comprising two convolution modules and one classification module achieves sensitivity = specificity = 1.0. CONCLUSIONS Feature pre-selection and the use of a convolutional neural network may be a promising, nonharmful, low-cost, easy-to-perform and effective means of assisting the early diagnosis of MS based on SS-OCT thickness data.
Affiliation(s)
- Almudena López-Dorado: Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
- Miguel Ortiz: Computer Vision, Imaging and Machine Intelligence Research Group, Interdisciplinary Center for Security, Reliability and Trust (SnT), University of Luxembourg, 4365 Luxembourg, Luxembourg
- María Satue: Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
- María J. Rodrigo: Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
- Rafael Barea: Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
- Eva M. Sánchez-Morla: Department of Psychiatry, Hospital 12 de Octubre Research Institute (i+12), 28041 Madrid, Spain; Faculty of Medicine, Complutense University of Madrid, 28040 Madrid, Spain; Biomedical Research Networking Centre in Mental Health (CIBERSAM), 28029 Madrid, Spain
- Carlo Cavaliere: Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
- José M. Rodríguez-Ascariz: Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
- Elvira Orduna-Hospital: Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
- Luciano Boquete: Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
- Elena Garcia-Martin: Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
42
Christopher M, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM. Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics Based on Thickness Maps from Macula OCT. Ophthalmology 2021; 128:1534-1548. [PMID: 33901527] [DOI: 10.1016/j.ophtha.2021.04.022]
Abstract
PURPOSE To develop deep learning (DL) systems estimating visual function from macula-centered spectral-domain (SD) OCT images. DESIGN Evaluation of a diagnostic technology. PARTICIPANTS A total of 2408 10-2 visual field (VF) SD OCT pairs and 2999 24-2 VF SD OCT pairs collected from 645 healthy and glaucoma subjects (1222 eyes). METHODS DL models were trained on thickness maps from Spectralis macula SD OCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Individual and combined DL models were trained using thickness data from 6 layers (retinal nerve fiber layer [RNFL], ganglion cell layer [GCL], inner plexiform layer [IPL], ganglion cell-IPL [GCIPL], ganglion cell complex [GCC], and retina). Linear regression on mean layer thicknesses was used for comparison. MAIN OUTCOME MEASURES DL models were evaluated using R2 and mean absolute error (MAE) against 10-2 and 24-2 VF measurements. RESULTS Combined DL models estimating 10-2 achieved an R2 of 0.82 (95% confidence interval [CI], 0.68-0.89) for MD and 0.69 (95% CI, 0.55-0.81) for PSD, and MAEs of 1.9 dB (95% CI, 1.6-2.4 dB) for MD and 1.5 dB (95% CI, 1.2-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 10-2 MD (0.61 [95% CI, 0.47-0.71] and 3.0 dB [95% CI, 2.5-3.5 dB]) and 10-2 PSD (0.46 [95% CI, 0.31-0.60] and 2.3 dB [95% CI, 1.8-2.7 dB]). Combined DL models estimating 24-2 achieved an R2 of 0.79 (95% CI, 0.72-0.84) for MD and 0.68 (95% CI, 0.53-0.79) for PSD, and MAEs of 2.1 dB (95% CI, 1.8-2.5 dB) for MD and 1.5 dB (95% CI, 1.3-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 24-2 MD (0.41 [95% CI, 0.26-0.57] and 3.4 dB [95% CI, 2.7-4.5 dB]) and 24-2 PSD (0.38 [95% CI, 0.20-0.57] and 2.4 dB [95% CI, 2.0-2.8 dB]). The GCIPL (R2 = 0.79) and GCC (R2 = 0.75) had the highest performance estimating 10-2 and 24-2 MD, respectively.
CONCLUSIONS Deep learning models improved estimates of functional loss from SD OCT imaging. Accurate estimates can help clinicians individualize VF testing for patients.
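The study above evaluates its estimates with R2 and mean absolute error against the measured VF values. A minimal sketch of those two metrics (the function name and example values are illustrative, not from the study):

```python
def mae_and_r2(estimates, observed):
    """Mean absolute error and coefficient of determination (R^2) between
    model estimates and measured values (e.g., VF MD in dB)."""
    n = len(observed)
    mae = sum(abs(e - o) for e, o in zip(estimates, observed)) / n
    y_bar = sum(observed) / n
    ss_res = sum((o - e) ** 2 for e, o in zip(estimates, observed))  # residual error
    ss_tot = sum((o - y_bar) ** 2 for o in observed)                  # total variance
    return mae, 1 - ss_res / ss_tot

mae, r2 = mae_and_r2([-1.9, -4.0], [-2.0, -4.5])
```

MAE reports the error in the measurement's own units (dB here), while R2 reports the fraction of variance explained, which is why abstracts like this one quote both.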
Collapse
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Department of Ophthalmology, University Medical Center Mainz, Mainz, Germany
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California.
43
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658 DOI: 10.1042/cs20210207] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/14/2021] [Accepted: 09/29/2021] [Indexed: 12/13/2022]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage, and only a handful have achieved clinical translation. This is due to a combination of factors, including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming these challenges will require a combined approach. First, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Second, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required in order to improve transparency, minimize abuse and ensure reproducibility. Third, frameworks will be required to obtain patient consent, perform ethical assessment and evaluate end-user perception. Finally, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and guide DLS development.
44
Zhang Y, Wang N, Liu H. Re: Christopher et al.: Deep learning approaches predict glaucomatous visual field damage from OCT optic nerve head en face images and retinal nerve fiber layer thickness maps (Ophthalmology. 2020;127:346-356). Ophthalmology 2021; 129:e4-e5. [PMID: 34629193 DOI: 10.1016/j.ophtha.2021.07.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 07/15/2021] [Accepted: 07/15/2021] [Indexed: 10/20/2022] Open
Affiliation(s)
- Yue Zhang
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Ophthalmology and Visual Sciences Key Laboratory, Capital Medical University, Beijing, China
- Ningli Wang
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Ophthalmology and Visual Sciences Key Laboratory, Capital Medical University, Beijing, China; Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Hanruo Liu
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Ophthalmology and Visual Sciences Key Laboratory, Capital Medical University, Beijing, China; Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China; School of Information and Electronics, Beijing Institute of Technology, Beijing, China.
45
PeriorbitAI: Artificial Intelligence Automation of Eyelid and Periorbital Measurements. Am J Ophthalmol 2021; 230:285-296. [PMID: 34010596 DOI: 10.1016/j.ajo.2021.05.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 05/02/2021] [Accepted: 05/05/2021] [Indexed: 12/20/2022]
Abstract
PURPOSE To develop a deep learning semantic segmentation network to automate the assessment of 8 periorbital measurements. DESIGN Development and validation of an artificial intelligence (AI) segmentation algorithm. METHODS A total of 418 photographs of periorbital areas were used to train a deep learning semantic segmentation model to segment iris, aperture, and brow areas. These data were used to develop a post-processing algorithm that measured margin reflex distance (MRD) 1 and 2, medial canthal height (MCH), lateral canthal height (LCH), medial brow height (MBH), lateral brow height (LBH), medial intercanthal distance (MID), and lateral intercanthal distance (LID). The algorithm's validity was evaluated on a prospective hold-out test set against 3 graders. The main outcome measures were the dice coefficient, mean absolute difference, intraclass correlation coefficient, and Bland-Altman analysis. A smartphone video was also segmented and evaluated as proof of concept. RESULTS The AI algorithm performed in close agreement with all human graders, with a mean absolute difference of 0.5 mm for MRD1, MRD2, LCH, and MCH. The mean absolute difference between graders was approximately 1.5-2 mm for LBH and MBH and approximately 2-4 mm for MID and LID. The 95% confidence intervals for all graders overlapped in most cases, demonstrating that the algorithm performs similarly to human graders. The segmentation of a smartphone video demonstrated that MRD1 can be measured dynamically. CONCLUSIONS We present, to our knowledge, the first open-source artificial intelligence system capable of automating static and dynamic periorbital measurements. A fully automated tool stands to transform the delivery of clinical care and the quantification of surgical outcomes.
46
Kamiya K, Ayatsuka Y, Kato Y, Shoji N, Miyai T, Ishii H, Mori Y, Miyata K. Prediction of keratoconus progression using deep learning of anterior segment optical coherence tomography maps. Ann Transl Med 2021; 9:1287. [PMID: 34532424 PMCID: PMC8422102 DOI: 10.21037/atm-21-1772] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 06/11/2021] [Indexed: 12/12/2022]
Abstract
Background To predict keratoconus progression using deep learning of the color-coded maps measured with a swept-source anterior segment optical coherence tomography (AS-OCT) device. Methods We enrolled 218 keratoconic eyes with and without disease progression. Using deep learning of the 6 color-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power, and pachymetry) obtained by the AS-OCT (CASIA, Tomey), we assessed the accuracy, sensitivity, and specificity of predicting keratoconus progression in such eyes. Results Deep learning of the 6 color-coded maps achieved an accuracy of 0.794 in discriminating keratoconus with and without progression. For single-map analysis, the posterior elevation map (0.798) showed the highest accuracy, followed by the anterior curvature map (0.775), posterior curvature map (0.757), anterior elevation map (0.752), total refractive power map (0.729), and pachymetry map (0.720), in distinguishing between progressive and non-progressive keratoconus. Adjusting the algorithm by age subgroup improved the accuracy to 0.849. Conclusions Deep learning of the AS-OCT color-coded maps effectively discriminates progressive from non-progressive keratoconus with an accuracy of approximately 85% using the age-adjusted algorithm, indicating that it may become an aid for predicting disease progression, which is clinically beneficial for decision-making on the surgical indication for corneal cross-linking (CXL).
Affiliation(s)
- Kazutaka Kamiya
- Visual Physiology, Kitasato University, School of Allied Health Sciences, Kanagawa, Japan
- Nobuyuki Shoji
- Department of Ophthalmology, Kitasato University, School of Medicine, Kanagawa, Japan
- Takashi Miyai
- Department of Ophthalmology, Tokyo University, School of Medicine, Tokyo, Japan
- Hitoha Ishii
- Department of Ophthalmology, Tokyo University, School of Medicine, Tokyo, Japan
- Yosai Mori
- Department of Ophthalmology, Miyata Eye Hospital, Miyazaki, Japan
- Kazunori Miyata
- Department of Ophthalmology, Miyata Eye Hospital, Miyazaki, Japan
47
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHODS PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched, up to 10 December 2020, for studies comparing the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations using the same datasets. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence interval [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning had performance similar to that of ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
- Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France; CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France; Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
- Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
- Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
- Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France; CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
- Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
48
Wong SH, Tsai JC. Telehealth and Screening Strategies in the Diagnosis and Management of Glaucoma. J Clin Med 2021; 10:jcm10163452. [PMID: 34441748 PMCID: PMC8396962 DOI: 10.3390/jcm10163452] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 07/31/2021] [Accepted: 08/02/2021] [Indexed: 11/16/2022] Open
Abstract
Telehealth has become a viable option for glaucoma screening and glaucoma monitoring due to advances in technology. The ability to measure intraocular pressure without an anesthetic and to take optic nerve photographs without pharmacologic pupillary dilation using portable equipment has allowed glaucoma screening programs to generate enough data for assessment. At home, patients can perform visual acuity testing, web-based visual field testing, rebound tonometry, and video visits with the physician to monitor for glaucomatous progression. Artificial intelligence will enhance the accuracy of data interpretation and inspire confidence in popularizing telehealth for glaucoma.
49
Kim TM, Choi W, Choi IY, Park SJ, Yoon KH, Chang DJ. Semi-AI and Full-AI digitizer: The ways to digitalize visual field big data. Comput Methods Programs Biomed 2021; 207:106168. [PMID: 34051411 DOI: 10.1016/j.cmpb.2021.106168] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 05/04/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Glaucoma is one of the major causes of blindness; it is incurable and irreversible, so it is essential to detect glaucomatous visual deficits early and to track the progression of visual disorders during treatment. To minimize the risk of glaucoma, it is necessary not only to diagnose and monitor the disease but also to predict its prognosis via indicators from Visual Field (VF) tests. However, information from VF tests cannot be used directly in clinical studies because most medical institutions store VF test sheets as Portable Document Format (PDF) or image files in different standards. METHODS We developed AI-based real-time VF big-data digitizing systems that digitalize VF test images in real time in two ways: a Semi-AI and a Full-AI digitizer. The Semi-AI digitizer detects the VF text area with actual coordinates derived from a mouse-handler system. The Full-AI digitizer detects the VF text area with a Faster Region-Based Convolutional Neural Network (Faster R-CNN). After detecting the text area, both systems extract text with Recurrent Neural Network-based Optical Character Recognition. The Semi-AI and Full-AI digitizers post-process the extracted text with an in-system algorithm and an out-of-system algorithm, respectively. RESULTS Both systems used 325,310 VF test sheets from a tertiary hospital and extracted a total of 5,530,270 texts. From 100 randomly selected VF sheets, 3,400 texts were used for validation. The Semi-AI and Full-AI digitizers achieved accuracies of 0.993 and 0.983, respectively. CONCLUSION This study demonstrates the effectiveness of AI applications in detecting text areas and the different implementation methodologies of the post-processing step. In detecting text areas, the Semi-AI digitizer may be preferable to the Full-AI digitizer in terms of system speed and labeling labor when the number of sheet types to be classified is small. However, the Full-AI digitizer is recommended because it detects text areas regardless of the resolution and size of the VF sheets; the types of real-world VF test sheets cannot be predicted, and they become even less predictable when extended to multi-hospital studies. For post-processing, the Semi-AI methodology is recommended because it produced more accurate results with less effort and, being implemented in-system, is more convenient for researchers.
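The pipeline this abstract describes ends by post-processing raw OCR text into structured values. As a purely illustrative sketch (not the authors' implementation; the field labels and sheet layout are assumptions), a post-processing step that pulls global VF indices out of OCR output might look like:

```python
import re
from typing import Optional

# Hypothetical patterns for global indices on a digitized VF sheet.
# Real sheets vary in layout, which is exactly why the paper needs
# detection + OCR stages before this parsing step.
MD_PATTERN = re.compile(r"MD[^-\d]*(-?\d+\.\d+)\s*dB", re.IGNORECASE)
PSD_PATTERN = re.compile(r"PSD[^-\d]*(\d+\.\d+)\s*dB", re.IGNORECASE)

def parse_global_indices(ocr_text: str) -> dict[str, Optional[float]]:
    """Return MD and PSD in dB, or None when a value is not found."""
    md = MD_PATTERN.search(ocr_text)
    psd = PSD_PATTERN.search(ocr_text)
    return {
        "md_db": float(md.group(1)) if md else None,
        "psd_db": float(psd.group(1)) if psd else None,
    }

sheet = "24-2 SITA Standard  MD: -5.32 dB  PSD: 4.10 dB"
print(parse_global_indices(sheet))
```

Returning `None` rather than raising keeps a batch run over hundreds of thousands of sheets from stopping on one unreadable scan; failed parses can be queued for manual review instead.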
Affiliation(s)
- Tong Min Kim
- Department of Biomedicine & Health Sciences, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea.
- Wonseo Choi
- Department of Electrical Engineering, Hanyang University of Korea, Seoul 04763, Republic of Korea.
- In-Young Choi
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea.
- Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Gyunggi do 13620, Republic of Korea.
- Kun-Ho Yoon
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea.
- Dong-Jin Chang
- Department of Ophthalmology and Visual Science, The Catholic University of Korea College of Medicine, Catholic University of Korea Yeouido Saint Mary's Hospital, Seoul 06591, Republic of Korea.
50
Zarranz-Ventura J, Bernal-Morales C, Saenz de Viteri M, Castro Alonso FJ, Urcola JA. Artificial intelligence and ophthalmology: Current status. Arch Soc Esp Oftalmol 2021; 96:399-400. [PMID: 34340776 DOI: 10.1016/j.oftale.2021.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 06/23/2021] [Indexed: 06/13/2023]
Affiliation(s)
- J Zarranz-Ventura
- Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Spain; Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain.
- C Bernal-Morales
- Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Spain; Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- J A Urcola
- Oftalmología, Hospital Universitario de Araba, Vitoria, Spain