1
Chen Z, Ishikawa H, Wang Y, Wollstein G, Schuman JS. Deep-Learning-Based Group Pointwise Spatial Mapping of Structure to Function in Glaucoma. Ophthalmol Sci 2024; 4:100523. [PMID: 38881610 PMCID: PMC11179402 DOI: 10.1016/j.xops.2024.100523]
Abstract
Purpose To establish a generalizable pointwise spatial relationship between structure and function through occlusion analysis of a deep-learning (DL) model for predicting visual field (VF) sensitivities from 3-dimensional (3D) OCT scans. Design Retrospective cross-sectional study. Participants A total of 2151 eyes from 1129 patients. Methods A DL model was trained to predict the 52 VF sensitivities of 24-2 standard automated perimetry from 3D spectral-domain OCT images of the optic nerve head (ONH) using 12 915 OCT-VF pairs. Using occlusion analysis, the contribution of each individual cube covering a 240 × 240 × 31.25 μm region of the ONH to the model's prediction was systematically evaluated for each OCT-VF pair in a separate test set of 996 OCT-VF pairs. After simple translation (shifting in the x- and y-axes to match the ONH center), group t-statistic maps were derived to visualize statistically significant ONH regions for each VF test point within a group. This analysis quantified the importance of each super voxel (240 × 240 × 31.25 μm, tiling the entire 4.32 × 4.32 × 1.125 mm ONH cube) in predicting VF test points for specific patient groups. Main Outcome Measures The region of the ONH corresponding to each VF test point and the effect of the former on the latter. Results The test set was divided into 2 groups: the healthy-to-early-glaucoma group (792 OCT-VF pairs; VF mean deviation [MD], -1.32 ± 1.90 decibels [dB]) and the moderate-to-advanced-glaucoma group (204 OCT-VF pairs; VF MD, -17.93 ± 7.68 dB). Two-dimensional group t-statistic maps (x, y projection) were generated for both groups, assigning related ONH regions to VF test points. The identified influential structural locations for VF sensitivity prediction at each test point aligned well with existing knowledge of structure-function spatial relationships.
Conclusions This study visualized the global trend of point-by-point spatial relationships between OCT-based structure and VF-based function without requiring prior knowledge or segmentation of the OCTs. The revealed spatial correlations were consistent with previously published mappings. This demonstrates the possibility of learning from trained machine learning models without applying any prior knowledge, yielding mappings that are potentially robust and free from bias. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
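The occlusion analysis described above can be sketched as follows. The sliding super-voxel masking comes from the abstract; the toy model, patch size, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def occlusion_sensitivity(volume, predict_fn, patch=(8, 8, 4), baseline=0.0):
    """Occlusion analysis: zero out one super voxel at a time and record
    how much the model's prediction changes relative to the intact volume."""
    d, h, w = volume.shape
    pd, ph, pw = patch
    ref = predict_fn(volume)  # prediction on the unoccluded volume
    sens = np.zeros((d // pd, h // ph, w // pw))
    for i in range(d // pd):
        for j in range(h // ph):
            for k in range(w // pw):
                occluded = volume.copy()
                occluded[i*pd:(i+1)*pd, j*ph:(j+1)*ph, k*pw:(k+1)*pw] = baseline
                # importance = absolute change in the predicted sensitivity
                sens[i, j, k] = abs(predict_fn(occluded) - ref)
    return sens

# toy "model": mean intensity of the volume stands in for a VF sensitivity
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 8))
s = occlusion_sensitivity(vol, lambda v: float(v.mean()), patch=(8, 8, 4))
print(s.shape)  # (2, 2, 2)
```

In the study, per-pair sensitivity maps like `s` are aligned to the ONH center and aggregated into group t-statistic maps; here a single map is computed for one toy volume.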
Affiliation(s)
- Zhiqi Chen: Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York; Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York
- Hiroshi Ishikawa: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York; Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon; Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Yao Wang: Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York
- Gadi Wollstein: Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York; Center for Neural Science, NYU College of Arts and Sciences, New York, New York
- Joel S. Schuman: Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, New York; Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, New York; Center for Neural Science, NYU College of Arts and Sciences, New York, New York; Glaucoma Service, Eye Hospital, Philadelphia, Pennsylvania; Department of Ophthalmology, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania; Drexel University School of Biomedical Engineering, Sciences and Health Studies
2
Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024; 11:122. [PMID: 38391608 PMCID: PMC10886285 DOI: 10.3390/bioengineering11020122]
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data alerting physicians about rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations to progress from bench to bedside.
Affiliation(s)
- Yan Zhu: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li: Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani: Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
3
Mahmoudinezhad G, Moghimi S, Cheng J, Ru L, Yang D, Agrawal K, Dixit R, Beheshtaein S, Du KH, Latif K, Gunasegaran G, Micheletti E, Nishida T, Kamalipour A, Walker E, Christopher M, Zangwill L, Vasconcelos N, Weinreb RN. Deep Learning Estimation of 10-2 Visual Field Map Based on Macular Optical Coherence Tomography Angiography Measurements. Am J Ophthalmol 2024; 257:187-200. [PMID: 37734638 DOI: 10.1016/j.ajo.2023.09.014]
Abstract
PURPOSE To develop deep learning (DL) models estimating the central visual field (VF) from optical coherence tomography angiography (OCTA) vessel density (VD) measurements. DESIGN Development and validation of a deep learning model. METHODS A total of 1051 10-2 VF-OCTA pairs from healthy, glaucoma suspect, and glaucoma eyes were included. DL models were trained on en face macula VD images from OCTA to estimate 10-2 mean deviation (MD), pattern standard deviation (PSD), and 68 total deviation (TD) and pattern deviation (PD) values, and were compared with a linear regression (LR) model given the same input. Accuracy was evaluated by calculating the average mean absolute error (MAE) and the R2 (squared Pearson correlation coefficient) between the estimated and actual VF values. RESULTS The DL model achieved an R2 of 0.85 (95% confidence interval [CI], 0.74-0.92) and an MAE of 1.76 dB (95% CI, 1.39-2.17 dB) for 10-2 MD, significantly better than the linear estimates. The DL model also outperformed the LR model for the estimation of pointwise TD values, with an average MAE of 2.48 dB (95% CI, 1.99-3.02) and R2 of 0.69 (95% CI, 0.57-0.76) over all test points, and for the estimation of all sectors. CONCLUSIONS DL models enable the estimation of VF loss from OCTA images with high accuracy. Applying DL to OCTA images may enhance clinical decision making, improve individualized patient care, and aid risk stratification of patients at risk for central VF damage.
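The two accuracy metrics used above (MAE in dB and R2 as the squared Pearson correlation between estimated and actual VF values) can be computed as in this sketch; the function name and toy data are ours:

```python
import numpy as np

def evaluate_estimates(y_true, y_pred):
    """Accuracy metrics for estimated vs. actual VF values:
    mean absolute error (dB) and squared Pearson correlation (R^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    r = np.corrcoef(y_true, y_pred)[0, 1]  # Pearson correlation coefficient
    return mae, r ** 2

# toy example: actual vs. estimated 10-2 MD values (dB) for five eyes
true_md = [-2.0, -5.5, -0.5, -12.0, -8.0]
est_md = [-2.5, -5.0, -1.0, -10.5, -8.5]
mae, r2 = evaluate_estimates(true_md, est_md)
```

In the study these metrics are averaged over test eyes (and, for TD/PD, over the 68 test points); here they are computed once over a single toy vector.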
Affiliation(s)
- Golnoush Mahmoudinezhad: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Sasan Moghimi: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Jiacheng Cheng: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Liyang Ru: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Dongchen Yang: Department of Computer Science and Engineering, University of California San Diego, La Jolla, California
- Kushagra Agrawal: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Rajeev Dixit: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Kelvin H Du: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Kareem Latif: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Gopikasree Gunasegaran: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Eleonora Micheletti: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Takashi Nishida: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Alireza Kamalipour: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Evan Walker: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Mark Christopher: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Linda Zangwill: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
- Nuno Vasconcelos: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California
- Robert N Weinreb: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, UC San Diego, La Jolla, California
4
Kurysheva NI, Rodionova OY, Pomerantsev AL, Sharova GA. [Application of artificial intelligence in glaucoma. Part 1. Neural networks and deep learning in glaucoma screening and diagnosis]. Vestn Oftalmol 2024; 140:82-87. [PMID: 38962983 DOI: 10.17116/oftalma202414003182]
Abstract
This article reviews the literature on the use of artificial intelligence (AI) for the screening, diagnosis, monitoring, and treatment of glaucoma. The first part of the review describes how AI methods improve the effectiveness of glaucoma screening and presents technologies that use deep learning, including neural networks, to analyze the big data obtained by ocular imaging (fundus imaging, optical coherence tomography of the anterior and posterior eye segments, digital gonioscopy, ultrasound biomicroscopy, etc.), including multimodal approaches. The results reported in the reviewed literature are contradictory, indicating that improvement of AI models requires further research and a standardized approach. The use of neural networks for the timely detection of glaucoma based on multimodal imaging will reduce the risk of blindness associated with glaucoma.
Affiliation(s)
- N I Kurysheva: Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia; Ophthalmological Center of the Federal Medical-Biological Agency at the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia
- O Ye Rodionova: N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- A L Pomerantsev: N.N. Semenov Federal Research Center for Chemical Physics, Moscow, Russia
- G A Sharova: Medical Biological University of Innovations and Continuing Education of the Federal Biophysical Center named after A.I. Burnazyan, Moscow, Russia; OOO Glaznaya Klinika Doktora Belikovoy, Moscow, Russia
5
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. [PMID: 38102597 PMCID: PMC10725017 DOI: 10.1186/s12938-023-01187-8]
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems across many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand critical aspects of AI related to glaucoma and determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam: Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter: School of Computer Science, Taylor's University, Subang Jaya, Malaysia
- Fuad Ahmed: Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami: Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan: Department of Ophthalmology, Hamad Medical Corporation, Doha, Qatar
- Alaa Abd-Alrazaq: AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
6
Schmetterer L, Scholl H, Garhöfer G, Janeschitz-Kriegl L, Corvi F, Sadda SR, Medeiros FA. Endpoints for clinical trials in ophthalmology. Prog Retin Eye Res 2023; 97:101160. [PMID: 36599784 DOI: 10.1016/j.preteyeres.2022.101160]
Abstract
With the identification of novel targets, the number of interventional clinical trials in ophthalmology has increased. Visual acuity has long been considered the gold-standard endpoint for clinical trials, but in recent years it has become evident that other endpoints are required for many indications, including geographic atrophy and inherited retinal disease. In glaucoma, the currently available drugs were approved based on their IOP-lowering capacity. Some recent findings indicate, however, that at the same level of IOP reduction, not all drugs have the same effect on visual field progression. For neuroprotection trials in glaucoma, novel surrogate endpoints are required, which may include functional or structural parameters or a combination of both. A number of potential surrogate endpoints for ophthalmology clinical trials have been identified, but their validation is complicated and requires solid scientific evidence. In this article we summarize candidates for clinical endpoints in ophthalmology, with a focus on retinal disease and glaucoma. Functional and structural biomarkers, as well as quality-of-life measures, are discussed, and their potential to serve as endpoints in pivotal trials is critically evaluated.
Affiliation(s)
- Leopold Schmetterer: Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Hendrik Scholl: Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Gerhard Garhöfer: Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Lucas Janeschitz-Kriegl: Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Federico Corvi: Eye Clinic, Department of Biomedical and Clinical Sciences "Luigi Sacco", University of Milan, Italy
- SriniVas R Sadda: Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine at University of California, Los Angeles, CA, USA
- Felipe A Medeiros: Vision, Imaging and Performance Laboratory, Department of Ophthalmology, Duke Eye Center, Duke University, Durham, NC, USA
7
Kim D, Seo SB, Park SJ, Cho HK. Deep learning visual field global index prediction with optical coherence tomography parameters in glaucoma patients. Sci Rep 2023; 13:18304. [PMID: 37880259 PMCID: PMC10600216 DOI: 10.1038/s41598-023-43104-y]
Abstract
The aim of this study was to predict three visual field (VF) global indexes, mean deviation (MD), pattern standard deviation (PSD), and visual field index (VFI), from optical coherence tomography (OCT) parameters, including Bruch's membrane opening-minimum rim width (BMO-MRW) and retinal nerve fiber layer (RNFL) thickness, using a deep learning model. Subjects comprised 224 glaucoma-suspect (GS) eyes, 245 eyes with early normal-tension glaucoma (NTG), 58 eyes with moderate-stage NTG, 36 eyes with primary angle-closure glaucoma (PACG), 57 eyes with pseudoexfoliation glaucoma (PEXG), and 99 eyes with primary open-angle glaucoma (POAG). A deep neural network (DNN) algorithm was developed to predict the values of the VF global indexes MD, VFI, and PSD. Model performance was evaluated with the mean absolute error (MAE). The MAE range of the DNN model on cross-validation was 1.9-2.9 dB for MD, 1.6-2.0 dB for PSD, and 5.0-7.0% for VFI. Ranges of Pearson's correlation coefficients were 0.76-0.85, 0.74-0.82, and 0.70-0.81 for MD, PSD, and VFI, respectively. Our deep learning model may be useful in the management of glaucoma for diagnosis and follow-up, especially when immediate VF results are unavailable, because VF testing is time-consuming, requires dedicated space, and is subjective in nature.
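The overall recipe, regressing a scalar global index from a vector of OCT parameters with a small neural network, can be sketched as below. Everything here (synthetic data, network size, training schedule) is an illustrative assumption; only the input-to-output structure mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: predict one VF global index (e.g. MD, in dB) from a
# vector of OCT parameters (e.g. sectoral BMO-MRW and RNFL thicknesses).
n_eyes, n_feat = 200, 6
X = rng.normal(size=(n_eyes, n_feat))           # standardized OCT features
w_true = rng.normal(size=n_feat)
y = X @ w_true + 0.1 * rng.normal(size=n_eyes)  # synthetic global index

# One-hidden-layer regression network trained by full-batch gradient descent.
n_hidden, lr = 16, 0.1
W1 = rng.normal(scale=0.5, size=(n_feat, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden); b2 = 0.0
for _ in range(2000):
    z = np.tanh(X @ W1 + b1)                    # hidden activations
    pred = z @ W2 + b2
    err = pred - y                              # gradient of 0.5*MSE wrt pred
    gW2 = z.T @ err / n_eyes; gb2 = err.mean()
    dz = np.outer(err, W2) * (1.0 - z**2)       # backprop through tanh
    gW1 = X.T @ dz / n_eyes; gb1 = dz.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mae = np.mean(np.abs((np.tanh(X @ W1 + b1) @ W2 + b2) - y))
baseline = np.mean(np.abs(y - y.mean()))        # predict-the-mean baseline
```

The study reports MAE on held-out cross-validation folds; this sketch only checks that training reduces the error below the trivial predict-the-mean baseline on synthetic data.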
Affiliation(s)
- Dongbock Kim: Department of Mathematics Education, School of Education, Kyungnam University, 7 Kyugnamdaehak‑ro, Masanhappo‑gu, Changwon, Gyeongsangnam-do, 51767, Republic of Korea
- Sat Byul Seo: Department of Mathematics Education, School of Education, Kyungnam University, 7 Kyugnamdaehak‑ro, Masanhappo‑gu, Changwon, Gyeongsangnam-do, 51767, Republic of Korea
- Seong Joon Park: Department of Mathematics Education, School of Education, Kyungnam University, 7 Kyugnamdaehak‑ro, Masanhappo‑gu, Changwon, Gyeongsangnam-do, 51767, Republic of Korea
- Hyun-Kyung Cho: Department of Ophthalmology, Gyeongsang National University Changwon Hospital, School of Medicine, Gyeongsang National University, 11 Samjeongja-ro, Seongsan-gu, Changwon, Gyeongsangnam-do, 51472, Republic of Korea; Institute of Health Sciences, School of Medicine, Gyeongsang National University, Jinju, Republic of Korea
8
Singh R, Rauscher FG, Li Y, Eslami M, Kazeminasab S, Zebardast N, Wang M, Elze T. Normative Percentiles of Retinal Nerve Fiber Layer Thickness and Glaucomatous Visual Field Loss. Transl Vis Sci Technol 2023; 12:13. [PMID: 37844261 PMCID: PMC10584025 DOI: 10.1167/tvst.12.10.13]
Abstract
Purpose Circumpapillary retinal nerve fiber layer thickness (RNFLT) measurement aids in the clinical diagnosis of glaucoma. Spectral-domain optical coherence tomography (SD-OCT) machines measure RNFLT and provide normative color-coded plots. In this retrospective study, we investigate whether normative percentiles of RNFLT (pRNFLT) from Spectralis SD-OCT improve the prediction of glaucomatous visual field loss over raw RNFLT. Methods A longitudinal database containing OCT scans and visual fields from Massachusetts Eye & Ear glaucoma clinic patients was generated, and reliable OCT-visual field pairs were selected. Spectralis OCT normative distributions were extracted from machine printouts. Supervised machine learning models compared predictive performance between pRNFLT and raw RNFLT inputs. Regional structure-function associations were assessed with univariate regression to predict mean deviation (MD). Multivariable classification predicted MD, pattern standard deviation, MD change per year, and the glaucoma hemifield test. Results A total of 3016 OCT-visual field pairs met the reliability criteria. Spectralis norms were found to be independent of age, sex, and ocular magnification. Regional analysis showed a significant decrease in R2 for pRNFLT models compared with raw RNFLT models in inferotemporal sectors, across multiple regressors. In multivariable classification, pRNFLT models showed no significant improvement in the area under the receiver operating characteristic curve (ROC-AUC) over raw RNFLT models. Conclusions Our results challenge the assumption that normative percentiles from OCT machines improve the prediction of glaucomatous visual field loss. Raw RNFLT alone shows strong prediction, with no models improved by the manufacturer norms. This may result from insufficient patient stratification in the tested norms.
Translational Relevance Understanding correlation of normative databases to visual function may improve clinical interpretation of OCT data.
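The normative-percentile transformation being tested can be illustrated under a Gaussian assumption (the device's actual normative model is proprietary; the function name, parameters, and example values here are ours):

```python
from statistics import NormalDist

def rnflt_percentile(rnflt_um, norm_mean_um, norm_sd_um):
    """Place a raw sector RNFLT measurement (µm) on a normative percentile
    scale, assuming the reference distribution for that sector is Gaussian."""
    return NormalDist(mu=norm_mean_um, sigma=norm_sd_um).cdf(rnflt_um) * 100.0

# An eye at the normative mean sits at the 50th percentile; a thin sector
# two SDs below the mean falls near the 2.3rd percentile (flagged "red"
# on typical device printouts).
p_mean = rnflt_percentile(100.0, 100.0, 15.0)
p_thin = rnflt_percentile(70.0, 100.0, 15.0)
```

The study's question is whether feeding models percentiles like these (pRNFLT) predicts visual field loss better than feeding the raw thicknesses, and the reported answer is that it does not.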
Affiliation(s)
- Rishabh Singh: Boston University School of Medicine, Boston, MA, USA; Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Franziska G. Rauscher: Institute for Medical Informatics, Statistics, and Epidemiology (IMISE), Leipzig University, Leipzig, Germany; Leipzig Research Centre for Civilization Diseases (LIFE), Leipzig University, Leipzig, Germany
- Yangjiani Li: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Mohammad Eslami: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Saber Kazeminasab: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Nazlee Zebardast: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA; Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Mengyu Wang: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
- Tobias Elze: Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA
9
Montesano G, Lazaridis G, Ometto G, Crabb DP, Garway-Heath DF. Improving the Accuracy and Speed of Visual Field Testing in Glaucoma With Structural Information and Deep Learning. Transl Vis Sci Technol 2023; 12:10. [PMID: 37831447 PMCID: PMC10587851 DOI: 10.1167/tvst.12.10.10]
Abstract
Purpose To assess the performance of a perimetric strategy using structure-function predictions from a deep learning (DL) model. Methods Visual field test-retest data from 146 eyes (75 patients) with glaucoma, with (median [5th-95th percentile]) 10 [7, 10] tests per eye, were used. Structure-function predictions were generated with a previously described DL model using circumpapillary optical coherence tomography (OCT) scans. Structurally informed prior distributions were built by grouping the observed measured sensitivities for each predicted value and were recalculated for each subject with a leave-one-out approach. A zippy estimation by sequential testing (ZEST) strategy was used for the simulations (1000 per eye). Ground-truth sensitivities for each eye were the medians of the test-retest values. Two variations of ZEST were compared in terms of speed (average total number of presentations [NP] per eye) and accuracy (average mean absolute error [MAE] per eye), using either a combination of normal and abnormal thresholds (ZEST) or the calculated structural distributions (S-ZEST) as prior information. Two additional versions of these strategies employing spatial correlations were also tested. Results S-ZEST was significantly faster (mean average NP, 213.87; SD = 28.18) than ZEST (mean average NP, 255.65; SD = 50.27) (P < 0.001). The average MAE was smaller for S-ZEST (1.98; SD = 2.37) than for ZEST (2.43; SD = 2.69) (P < 0.001). Spatial correlations further improved both strategies (P < 0.001), but the differences between ZEST and S-ZEST remained significant (P < 0.001). Conclusions DL structure-function predictions can significantly improve perimetric tests. Translational Relevance DL structure-function predictions from clinically available OCT scans can improve perimetry in glaucoma patients.
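The core of ZEST is Bayesian threshold estimation: maintain a probability density over candidate thresholds, present each stimulus at the density's mean, and update after every seen/not-seen response. A structural prior simply replaces the starting pdf. This is a minimal single-location sketch with assumed psychometric parameters (slope, lapse rates), a fixed number of presentations rather than the study's stopping rule, and a flat prior standing in for the structural one:

```python
import numpy as np
from math import erf, sqrt

def p_seen(stim_db, thresh_db, slope=1.0, fp=0.05, fn=0.05):
    """Frequency-of-seeing curve: probability a stimulus at stim_db is seen
    by an eye with true threshold thresh_db (cumulative Gaussian + lapses)."""
    base = 0.5 * (1.0 + erf((thresh_db - stim_db) / (slope * sqrt(2.0))))
    return fp + (1.0 - fp - fn) * base

def zest_threshold(true_db, domain, prior, n_present=40, seed=0):
    """Minimal ZEST: keep a pdf over possible thresholds, present each
    stimulus at the pdf mean, and do a Bayesian update after each response."""
    rng = np.random.default_rng(seed)
    pdf = np.asarray(prior, dtype=float)
    pdf /= pdf.sum()
    for _ in range(n_present):
        stim = float(np.dot(domain, pdf))               # next stimulus level
        seen = rng.random() < p_seen(stim, true_db)     # simulated response
        like = np.array([p_seen(stim, t) for t in domain])
        pdf *= like if seen else (1.0 - like)           # Bayes update
        pdf /= pdf.sum()
    return float(np.dot(domain, pdf))                   # final estimate

domain = np.arange(0.0, 41.0)          # candidate thresholds, 0-40 dB
flat_prior = np.ones_like(domain)      # stand-in for a structural prior
est = zest_threshold(25.0, domain, flat_prior)
```

In the study, the prior is not flat: it is the distribution of measured sensitivities observed for eyes with the same DL-predicted value, which concentrates the pdf near the likely threshold and is what makes S-ZEST need fewer presentations.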
Affiliation(s)
- Giovanni Montesano
- City, University of London, Optometry and Visual Sciences, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - Georgios Lazaridis
- City, University of London, Optometry and Visual Sciences, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Centre for Medical Image Computing, University College London, London, UK
| | - Giovanni Ometto
- City, University of London, Optometry and Visual Sciences, London, UK
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
| | - David P. Crabb
- City, University of London, Optometry and Visual Sciences, London, UK
| | - David F. Garway-Heath
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
10
Chen Z, Shemuelian E, Wollstein G, Wang Y, Ishikawa H, Schuman JS. Segmentation-Free OCT-Volume-Based Deep Learning Model Improves Pointwise Visual Field Sensitivity Estimation. Transl Vis Sci Technol 2023; 12:28. [PMID: 37382575 PMCID: PMC10318595 DOI: 10.1167/tvst.12.6.28] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Accepted: 05/18/2023] [Indexed: 06/30/2023] Open
Abstract
Purpose The structural changes measured by optical coherence tomography (OCT) are related to functional changes in visual fields (VFs). This study aims to accurately assess the structure-function relationship and overcome the challenges brought by the minimal measurable level (floor effect) of segmentation-dependent OCT measurements commonly used in prior studies. Methods We developed a deep learning model to estimate the functional performance directly from three-dimensional (3D) OCT volumes and compared it to the model trained with segmentation-dependent two-dimensional (2D) OCT thickness maps. Moreover, we proposed a gradient loss to utilize the spatial information of VFs. Results Our 3D model was significantly better than the 2D model both globally and pointwise regarding both mean absolute error (MAE = 3.11 ± 3.54 vs. 3.47 ± 3.75 dB, P < 0.001) and Pearson's correlation coefficient (0.80 vs. 0.75, P < 0.001). On a subset of test data with floor effects, the 3D model showed less influence from floor effects than the 2D model (MAE = 5.24 ± 3.99 vs. 6.34 ± 4.58 dB, P < 0.001, and correlation 0.83 vs. 0.74, P < 0.001). The gradient loss improved the estimation error for low-sensitivity values. Furthermore, our 3D model outperformed all prior studies. Conclusions By providing a better quantitative model to encapsulate the structure-function relationship more accurately, our method may help derive VF test surrogates. Translational Relevance DL-based VF surrogates not only benefit patients by reducing the testing time of VFs but also allow clinicians to make clinical judgments without the inherent limitations of VFs.
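The gradient loss is described only at a high level in the abstract; one plausible reading, sketched here with numpy, adds a penalty on mismatched spatial first differences of the predicted and measured VF maps to the pointwise error. The exact formulation and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def gradient_loss(pred, target, lam=0.5):
    """Pointwise MAE plus a penalty on mismatched spatial gradients.

    pred, target: 2D arrays holding a VF sensitivity map in dB.
    The gradient term compares horizontal and vertical first differences,
    encouraging the prediction to reproduce the shape of the defect,
    not just its average depth.
    """
    mae = np.abs(pred - target).mean()
    gx = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    gy = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)).mean()
    return mae + lam * (gx + gy)

target = np.array([[30.0, 29.0, 10.0],
                   [31.0, 28.0,  9.0],
                   [30.0, 27.0,  8.0]])     # localized drop on one side
flat = np.full_like(target, target.mean())  # right mean, wrong shape
shifted = target + 1.0                      # wrong mean, right shape
```

Under this loss, a prediction with the correct spatial pattern but a constant offset (`shifted`) is penalized only by the pointwise term, while a spatially flat prediction with the correct mean (`flat`) pays the full gradient penalty.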
Affiliation(s)
- Zhiqi Chen
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Eitan Shemuelian
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Center for Neural Science, NYU College of Arts and Sciences, New York, NY, USA
- Yao Wang
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Hiroshi Ishikawa
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR, USA
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR, USA
- Joel S. Schuman
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Center for Neural Science, NYU College of Arts and Sciences, New York, NY, USA
- Wills Eye Hospital, Philadelphia, PA, USA
11
Lee GA, Kong GYX, Liu CH. Visual fields in glaucoma: Where are we now? Clin Exp Ophthalmol 2023; 51:162-169. [PMID: 36751125 DOI: 10.1111/ceo.14210] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/25/2023] [Accepted: 02/03/2023] [Indexed: 02/09/2023]
Abstract
Visual fields are an integral part of glaucoma diagnosis and management. COVID-19 heightened awareness of the potential for viral spread during testing, and the practice of visual fields has been modified as a result. Mask artefacts can occur due to fogging of the inferior rim of the trial lens. Fortunately, the risk of airborne transmission when field testing is low. The 24-2C may be useful to detect early disease and the 10-2 more sensitive to detect advanced loss. The SITA Faster test algorithm is able to reduce testing time, thereby improving clinic efficiency; however, it may show milder results for moderate or severe glaucoma. The technician has an important role in supervising visual field performance to achieve reliable output. Home monitoring can provide earlier detection of progression and thus improve monitoring of glaucoma as well as reduce the burden of in-clinic assessments. Artificial intelligence has been found to have high sensitivity and specificity compared with expert observers in detecting field abnormalities and progression, as well as in integrating structure with function. Although these advances will improve efficiency and guide accuracy, there will remain a need for clinicians to interpret the results and instigate management.
Affiliation(s)
- Graham A Lee
- City Eye Centre, Brisbane, Queensland, Australia
- University of Queensland, Herston, Queensland, Australia
- Department of Ophthalmology, Mater Hospital, Brisbane, Queensland, Australia
- George Y X Kong
- Glaucoma Investigation and Research Unit, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Parkville, Victoria, Australia
12
Christopher M, Hoseini P, Walker E, Proudfoot JA, Bowd C, Fazio MA, Girkin CA, De Moraes CG, Liebmann JM, Weinreb RN, Schwartzman A, Zangwill LM, Welsbie DS. A Deep Learning Approach to Improve Retinal Structural Predictions and Aid Glaucoma Neuroprotective Clinical Trial Design. Ophthalmol Glaucoma 2023; 6:147-159. [PMID: 36038107 DOI: 10.1016/j.ogla.2022.08.014] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 08/02/2022] [Accepted: 08/19/2022] [Indexed: 10/15/2022]
Abstract
PURPOSE To investigate the efficacy of a deep learning regression method to predict macula ganglion cell-inner plexiform layer (GCIPL) and optic nerve head (ONH) retinal nerve fiber layer (RNFL) thickness for use in glaucoma neuroprotection clinical trials. DESIGN Cross-sectional study. PARTICIPANTS Glaucoma patients with good quality macula and ONH scans enrolled in 2 longitudinal studies, the African Descent and Glaucoma Evaluation Study and the Diagnostic Innovations in Glaucoma Study. METHODS Spectralis macula posterior pole scans and ONH circle scans on 3327 pairs of GCIPL/RNFL scans from 1096 eyes (550 patients) were included. Participants were randomly distributed into a training and validation dataset (90%) and a test dataset (10%) by participant. Networks had access to GCIPL and RNFL data from one hemiretina of the probe eye and all data of the fellow eye. The models were then trained to predict the GCIPL or RNFL thickness of the remaining probe eye hemiretina. MAIN OUTCOME MEASURES Mean absolute error (MAE) and squared Pearson correlation coefficient (r2) were used to evaluate model performance. RESULTS The deep learning model was able to predict superior and inferior GCIPL thicknesses with a global r2 value of 0.90 and 0.86, r2 of mean of 0.90 and 0.86, and mean MAE of 3.72 μm and 4.2 μm, respectively. For superior and inferior RNFL thickness predictions, model performance was slightly lower, with a global r2 of 0.75 and 0.84, r2 of mean of 0.81 and 0.82, and MAE of 9.31 μm and 8.57 μm, respectively. There was only a modest decrease in model performance when predicting GCIPL and RNFL in more severe disease. Using individualized hemiretinal predictions to account for variability across patients, we estimate that a clinical trial can detect a difference equivalent to a 25% treatment effect over 24 months with an 11-fold reduction in the number of patients compared to a conventional trial. 
CONCLUSIONS Our deep learning models were able to accurately estimate both macula GCIPL and ONH RNFL hemiretinal thickness. Using an internal control based on these model predictions may help reduce clinical trial sample size requirements and facilitate investigation of new glaucoma neuroprotection therapies. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
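The reported 11-fold sample-size reduction follows from standard two-sample power arithmetic: the required n per arm scales with the outcome variance, so using each eye's model prediction as an internal control shrinks n in proportion to the residual variance. The SDs below are illustrative assumptions chosen to reproduce an approximately 11-fold ratio; they are not the study's values.

```python
def n_per_arm(sd, effect, z_alpha=1.96, z_beta=0.84):
    """Two-sample normal approximation: n = 2*(z_a + z_b)^2 * sd^2 / effect^2."""
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / effect ** 2

effect = 2.0    # detectable difference in thickness change, um (illustrative)
sd_raw = 9.0    # SD of raw 24-month change across eyes (illustrative)
sd_resid = 2.7  # SD of change relative to the hemiretinal prediction (illustrative)

n_conventional = n_per_arm(sd_raw, effect)
n_with_control = n_per_arm(sd_resid, effect)
fold_reduction = n_conventional / n_with_control
```

Because the effect size and the z terms cancel in the ratio, the fold reduction depends only on `(sd_raw / sd_resid) ** 2`, which is why tighter individualized predictions translate directly into smaller trials.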
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Pourya Hoseini
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Evan Walker
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Massimo A Fazio
- Callahan Eye Hospital, Heersink School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Christopher A Girkin
- Callahan Eye Hospital, Heersink School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Carlos Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Armin Schwartzman
- Division of Biostatistics, Herbert Wertheim School of Public Health, University of California, San Diego, California; Halıcıoğlu Data Science Institute, University of California, San Diego, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California
- Derek S Welsbie
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, California.
13
Kamalipour A, Moghimi S, Khosravi P, Jazayeri MS, Nishida T, Mahmoudinezhad G, Li EH, Christopher M, Liebmann JM, Fazio MA, Girkin CA, Zangwill L, Weinreb RN. Deep Learning Estimation of 10-2 Visual Field Map Based on Circumpapillary Retinal Nerve Fiber Layer Thickness Measurements. Am J Ophthalmol 2023; 246:163-173. [PMID: 36328198 DOI: 10.1016/j.ajo.2022.10.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 10/14/2022] [Accepted: 10/20/2022] [Indexed: 11/06/2022]
Abstract
PURPOSE To estimate central 10-degree visual field (VF) map from spectral-domain optical coherence tomography (SD-OCT) retinal nerve fiber layer thickness (RNFL) measurements in glaucoma with artificial intelligence. DESIGN Artificial intelligence (convolutional neural networks) study. METHODS This study included 5352 SD-OCT scans and 10-2 VF pairs from 1365 eyes of 724 healthy patients, patients with suspected glaucoma, and patients with glaucoma. Convolutional neural networks (CNNs) were developed to estimate the 68 individual sensitivity thresholds of 10-2 VF map using all-sectors (CNNA) and temporal-sectors (CNNT) RNFL thickness information of the SD-OCT circle scan (768 thickness points). 10-2 indices including pointwise total deviation (TD) values, mean deviation (MD), and pattern standard deviation (PSD) were generated using the CNN-estimated sensitivity thresholds at individual test locations. Linear regression (LR) models with the same input were used for comparison. RESULTS The CNNA model achieved an average pointwise mean absolute error of 4.04 dB (95% confidence interval [CI] 3.76-4.35) and correlation coefficient (r) of 0.59 (95% CI 0.52-0.64) over 10-2 map and the mean absolute error and r of 2.88 dB (95% CI 2.63-3.15) and 0.74 (95% CI 0.67-0.80) for MD, and 2.31 dB (95% CI 2.03-2.61) and 0.59 (95% CI 0.51-0.65) for PSD estimations, respectively, significantly outperforming the LRA model. CONCLUSIONS The proposed CNNA model improved the estimation of 10-2 VF map based on circumpapillary SD-OCT RNFL thickness measurements. These artificial intelligence methods using SD-OCT structural data show promise to individualize the frequency of central VF assessment in patients with glaucoma and would enable the reallocation of resources from patients at lowest risk to those at highest risk of central VF damage.
Affiliation(s)
- Alireza Kamalipour
- Sasan Moghimi
- Pooya Khosravi
- Mohammad Sadegh Jazayeri
- Takashi Nishida
- Golnoush Mahmoudinezhad
- Elizabeth H Li
- Mark Christopher
- Jeffrey M Liebmann
- Massimo A Fazio
- Christopher A Girkin
- Linda Zangwill
- Robert N Weinreb
- From the Hamilton Glaucoma Center (A.K., S.M., T.N., G.M., E.H.L., M.C., M.A.F., L.Z., R.N.W.), Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla; School of Medicine (P.K.), University of California Irvine, Irvine; Department of Civil, Construction, and Environmental Engineering (M.S.J.), San Diego State University, San Diego, California; Bernard and Shirlee Brown Glaucoma Research Laboratory (J.M.L.), Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Medical Center, New York, New York; and the Department of Ophthalmology and Vision Sciences (M.A.F., C.A.G.), Heersink School of Medicine, The University of Alabama at Birmingham, Birmingham, Alabama, USA
14
Chen D, Ran Ran A, Fang Tan T, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. [PMID: 36706335 DOI: 10.1097/apo.0000000000000596] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 12/06/2022] [Indexed: 01/28/2023] Open
Abstract
Diagnosis and detection of progression of glaucoma remains challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms are focused on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus in diagnostic criteria. Although research in the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York City, NY
- Genentech Inc, South San Francisco, CA
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
- Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
15
Wong D, Chua J, Bujor I, Chong RS, Nongpiur ME, Vithana EN, Husain R, Aung T, Popa‐Cherecheanu A, Schmetterer L. Comparison of machine learning approaches for structure-function modeling in glaucoma. Ann N Y Acad Sci 2022; 1515:237-248. [PMID: 35729796 PMCID: PMC10946805 DOI: 10.1111/nyas.14844] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
To evaluate machine learning (ML) approaches for structure-function modeling to estimate visual field (VF) loss in glaucoma, models from different ML approaches were trained on optical coherence tomography thickness measurements to estimate global VF mean deviation (VF MD) and focal VF loss from 24-2 standard automated perimetry. The models were compared using mean absolute errors (MAEs). Baseline MAEs were obtained by comparing the measured VF values with their means. Data of 832 eyes from 569 participants were included, with 537 Asian eyes for training, and 148 Asian and 111 Caucasian eyes set aside as the respective test sets. All ML models performed significantly better than baseline. Gradient-boosted trees (XGB) achieved the lowest MAE of 3.01 (95% CI: 2.57, 3.48) dB and 3.04 (95% CI: 2.59, 3.99) dB for VF MD estimation in the Asian and Caucasian test sets, although the difference between models was not significant. In focal VF estimation, XGB achieved median MAEs of 4.44 [IQR 3.45-5.17] dB and 3.87 [IQR 3.64-4.22] dB across the 24-2 VF for the Asian and Caucasian test sets and was comparable to VF estimates from support vector regression (SVR) models. VF estimates from both XGB and SVR were significantly better than those of the other models. These results show that XGB and SVR could potentially be used for both global and focal structure-function modeling in glaucoma.
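One reason tree ensembles such as XGB can outperform linear models here is that the structure-function relationship is nonlinear (e.g., floor and ceiling effects in RNFL thickness). The sketch below uses hand-rolled boosted regression stumps as a stand-in for XGB (not the authors' pipeline) against a linear fit on synthetic broken-stick data; the data-generating function and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, r):
    """Best single-split (depth-1) regression tree on residuals r."""
    best = (0.0, 0.0, r.mean(), np.inf)  # (threshold, left, right, sse)
    for thr in np.unique(x)[:-1]:
        m = x <= thr
        lv, rv = r[m].mean(), r[~m].mean()
        sse = ((r[m] - lv) ** 2).sum() + ((r[~m] - rv) ** 2).sum()
        if sse < best[3]:
            best = (thr, lv, rv, sse)
    return best[:3]

def boost(x, y, rounds=120, lr=0.1):
    """Gradient boosting with squared loss: fit stumps to residuals."""
    base = y.mean()
    pred = np.full_like(y, base)
    stumps = []
    for _ in range(rounds):
        thr, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= thr, lv, rv)
        stumps.append((thr, lv, rv))
    return base, stumps

def boost_predict(model, x, lr=0.1):
    base, stumps = model
    pred = np.full(x.shape, base)
    for thr, lv, rv in stumps:
        pred = pred + lr * np.where(x <= thr, lv, rv)
    return pred

# Synthetic broken-stick structure-function data: sensitivity saturates
# above ~75 um RNFL thickness and floors near 0 dB below ~50 um.
x = rng.uniform(40, 120, 300)
y_true = np.clip((x - 50) * 1.2, 0, 30)
y = y_true + rng.normal(0, 2, x.size)

model = boost(x, y)
a, b = np.polyfit(x, y, 1)  # linear structure-function baseline
mae_boost = np.abs(boost_predict(model, x) - y_true).mean()
mae_linear = np.abs(a * x + b - y_true).mean()
```

The linear fit cannot represent the flat floor and ceiling segments simultaneously, so its error against the noise-free curve stays large, while the stump ensemble approximates the kinked shape piece by piece.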
Affiliation(s)
- Damon Wong
- SERI‐NTU Advanced Ocular Engineering (STANCE), Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Jacqueline Chua
- SERI‐NTU Advanced Ocular Engineering (STANCE), Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Inna Bujor
- Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Rachel S. Chong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Eranga N. Vithana
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Rahat Husain
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Alina Popa‐Cherecheanu
- Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
- Department of Ophthalmology, Emergency University Hospital, Bucharest, Romania
- Leopold Schmetterer
- SERI‐NTU Advanced Ocular Engineering (STANCE), Singapore
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
16
Hemelings R, Elen B, Barbosa-Breda J, Bellon E, Blaschko MB, De Boever P, Stalmans I. Pointwise Visual Field Estimation From Optical Coherence Tomography in Glaucoma Using Deep Learning. Transl Vis Sci Technol 2022; 11:22. [PMID: 35998059 PMCID: PMC9424967 DOI: 10.1167/tvst.11.8.22] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose Standard automated perimetry is the gold standard to monitor visual field (VF) loss in glaucoma management, but it is prone to intrasubject variability. We trained and validated a customized deep learning (DL) regression model with Xception backbone that estimates pointwise and overall VF sensitivity from unsegmented optical coherence tomography (OCT) scans. Methods DL regression models have been trained with four imaging modalities (circumpapillary OCT at 3.5 mm, 4.1 mm, and 4.7 mm diameter, and scanning laser ophthalmoscopy en face images) to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard (SS) VF exam and a SPECTRALIS OCT. Results For MD estimation, weighted prediction averaging of all four individual models yielded a mean absolute error (MAE) of 2.89 dB (2.50-3.30) on 186 test images, reducing the baseline by 54% (MAEdecr%). For 52 VF threshold values' estimation, the weighted ensemble model resulted in an MAE of 4.82 dB (4.45-5.22), representing an MAEdecr% of 38% from baseline when predicting the pointwise mean value. DL managed to explain 75% and 58% of the variance (R2) in MD and pointwise sensitivity estimation, respectively. Conclusions Deep learning can estimate global and pointwise VF sensitivities that fall almost entirely within the 90% test-retest confidence intervals of the 24-2 SS test. Translational Relevance Fast and consistent VF prediction from unsegmented OCT scans could become a solution for visual function estimation in patients unable to perform reliable VF exams.
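Weighted prediction averaging across the four single-modality models can be sketched as inverse-validation-error weighting. The weighting scheme and the simulated error levels below are assumptions for illustration; the paper's exact weights are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_average(preds, val_mae):
    """Combine model predictions with weights proportional to 1/MAE."""
    w = 1.0 / np.asarray(val_mae)
    w = w / w.sum()
    return w @ preds, w

# Four simulated "modalities" estimating the same MD values with
# different error levels (noise SDs are illustrative).
truth = rng.uniform(-20, 2, 500)                   # MD values, dB
sigmas = np.array([1.0, 1.5, 2.5, 4.0])
preds = truth + rng.normal(0, 1, (4, 500)) * sigmas[:, None]
val_mae = np.abs(preds - truth).mean(axis=1)       # per-model validation error

ens, w = weighted_average(preds, val_mae)
mae_weighted = np.abs(ens - truth).mean()
mae_uniform = np.abs(preds.mean(axis=0) - truth).mean()
```

Downweighting the noisier models gives the weighted ensemble a lower error than a plain average, which is the rationale for weighting by validation performance.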
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium
- Bart Elen
- Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Cardiovascular R&D Center - UnIC@RISE, Department of Surgery and Physiology, Faculty of Medicine of the University of Porto, Porto, Portugal; Department of Ophthalmology, Centro Hospitalar e Universitário São João, Porto, Portugal
- Erwin Bellon
- Department of Information Technology, University Hospitals Leuven, Leuven, Belgium
- Patrick De Boever
- Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium; Center for Environmental Sciences, Faculty of Industrial Engineering, Hasselt University, Diepenbeek, Belgium; Department of Biology, University of Antwerp, Wilrijk, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium; Ophthalmology Department, UZ Leuven, Leuven, Belgium
17
Lazaridis G, Montesano G, Afgeh SS, Mohamed-Noriega J, Ourselin S, Lorenzi M, Garway-Heath DF. Predicting Visual Fields From Optical Coherence Tomography via an Ensemble of Deep Representation Learners. Am J Ophthalmol 2022; 238:52-65. [PMID: 34998718 DOI: 10.1016/j.ajo.2021.12.020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 12/23/2021] [Accepted: 12/27/2021] [Indexed: 02/04/2023]
Abstract
PURPOSE To develop and validate a deep learning method of predicting visual function from spectral domain optical coherence tomography (SD-OCT)-derived retinal nerve fiber layer thickness (RNFLT) measurements and corresponding SD-OCT images. DESIGN Development and evaluation of diagnostic technology. METHODS Two deep learning ensemble models to predict pointwise VF sensitivity from SD-OCT images (model 1: RNFLT profile only; model 2: RNFLT profile plus SD-OCT image) and 2 reference models were developed. All models were tested in an independent test-retest data set comprising 2181 SD-OCT/VF pairs; the median of ∼10 VFs per eye was taken as the best available estimate (BAE) of the true VF. The performance of single VFs predicting the BAE VF was also evaluated. The training data set comprised 954 eyes of 220 healthy and 332 glaucomatous participants, and the test data set, 144 eyes of 72 glaucomatous participants. The main outcome measures included the pointwise prediction mean error (ME), mean absolute error (MAE), and correlation of predictions with the BAE VF sensitivity. RESULTS The median mean deviation was -4.17 dB (-14.22 to 0.88). Model 2 had excellent accuracy (ME 0.5 dB, SD 0.8) and overall performance (MAE 2.3 dB, SD 3.1), and significantly (paired t test) outperformed the other methods. For single VFs predicting the BAE VF, the pointwise MAE was 1.5 dB (SD 0.7). The association between SD-OCT and single VF predictions of the BAE pointwise VF sensitivities was R2 = 0.78 and R2 = 0.88, respectively. CONCLUSIONS Our method outperformed standard statistical and deep learning approaches. Predictions of BAEs from OCT images approached the accuracy of single real VF estimates of the BAE.
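The test-retest evaluation described here takes the pointwise median of roughly 10 repeated VFs as the best available estimate (BAE) of the true field, then scores a prediction by its mean error (ME) and mean absolute error (MAE) against that BAE. A minimal sketch, with invented dB values:

```python
from statistics import mean, median

def best_available_estimate(repeats):
    """Pointwise median across repeated VF exams of one eye (the paper's BAE)."""
    return [median(vals) for vals in zip(*repeats)]

def mean_error(pred, ref):
    """Signed pointwise mean error (bias), in dB."""
    return mean(p - r for p, r in zip(pred, ref))

def mean_abs_error(pred, ref):
    """Pointwise mean absolute error, in dB."""
    return mean(abs(p - r) for p, r in zip(pred, ref))

# Three repeated (hypothetical) VF exams of one eye, 3 test points each
repeats = [
    [30.0, 12.0, 25.0],
    [28.0, 10.0, 27.0],
    [29.0, 14.0, 26.0],
]
bae = best_available_estimate(repeats)  # [29.0, 12.0, 26.0]
pred = [28.0, 15.0, 26.0]               # a hypothetical OCT-based prediction
me = mean_error(pred, bae)
mae = mean_abs_error(pred, bae)
```

The same scoring applies when a single real VF, rather than a model output, plays the role of the prediction, which is how the paper compares OCT predictions with single-VF estimates of the BAE.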
Affiliation(s)
- Georgios Lazaridis
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom
- Giovanni Montesano
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Optometry and Visual Sciences, City, University of London, London, United Kingdom
- Jibran Mohamed-Noriega
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom; Departamento de Oftalmología, Hospital Universitario, UANL, México
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Marco Lorenzi
- Université Côte d'Azur, Inria Sophia Antipolis, Epione Research Project, Valbonne, France
- David F Garway-Heath
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
18
Tong J, Alonso-Caneiro D, Kalloniatis M, Zangerl B. Prediction of visual field defects from macular optical coherence tomography in glaucoma using cluster analysis. Ophthalmic Physiol Opt 2022; 42:948-964. [PMID: 35598146 PMCID: PMC9544890 DOI: 10.1111/opo.12997] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 04/19/2022] [Accepted: 04/20/2022] [Indexed: 11/30/2022]
Abstract
Purpose To assess the accuracy of cluster analysis-based models in predicting visual field (VF) defects from macular ganglion cell-inner plexiform layer (GCIPL) measurements in glaucomatous and healthy cohorts. Methods GCIPL measurements were extracted from posterior pole optical coherence tomography (OCT), from locations corresponding to central VF test grids. Models incorporating cluster analysis methods and corrections for age and fovea-to-optic-disc tilt were developed from 493 healthy participants, and 5th and 1st percentile limits of GCIPL thickness were derived. These limits were compared with pointwise 5th and 1st percentile limits by calculating sensitivities and specificities in an additional 40 normal and 37 glaucomatous participants, as well as applying receiver operating characteristic (ROC) curve analyses to assess the accuracy of predicting VF results from co-localised GCIPL measurements. Results Clustered models demonstrated globally low sensitivity but high specificity in the glaucoma cohort (0.28-0.53 and 0.77-0.91, respectively), and high specificity in the healthy cohort (0.91-0.98). Clustered models showed similar sensitivities and superior specificities compared with pointwise methods (0.41-0.65 and 0.71-0.98, respectively). There were significant differences in accuracy between clusters, with relatively poor accuracy at peripheral macular locations (p < 0.0001 for all comparisons). Conclusions Cluster analysis-based models incorporating age correction and holistic consideration of fovea-to-optic-disc tilt demonstrated performance superior to pointwise methods in predicting VF results in both glaucomatous and healthy eyes. However, relatively low sensitivity and poorer performance at the peripheral macula indicate that OCT in isolation may be insufficient to predict visual function across the macula accurately. With modifications to criteria for abnormality, the concepts suggested by the described normative models may guide prioritisation of VF assessment requirements, with the potential to limit excessive VF testing.
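Scoring a normative cutoff against ground truth, as in the sensitivity/specificity analysis above, reduces to counting flagged and unflagged locations. A minimal sketch; the 70-micron limit and all thickness values are hypothetical, not the study's derived percentile limits.

```python
def flag_abnormal(thicknesses, limit):
    """Flag a VF-colocalised location when GCIPL thickness falls below a normative limit."""
    return [t < limit for t in thicknesses]

def sensitivity(flags, truth):
    """Fraction of truly defective locations that were flagged."""
    tp = sum(1 for f, t in zip(flags, truth) if f and t)
    return tp / sum(truth)

def specificity(flags, truth):
    """Fraction of truly normal locations left unflagged."""
    tn = sum(1 for f, t in zip(flags, truth) if not f and not t)
    return tn / sum(1 for t in truth if not t)

# Hypothetical GCIPL thicknesses (um) and VF-defect ground truth at 8 locations,
# with a hypothetical 5th-percentile limit of 70 um
thick = [62, 75, 68, 80, 66, 69, 78, 71]
truth = [True, False, True, False, True, False, False, True]
flags = flag_abnormal(thick, 70)
sens = sensitivity(flags, truth)  # 0.75
spec = specificity(flags, truth)  # 0.75
```

Tightening the limit (e.g. a 1st-percentile cutoff) trades sensitivity for specificity, which is the trade-off the clustered and pointwise methods above are being compared on.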
Affiliation(s)
- Janelle Tong
- Centre for Eye Health, University of New South Wales (UNSW), Sydney, New South Wales, Australia; School of Optometry and Vision Science, University of New South Wales (UNSW), Sydney, New South Wales, Australia
- David Alonso-Caneiro
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology, Kelvin Grove, Queensland, Australia
- Michael Kalloniatis
- Centre for Eye Health, University of New South Wales (UNSW), Sydney, New South Wales, Australia; School of Optometry and Vision Science, University of New South Wales (UNSW), Sydney, New South Wales, Australia
- Barbara Zangerl
- School of Optometry and Vision Science, University of New South Wales (UNSW), Sydney, New South Wales, Australia; Coronary Care Unit, Royal Prince Alfred Hospital, Sydney, New South Wales, Australia
19
Leong YY, Vasseneix C, Finkelstein MT, Milea D, Najjar RP. Artificial Intelligence Meets Neuro-Ophthalmology. Asia Pac J Ophthalmol (Phila) 2022; 11:111-125. [PMID: 35533331 DOI: 10.1097/apo.0000000000000512] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
ABSTRACT Recent advances in artificial intelligence have provided ophthalmologists with fast, accurate, and automated means for diagnosing and treating ocular conditions, paving the way to a modern and scalable eye care system. Compared to other ophthalmic disciplines, neuro-ophthalmology has, until recently, not benefitted from significant advances in the area of artificial intelligence. In this narrative review, we summarize and discuss recent advancements utilizing artificial intelligence for the detection of structural and functional optic nerve head abnormalities, and ocular movement disorders in neuro-ophthalmology.
Affiliation(s)
- Caroline Vasseneix
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Dan Milea
- Singapore National Eye Center, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Raymond P Najjar
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
20
Mortensen PW, Wong TY, Milea D, Lee AG. The Eye Is a Window to Systemic and Neuro-Ophthalmic Diseases. Asia Pac J Ophthalmol (Phila) 2022; 11:91-93. [PMID: 35533329 DOI: 10.1097/apo.0000000000000531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Affiliation(s)
- Peter W Mortensen
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, US
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Duke-NUS Medical School, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Duke-NUS Medical School, Singapore
- Copenhagen University, Denmark
- Andrew G Lee
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, US
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, US
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, Texas, US
- University of Texas MD Anderson Cancer Center, Houston, Texas, US
- Texas A and M College of Medicine, Bryan, Texas, US
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, Iowa, US
21
Kihara Y, Montesano G, Chen A, Amerasinghe N, Dimitriou C, Jacob A, Chabi A, Crabb DP, Lee AY. Policy-Driven, Multimodal Deep Learning for Predicting Visual Fields from the Optic Disc and Optical Coherence Tomography Imaging. Ophthalmology 2022; 129:781-791. [PMID: 35202616 DOI: 10.1016/j.ophtha.2022.02.017] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 01/28/2022] [Accepted: 02/15/2022] [Indexed: 12/17/2022] Open
Abstract
PURPOSE To develop and validate a deep learning (DL) system for predicting each point on visual fields (VF) from disc and optical coherence tomography (OCT) imaging and derive a structure-function mapping. DESIGN Retrospective, cross-sectional database study. PARTICIPANTS 6437 patients undergoing routine care for glaucoma in three clinical sites in the UK. METHODS OCT and infrared reflectance (IR) optic disc imaging were paired with the closest VF within 7 days. EfficientNet-B2 was used to train two single-modality DL models to predict each of the 52 sensitivity points on the 24-2 VF pattern. A policy DL model was designed and trained to fuse the two models' predictions. MAIN OUTCOME MEASURES Pointwise mean absolute error (PMAE). RESULTS A total of 5078 imaging-to-VF pairs were used as a held-out test set to measure the final performance. The improvement in PMAE with the policy model was 0.485 [0.438, 0.533] dB compared to the IR image of the disc alone and 0.060 [0.047, 0.073] dB compared to the OCT alone. The improvement with the policy fusion model was statistically significant (p < 0.0001). Occlusion masking showed that the DL models learned the correct structure-function mapping in a data-driven, feature-agnostic fashion. CONCLUSIONS The multimodal policy DL model performed best; it provided explainable maps of its confidence in fusing data from the single modalities and offers a pathway for probing the structure-function relationship in glaucoma.
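A policy model that fuses two single-modality predictions can be sketched as a per-point convex combination scored by PMAE. This is a deliberate simplification (the paper's policy network learns its fusion from data); all weights and dB values below are invented for illustration.

```python
from statistics import mean

def fuse(pred_a, pred_b, weights):
    """Per-point convex combination of two single-modality predictions.
    In the paper the weights come from a learned policy network; here
    they are supplied directly for illustration."""
    return [w * a + (1.0 - w) * b for a, b, w in zip(pred_a, pred_b, weights)]

def pmae(pred, target):
    """Pointwise mean absolute error, in dB."""
    return mean(abs(p - t) for p, t in zip(pred, target))

target = [30.0, 20.0, 5.0]    # true 24-2 sensitivities (invented)
pred_ir = [28.0, 24.0, 9.0]   # disc IR photo model (invented)
pred_oct = [31.0, 19.0, 6.0]  # OCT model (invented)
weights = [0.2, 0.1, 0.3]     # policy leans toward the OCT model here

fused = fuse(pred_ir, pred_oct, weights)
```

With these toy numbers the fused prediction has lower PMAE than either single modality, mirroring the direction of the paper's result; the per-point weights are also directly interpretable as the model's confidence in each modality.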
Affiliation(s)
- Yuka Kihara
- University of Washington, Department of Ophthalmology, Seattle, Washington
- Giovanni Montesano
- City, University of London, Optometry and Visual Sciences, London, United Kingdom; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, UCL Institute of Ophthalmology, London, UK
- Andrew Chen
- University of Washington, Department of Ophthalmology, Seattle, Washington
- Nishani Amerasinghe
- University Hospital Southampton NHS Foundation Trust, Southampton, United Kingdom
- Chrysostomos Dimitriou
- Colchester Hospital, East Suffolk and North Essex NHS Foundation Trust, Colchester, United Kingdom
- Aby Jacob
- University Hospital Southampton NHS Foundation Trust, Southampton, United Kingdom
- David P Crabb
- City, University of London, Optometry and Visual Sciences, London, United Kingdom
- Aaron Y Lee
- University of Washington, Department of Ophthalmology, Seattle, Washington
22
Shamsi F, Liu R, Owsley C, Kwon M. Identifying the Retinal Layers Linked to Human Contrast Sensitivity Via Deep Learning. Invest Ophthalmol Vis Sci 2022; 63:27. [PMID: 35179554 PMCID: PMC8859491 DOI: 10.1167/iovs.63.2.27] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 01/31/2022] [Indexed: 12/18/2022] Open
Abstract
Purpose Luminance contrast is the fundamental building block of human spatial vision. Therefore contrast sensitivity, the reciprocal of contrast threshold required for target detection, has been a barometer of human visual function. Although retinal ganglion cells (RGCs) are known to be involved in contrast coding, it still remains unknown whether the retinal layers containing RGCs are linked to a person's contrast sensitivity (e.g., Pelli-Robson contrast sensitivity) and, if so, to what extent the retinal layers are related to behavioral contrast sensitivity. Thus the current study aims to identify the retinal layers and features critical for predicting a person's contrast sensitivity via deep learning. Methods Data were collected from 225 subjects including individuals with either glaucoma, age-related macular degeneration, or normal vision. A deep convolutional neural network trained to predict a person's Pelli-Robson contrast sensitivity from structural retinal images measured with optical coherence tomography was used. Then, activation maps that represent the critical features learned by the network for the output prediction were computed. Results The thickness of both ganglion cell and inner plexiform layers, reflecting RGC counts, were found to be significantly correlated with contrast sensitivity (r = 0.26 ∼ 0.58, Ps < 0.001 for different eccentricities). Importantly, the results showed that retinal layers containing RGCs were the critical features the network uses to predict a person's contrast sensitivity (an average R2 = 0.36 ± 0.10). Conclusions The findings confirmed the structure and function relationship for contrast sensitivity while highlighting the role of RGC density for human contrast sensitivity.
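The reported r and R2 between retinal layer thickness and Pelli-Robson contrast sensitivity are ordinary Pearson statistics. A minimal sketch; the thickness and log CS values are hypothetical, not the study's data.

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical GCL+IPL thicknesses (um) vs. Pelli-Robson log contrast sensitivity
thickness = [60.0, 70.0, 80.0, 90.0]
log_cs = [1.20, 1.35, 1.50, 1.65]
r = pearson_r(thickness, log_cs)
r2 = r * r
```

The toy data are perfectly linear, so r is 1; the study's correlations (r = 0.26 to 0.58) correspond to R2 values well below the network's average of 0.36, which is why the activation-map analysis, not correlation alone, is needed to attribute the prediction to specific layers.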
Affiliation(s)
- Foroogh Shamsi
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Rong Liu
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- Department of Life Science and Medicine, University of Science and Technology of China, Hefei, China
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- MiYoung Kwon
- Department of Psychology, Northeastern University, Boston, Massachusetts, United States
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
23
Tan Z, Zhu Z, He Z, He M. Artificial Intelligence in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
24
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
25
Shigueoka LS, Jammal AA, Medeiros FA, Costa VP. Artificial Intelligence in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
26
Schuman JS, Angeles Ramos Cadena MDL, McGee R, Al-Aswad LA, Medeiros FA. A Case for The Use of Artificial Intelligence in Glaucoma Assessment. Ophthalmol Glaucoma 2021; 5:e3-e13. [PMID: 34954220 PMCID: PMC9133028 DOI: 10.1016/j.ogla.2021.12.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 12/15/2021] [Accepted: 12/16/2021] [Indexed: 12/23/2022]
Abstract
We hypothesize that artificial intelligence applied to relevant clinical testing in glaucoma has the potential to enhance the ability to detect glaucoma. This premise was discussed at the recent Collaborative Community for Ophthalmic Imaging meeting, "The Future of Artificial Intelligence-Enabled Ophthalmic Image Interpretation: Accelerating Innovation and Implementation Pathways," held virtually September 3-4, 2020. The Collaborative Community in Ophthalmic Imaging (CCOI) is an independent self-governing consortium of stakeholders with broad international representation from academic institutions, government agencies, and the private sector whose mission is to act as a forum for the purpose of helping speed innovation in healthcare technology. It was one of the first two such organizations officially designated by the FDA in September 2019 in response to their announcement of the collaborative community program as a strategic priority for 2018-2020. Further information on the CCOI can be found online at their website (https://www.cc-oi.org/about). Artificial intelligence for glaucoma diagnosis would have high utility globally, as access to care is limited in many parts of the world and half of all people with glaucoma are unaware of their illness. The application of artificial intelligence technology to glaucoma diagnosis has the potential to broadly increase access to care worldwide, in essence flattening the Earth by providing expert level evaluation to individuals even in the most remote regions of the planet.
Affiliation(s)
- Joel S Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA; Departments of Biomedical Engineering and Electrical and Computer Engineering, New York University Tandon School of Engineering, Brooklyn, NY, USA; Center for Neural Science, NYU, New York, NY, USA; Neuroscience Institute, NYU Langone Health, New York, NY, USA
- Rebecca McGee
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Lama A Al-Aswad
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA; Department of Population Health, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Felipe A Medeiros
- Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, USA
27
Christopher M, Bowd C, Proudfoot JA, Belghith A, Goldbaum MH, Rezapour J, Fazio MA, Girkin CA, De Moraes G, Liebmann JM, Weinreb RN, Zangwill LM. Deep Learning Estimation of 10-2 and 24-2 Visual Field Metrics Based on Thickness Maps from Macula OCT. Ophthalmology 2021; 128:1534-1548. [PMID: 33901527 DOI: 10.1016/j.ophtha.2021.04.022] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 03/16/2021] [Accepted: 04/19/2021] [Indexed: 01/27/2023] Open
Abstract
PURPOSE To develop deep learning (DL) systems estimating visual function from macula-centered spectral-domain (SD) OCT images. DESIGN Evaluation of a diagnostic technology. PARTICIPANTS A total of 2408 10-2 visual field (VF) SD OCT pairs and 2999 24-2 VF SD OCT pairs collected from 645 healthy and glaucoma subjects (1222 eyes). METHODS Deep learning models were trained on thickness maps from Spectralis macula SD OCT to estimate 10-2 and 24-2 VF mean deviation (MD) and pattern standard deviation (PSD). Individual and combined DL models were trained using thickness data from 6 layers (retinal nerve fiber layer [RNFL], ganglion cell layer [GCL], inner plexiform layer [IPL], ganglion cell-IPL [GCIPL], ganglion cell complex [GCC], and retina). Linear regression on mean layer thicknesses was used for comparison. MAIN OUTCOME MEASURES Deep learning models were evaluated using R2 and mean absolute error (MAE) compared with 10-2 and 24-2 VF measurements. RESULTS Combined DL models estimating 10-2 achieved R2 of 0.82 (95% confidence interval [CI], 0.68-0.89) for MD and 0.69 (95% CI, 0.55-0.81) for PSD and MAEs of 1.9 dB (95% CI, 1.6-2.4 dB) for MD and 1.5 dB (95% CI, 1.2-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 10-2 MD (0.61 [95% CI, 0.47-0.71] and 3.0 dB [95% CI, 2.5-3.5 dB]) and 10-2 PSD (0.46 [95% CI, 0.31-0.60] and 2.3 dB [95% CI, 1.8-2.7 dB]). Combined DL models estimating 24-2 achieved R2 of 0.79 (95% CI, 0.72-0.84) for MD and 0.68 (95% CI, 0.53-0.79) for PSD and MAEs of 2.1 dB (95% CI, 1.8-2.5 dB) for MD and 1.5 dB (95% CI, 1.3-1.9 dB) for PSD. This was significantly better than mean thickness estimates for 24-2 MD (0.41 [95% CI, 0.26-0.57] and 3.4 dB [95% CI, 2.7-4.5 dB]) and 24-2 PSD (0.38 [95% CI, 0.20-0.57] and 2.4 dB [95% CI, 2.0-2.8 dB]). The GCIPL (R2 = 0.79) and GCC (R2 = 0.75) had the highest performance estimating 10-2 and 24-2 MD, respectively. CONCLUSIONS Deep learning models improved estimates of functional loss from SD OCT imaging. Accurate estimates can help clinicians to individualize VF testing to patients.
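The R2 used above to compare the DL models with mean-thickness regression is the ordinary coefficient of determination. A minimal sketch, with invented MD values rather than study data:

```python
from statistics import mean

def r_squared(preds, targets):
    """Coefficient of determination of predictions against observed values."""
    mt = mean(targets)
    ss_res = sum((t - p) ** 2 for p, t in zip(preds, targets))  # residual sum of squares
    ss_tot = sum((t - mt) ** 2 for t in targets)                # total sum of squares
    return 1.0 - ss_res / ss_tot

# Invented 24-2 MD values (dB) for four eyes and a model's estimates
targets = [-1.0, -4.0, -10.0, -17.0]
preds = [-2.0, -3.0, -11.0, -16.0]
r2 = r_squared(preds, targets)
```

An R2 of 0.79 versus 0.41, as reported for 24-2 MD, means the DL model's residual error is roughly a third of the mean-thickness baseline's relative to the variance of the observed MDs.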
Affiliation(s)
- Mark Christopher
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Christopher Bowd
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- James A Proudfoot
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Akram Belghith
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Michael H Goldbaum
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Jasmin Rezapour
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California; Department of Ophthalmology, University Medical Center Mainz, Mainz, Germany
- Massimo A Fazio
- School of Medicine, University of Alabama-Birmingham, Birmingham, Alabama
- Gustavo De Moraes
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Jeffrey M Liebmann
- Bernard and Shirlee Brown Glaucoma Research Laboratory, Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Medical Center, New York, New York
- Robert N Weinreb
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
- Linda M Zangwill
- Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California, San Diego, La Jolla, California
28
Datta S, Mariottoni EB, Dov D, Jammal AA, Carin L, Medeiros FA. RetiNerveNet: using recursive deep learning to estimate pointwise 24-2 visual field data based on retinal structure. Sci Rep 2021; 11:12562. [PMID: 34131181 PMCID: PMC8206091 DOI: 10.1038/s41598-021-91493-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Accepted: 05/27/2021] [Indexed: 11/09/2022] Open
Abstract
Glaucoma is the leading cause of irreversible blindness in the world, affecting over 70 million people. The cumbersome Standard Automated Perimetry (SAP) test is most frequently used to detect visual loss due to glaucoma. Due to the SAP test’s innate difficulty and its high test-retest variability, we propose the RetiNerveNet, a deep convolutional recursive neural network for obtaining estimates of the SAP visual field. RetiNerveNet uses information from the more objective Spectral-Domain Optical Coherence Tomography (SDOCT). RetiNerveNet attempts to trace-back the arcuate convergence of the retinal nerve fibers, starting from the Retinal Nerve Fiber Layer (RNFL) thickness around the optic disc, to estimate individual age-corrected 24-2 SAP values. Recursive passes through the proposed network sequentially yield estimates of the visual locations progressively farther from the optic disc. While all the methods used for our experiments exhibit lower performance for the advanced disease group (possibly due to the “floor effect” for the SDOCT test), the proposed network is observed to be more accurate than all the baselines for estimating the individual visual field values. We further augment the proposed network to additionally predict the SAP Mean Deviation values and also facilitate the assignment of higher weightage to the underrepresented groups in the data. We then study the resulting performance trade-offs of the RetiNerveNet on the early, moderate and severe disease groups.
Affiliation(s)
- Shounak Datta
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, 27708, USA
- Eduardo B Mariottoni
- Vision, Imaging and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, NC, 27705, USA
- David Dov
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, 27708, USA
- Alessandro A Jammal
- Vision, Imaging and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, NC, 27705, USA
- Lawrence Carin
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, 27708, USA
- Felipe A Medeiros
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC, 27708, USA; Vision, Imaging and Performance (VIP) Laboratory, Duke Eye Center, Duke University, Durham, NC, 27705, USA
29
Diener R, Treder M, Eter N. [Diagnostics of diseases of the optic nerve head in times of artificial intelligence and big data]. Ophthalmologe 2021; 118:893-899. [PMID: 33890129 PMCID: PMC8062109 DOI: 10.1007/s00347-021-01385-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/25/2021] [Indexed: 11/19/2022]
Abstract
Background The use of artificial intelligence (AI) is of particular interest for automated image segmentation, analysis, and classification, and has already been described for various areas of ophthalmology. Objective This article gives an overview of current approaches and advances in the application of big data and AI to various diseases of the optic nerve head. Material and Methods A PubMed search was performed for studies that answered clinical questions using big-data approaches or that applied classical machine-learning methods to the analysis of multimodal imaging of the optic nerve head. Results Big data can help answer clinical questions in widespread diseases such as glaucoma. AI is applied both to the segmentation of multimodal imaging of the optic nerve head and to the classification of diseases such as glaucoma or papilledema from these image data. Conclusion With the help of big data and AI, relationships can be recognized more readily, and the diagnosis and follow-up assessment of optic nerve head diseases can be simplified or automated. A prerequisite for clinical application is CE marking as a medical device in Europe and approval by the Food and Drug Administration in the USA.
Affiliation(s)
- R Diener
- Klinik für Augenheilkunde, Universitätsklinikum Münster, Domagkstr. 15, 48149, Münster, Germany
- M Treder
- Klinik für Augenheilkunde, Universitätsklinikum Münster, Domagkstr. 15, 48149, Münster, Germany
- N Eter
- Klinik für Augenheilkunde, Universitätsklinikum Münster, Domagkstr. 15, 48149, Münster, Germany
30
Sekimitsu S, Zebardast N. Glaucoma and Machine Learning: A Call for Increased Diversity in Data. Ophthalmol Glaucoma 2021; 4:339-342. [PMID: 33879422 DOI: 10.1016/j.ogla.2021.03.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 02/25/2021] [Accepted: 03/01/2021] [Indexed: 02/07/2023]
31
Manco L, Maffei N, Strolin S, Vichi S, Bottazzi L, Strigari L. Basic of machine learning and deep learning in imaging for medical physicists. Phys Med 2021; 83:194-205. [DOI: 10.1016/j.ejmp.2021.03.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 03/07/2021] [Accepted: 03/16/2021] [Indexed: 02/08/2023] Open
32
Artificial intelligence and complex statistical modeling in glaucoma diagnosis and management. Curr Opin Ophthalmol 2021; 32:105-117. [PMID: 33395111 DOI: 10.1097/icu.0000000000000741] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
PURPOSE OF REVIEW The field of artificial intelligence has grown exponentially in recent years with new technology, methods, and applications emerging at a rapid rate. Many of these advancements have been used to improve the diagnosis and management of glaucoma. We aim to provide an overview of recent publications regarding the use of artificial intelligence to enhance the detection and treatment of glaucoma. RECENT FINDINGS Machine learning classifiers and deep learning algorithms have been developed to autonomously detect early structural and functional changes of glaucoma using different imaging and testing modalities such as fundus photography, optical coherence tomography, and standard automated perimetry. Artificial intelligence has also been used to further delineate structure-function correlation in glaucoma. Additional 'structure-structure' predictions have been successfully estimated. Other machine learning techniques utilizing complex statistical modeling have been used to detect glaucoma progression, as well as to predict future progression. Although not yet approved for clinical use, these artificial intelligence techniques have the potential to significantly improve glaucoma diagnosis and management. SUMMARY Rapidly emerging artificial intelligence algorithms have been used for the detection and management of glaucoma. These algorithms may aid the clinician in caring for patients with this complex disease. Further validation is required prior to employing these techniques widely in clinical practice.
33
Shigueoka LS, Jammal AA, Medeiros FA, Costa VP. Artificial Intelligence in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_201-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
34
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
35
Wang M, Shen LQ, Pasquale LR, Wang H, Li D, Choi EY, Yousefi S, Bex PJ, Elze T. An Artificial Intelligence Approach to Assess Spatial Patterns of Retinal Nerve Fiber Layer Thickness Maps in Glaucoma. Transl Vis Sci Technol 2020; 9:41. [PMID: 32908804 PMCID: PMC7453051 DOI: 10.1167/tvst.9.9.41] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/15/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this study was to classify the spatial patterns of retinal nerve fiber layer thickness (RNFLT) and assess their associations with visual field (VF) loss in glaucoma. Methods We used paired reliable 24-2 VFs and optical coherence tomography scans of 691 eyes from 691 patients. The RNFLT maps were used to determine the RNFLT patterns (RPs) by non-negative matrix factorization (NMF). The RPs were correlated with mean deviation (MD), spherical equivalent (SE), and major blood vessel locations. The RPs were further used to predict the 52 total deviation (TD) values by linear regression, compared with models using 24 15-degree sectors. Last, we associated the RPs with the average TD of the central upper two locations (C2-TD). Stepwise regression was applied to remove redundant features. Results NMF highlighted 16 distinct RPs. Twelve RPs had arcuate-like informative zones (iZones): six with superior iZones, five with inferior iZones, and one with a bi-hemifield iZone; the remaining four RPs had non-arcuate-like temporal or nasal iZones. Twelve, nine, nine, and nine RPs were significantly (P < 0.05) correlated with MD, SE, and superior and inferior artery locations, respectively. Using RPs significantly (P < 0.05) improved the prediction of the 52 TDs compared with using 24 15-degree sectors. Using RPs significantly (P < 0.001) improved the C2-TD prediction related to thinning in the inferior vulnerability zone compared with using the 24 sectoral RNFLTs. Conclusions Using RPs improved the VF prediction compared with using sectoral RNFLTs. Translational Relevance The RPs, characterizing both pathological and anatomical variations, can potentially assist clinicians in better assessing RNFLT loss.
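As a hedged sketch (not the authors' code), the core NMF step described above — factorizing a stack of RNFLT maps into a small set of nonnegative spatial patterns and per-eye loadings — can be illustrated with scikit-learn. The map size and the synthetic gamma-distributed thickness data here are assumptions for illustration; only the choice of 16 components follows the abstract.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic stand-in for RNFLT maps: 100 "eyes", each a 20x20 thickness
# map flattened to a 400-dim nonnegative vector (real maps are larger).
n_eyes, h, w = 100, 20, 20
maps = rng.gamma(shape=2.0, scale=40.0, size=(n_eyes, h * w))

# Factorize maps ~= coefficients @ patterns, with 16 nonnegative spatial
# patterns (the paper reports 16 distinct RNFLT patterns, or "RPs").
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
coefficients = model.fit_transform(maps)   # (100, 16) per-eye loadings
patterns = model.components_               # (16, 400) spatial patterns

# Each pattern reshapes back into a 2D map for visualization, and the
# per-eye loadings can serve as regressors for VF prediction.
assert coefficients.shape == (n_eyes, 16)
assert patterns.shape == (16, h * w)
assert (coefficients >= 0).all() and (patterns >= 0).all()
```

The nonnegativity constraint is what makes the learned patterns resemble additive, part-like thinning zones rather than the signed components PCA would produce.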
Affiliation(s)
- Mengyu Wang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Lucy Q Shen
- Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Louis R Pasquale
- Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Channing Division of Network Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Hui Wang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA; Institute for Psychology and Behavior, Jilin University of Finance and Economics, Changchun, China
- Dian Li
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Eun Young Choi
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Siamak Yousefi
- Hamilton Eye Institute, University of Tennessee Health Science Center, Memphis, TN, USA
- Peter J Bex
- Department of Psychology, Northeastern University, Boston, MA, USA
- Tobias Elze
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA; Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
36
Girard MJA, Schmetterer L. Artificial intelligence and deep learning in glaucoma: Current state and future prospects. PROGRESS IN BRAIN RESEARCH 2020; 257:37-64. [PMID: 32988472 DOI: 10.1016/bs.pbr.2020.07.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Over the past few years, there has been unprecedented excitement for artificial intelligence (AI) research in the field of ophthalmology; this has naturally extended to glaucoma, a progressive optic neuropathy characterized by retinal ganglion cell axon loss and associated visual field defects. In this review, we discuss how AI may offer a unique opportunity to tackle the many challenges faced in the glaucoma clinic, where the disease remains poorly understood and early diagnosis and prognosis are difficult to provide accurately and in a timely fashion. In the short term, AI could also become a game changer by paving the way for the first cost-effective glaucoma screening campaigns. While there are undeniable technical and clinical challenges ahead, more so than for other ophthalmic disorders in which AI is already booming, we strongly believe that glaucoma specialists should embrace AI as a companion to their practice. Finally, this review also reminds us that glaucoma is a complex group of disorders with a multitude of physiological manifestations that cannot yet be observed clinically. AI in glaucoma is here to stay, but it will not be the only tool needed to solve glaucoma.
Affiliation(s)
- Michaël J A Girard
- Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Leopold Schmetterer
- Ocular Imaging, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Clinical and Experimental Ophthalmology, Basel, Switzerland
37
Yu HH, Maetschke SR, Antony BJ, Ishikawa H, Wollstein G, Schuman JS, Garnavi R. Estimating Global Visual Field Indices in Glaucoma by Combining Macula and Optic Disc OCT Scans Using 3-Dimensional Convolutional Neural Networks. Ophthalmol Glaucoma 2020; 4:102-112. [PMID: 32826205 DOI: 10.1016/j.ogla.2020.07.002] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 07/01/2020] [Accepted: 07/06/2020] [Indexed: 12/21/2022]
Abstract
PURPOSE To evaluate the accuracy at which visual field global indices could be estimated from OCT scans of the retina using deep neural networks and to quantify the contributions to the estimates by the macula (MAC) and the optic nerve head (ONH). DESIGN Observational cohort study. PARTICIPANTS A total of 10 370 eyes from 109 healthy patients, 697 glaucoma suspects, and 872 patients with glaucoma over multiple visits (median = 3). METHODS Three-dimensional convolutional neural networks were trained to estimate global visual field indices derived from automated Humphrey perimetry (SITA 24-2) tests (Zeiss, Dublin, CA), using OCT scans centered on the MAC, the ONH, or both (MAC + ONH) as inputs. MAIN OUTCOME MEASURES Spearman's rank correlation coefficients, Pearson's correlation coefficients, and absolute errors calculated for 2 indices: visual field index (VFI) and mean deviation (MD). RESULTS The MAC + ONH input achieved a Spearman's correlation coefficient of 0.76 and a Pearson's correlation of 0.87 for VFI and MD. Median absolute error was 2.7 for VFI and 1.57 decibels (dB) for MD. Estimates from the MAC or ONH alone were significantly less correlated and less accurate. Accuracy was dependent on the OCT signal strength and the stage of glaucoma severity. CONCLUSIONS The accuracy of global visual field index estimates is improved by integrating information from the MAC and ONH in advanced glaucoma, suggesting that structural changes of the 2 regions have different time courses across the disease severity spectrum.
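The outcome measures reported above — Spearman's and Pearson's correlations plus median absolute error between estimated and measured MD — can be computed as in this minimal sketch. The data here are synthetic stand-ins, not the study's measurements; only the choice of metrics follows the abstract.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(1)

# Synthetic "measured" mean deviation (dB) and a noisy "estimate",
# standing in for Humphrey 24-2 MD and a model's prediction.
md_true = rng.uniform(-25.0, 2.0, size=200)
md_pred = md_true + rng.normal(0.0, 2.0, size=200)

rho, _ = spearmanr(md_true, md_pred)        # rank correlation
r, _ = pearsonr(md_true, md_pred)           # linear correlation
mae = np.median(np.abs(md_true - md_pred))  # median absolute error (dB)
```

Reporting the rank correlation alongside Pearson's r is useful here because MD is bounded and its distribution is skewed toward severe loss, so a monotone but nonlinear relationship would still score well on Spearman's rho.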
Affiliation(s)
- Hsin-Hao Yu
- IBM Research Australia, Melbourne, Victoria, Australia
- Hiroshi Ishikawa
- Department of Ophthalmology, NYU Langone Health, New York, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, New York; Center for Neural Science, NYU, New York, New York
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, New York, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, New York; Center for Neural Science, NYU, New York, New York
- Joel S Schuman
- Department of Ophthalmology, NYU Langone Health, New York, New York; Department of Biomedical Engineering, NYU Tandon School of Engineering, New York, New York; Center for Neural Science, NYU, New York, New York; Department of Physiology and Neuroscience, NYU Langone Health, New York, New York; Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, New York, New York
- Rahil Garnavi
- IBM Research Australia, Melbourne, Victoria, Australia