1
Schmetterer L, Scholl H, Garhöfer G, Janeschitz-Kriegl L, Corvi F, Sadda SR, Medeiros FA. Endpoints for clinical trials in ophthalmology. Prog Retin Eye Res 2023; 97:101160. PMID: 36599784. DOI: 10.1016/j.preteyeres.2022.101160.
Abstract
With the identification of novel targets, the number of interventional clinical trials in ophthalmology has increased. Visual acuity has long been considered the gold-standard endpoint for clinical trials, but in recent years it has become evident that other endpoints are required for many indications, including geographic atrophy and inherited retinal disease. In glaucoma, the currently available drugs were approved based on their IOP-lowering capacity. Some recent findings indicate, however, that at the same level of IOP reduction, not all drugs have the same effect on visual field progression. For neuroprotection trials in glaucoma, novel surrogate endpoints are required, which may include functional or structural parameters, or a combination of both. A number of potential surrogate endpoints for ophthalmology clinical trials have been identified, but their validation is complicated and requires solid scientific evidence. In this article we summarize candidates for clinical endpoints in ophthalmology, with a focus on retinal disease and glaucoma. Functional and structural biomarkers, as well as quality-of-life measures, are discussed, and their potential to serve as endpoints in pivotal trials is critically evaluated.
Affiliation(s)
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Hendrik Scholl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Lucas Janeschitz-Kriegl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Federico Corvi
- Eye Clinic, Department of Biomedical and Clinical Sciences "Luigi Sacco", University of Milan, Milan, Italy
- SriniVas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine at University of California, Los Angeles, CA, USA
- Felipe A Medeiros
- Vision, Imaging and Performance Laboratory, Department of Ophthalmology, Duke Eye Center, Duke University, Durham, NC, USA
2
Chen Z, Shemuelian E, Wollstein G, Wang Y, Ishikawa H, Schuman JS. Segmentation-Free OCT-Volume-Based Deep Learning Model Improves Pointwise Visual Field Sensitivity Estimation. Transl Vis Sci Technol 2023; 12:28. PMID: 37382575. PMCID: PMC10318595. DOI: 10.1167/tvst.12.6.28.
Abstract
Purpose: The structural changes measured by optical coherence tomography (OCT) are related to functional changes in visual fields (VFs). This study aims to accurately assess the structure-function relationship and overcome the challenges posed by the minimal measurable level (floor effect) of the segmentation-dependent OCT measurements commonly used in prior studies. Methods: We developed a deep learning model to estimate functional performance directly from three-dimensional (3D) OCT volumes and compared it to a model trained on segmentation-dependent two-dimensional (2D) OCT thickness maps. Moreover, we proposed a gradient loss to utilize the spatial information of VFs. Results: Our 3D model was significantly better than the 2D model both globally and pointwise in terms of both mean absolute error (MAE = 3.11 ± 3.54 vs. 3.47 ± 3.75 dB, P < 0.001) and Pearson's correlation coefficient (0.80 vs. 0.75, P < 0.001). On a subset of test data with floor effects, the 3D model was less affected than the 2D model (MAE = 5.24 ± 3.99 vs. 6.34 ± 4.58 dB, P < 0.001; correlation 0.83 vs. 0.74, P < 0.001). The gradient loss improved the estimation error for low-sensitivity values. Furthermore, our 3D model outperformed all prior studies. Conclusions: By providing a better quantitative model that encapsulates the structure-function relationship more accurately, our method may help derive VF test surrogates. Translational Relevance: DL-based VF surrogates not only benefit patients by reducing VF testing time but also allow clinicians to make clinical judgments without the inherent limitations of VFs.
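The two comparison metrics in this entry, pointwise mean absolute error in dB and Pearson's correlation between predicted and measured VF sensitivities, can be sketched as below. This is an illustrative computation on synthetic data, not the authors' code; the toy 52-point array simply mimics the size of a 24-2 test grid.

```python
import numpy as np

def vf_metrics(pred, actual):
    """Pointwise mean absolute error (dB) and Pearson correlation
    between predicted and measured visual field sensitivities."""
    pred = np.asarray(pred, dtype=float).ravel()
    actual = np.asarray(actual, dtype=float).ravel()
    mae = float(np.mean(np.abs(pred - actual)))      # average |error| per point
    r = float(np.corrcoef(pred, actual)[0, 1])       # Pearson's r
    return mae, r

# Illustrative 52-point visual field (sensitivities in dB)
rng = np.random.default_rng(0)
actual = rng.uniform(0.0, 35.0, size=52)
pred = actual + rng.normal(0.0, 3.0, size=52)        # simulated estimation error
mae, r = vf_metrics(pred, actual)
```

A lower MAE and a higher r would correspond to the 3D model's advantage reported in the abstract.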
Affiliation(s)
- Zhiqi Chen
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Eitan Shemuelian
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Center for Neural Science, NYU College of Arts and Sciences, New York, NY, USA
- Yao Wang
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Hiroshi Ishikawa
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR, USA
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR, USA
- Joel S. Schuman
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
- Center for Neural Science, NYU College of Arts and Sciences, New York, NY, USA
- Wills Eye Hospital, Philadelphia, PA, USA
3
Chen D, Ran AR, Tan TF, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. PMID: 36706335. DOI: 10.1097/apo.0000000000000596.
Abstract
Diagnosis of glaucoma and detection of its progression remain challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma, but development of these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms focus on a single imaging modality, specifically screening and diagnosis based on fundus photos or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed for disease progression prediction are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment response prediction, and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include scarcity of data and a lack of consensus on diagnostic criteria. Although research on the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York City, NY
- Genentech Inc, South San Francisco, CA
- An Ran Ran
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
- Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore
- Singapore National Eye Center, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
4
Moon S, Lee JH, Choi H, Lee SY, Lee J. Deep learning approaches to predict 10-2 visual field from wide-field swept-source optical coherence tomography en face images in glaucoma. Sci Rep 2022; 12:21041. PMID: 36471039. PMCID: PMC9722778. DOI: 10.1038/s41598-022-25660-x.
Abstract
Close monitoring of central visual field (VF) defects with 10-2 VF testing helps prevent blindness in glaucoma. We aimed to develop a deep learning model to predict 10-2 VF from wide-field swept-source optical coherence tomography (SS-OCT) images. Macular ganglion cell/inner plexiform layer thickness maps with either wide-field en face images (en face model) or retinal nerve fiber layer thickness maps (RNFLT model) were extracted, combined, and preprocessed. Inception-ResNet-V2 was trained to predict 10-2 VF from the combined images. Estimation performance was evaluated using the mean absolute error (MAE) between actual and predicted threshold values, and the two models with different input data were compared. The training dataset comprised paired 10-2 VF and SS-OCT images of 3,025 eyes of 1,612 participants, and the test dataset 337 eyes of 186 participants. Global prediction errors (pointwise MAE) were 3.10 and 3.17 dB for the en face and RNFLT models, respectively. The en face model performed better than the RNFLT model in the superonasal and inferonasal sectors (P = 0.011 and P = 0.030). Prediction errors were smaller in the inferior versus superior hemifields for both models. The deep learning model effectively predicted 10-2 VF from wide-field SS-OCT images and might help clinicians efficiently individualize the frequency of 10-2 VF testing in clinical practice.
Affiliation(s)
- Sangwoo Moon
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan, 49241, Korea
- Jae Hyeok Lee
- Department of Medical AI, Deepnoid Inc, Seoul, 08376, Korea
- Hyunju Choi
- Department of Medical AI, Deepnoid Inc, Seoul, 08376, Korea
- Sun Yeop Lee
- Department of Medical AI, Deepnoid Inc, Seoul, 08376, Korea
- Jiwoong Lee
- Department of Ophthalmology, Pusan National University College of Medicine, Busan, 49241, Korea
- Biomedical Research Institute, Pusan National University Hospital, Busan, 49241, Korea
5
Hemelings R, Elen B, Barbosa-Breda J, Bellon E, Blaschko MB, De Boever P, Stalmans I. Pointwise Visual Field Estimation From Optical Coherence Tomography in Glaucoma Using Deep Learning. Transl Vis Sci Technol 2022; 11:22. PMID: 35998059. PMCID: PMC9424967. DOI: 10.1167/tvst.11.8.22.
Abstract
Purpose: Standard automated perimetry is the gold standard for monitoring visual field (VF) loss in glaucoma management, but it is prone to intrasubject variability. We trained and validated a customized deep learning (DL) regression model with an Xception backbone that estimates pointwise and overall VF sensitivity from unsegmented optical coherence tomography (OCT) scans. Methods: DL regression models were trained on four imaging modalities (circumpapillary OCT at 3.5 mm, 4.1 mm, and 4.7 mm diameter, and scanning laser ophthalmoscopy en face images) to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard (SS) VF exam and a SPECTRALIS OCT. Results: For MD estimation, weighted prediction averaging of all four individual models yielded a mean absolute error (MAE) of 2.89 dB (2.50-3.30) on 186 test images, a 54% reduction (MAEdecr%) from baseline. For estimation of the 52 VF threshold values, the weighted ensemble model achieved an MAE of 4.82 dB (4.45-5.22), an MAEdecr% of 38% from a baseline that predicts the pointwise mean value. DL explained 75% and 58% of the variance (R2) in MD and pointwise sensitivity estimation, respectively. Conclusions: Deep learning can estimate global and pointwise VF sensitivities that fall almost entirely within the 90% test-retest confidence intervals of the 24-2 SS test. Translational Relevance: Fast and consistent VF prediction from unsegmented OCT scans could become a solution for visual function estimation in patients unable to perform reliable VF exams.
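A rough sketch of the weighted prediction averaging and the MAE-decrease metric (MAEdecr%) reported in this entry. The weights and data below are invented for illustration; the authors' actual weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_ensemble(preds, weights):
    """Weighted average of per-model VF predictions.
    preds: (models, points) array-like; weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(preds, dtype=float), axes=1)

def mae_decrease_pct(mae_model, mae_baseline):
    """MAEdecr%: relative MAE reduction versus a baseline predictor."""
    return 100.0 * (mae_baseline - mae_model) / mae_baseline

# Four illustrative models predicting 52 VF thresholds (dB)
rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 35.0, size=52)
preds = [truth + rng.normal(0.0, s, size=52) for s in (3, 4, 5, 6)]
ens = weighted_ensemble(preds, weights=[4, 3, 2, 1])  # favour lower-error models
mae_ens = float(np.mean(np.abs(ens - truth)))
```

Averaging de-correlated model errors is what lets the ensemble beat each individual model, the effect quantified by MAEdecr% in the abstract.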
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium
- Bart Elen
- Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Cardiovascular R&D Center - UnIC@RISE, Department of Surgery and Physiology, Faculty of Medicine of the University of Porto, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar e Universitário São João, Porto, Portugal
- Erwin Bellon
- Department of Information Technology, University Hospitals Leuven, Leuven, Belgium
- Patrick De Boever
- Unit Health, Flemish Institute for Technological Research (VITO), Mol, Belgium
- Center for Environmental Sciences, Faculty of Industrial Engineering, Hasselt University, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, Wilrijk, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Leuven, Belgium
6
Shin Y, Cho H, Shin YU, Seong M, Choi JW, Lee WJ. Comparison between Deep-Learning-Based Ultra-Wide-Field Fundus Imaging and True-Colour Confocal Scanning for Diagnosing Glaucoma. J Clin Med 2022; 11:3168. PMID: 35683577. PMCID: PMC9181263. DOI: 10.3390/jcm11113168.
Abstract
In this retrospective, comparative study, we evaluated and compared the performance of two confocal imaging modalities in detecting glaucoma based on a deep learning (DL) classifier: ultra-wide-field (UWF) fundus imaging and true-colour confocal scanning. A total of 777 eyes, including 273 normal control eyes and 504 glaucomatous eyes, were tested. A convolutional neural network was used for each true-colour confocal scan (Eidon AF™, CenterVue, Padova, Italy) and UWF fundus image (Optomap™, Optos PLC, Dunfermline, UK) to detect glaucoma. The diagnostic model was trained using 545 training and 232 test images. The presence of glaucoma was determined, and the accuracy and area under the receiver operating characteristic curve (AUC) metrics were assessed for diagnostic power comparison. DL-based UWF fundus imaging achieved an AUC of 0.904 (95% confidence interval (CI): 0.861-0.937) and accuracy of 83.62%. In contrast, DL-based true-colour confocal scanning achieved an AUC of 0.868 (95% CI: 0.824-0.912) and accuracy of 81.46%. Both DL-based confocal imaging modalities showed no significant difference in their ability to diagnose glaucoma (p = 0.135) and were comparable to traditional optical coherence tomography parameter-based methods (all p > 0.005). Therefore, using a DL-based algorithm on true-colour confocal scanning and UWF fundus imaging, we confirmed that both confocal fundus imaging techniques had high value in diagnosing glaucoma.
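The AUC figures quoted above can be computed without a dedicated library via the Mann-Whitney U equivalence: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as one half. A minimal sketch on invented labels and scores:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation.
    Pairwise comparison is O(P*N); fine for small illustrative arrays."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count as 1/2
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: higher score = more glaucoma-like
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)   # 8 of 9 positive/negative pairs ranked correctly
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation of glaucomatous from normal eyes.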
Affiliation(s)
- Younji Shin
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Hyunsoo Cho
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Yong Un Shin
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Mincheol Seong
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Jun Won Choi
- Department of Electrical Engineering, Hanyang University, Seoul 04763, Korea
- Correspondence: Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
- Won June Lee
- Department of Ophthalmology, Hanyang University College of Medicine, Seoul 04763, Korea
- Correspondence: Tel.: +82-2-2290-2316 (J.W.C.); +82-2-2290-8570 (W.J.L.)
7
Kihara Y, Montesano G, Chen A, Amerasinghe N, Dimitriou C, Jacob A, Chabi A, Crabb DP, Lee AY. Policy-Driven, Multimodal Deep Learning for Predicting Visual Fields from the Optic Disc and Optical Coherence Tomography Imaging. Ophthalmology 2022; 129:781-791. PMID: 35202616. DOI: 10.1016/j.ophtha.2022.02.017.
Abstract
PURPOSE: To develop and validate a deep learning (DL) system for predicting each point on visual fields (VF) from disc and optical coherence tomography (OCT) imaging and to derive a structure-function mapping. DESIGN: Retrospective, cross-sectional database study. PARTICIPANTS: 6437 patients undergoing routine care for glaucoma at three clinical sites in the UK. METHODS: OCT and infrared reflectance (IR) optic disc imaging were paired with the closest VF within 7 days. EfficientNet-B2 was used to train two single-modality DL models to predict each of the 52 sensitivity points on the 24-2 VF pattern. A policy DL model was designed and trained to fuse the two models' predictions. MAIN OUTCOME MEASURES: Pointwise mean absolute error (PMAE). RESULTS: A total of 5078 imaging-to-VF pairs were used as a held-out test set to measure final performance. The improvement in PMAE with the policy model was 0.485 [0.438, 0.533] dB over the IR image of the disc alone and 0.060 [0.047, 0.073] dB over the OCT alone. The improvement with the policy fusion model was statistically significant (p < 0.0001). Occlusion masking shows that the DL models learned the correct structure-function mapping in a data-driven, feature-agnostic fashion. CONCLUSIONS: The multimodal policy DL model performed best; it provided explainable maps of its confidence in fusing data from single modalities and offers a pathway for probing the structure-function relationship in glaucoma.
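The fusion step can be caricatured as a per-point convex combination of the two single-modality estimates, with mixing weights produced by a policy network. In this sketch the "policy" is just a fixed softmax over made-up confidence scores, purely to illustrate the data flow; it is not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(pred_ir, pred_oct, confidences):
    """Per-point convex combination of two modality predictions.
    confidences: (points, 2) raw scores; softmax turns them into weights."""
    w = softmax(np.asarray(confidences, dtype=float), axis=1)
    stacked = np.stack([pred_ir, pred_oct], axis=1)   # (points, 2)
    return (w * stacked).sum(axis=1)

# 52-point toy example in which OCT is trusted more at every point
pred_ir = np.full(52, 24.0)                 # dB, hypothetical IR-based estimate
pred_oct = np.full(52, 28.0)                # dB, hypothetical OCT-based estimate
conf = np.tile([0.0, 1.0], (52, 1))         # raw scores favouring OCT
fused = fuse(pred_ir, pred_oct, conf)
```

Because the weights are a softmax, the fused estimate always lies between the two single-modality estimates, and the weight map itself is the kind of explainable confidence output the abstract describes.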
Affiliation(s)
- Yuka Kihara
- University of Washington, Department of Ophthalmology, Seattle, Washington
- Giovanni Montesano
- City, University of London, Optometry and Visual Sciences, London, United Kingdom; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, UCL Institute of Ophthalmology, London, United Kingdom
- Andrew Chen
- University of Washington, Department of Ophthalmology, Seattle, Washington
- Nishani Amerasinghe
- University Hospital Southampton NHS Foundation Trust, Southampton, United Kingdom
- Chrysostomos Dimitriou
- Colchester Hospital, East Suffolk and North Essex NHS Foundation Trust, Colchester, United Kingdom
- Aby Jacob
- University Hospital Southampton NHS Foundation Trust, Southampton, United Kingdom
- David P Crabb
- City, University of London, Optometry and Visual Sciences, London, United Kingdom
- Aaron Y Lee
- University of Washington, Department of Ophthalmology, Seattle, Washington
8
Bunod R, Augstburger E, Brasnu E, Labbe A, Baudouin C. [Artificial intelligence and glaucoma: A literature review]. J Fr Ophtalmol 2022; 45:216-232. PMID: 34991909. DOI: 10.1016/j.jfo.2021.11.002.
Abstract
In recent years, research in artificial intelligence (AI) has experienced an unprecedented surge in the field of ophthalmology, in particular glaucoma. The diagnosis and follow-up of glaucoma are complex and rely on a body of clinical evidence and ancillary tests. This large amount of information from structural and functional testing of the optic nerve and macula makes glaucoma a particularly appropriate field for the application of AI. In this paper, we review work using AI in the field of glaucoma, whether for screening, diagnosis, or detection of progression. Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry. Combining these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. We discuss potential applications as well as obstacles and limitations to the deployment and validation of such models. While there is no doubt that AI has the potential to revolutionize glaucoma management and screening, research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explainability of the predictions.
Affiliation(s)
- R Bunod
- Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Augstburger
- Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Brasnu
- Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France
- A Labbe
- Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
- C Baudouin
- Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
9
Jiao S, Jia Y, Yao X. Emerging imaging developments in experimental vision sciences and ophthalmology. Exp Biol Med (Maywood) 2021; 246:2137-2139. PMID: 34404253. PMCID: PMC8718248. DOI: 10.1177/15353702211038891.
Affiliation(s)
- Shuliang Jiao
- Department of Biomedical Engineering, Florida International University, Miami, FL 33174, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA