1
Wang Y, Wei R, Yang D, Song K, Shen Y, Niu L, Li M, Zhou X. Development and validation of a deep learning model to predict axial length from ultra-wide field images. Eye (Lond) 2024; 38:1296-1300. [PMID: 38102471] [PMCID: PMC11076502] [DOI: 10.1038/s41433-023-02885-2]
Abstract
BACKGROUND To validate the feasibility of building a deep learning model to predict axial length (AL) for moderate to high myopic patients from ultra-wide field (UWF) images. METHODS This study included 6174 UWF images from 3134 myopic patients seen between 2014 and 2020 at the Eye and ENT Hospital of Fudan University. Of the 6174 images, 4939 were used for training, 617 for validation, and 618 for testing. The coefficient of determination (R2), mean absolute error (MAE), and mean squared error (MSE) were used to evaluate model performance. RESULTS The model predicted AL with high accuracy: R2, MSE, and MAE were 0.579, 1.419, and 0.9043, respectively. The prediction error was under 1 mm for 64.88% of test images, within 5% for 76.90%, and within 10% for 97.57%. The prediction bias had a strong negative correlation with true AL values and differed significantly between male and female patients (P < 0.001). Generated heatmaps demonstrated that the model focused on posterior atrophy changes in pathological fundi and on the peri-optic zone in normal fundi. In sex-specific models, the R2, MSE, and MAE of the female AL model were 0.411, 1.357, and 0.911 on the female dataset and 0.343, 2.428, and 1.264 on the male dataset; the corresponding metrics of the male AL model were 0.216, 2.900, and 1.352 on the male dataset and 0.083, 2.112, and 1.154 on the female dataset. CONCLUSIONS It is feasible to use deep learning models to predict AL for moderate to high myopic patients from UWF images.
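Editorial note: the abstract above evaluates the regression model with R2, MSE, and MAE. As a reminder of how these three metrics relate, here is a minimal NumPy sketch; the axial-length values are hypothetical illustrations, not data from the study:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute R^2, MSE and MAE for predicted vs. true values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    mse = float(np.mean(residuals ** 2))            # mean squared error
    mae = float(np.mean(np.abs(residuals)))         # mean absolute error
    ss_res = float(np.sum(residuals ** 2))          # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return r2, mse, mae

# Toy example with hypothetical axial lengths in mm:
r2, mse, mae = regression_metrics([26.0, 27.5, 29.0], [26.2, 27.4, 28.6])
```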
Affiliation(s)
- Yunzhe Wang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Ruoyan Wei
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Shanghai Medical College and Zhongshan Hospital Immunotherapy Translational Research Center, Shanghai, China
- Danjuan Yang
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Kaimin Song
- Beijing Airdoc Technology Co., Ltd, Beijing, China
- Yang Shen
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Lingling Niu
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Meiyan Li
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
- Xingtao Zhou
- Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China
- NHC Key Laboratory of Myopia (Fudan University); Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China
2
Jongsma KR, Sand M, Milota M. Why we should not mistake accuracy of medical AI for efficiency. NPJ Digit Med 2024; 7:57. [PMID: 38438477] [PMCID: PMC10912629] [DOI: 10.1038/s41746-024-01047-2]
Affiliation(s)
- Karin Rolanda Jongsma
- Bioethics & Health Humanities, Julius Center, University Medical Center Utrecht, Utrecht University, PO Box 85500, 3508 CA, Utrecht, The Netherlands
- Martin Sand
- TU Delft, Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Jaffalaan 5, 2628 BX, Delft, The Netherlands
- Megan Milota
- Bioethics & Health Humanities, Julius Center, University Medical Center Utrecht, Utrecht University, PO Box 85500, 3508 CA, Utrecht, The Netherlands
3
Fan W, Yang Y, Qi J, Zhang Q, Liao C, Wen L, Wang S, Wang G, Xia Y, Wu Q, Fan X, Chen X, He M, Xiao J, Yang L, Liu Y, Chen J, Wang B, Zhang L, Yang L, Gan H, Zhang S, Liu G, Ge X, Cai Y, Zhao G, Zhang X, Xie M, Xu H, Zhang Y, Chen J, Li J, Han S, Mu K, Xiao S, Xiong T, Nian Y, Zhang D. A deep-learning-based framework for identifying and localizing multiple abnormalities and assessing cardiomegaly in chest X-ray. Nat Commun 2024; 15:1347. [PMID: 38355644] [PMCID: PMC10867134] [DOI: 10.1038/s41467-024-45599-z]
Abstract
Accurate identification and localization of multiple abnormalities are crucial steps in the interpretation of chest X-rays (CXRs); however, the lack of a large CXR dataset with bounding boxes has severely constrained deep-learning-based localization research. We created a large CXR dataset named CXR-AL14, containing 165,988 CXRs and 253,844 bounding boxes. On the basis of this dataset, a deep-learning-based framework was developed to identify and localize 14 common abnormalities and simultaneously calculate the cardiothoracic ratio (CTR). The mean average precision values obtained by the model for the 14 abnormalities reached 0.572-0.631 at an intersection-over-union threshold of 0.5, and the intraclass correlation coefficient of the CTR algorithm exceeded 0.95 on the held-out, multicentre and prospective test datasets. The framework shows excellent performance, good generalization ability and strong clinical applicability; it is superior to senior radiologists and suitable for routine clinical settings.
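Editorial note: the mean-average-precision figures above count a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of the IoU computation underlying that threshold (the boxes below are illustrative, not from the CXR-AL14 dataset):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box counts as a true positive only at IoU >= 0.5:
iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))
hit = iou >= 0.5
```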
Affiliation(s)
- Weijie Fan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Yang
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Jing Qi
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Qichuan Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Cuiwei Liao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Li Wen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guangxian Wang
- Department of Radiology, People's Hospital of Banan, Chongqing Medical University, Chongqing, 401320, P. R. China
- Yu Xia
- Department of Radiology, Xishui Hospital of Traditional Chinese Medicine, Zunyi, Guizhou Province, 564600, P. R. China
- Qihua Wu
- Department of Radiology, People's Hospital of Nanchuan, Chongqing, 408400, P. R. China
- Xiaotao Fan
- Department of Radiology, Fengdu People's Hospital, Chongqing, 408200, P. R. China
- Xingcai Chen
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Mi He
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- JingJing Xiao
- Department of Medical Engineering, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liu Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yun Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jia Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Bing Wang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Lei Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Liuqing Yang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Hui Gan
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shushu Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Guofang Liu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xiaodong Ge
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yuanqing Cai
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Gang Zhao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Xi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Mingxun Xie
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Huilin Xu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yi Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jiao Chen
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Jun Li
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shuang Han
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Ke Mu
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Shilin Xiao
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Tingwei Xiong
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
- Yongjian Nian
- Department of Digital Medicine, School of Biomedical Engineering and Imaging Medicine, Army Medical University, Chongqing, 400038, P. R. China
- Dong Zhang
- Department of Radiology, Second Affiliated Hospital, Army Medical University, Chongqing, 400037, P. R. China
4
Maloca PM, Pfau M, Janeschitz-Kriegl L, Reich M, Goerdt L, Holz FG, Müller PL, Valmaggia P, Fasler K, Keane PA, Zarranz-Ventura J, Zweifel S, Wiesendanger J, Kaiser P, Enz TJ, Rothenbuehler SP, Hasler PW, Juedes M, Freichel C, Egan C, Tufail A, Scholl HPN, Denk N. Human selection bias drives the linear nature of the more ground truth effect in explainable deep learning optical coherence tomography image segmentation. J Biophotonics 2024; 17:e202300274. [PMID: 37795556] [DOI: 10.1002/jbio.202300274]
Abstract
Supervised deep learning (DL) algorithms depend heavily on training data annotated by human graders, for example, for optical coherence tomography (OCT) image annotation. Despite the tremendous success of DL, these ground truth labels, because they rest on human judgment, can be inaccurate and/or ambiguous and can introduce a human selection bias. We therefore investigated the impact of the size of the ground truth and of variable numbers of graders on the predictive performance of the same DL architecture, repeating each experiment three times. The largest training dataset delivered a prediction performance close to that of human experts. All DL systems used were highly consistent. Nevertheless, the DL under-performers could not achieve any further autonomous improvement even after repeated training. Furthermore, a quantifiable linear relationship was detected between ground truth ambiguity and the beneficial effect of having a larger amount of ground truth data, termed the more-ground-truth effect.
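Editorial note: one common way to turn multiple graders' annotations into a single ground truth, and to quantify their ambiguity, is majority voting with a disagreement score. This is a generic sketch of that idea, not the consensus procedure used in the study above; all labels are hypothetical:

```python
from collections import Counter

def consensus_and_ambiguity(grader_labels):
    """Majority-vote label per sample, plus an ambiguity score defined here
    as the fraction of graders disagreeing with the majority."""
    consensus, ambiguity = [], []
    for labels in grader_labels:                      # one annotation per grader
        winner, votes = Counter(labels).most_common(1)[0]
        consensus.append(winner)
        ambiguity.append(1.0 - votes / len(labels))
    return consensus, ambiguity

# Three graders annotate three OCT regions (hypothetical labels):
cons, amb = consensus_and_ambiguity([["fluid", "fluid", "fluid"],
                                     ["fluid", "retina", "fluid"],
                                     ["fluid", "retina", "choroid"]])
```

Note that `Counter.most_common` breaks ties by insertion order, so fully ambiguous samples (as in the third region) fall back to the first grader's label.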
Affiliation(s)
- Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Maximilian Pfau
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Lucas Janeschitz-Kriegl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Michael Reich
- Eye Center, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Lukas Goerdt
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Philipp L Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Makula Center, Suedblick Eye Centers, Augsburg, Germany
- Philippe Valmaggia
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Katrin Fasler
- Department of Ophthalmology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Sandrine Zweifel
- Department of Ophthalmology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Tim J Enz
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Pascal W Hasler
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Marlene Juedes
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
- Christian Freichel
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Nora Denk
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
5
Schmetterer L, Scholl H, Garhöfer G, Janeschitz-Kriegl L, Corvi F, Sadda SR, Medeiros FA. Endpoints for clinical trials in ophthalmology. Prog Retin Eye Res 2023; 97:101160. [PMID: 36599784] [DOI: 10.1016/j.preteyeres.2022.101160]
Abstract
With the identification of novel targets, the number of interventional clinical trials in ophthalmology has increased. Visual acuity has long been considered the gold-standard endpoint for clinical trials, but in recent years it has become evident that other endpoints are required for many indications, including geographic atrophy and inherited retinal disease. In glaucoma, the currently available drugs were approved based on their capacity to lower intraocular pressure (IOP). Some recent findings do, however, indicate that at the same level of IOP reduction, not all drugs have the same effect on visual field progression. For neuroprotection trials in glaucoma, novel surrogate endpoints are required, which may include functional or structural parameters or a combination of both. A number of potential surrogate endpoints for ophthalmology clinical trials have been identified, but their validation is complicated and requires solid scientific evidence. In this article we summarize candidate clinical endpoints in ophthalmology, with a focus on retinal disease and glaucoma. Functional and structural biomarkers, as well as quality-of-life measures, are discussed, and their potential to serve as endpoints in pivotal trials is critically evaluated.
Affiliation(s)
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore; School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore; Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Hendrik Scholl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Lucas Janeschitz-Kriegl
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Federico Corvi
- Eye Clinic, Department of Biomedical and Clinical Sciences "Luigi Sacco", University of Milan, Italy
- SriniVas R Sadda
- Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine at University of California, Los Angeles, CA, USA
- Felipe A Medeiros
- Vision, Imaging and Performance Laboratory, Department of Ophthalmology, Duke Eye Center, Duke University, Durham, NC, USA
6
Cardinale-Villalobos L, Jimenez-Delgado E, García-Ramírez Y, Araya-Solano L, Solís-García LA, Méndez-Porras A, Alfaro-Velasco J. IoT System Based on Artificial Intelligence for Hot Spot Detection in Photovoltaic Modules for a Wide Range of Irradiances. Sensors (Basel) 2023; 23:6749. [PMID: 37571532] [PMCID: PMC10422287] [DOI: 10.3390/s23156749]
Abstract
Infrared thermography (IRT) is a technique used to diagnose photovoltaic (PV) installations by detecting sub-optimal conditions. The growth of PV installations in smart cities has driven the search for technology that improves the use of IRT, which conventionally requires irradiance greater than 700 W/m2 and is therefore unusable whenever irradiance falls below that value. This project presents an IoT platform based on artificial intelligence (AI) that automatically detects hot spots in PV modules by analyzing the temperature differentials between modules exposed to irradiances greater than 300 W/m2. For this purpose, two AI models (deep learning and machine learning) were trained and tested in a real PV installation where hot spots were induced. The system detected hot spots with a sensitivity of 0.995 and an accuracy of 0.923 under dirty, short-circuited, and partially shaded conditions. This project differs from others in that it proposes an alternative that facilitates IRT-based diagnostics and evaluates the real temperatures of PV modules, which represents a potential economic saving for PV installation managers and inspectors.
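Editorial note: the core idea of flagging hot spots from temperature differentials can be illustrated with a simple rule-based sketch. This is not the paper's trained AI pipeline; the irradiance floor mirrors the 300 W/m2 condition above, but the delta-T threshold and all temperatures are hypothetical:

```python
def flag_hot_spots(module_temps, irradiance_wm2, min_irradiance=300.0, delta_t=10.0):
    """Flag modules whose temperature exceeds the array median by more than
    delta_t degrees C; only evaluated above a minimum irradiance.
    Thresholds are illustrative, not values from the study."""
    if irradiance_wm2 < min_irradiance:
        raise ValueError("irradiance too low for reliable IRT diagnosis")
    temps = sorted(module_temps)
    n = len(temps)
    # median of the array's module temperatures
    median = temps[n // 2] if n % 2 else (temps[n // 2 - 1] + temps[n // 2]) / 2
    return [t - median > delta_t for t in module_temps]

# Hypothetical module temperatures (deg C) at 650 W/m2:
flags = flag_hot_spots([41.0, 40.5, 58.0, 39.8], irradiance_wm2=650.0)
```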
Affiliation(s)
- Efren Jimenez-Delgado
- School of Computer Engineering, Costa Rica Institute of Technology, Cartago 159-7050, Costa Rica
- Yariel García-Ramírez
- School of Electronic Engineering, Costa Rica Institute of Technology, Cartago 159-7050, Costa Rica
- Luis Araya-Solano
- School of Physics, Costa Rica Institute of Technology, Cartago 159-7050, Costa Rica
- Abel Méndez-Porras
- School of Computer Engineering, Costa Rica Institute of Technology, Cartago 159-7050, Costa Rica
- Jorge Alfaro-Velasco
- School of Computer Engineering, Costa Rica Institute of Technology, Cartago 159-7050, Costa Rica
7
Mittal P, Bhatnagar C. Effectual accuracy of OCT image retinal segmentation with the aid of speckle noise reduction and boundary edge detection strategy. J Microsc 2023; 289:164-179. [PMID: 36373509] [DOI: 10.1111/jmi.13152]
Abstract
Optical coherence tomography (OCT) has proven to be a valuable imaging tool in the field of ophthalmology, and it is becoming increasingly relevant in the field of neurology. Several OCT image segmentation methods have been developed to segment retinal images; however, sophisticated speckle noise with low-intensity restrictions, complex retinal tissues, and inaccurate retinal layer structure remain challenges for effective retinal segmentation. Hence, in this research, complicated speckle noise is removed using a novel far-flung ratio algorithm, in which preprocessing greatly reduces the speckle noise through new similarity and statistical measures. Additionally, novel haphazard-walk and inter-frame flattening algorithms are presented to tackle the weak object boundaries in OCT images. These algorithms are effective at detecting edges and estimating minimal weighted paths, which reduces the time complexity. In addition, segmentation of OCT images is simplified by a novel N-ret layer segmentation approach that segments multiple surfaces simultaneously, ensures unambiguous segmentation across neighbouring layers, and improves segmentation accuracy by using two grey-scale values to construct the data. Consequently, the proposed work achieved OCT image segmentation with 98.5% accuracy.
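Editorial note: "minimal weighted path" boundary tracing is a classic device in OCT layer segmentation; a retinal boundary is recovered as the lowest-cost left-to-right path through a cost image. This is a generic dynamic-programming sketch of that idea, not the paper's algorithm; the tiny cost matrix is synthetic:

```python
import numpy as np

def minimal_path(cost):
    """Minimal weighted path from the left to the right edge of a cost image
    (rows = depth, columns = A-scans). At each column the path may move up or
    down by at most one row. Returns one row index per column."""
    rows, cols = cost.shape
    acc = cost.astype(float)                     # accumulated cost
    back = np.zeros((rows, cols), dtype=int)     # backpointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            back[r, c] = prev
            acc[r, c] += acc[prev, c - 1]
    path = [int(np.argmin(acc[:, -1]))]          # backtrack from cheapest end
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# A low-cost valley along row 1 should be recovered as the boundary:
cost = np.array([[9, 9, 9, 9],
                 [1, 1, 1, 1],
                 [9, 9, 9, 9]])
boundary = minimal_path(cost)
```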
Affiliation(s)
- Praveen Mittal
- Computer Engineering & Applications, GLA University, Mathura, UP, India
- Charul Bhatnagar
- Computer Engineering & Applications, GLA University, Mathura, UP, India
8
The Classification of Common Macular Diseases Using Deep Learning on Optical Coherence Tomography Images with and without Prior Automated Segmentation. Diagnostics (Basel) 2023; 13:189. [PMID: 36672999] [PMCID: PMC9858554] [DOI: 10.3390/diagnostics13020189]
Abstract
We compared the performance of deep learning (DL) in the classification of optical coherence tomography (OCT) images of macular diseases between automated classification alone and in combination with automated segmentation. OCT images were collected from patients with neovascular age-related macular degeneration, polypoidal choroidal vasculopathy, diabetic macular edema, retinal vein occlusion, cystoid macular edema in Irvine-Gass syndrome, and other macular diseases, along with the normal fellow eyes. A total of 14,327 OCT images were used to train the DL models. Three experiments were conducted: classification alone (CA), and automated segmentation of the OCT images by RelayNet or by the graph-cut technique before classification (combination methods 1 (CM1) and 2 (CM2), respectively). For classification of the macular diseases at validation, the sensitivity, specificity, and accuracy of CA were 62.55%, 95.16%, and 93.14%, respectively; of CM1, 72.90%, 96.20%, and 93.92%; and of CM2, 71.36%, 96.42%, and 94.80%. The accuracy of CM2 was statistically higher than that of CA (p = 0.05878). All three methods achieved an AUC of 97%. Applying DL to segment OCT images before classification by another DL model may improve classification performance.
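Editorial note: the sensitivity/specificity/accuracy triplets above all derive from the same confusion-matrix counts. A minimal sketch of the definitions, with hypothetical counts rather than the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one macular-disease class:
sens, spec, acc = classification_metrics(tp=73, fp=19, tn=481, fn=27)
```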
9
Carrera-Escalé L, Benali A, Rathert AC, Martín-Pinardel R, Bernal-Morales C, Alé-Chilet A, Barraso M, Marín-Martinez S, Feu-Basilio S, Rosinés-Fonoll J, Hernandez T, Vilá I, Castro-Dominguez R, Oliva C, Vinagre I, Ortega E, Gimenez M, Vellido A, Romero E, Zarranz-Ventura J. Radiomics-Based Assessment of OCT Angiography Images for Diabetic Retinopathy Diagnosis. Ophthalmol Sci 2022; 3:100259. [PMID: 36578904] [PMCID: PMC9791596] [DOI: 10.1016/j.xops.2022.100259]
Abstract
Purpose To evaluate the diagnostic accuracy of machine learning (ML) techniques applied to radiomic features extracted from OCT and OCT angiography (OCTA) images for diabetes mellitus (DM), diabetic retinopathy (DR), and referable DR (R-DR) diagnosis. Design Cross-sectional analysis of a retinal image dataset from a previous prospective OCTA study (ClinicalTrials.gov NCT03422965). Participants Patients with type 1 DM and controls included in the progenitor study. Methods Radiomic features were extracted from fundus retinographies, OCT, and OCTA images of each study eye. Logistic regression, linear discriminant analysis, support vector classifier (SVC)-linear, SVC-radial basis function, and random forest models were created to evaluate their diagnostic accuracy for DM, DR, and R-DR diagnosis across all image types. Main Outcome Measures Mean and standard deviation of the area under the receiver operating characteristic curve (AUC) for each ML model and for each individual and combined image type. Results A dataset of 726 eyes (439 individuals) was included. For DM diagnosis, the greatest AUC was observed for OCT (0.82, 0.03). For DR detection, the greatest AUC was observed for OCTA (0.77, 0.03), especially in the 3 × 3 mm superficial capillary plexus OCTA scan (0.76, 0.04). For R-DR diagnosis, the greatest AUC was observed for OCTA (0.87, 0.12) and the deep capillary plexus OCTA scan (0.86, 0.08). The addition of clinical variables (age, sex, etc.) improved most models' AUC for DM, DR, and R-DR diagnosis. The performance of the models was similar in unilateral and bilateral eye image datasets. Conclusions Radiomics extracted from OCT and OCTA images allow identification of patients with DM, DR, and R-DR using standard ML classifiers. OCT was the best test for DM diagnosis, OCTA for DR and R-DR diagnosis, and the addition of clinical variables improved most models. This pioneering study demonstrates that radiomics-based ML techniques applied to OCT and OCTA images may be an option for DR screening in patients with type 1 DM. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
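Editorial note: every result above is reported as an AUC, which has a useful probabilistic reading: the chance that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case (ties counted as half). A minimal NumPy sketch of that rank-based computation; the labels and scores are hypothetical, not from the study:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    with ties counted as one half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical radiomics-model scores for DR (1) vs no-DR (0) eyes:
auc = auc_score([1, 1, 0, 0, 1], [0.9, 0.8, 0.3, 0.55, 0.4])
```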
Key Words
- AI, artificial intelligence
- AUC, area under the curve
- Artificial intelligence
- DCP, deep capillary plexus
- DM, diabetes mellitus
- DR, diabetic retinopathy
- Diabetic retinopathy
- FR, fundus retinographies
- LDA, linear discriminant analysis
- LR, logistic regression
- ML, machine learning
- Machine learning
- OCT angiography
- OCTA, OCT angiography
- R-DR, referable DR
- RF, random forest
- Radiomics
- SCP, superficial capillary plexus
- SVC, support vector classifier
- rbf, radial basis function
Affiliation(s)
- Laura Carrera-Escalé
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Anass Benali
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Ann-Christin Rathert
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Ruben Martín-Pinardel
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Anibal Alé-Chilet
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Marina Barraso
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Sara Marín-Martinez
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Silvia Feu-Basilio
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Josep Rosinés-Fonoll
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Teresa Hernandez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Irene Vilá
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Cristian Oliva
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Irene Vinagre
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Diabetes Unit, Hospital Clínic de Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Emilio Ortega
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Diabetes Unit, Hospital Clínic de Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Marga Gimenez
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Diabetes Unit, Hospital Clínic de Barcelona, Spain
- Institut Clínic de Malalties Digestives i Metaboliques (ICMDM), Hospital Clínic de Barcelona, Spain
- Alfredo Vellido
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Enrique Romero
- Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Department of Computer Science, Facultat d’Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
- Javier Zarranz-Ventura
- August Pi i Sunyer Biomedical Research Institute (IDIBAPS), Barcelona, Spain
- Institut Clínic d´Oftalmología (ICOF), Hospital Clínic de Barcelona, Barcelona, Spain
- Diabetes Unit, Hospital Clínic de Barcelona, Spain
- School of Medicine, Universitat de Barcelona, Spain
- Correspondence: Javier Zarranz-Ventura, MD, PhD, C/ Sabino Arana 1, Barcelona 08028, Spain
|
10
|
Automated Detection of Posterior Vitreous Detachment on OCT Using Computer Vision and Deep Learning Algorithms. OPHTHALMOLOGY SCIENCE 2022; 3:100254. [PMID: 36691594 PMCID: PMC9860346 DOI: 10.1016/j.xops.2022.100254] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 11/03/2022] [Accepted: 11/07/2022] [Indexed: 11/13/2022]
Abstract
Objective To develop automated algorithms for the detection of posterior vitreous detachment (PVD) using OCT imaging. Design Evaluation of a diagnostic test or technology. Subjects Overall, 42 385 consecutive OCT images (865 volumetric OCT scans) obtained with Heidelberg Spectralis from 865 eyes from 464 patients at an academic retina clinic between October 2020 and December 2021 were retrospectively reviewed. Methods We developed a customized computer vision algorithm based on image filtering and edge detection to detect the posterior vitreous cortex for the determination of PVD status. A second deep learning (DL) image classification model based on convolutional neural networks and ResNet-50 architecture was also trained to identify PVD status from OCT images. The training dataset consisted of 674 OCT volume scans (33 026 OCT images), while the validation set consisted of 73 OCT volume scans (3577 OCT images). Overall, 118 OCT volume scans (5782 OCT images) were used as a separate external testing dataset. Main Outcome Measures Accuracy, sensitivity, specificity, F1-scores, and area under the receiver operating characteristic curves (AUROCs) were measured to assess the performance of the automated algorithms. Results Both the customized computer vision algorithm and DL model results were largely in agreement with the PVD status labeled by trained graders. The DL approach achieved an accuracy of 90.7% and an F1-score of 0.932 with a sensitivity of 100% and a specificity of 74.5% for PVD detection from an OCT volume scan. The AUROC was 89% at the image level and 96% at the volume level for the DL model. The customized computer vision algorithm attained an accuracy of 89.5% and an F1-score of 0.912 with a sensitivity of 91.9% and a specificity of 86.1% on the same task.
Conclusions Both the computer vision algorithm and the DL model applied to OCT imaging enabled reliable detection of PVD status, demonstrating the potential for OCT-based automated PVD status classification to assist with vitreoretinal surgical planning. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
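For reference, the metrics reported above (accuracy, sensitivity, specificity, F1) all derive from the binary confusion matrix. A minimal sketch with illustrative counts, not the study's data:

```python
# Binary classification metrics from confusion-matrix counts.
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall for the PVD-positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Illustrative counts only (hypothetical grader-vs-model agreement).
acc, sens, spec, f1 = binary_metrics(tp=80, fp=10, tn=25, fn=3)
```

Note that a model can trade specificity for sensitivity (as in the DL model's 100%/74.5% split above) while keeping accuracy and F1 high when positives dominate the dataset.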
|
11
|
Pfau M, Schmitz-Valckenberg S, Ribeiro R, Safaei R, McKeown A, Fleckenstein M, Holz FG. Association of complement C3 inhibitor pegcetacoplan with reduced photoreceptor degeneration beyond areas of geographic atrophy. Sci Rep 2022; 12:17870. [PMID: 36284220 PMCID: PMC9596427 DOI: 10.1038/s41598-022-22404-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 10/14/2022] [Indexed: 01/20/2023] Open
Abstract
Preservation of photoreceptors beyond areas of retinal pigment epithelium atrophy is a critical treatment goal in eyes with geographic atrophy (GA) to prevent vision loss. Thus, we assessed the association of treatment with the complement C3 inhibitor pegcetacoplan with optical coherence tomography (OCT)-based photoreceptor laminae thicknesses in this post hoc analysis of the FILLY trial (NCT02503332). Retinal layers in OCT were segmented using a deep-learning-based pipeline and extracted along evenly spaced contour lines surrounding areas of GA. The primary outcome measure was change from baseline in (standardized) outer nuclear layer (ONL) thickness at the 5.16° contour line at month 12. Participants treated with pegcetacoplan monthly had a thicker ONL along the 5.16° contour line compared to the pooled sham arm (mean difference [95% CI] + 0.29 z-score units [0.16, 0.42], P < 0.001). The same was evident for eyes treated with pegcetacoplan every other month (+ 0.26 z-score units [0.13, 0.40], P < 0.001). Additionally, eyes treated with pegcetacoplan exhibited a thicker photoreceptor inner segment layer along the 5.16° contour line at month 12. These findings suggest that pegcetacoplan could slow GA progression and lead to reduced thinning of photoreceptor layers beyond the GA boundary. Future trials in earlier disease stages, i.e., intermediate AMD, aiming to slow photoreceptor degeneration warrant consideration.
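The outcome above is expressed in z-score units, i.e. ONL thickness standardized against normative data at the same retinal location. A minimal sketch of that standardization, with invented normative values and measurements:

```python
# Standardizing layer thickness against normative data (z-scores).
# All numbers below are hypothetical, for illustration only.
import numpy as np

normative_mean, normative_sd = 78.0, 6.5        # µm, hypothetical normative ONL
onl_thickness = np.array([70.2, 75.9, 81.3])    # µm, hypothetical measurements

z = (onl_thickness - normative_mean) / normative_sd
```

A between-arm difference of "+0.29 z-score units" then means the treated eyes sit about 0.29 normative standard deviations thicker than sham at that contour line.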
Affiliation(s)
- Maximilian Pfau
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, Bonn, Germany
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, Bonn, Germany
- Department of Ophthalmology & Visual Sciences, John A. Moran Eye Center, University of Utah, 65 North Mario Capecchi Drive, Salt Lake City, UT, 84312, USA
- Monika Fleckenstein
- GRADE Reading Center, Bonn, Germany
- Department of Ophthalmology & Visual Sciences, John A. Moran Eye Center, University of Utah, 65 North Mario Capecchi Drive, Salt Lake City, UT, 84312, USA
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, Bonn, Germany
|
12
|
Trinh M, Eshow N, Alonso-Caneiro D, Kalloniatis M, Nivison-Smith L. Reticular Pseudodrusen Are Associated With More Advanced Para-Central Photoreceptor Degeneration in Intermediate Age-Related Macular Degeneration. Invest Ophthalmol Vis Sci 2022; 63:12. [PMID: 36251316 PMCID: PMC9586134 DOI: 10.1167/iovs.63.11.12] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this study was to examine retinal topographical differences between intermediate age-related macular degeneration (iAMD) with reticular pseudodrusen (RPD) versus iAMD without RPD, using high-density optical coherence tomography (OCT) cluster analysis. Methods Single eyes from 153 individuals (51 with iAMD+RPD, 51 with iAMD, and 51 healthy) were propensity-score matched by age, sex, and refraction. High-density OCT grid-wise (60 × 60 grids, each approximately 0.01 mm² area) thicknesses were custom-extracted from macular cube scans, then compared between iAMD+RPD and iAMD eyes with correction for confounding factors. These "differences (µm)" were clustered and results de-convoluted to reveal mean difference (95% confidence interval [CI]) and topography of the inner retina (retinal nerve fiber, ganglion cell, inner plexiform, and inner nuclear layers) and outer retina (outer plexiform/Henle's fiber/outer nuclear layers, inner and outer segments, and retinal pigment epithelium-to-Bruch's membrane [RPE-BM]). Differences were also converted to Z-scores using normal data. Results In iAMD+RPD compared to iAMD eyes, the inner retina was thicker (up to +5.89 [95% CI = +2.44 to +9.35] µm, P < 0.0001 to 0.05), the outer para-central retina was thinner (up to -3.21 [95% CI = -5.39 to -1.03] µm, P < 0.01 to 0.001), and the RPE-BM was thicker (+3.38 [95% CI = +1.05 to +5.71] µm, P < 0.05). The majority of effect sizes (Z-scores) were large (-3.13 to +1.91). Conclusions OCT retinal topography differed across all retinal layers between iAMD eyes with versus without RPD. Greater para-central photoreceptor thinning in RPD eyes was suggestive of more advanced degeneration, whereas the significance of inner retinal thickening was unclear. In the future, quantitative evaluation of photoreceptor thicknesses may help clinicians monitor the potential deleterious effects of RPD on retinal integrity.
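The propensity-score matching described above (matching single eyes on age, sex, and refraction) can be sketched as follows. The data, covariate values, and the greedy nearest-neighbour strategy are illustrative assumptions, not the study's exact procedure:

```python
# Sketch of 1:1 propensity-score matching on age, sex, and refraction.
# All data below are synthetic; a real analysis would also check balance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
covariates = np.column_stack([
    rng.normal(70, 8, n),        # age (years)
    rng.integers(0, 2, n),       # sex (0/1)
    rng.normal(-1.0, 2.0, n),    # spherical equivalent refraction (D)
])
group = rng.integers(0, 2, n)    # 1 = iAMD+RPD, 0 = iAMD

# Propensity score: modeled probability of group membership given covariates.
ps = LogisticRegression(max_iter=500).fit(covariates, group)
ps = ps.predict_proba(covariates)[:, 1]

# Greedy nearest-neighbour matching without replacement.
treated = np.flatnonzero(group == 1)
controls = list(np.flatnonzero(group == 0))
pairs = []
for t in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    pairs.append((t, j))
    controls.remove(j)
    if not controls:
        break
```

Each matched pair then contributes one treated and one control eye with similar covariate profiles, approximating the confounder correction the abstract describes.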
Affiliation(s)
- Matt Trinh
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
- Natalie Eshow
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
- David Alonso-Caneiro
- Contact Lens and Visual Optics Laboratory, Queensland University of Technology, Brisbane, Queensland, Australia
- Michael Kalloniatis
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Geelong, Victoria, Australia
- Lisa Nivison-Smith
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
|
13
|
|
14
|
Valmaggia P, Friedli P, Hörmann B, Kaiser P, Scholl HPN, Cattin PC, Sandkühler R, Maloca PM. Feasibility of Automated Segmentation of Pigmented Choroidal Lesions in OCT Data With Deep Learning. Transl Vis Sci Technol 2022; 11:25. [PMID: 36156729 PMCID: PMC9526362 DOI: 10.1167/tvst.11.9.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the feasibility of automated segmentation of pigmented choroidal lesions (PCLs) in optical coherence tomography (OCT) data and compare the performance of different deep neural networks. Methods Swept-source OCT image volumes were annotated pixel-wise for PCLs and background. Three deep neural network architectures were applied to the data: the multi-dimensional gated recurrent units (MD-GRU), the V-Net, and the nnU-Net. The nnU-Net was used to compare the performance of two-dimensional (2D) versus three-dimensional (3D) predictions. Results A total of 121 OCT volumes were analyzed (100 normal and 21 PCLs). Automated PCL segmentations were successful with all neural networks. The 3D nnU-Net predictions showed the highest recall with a mean of 0.77 ± 0.22 (MD-GRU, 0.60 ± 0.31; V-Net, 0.61 ± 0.25). The 3D nnU-Net predicted PCLs with a Dice coefficient of 0.78 ± 0.13, outperforming MD-GRU (0.62 ± 0.23) and V-Net (0.59 ± 0.24). The smallest distance to the manual annotation was found using 3D nnU-Net with a mean maximum Hausdorff distance of 315 ± 172 µm (MD-GRU, 1542 ± 1169 µm; V-Net, 2408 ± 1060 µm). The 3D nnU-Net showed a superior performance compared with stacked 2D predictions. Conclusions The feasibility of automated deep learning segmentation of PCLs was demonstrated in OCT data. The neural network architecture had a relevant impact on PCL predictions. Translational Relevance This work serves as proof of concept for segmentations of choroidal pathologies in volumetric OCT data; improvements are conceivable to meet clinical demands for the diagnosis, monitoring, and treatment evaluation of PCLs.
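The Dice coefficient and recall used above to compare the segmentation networks can be computed directly from binary masks. A toy 2-D sketch (the study used volumetric OCT annotations, but the formulas are identical):

```python
# Dice coefficient and recall for binary segmentation masks.
import numpy as np

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def recall(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    return inter / truth.sum()

# Toy example: a 16-pixel lesion and a prediction shifted down by one row.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 2:6] = True
d, r = dice(pred, truth), recall(pred, truth)
```

Here both metrics come out to 0.75: 12 of 16 lesion pixels overlap, so Dice = 2·12/(16+16) and recall = 12/16. The Hausdorff distance reported in the abstract is a complementary boundary-distance metric not shown here.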
Affiliation(s)
- Philippe Valmaggia
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Robin Sandkühler
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
|
15
|
Evolution and Applications of Artificial Intelligence to Cataract Surgery. OPHTHALMOLOGY SCIENCE 2022; 2:100164. [PMID: 36245750 PMCID: PMC9559105 DOI: 10.1016/j.xops.2022.100164] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 03/27/2022] [Accepted: 04/19/2022] [Indexed: 11/22/2022]
Abstract
Topic Despite significant recent advances in artificial intelligence (AI) technology within several ophthalmic subspecialties, AI seems to be underutilized in the diagnosis and management of cataracts. In this article, we review AI technology that may soon become central to the cataract surgical pathway, from diagnosis to completion of surgery. Clinical Relevance This review describes recent advances in AI in the preoperative, intraoperative, and postoperative phases of cataract surgery, demonstrating its impact on the pathway and the surgical team. Methods A systematic search of PubMed was conducted to identify relevant publications on the topic of AI for cataract surgery. Articles of high quality and relevance to the topic were selected. Results Before surgery, diagnosis and grading of cataracts through AI-based image analysis have been demonstrated in several research settings. Optimal intraocular lens (IOL) power to achieve the desired postoperative refraction can be calculated with a higher degree of accuracy using AI-based modeling compared with traditional IOL formulae. During surgery, innovative AI-based video analysis tools are in development, promoting a paradigm shift for documentation, storage, and cataloging of surgical video libraries, with applications for teaching and training, complication review, and surgical research. Situation-aware computer-assisted devices can be connected to surgical microscopes for automated video capture and cloud storage upload. Artificial intelligence-based software can provide workflow analysis, tool detection, and video segmentation for skill evaluation by the surgeon and the trainee. Mixed reality features, such as real-time intraoperative warnings, may have a role in improving surgical decision-making with the key aim of reducing complications by recognizing surgical risks in advance and alerting the operator to them.
For the management of patient flow through the pathway, AI-based mathematical models generating patient referral patterns are in development, as are simulations to optimize operating room use. In the postoperative phase, AI has been shown to predict the posterior capsule status with reasonable accuracy, and can therefore improve the triage pathway in the treatment of posterior capsular opacification. Discussion Artificial intelligence for cataract surgery will be as relevant as in other subspecialties of ophthalmology and will eventually constitute a future cornerstone for an enhanced cataract surgery pathway.
|
16
|
Trinh M, Kalloniatis M, Alonso-Caneiro D, Nivison-Smith L. High-Density Optical Coherence Tomography Analysis Provides Insights Into Early/Intermediate Age-Related Macular Degeneration Retinal Layer Changes. Invest Ophthalmol Vis Sci 2022; 63:36. [PMID: 35622354 PMCID: PMC9150835 DOI: 10.1167/iovs.63.5.36] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Purpose To topographically map all of the thickness differences in individual retinal layers between early/intermediate age-related macular degeneration (AMDearly/AMDint) and normal eyes and to determine interlayer relationships. Methods Ninety-six AMDtotal (48 AMDearly and 48 AMDint) and 96 normal eyes from 192 participants were propensity-score matched by age, sex, and refraction. Retrospective optical coherence tomography (OCT) macular cube scans were acquired, and high-density (60 × 60, 0.01-mm²) grid thicknesses were custom extracted for comparison between AMDtotal and normal eyes corrected for confounding. Resultant "normal differences" underwent cluster, interlayer correlation, and dose-response analyses for the retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer + Henle's fiber layer (ONL+HFL), inner and outer segment (IS/OS) thickness, and retinal pigment epithelium (RPE) to Bruch's membrane (BM) thickness. Results AMDtotal inner retinal clusters demonstrated extensively thinned RNFL, GCL, IPL, and paracentral INL and thickened INL elsewhere, with normal difference means ranging from -8.13 µm (95% confidence interval [CI], -11.12 to -5.13) to 1.58 µm (95% CI, 1.07-2.09) (P < 0.0001 to P < 0.05). Outer retinal clusters displayed thinned paracentral OPL/ONL+HFL, central IS/OS, and peripheral RPE-BM and thickened central RPE-BM, with means ranging from -1.31 µm (95% CI, -2.06 to -0.55) to 2.99 µm (95% CI, 0.97-5.01) (P < 0.0001 to P < 0.05). Effect sizes (-2.56 to 9.93 SD), cluster sizes, and eccentricity effects varied. All interlayer correlations were negligible to moderate regardless of AMD severity. Only the RPE-BM was partly thicker with greater AMD severity (up to 5.44 µm; 95% CI, 4.88-6.00; P < 0.01).
Conclusions From the early stage, AMD eyes demonstrate thickness differences compared to normal with unique topographies across all retinal layers. Poor interlayer correlations highlight that the outer retina inadequately reflects complete retinal health. The clinical importance of OCT assessment across all individual retinal layers in early/intermediate AMD requires further investigation.
Affiliation(s)
- Matt Trinh
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
- Michael Kalloniatis
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
- David Alonso-Caneiro
- Contact Lens and Visual Optics Laboratory, Queensland University of Technology, Brisbane, Queensland, Australia
- Lisa Nivison-Smith
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, New South Wales, Australia
|
17
|
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023] Open
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD. PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges. METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus. RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization. 
CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
- Malvina B Eydelman
- Office of Health Technology 1, Center of Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
|
18
|
Oral Cancer Screening by Artificial Intelligence-Oriented Interpretation of Optical Coherence Tomography Images. Radiol Res Pract 2022; 2022:1614838. [PMID: 35502299 PMCID: PMC9056242 DOI: 10.1155/2022/1614838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 03/23/2022] [Accepted: 04/11/2022] [Indexed: 11/29/2022] Open
Abstract
Early diagnosis of oral cancer is critical to improve the survival rate of patients. The current strategies for screening patients for oral premalignant and malignant lesions unfortunately miss a significant number of affected patients. Optical coherence tomography (OCT) is an optical imaging modality that has been widely investigated in the field of oncology for identification of cancerous entities. Since the interpretation of OCT images requires professional training and OCT images contain information that cannot be inferred visually, artificial intelligence (AI) with trained algorithms has the ability to quantify visually undetectable variations, thus overcoming the barriers that have postponed the involvement of OCT in the process of screening of oral neoplastic lesions. This literature review aimed to highlight the features of precancerous and cancerous oral lesions on OCT images and specify how AI can assist in screening and diagnosis of such pathologies.
|
19
|
Sekiryu T. Choroidal imaging using optical coherence tomography: techniques and interpretations. Jpn J Ophthalmol 2022; 66:213-226. [PMID: 35171356 DOI: 10.1007/s10384-022-00902-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 12/23/2021] [Indexed: 02/05/2023]
Abstract
The choroid is a vascularized membranous tissue that supplies oxygen and nutrients to the photoreceptors and outer retina. Choroidal vessels underlying the retinal pigment epithelium are difficult to visualize by ophthalmoscopy and slit-lamp examination. Optical coherence tomography (OCT) imaging has made significant advances in the last 2 decades and now allows visualization of the choroid and its vasculature. Enhanced-depth imaging techniques and swept-source OCT provide detailed choroidal images. A recent breakthrough, OCT angiography (OCTA), visualizes blood flow in the choriocapillaris; even with OCTA, however, blood flow in the choroidal vessels themselves remains hard to visualize. In conventional structural OCT, choroidal vessels appear as low-intensity objects, and image-processing techniques help extract structural information about them. Manual or automated segmentation of the choroid and binarization techniques enable evaluation of choroidal vessels. Viewing the three-dimensional choroidal vasculature is also possible using high-scan-speed volumetric OCT. Although choroidal image analysis is possible with images obtained by commercially available OCT devices, the built-in functions for analyzing the choroidal vasculature may be insufficient for quantitative imaging analysis, so physicians must perform that analysis themselves. This review summarizes recent choroidal image-processing techniques and explains the interpretation of the results for the benefit of imaging experts and ophthalmologists alike.
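The binarization step mentioned above (separating low-intensity vessel lumens from the brighter stroma) can be sketched with a global Otsu threshold on a toy image. Real choroidal pipelines often use local methods such as Niblack thresholding, so this is only an illustration of the idea:

```python
# Otsu thresholding sketch for choroidal vessel binarization.
# The toy image mixes dark "lumen" pixels with brighter "stroma" pixels.
import numpy as np

def otsu_threshold(img):
    # Exhaustively maximize between-class variance over 8-bit thresholds.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

rng = np.random.default_rng(1)
img = np.where(rng.random((64, 64)) < 0.3,
               rng.normal(60, 10, (64, 64)),     # dark vessel lumens (~30%)
               rng.normal(170, 10, (64, 64))).clip(0, 255)
t = otsu_threshold(img)
vessel_fraction = (img < t).mean()  # luminal area fraction below threshold
```

The luminal-to-total area ratio computed this way is one common quantitative output of choroidal binarization analyses.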
Affiliation(s)
- Tetsuju Sekiryu
- Department of Ophthalmology, Fukushima Medical University, 1 Hikarigaoka, Fukushima, Fukushima, 960-1295, Japan.
|
20
|
Hsiao CH, Huang YL, Tse SL, Hsia WP, Chen HJ, Cheng YS, Chang CJ. Automatic Segment and Quantify Choroid Layer in Myopic eyes: Deep Learning based Model. Semin Ophthalmol 2022; 37:611-618. [PMID: 35138208 DOI: 10.1080/08820538.2022.2036350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
PURPOSE To report a rapid and accurate method based upon deep learning for automatic segmentation and measurement of the choroidal thickness (CT) in myopic eyes, and to determine the relationship between refractive error (RE) and CT. METHODS Fifty-four healthy subjects aged 20-39 years were retrospectively reviewed. Data reviewed included age, gender, laterality, visual acuity, RE, and Enhanced Depth Imaging Optical Coherence Tomography (EDI-OCT) images. The choroid layer was labeled by manual and automatic methods using EDI-OCT. A Mask Region-based Convolutional Neural Network (Mask R-CNN) model, using a deep Residual Network (ResNet) and Feature Pyramid Network (FPN) as a backbone, was trained to automatically outline and quantify the choroid layer. RESULTS The ResNet-50 model was adopted for its 90% accuracy rate and 6.97 s average execution time. CT determined by the manual method had a mean thickness of 258.75 ± 66.11 µm, a positive correlation with RE (r = 0.596, p < .01), and significant associations with gender (p = .011) and RE (p < .001) in multivariable linear regression analysis. Meanwhile, CT determined by deep learning presented a mean thickness of 226.39 ± 54.65 µm, a positive correlation with RE (r = 0.546, p < .01), and significant associations with gender (p = .043) and RE (p < .001) in multivariable linear regression analysis. Both methods revealed that CT decreased with increasing myopic RE. CONCLUSIONS This deep learning method using Mask R-CNN was able to determine the relationship between RE and CT in an accurate and rapid way. It could eliminate the need for a manual process, while demonstrating a feasible clinical application.
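The reported relationship between refractive error and choroidal thickness (r = 0.596 for the manual method, r = 0.546 for the automated one) is a Pearson correlation; a sketch of that computation on invented paired measurements:

```python
# Pearson correlation between refractive error and choroidal thickness.
# The paired values below are invented for illustration; more negative RE
# (more myopic) pairs with a thinner choroid, giving a positive r.
import numpy as np

refractive_error = np.array([-1.5, -3.0, -4.25, -6.0, -8.5])           # D
choroidal_thickness = np.array([310.0, 280.0, 255.0, 230.0, 190.0])    # µm

r = np.corrcoef(refractive_error, choroidal_thickness)[0, 1]
```

Note the sign convention: because myopic RE is negative, thinning with increasing myopia shows up as a *positive* correlation between signed RE and CT, matching the abstract.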
Affiliation(s)
- Chung-Hao Hsiao
- Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yu-Len Huang
- Department of Computer Science, Tunghai University, Taichung, Taiwan
- Siu-Lun Tse
- Department of Computer Science, Tunghai University, Taichung, Taiwan
- Wei-Ping Hsia
- Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
- Hung-Ju Chen
- Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yuan-Shao Cheng
- Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
- Chia-Jen Chang
- Department of Ophthalmology, Taichung Veterans General Hospital, Taichung, Taiwan
- Department of Optometry, Central Taiwan University of Science and Technology, Taichung, Taiwan
|
21
|
Gutfleisch M, Ester O, Aydin S, Quassowski M, Spital G, Lommatzsch A, Rothaus K, Dubis AM, Pauleikhoff D. Clinically applicable deep learning-based decision aids for treatment of neovascular AMD. Graefes Arch Clin Exp Ophthalmol 2022; 260:2217-2230. [DOI: 10.1007/s00417-022-05565-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 01/06/2022] [Accepted: 01/11/2022] [Indexed: 01/22/2023] Open
|
22
|
Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. Expert Rev Precis Med Drug Dev 2022. [DOI: 10.1080/23808993.2022.2154146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
- Loay Rahman
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Ammaarah Hafejee
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Rajeevan Anantharanjit
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Wei Wei
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
|
23
|
Maloca PM, Seeger C, Booler H, Valmaggia P, Kawamoto K, Kaba Q, Inglin N, Balaskas K, Egan C, Tufail A, Scholl HPN, Hasler PW, Denk N. Uncovering of intraspecies macular heterogeneity in cynomolgus monkeys using hybrid machine learning optical coherence tomography image segmentation. Sci Rep 2021; 11:20647. [PMID: 34667265 PMCID: PMC8526684 DOI: 10.1038/s41598-021-99704-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 09/27/2021] [Indexed: 12/13/2022] Open
Abstract
The fovea is a depression in the center of the macula and the site of highest visual acuity. Optical coherence tomography (OCT) has contributed considerably to elucidating pathologic changes in the fovea and is now being considered as an accompanying imaging method in drug development, such as for anti-vascular endothelial growth factor therapy and its safety profiling. Because animal numbers are limited in preclinical studies and automated image evaluation tools have not yet been routinely employed, essential reference data describing the morphologic variations in macular thickness in laboratory cynomolgus monkeys are sparse to nonexistent. A hybrid machine learning algorithm was applied for automated OCT image processing and measurement of central retinal thickness and surface area values. Morphological variations and the effects of sex and geographical origin were determined. Based on our findings, fovea parameters are specific to geographic origin. Despite morphological similarities among cynomolgus monkeys, considerable variations in the foveolar contour were found, even within the same species, between animals of different geographic origins. The results of the reference database show that not only the entire retinal thickness but also the macular subfields should be considered when designing preclinical studies and interpreting foveal data.
Affiliation(s)
- Peter M Maloca
- Department of Ophthalmology, University of Basel, 4031, Basel, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
- Christine Seeger
- Preclinical Research and Early Development, Pharmaceutical Sciences, Hoffmann-La Roche, 4070, Basel, Switzerland
- Helen Booler
- Preclinical Research and Early Development, Pharmaceutical Sciences, Hoffmann-La Roche, 4070, Basel, Switzerland
- Philippe Valmaggia
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031, Basel, Switzerland
- Ken Kawamoto
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
- Qayim Kaba
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
- Nadja Inglin
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031, Basel, Switzerland
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
- Hendrik P N Scholl
- Department of Ophthalmology, University of Basel, 4031, Basel, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031, Basel, Switzerland
- Pascal W Hasler
- Department of Ophthalmology, University of Basel, 4031, Basel, Switzerland
- Nora Denk
- Department of Ophthalmology, University of Basel, 4031, Basel, Switzerland
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), 4031, Basel, Switzerland
- Preclinical Research and Early Development, Pharmaceutical Sciences, Hoffmann-La Roche, 4070, Basel, Switzerland
|
24
|
Trinh M, Khou V, Kalloniatis M, Nivison-Smith L. Location-Specific Thickness Patterns in Intermediate Age-Related Macular Degeneration Reveals Anatomical Differences in Multiple Retinal Layers. Invest Ophthalmol Vis Sci 2021; 62:13. [PMID: 34661608 PMCID: PMC8525852 DOI: 10.1167/iovs.62.13.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To examine individual retinal layers’ location-specific patterns of thicknesses in intermediate age-related macular degeneration (iAMD) using optical coherence tomography (OCT). Methods OCT macular cube scans were retrospectively acquired from 84 iAMD eyes of 84 participants and 84 normal eyes of 84 participants propensity-score matched on age, sex, and spherical equivalent refraction. Thicknesses of the retinal nerve fiber layer (RNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer + Henle's fiber layer (ONL+HFL), inner- and outer-segment layers (IS/OS), and retinal pigment epithelium to Bruch's membrane (RPE-BM) were calculated across an 8 × 8 grid (total 24° × 24° area). Location-specific analysis was performed using cluster(normal) and grid(iAMD)-to-cluster(normal) comparisons. Results In iAMD versus normal eyes, the central RPE-BM was thickened (mean difference ± SEM up to 27.45% ± 7.48%, P < 0.001; up to 7.6 SD-from-normal), whereas there was thinned outer (OPL, ONL+HFL, and non-central RPE-BM, up to −6.76% ± 2.47%, P < 0.001; up to −1.6 SD-from-normal) and inner retina (GCL and IPL, up to −4.83% ± 1.56%, P < 0.01; up to −1.7 SD-from-normal) with eccentricity-based effects. Interlayer correlations were greater against the ONL+HFL (mean |r| ± SEM 0.19 ± 0.03, P = 0.14 to < 0.0001) than the RPE-BM (0.09 ± 0, P = 0.72 to < 0.0001). Conclusions Location-specific analysis suggests altered retinal anatomy between iAMD and normal eyes. These data could direct clinical diagnosis and monitoring of AMD toward targeted locations.
Affiliation(s)
- Matt Trinh
- Centre for Eye Health, University of New South Wales, Sydney, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, Australia
- Vincent Khou
- Centre for Eye Health, University of New South Wales, Sydney, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, Australia
- Michael Kalloniatis
- Centre for Eye Health, University of New South Wales, Sydney, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, Australia
- Lisa Nivison-Smith
- Centre for Eye Health, University of New South Wales, Sydney, Australia
- School of Optometry and Vision Science, University of New South Wales, Sydney, Australia
|
25
|
Zarranz-Ventura J, Bernal-Morales C, Saenz de Viteri M, Castro Alonso FJ, Urcola JA. Artificial intelligence and ophthalmology: Current status. Arch Soc Esp Oftalmol 2021; 96:399-400. [PMID: 34340776 DOI: 10.1016/j.oftale.2021.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 06/23/2021] [Indexed: 06/13/2023]
Affiliation(s)
- J Zarranz-Ventura
- Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Spain
- Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- C Bernal-Morales
- Institut Clínic de Oftalmologia (ICOF), Hospital Clínic de Barcelona, Spain
- Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- J A Urcola
- Oftalmología, Hospital Universitario de Araba, Vitoria, Spain
|
26
|
von der Emde L, Pfau M, Holz FG, Fleckenstein M, Kortuem K, Keane PA, Rubin DL, Schmitz-Valckenberg S. AI-based structure-function correlation in age-related macular degeneration. Eye (Lond) 2021; 35:2110-2118. [PMID: 33767409 PMCID: PMC8302753 DOI: 10.1038/s41433-021-01503-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 02/24/2021] [Accepted: 03/09/2021] [Indexed: 11/22/2022] Open
Abstract
Sensitive and robust outcome measures of retinal function are pivotal for clinical trials in age-related macular degeneration (AMD). A recent development is the implementation of artificial intelligence (AI) to infer results of psychophysical examinations based on findings derived from multimodal imaging. We conducted a review of the current literature referenced in PubMed and Web of Science, among others, with the keywords ‘artificial intelligence’ and ‘machine learning’ in combination with ‘perimetry’, ‘best-corrected visual acuity (BCVA)’, ‘retinal function’ and ‘age-related macular degeneration’. So far, AI-based structure-function correlation has been applied to infer conventional visual field, fundus-controlled perimetry, and electroretinography data, as well as BCVA and patient-reported outcome measures (PROM). In neovascular AMD, inference of BCVA (hereafter termed inferred BCVA) can estimate BCVA results with a root mean squared error of ~7–11 letters, which is comparable to the accuracy of actual visual acuity assessment. Further, AI-based structure-function correlation can successfully infer fundus-controlled perimetry (FCP) results for both mesopic and dark-adapted (DA) cyan and red testing (hereafter termed inferred sensitivity). Accuracy of inferred sensitivity can be augmented by adding short FCP examinations, reaching mean absolute errors (MAE) of ~3–5 dB for mesopic, DA cyan, and DA red testing. Inferred BCVA and inferred retinal sensitivity, based on multimodal imaging, may be considered quasi-functional surrogate endpoints for future interventional clinical trials.
Affiliation(s)
- Maximilian Pfau
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Department of Biomedical Data Science, Radiology, and Medicine, Stanford University, Stanford, CA, USA
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Karsten Kortuem
- Augenklinik, Universität Ulm, Ulm, Germany
- Augenarztpraxis Dres. Kortüm, Ludwigsburg, Germany
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Daniel L Rubin
- Department of Biomedical Data Science, Radiology, and Medicine, Stanford University, Stanford, CA, USA
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- John A. Moran Eye Center, University of Utah, Salt Lake City, UT, USA
|
27
|
Andersen NK, Trøjgaard P, Herschend NO, Størling ZM. Automated Assessment of Peristomal Skin Discoloration and Leakage Area Using Artificial Intelligence. Front Artif Intell 2021; 3:72. [PMID: 33733189 PMCID: PMC7861335 DOI: 10.3389/frai.2020.00072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 08/05/2020] [Indexed: 01/22/2023] Open
Abstract
For people living with an ostomy, development of peristomal skin complications (PSCs) is the most common post-operative challenge. A visual sign of PSCs is discoloration (redness) of the peristomal skin often resulting from leakage of ostomy output under the baseplate. If left unattended, a mild skin condition may progress into a severe disorder; consequently, it is important to monitor discoloration and leakage patterns closely. The Ostomy Skin Tool is current state-of-the-art for evaluation of peristomal skin, but it relies on patients visiting their healthcare professional regularly. To enable close monitoring of peristomal skin over time, an automated strategy not relying on scheduled consultations is required. Several medical fields have implemented automated image analysis based on artificial intelligence, and these deep learning algorithms have become increasingly recognized as a valuable tool in healthcare. Therefore, the main objective of this study was to develop deep learning algorithms which could provide automated, consistent, and objective assessments of changes in peristomal skin discoloration and leakage patterns. A total of 614 peristomal skin images were used for development of the discoloration model, which predicted the area of the discolored peristomal skin with an accuracy of 95% alongside precision and recall scores of 79.6 and 75.0%, respectively. The algorithm predicting leakage patterns was developed based on 954 product images, and leakage area was determined with 98.8% accuracy, 75.0% precision, and 71.5% recall. Combined, these data for the first time demonstrate implementation of artificial intelligence for automated assessment of changes in peristomal skin discoloration and leakage patterns.
|
28
|
Fukutsu K, Saito M, Noda K, Murata M, Kase S, Shiba R, Isogai N, Asano Y, Hanawa N, Dohke M, Kase M, Ishida S. A Deep Learning Architecture for Vascular Area Measurement in Fundus Images. Ophthalmol Sci 2021; 1:100004. [PMID: 36246007 PMCID: PMC9560649 DOI: 10.1016/j.xops.2021.100004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 02/06/2021] [Accepted: 02/16/2021] [Indexed: 12/27/2022]
Abstract
Purpose To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design Retrospective study. Participants Fundus photographs (n = 10 571) of health-check participants (n = 5598). Methods The participants were analyzed using a fully automatic architecture assisted by a deep learning system, and the total area of retinal arterioles and venules was assessed separately. The retinal vessels were extracted automatically from each photograph and categorized as arterioles or venules. Subsequently, the total arteriolar area (AA) and total venular area (VA) were measured. The correlations among AA, VA, age, systolic blood pressure (SBP), and diastolic blood pressure were analyzed. Six ophthalmologists manually evaluated the arteriovenous ratio (AVR) in fundus images (n = 102), and the correlation between the SBP and AVR was evaluated manually. Main Outcome Measures Total arteriolar area and VA. Results The deep learning algorithm demonstrated favorable properties of vessel segmentation and arteriovenous classification, comparable with pre-existing techniques. Using the algorithm, a significant positive correlation was found between AA and VA. Both AA and VA demonstrated negative correlations with age and blood pressure. Furthermore, the SBP showed a higher negative correlation with AA measured by the algorithm than with AVR. Conclusions The current data demonstrated that the retinal vascular area measured with the deep learning system could be a novel index of hypertension-related vascular changes.
Affiliation(s)
- Kanae Fukutsu
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Michiyuki Saito
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Kousuke Noda
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Correspondence: Kousuke Noda, MD, PhD, Department of Ophthalmology, Hokkaido University Graduate School of Medicine, N-15, W-7, Kita-ku, Sapporo 060-8638, Japan
- Miyuki Murata
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Satoru Kase
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Manabu Kase
- Department of Ophthalmology, Teine Keijinkai Hospital, Sapporo, Japan
- Susumu Ishida
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
|
29
|
Müller PL, Liefers B, Treis T, Rodrigues FG, Olvera-Barrios A, Paul B, Dhingra N, Lotery A, Bailey C, Taylor P, Sánchez CI, Tufail A. Reliability of Retinal Pathology Quantification in Age-Related Macular Degeneration: Implications for Clinical Trials and Machine Learning Applications. Transl Vis Sci Technol 2021; 10:4. [PMID: 34003938 PMCID: PMC7938003 DOI: 10.1167/tvst.10.3.4] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 12/22/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose To investigate the interreader agreement for grading of retinal alterations in age-related macular degeneration (AMD) using a reading center setting. Methods In this cross-sectional case series, spectral-domain optical coherence tomography (OCT; Topcon 3D OCT, Tokyo, Japan) scans of 112 eyes of 112 patients with neovascular AMD (56 treatment naive, 56 after three anti-vascular endothelial growth factor injections) were analyzed by four independent readers. Imaging features specific for AMD were annotated using a novel custom-built annotation platform. Dice score, Bland-Altman plots, coefficients of repeatability, coefficients of variation, and intraclass correlation coefficients were assessed. Results Loss of ellipsoid zone, pigment epithelium detachment, subretinal fluid, and drusen were the most abundant features in our cohort. Subretinal fluid, intraretinal fluid, hypertransmission, descent of the outer plexiform layer, and pigment epithelium detachment showed highest interreader agreement, while detection and measures of loss of ellipsoid zone and retinal pigment epithelium were more variable. The agreement on the size and location of the respective annotation was more consistent throughout all features. Conclusions The interreader agreement depended on the respective OCT-based feature. A selection of reliable features might provide suitable surrogate markers for disease progression and possible treatment effects focusing on different disease stages. Translational Relevance This might give opportunities for a more time- and cost-effective patient assessment and improved decision making as well as have implications for clinical trials and training machine learning algorithms.
Affiliation(s)
- Philipp L. Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Bart Liefers
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Tim Treis
- BioQuant, University of Heidelberg, Heidelberg, Germany
- Filipa Gomes Rodrigues
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Abraham Olvera-Barrios
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Bobby Paul
- Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK
- Andrew Lotery
- University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Clare Bailey
- University Hospitals Bristol NHS Foundation Trust, Bristol, UK
- Paul Taylor
- Institute of Health Informatics, University College London, London, UK
- Clarisa I. Sánchez
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
|
30
|
Pfau M, von der Emde L, de Sisternes L, Hallak JA, Leng T, Schmitz-Valckenberg S, Holz FG, Fleckenstein M, Rubin DL. Progression of Photoreceptor Degeneration in Geographic Atrophy Secondary to Age-related Macular Degeneration. JAMA Ophthalmol 2021; 138:1026-1034. [PMID: 32789526 DOI: 10.1001/jamaophthalmol.2020.2914] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Importance Sensitive outcome measures for disease progression are needed for treatment trials in geographic atrophy (GA) secondary to age-related macular degeneration (AMD). Objective To quantify photoreceptor degeneration outside regions of GA in eyes with nonexudative AMD, to evaluate its association with future GA progression, and to characterize its spatio-temporal progression. Design, Setting, and Participants Monocenter cohort study (Directional Spread in Geographic Atrophy [NCT02051998]) and analysis of data from a normative data study at a tertiary referral center. One hundred fifty-eight eyes of 89 patients with a mean (SD) age of 77.7 (7.1) years, median area of GA of 8.87 mm2 (IQR, 4.09-15.60), and median follow-up of 1.1 years (IQR, 0.52-1.7 years), as well as 93 normal eyes from 93 participants. Exposures Longitudinal spectral-domain optical coherence tomography (SD-OCT) volume scans (121 B-scans across 30° × 25°) were segmented with a deep-learning pipeline and standardized in a pointwise manner with age-adjusted normal data (z scores). Outer nuclear layer (ONL), photoreceptor inner segment (IS), and outer segment (OS) thickness were quantified along evenly spaced contour lines surrounding GA lesions. Linear mixed models were applied to assess the association between photoreceptor-related imaging features and GA progression rates and characterize the pattern of photoreceptor degeneration over time. Main Outcomes and Measures Association of ONL thinning with follow-up time (after adjusting for age, retinal topography [z score], and distance to the GA boundary). Results The study included 158 eyes of 89 patients (51 women and 38 men) with a mean (SD) age of 77.7 (7.1) years. The fully automated B-scan segmentation was accurate (Dice coefficient, 0.82; 95% CI, 0.80-0.85; compared with manual markings) and revealed a marked interpatient variability in photoreceptor degeneration. The ellipsoid zone (EZ) loss-to-GA boundary distance and OS thickness were prognostic for future progression rates. Outer nuclear layer and IS thinning over time was significant even when adjusting for age and proximity to the GA boundary (estimates of -0.16 μm/y; 95% CI, -0.30 to -0.02; and -0.17 μm/y; 95% CI, -0.26 to -0.09). Conclusions and Relevance Distinct and progressive alterations of photoreceptor laminae (exceeding GA spatially) were detectable and quantifiable. The degree of photoreceptor degeneration outside of regions of retinal pigment epithelium atrophy varied markedly between eyes and was associated with future GA progression. Macula-wide photoreceptor laminae thinning represents a potential candidate end point to monitor treatment effects beyond mere GA lesion size progression.
Affiliation(s)
- Maximilian Pfau
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Luis de Sisternes
- Research and Development, Carl Zeiss Meditec Inc, Dublin, California
- Joelle A Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago
- Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- John A. Moran Eye Center, University of Utah, Salt Lake City
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Monika Fleckenstein
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- John A. Moran Eye Center, University of Utah, Salt Lake City
- Daniel L Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, California
|
31
|
Maloca PM, Müller PL, Lee AY, Tufail A, Balaskas K, Niklaus S, Kaiser P, Suter S, Zarranz-Ventura J, Egan C, Scholl HPN, Schnitzer TK, Singer T, Hasler PW, Denk N. Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence. Commun Biol 2021; 4:170. [PMID: 33547415 PMCID: PMC7864998 DOI: 10.1038/s42003-021-01697-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Accepted: 01/13/2021] [Indexed: 01/30/2023] Open
Abstract
Machine learning has greatly facilitated the analysis of medical data, although its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
Collapse
Affiliation(s)
- Peter M. Maloca
- grid.508836.0Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland ,grid.410567.1OCTlab, Department of Ophthalmology, University Hospital Basel, Basel, Switzerland ,grid.6612.30000 0004 1937 0642Department of Ophthalmology, University of Basel, Basel, Switzerland ,grid.436474.60000 0000 9168 0080Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Philipp L. Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Aaron Y. Lee
- Department of Ophthalmology, Puget Sound Veteran Affairs, Seattle, WA, USA
- eScience Institute, University of Washington, Seattle, WA, USA
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Konstantinos Balaskas
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Moorfields Ophthalmic Reading Centre, London, UK
- Stephanie Niklaus
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche Innovation Center Basel, Basel, Switzerland
- Pascal Kaiser
- Supercomputing Systems, Zurich, Switzerland
- Susanne Suter
- Supercomputing Systems, Zurich, Switzerland
- Zurich University of Applied Sciences, Waedenswil, Switzerland
- Javier Zarranz-Ventura
- Institut Clínic d’Oftalmologia, Hospital Clínic de Barcelona, Barcelona, Spain
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Hendrik P. N. Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Tobias K. Schnitzer
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche Innovation Center Basel, Basel, Switzerland
- Thomas Singer
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche Innovation Center Basel, Basel, Switzerland
- Pascal W. Hasler
- OCTlab, Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Nora Denk
- Department of Ophthalmology, University of Basel, Basel, Switzerland
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche Innovation Center Basel, Basel, Switzerland
|
32
|
Pfau M, Walther G, von der Emde L, Berens P, Faes L, Fleckenstein M, Heeren TFC, Kortüm K, Künzel SH, Müller PL, Maloca PM, Waldstein SM, Wintergerst MWM, Schmitz-Valckenberg S, Finger RP, Holz FG. [Artificial intelligence in ophthalmology : Guidelines for physicians for the critical evaluation of studies]. Ophthalmologe 2020; 117:973-988. [PMID: 32857270 DOI: 10.1007/s00347-020-01209-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
BACKGROUND Empirical models have been an integral part of everyday clinical practice in ophthalmology since the introduction of the Sanders-Retzlaff-Kraff (SRK) formula. Recent developments in the field of statistical learning (artificial intelligence, AI) now enable an empirical approach to a wide range of ophthalmological questions with unprecedented precision. OBJECTIVE Which criteria must be considered when evaluating AI-related studies in ophthalmology? MATERIAL AND METHODS Exemplary prediction of visual acuity (continuous outcome) and classification of healthy and diseased eyes (discrete outcome) using retrospectively compiled optical coherence tomography data (50 eyes of 50 patients, 50 healthy eyes of 50 subjects). The data were analyzed with nested cross-validation (for learning-algorithm selection and hyperparameter optimization). RESULTS Based on nested cross-validation for training, visual acuity could be predicted in the separate test dataset with a mean absolute error (MAE; 95% confidence interval, CI) of 0.142 LogMAR [0.077; 0.207]. Healthy versus diseased eyes could be classified in the test dataset with an agreement of 0.92 (Cohen's kappa). A deliberately incorrect learning-algorithm and variable selection resulted in an MAE for visual acuity prediction of 0.229 LogMAR [0.150; 0.309] on the test dataset. The drastic overfitting became obvious on comparison of this MAE with the null-model MAE (0.235 LogMAR [0.148; 0.322]). CONCLUSION Selection of an unsuitable measure of goodness-of-fit, inadequate validation, or withholding of a null or reference model can obscure the actual goodness-of-fit of AI models. The illustrated pitfalls can help clinicians identify such shortcomings.
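The null-model comparison this abstract recommends can be sketched in a few lines. The logMAR values and the `mae` helper below are hypothetical illustrations, not data or code from the study:

```python
# Hypothetical sketch of the null-model check discussed above: a model's
# mean absolute error (MAE) is only meaningful when compared against the
# MAE of a "null model" that always predicts the training-set mean.

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Made-up visual-acuity values (logMAR); not data from the study.
y_train = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
y_test = [0.1, 0.3, 0.5, 0.2]
model_pred = [0.15, 0.25, 0.45, 0.25]  # predictions from some fitted model

# The null model ignores the inputs and always predicts the training mean.
null_pred = [sum(y_train) / len(y_train)] * len(y_test)

model_mae = mae(y_test, model_pred)
null_mae = mae(y_test, null_pred)
print(f"model MAE = {model_mae:.3f}, null MAE = {null_mae:.3f}")
```

A model whose MAE barely beats the null MAE has learned little, however small the raw number looks in isolation; that is the pitfall the authors illustrate.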
Affiliation(s)
- Maximilian Pfau
- Department of Biomedical Data Science, Stanford University, Medical School Office Building (MSOB), 1265 Welch Road, 94305-5479, Stanford, CA, USA
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Philipp Berens
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Interfaculty Institute for Bioinformatics and Medical Informatics, University of Tübingen, Tübingen, Germany
- Livia Faes
- Department of Ophthalmology, Lucerne Cantonal Hospital, Lucerne, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Tjebo F C Heeren
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Karsten Kortüm
- Department of Ophthalmology, Ludwig Maximilian University of Munich, Munich, Germany
- Ophthalmology Practice Dres. Kortüm, Ludwigsburg, Germany
- Philipp L Müller
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Peter M Maloca
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- OCTlab, University Hospital Basel, Basel, Switzerland
- Sebastian M Waldstein
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Department of Ophthalmology, Westmead Hospital, University of Sydney, Sydney, Australia
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- John A. Moran Eye Center, University of Utah, Salt Lake City, USA
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
|
33
|
Zéboulon P, Debellemanière G, Bouvet M, Gatinel D. Corneal Topography Raw Data Classification Using a Convolutional Neural Network. Am J Ophthalmol 2020; 219:33-39. [PMID: 32533948 DOI: 10.1016/j.ajo.2020.06.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 05/18/2020] [Accepted: 06/03/2020] [Indexed: 02/07/2023]
Abstract
PURPOSE We investigated the efficiency of a convolutional neural network applied to corneal topography raw data to classify examinations into 3 categories: normal, keratoconus (KC), and history of refractive surgery (RS). DESIGN Retrospective machine-learning experimental study. METHODS A total of 3,000 Orbscan examinations (1,000 of each class) of different patients of our institution were selected for model training and validation. One hundred examinations of each class were randomly assigned to the test set. For each examination, the raw numerical data from the "elevation against the anterior best fit sphere (BFS)," "elevation against the posterior BFS," "axial anterior curvature," and "pachymetry" maps were used. Each map was a square matrix of 2,500 values. The 4 maps were stacked and used as if they were 4 channels of a single image. A convolutional neural network was built and trained on the training set. Classification accuracy and class-wise sensitivity and specificity were calculated for the validation set. RESULTS Overall classification accuracy of the validation set (n = 300) was 99.3% (98.3%-100%). Sensitivity and specificity were, respectively, 100% and 100% for KC, 100% and 99% (94.9%-100%) for normal examinations, and 98% (97.4%-100%) and 100% for RS examinations. CONCLUSION Using combined corneal topography raw data with a convolutional neural network is an effective way to classify examinations and probably the most thorough way to automatically analyze corneal topography. It should be considered for other routine tasks performed on corneal topography, such as refractive surgery screening.
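The four-maps-as-channels encoding the authors describe can be sketched as follows. The 50 × 50 arrays of random values stand in for the paper's 2,500-value square matrices, and the map names are merely descriptive labels, not the study's actual data format:

```python
import numpy as np

# Stack four corneal topography maps into one 4-channel "image" for a CNN.
# All values are random placeholders, not real Orbscan data.
rng = np.random.default_rng(0)
maps = {
    "elev_anterior_bfs": rng.normal(size=(50, 50)),   # elevation vs. anterior BFS
    "elev_posterior_bfs": rng.normal(size=(50, 50)),  # elevation vs. posterior BFS
    "axial_curvature": rng.normal(size=(50, 50)),     # axial anterior curvature
    "pachymetry": rng.normal(size=(50, 50)),          # corneal thickness map
}

# Channels-last layout (height, width, channels), as a Conv2D layer expects.
x = np.stack(list(maps.values()), axis=-1)
print(x.shape)  # (50, 50, 4)
```

The point of the stacking is that the convolution kernels then see all four modalities at every corneal location simultaneously, exactly as RGB channels are seen in an ordinary image.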
|
34
|
Unsupervised learning for large-scale corneal topography clustering. Sci Rep 2020; 10:16973. [PMID: 33046810 PMCID: PMC7550569 DOI: 10.1038/s41598-020-73902-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2020] [Accepted: 09/15/2020] [Indexed: 01/31/2023] Open
Abstract
Machine learning algorithms have recently shown their precision and potential in many different use cases and fields of medicine. Most of the algorithms used are supervised and need a large quantity of labeled data to achieve high accuracy. Moreover, most applications of machine learning in medicine attempt to mimic or exceed human diagnostic capabilities, but little work has been done to show the power of these algorithms to help collect and pre-process large amounts of data. In this study we show how unsupervised learning can extract and sort usable data from large unlabeled datasets with minimal human intervention. The digital examination tools used in clinical practice store such databases, which remain largely under-exploited. We applied unsupervised algorithms to corneal topography examinations, which remain the gold-standard test for the diagnosis and follow-up of many corneal diseases and for refractive surgery screening. We extracted 7,019 usable examinations, which were automatically sorted into 3 common diagnoses (normal, keratoconus, and history of refractive surgery) from an unlabeled database with an overall accuracy of 96.5%. Similar methods could be used on any form of digital examination database, greatly speeding up data collection and leading to the development of stronger supervised models.
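The sorting idea can be illustrated with a toy k-means run: clustering lets a human name each group once rather than labeling every examination. The 2-D feature vectors, cluster count, and bare-bones implementation below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

# Toy unsupervised sorting: k-means groups unlabeled "examinations"
# (synthetic 2-D feature vectors) into k clusters.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.3, size=(30, 2))
                  for loc in ([0, 0], [5, 0], [0, 5])])  # 3 synthetic groups

def kmeans(x, k, iters=20):
    """Bare-bones Lloyd's algorithm: assign to nearest center, recompute means."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(x[:, None] - centers, axis=2), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(data, k=3)
print(labels.shape, centers.shape)
```

Real pipelines would extract feature vectors from the raw topography maps first (e.g. via dimensionality reduction) and use a more robust initialization than the random one sketched here.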
|
35
|
Abstract
PURPOSE OF REVIEW As artificial intelligence continues to develop new applications in ophthalmic image recognition, we provide here an introduction for ophthalmologists and a primer on the mechanisms of deep learning systems. RECENT FINDINGS Deep learning has lent itself to the automated interpretation of various retinal imaging modalities, including fundus photography and optical coherence tomography. Convolutional neural networks (CNN) represent the primary class of deep neural networks applied to these image analyses. They have been configured to aid in the detection of diabetic retinopathy, AMD, retinal detachment, glaucoma, and ROP, among other ocular disorders. Predictive models for retinal disease prognosis and treatment are also being validated. SUMMARY Deep learning systems have begun to demonstrate a level of diagnostic accuracy equal or superior to that of human graders for narrow image recognition tasks. However, challenges regarding the use of deep learning systems in ophthalmology remain, including trust in unsupervised learning systems and the limited ability to recognize broad ranges of disorders.
|
36
|
Lo J, Heisler M, Vanzan V, Karst S, Matovinović IZ, Lončarić S, Navajas EV, Beg MF, Šarunić MV. Microvasculature Segmentation and Intercapillary Area Quantification of the Deep Vascular Complex Using Transfer Learning. Transl Vis Sci Technol 2020; 9:38. [PMID: 32855842 PMCID: PMC7424950 DOI: 10.1167/tvst.9.2.38] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Accepted: 05/08/2020] [Indexed: 12/28/2022] Open
Abstract
Purpose Optical coherence tomography angiography (OCT-A) permits visualization of the changes to the retinal circulation due to diabetic retinopathy (DR), a microvascular complication of diabetes. We demonstrate accurate segmentation of the vascular morphology for the superficial capillary plexus (SCP) and deep vascular complex (DVC) using a convolutional neural network (CNN) for quantitative analysis. Methods The main CNN training dataset consisted of retinal OCT-A with a 6 × 6-mm field of view (FOV), acquired using a Zeiss PlexElite. Multiple-volume acquisition and averaging enhanced the vasculature contrast used for constructing the ground truth for neural network training. We used transfer learning from a CNN trained on smaller FOVs of the SCP acquired using different OCT instruments. Quantitative analysis of perfusion was performed on the resulting automated vasculature segmentations in representative patients with DR. Results The automated segmentations of the OCT-A images maintained the distinct morphologies of the SCP and DVC. The network segmented the SCP with an accuracy and Dice index of 0.8599 and 0.8618, respectively, and 0.7986 and 0.8139, respectively, for the DVC. The inter-rater comparisons for the SCP had an accuracy and Dice index of 0.8300 and 0.6700, respectively, and 0.6874 and 0.7416, respectively, for the DVC. Conclusions Transfer learning reduces the amount of manually annotated images required while producing high-quality automatic segmentations of the SCP and DVC that exceed inter-rater comparisons. The resulting intercapillary area quantification provides a tool for in-depth clinical analysis of retinal perfusion. Translational Relevance Accurate retinal microvasculature segmentation with the CNN results in improved perfusion analysis in diabetic retinopathy.
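The Dice index the abstract reports is simple to compute from two binary masks: twice the overlap divided by the sum of the mask sizes. The tiny hand-made masks below are illustrative, not OCT-A segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice index 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])   # hypothetical predicted vessel mask
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])  # hypothetical ground-truth mask

print(round(float(dice(pred, truth)), 3))  # overlap 2, sizes 3 + 3 -> 0.667
```

Unlike plain pixel accuracy, the Dice index ignores the (usually dominant) true-negative background, which is why segmentation studies report both.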
Affiliation(s)
- Julian Lo
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Morgan Heisler
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Vinicius Vanzan
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Sonja Karst
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sven Lončarić
- Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
- Eduardo V Navajas
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Mirza Faisal Beg
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Marinko V Šarunić
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
|
37
|
Fourier-Domain OCT Imaging of the Ocular Surface and Tear Film Dynamics: A Review of the State of the Art and an Integrative Model of the Tear Behavior During the Inter-Blink Period and Visual Fixation. J Clin Med 2020; 9:jcm9030668. [PMID: 32131486 PMCID: PMC7141198 DOI: 10.3390/jcm9030668] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 02/26/2020] [Accepted: 02/27/2020] [Indexed: 12/26/2022] Open
Abstract
In the last few decades, the ocular surface and the tear film have been noninvasively investigated in vivo, in a three-dimensional, high-resolution, and real-time mode, by optical coherence tomography (OCT). Recently, OCT technology has made great strides in improving acquisition speed and image resolution, thus increasing its impact in daily clinical practice and in the research setting. These results have been achieved thanks to the transition from traditional time-domain (TD) to Fourier-domain (FD) technology. FD-OCT devices include a spectrometer in the receiver that analyzes the spectrum of light reflected from the retina or ocular surface and transforms it into information about the depth of the structures according to the Fourier principle. In this review, we summarize the state of the art in FD-OCT imaging of the ocular surface system, addressing specific aspects such as tear film dynamics and epithelial changes under physiologic and pathologic conditions. A theory on the dynamic nature of the tear film has been developed to explain the variations within the individual compartments. Moreover, an integrative model of tear film behavior during the inter-blink period and visual fixation is proposed.
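The Fourier principle the review refers to can be illustrated with a one-reflector toy simulation; all numbers below are arbitrary simulation values, not parameters of any real FD-OCT device:

```python
import numpy as np

# In FD-OCT, a reflector at depth z modulates the detected spectrum with a
# fringe whose frequency is proportional to z, so a Fourier transform of the
# spectrum recovers the depth profile (A-scan). One synthetic reflector here.
n = 1024
k = np.linspace(0, 1, n, endpoint=False)  # normalized wavenumber axis
true_bin = 100                            # fringe frequency ~ reflector depth
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * true_bin * k)

a_scan = np.abs(np.fft.rfft(spectrum))    # magnitude of the depth profile
a_scan[0] = 0.0                           # suppress the DC (mean-intensity) term
print(int(np.argmax(a_scan)))             # peak lands at the reflector's bin
```

This spectrum-to-depth transform is what distinguishes FD-OCT from TD-OCT, where depth is instead scanned mechanically with a moving reference mirror.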
|