1. Chen N, Lv X. Research on segmentation model of optic disc and optic cup in fundus. BMC Ophthalmol 2024; 24:273. [PMID: 38943095] [PMCID: PMC11214242] [DOI: 10.1186/s12886-024-03532-4]
Abstract
BACKGROUND Glaucoma is a worldwide eye disease that can cause irreversible vision loss. Early detection of glaucoma is important for reducing vision loss, and retinal fundus image examination is one of the most commonly used solutions for glaucoma diagnosis because of its low cost. Clinically, the cup-disc ratio of fundus images is an important indicator for glaucoma diagnosis. In recent years, an increasing number of algorithms have been proposed for segmentation and recognition of the optic disc (OD) and optic cup (OC), but these algorithms generally have poor universality, segmentation performance, and segmentation accuracy. METHODS The YOLOv8 algorithm was improved for segmentation of the OD and OC. First, a set of algorithms was designed to adapt the REFUGE dataset's result images to the input format of the YOLOv8 algorithm. Second, to improve segmentation performance, the network structure of YOLOv8 was modified by adding a ROI (Region of Interest) module and changing the bounding box regression loss function from CIoU to Focal-EIoU. Finally, the improved YOLOv8 algorithm was evaluated by training and testing on the REFUGE dataset. RESULTS The experimental results show that the improved YOLOv8 algorithm achieves good segmentation performance on the REFUGE dataset. In the OD and OC segmentation tests, the F1 score is 0.999. CONCLUSIONS We improved the YOLOv8 algorithm and applied the improved model to the task of segmenting the OD and OC in fundus images. The results show that our improved model is far superior to the mainstream U-Net model in training speed, segmentation performance, and segmentation accuracy.
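The cup-disc ratio that this entry uses as its clinical indicator can be derived from OD/OC segmentation masks. A minimal sketch in Python, assuming binary masks with image rows as the vertical axis (the `vertical_cdr` helper is hypothetical, not the authors' implementation):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary masks (rows = vertical axis)."""
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    if disc_rows.size == 0:
        raise ValueError("empty disc mask")
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = 0 if cup_rows.size == 0 else cup_rows[-1] - cup_rows[0] + 1
    return cup_height / disc_height

# Toy example: disc spans 8 rows, cup spans 4 rows -> CDR 0.5
disc = np.zeros((10, 10)); disc[1:9, 2:8] = 1
cup = np.zeros((10, 10)); cup[3:7, 4:6] = 1
print(vertical_cdr(disc, cup))  # 0.5
```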
Affiliation(s)
- Naigong Chen
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China.
- State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
- Xiujuan Lv
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325000, China
- State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
2. Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. [PMID: 37674264] [DOI: 10.1080/08164622.2023.2235346]
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning - two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence - have created considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature covering the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities and their combinations for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can potentially facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, which outperforms typical human performance of 71.5% accuracy and 0.86 area under the curve. This indicates that artificial intelligence-based tools can provide clinicians with useful information that would assist in improved diagnosis. The review suggests that there is room for improvement of existing artificial intelligence-based models using retinal imaging modalities before they are incorporated into clinical practice.
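The AUC figures quoted in this review can be computed directly from classifier scores via the Mann-Whitney rank formulation (AUC as the probability that a randomly chosen positive case outranks a negative one). A minimal illustrative sketch, not tied to any specific study in this list:

```python
import numpy as np

def auc_mann_whitney(scores, labels) -> float:
    """AUC as the fraction of (positive, negative) pairs where the
    positive case scores higher; ties count as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)

# Perfectly separated scores give AUC 1.0
print(auc_mann_whitney([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```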
Affiliation(s)
- Md Mahmudul Hasan
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
3. Velpula VK, Sharma LD. Multi-stage glaucoma classification using pre-trained convolutional neural networks and voting-based classifier fusion. Front Physiol 2023; 14:1175881. [PMID: 37383146] [PMCID: PMC10293617] [DOI: 10.3389/fphys.2023.1175881]
Abstract
Aim: To design an automated glaucoma detection system for early detection of glaucoma using fundus images. Background: Glaucoma is a serious eye problem that can cause vision loss and even permanent blindness. Early detection and prevention are crucial for effective treatment. Traditional diagnostic approaches are time-consuming, manual, and often inaccurate, thus making automated glaucoma diagnosis necessary. Objective: To propose an automated glaucoma stage classification model using pre-trained deep convolutional neural network (CNN) models and classifier fusion. Methods: The proposed model utilized five pre-trained CNN models: ResNet50, AlexNet, VGG19, DenseNet-201, and Inception-ResNet-v2. The model was tested using four public datasets: ACRIMA, RIM-ONE, Harvard Dataverse (HVD), and Drishti. Classifier fusion was created to merge the decisions of all CNN models using the maximum voting-based approach. Results: The proposed model achieved an area under the curve of 1 and an accuracy of 99.57% for the ACRIMA dataset. The HVD dataset had an area under the curve of 0.97 and an accuracy of 85.43%. The accuracy rates for Drishti and RIM-ONE were 90.55% and 94.95%, respectively. The experimental results showed that the proposed model performed better than the state-of-the-art methods in classifying glaucoma in its early stages. Model output is interpreted using both attribution-based methods, such as activations and gradient class activation maps, and perturbation-based methods, such as locally interpretable model-agnostic explanations and occlusion sensitivity, which generate heatmaps of various sections of an image for model prediction. Conclusion: The proposed automated glaucoma stage classification model using pre-trained CNN models and classifier fusion is an effective method for the early detection of glaucoma. The results indicate high accuracy rates and superior performance compared to the existing methods.
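The maximum-voting fusion step described above can be sketched as follows. This is a generic majority-vote helper (hypothetical `majority_vote` name, not the authors' code); ties resolve to whichever label first reaches the top count:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model class predictions by majority (max) voting.

    predictions: list of per-model prediction lists, one label per sample.
    """
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three models vote on two samples; "g" wins both.
print(majority_vote([["g", "n"], ["g", "g"], ["n", "g"]]))  # ['g', 'g']
```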
4. Li F, Xiang W, Zhang L, Pan W, Zhang X, Jiang M, Zou H. Joint optic disk and cup segmentation for glaucoma screening using a region-based deep learning network. Eye (Lond) 2023; 37:1080-1087. [PMID: 35437003] [PMCID: PMC10102238] [DOI: 10.1038/s41433-022-02055-w]
Abstract
OBJECTIVES To develop and validate an end-to-end region-based deep convolutional neural network (R-DCNN) to jointly segment the optic disc (OD) and optic cup (OC) in retinal fundus images for precise cup-to-disc ratio (CDR) measurement and glaucoma screening. METHODS In total, 2440 retinal fundus images were retrospectively obtained from 2033 participants. An R-DCNN was presented for joint OD and OC segmentation, where the OD and OC segmentation problems were formulated as object detection problems. We compared the R-DCNN's segmentation performance on our in-house dataset with that of four ophthalmologists while performing quantitative, qualitative and generalization analyses on the publicly available DRISHTI-GS and RIM-ONE v3 datasets. The Dice similarity coefficient (DC), Jaccard coefficient (JC), overlapping error (E), sensitivity (SE), specificity (SP) and area under the curve (AUC) were measured. RESULTS On our in-house dataset, the proposed model achieved a 98.51% DC and a 97.07% JC for OD segmentation, and a 97.63% DC and a 95.39% JC for OC segmentation, achieving a performance level comparable to that of the ophthalmologists. On the DRISHTI-GS dataset, our approach achieved DC and JC values of 97.23% and 94.17% for OD segmentation, respectively, and a 94.56% DC and an 89.92% JC for OC segmentation. Additionally, on the RIM-ONE v3 dataset, our model generated DC and JC values of 96.89% and 91.32% on the OD segmentation task, respectively, whereas the DC and JC values acquired for OC segmentation were 88.94% and 78.21%, respectively. CONCLUSION The proposed approach achieved very encouraging performance on the OD and OC segmentation tasks, as well as in glaucoma screening. It has the potential to serve as a useful tool for computer-assisted glaucoma screening.
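The Dice similarity coefficient and Jaccard coefficient reported above are standard overlap metrics for binary masks; a minimal sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard

# Two 6-pixel strips overlapping in 2 pixels: Dice 0.5, Jaccard 1/3
pred = np.array([[1, 1, 1, 1, 0, 0]])
truth = np.array([[0, 0, 1, 1, 1, 1]])
print(dice_jaccard(pred, truth))
```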
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Wenjie Xiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Lijuan Zhang
- School of Electrical and Electronic Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Wenzhe Pan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuedian Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- School of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, 201318, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Haidong Zou
- Department of Ophthalmology, Shanghai First People's Hospital, Shanghai, 200080, China
5. Du J, Huang M, Liu L. AI-Aided Disease Prediction in Visualized Medicine. Adv Exp Med Biol 2023; 1199:107-126. [PMID: 37460729] [DOI: 10.1007/978-981-32-9902-3_6]
Abstract
Artificial intelligence (AI) is playing a vitally important role in promoting the revolution of future technology. Healthcare is one of the most promising applications of AI, covering medical imaging, diagnosis, robotics, disease prediction, pharmacy, health management, and hospital management. Numerous achievements made in these fields are overturning every aspect of the traditional healthcare system. Therefore, to convey the state of the art of AI in healthcare, as well as the opportunities and obstacles in its development, this chapter discusses the applications of AI in disease detection and outlook and the future trends of AI-aided disease prediction.
Affiliation(s)
- Juan Du
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.
- Mengen Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Lin Liu
- Tianjin Key Laboratory of Retinal Functions and Diseases, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
6. Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202] [PMCID: PMC10020779] [DOI: 10.18502/jovr.v18i1.12730]
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
7. Superpixel-Based Optic Nerve Head Segmentation Method of Fundus Images for Glaucoma Assessment. Diagnostics (Basel) 2022; 12:3210. [PMID: 36553217] [PMCID: PMC9777478] [DOI: 10.3390/diagnostics12123210]
Abstract
Glaucoma is the second leading cause of blindness in the world. This progressive ocular neuropathy is mainly caused by uncontrolled high intraocular pressure. Although there is still no cure, early detection and appropriate treatment can stop disease progression to low vision and blindness. In clinical practice, the gold standard used by ophthalmologists for glaucoma diagnosis is fundus retinal imaging, in particular subjective/manual examination of the optic nerve head (ONH). In this work, we propose an unsupervised superpixel-based method for ONH segmentation. An automatic algorithm based on linear iterative clustering is used to compute an ellipse fit for automatic detection of the ONH contour. The tool has been tested using a public retinal fundus image dataset with medical-expert ground truths of the ONH contour and validated with a classified (control vs. glaucoma eyes) database. Results showed that the automatic segmentation method provides ellipse fits of the ONH similar to those obtained from the expert ground truths, within the statistical range of inter-observer variability. Our method is a user-friendly, publicly available program that provides fast and reliable results for clinicians working on glaucoma screening using retinal fundus images.
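The ellipse-fitting step described above can be approximated by an algebraic least-squares conic fit to boundary points. This is a generic stand-in, since the paper's exact fitting procedure is not given in this abstract:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of the conic A*x^2 + B*x*y + C*y^2 + D*x + E*y = 1
    to boundary points; for an ellipse, B^2 - 4*A*C < 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(design, np.ones_like(x), rcond=None)
    return coeffs  # (A, B, C, D, E)

# Points on a circle of radius 2 centred at (1, 1) recover
# 0.5*x^2 + 0.5*y^2 - x - y = 1 exactly.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
print(fit_conic(1 + 2 * np.cos(t), 1 + 2 * np.sin(t)))
```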
8. Haider A, Arsalan M, Park C, Sultan H, Park KR. Exploring deep feature-blending capabilities to assist glaucoma screening. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109918]
9. Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. J Optom 2022; 15 Suppl 1:S50-S57. [PMID: 36216736] [PMCID: PMC9732476] [DOI: 10.1016/j.optom.2022.08.001]
Abstract
In recent years, the role of artificial intelligence (AI) and deep learning (DL) models has been attracting increasing global interest in the field of ophthalmology. DL models are considered the current state of the art among AI technologies. In fact, DL systems have the capability to recognize, quantify and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP) and glaucoma. In this review we analyze the current evidence on AI applied to these ocular diseases and discuss possible future developments and potential clinical implications, without neglecting the present limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy.
- Paolo Corazza
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Matilde Roda
- Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S.Orsola-Malpighi Teaching Hospital, Bologna, Italy
- Massimo Nicolo
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Carlo Enrico Traverso
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Aldo Vagge
- University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
10. Wang Y, Yu X, Wu C. An Efficient Hierarchical Optic Disc and Cup Segmentation Network Combined with Multi-task Learning and Adversarial Learning. J Digit Imaging 2022; 35:638-653. [PMID: 35212860] [PMCID: PMC9156633] [DOI: 10.1007/s10278-021-00579-3]
Abstract
Automatic and accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided diagnosis of ocular pathologies. Complex structures, such as blood vessels and the macular region, and the presence of lesions in fundus images bring great challenges to the segmentation task. Recently, convolutional neural network-based methods have exhibited their potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, a U-Net-like framework with an improved attention mechanism and focal loss is proposed to detect an accurate and reliable OD location from the full-resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage that integrates a multi-task framework and adversarial learning is further designed for OD and OC segmentation separately. The multi-task framework predicts the OD and OC masks while simultaneously estimating contours and distance maps as auxiliary tasks, which guarantees the smoothness and shape of objects in the segmentation predictions. The adversarial learning technique is introduced to encourage the segmentation network to produce an output that is consistent with the true labels in spatial and shape distribution. We evaluate the performance of our method using two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments with existing methods demonstrate that our approach produces competitive performance compared with state-of-the-art methods.
Affiliation(s)
- Ying Wang
- College of Information Science and Engineering, Northeastern University, Liaoning, 110819, China
- Xiaosheng Yu
- Faculty of Robot Science and Engineering, Northeastern University, Liaoning, 110819, China
- Chengdong Wu
- Faculty of Robot Science and Engineering, Northeastern University, Liaoning, 110819, China
| |
11. Jain S, Indora S, Atal DK. Rider Manta Ray Foraging Optimization-based Generative Adversarial Network and CNN feature for detecting glaucoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103425]
12. Sharifi M, Khatibi T, Emamian MH, Sadat S, Hashemi H, Fotouhi A. Development of glaucoma predictive model and risk factors assessment based on supervised models. BioData Min 2021; 14:48. [PMID: 34819128] [PMCID: PMC8611977] [DOI: 10.1186/s13040-021-00281-8]
Abstract
Objectives To develop and propose a machine learning model for predicting glaucoma and identifying its risk factors. Methods A data analysis pipeline was designed for this study based on the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The main steps of the pipeline include data sampling, preprocessing, classification, and evaluation and validation. Data sampling for the training dataset was performed with balanced sampling based on over-sampling and under-sampling methods. Data preprocessing steps were missing-value imputation and normalization. In the classification step, several machine learning models were designed for predicting glaucoma, including Decision Trees (DTs), K-Nearest Neighbors (K-NN), Support Vector Machines (SVM), Random Forests (RFs), Extra Trees (ETs), and Bagging ensemble methods. Moreover, a novel stacking ensemble model is designed and proposed using the superior classifiers. Results The data were from the Shahroud Eye Cohort Study, including demographic and ophthalmologic data for 5190 participants aged 40-64 living in Shahroud, northeast Iran. The main variables in this dataset were 67 demographic, ophthalmologic, optometric, perimetry, and biometry features for 4561 people, including 4474 non-glaucoma participants and 87 glaucoma patients. Experimental results show that DTs and RFs trained on under-sampled training data outperform the compared single classifiers and bagging ensemble methods, with average accuracies of 87.61 and 88.87, sensitivities of 73.80 and 72.35, specificities of 87.88 and 89.10, and areas under the curve (AUC) of 91.04 and 94.53, respectively. The proposed stacking ensemble has an average accuracy of 83.56, a sensitivity of 82.21, a specificity of 81.32, and an AUC of 88.54.
Conclusions In this study, a machine learning model is proposed and developed to predict glaucoma among persons aged 40-64. The top features for discriminating non-glaucoma persons from glaucoma patients include the number of visual field defects on perimetry, vertical cup-to-disc ratio, white-to-white diameter, systolic blood pressure, pupil barycenter on the Y coordinate, age, and axial length.
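The under-sampling used above to balance the heavily skewed training data (87 glaucoma vs. 4474 non-glaucoma participants) can be sketched as follows; the `undersample` helper is hypothetical, not the study's pipeline:

```python
import random

def undersample(samples, labels, seed=0):
    """Balance a binary dataset by randomly under-sampling the majority class."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = minor + rng.sample(major, len(minor))  # keep all of the minority
    rng.shuffle(kept)
    return [samples[i] for i in kept], [labels[i] for i in kept]

# Toy imbalance: 3 positives vs. 10 negatives -> 3 vs. 3 after balancing
xs, ys = undersample(list(range(13)), [1] * 3 + [0] * 10)
print(len(xs), sum(ys))  # 6 3
```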
Affiliation(s)
- Mahyar Sharifi
- School of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Toktam Khatibi
- School of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran.
- Mohammad Hassan Emamian
- Ophthalmic Epidemiology Research Center, Shahroud University of Medical Sciences, Shahroud, Iran
- Somayeh Sadat
- Centre for Analytics and Artificial Intelligence Engineering, University of Toronto, Toronto, Canada
- Hassan Hashemi
- Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran, Iran
- Akbar Fotouhi
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
| |
Collapse
|
13. Zhao R, Chen X, Liu X, Chen Z, Guo F, Li S. Direct Cup-to-Disc Ratio Estimation for Glaucoma Screening via Semi-Supervised Learning. IEEE J Biomed Health Inform 2020; 24:1104-1113. [DOI: 10.1109/jbhi.2019.2934477]
14. Optic Disc and Cup Image Segmentation Utilizing Contour-Based Transformation and Sequence Labeling Networks. J Med Syst 2020; 44:96. [DOI: 10.1007/s10916-020-01561-2]
15. Armstrong GW, Lorch AC. A(eye): A Review of Current Applications of Artificial Intelligence and Machine Learning in Ophthalmology. Int Ophthalmol Clin 2020; 60:57-71. [PMID: 31855896] [DOI: 10.1097/iio.0000000000000298]
16. Pang T, Guo S, Zhang X, Zhao L. Automatic Lung Segmentation Based on Texture and Deep Features of HRCT Images with Interstitial Lung Disease. Biomed Res Int 2019; 2019:2045432. [PMID: 31871932] [PMCID: PMC6907046] [DOI: 10.1155/2019/2045432]
Abstract
Lung segmentation in high-resolution computed tomography (HRCT) images is necessary before computer-aided diagnosis (CAD) of interstitial lung disease (ILD). Traditional methods are less intelligent and have lower segmentation accuracy. This paper develops a novel automatic segmentation model using radiomics with a combination of hand-crafted features and deep features. The study uses the ILD Database-MedGIFT from 128 patients with 108 annotated image series and selects 1946 regions of interest (ROIs) of lung tissue patterns for training and testing. First, images are denoised with a Wiener filter. Then, segmentation is performed by fusing features extracted from the gray-level co-occurrence matrix (GLCM), a classic texture analysis method, and U-Net, a standard convolutional neural network (CNN). The final segmentation result in terms of the Dice similarity coefficient (DSC) is 89.42%, which is comparable to state-of-the-art methods. The training performance shows the effectiveness of combining texture and deep radiomics features for lung segmentation.
Affiliation(s)
- Ting Pang
- Center of Network and Information, Xinxiang Medical University, Xinxiang 453000, China
- Shaoyong Guo
- Center of Network and Information, Xinxiang Medical University, Xinxiang 453000, China
- Xinwang Zhang
- Center of Network and Information, Xinxiang Medical University, Xinxiang 453000, China
- Lijie Zhao
- Center of Network and Information, Xinxiang Medical University, Xinxiang 453000, China
| |
Collapse
|
17. Optic Disc and Cup Segmentation in Retinal Images for Glaucoma Diagnosis by Locally Statistical Active Contour Model with Structure Prior. Comput Math Methods Med 2019; 2019:8973287. [PMID: 31827591] [PMCID: PMC6886352] [DOI: 10.1155/2019/8973287]
Abstract
Accurate optic disc and optic cup segmentation plays an important role in diagnosing glaucoma. However, most existing segmentation approaches suffer from the following limitations. On the one hand, imaging devices or illumination variations always lead to intensity inhomogeneity in the fundus image. On the other hand, spatial prior knowledge of the optic disc and optic cup, e.g., that the optic cup is always contained inside the optic disc region, is ignored. Therefore, the effectiveness of segmentation approaches is greatly reduced. Different from most previous approaches, we present a novel locally statistical active contour model with structure prior (LSACM-SP) to jointly and robustly segment the optic disc and optic cup structures. First, some preprocessing techniques are used to automatically extract the initial contour of the object. Then, we introduce the locally statistical active contour model (LSACM) for optic disc and optic cup segmentation in the presence of intensity inhomogeneity. Finally, taking the specific morphology of the optic disc and optic cup into consideration, a novel structure prior is proposed to guide the model to generate accurate segmentation results. Experimental results demonstrate the advantage and superiority of our approach on two publicly available databases, i.e., DRISHTI-GS and RIM-ONE r2, in comparison with some well-known algorithms.
18
Snyder BM, Nam SM, Khunsongkiet P, Ausayakhun S, Leeungurasatien T, Leiter MR, Sevastopolsky A, Joye AS, Berlinberg EJ, Liu Y, Ramirez DA, Moe CA, Ausayakhun S, Stamper RL, Keenan JD. Accuracy of computer-assisted vertical cup-to-disk ratio grading for glaucoma screening. PLoS One 2019; 14:e0220362. [PMID: 31393904 PMCID: PMC6687168 DOI: 10.1371/journal.pone.0220362]
Abstract
Purpose Glaucoma screening can be performed by assessing the vertical cup-to-disk ratio (VCDR) of the optic nerve head from fundus photography, but VCDR grading is inherently subjective. This study investigated whether computer software could improve the accuracy and repeatability of VCDR assessment. Methods In this cross-sectional diagnostic accuracy study, 5 ophthalmologists independently assessed the VCDR from a set of 200 optic disk images, with the median grade used as the reference standard for subsequent analyses. Eight non-ophthalmologists graded each image by two different methods: by visual inspection and with assistance from a custom-made, publicly available software program. Agreement with the reference standard grade was assessed for each method by calculating the intraclass correlation coefficient (ICC), and sensitivity and specificity were determined relative to a median ophthalmologist grade of ≥0.7. Results VCDR grades ranged from 0.1 to 0.9 for visual assessment and from 0.1 to 1.0 for software-assisted grading, with a median grade of 0.4 for each. Agreement between each of the 8 graders and the reference standard was higher for visual inspection (median ICC 0.65, interquartile range 0.57 to 0.82) than for software-assisted grading (median ICC 0.59, IQR 0.44 to 0.71; P = 0.02, Wilcoxon signed-rank test). Visual inspection and software assistance had similar sensitivity and specificity for detecting glaucomatous cupping. Conclusion The computer software used in this study did not improve the reproducibility or validity of VCDR grading from fundus photographs compared with simple visual inspection. More clinical experience correlated with higher agreement with the ophthalmologist VCDR reference standard.
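The screening analysis described above, classifying an eye as glaucomatous cupping at the VCDR ≥ 0.7 reference cutoff and scoring a grader against the median ophthalmologist grade, can be sketched as follows (the grades below are hypothetical, not study data):

```python
def sens_spec(grader_vcdr, reference_vcdr, cutoff=0.7):
    """Sensitivity/specificity of a grader's VCDR calls against a
    reference standard, both dichotomized at the same cutoff."""
    tp = fp = fn = tn = 0
    for g, r in zip(grader_vcdr, reference_vcdr):
        pos_ref, pos_grd = r >= cutoff, g >= cutoff
        if pos_ref and pos_grd:
            tp += 1
        elif pos_ref:
            fn += 1
        elif pos_grd:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

reference = [0.3, 0.8, 0.7, 0.4, 0.9]  # hypothetical median ophthalmologist grades
grader = [0.4, 0.7, 0.6, 0.4, 0.9]     # hypothetical non-ophthalmologist grades
print(sens_spec(grader, reference))
```

With these toy grades the grader misses one borderline 0.7 disc (graded 0.6), giving sensitivity 2/3 and specificity 1.0.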
Affiliation(s)
- Blake M. Snyder
- School of Medicine, University of Colorado Denver, Aurora, Colorado, United States of America
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Sang Min Nam
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
- Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, Republic of Korea
- Preeyanuch Khunsongkiet
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Sakarin Ausayakhun
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Maxwell R. Leiter
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Artem Sevastopolsky
- Youth Laboratories LLC, Moscow, Russia
- Skolkovo Institute of Science and Technology, Moscow, Russia
- Ashlin S. Joye
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Elyse J. Berlinberg
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Yingna Liu
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- David A. Ramirez
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Caitlin A. Moe
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Somsanguan Ausayakhun
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Robert L. Stamper
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
- Jeremy D. Keenan
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
19
Singh D, Gunasekaran S, Hada M, Gogia V. Clinical validation of RIA-G, an automated optic nerve head analysis software. Indian J Ophthalmol 2019; 67:1089-1094. [PMID: 31238418 PMCID: PMC6611301 DOI: 10.4103/ijo.ijo_1509_18]
Abstract
Purpose To clinically validate RIA-G, a new automated glaucoma diagnosis software. Methods A double-blinded study was conducted in which 229 valid random fundus images were evaluated independently by RIA-G and three expert ophthalmologists. Optic nerve head parameters [vertical and horizontal cup-disc ratio (CDR) and neuroretinal rim (NRR) changes] were quantified. Disc damage likelihood scale (DDLS) staging and the presence of glaucoma were noted. The software output was compared with the consensus values of the ophthalmologists. Results The mean difference between the vertical CDR output by RIA-G and the ophthalmologists was -0.004 ± 0.1. Good agreement and a strong correlation existed between the two [intraclass correlation coefficient (ICC) 0.79; r = 0.77, P < 0.005]. The mean difference for horizontal CDR was -0.07 ± 0.13, with moderate-to-strong agreement and correlation (ICC 0.48; r = 0.61, P < 0.05). Experts and RIA-G found a violation of the inferior-superior NRR in 47 and 54 images, respectively (Cohen's kappa = 0.56 ± 0.07). RIA-G detected the correct DDLS stage in 66.2% of cases, and in 93.8% of cases the output was within ±1 stage (ICC 0.51). The sensitivity and specificity of RIA-G for diagnosing glaucomatous neuropathy were 82.3% and 91.8%, respectively. Overall agreement between RIA-G and the experts for glaucoma diagnosis was good (Cohen's kappa = 0.62 ± 0.07). The overall accuracy of RIA-G in detecting glaucomatous neuropathy was 90.3%, with a detection error rate of 5%. Conclusion RIA-G showed good agreement with the experts and proved to be reliable software for detecting glaucomatous optic neuropathy. The ability to quantify optic nerve head parameters from simple fundus photographs will prove particularly useful in glaucoma screening, where no direct patient-doctor contact is established.
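The chance-corrected agreement statistic reported above, Cohen's kappa, can be sketched for the binary present/absent case; the per-image calls below are hypothetical, not the study's data:

```python
def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two binary raters (1 = finding present)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    p_a1 = sum(a) / n                            # rater A's positive rate
    p_b1 = sum(b) / n                            # rater B's positive rate
    pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical per-image NRR-violation calls by software vs. expert consensus.
software = [1, 1, 0, 0, 1, 0, 1, 0]
experts = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(software, experts), 3))  # -> 0.5
```

Here the raters agree on 6 of 8 images (75%), but half that agreement is expected by chance, so kappa is 0.5, which is the kind of correction behind the 0.56 and 0.62 values in the abstract.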
Affiliation(s)
- Digvijay Singh
- Noble Eye Care; Narayana Superspecialty Hospital, Gurugram, Haryana, India
- Maya Hada
- SMS Medical College, Jaipur, Rajasthan, India
- Varun Gogia
- Noble Eye Care, Gurugram, Haryana; IClinix-Advanced Eye Centre, New Delhi, India
20
Smits DJ, Elze T, Wang H, Pasquale LR. Machine Learning in the Detection of the Glaucomatous Disc and Visual Field. Semin Ophthalmol 2019; 34:232-242. [PMID: 31132292 DOI: 10.1080/08820538.2019.1620801]
Abstract
Glaucoma is the leading cause of irreversible blindness worldwide. Early detection is of utmost importance as there is abundant evidence that early treatment prevents disease progression, preserves vision, and improves patients' long-term quality of life. The structure and function thresholds that alert to the diagnosis of glaucoma can be obtained entirely via digital means, and as such, screening is well suited to benefit from artificial intelligence and specifically machine learning. This paper reviews the concepts and current literature on the use of machine learning for detection of the glaucomatous disc and visual field.
Affiliation(s)
- David J Smits
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Tobias Elze
- Schepens Eye Research Institute, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Haobing Wang
- Harvard Medical School, Massachusetts Eye and Ear Infirmary, Boston, USA
- Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
21
The Role of Artificial Intelligence in the Diagnosis and Management of Glaucoma. Curr Ophthalmol Rep 2019. [DOI: 10.1007/s40135-019-00209-w]
22
Combination of Enhanced Depth Imaging Optical Coherence Tomography and Fundus Images for Glaucoma Screening. J Med Syst 2019; 43:163. [PMID: 31044289 DOI: 10.1007/s10916-019-1303-8]
Abstract
Glaucoma is an eye disease that damages the optic nerve and, without treatment, can gradually lead to irreversible loss of peripheral vision and even blindness. Thus, diagnosing glaucoma at an early stage is essential for treatment. In this paper, an automatic method for early glaucoma screening is proposed. The method combines structural parameters and textural features extracted from enhanced depth imaging optical coherence tomography (EDI-OCT) images and fundus images. It first segments the anterior lamina cribrosa surface (ALCS) using a region-aware strategy and a residual U-Net, and then extracts structural features of the lamina cribrosa, such as lamina cribrosa depth and deformation. In fundus images, scanning lines based on the disc center and brightness reduction are used for optic disc segmentation, and brightness compensation is used to segment the optic cup. Afterward, the cup-to-disc ratio (CDR) and textural features are extracted from the fundus images. The hybrid features are used to train a gcForest classifier for early glaucoma screening. The proposed method achieves 96.88% accuracy and 91.67% sensitivity.
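The cup-to-disc ratio feature extracted from the fundus segmentations can be illustrated with a minimal sketch (toy binary masks, not the paper's segmentation pipeline):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: ratio of the vertical extents of the
    cup and disc binary masks (rows containing any foreground pixel)."""
    def v_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows[-1] - rows[0] + 1 if rows.size else 0
    return v_extent(cup_mask) / v_extent(disc_mask)

# Toy masks: disc spans 8 rows, cup spans 4 rows -> vertical CDR = 0.5.
disc = np.zeros((10, 10), dtype=np.uint8); disc[1:9, 2:8] = 1
cup = np.zeros((10, 10), dtype=np.uint8); cup[3:7, 3:7] = 1
print(vertical_cdr(disc, cup))  # -> 0.5
```

A high vertical CDR (clinically, roughly ≥ 0.6-0.7) is the glaucoma-suspect signal this feature contributes to the classifier.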
23
Thomas PBM, Chan T, Nixon T, Muthusamy B, White A. Feasibility of simple machine learning approaches to support detection of non-glaucomatous visual fields in future automated glaucoma clinics. Eye (Lond) 2019; 33:1133-1139. [PMID: 30833668 DOI: 10.1038/s41433-019-0386-2]
Abstract
OBJECTIVES To assess the performance of feed-forward back-propagation artificial neural networks (ANNs) in detecting field defects caused by pituitary disease from among a glaucomatous population. METHODS 24-2 Humphrey Visual Field reports were gathered from 121 pituitary patients and 907 glaucomatous patients. Optical character recognition was used to extract the threshold values from PDF reports. Left and right eye visual fields were coupled for each patient in an array to create bilateral field representations. ANNs were created to detect chiasmal field defects. We also assessed the ability of ANNs to identify a single pituitary field among 907 glaucomatous distractors. RESULTS Mean field thresholds across all locations were lower for pituitary patients (20.3 dB, SD = 5.2 dB) than for glaucoma patients (24.4 dB, SD = 5.0 dB) indicating a greater degree of field loss (p < 0.0001) in the pituitary group. However, substantial overlap between the groups meant that mean bilateral field loss was not a reliable indicator of aetiology. Representative ANNs showed good performance in the discrimination task with sensitivity and specificity routinely above 95%. Where a single pituitary field was hidden among 907 glaucomatous fields, it had one of the five highest indexes of suspicion on 91% of 2420 ANNs. CONCLUSIONS Traditional artificial neural networks perform well at detecting chiasmal field defects among a glaucoma cohort by inspecting bilateral field representations. Increasing automation of care means we will need robust methods of automatically diagnosing and managing disease. This work shows that machine learning can perform a useful role in diagnostic oversight in highly automated glaucoma clinics, enhancing patient safety.
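The bilateral field representation described above, coupling left- and right-eye threshold grids into one input array for the ANN, can be sketched as follows; the 8×9 grid shape and random thresholds are assumptions for illustration only (a real 24-2 field has 54 test points, often stored in a grid with unused corner cells):

```python
import numpy as np

# Hypothetical per-eye 24-2 threshold grids in dB (random stand-ins).
rng = np.random.default_rng(0)
left = rng.uniform(0.0, 35.0, size=(8, 9))    # dB sensitivities, left eye
right = rng.uniform(0.0, 35.0, size=(8, 9))   # dB sensitivities, right eye

# Couple the two eyes into one bilateral feature vector for the ANN input.
bilateral = np.concatenate([left.ravel(), right.ravel()])
print(bilateral.shape)  # -> (144,)

# Mean bilateral field threshold: the summary measure the abstract compares
# between pituitary (20.3 dB) and glaucoma (24.4 dB) groups.
mean_threshold = float(bilateral.mean())
```

The abstract's point is that this scalar mean overlaps too much between groups to separate them, whereas an ANN fed the full bilateral vector can exploit the spatial pattern (e.g., respect for the vertical midline in chiasmal defects).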
Affiliation(s)
- Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, EC1V 9EL, UK
- Thomas Chan
- Discipline of Ophthalmology, University of Sydney, Sydney, Australia
- Thomas Nixon
- Department of Ophthalmology, Faculty of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Brinda Muthusamy
- Department of Ophthalmology, Addenbrooke's Hospital, Cambridge, UK
- Andrew White
- Discipline of Ophthalmology, University of Sydney, Sydney, Australia; PersonalEYES, Sydney, NSW, Australia
24
Yang C, Lu M, Duan Y, Liu B. An efficient optic cup segmentation method decreasing the influences of blood vessels. Biomed Eng Online 2018; 17:130. [PMID: 30257677 PMCID: PMC6158914 DOI: 10.1186/s12938-018-0560-y]
Abstract
BACKGROUND The optic cup is an important structure in ophthalmologic diagnoses such as glaucoma. Automatic optic cup segmentation is also a key issue in computer-aided diagnosis based on digital fundus images. However, current methods do not effectively solve the problem of edge blurring caused by blood vessels around the optic cup. METHODS In this study, an improved Bertalmio-Sapiro-Caselles-Ballester (BSCB) model was proposed to eliminate the noise induced by blood vessels. First, morphological operations were performed to obtain an enhanced green-channel image. Then, blood vessels were extracted and filled in by the improved BSCB model. Finally, a local Chan-Vese model was used to segment the optic cup. A total of 94 samples, comprising 32 glaucoma and 62 normal fundus images, were used in the experiments. RESULTS The F-score and boundary distance achieved by the proposed method against expert results were 0.7955 ± 0.0724 and 11.42 ± 3.61, respectively. The average vertical optic cup-to-disc ratios of the normal and glaucoma samples obtained by the proposed method were 0.4369 ± 0.1193 and 0.7156 ± 0.0698, which were also close to the experts' values. In addition, 39 glaucoma images from the public dataset RIM-ONE were used for methodology evaluation. CONCLUSIONS The results showed that the proposed method can overcome the influence of blood vessels to some degree and is competitive with other current optic cup segmentation algorithms. This methodology is expected to be useful clinically for early glaucoma detection.
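The vessel-removal step described above — extract the vessels, then fill them in before contour segmentation — can be approximated with a naive diffusion fill. This is a crude stand-in for the paper's improved BSCB inpainting model, and the toy patch and mask are invented:

```python
import numpy as np

def fill_vessels(channel: np.ndarray, vessel_mask: np.ndarray, iters: int = 50) -> np.ndarray:
    """Naive diffusion-based fill-in of vessel pixels: repeatedly replace
    masked pixels with the average of their 4-neighbours, so surrounding
    intensities flow into the vessel region."""
    img = channel.astype(float).copy()
    m = vessel_mask.astype(bool)
    for _ in range(iters):
        nb = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
              np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[m] = nb[m]  # only vessel pixels are updated
    return img

# Toy green-channel patch: uniform brightness 100 with one dark vessel column.
patch = np.full((5, 5), 100.0); patch[:, 2] = 10.0
mask = np.zeros((5, 5), dtype=np.uint8); mask[:, 2] = 1
filled = fill_vessels(patch, mask)
print(round(float(filled[:, 2].mean()), 1))  # -> 100.0
```

After the fill, the dark vessel column matches its surroundings, so a subsequent active-contour segmentation of the cup no longer snags on the vessel edge; the BSCB model achieves the same effect with a transport-based PDE rather than plain diffusion.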
Affiliation(s)
- Chunlan Yang
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124, China
- Min Lu
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124, China
- Yanhua Duan
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124, China
- Bing Liu
- Department of Ophthalmology, Hospital of Beijing University of Technology, Beijing, 100124, China
25
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018; 67:1-29. [PMID: 30076935 DOI: 10.1016/j.preteyeres.2018.07.004]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological data points can be analyzed quickly and non-invasively in a comprehensive manner using artificial intelligence (AI). Methods based on machine learning (ML), and particularly deep learning (DL), are able to identify, localize, and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks thereby mimic the human brain's path to object recognition, either by learning pathological features from training sets (supervised ML) or by extrapolating from independently recognized patterns (unsupervised ML). AI-based retinal analysis methods are diverse and differ widely in their applicability, interpretability, and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening and diagnostic grading as well as guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects, and identification of relevant targets for novel therapeutic approaches. Predictive and prognostic conclusions further expand the potential benefit of AI in retina, which will enable personalized health care as well as large-scale management and will empower the ophthalmologist to provide high-quality diagnosis and therapy and successfully deal with the complexity of 21st-century ophthalmology.
Affiliation(s)
- Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Amir Sadeghipour
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Bianca S Gerendas
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Hrvoje Bogunović
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria