1. Liu J, Xu S, He P, Wu S, Luo X, Deng Y, Huang H. VSG-GAN: A high-fidelity image synthesis method with semantic manipulation in retinal fundus image. Biophys J 2024:S0006-3495(24)00139-5. PMID: 38414236. DOI: 10.1016/j.bpj.2024.02.019.
Abstract
In recent years, advancements in retinal image analysis, driven by machine learning and deep learning techniques, have enhanced disease detection and diagnosis through automated feature extraction. However, challenges persist, including limited data set diversity due to privacy concerns and imbalanced sample pairs, hindering effective model training. To address these issues, we introduce the vessel and style guided generative adversarial network (VSG-GAN), an innovative algorithm building upon the foundational concept of GAN. In VSG-GAN, a generator and discriminator engage in an adversarial process to produce realistic retinal images. Our approach decouples retinal image generation into distinct modules: the vascular skeleton and background style. Leveraging style transformation and GAN inversion, our proposed hierarchical variational autoencoder module generates retinal images with diverse morphological traits. In addition, the spatially adaptive denormalization module ensures consistency between input and generated images. We evaluate our model on MESSIDOR and RITE data sets using various metrics, including structural similarity index measure, inception score, Fréchet inception distance, and kernel inception distance. Our results demonstrate the superiority of VSG-GAN, outperforming existing methods across all evaluation assessments. This underscores its effectiveness in addressing data set limitations and imbalances. Our algorithm provides a novel solution to challenges in retinal image analysis by offering diverse and realistic retinal image generation. Implementing the VSG-GAN augmentation approach on downstream diabetic retinopathy classification tasks has shown enhanced disease diagnosis accuracy, further advancing the utility of machine learning in this domain.
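Of the evaluation metrics listed, the structural similarity index measure (SSIM) is the easiest to state compactly. Below is a minimal sketch of a global, single-window SSIM in NumPy, assuming images scaled to [0, 1]; production implementations instead slide a Gaussian window over the image and average the local scores.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Structural similarity between two images, computed over a single
    global window. The c1/c2 stabilizing constants follow the usual
    SSIM convention (0.01 and 0.03 times the data range, squared)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0, and the score decreases as luminance, contrast, or structure diverge.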
Affiliation(s)
- Junjie Liu
  Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; BNU-HKBU United International College, Zhuhai, China; Faculty of Science, Hong Kong Baptist University, Hong Kong SAR, China; Trinity College Dublin, Dublin 2, Ireland
- Shixin Xu
  Data Science Research Center, Duke Kunshan University, Kunshan, Jiangsu, China
- Ping He
  Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; BNU-HKBU United International College, Zhuhai, China
- Sirong Wu
  Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; BNU-HKBU United International College, Zhuhai, China; Faculty of Science, Hong Kong Baptist University, Hong Kong SAR, China
- Xi Luo
  Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; BNU-HKBU United International College, Zhuhai, China; Faculty of Science, Hong Kong Baptist University, Hong Kong SAR, China
- Yuhui Deng
  Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; BNU-HKBU United International College, Zhuhai, China
- Huaxiong Huang
  Research Center for Mathematics, Beijing Normal University, Zhuhai, China; Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Zhuhai, China; Department of Mathematics and Statistics, York University, Toronto, ON, Canada
2. Wang S, Yu X, Jia W, Chi J, Lv P, Wang J, Wu C. Optic disc detection based on fully convolutional network and weighted matrix recovery model. Med Biol Eng Comput 2023; 61:3319-3333. PMID: 37668892. DOI: 10.1007/s11517-023-02891-2.
Abstract
Eye diseases often affect human health. Accurate detection of the optic disc contour is one of the important steps in diagnosing and treating eye diseases. However, the structure of fundus images is complex, and the optic disc region is often disturbed by blood vessels. Considering that the optic disc is usually a saliency region in fundus images, we propose a weakly-supervised optic disc detection method based on a fully convolutional neural network (FCN) combined with the weighted low-rank matrix recovery model (WLRR). Firstly, we extract the low-level features of the fundus image and cluster the pixels using the Simple Linear Iterative Clustering (SLIC) algorithm to generate the feature matrix. Secondly, the top-down semantic prior information provided by the FCN and bottom-up background prior information of the optic disc region are used to jointly construct the prior information weighting matrix, which more accurately guides the decomposition of the feature matrix into a sparse matrix representing the optic disc and a low-rank matrix representing the background. Experimental results on the DRISHTI-GS and IDRiD datasets show that our method can segment the optic disc region accurately, and its performance is better than existing weakly-supervised optic disc segmentation methods.
Affiliation(s)
- Siqi Wang
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China
- Xiaosheng Yu
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China
- Wenzhuo Jia
  Art School, HE University, Shenyang 110163, Liaoning, China
- Jianning Chi
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang 110170, Liaoning, China
- Pengfei Lv
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China
- Junxiang Wang
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China
- Chengdong Wu
  Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110170, Liaoning, China
3. Chen B, Thandiackal K, Pati P, Goksel O. Generative appearance replay for continual unsupervised domain adaptation. Med Image Anal 2023; 89:102924. PMID: 37597316. DOI: 10.1016/j.media.2023.102924.
Abstract
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay.
Affiliation(s)
- Boqi Chen
  ETH AI Center, Zurich, Switzerland; Department of Computer Science, ETH Zurich, Switzerland
- Kevin Thandiackal
  IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
  Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Uppsala, Sweden
4. Islam MT, Khan HA, Naveed K, Nauman A, Gulfam SM, Kim SW. LUVS-Net: A lightweight U-Net vessel segmentor for retinal vasculature detection in fundus images. Electronics 2023; 12:1786. DOI: 10.3390/electronics12081786.
Abstract
This paper presents LUVS-Net, a lightweight convolutional network for retinal vessel segmentation in fundus images, designed for resource-constrained devices that typically cannot meet the computational requirements of large neural networks. The computational challenges arise from low-quality retinal images, wide variance in image acquisition conditions, and disparities in intensity. Consequently, existing segmentation methods require a multitude of trainable parameters, resulting in high computational complexity. The proposed Lightweight U-Net for Vessel Segmentation Network (LUVS-Net) achieves high segmentation performance with only a few trainable parameters. This network uses an encoder–decoder framework in which edge data are transposed from the first layers of the encoder to the last layer of the decoder, substantially improving convergence speed. Additionally, LUVS-Net's design allows for a dual-stream information flow both inside and outside of the encoder–decoder pair. The network width is enhanced using group convolutions, which allow the network to learn a larger number of low- and intermediate-level features. Spatial information loss is minimized using skip connections, and class imbalances are mitigated using dice loss for pixel-wise classification. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1 and STARE. LUVS-Net proves to be quite competitive, outperforming alternative state-of-the-art segmentation methods and achieving comparable accuracy while using two to three orders of magnitude fewer trainable parameters than state-of-the-art methods.
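The dice loss mentioned for mitigating class imbalance has a compact soft form. A minimal sketch for binary vessel masks follows; the smoothing constant `eps` is a common convention rather than a value taken from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss for binary masks: 1 - 2|P∩T| / (|P| + |T|).
    pred holds probabilities in [0, 1], target holds {0, 1};
    eps smooths the degenerate all-empty case."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Because the loss is normalized by the total foreground mass rather than the pixel count, thin structures like vessels contribute as much to the gradient as the large background, which is why it helps with class imbalance.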
Affiliation(s)
- Muhammad Talha Islam
  Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Haroon Ahmed Khan
  Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Khuram Naveed
  Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
- Ali Nauman
  Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Sardar Muhammad Gulfam
  Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad (CUI), Abbottabad 22060, Pakistan
- Sung Won Kim
  Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
5. Shalini R, Gopi VP. Deep learning approaches based improved light weight U-Net with attention module for optic disc segmentation. Phys Eng Sci Med 2022; 45:1111-1122. PMID: 36094722. DOI: 10.1007/s13246-022-01178-4.
Abstract
Glaucoma is a major cause of blindness worldwide, and its early detection is essential for the timely management of the condition. Glaucoma-induced anomalies of the optic nerve head may cause variation in the Optic Disc (OD) size. Therefore, robust OD segmentation techniques are necessary for glaucoma screening. Computer-aided segmentation has become a promising diagnostic tool for the early detection of glaucoma, and there has been much interest in recent years in using neural networks for medical image segmentation. This study proposed an enhanced lightweight U-Net model with an Attention Gate (AG) to segment OD images. We also used a transfer learning strategy, extracting relevant features with a pre-trained EfficientNet-B0 CNN, which preserved the receptive field size; the AG reduced the impact of vanishing gradients and overfitting. Additionally, training the neural network with the binary focal loss function improved segmentation accuracy. The pre-trained Attention U-Net was validated on publicly available datasets, namely DRIONS-DB, DRISHTI-GS, and MESSIDOR. The model significantly reduced the parameter count (around 0.53 M) and had inference times of 40.3 ms, 44.2 ms, and 60.6 ms on these datasets, respectively.
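The binary focal loss used here is a standard construction; a minimal NumPy sketch is below. The `gamma` and `alpha` defaults are the values commonly used in the literature, not necessarily the ones used in this paper.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma,
    which down-weights easy examples so training focuses on hard
    pixels (e.g. ambiguous optic disc boundaries)."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # prob. of the true class
    a = np.where(y == 1, alpha, 1.0 - alpha)   # class-balance weight
    return (-a * (1.0 - pt) ** gamma * np.log(pt)).mean()
```

A confidently correct pixel contributes almost nothing, while a confidently wrong one dominates the batch loss.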
Affiliation(s)
- R Shalini
  Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu, 620015, India
- Varun P Gopi
  Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu, 620015, India
6. Deep learning-based glaucoma screening using regional RNFL thickness in fundus photography. Diagnostics (Basel) 2022; 12:2894. PMID: 36428954. PMCID: PMC9689347. DOI: 10.3390/diagnostics12112894.
Abstract
Since glaucoma is a progressive and irreversible optic neuropathy, accurate screening and/or early diagnosis is critical in preventing permanent vision loss. Recently, optical coherence tomography (OCT) has become an accurate diagnostic tool to observe and extract the thickness of the retinal nerve fiber layer (RNFL), which closely reflects the nerve damage caused by glaucoma. However, OCT is less accessible than fundus photography due to higher cost and the expertise required for operation. Though widely used, fundus photography is effective for early glaucoma detection only in the hands of experts with extensive training. Here, we introduce a deep learning-based approach to predict the RNFL thickness around optic disc regions in fundus photography for glaucoma screening. The proposed deep learning model is based on a convolutional neural network (CNN) and is trained and validated on fundus photographs paired with RNFL thicknesses measured with OCT. Using a dataset acquired from normal tension glaucoma (NTG) patients, the trained model can estimate RNFL thicknesses in 12 optic disc regions from fundus photos. Using intuitive thickness labels to identify localized damage of the optic nerve head and then estimating regional RNFL thicknesses from fundus images, we determine that screening for glaucoma could achieve 92% sensitivity and 86.9% specificity. Receiver operating characteristic (ROC) analysis at 80% specificity demonstrates that the localized mean over the superior and inferior regions reaches 90.7% sensitivity, whereas the global RNFL thickness reaches only 71.2%. This demonstrates that the new approach of using regional RNFL thicknesses in fundus images holds good promise as a potential screening technique for early-stage glaucoma.
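The sensitivity/specificity trade-off reported above comes down to where the decision threshold sits on the score distribution. As a generic sketch (not the paper's code), the two quantities for a given threshold are:

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of a screening rule that flags
    disease when the model score exceeds `threshold`.
    labels: 1 = diseased, 0 = healthy."""
    pred = scores > threshold
    pos, neg = labels == 1, labels == 0
    sensitivity = (pred & pos).sum() / pos.sum()   # true-positive rate
    specificity = (~pred & neg).sum() / neg.sum()  # true-negative rate
    return sensitivity, specificity
```

Sweeping the threshold and plotting sensitivity against (1 - specificity) yields the ROC curve used in the analysis.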
7. Rasheed HA, Davis T, Morales E, Fei Z, Grassi L, De Gainza A, Nouri-Mahdavi K, Caprioli J. DDLSNet: A novel deep learning-based system for grading funduscopic images for glaucomatous damage. Ophthalmol Sci 2022; 3:100255. PMID: 36619716. PMCID: PMC9813574. DOI: 10.1016/j.xops.2022.100255.
Abstract
Purpose To report an image analysis pipeline, DDLSNet, consisting of a rim segmentation (RimNet) branch and a disc size classification (DiscNet) branch to automate estimation of the disc damage likelihood scale (DDLS). Design Retrospective observational. Participants RimNet and DiscNet were developed with 1208 and 11,536 optic disc photographs (ODPs), respectively. DDLSNet performance was evaluated on 120 ODPs from the RimNet test set, for which the DDLS scores were graded by clinicians. Reproducibility was evaluated on a group of 781 eyes, each with 2 ODPs taken within 4 years of each other. Methods Disc damage likelihood scale calculation requires estimation of optic disc size, provided by DiscNet (VGG19 network), and the minimum rim-to-disc ratio (mRDR) or absent rim width (ARW), provided by RimNet (InceptionV3/LinkNet segmentation model). To build RimNet's dataset, glaucoma specialists marked optic disc rim and cup boundaries on ODPs, from which the "ground truth" mRDR or ARW was calculated. For DiscNet's dataset, corresponding OCT images provided "ground truth" disc size. Optic disc photographs were split 80/10/10 for training, validation, and testing, respectively, for both RimNet and DiscNet. DDLSNet estimation was tested against manual grading of DDLS by clinicians, with the average score used as "ground truth." Reproducibility of DDLSNet grading was evaluated by repeating DDLS estimation on a dataset of nonprogressing paired ODPs taken at separate times. Main Outcome Measures The main outcome measure was a weighted kappa score between clinicians and the DDLSNet pipeline, with agreement defined as a ±1 DDLS score difference. Results RimNet achieved an mRDR mean absolute error (MAE) of 0.04 (±0.03) and an ARW MAE of 48.9 (±35.9) degrees when compared to clinician segmentations. DiscNet achieved 73% (95% confidence interval [CI]: 70%, 75%) classification accuracy. DDLSNet achieved an average weighted kappa agreement of 0.54 (95% CI: 0.40, 0.68) compared to clinicians. Average interclinician agreement was 0.52 (95% CI: 0.49, 0.56). Reproducibility testing demonstrated that 96% of ODP pairs had a difference of ≤1 DDLS score. Conclusions DDLSNet achieved moderate agreement with clinicians for DDLS grading. This novel approach illustrates the feasibility of automated ODP grading for assessing glaucoma severity. Further improvements may be achieved by increasing the sample size of incomplete rims, expanding the hyperparameter search, and increasing the agreement of clinicians grading ODPs.
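The weighted kappa used as the main outcome measure can be sketched directly from its definition. The generic linear/quadratic-weight version below does not reproduce the paper's ±1-agreement weighting exactly; it is only meant to show the mechanics.

```python
import numpy as np

def weighted_kappa(a, b, n_classes, weights="linear"):
    """Weighted Cohen's kappa between two graders' ordinal scores in
    0..n_classes-1. Linear weights penalize disagreement by its
    distance on the scale, which suits ordinal grades like DDLS."""
    a, b = np.asarray(a), np.asarray(b)
    # observed confusion matrix, normalized to a joint distribution
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    # expected joint distribution under independent marginals
    E = np.outer(O.sum(1), O.sum(0))
    i, j = np.indices((n_classes, n_classes))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement gives 1.0, chance-level agreement gives 0, and systematic disagreement goes negative.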
Affiliation(s)
- Haroon Adam Rasheed
  University of California Los Angeles David Geffen School of Medicine, Los Angeles, California
- Tyler Davis
  Department of Computer Science, University of California Los Angeles, Los Angeles, California
- Esteban Morales
  Glaucoma Division, Jules Stein Eye Institute, Los Angeles, California
- Zhe Fei
  University of California Los Angeles Jonathan and Karin Fielding School of Public Health, Los Angeles, California; Department of Biostatistics, University of California Los Angeles, Los Angeles, California
- Lourdes Grassi
  Glaucoma Division, Jules Stein Eye Institute, Los Angeles, California
- Joseph Caprioli
  Glaucoma Division, Jules Stein Eye Institute, Los Angeles, California. Correspondence: Joseph Caprioli, MD, Glaucoma Division, Jules Stein Eye Institute, 100 Stein Plaza, Los Angeles, CA 90095
8. Panahi A, Askari Moghadam R, Tarvirdizadeh B, Madani K. Simplified U-Net as a deep learning intelligent medical assistive tool in glaucoma detection. Evol Intell 2022. DOI: 10.1007/s12065-022-00775-2.
9. Balasubramanian K, Ramya K, Gayathri Devi K. Improved swarm optimization of deep features for glaucoma classification using SEGSO and VGGNet. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103845.
10. Retinal glaucoma public datasets: What do we have and what is missing? J Clin Med 2022; 11:3850. PMID: 35807135. PMCID: PMC9267177. DOI: 10.3390/jcm11133850.
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and standardized automated methodologies such as those using deep learning techniques. These techniques are used to solve complex problems in medical imaging, particularly in the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing protocols for large-scale glaucoma screening in the population, eliminating possible diagnostic doubts among specialists, and benefiting early treatment to delay the onset of blindness. However, the images are obtained by different cameras, in distinct locations, and from various population groups and are centered on multiple parts of the retina. Further limitations are the small amount of data and the lack of segmentations of the optic papilla and its excavation (cup). This work is intended to offer contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with segmentations of the disc and cup made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable. The quality and presentation of images are heterogeneous. Moreover, the databases use different criteria for binary classification with and without glaucoma, do not offer simultaneous pictures of the two eyes, and do not contain elements for early diagnosis.
11. Wang Y, Yu X, Wu C. An efficient hierarchical optic disc and cup segmentation network combined with multi-task learning and adversarial learning. J Digit Imaging 2022; 35:638-653. PMID: 35212860. PMCID: PMC9156633. DOI: 10.1007/s10278-021-00579-3.
Abstract
Automatic and accurate segmentation of optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided ocular pathologies diagnosis. The complex structures, such as blood vessels and macular region, and the existence of lesions in fundus images bring great challenges to the segmentation task. Recently, the convolutional neural network-based methods have exhibited its potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, the U-Net like framework with an improved attention mechanism and focal loss is proposed to detect accurate and reliable OD location from the full-scale resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage that integrates multi-task framework and adversarial learning is further designed for OD and OC segmentation separately. The multi-task framework is conducted to predict the OD and OC masks by simultaneously estimating contours and distance maps as auxiliary tasks, which can guarantee the smoothness and shape of object in segmentation predictions. The adversarial learning technique is introduced to encourage the segmentation network to produce an output that is consistent with the true labels in space and shape distribution. We evaluate the performance of our method using two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments with existing methods demonstrate that our approach can produce competitive performance compared with state-of-the-art methods.
Affiliation(s)
- Ying Wang
  College of Information Science and Engineering, Northeastern University, Liaoning 110819, China
- Xiaosheng Yu
  Faculty of Robot Science and Engineering, Northeastern University, Liaoning 110819, China
- Chengdong Wu
  Faculty of Robot Science and Engineering, Northeastern University, Liaoning 110819, China
12. A comprehensive review of methods and equipment for aiding automatic glaucoma tracking. Diagnostics (Basel) 2022; 12:935. PMID: 35453985. PMCID: PMC9031684. DOI: 10.3390/diagnostics12040935.
Abstract
Glaucoma is a chronic optic neuropathy characterized by irreversible damage to the retinal nerve fiber layer (RNFL), resulting in changes in the visual field (VF). Glaucoma screening is performed through a complete ophthalmological examination, using images of the optic papilla obtained in vivo for the evaluation of glaucomatous characteristics, eye pressure, and visual field. Identifying the glaucomatous papilla is quite important, as optical papillary images are considered the gold standard for tracking. This article therefore reviews the diagnostic methods used to identify the glaucomatous papilla through technology over the last five years. Based on the analyzed works, the current state-of-the-art methods are identified, the current challenges are analyzed, and the shortcomings of these methods are investigated, especially from the point of view of automation and independence in performing these measurements. Finally, topics for future work and the challenges that need to be solved are proposed.
13. Singh LK, Garg H, Khanna M. Performance evaluation of various deep learning based models for effective glaucoma evaluation using optical coherence tomography images. Multimed Tools Appl 2022; 81:27737-27781. PMID: 35368855. PMCID: PMC8962290. DOI: 10.1007/s11042-022-12826-y.
Abstract
Glaucoma is the dominant cause of irreversible blindness worldwide, and its best remedy is early and timely detection. Optical coherence tomography has become the most commonly used imaging modality for detecting glaucomatous damage in recent years. Deep learning on the optical coherence tomography modality helps predict glaucoma more accurately and less tediously. This experimental study aims to perform glaucoma prediction using eight different ImageNet models on optical coherence tomography images of glaucoma. A thorough investigation is performed to evaluate these models' performance on various efficiency metrics, which will help discover the best performing model. Every network is tested with three different optimizers, namely Adam, Root Mean Squared Propagation, and Stochastic Gradient Descent, to find the most relevant results. An attempt has been made to improve the performance of the models using transfer learning and fine-tuning. The work presented in this study was initially trained and tested on a private database that consists of 4220 images (2110 normal optical coherence tomography and 2110 glaucoma optical coherence tomography). Based on the results, the four best-performing models were shortlisted. Later, these models were tested on the well-recognized standard public Mendeley dataset. Experimental results illustrate that VGG16 using the Root Mean Squared Propagation optimizer attains promising performance with 95.68% accuracy. The proposed work concludes that different ImageNet models are a good alternative as a computer-based automatic glaucoma screening system. This fully automated system has a lot of potential to tell the difference between normal and glaucomatous optical coherence tomography automatically. The proposed system helps in efficiently detecting this condition in suspected patients for better diagnosis, to avoid vision loss, and also saves senior ophthalmologists' (experts') precious time and involvement.
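The Root Mean Squared Propagation optimizer that performed best here has a one-line update rule: scale each step by a running average of squared gradients. A minimal sketch:

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update. `cache` is the exponential moving average
    of squared gradients; dividing by its square root normalizes the
    step size per parameter. Returns (new weights, new cache)."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

Running it on a toy quadratic f(w) = w² (gradient 2w) drives w toward the minimum at 0, illustrating the self-normalizing step size.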
Affiliation(s)
- Law Kumar Singh
  Department of Computer Science and Engineering, Sharda University, Greater Noida, India; Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Hitendra Garg
  Department of Computer Engineering and Applications, GLA University, Mathura, India
- Munish Khanna
  Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
14. Sonti K, Dhuli DR. Shape and texture based identification of glaucoma from retinal fundus images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103473.
15. Meng Y, Zhang H, Zhao Y, Yang X, Qiao Y, MacCormick IJC, Huang X, Zheng Y. Graph-based region and boundary aggregation for biomedical image segmentation. IEEE Trans Med Imaging 2022; 41:690-701. PMID: 34714742. DOI: 10.1109/tmi.2021.3123567.
Abstract
Segmentation is a fundamental task in biomedical image analysis. Unlike the existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism can enhance the interaction between each graph reasoning module's global semantic information and local spatial characteristics. Our model, in particular, is capable of concurrently addressing region and boundary feature reasoning and aggregation at several different feature levels due to the proposed multi-level feature node embeddings in different parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary.
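The iterative message aggregation and node update mechanism described is, at its simplest, neighbour averaging followed by a learned transform. Below is a toy sketch of one round: a plain mean-aggregation, GCN-style step, not the paper's cross-domain attention modules.

```python
import numpy as np

def aggregate_update(H, A, W):
    """One round of message aggregation and node update on a graph:
    each node averages its neighbours' embeddings (adjacency A),
    adds them to its own (residual), then applies a shared linear
    map W followed by a ReLU. H is the (nodes x features) matrix."""
    deg = A.sum(1, keepdims=True).clip(min=1.0)  # guard isolated nodes
    M = (A @ H) / deg                            # mean neighbour message
    return np.maximum((H + M) @ W, 0.0)          # residual update + ReLU
```

Stacking several such rounds lets information from region nodes and boundary nodes propagate across the graph, which is the role the reasoning modules play in the paper's framework.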
Collapse
|
16
|
Thainimit S, Chaipayom P, Sa-arnwong N, Gansawat D, Petchyim S, Pongrujikorn S. Robotic process automation support in telemedicine: Glaucoma screening usage case. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.101001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
|
17
|
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a certain cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistantly spaced range from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) equal to 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images without the ONH were still able to obtain significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
Collapse
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
| | | | - Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
| | - Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
| |
Collapse
|
18
|
Krishnadas R. The many challenges in automated glaucoma diagnosis based on fundus imaging. Indian J Ophthalmol 2021; 69:2566-2567. [PMID: 34571593 PMCID: PMC8597437 DOI: 10.4103/ijo.ijo_2294_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Affiliation(s)
- R Krishnadas
- Consultant, Glaucoma Services, Aravind Eye Care System, Madurai, Tamil Nadu, India
| |
Collapse
|
19
|
Buisson M, Navel V, Labbé A, Watson SL, Baker JS, Murtagh P, Chiambaretta F, Dutheil F. Deep learning versus ophthalmologists for screening for glaucoma on fundus examination: A systematic review and meta-analysis. Clin Exp Ophthalmol 2021; 49:1027-1038. [PMID: 34506041 DOI: 10.1111/ceo.14000] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 09/02/2021] [Accepted: 09/08/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND In this systematic review and meta-analysis, we aimed to compare deep learning versus ophthalmologists in glaucoma diagnosis on fundus examinations. METHOD PubMed, Cochrane, Embase, ClinicalTrials.gov and ScienceDirect databases were searched, up to 10 December 2020, for studies reporting a comparison between the glaucoma diagnosis performance of deep learning and ophthalmologists on fundus examinations on the same datasets. Studies had to report an area under the receiver operating characteristic curve (AUC) with SD, or enough data to generate one. RESULTS We included six studies in our meta-analysis. There was no difference in AUC between ophthalmologists (AUC = 82.0, 95% confidence intervals [CI] 65.4-98.6) and deep learning (97.0, 89.4-104.5). There was also no difference using several pessimistic and optimistic variants of our meta-analysis: the best (82.2, 60.0-104.3) or worst (77.7, 53.1-102.3) ophthalmologists versus the best (97.1, 89.5-104.7) or worst (97.1, 88.5-105.6) deep learning of each study. We did not identify any factors influencing these results. CONCLUSION Deep learning performed similarly to ophthalmologists in glaucoma diagnosis from fundus examinations. Further studies should evaluate deep learning in clinical situations.
Collapse
Affiliation(s)
- Mathieu Buisson
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France
| | - Valentin Navel
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Antoine Labbé
- Department of Ophthalmology III, Quinze-Vingts National Ophthalmology Hospital, IHU FOReSIGHT, Paris, France.,Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France.,Department of Ophthalmology, Ambroise Paré Hospital, APHP, Université de Versailles Saint-Quentin en Yvelines, Versailles, France
| | - Stephanie L Watson
- Save Sight Institute, Discipline of Ophthalmology, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia.,Corneal Unit, Sydney Eye Hospital, Sydney, New South Wales, Australia
| | - Julien S Baker
- Centre for Health and Exercise Science Research, Department of Sport, Physical Education and Health, Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Patrick Murtagh
- Department of Ophthalmology, Royal Victoria Eye and Ear Hospital, Dublin, Ireland
| | - Frédéric Chiambaretta
- CHU Clermont-Ferrand, Ophthalmology, University Hospital of Clermont-Ferrand, Clermont-Ferrand, France.,CNRS UMR 6293, INSERM U1103, Genetic Reproduction and Development Laboratory (GReD), Translational Approach to Epithelial Injury and Repair Team, Université Clermont Auvergne, Clermont-Ferrand, France
| | - Frédéric Dutheil
- Université Clermont Auvergne, CNRS, LaPSCo, Physiological and Psychosocial Stress, CHU Clermont-Ferrand, University Hospital of Clermont-Ferrand, Preventive and Occupational Medicine, Witty Fit, Clermont-Ferrand, France
| |
Collapse
|
20
|
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760 PMCID: PMC8212443 DOI: 10.1167/tvst.10.7.20] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Purpose There is a large discrepancy between the numbers of ophthalmologists and patients in China. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2000 fundus images were collected; 3 different 5-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1134 fundus images were used for testing. The clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain the model's area of focus in the fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1134 fundus images were all above 90%, the kappa values were all above 88%, the diagnosis consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases. It can be used to obtain the diagnostic category of fundus images and the model's area of focus. Translational Relevance This study will help primary doctors provide effective services to ophthalmologic patients.
Collapse
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Qin Jiang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Kai He
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China.,College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
| | - Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China.,Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang Province, China
| | - Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| |
Collapse
|
21
|
Liu H, Li L, Wormstone IM, Qiao C, Zhang C, Liu P, Li S, Wang H, Mou D, Pang R, Yang D, Zangwill LM, Moghimi S, Hou H, Bowd C, Jiang L, Chen Y, Hu M, Xu Y, Kang H, Ji X, Chang R, Tham C, Cheung C, Ting DSW, Wong TY, Wang Z, Weinreb RN, Xu M, Wang N. Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs. JAMA Ophthalmol 2021; 137:1353-1360. [PMID: 31513266 DOI: 10.1001/jamaophthalmol.2019.3501] [Citation(s) in RCA: 145] [Impact Index Per Article: 48.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Importance A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON. Objective To establish a DLS for detection of GON using retinal fundus images and glaucoma diagnosis with convoluted neural networks (GD-CNN) that has the ability to be generalized across populations. Design, Setting, and Participants In this cross-sectional study, a DLS for the automated classification of GON was developed using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases on June 9, 2009, obtained on July 11, 2018, and analyses were performed on December 15, 2018. The generalization of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality based on fundus photographs obtained from websites, evaluation in a population-based study that reflects a natural distribution of patients with glaucoma within the cohort, and an additive data set that has a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to generalize the results with fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed that identified regions of the fundus images utilized by the DLS for diagnosis. Exposures Use of a deep learning system. Main Outcomes and Measures Area under the receiver operating characteristic curve (AUC), sensitivity and specificity for the DLS with reference to professional graders. Results From a total of 274 413 fundus images initially obtained from CGSA, 269 601 images passed initial image quality review and were graded for GON. A total of 241 032 images (definite GON 29 865 [12.4%], probable GON 11 046 [4.6%], unlikely GON 200 121 [83%]) from 68 013 patients were selected using random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were assessed using the remaining 28 569 images from CGSA. The AUC of the GD-CNN model in primary local validation data sets was 0.996 (95% CI, 0.995-0.998), with sensitivity of 96.2% and specificity of 97.7%. The most common reason for both false-negative and false-positive grading by GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%]) and manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%]) was pathologic or high myopia. Conclusions and Relevance Application of the GD-CNN to fundus images from different settings and varying image quality demonstrated high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that an automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
Collapse
Affiliation(s)
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Liu Li
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - I Michael Wormstone
- School of Biological Sciences, University of East Anglia, Norwich, United Kingdom
| | - Chunyan Qiao
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Chun Zhang
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
| | - Ping Liu
- Ophthalmology Hospital, First Hospital of Harbin Medical University, Harbin, Heilongjiang, China
| | - Shuning Li
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Huaizhou Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Dapeng Mou
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Ruiqi Pang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Diya Yang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Linda M Zangwill
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Sasan Moghimi
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Huiyuan Hou
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Christopher Bowd
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Lai Jiang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Yihan Chen
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| | - Man Hu
- Department of Ophthalmology, Beijing Children's Hospital, Capital Medical University, Beijing, China
| | - Yongli Xu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing, China
| | - Hong Kang
- College of Computer Science,Nankai University, Tianjin, China
| | - Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd, Beijing, China
| | - Robert Chang
- Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, California
| | - Clement Tham
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
| | - Carol Cheung
- Department of Ophthalmology and Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Kowloon, Hong Kong, China
| | | | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
| | - Zulin Wang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Robert N Weinreb
- Shiley Eye Institute, University of California, San Diego, La Jolla, California
| | - Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
| | - Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.,Beijing Ophthalmology and Visual Science Key Lab, Beijing, China
| |
Collapse
|
22
|
Rehman AU, Taj IA, Sajid M, Karimov KS. An ensemble framework based on Deep CNNs architecture for glaucoma classification using fundus photography. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:5321-5346. [PMID: 34517490 DOI: 10.3934/mbe.2021270] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Glaucoma is a chronic ocular degenerative disease that can cause blindness if left untreated in its early stages. Deep Convolutional Neural Networks (Deep CNNs) and their variants have provided superior performance in glaucoma classification, segmentation, and detection. In this paper, we propose a two-staged glaucoma classification scheme based on Deep CNN architectures. In stage one, four different ImageNet pre-trained Deep CNN architectures, i.e., AlexNet, InceptionV3, InceptionResNetV2, and NasNet-Large, are used, and it is observed that the NasNet-Large architecture provides superior performance in terms of sensitivity (99.1%), specificity (99.4%), accuracy (99.3%), and area under the receiver operating characteristic curve (97.8%) metrics. A detailed performance comparison is also presented among these on public datasets, i.e., ACRIMA, ORIGA-Light, and RIM-ONE, as well as locally available datasets, i.e., AFIO and HMC. In the second stage, we propose an ensemble classifier with two novel ensembling techniques, i.e., accuracy-based weighted voting and accuracy/score-based weighted averaging, to further improve the glaucoma classification results. It is shown that the ensemble with the accuracy/score-based scheme improves the accuracy (99.5%) across diverse databases. As an outcome of this study, the NasNet-Large architecture is a feasible option in terms of its performance as a single classifier, while the ensemble classifier further improves the generalized performance for automatic glaucoma classification.
Collapse
Affiliation(s)
- Aziz Ur Rehman
- Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, KPK, Pakistan
| | - Imtiaz A Taj
- Department of Electrical Engineering, Capital University of Science and Technology Islamabad Expressway, Kahuta Road, Zone-V Islamabad, Pakistan
| | - Muhammad Sajid
- Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur 10250 (AJK), Pakistan
| | - Khasan S Karimov
- Faculty of Electrical Engineering, GIK Institute of Engineering Sciences and Technology, Topi 23640, District Swabi, KPK, Pakistan
- Centre for Innovative and New Technologies of Academy of Sciences of the Republic of Tajikistan, 734015, Rudaki Ave., 33. Dushanbe Tajikistan
| |
Collapse
|
23
|
Mrad Y, Elloumi Y, Akil M, Bedoui MH. A fast and accurate method for glaucoma screening from smartphone-captured fundus images. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.06.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
24
|
Krishna Adithya V, Williams BM, Czanner S, Kavitha S, Friedman DS, Willoughby CE, Venkatesh R, Czanner G. EffUnet-SpaGen: An Efficient and Spatial Generative Approach to Glaucoma Detection. J Imaging 2021. [PMCID: PMC8321378 DOI: 10.3390/jimaging7060092] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Current research in automated disease detection focuses on making algorithms “slimmer”: reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, “EffUnet”, with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed “SpaGen”. We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, “EffUnet-SpaGen”, is optimized to significantly reduce the computational burden while at the same time surpassing the current state of the art in glaucoma detection algorithms, with AUROC 0.997 and 0.969 on the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
Collapse
Affiliation(s)
- Venkatesh Krishna Adithya
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India; (V.K.A.); (S.K.); (R.V.)
| | - Bryan M. Williams
- School of Computing and Communications, Lancaster University, Bailrigg, Lancaster LA1 4WA, UK;
| | - Silvester Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool L3 3AF, UK;
| | - Srinivasan Kavitha
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India; (V.K.A.); (S.K.); (R.V.)
| | - David S. Friedman
- Glaucoma Center of Excellence, Harvard Medical School, Boston, MA 02114, USA;
| | - Colin E. Willoughby
- Biomedical Research Institute, Ulster University, Coleraine, Co. Londonderry BT52 1SA, UK;
| | - Rengaraj Venkatesh
- Department of Glaucoma, Aravind Eye Care System, Thavalakuppam, Pondicherry 605007, India; (V.K.A.); (S.K.); (R.V.)
| | - Gabriela Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool L3 3AF, UK;
- Correspondence:
| |
Collapse
|
25
|
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686 PMCID: PMC7474840 DOI: 10.1016/j.preteyeres.2020.100900] [Citation(s) in RCA: 189] [Impact Index Per Article: 63.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 08/25/2020] [Accepted: 08/31/2020] [Indexed: 12/29/2022]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using tele-health supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks and the Internet of Things (IoT), creating an inter-dependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Tele-health and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Collapse
Affiliation(s)
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Hanruo Liu
- Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
| | - Darren S J Ting
- Academic Ophthalmology, University of Nottingham, United Kingdom
| | - Sohee Jeon
- Keye Eye Center, Seoul, Republic of Korea
| | | | - Judy E Kim
- Medical College of Wisconsin, Milwaukee, WI, USA
| | - Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Haotian Lin
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
| | - Youxin Chen
- Peking Union Medical College Hospital, Beijing, China
| | - Taiji Sakomoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
| | | | - Dennis S C Lam
- C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
| | - Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
| | - Tien Y Wong
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
| | - Linda A Lam
- USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
| | - Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore.
| |
Collapse
|
26
|
Xu X, Guan Y, Li J, Ma Z, Zhang L, Li L. Automatic glaucoma detection based on transfer induced attention network. Biomed Eng Online 2021; 20:39. [PMID: 33892734 PMCID: PMC8066979 DOI: 10.1186/s12938-021-00877-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 04/13/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Glaucoma is one of the leading causes of irreversible vision loss. Automatic glaucoma detection based on fundus images has been widely studied in recent years. However, existing methods mainly depend on a considerable amount of labeled data to train the model, which is a serious constraint for real-world glaucoma detection. METHODS In this paper, we introduce a transfer learning technique that leverages fundus features learned from similar ophthalmic data to facilitate diagnosing glaucoma. Specifically, a Transfer Induced Attention Network (TIA-Net) for automatic glaucoma detection is proposed, which extracts discriminative features that fully characterize glaucoma-related deep patterns under limited supervision. By integrating channel-wise attention and maximum mean discrepancy, our proposed method achieves a smooth transition between general and specific features, thus enhancing feature transferability. RESULTS To delimit the boundary between general and specific features precisely, we first investigate how many layers should be transferred when training with the source-dataset network. Next, we compare our proposed model to previously mentioned methods and analyze their performance. Finally, exploiting the model design, we provide a transparent and interpretable visualization of the transfer by highlighting the key specific features in each fundus image. We evaluate the effectiveness of TIA-Net on two real clinical datasets and achieve an accuracy of 85.7%/76.6%, sensitivity of 84.9%/75.3%, specificity of 86.9%/77.2%, and AUC of 0.929/0.835, far better than other state-of-the-art methods. CONCLUSION Unlike previous studies that applied classic CNN models to transfer features from non-medical datasets, we leverage knowledge from a similar ophthalmic dataset and propose an attention-based deep transfer learning model for the glaucoma diagnosis task. Extensive experiments on two real clinical datasets show that our TIA-Net outperforms other state-of-the-art methods, and it also holds medical value for the early diagnosis of other medical tasks.
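The maximum mean discrepancy (MMD) used by TIA-Net to align general and specific features is not spelled out in this abstract; as a rough illustration of the idea only, the sketch below computes a biased RBF-kernel MMD estimate between two batches of feature vectors. The kernel choice, bandwidth, and all names are our assumptions, not the paper's formulation.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.1):
    # Pairwise squared Euclidean distances between rows of a and b,
    # passed through a Gaussian kernel (bandwidth chosen for this toy scale).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=0.1):
    """Biased estimate of squared maximum mean discrepancy."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))       # "source-domain" features
tgt_near = rng.normal(0.0, 1.0, size=(64, 8))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(64, 8))   # shifted domain
```

In a transfer-learning loss, a term like `mmd2(src, tgt)` penalizes distribution mismatch between the domains; it is zero for identical batches and grows as the feature distributions drift apart.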
Affiliation(s)
- Xi Xu, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yu Guan, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Jianqiang Li, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Zerui Ma, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Li Zhang, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Li, Beijing Children’s Hospital, Capital Medical University, Beijing, China

27
Shabbir A, Rasheed A, Shehraz H, Saleem A, Zafar B, Sajid M, Ali N, Dar SH, Shehryar T. Detection of glaucoma using retinal fundus images: A comprehensive review. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:2033-2076. [PMID: 33892536 DOI: 10.3934/mbe.2021106] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Content-based image analysis and computer vision techniques are used in various health-care systems to detect diseases. Abnormalities in the human eye are detected from fundus images captured with a fundus camera. Among eye diseases, glaucoma is considered the second leading cause of blindness and is a neurodegenerative illness. Inappropriate intraocular pressure within the human eye is reported as the main cause of this disease. Glaucoma shows no symptoms at earlier stages, and if the disease remains untreated it can lead to complete blindness; early diagnosis can prevent permanent loss of vision. Manual examination of the human eye is a possible solution, but it depends on human effort. Automatic detection of glaucoma using a combination of image processing, artificial intelligence, and computer vision can help to prevent and detect this disease. In this review article, we present a comprehensive review of the various types of glaucoma, its causes, possible treatments, publicly available image benchmarks, performance metrics, and various approaches based on digital image processing, computer vision, and deep learning. The article presents a detailed study of published research models that aim to detect glaucoma, from low-level feature extraction to recent trends based on deep learning. The pros and cons of each approach are discussed in detail, and tabular representations summarize the results of each category. In conclusion, we report our findings and suggest possible future research directions for glaucoma detection.
Affiliation(s)
- Amsa Shabbir, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aqsa Rasheed, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Huma Shehraz, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aliya Saleem, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Bushra Zafar, Department of Computer Science, Government College University, Faisalabad 38000, Pakistan
- Muhammad Sajid, Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Nouman Ali, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Saadat Hanif Dar, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Tehmina Shehryar, Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan

28
Chai Y, Bian Y, Liu H, Li J, Xu J. Glaucoma diagnosis in the Chinese context: An uncertainty information-centric Bayesian deep learning model. Inf Process Manag 2021. [DOI: 10.1016/j.ipm.2020.102454] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
29
Singh LK, Garg H, Khanna M, Bhadoria RS. An enhanced deep image model for glaucoma diagnosis using feature-based detection in retinal fundus. Med Biol Eng Comput 2021; 59:333-353. [PMID: 33439453 DOI: 10.1007/s11517-020-02307-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 12/26/2020] [Indexed: 11/26/2022]
Abstract
This paper proposes a deep image analysis-based model for glaucoma diagnosis that uses several features to detect the formation of glaucoma in retinal fundus images. These features are combined with the most commonly extracted parameters, such as the inferior, superior, nasal, and temporal region areas and the cup-to-disc ratio, to form a deep image analysis. The proposed model is used to investigate various aspects of glaucoma prediction in retinal fundus images that help the ophthalmologist make better decisions about the human eye. The proposed model combines four machine learning algorithms and provides a classification accuracy of 98.60%, while existing models such as support vector machine (SVM), K-nearest neighbors (KNN), and Naïve Bayes individually provide accuracies of 97.61%, 90.47%, and 95.23%, respectively. These results demonstrate that the proposed model offers an effective methodology for early diagnosis of glaucoma in the retinal fundus.
Affiliation(s)
- Law Kumar Singh, Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Knowledge Park III, Greater Noida, India; Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Hitendra Garg, Department of Computer Engineering and Applications, GLA University, Mathura, India
- Munish Khanna, Department of Computer Science and Engineering, Hindustan College of Science and Technology, Mathura, India
- Robin Singh Bhadoria, Department of Computer Science and Engineering, Birla Institute of Applied Sciences (BIAS), Bhimtal, Uttarakhand, India

30
Automated segmentation and classification of retinal features for glaucoma diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102244] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
31
Li Q, Li S, Wu Y, Guo W, Qi S, Huang G, Chen S, Liu Z, Chen X. Orientation-independent Feature Matching (OIFM) for Multimodal Retinal Image Registration. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101957] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
32
Tian Z, Zheng Y, Li X, Du S, Xu X. Graph convolutional network based optic disc and cup segmentation on fundus images. BIOMEDICAL OPTICS EXPRESS 2020; 11:3043-3057. [PMID: 32637240 PMCID: PMC7316013 DOI: 10.1364/boe.390056] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Revised: 05/05/2020] [Accepted: 05/06/2020] [Indexed: 06/11/2023]
Abstract
Calculating the cup-to-disc ratio, together with other clinical features, is one of the methods used for glaucoma screening. In this paper, we propose a graph convolutional network (GCN) based method for the optic disc (OD) and optic cup (OC) segmentation task. We first present a multi-scale convolutional neural network (CNN) as the feature extractor to generate a feature map. The GCN takes the feature map, concatenated with the graph nodes, as the input for the segmentation task. The experimental results on the REFUGE dataset show that the Jaccard indices (Jacc) of the proposed method on OD and OC are 95.64% and 91.60%, respectively, while the Dice similarity coefficients (DSC) are 97.76% and 95.58%, respectively. The proposed method outperforms the state-of-the-art methods on the REFUGE leaderboard. We also evaluate the proposed method on the Drishti-GS1 dataset, where it likewise outperforms the state-of-the-art methods.
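The Jaccard index and Dice coefficient reported above are deterministic functions of each other (DSC = 2J / (1 + J)); a minimal sketch of both metrics for binary segmentation masks follows. Function names are ours, not from the paper.

```python
import numpy as np

def jaccard(pred, gt):
    """Jaccard index (intersection over union) of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice similarity coefficient; equals 2J / (1 + J) for Jaccard J."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0
```

For example, two 3-pixel masks overlapping in one pixel give J = 1/3 and DSC = 0.5, consistent with the 2J / (1 + J) identity.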
Affiliation(s)
- Zhiqiang Tian, School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Yaoyue Zheng, School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Xiaojian Li, School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Shaoyi Du, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
- Xiayu Xu, The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, China

33
Deep learning assisted detection of glaucomatous optic neuropathy and potential designs for a generalizable model. PLoS One 2020; 15:e0233079. [PMID: 32407355 PMCID: PMC7224540 DOI: 10.1371/journal.pone.0233079] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Accepted: 04/28/2020] [Indexed: 11/30/2022] Open
Abstract
Purpose To evaluate ways to improve the generalizability of a deep learning algorithm for identifying glaucomatous optic neuropathy (GON) using a limited number of fundus photographs, as well as the key features used for classification. Methods A total of 944 fundus images from Taipei Veterans General Hospital (TVGH) were retrospectively collected. Clinical and demographic characteristics, including structural and functional measurements of the images with GON, were recorded. Transfer learning based on VGGNet was used to construct a convolutional neural network (CNN) to identify GON. To avoid missing cases with advanced GON, an ensemble model was adopted in which a support vector machine classifier makes the final classification based on the cup-to-disc ratio whenever the CNN classifier has a low confidence score. The CNN classifier was first established using the TVGH dataset and then fine-tuned by combining the training images of the TVGH and Drishti-GS datasets. A class activation map (CAM) was used to identify the key features used for CNN classification. Performance of each classifier was determined by the area under the receiver operating characteristic curve (AUC) and compared with the ensemble model in terms of diagnostic accuracy. Results In 187 TVGH test images, the accuracy, sensitivity, and specificity of the CNN classifier were 95.0%, 95.7%, and 94.2%, respectively, and the AUC was 0.992, compared to the 92.8% accuracy rate of the ensemble model. For the Drishti-GS test images, the accuracy of the CNN, the fine-tuned CNN, and the ensemble model was 33.3%, 80.3%, and 80.3%, respectively. The CNN classifier did not misclassify images with moderate to severe disease. Class-discriminative regions revealed by CAM co-localized with known characteristics of GON. Conclusions The ensemble model or a fine-tuned CNN classifier may be potential designs for building a generalizable deep learning model for glaucoma detection when large image databases are not available.
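The low-confidence fallback described above can be sketched as a simple gating rule: trust the CNN when it is confident, otherwise defer to a cup-to-disc-ratio classifier. The thresholds and names below are illustrative assumptions, not the values used in the study.

```python
def ensemble_predict(cnn_prob, cdr, prob_threshold=0.8, cdr_threshold=0.6):
    """Confidence-gated ensemble (sketch).

    cnn_prob: CNN-estimated probability of glaucoma.
    cdr: vertical cup-to-disc ratio from a segmentation step.
    """
    confidence = max(cnn_prob, 1.0 - cnn_prob)
    if confidence >= prob_threshold:
        # Confident CNN: use its decision directly.
        return "glaucoma" if cnn_prob >= 0.5 else "normal"
    # Low-confidence case: fall back to the CDR-based classifier.
    return "glaucoma" if cdr >= cdr_threshold else "normal"
```

The gate means an ambiguous CNN score (say 0.55) is overridden by structural evidence, which is how the design avoids missing advanced cases the CNN is unsure about.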
34
Kanse SS, Yadav DM. HG-SVNN: Harmonic genetic-based support vector neural network classifier for the glaucoma detection. J Mech Med Biol 2020. [DOI: 10.1142/s0219519419500659] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
Glaucoma has emerged as one of the leading causes of blindness. Even though no cure for this disease has yet been found, early detection can help manage it. Existing works on glaucoma detection have disadvantages such as long run time and complex architecture in real-time implementations. This work introduces a glaucoma detection system based on the proposed harmonic genetic-based support vector neural network (HG-SVNN) classifier. The proposed system detects glaucoma through four major steps: (1) pre-processing, (2) the proposed hybrid feature extraction, (3) segmentation, and (4) classification with the proposed HG-SVNN classifier. The proposed model uses both statistical and vessel features from the segmented and pre-processed images to construct the feature vector. The HG-SVNN classifier uses both the harmonic operator and the genetic algorithm (GA) for neural network training. The simulation results show that the proposed glaucoma detection system performs better than existing works, with values of 0.945, 0.9, 0.9333, and 0.86667 for the segmentation accuracy, accuracy, sensitivity, and specificity metrics.
Affiliation(s)
- D. M. Yadav, Academic Dean, G. H. Raisoni College of Engineering and Management, Wagholi, Pune, Maharashtra 412207, India

35
Orlando JI, Fu H, Barbosa Breda J, van Keer K, Bathula DR, Diaz-Pinto A, Fang R, Heng PA, Kim J, Lee J, Lee J, Li X, Liu P, Lu S, Murugesan B, Naranjo V, Phaye SSR, Shankaranarayana SM, Sikka A, Son J, van den Hengel A, Wang S, Wu J, Wu Z, Xu G, Xu Y, Yin P, Li F, Zhang X, Xu Y, Bogunović H. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570. [DOI: 10.1016/j.media.2019.101570] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 07/26/2019] [Accepted: 10/01/2019] [Indexed: 01/01/2023]
36
Zou B, Chen C, Zhao R, Ouyang P, Zhu C, Chen Q, Duan X. A novel glaucomatous representation method based on Radon and wavelet transform. BMC Bioinformatics 2019; 20:693. [PMID: 31874641 PMCID: PMC6929399 DOI: 10.1186/s12859-019-3267-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Background Glaucoma is an irreversible eye disease caused by optic nerve injury, and it usually changes the structure of the optic nerve head (ONH). Clinically, ONH assessment based on fundus images is one of the most useful ways to detect glaucoma. However, an effective representation for ONH assessment is a challenging task because the structural changes result in complex and mixed visual patterns. Methods We proposed a novel feature representation based on the Radon and wavelet transforms to capture these visual patterns. First, the Radon transform (RT) maps the fundus image into the Radon domain, in which the spatial radial variations of the ONH are converted into a discrete signal describing the image's structural features. Second, the discrete wavelet transform (DWT) captures differences and yields a quantitative representation. Finally, principal component analysis (PCA) and a support vector machine (SVM) are used for dimensionality reduction and glaucoma detection. Results The proposed method achieves state-of-the-art detection performance on the RIM-ONE-r2 dataset, with accuracy and area under the curve (AUC) of 0.861 and 0.906, respectively. Conclusion We showed that the proposed method has the capacity to serve as an effective tool for large-scale glaucoma screening, and it can provide a reference for the clinical diagnosis of glaucoma.
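The projection-then-wavelet pipeline above can be illustrated in miniature. The paper uses the full Radon transform over many angles followed by a complete DWT; this sketch keeps only the 0-degree and 90-degree projections and a single-level Haar transform, and all names are our assumptions.

```python
import numpy as np

def projections(img):
    """Axial Radon-style projections: column sums (0 degrees) and
    row sums (90 degrees), concatenated into one discrete signal."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def haar_dwt(signal):
    """One-level Haar DWT: approximation and detail coefficients."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

# Toy 4x4 "fundus" with a bright central ONH-like region.
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0
feat = np.concatenate(haar_dwt(projections(img)))  # feature vector for PCA/SVM
```

The Haar transform is orthonormal, so the energy of the projection signal is preserved in the coefficients; in the full pipeline a vector like `feat` (over many angles and wavelet levels) would be reduced with PCA and classified with an SVM.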
Affiliation(s)
- Beiji Zou, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Changlong Chen, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Rongchang Zhao, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Pingbo Ouyang, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; The Second Xiangya Hospital of Central South University, Changsha, 410011, China
- Chengzhang Zhu, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Qilin Chen, School of Computer Science and Engineering, Central South University, Changsha, 410083, China; Hunan Province Engineering Technology Research Center of Computer Vision and Intelligent Medical Treatment, Changsha, 410083, China
- Xuanchu Duan, The Second Xiangya Hospital of Central South University, Changsha, 410011, China

37
Sarhan A, Rokne J, Alhajj R. Glaucoma detection using image processing techniques: A literature review. Comput Med Imaging Graph 2019; 78:101657. [PMID: 31675645 DOI: 10.1016/j.compmedimag.2019.101657] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Revised: 09/02/2019] [Accepted: 09/09/2019] [Indexed: 11/26/2022]
Abstract
The term glaucoma refers to a group of heterogeneous diseases that cause the degeneration of retinal ganglion cells (RGCs). The degeneration of RGCs leads to two main issues: (i) structural changes to the optic nerve head and the nerve fiber layer, and (ii) simultaneous functional failure of the visual field. These two effects of glaucoma may lead to peripheral vision loss and, if the condition is left to progress, it may eventually lead to blindness. No cure for glaucoma exists, so early detection and treatment by optometrists and ophthalmologists are essential. The degeneration of RGCs is normally detected from retinal images assessed by an expert. These retinal images also provide other vital information about the health of an eye, so it is essential to develop automated techniques for extracting this information. The rapid development of digital imaging and computer vision techniques has increased the potential for analyzing eye health from images. This paper surveys current approaches to detecting glaucoma from 2D and 3D images; both their limitations and possible future directions are highlighted. This study also describes the datasets used for retinal analysis along with existing evaluation algorithms. The main topics covered by this study may be enumerated as follows.
Affiliation(s)
- Abdullah Sarhan, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Jon Rokne, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Reda Alhajj, Department of Computer Science, University of Calgary, Calgary, AB, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey

38
Abdullah AS, Rahebi J, Özok YE, Aljanabi M. A new and effective method for human retina optic disc segmentation with fuzzy clustering method based on active contour model. Med Biol Eng Comput 2019; 58:25-37. [DOI: 10.1007/s11517-019-02032-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Accepted: 08/13/2019] [Indexed: 10/26/2022]
39
Medinoid: Computer-Aided Diagnosis and Localization of Glaucoma Using Deep Learning †. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9153064] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Glaucoma is a leading eye disease, causing vision loss by gradually affecting peripheral vision if left untreated. Current diagnosis of glaucoma is performed by ophthalmologists, human experts who typically need to analyze different types of medical images generated by different types of medical equipment: fundus, Retinal Nerve Fiber Layer (RNFL), Optical Coherence Tomography (OCT) disc, OCT macula, perimetry, and/or perimetry deviation. Capturing and analyzing these medical images is labor intensive and time consuming. In this paper, we present a novel approach for glaucoma diagnosis and localization, only relying on fundus images that are analyzed by making use of state-of-the-art deep learning techniques. Specifically, our approach towards glaucoma diagnosis and localization leverages Convolutional Neural Networks (CNNs) and Gradient-weighted Class Activation Mapping (Grad-CAM), respectively. We built and evaluated different predictive models using a large set of fundus images, collected and labeled by ophthalmologists at Samsung Medical Center (SMC). Our experimental results demonstrate that our most effective predictive model is able to achieve a high diagnosis accuracy of 96%, as well as a high sensitivity of 96% and a high specificity of 100% for Dataset-Optic Disc (OD), a set of center-cropped fundus images highlighting the optic disc. Furthermore, we present Medinoid, a publicly-available prototype web application for computer-aided diagnosis and localization of glaucoma, integrating our most effective predictive model in its back-end.
40
Singh D, Gunasekaran S, Hada M, Gogia V. Clinical validation of RIA-G, an automated optic nerve head analysis software. Indian J Ophthalmol 2019; 67:1089-1094. [PMID: 31238418 PMCID: PMC6611301 DOI: 10.4103/ijo.ijo_1509_18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
Purpose To clinically validate RIA-G, a new automated glaucoma diagnosis software. Methods A double-blinded study was conducted in which 229 valid random fundus images were evaluated independently by RIA-G and three expert ophthalmologists. Optic nerve head parameters [vertical and horizontal cup-disc ratio (CDR) and neuroretinal rim (NRR) changes] were quantified. Disc damage likelihood scale (DDLS) staging and the presence of glaucoma were noted. The software output was compared with the consensus values of the ophthalmologists. Results The mean difference between the vertical CDR output by RIA-G and the ophthalmologists was -0.004 ± 0.1. Good agreement and a strong correlation existed between the two [intraclass correlation coefficient (ICC) 0.79; r = 0.77, P < 0.005]. The mean difference for horizontal CDR was -0.07 ± 0.13, with moderate to strong agreement and correlation (ICC 0.48; r = 0.61, P < 0.05). Experts and RIA-G found a violation of the inferior-superior NRR in 47 and 54 images, respectively (Cohen's kappa = 0.56 ± 0.07). RIA-G accurately detected the DDLS stage in 66.2% of cases, while in 93.8% of cases the output was within ±1 stage (ICC 0.51). The sensitivity and specificity of RIA-G for diagnosing glaucomatous neuropathy were 82.3% and 91.8%, respectively. Overall agreement between RIA-G and the experts for glaucoma diagnosis was good (Cohen's kappa = 0.62 ± 0.07), and the overall accuracy of RIA-G in detecting glaucomatous neuropathy was 90.3%. A detection error rate of 5% was noted. Conclusion RIA-G showed good agreement with the experts and proved to be reliable software for detecting glaucomatous optic neuropathy. The ability to quantify optic nerve head parameters from simple fundus photographs will prove particularly useful in glaucoma screening, where no direct patient-doctor contact is established.
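Cohen's kappa, used above to measure software-versus-expert agreement, corrects raw observed agreement for agreement expected by chance. A minimal sketch for categorical labels follows (our implementation, not the study's statistics code):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's label frequencies.
    """
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    p_e = sum((a.count(lbl) / n) * (b.count(lbl) / n) for lbl in labels)
    if p_e == 1.0:          # degenerate case: both raters constant and equal
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)
```

Two raters who agree on every image score kappa = 1, while agreement no better than chance scores 0, which is why values near 0.6 (as reported above) are read as "good" rather than near-perfect agreement.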
Affiliation(s)
- Digvijay Singh, Noble Eye Care; Narayana Superspecialty Hospital, Gurugram, Haryana, India
- Maya Hada, SMS Medical College, Jaipur, Rajasthan, India
- Varun Gogia, Noble Eye Care, Gurugram, Haryana; IClinix-Advanced Eye Centre, New Delhi, India

41
Gómez-Valverde JJ, Antón A, Fatti G, Liefers B, Herranz A, Santos A, Sánchez CI, Ledesma-Carbayo MJ. Automatic glaucoma classification using color fundus images based on convolutional neural networks and transfer learning. BIOMEDICAL OPTICS EXPRESS 2019; 10:892-913. [PMID: 30800522 PMCID: PMC6377910 DOI: 10.1364/boe.10.000892] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 12/23/2018] [Accepted: 12/23/2018] [Indexed: 06/01/2023]
Abstract
Glaucoma detection in color fundus images is a challenging task that requires expertise and years of practice. In this study we explored the application of different Convolutional Neural Network (CNN) schemes to show the influence on performance of relevant factors such as data set size, architecture, and the use of transfer learning versus newly defined architectures. We also compared the performance of the CNN-based system with that of human evaluators and explored the influence of integrating images and data collected from the patients' clinical histories. We achieved the best performance using a transfer learning scheme with VGG19, attaining an AUC of 0.94 with sensitivity and specificity ratios similar to those of the study's expert evaluators. The experimental results, using three different data sets with 2313 images, indicate that this solution can be a valuable option for the design of a computer-aided system for glaucoma detection in large-scale screening programs.
Affiliation(s)
- Juan J Gómez-Valverde, Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain
- Alfonso Antón, Parc de Salut Mar, Barcelona, Spain; Universitat Internacional de Catalunya, Barcelona, Spain; Institut Catala de Retina, Barcelona, Spain
- Bart Liefers, Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Andrés Santos, Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain
- Clara I Sánchez, Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- María J Ledesma-Carbayo, Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain

42
Chai Y, Liu H, Xu J. Glaucoma diagnosis based on both hidden features and domain knowledge through deep learning models. Knowl Based Syst 2018. [DOI: 10.1016/j.knosys.2018.07.043] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
43
Applications of Artificial Intelligence in Ophthalmology: General Overview. J Ophthalmol 2018; 2018:5278196. [PMID: 30581604 PMCID: PMC6276430 DOI: 10.1155/2018/5278196] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2018] [Revised: 10/06/2018] [Accepted: 10/17/2018] [Indexed: 12/26/2022] Open
Abstract
With the emergence of unmanned aircraft, autonomous vehicles, face recognition, and language processing, artificial intelligence (AI) has remarkably changed our lifestyle. Recent studies indicate that AI has astounding potential to perform much better than human beings in some tasks, especially in the field of image recognition. As the amount of image data in ophthalmology imaging centers increases dramatically, analyzing and processing these data is urgently needed. AI has been applied to decipher medical data and has made extraordinary progress in intelligent diagnosis. In this paper, we present the basic workflow for building an AI model and systematically review applications of AI in the diagnosis of eye diseases. Future work should focus on setting up systematic AI platforms to diagnose general eye diseases based on multimodal data in the real world.
44
An exudate detection method for diagnosis risk of diabetic macular edema in retinal images using feature-based and supervised classification. Med Biol Eng Comput 2018; 56:1379-1390. [DOI: 10.1007/s11517-017-1771-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2017] [Accepted: 12/13/2017] [Indexed: 12/30/2022]
45
Haleem MS, Han L, Hemert JV, Li B, Fleming A, Pasquale LR, Song BJ. A Novel Adaptive Deformable Model for Automated Optic Disc and Cup Segmentation to Aid Glaucoma Diagnosis. J Med Syst 2017; 42:20. [PMID: 29218460 PMCID: PMC5719827 DOI: 10.1007/s10916-017-0859-4] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2017] [Accepted: 11/07/2017] [Indexed: 11/26/2022]
Abstract
This paper proposes a novel Adaptive Region-based Edge Smoothing Model (ARESM) for automatic boundary detection of the optic disc and cup to aid automatic glaucoma diagnosis. The novelty of our approach consists of two aspects: 1) automatic detection of the initial optimum object boundary based on a Region Classification Model (RCM) in a pixel-level multidimensional feature space; 2) an Adaptive Edge Smoothing Update model (AESU) of contour points (e.g. misclassified or irregular points) based on iterative force-field calculations, applied to contours obtained from the RCM by minimising an energy function (an approach that does not require predefined geometric templates to guide auto-segmentation). Such an approach provides robustness in capturing a range of variations and shapes. We have conducted a comprehensive comparison between our approach and state-of-the-art deformable models and validated it with publicly available datasets. The experimental evaluation shows that the proposed approach significantly outperforms existing methods. The generality of the proposed approach will enable segmentation and detection of other object boundaries and provide added value in the field of medical image processing and analysis.
Affiliation(s)
- Muhammad Salman Haleem
- School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, M1 5GD, UK
- Liangxiu Han
- School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, M1 5GD, UK
- Jano van Hemert
- Optos Plc, Queensferry House, Carnegie Business Campus, Enterprise Way, Dunfermline, Scotland, KY11 8GR, UK
- Baihua Li
- Department of Computer Science, Loughborough University, Loughborough, LE11 3TU, UK
- Alan Fleming
- Optos Plc, Queensferry House, Carnegie Business Campus, Enterprise Way, Dunfermline, Scotland, KY11 8GR, UK
- Louis R. Pasquale
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Brian J. Song
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
|
46
|
Molina-Casado JM, Carmona EJ, García-Feijoó J. Fast detection of the main anatomical structures in digital retinal images based on intra- and inter-structure relational knowledge. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 149:55-68. [PMID: 28802330 DOI: 10.1016/j.cmpb.2017.06.022] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 06/15/2017] [Accepted: 06/23/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE Detection of anatomical structures in retinal images remains an open problem. Most works in the related literature detect each structure individually, or assume the prior detection of one structure that is then used as a reference. The objective of this paper is to detect the main retinal structures (optic disc, macula, network of vessels and vascular bundle) simultaneously, in a fast and robust way. METHODS We propose a new methodology to accomplish this objective. It consists of two stages. In the first stage, a set of operators is applied to the retinal image; each operator uses intra-structure relational knowledge to produce a set of candidate blobs that belong to the desired structure. In the second stage, a set of tuples is created, each containing a different combination of the candidate blobs. Filtering operators, using inter-structure relational knowledge, are then applied to find the winning tuple. A method using template matching and mathematical morphology is implemented following the proposed methodology. RESULTS A detection counts as a success if the distance between the automatically detected blob center and the actual structure center is less than or equal to one optic disc radius. The success rates obtained in the public databases analyzed were: MESSIDOR (99.33%, 98.58%, 97.92%), DIARETDB1 (96.63%, 100%, 97.75%), DRIONS (100%, n/a, 100%) and ONHSD (100%, 98.85%, 97.70%) for optic disc (OD), macula (M) and vascular bundle (VB), respectively. The overall success rate for each structure was 99.26% (OD), 98.69% (M) and 98.95% (VB). The average processing time per image was 4.16 ± 0.72 s. CONCLUSIONS The main advantage of using inter-structure relational knowledge was the reduction of false positives in the detection process.
The implemented method detects four structures simultaneously; it is fast, robust, and its detection results are competitive with other recent methods.
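The success criterion stated in the results above (detected center within one optic-disc radius of the true center) can be sketched directly; the coordinates and radius in the usage example are made-up illustrative values:

```python
import math

# Success criterion from the evaluation above: a detection is a hit when the
# detected blob center lies within one optic-disc radius of the true center.

def detection_success(detected, actual, od_radius):
    """Return True if the detected center is within one OD radius of the truth."""
    dx = detected[0] - actual[0]
    dy = detected[1] - actual[1]
    return math.hypot(dx, dy) <= od_radius

def success_rate(detections, truths, od_radius):
    """Percentage of detections that satisfy the one-OD-radius criterion."""
    hits = sum(detection_success(d, t, od_radius) for d, t in zip(detections, truths))
    return 100.0 * hits / len(detections)

# Illustrative values: one hit, one miss.
rate = success_rate([(10, 10), (0, 0)], [(12, 11), (10, 0)], od_radius=3)
```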
Affiliation(s)
- José M Molina-Casado
- Department of Artificial Intelligence, ETS Ingeniería Informática, Universidad Nacional de Educación a Distancia (UNED), C/ Juan del Rosal 16, Madrid 28040, Spain
- Enrique J Carmona
- Department of Artificial Intelligence, ETS Ingeniería Informática, Universidad Nacional de Educación a Distancia (UNED), C/ Juan del Rosal 16, Madrid 28040, Spain
- Julián García-Feijoó
- Department of Ophthalmology, Faculty of Medicine, Complutense University, Madrid, Spain; Ocular Pathology National Net OFTARED of the Institute of Health Carlos III, Spain; Department of Ophthalmology, Sanitary Research Institute of the San Carlos Clinical Hospital, Madrid, Spain
|
47
|
Sigut J, Nunez O, Fumero F, Gonzalez M, Arnay R. Contrast based circular approximation for accurate and robust optic disc segmentation in retinal images. PeerJ 2017; 5:e3763. [PMID: 28894642 PMCID: PMC5592085 DOI: 10.7717/peerj.3763] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2017] [Accepted: 08/15/2017] [Indexed: 11/20/2022] Open
Abstract
A new method for automatic optic disc localization and segmentation is presented. The localization procedure combines vascular and brightness information to provide the best estimate of the optic disc center, which is the starting point for the segmentation algorithm. Detection rates of 99.58% and 100% were achieved for the Messidor and ONHSD databases, respectively. A simple circular approximation to the optic disc boundary is proposed, based on the maximum average contrast between the inner and outer ring of a circle centered on the estimated location. Average overlap coefficients of 0.890 and 0.865 were achieved on the same datasets, outperforming other state-of-the-art methods. The results obtained confirm the advantages of using a simple circular model under non-ideal conditions, as opposed to more complex deformable models.
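The contrast-based circular fit described above can be sketched on a toy image: given the estimated center, pick the radius whose circle maximises the average contrast between the pixels just inside and just outside it. The synthetic image, center, ring bandwidth and radius range below are illustrative assumptions, not the paper's exact parameters:

```python
import math

def ring_means(image, cx, cy, r, band=2.0):
    """Mean intensity inside radius r and in the annulus [r, r + band)."""
    inner, outer = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            d = math.hypot(x - cx, y - cy)
            if d < r:
                inner.append(v)
            elif d < r + band:
                outer.append(v)
    return sum(inner) / len(inner), sum(outer) / len(outer)

def best_radius(image, cx, cy, radii):
    """Radius with the maximum inner-minus-outer average contrast."""
    def contrast(r):
        inner_mean, outer_mean = ring_means(image, cx, cy, r)
        return inner_mean - outer_mean
    return max(radii, key=contrast)

# Synthetic 40x40 image: a bright disc of radius 8 centered at (20, 20).
image = [[1.0 if math.hypot(x - 20, y - 20) <= 8 else 0.0 for x in range(40)]
         for y in range(40)]
fitted = best_radius(image, 20, 20, range(4, 14))
```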
Affiliation(s)
- Jose Sigut
- Department of Computer Engineering and Systems, Universidad de La Laguna, San Cristobal de La Laguna, Spain
- Omar Nunez
- Department of Computer Engineering and Systems, Universidad de La Laguna, San Cristobal de La Laguna, Spain
- Francisco Fumero
- Department of Computer Engineering and Systems, Universidad de La Laguna, San Cristobal de La Laguna, Spain
- Marta Gonzalez
- Department of Ophthalmology, Hospital Universitario de Canarias, San Cristobal de La Laguna, Spain
- Rafael Arnay
- Department of Computer Engineering and Systems, Universidad de La Laguna, San Cristobal de La Laguna, Spain
|
48
|
Integrating holistic and local deep features for glaucoma classification. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2016:1328-1331. [PMID: 28268570 DOI: 10.1109/embc.2016.7590952] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Automated glaucoma detection is an important application of retinal image analysis. Compared with segmentation-based approaches, image-classification-based approaches have the potential for better performance. However, the problem remains challenging for two reasons. First, the limited sample size makes it difficult to learn effective features. Second, shape variations of the optic disc introduce misalignment. To address these problems, a new classification-based approach for glaucoma detection is proposed, in which deep convolutional networks trained on a large-scale generic dataset are used to represent the visual appearance, and holistic and local features are combined to mitigate the influence of misalignment. The proposed method achieves an area under the receiver operating characteristic curve of 0.8384 on the ORIGA dataset, which clearly demonstrates its effectiveness.
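The fusion idea above, combining features from the whole image (holistic) and from the optic-disc region (local), amounts to concatenating the two feature vectors before classification. In this minimal sketch the extractor is a dummy stand-in for the pretrained deep network used in the paper, returning summary statistics instead of CNN activations:

```python
def dummy_deep_features(region):
    """Stand-in extractor: summary statistics instead of CNN activations."""
    flat = [v for row in region for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return [mean, var, min(flat), max(flat)]

def fused_features(image, disc_crop):
    """Concatenate holistic (whole-image) and local (disc-region) features."""
    return dummy_deep_features(image) + dummy_deep_features(disc_crop)

# Illustrative 2x2 "images": the fused vector feeds a downstream classifier.
image = [[0.1, 0.2], [0.3, 0.9]]
disc_crop = [[0.8, 0.9], [0.7, 0.95]]
features = fused_features(image, disc_crop)
```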
|
49
|
Koh JE, Acharya UR, Hagiwara Y, Raghavendra U, Tan JH, Sree SV, Bhandary SV, Rao AK, Sivaprasad S, Chua KC, Laude A, Tong L. Diagnosis of retinal health in digital fundus images using continuous wavelet transform (CWT) and entropies. Comput Biol Med 2017; 84:89-97. [DOI: 10.1016/j.compbiomed.2017.03.008] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Revised: 02/16/2017] [Accepted: 03/12/2017] [Indexed: 10/20/2022]
|
50
|
Xiong L, Li H, Xu L. An Approach to Evaluate Blurriness in Retinal Images with Vitreous Opacity for Cataract Diagnosis. JOURNAL OF HEALTHCARE ENGINEERING 2017; 2017:5645498. [PMID: 29065620 PMCID: PMC5424487 DOI: 10.1155/2017/5645498] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Revised: 01/22/2017] [Accepted: 02/13/2017] [Indexed: 11/17/2022]
Abstract
Cataract is one of the leading causes of blindness worldwide. This paper proposes a method to evaluate blurriness for cataract diagnosis in retinal images with vitreous opacity. Three types of features are extracted: the pixel count of visible structures, the mean contrast between vessels and background, and the local standard deviation. To avoid misdetecting vitreous opacity as retinal structures, a morphological method is proposed to detect and remove such lesions from the visible-structure segmentation. Based on the extracted features, a decision tree is trained to classify retinal images into five grades of blurriness. The proposed approach was tested on 1355 clinical retinal images; compared with manual grading, the accuracies of two-class classification and five-grade grading are 92.8% and 81.1%, respectively. The kappa value between automatic and manual grading is 0.74 for five-grade grading, with both variance and P value less than 0.001. Experimental results show that the difference between automatic and manual grading is always within one grade, a marked improvement over other available methods. The proposed grading method provides a universal measure of cataract severity and can support the decision for cataract surgery.
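The grading step above maps the three extracted features to one of five blurriness grades via a trained decision tree. As an illustrative stand-in, simple hand-picked thresholds on a combined normalised score play that role here; the reference values, weights and thresholds are assumptions, not the paper's learned tree:

```python
def blurriness_grade(visible_pixels, vessel_contrast, local_std):
    """Map the three features to a grade 1 (clear) .. 5 (severely blurred)."""
    # Normalise each feature to [0, 1] against assumed full-image reference values:
    # more visible structure, contrast and detail all indicate a clearer image.
    score = (min(visible_pixels / 50000.0, 1.0)
             + min(vessel_contrast / 0.5, 1.0)
             + min(local_std / 30.0, 1.0)) / 3.0
    if score > 0.8:
        return 1
    if score > 0.6:
        return 2
    if score > 0.4:
        return 3
    if score > 0.2:
        return 4
    return 5
```

A real system would replace the thresholds with a decision tree fitted on manually graded images, as the paper does.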
Affiliation(s)
- Li Xiong
- School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Liang Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Beijing 100730, China
|