1
Liang X, Wen H, Duan Y, He K, Feng X, Zhou G. Nonproliferative diabetic retinopathy dataset (NDRD): A database for diabetic retinopathy screening research and deep learning evaluation. Health Informatics J 2024; 30:14604582241259328. PMID: 38864242. DOI: 10.1177/14604582241259328.
Abstract
OBJECTIVES In this article, we provide a database of nonproliferative diabetic retinopathy (NDRD), which focuses on early diabetic retinopathy with hard exudates, and further explore its clinical application in disease recognition. METHODS We collected photographs of nonproliferative diabetic retinopathy taken with an Optos Panoramic 200 laser scanning ophthalmoscope, filtered out images of poor quality, and annotated the hard exudative lesions under the guidance of professional medical personnel. To validate the effectiveness of the dataset, five deep learning models were used to perform learning predictions on it, and model performance was assessed with standard evaluation metrics. RESULTS Lesions in nonproliferative diabetic retinopathy are smaller than those in proliferative retinopathy and more difficult to identify. Existing segmentation models perform poorly on these lesions, whereas a model targeting small lesions reached an intersection-over-union (IoU) of 66.12% for lesion segmentation, higher than ordinary segmentation models but still leaving considerable room for improvement. CONCLUSION Segmenting small hard exudative lesions is more challenging than segmenting large ones, and more targeted datasets are needed for model training. Compared with previous diabetic retinopathy datasets, the NDRD dataset pays more attention to small lesions.
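The intersection-over-union figure quoted above is the standard overlap metric for lesion segmentation. As an illustration only (not code from the cited paper), a minimal sketch of IoU for binary masks represented as sets of pixel coordinates:

```python
def iou(pred: set, target: set) -> float:
    """Intersection over union (Jaccard index) of two binary masks
    given as sets of (row, col) pixel coordinates."""
    union = pred | target
    if not union:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return len(pred & target) / len(union)

# Two overlapping 2x2 "lesions" on a grid: 2 shared pixels, 6 in the union
a = {(r, c) for r in range(0, 2) for c in range(0, 2)}
b = {(r, c) for r in range(1, 3) for c in range(0, 2)}
print(round(iou(a, b), 4))  # 0.3333
```

Real pipelines compute the same ratio over dense prediction arrays; the set formulation just makes the intersection/union definition explicit.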
Affiliation(s)
- Xing Liang
- Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, China
- Haiqi Wen
- Taiyuan University of Technology School of Software, Taiyuan, China
- Yajian Duan
- Department of Ophthalmology, Shanxi Bethune Hospital, Taiyuan, China
- Kan He
- Taiyuan University of Technology School of Mathematics, Taiyuan, China
- Xiufang Feng
- Taiyuan University of Technology School of Software, Taiyuan, China
- Guohong Zhou
- Department of Ophthalmology, Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, China
2
Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024; 38:426-433. PMID: 37667028. PMCID: PMC10858054. DOI: 10.1038/s41433-023-02717-3.
Abstract
This study aimed to evaluate the image quality assessment (IQA) practices and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search identified relevant datasets, and 20 were included in the analysis. Of these, 12 datasets mentioned performing IQA, but only eight specified the quality criteria used. The reported quality criteria varied widely across datasets, and accessing the information was often challenging. The findings highlight the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, and this information should be reported in a clear, specific, and accessible way whenever possible; at the same time, strict data quality standards must not limit data sharing. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose.
Affiliation(s)
- Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
3
Li F, Xiang W, Zhang L, Pan W, Zhang X, Jiang M, Zou H. Joint optic disk and cup segmentation for glaucoma screening using a region-based deep learning network. Eye (Lond) 2023; 37:1080-1087. PMID: 35437003. PMCID: PMC10102238. DOI: 10.1038/s41433-022-02055-w.
Abstract
OBJECTIVES To develop and validate an end-to-end region-based deep convolutional neural network (R-DCNN) that jointly segments the optic disc (OD) and optic cup (OC) in retinal fundus images for precise cup-to-disc ratio (CDR) measurement and glaucoma screening. METHODS In total, 2440 retinal fundus images were retrospectively obtained from 2033 participants. An R-DCNN was presented for joint OD and OC segmentation, with the segmentation problems formulated as object detection problems. We compared the R-DCNN's segmentation performance on our in-house dataset with that of four ophthalmologists and performed quantitative, qualitative, and generalization analyses on the publicly available DRISHTI-GS and RIM-ONE v3 datasets. The Dice similarity coefficient (DC), Jaccard coefficient (JC), overlapping error (E), sensitivity (SE), specificity (SP), and area under the curve (AUC) were measured. RESULTS On our in-house dataset, the proposed model achieved a 98.51% DC and a 97.07% JC for OD segmentation, and a 97.63% DC and a 95.39% JC for OC segmentation, a performance level comparable to that of the ophthalmologists. On the DRISHTI-GS dataset, our approach achieved a 97.23% DC and a 94.17% JC for OD segmentation, and a 94.56% DC and an 89.92% JC for OC segmentation. On the RIM-ONE v3 dataset, it produced a 96.89% DC and a 91.32% JC for OD segmentation, and an 88.94% DC and a 78.21% JC for OC segmentation. CONCLUSION The proposed approach achieved very encouraging performance on the OD and OC segmentation tasks as well as in glaucoma screening, and has the potential to serve as a useful tool for computer-assisted glaucoma screening.
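The Dice similarity coefficient and Jaccard coefficient reported above are both set-overlap measures, related by DC = 2·JC/(1 + JC). A minimal illustrative sketch (not the paper's evaluation code), again with masks as sets of pixel coordinates:

```python
def dice_jaccard(pred: set, target: set):
    """Return (Dice, Jaccard) for two binary masks given as pixel-coordinate sets."""
    if not pred and not target:
        return 1.0, 1.0  # both empty: perfect agreement by convention
    inter = len(pred & target)
    dice = 2 * inter / (len(pred) + len(target))
    jaccard = inter / len(pred | target)
    return dice, jaccard

a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(1, 0), (1, 1), (2, 0), (2, 1)}
dc, jc = dice_jaccard(a, b)
print(round(dc, 4), round(jc, 4))  # 0.5 0.3333 — note 2*jc/(1+jc) equals dc
```

Because the two metrics are monotonically related, rankings agree between them; Dice simply weights the intersection more heavily.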
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Wenjie Xiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Lijuan Zhang
- School of Electrical and Electronic Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Wenzhe Pan
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuedian Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- School of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, 201318, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Haidong Zou
- Department of Ophthalmology, Shanghai First People's Hospital, Shanghai, 200080, China
4
Lu S, Zhao H, Liu H, Li H, Wang N. PKRT-Net: Prior Knowledge-based Relation Transformer Network for Optic Cup and Disc Segmentation. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.03.044.
5
Chákṣu: A glaucoma specific fundus image database. Sci Data 2023; 10:70. PMID: 36737439. PMCID: PMC9898274. DOI: 10.1038/s41597-023-01943-4.
Abstract
We introduce Chákṣu, a retinal fundus image database for the evaluation of computer-assisted glaucoma prescreening techniques. The database contains 1345 color fundus images acquired with three brands of commercially available fundus cameras. Each image is provided with outlines of the optic disc (OD) and optic cup (OC) as smooth closed contours, together with a normal-versus-glaucomatous decision from five expert ophthalmologists. In addition, segmentation ground truths for the OD and OC are provided by fusing the expert annotations using the mean, median, majority, and Simultaneous Truth and Performance Level Estimation (STAPLE) algorithms. The performance indices show that ground-truth agreement with the experts is best with the STAPLE algorithm, followed by majority, median, and mean. Vertical, horizontal, and area cup-to-disc ratios are provided based on the expert annotations, and image-wise glaucoma decisions are provided based on majority voting among the experts. Chákṣu is the largest Indian-ethnicity-specific fundus image database with expert annotations and should aid the development of artificial intelligence-based glaucoma diagnostics.
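Of the fusion strategies listed, majority voting is the simplest to state precisely. A hypothetical sketch (not the database's own tooling) of per-pixel majority fusion of annotator masks:

```python
from collections import Counter

def majority_fuse(masks):
    """Fuse binary masks (sets of (row, col) pixels) from several annotators:
    keep a pixel if a strict majority of annotators marked it."""
    votes = Counter(px for m in masks for px in m)
    need = len(masks) // 2 + 1  # strict majority threshold
    return {px for px, n in votes.items() if n >= need}

experts = [
    {(0, 0), (0, 1), (1, 1)},
    {(0, 0), (1, 1)},
    {(0, 0), (0, 1)},
    {(0, 0), (2, 2)},
    {(0, 0), (0, 1), (1, 1)},
]
# (0,0) gets 5 votes, (0,1) and (1,1) get 3, (2,2) only 1; threshold is 3 of 5
print(sorted(majority_fuse(experts)))  # [(0, 0), (0, 1), (1, 1)]
```

STAPLE, by contrast, weights each annotator by estimated sensitivity and specificity rather than counting votes equally, which is why the abstract reports it agreeing best with the experts.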
6

7
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:3850. PMID: 35807135. PMCID: PMC9267177. DOI: 10.3390/jcm11133850.
Abstract
Public databases for glaucoma studies contain color images of the retina centered on the optic papilla. These databases are intended for research on standardized automated methodologies, such as deep learning techniques, which are used to solve complex problems in medical imaging, particularly the automated screening of glaucomatous disease. Advances in deep learning have shown the potential for implementing large-scale population glaucoma screening protocols, reducing diagnostic doubt among specialists and enabling early treatment to delay the onset of blindness. However, the images are obtained with different cameras, in distinct locations, from various population groups, and are centered on multiple parts of the retina. Further limitations include the small amount of data and the frequent lack of segmentations of the optic papilla and cup. This work offers contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases provide images with expert segmentations of the disc and cup and a division into training and test groups, serving as references for use in deep learning architectures. However, the data offered are not interchangeable: image quality and presentation are heterogeneous, the databases use different criteria for binary classification of glaucoma, they do not offer simultaneous pictures of both eyes, and they do not contain elements for early diagnosis.
8
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. PMID: 35888063. PMCID: PMC9321111. DOI: 10.3390/life12070973.
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. The survey shows that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the relative importance of the channels. Therefore, systematic experiments are conducted with a well-known U-shaped deep neural network (U-Net) to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
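Channel-wise analysis of the kind described starts by splitting an RGB image into its three planes. A minimal illustrative sketch, with the image represented as nested lists of (R, G, B) tuples (a real pipeline would use an image library):

```python
def split_channels(image):
    """Split an RGB image (rows of (R, G, B) tuples) into three single-channel planes."""
    red = [[px[0] for px in row] for row in image]
    green = [[px[1] for px in row] for row in image]
    blue = [[px[2] for px in row] for row in image]
    return red, green, blue

rgb = [
    [(120, 60, 30), (110, 55, 28)],
    [(100, 200, 40), (90, 180, 35)],
]
r, g, b = split_channels(rgb)
print(g)  # [[60, 55], [200, 180]] — the green plane
```

The green plane is the one the surveyed non-neural-network systems most often select, as it is commonly reported to give the highest vessel-to-background contrast in fundus photographs.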
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
9
Ramani RG, Shanthamalar JJ. Automated image quality appraisal through partial least squares discriminant analysis. Int J Comput Assist Radiol Surg 2022; 17:1367-1377. PMID: 35650346. DOI: 10.1007/s11548-022-02668-2.
Abstract
PURPOSE Automatic retinal fundus image quality analysis is an essential preliminary stage in computer-aided retinal disease diagnosis, ensuring that only good-quality fundus images proceed to localization and segmentation of retinal regions for accurate disease prediction. This paper presents new feature extraction methods that use full-reference and no-reference image quality metrics for image quality classification. METHODS Basic image features together with reference and no-reference quality features are extracted from each fundus image and passed to different classification techniques to determine whether the image is suitable for further diagnosis. Images are categorized as good or non-good quality based on the major characteristics of retinal fundus images: illumination, clarity, image intensity, contrast, and region visibility. The proposed system extracts these features automatically through image processing techniques and classifies image quality with several classification algorithms. RESULTS The system was thoroughly evaluated on 2674 retinal fundus images from publicly available datasets, namely MESSIDOR, Drishti-GS1, DRIVE, HRF, DRIONS-DB, DIARETDB0, DIARETDB1, IDRiD, INSPIRE-AVR, CHASE-DB1, ONHSD, DRIMDB, and e-ophtha-EX, achieving sensitivity, accuracy, precision, and F1 score of 99.36%, 96.79%, 96.29%, and 97.79%, respectively. CONCLUSION Compared with existing state-of-the-art approaches, the proposed system outperforms existing methods for image quality assessment, demonstrating its efficiency and robustness for automatic image analysis during retinal disease diagnosis.
Affiliation(s)
- R Geetha Ramani
- Department of Information Science and Technology, Anna University, Chennai, India
- J Jeslin Shanthamalar
- Sathyabama Institute of Science and Technology, Sathyabama University, Chennai, India
10
Wang Y, Yu X, Wu C. An Efficient Hierarchical Optic Disc and Cup Segmentation Network Combined with Multi-task Learning and Adversarial Learning. J Digit Imaging 2022; 35:638-653. PMID: 35212860. PMCID: PMC9156633. DOI: 10.1007/s10278-021-00579-3.
Abstract
Automatic and accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided diagnosis of ocular pathologies. Complex structures, such as blood vessels and the macular region, and the presence of lesions in fundus images pose great challenges for the segmentation task. Recently, convolutional neural network-based methods have shown their potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, a U-Net-like framework with an improved attention mechanism and focal loss detects accurate and reliable OD locations from full-resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage integrates a multi-task framework and adversarial learning for OD and OC segmentation separately. The multi-task framework predicts the OD and OC masks while simultaneously estimating contours and distance maps as auxiliary tasks, which helps guarantee the smoothness and shape of objects in the segmentation predictions. The adversarial learning technique encourages the segmentation network to produce outputs consistent with the true labels in spatial and shape distribution. We evaluate the method on two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments demonstrate that our approach produces competitive performance against state-of-the-art methods.
Affiliation(s)
- Ying Wang
- grid.412252.20000 0004 0368 6968College of Information Science and Engineering, Northeastern University, Liaoning, 110819 China
| | - Xiaosheng Yu
- grid.412252.20000 0004 0368 6968Faculty of Robot Science and Engineering, Northeastern University, Liaoning, 110819 China
| | - Chengdong Wu
- grid.412252.20000 0004 0368 6968Faculty of Robot Science and Engineering, Northeastern University, Liaoning, 110819 China
11
Shi D, Lin Z, Wang W, Tan Z, Shang X, Zhang X, Meng W, Ge Z, He M. A Deep Learning System for Fully Automated Retinal Vessel Measurement in High Throughput Image Analysis. Front Cardiovasc Med 2022; 9:823436. PMID: 35391847. PMCID: PMC8980780. DOI: 10.3389/fcvm.2022.823436.
Abstract
Motivation: Retinal microvasculature is a unique window for predicting and monitoring major cardiovascular diseases, but high-throughput deep learning tools for detailed retinal vessel analysis are lacking. We therefore aimed to develop and validate an artificial intelligence system (Retina-based Microvascular Health Assessment System, RMHAS) for fully automated segmentation and quantification of the retinal microvasculature. Results: RMHAS achieved good segmentation accuracy across datasets with diverse eye conditions and image resolutions, with AUCs of 0.91, 0.88, 0.95, 0.93, 0.97, 0.95, and 0.94 for artery segmentation and 0.92, 0.90, 0.96, 0.95, 0.97, 0.95, and 0.96 for vein segmentation on the AV-WIDE, AVRDB, HRF, IOSTAR, LES-AV, RITE, and our internal datasets, respectively. Agreement and repeatability analyses supported the robustness of the algorithm. Less than 2 s was needed to complete all required quantitative vessel analyses.
Affiliation(s)
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhihong Lin
- Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Wei Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Centre for Eye Research Australia, East Melbourne, VIC, Australia
- Xianwen Shang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Xueli Zhang
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wei Meng
- Guangzhou Vision Tech Medical Technology Co., Ltd., Guangzhou, China
- Zongyuan Ge
- Research Center and Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, East Melbourne, VIC, Australia
- Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Eye Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Correspondence: Mingguang He
12
Kako NA, Abdulazeez AM. Peripapillary Atrophy Segmentation and Classification Methodologies for Glaucoma Image Detection: A Review. Curr Med Imaging 2022; 18:1140-1159. PMID: 35260060. DOI: 10.2174/1573405618666220308112732.
Abstract
Information-based image processing and computer vision methods are used in many healthcare organizations to diagnose diseases. Irregularities in the visual system are identified from fundus images captured with a fundus camera. Among ophthalmic diseases, glaucoma is one of the most common conditions that can lead to neurodegenerative illness; unsuitable fluid pressure inside the eye is described as its major cause. Glaucoma has no symptoms in the early stages, and if left untreated it may result in total blindness, so diagnosing it at an early stage may prevent permanent vision loss. Manual inspection of the human eye is one solution, but it depends on the skills of the individuals involved. Automatic diagnosis of glaucoma, combining computer vision, artificial intelligence, and image processing, can aid in the prevention and detection of the disease. In this review article, we introduce the numerous approaches based on peripapillary atrophy segmentation and classification that can detect glaucoma, along with details of the publicly available image benchmarks and datasets and measures of performance. The review covers demonstrated research on models that objectively diagnose glaucoma via peripapillary atrophy, from low-level feature extraction to the current deep learning-based direction. The advantages and disadvantages of each method are addressed in detail, with tabular descriptions highlighting the results of each category, and the frameworks of each approach and the fundus image datasets are provided. In conclusion, the improved reporting in our study should help indicate possible directions for future work on glaucoma diagnosis.
Affiliation(s)
- Najdavan A Kako
- Duhok Polytechnic University, Technical Institute of Administration, MIS, Duhok, Iraq
13
Mahmood MT, Lee IH. Optic Disc Localization in Fundus Images through Accumulated Directional and Radial Blur Analysis. Comput Med Imaging Graph 2022; 98:102058. DOI: 10.1016/j.compmedimag.2022.102058.
14
Xiong H, Liu S, Sharan RV, Coiera E, Berkovsky S. Weak label based Bayesian U-Net for optic disc segmentation in fundus images. Artif Intell Med 2022; 126:102261. DOI: 10.1016/j.artmed.2022.102261.
15
Camara J, Neto A, Pires IM, Villasana MV, Zdravevski E, Cunha A. Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification. J Imaging 2022; 8:19. PMID: 35200722. PMCID: PMC8878383. DOI: 10.3390/jimaging8020019.
Abstract
Artificial intelligence techniques are now being applied in different medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper performs a literature review focused on state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. The automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the glaucomatous disease’s progression. As a result, we verified whether deep learning techniques may be helpful in performing accurate and low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors better monitor patients.
Affiliation(s)
- José Camara
- R. Escola Politécnica, Universidade Aberta, 1250-100 Lisboa, Portugal
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Alexandre Neto
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Ivan Miguel Pires
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- María Vanessa Villasana
- Centro Hospitalar Universitário Cova da Beira, 6200-251 Covilhã, Portugal
- UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
- Eftim Zdravevski
- Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
- António Cunha
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
16
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06770-5.
17
Zheng Y, Zhang X, Xu X, Tian Z, Du S. Deep level set method for optic disc and cup segmentation on fundus images. Biomed Opt Express 2021; 12:6969-6983. PMID: 34858692. PMCID: PMC8606159. DOI: 10.1364/boe.439713.
Abstract
Glaucoma is a leading cause of blindness. Measuring the vertical cup-to-disc ratio, combined with other clinical features, is one of the methods used to screen for glaucoma. In this paper, we propose a deep level set method to implement the segmentation of the optic cup (OC) and optic disc (OD). We present a multi-scale convolutional neural network as the prediction network to generate the level set initial contour and evolution parameters. The initial contour is then further refined based on the evolution parameters. The network is integrated with augmented prior knowledge and supervised by an active contour loss, which makes the level set evolution yield more accurate shape and boundary details. The experimental results on the REFUGE dataset show that the IoU values for the OC and OD are 93.61% and 96.69%, respectively. To evaluate the robustness of the proposed method, we further test the model on the Drishti-GS1 dataset. The segmentation results show that the proposed method outperforms the state-of-the-art methods.
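The IoU figures quoted above compare a predicted mask against the ground-truth annotation. A minimal pure-Python sketch of the metric (the toy 3x3 masks are illustrative, not from the dataset):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (nested lists of 0/1)."""
    inter = sum(p & t for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    union = sum(p | t for prow, trow in zip(pred, truth)
                for p, t in zip(prow, trow))
    return inter / union if union else 1.0  # both empty: perfect agreement

# Toy example: predicted cup region vs. annotated cup region
pred  = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
truth = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
print(iou(pred, truth))  # 3 / 4 = 0.75
```

The same function applies to optic-disc and optic-cup masks alike; only the mask contents differ.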
Affiliation(s)
- Yaoyue Zheng
- Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
- Xuetao Zhang
- Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China
- Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, China
- Zhiqiang Tian
- School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Shaoyi Du
- Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
18
Pachade S, Porwal P, Kokare M, Giancardo L, Mériaudeau F. NENet: Nested EfficientNet and adversarial learning for joint optic disc and cup segmentation. Med Image Anal 2021; 74:102253. [PMID: 34614474 DOI: 10.1016/j.media.2021.102253]
Abstract
Glaucoma is an ocular disease that threatens irreversible vision loss. Primary screening for glaucoma involves computing the optic cup (OC) to optic disc (OD) ratio, a widely accepted metric. Recent deep learning frameworks for OD and OC segmentation have shown promising results and ways to attain remarkable performance. In this paper, we present a novel segmentation network, Nested EfficientNet (NENet), that consists of an EfficientNetB4 encoder along with a nested network of pre-activated residual blocks, an atrous spatial pyramid pooling (ASPP) block and attention gates (AGs). A combination of cross-entropy and Dice coefficient (DC) loss is utilized to guide the network toward accurate segmentation. Further, a modified patch-based discriminator is designed for use with NENet to improve the local segmentation details. Three publicly available datasets, REFUGE, Drishti-GS, and RIM-ONE-r3, were utilized to evaluate the performance of the proposed network. In our experiments, NENet outperformed state-of-the-art methods for segmentation of OD and OC. Additionally, we show that NENet generalizes well across camera types and image resolutions. The obtained results suggest that the proposed technique has potential to be an important component of an automated glaucoma screening system.
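The cross-entropy plus Dice combination used to supervise NENet can be sketched for the binary case as follows; the equal weighting of the two terms and the flat-list representation are simplifying assumptions, not the paper's exact formulation:

```python
import math

def bce_dice_loss(probs, truth, eps=1e-7):
    """Binary cross-entropy plus (1 - Dice) over flat lists of predicted
    foreground probabilities and 0/1 labels. Equal weighting of the two
    terms is an assumption; papers often tune this balance."""
    n = len(probs)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(probs, truth)) / n
    inter = sum(p * t for p, t in zip(probs, truth))
    dice = (2 * inter + eps) / (sum(probs) + sum(truth) + eps)
    return bce + (1 - dice)

# A confident, correct prediction yields a small loss
print(bce_dice_loss([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0]))
```

The Dice term counteracts class imbalance (the cup occupies few pixels), while the cross-entropy term keeps per-pixel gradients well behaved.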
Affiliation(s)
- Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Luca Giancardo
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, USA
19
Huang C, Zong Y, Ding Y, Luo X, Clawson K, Peng Y. A new deep learning approach for the retinal hard exudates detection based on superpixel multi-feature extraction and patch-based CNN. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.145]
20
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. [PMID: 32410541 DOI: 10.2174/1573405615666190219102427]
Abstract
Diabetic Retinopathy (DR) is a major cause of blindness in diabetic patients. The growing population of diabetic patients and the difficulty of diagnosing DR at an early stage are limiting the screening capabilities of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions due to their comfortable, cost-effective and non-invasive acquisition procedure. Computer Aided Diagnosis (CAD) of DR based on these images can assist ophthalmologists and help in saving many sight years of diabetic patients. In a CAD system, preprocessing is a crucial phase that significantly affects performance. Commonly used preprocessing operations are enhancement of poor contrast, balancing of the illumination imbalance due to the spherical shape of the retina, noise reduction, image resizing to support multi-resolution analysis, color normalization, extraction of the field of view (FOV), etc. Also, the presence of blood vessels and the optic disc makes lesion detection more challenging, because these two structures exhibit attributes similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing the native defects, 2) segmentation of blood vessels, and 3) localization and segmentation of the optic disc. This paper presents a review of the state-of-the-art preprocessing techniques for these three categories of operations, highlighting their significant aspects and limitations. The survey concludes with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of CAD systems.
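As a toy illustration of the contrast-enhancement category of preprocessing, here is a linear contrast stretch in pure Python. Real pipelines typically use histogram equalization or CLAHE on the green channel; this simplified stand-in only shows the idea:

```python
def stretch_contrast(img, lo=0, hi=255):
    """Linear contrast stretch of a grayscale image (nested lists of
    intensities): remap the observed [min, max] range onto [lo, hi]."""
    flat = [v for row in img for v in row]
    vmin, vmax = min(flat), max(flat)
    if vmax == vmin:                       # flat image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (vmax - vmin)
    return [[round(lo + (v - vmin) * scale) for v in row] for row in img]

# Low-contrast patch (values huddled around mid-gray) spread to full range
patch = [[100, 110], [120, 130]]
print(stretch_contrast(patch))  # [[0, 85], [170, 255]]
```

Stretching makes faint lesion boundaries easier for both human graders and downstream detectors, at the cost of also amplifying noise.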
Affiliation(s)
- Muhammad Hussain
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib
- Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
21
Joint optic disc and optic cup segmentation based on boundary prior and adversarial learning. Int J Comput Assist Radiol Surg 2021; 16:905-914. [PMID: 33963969 DOI: 10.1007/s11548-021-02373-6]
Abstract
PURPOSE The most direct means of glaucoma screening is to use the cup-to-disc ratio via colour fundus photography, the first step of which is the precise segmentation of the optic cup (OC) and optic disc (OD). In recent years, convolutional neural networks (CNNs) have shown outstanding performance in medical segmentation tasks. However, most CNN-based methods ignore the effect of boundary ambiguity on performance, which leads to poor generalization. This paper is dedicated to solving this issue. METHODS In this paper, we propose a novel segmentation architecture, called BGA-Net, which introduces an auxiliary boundary branch and adversarial learning to jointly segment the OD and OC in a multi-label manner. To generate more accurate results, a generative adversarial network is exploited to encourage boundary and mask predictions to be similar to the ground truth. RESULTS Experimental results show that our BGA-Net system achieves state-of-the-art OC and OD segmentation performance on three publicly available datasets, i.e., the Dice scores for the optic disc/cup on the Drishti-GS, RIM-ONE-r3 and REFUGE datasets are 0.975/0.898, 0.967/0.872 and 0.951/0.866, respectively. CONCLUSION In this work, we not only achieve superior OD and OC segmentation results, but also confirm that the cup-to-disc ratio values calculated through the geometric relationship between the two are highly related to glaucoma.
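The geometric relationship mentioned in the conclusion is the cup-to-disc ratio computed from the two segmented masks. A minimal sketch with binary masks as nested lists (the ~0.6 screening threshold in the comment is a common rule of thumb, not a figure from this paper):

```python
def vertical_diameter(mask):
    """Vertical extent (in rows) of the foreground of a binary mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return rows[-1] - rows[0] + 1 if rows else 0

def vertical_cdr(cup, disc):
    """Vertical cup-to-disc ratio from two binary masks; a CDR well
    above ~0.6 is commonly treated as suspicious for glaucoma."""
    d = vertical_diameter(disc)
    return vertical_diameter(cup) / d if d else 0.0

disc = [[0, 1, 0]] * 5                               # disc spans 5 rows
cup = [[0, 0, 0]] + [[0, 1, 0]] * 3 + [[0, 0, 0]]    # cup spans 3 rows
print(vertical_cdr(cup, disc))  # 3 / 5 = 0.6
```

Because the CDR divides one diameter by another, small boundary errors in either mask propagate directly into the screening decision, which is why boundary-aware losses such as BGA-Net's matter.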
22
Shabbir A, Rasheed A, Shehraz H, Saleem A, Zafar B, Sajid M, Ali N, Dar SH, Shehryar T. Detection of glaucoma using retinal fundus images: A comprehensive review. Math Biosci Eng 2021; 18:2033-2076. [PMID: 33892536 DOI: 10.3934/mbe.2021106]
Abstract
Content-based image analysis and computer vision techniques are used in various health-care systems to detect diseases. Abnormalities in the human eye are detected through fundus images captured with a fundus camera. Among eye diseases, glaucoma is considered the second leading cause of blindness and can result in neurodegenerative illness. Inappropriate intraocular pressure within the human eye is reported as the main cause of this disease. There are no symptoms of glaucoma at earlier stages, and if the disease is left untreated it can lead to complete blindness. Early diagnosis of glaucoma can prevent permanent loss of vision. Manual examination of the human eye is a possible solution, but it is dependent on human effort. Automatic detection of glaucoma using a combination of image processing, artificial intelligence and computer vision can help to prevent and detect this disease. In this review article, we aim to present a comprehensive review of the various types of glaucoma, its causes, possible treatments, publicly available image benchmarks, performance metrics, and various approaches based on digital image processing, computer vision, and deep learning. The article presents a detailed study of published research models that aim to detect glaucoma, from low-level feature extraction to recent trends based on deep learning. The pros and cons of each approach are discussed in detail, and tabular representations are used to summarize the results of each category. We report our findings and provide possible future research directions in the conclusion.
Affiliation(s)
- Amsa Shabbir
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aqsa Rasheed
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Huma Shehraz
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aliya Saleem
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Bushra Zafar
- Department of Computer Science, Government College University, Faisalabad 38000, Pakistan
- Muhammad Sajid
- Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Nouman Ali
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Saadat Hanif Dar
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Tehmina Shehryar
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
23
Veena H, Muruganandham A, Senthil Kumaran T. A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. J King Saud Univ Comput Inf Sci 2021. [DOI: 10.1016/j.jksuci.2021.02.003]
24
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Due to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. It is therefore timely to summarize the recent developments in deep learning for fundus images in a review paper. In this review, we introduce 143 application papers with a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
25
Wang S, Yu L, Li K, Yang X, Fu CW, Heng PA. DoFE: Domain-Oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets. IEEE Trans Med Imaging 2020; 39:4237-4248. [PMID: 32776876 DOI: 10.1109/tmi.2020.3015224]
Abstract
Deep convolutional neural networks have significantly boosted the performance of fundus image segmentation when test datasets have the same distribution as the training datasets. However, in clinical practice, medical images often exhibit variations in appearance for various reasons, e.g., different scanner vendors and image quality. These distribution discrepancies could lead the deep networks to over-fit on the training datasets and lack generalization ability on the unseen test datasets. To alleviate this issue, we present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains by exploring the knowledge from multiple source domains. Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains to make the semantic features more discriminative. Specifically, we introduce a Domain Knowledge Pool to learn and memorize the prior information extracted from multi-source domains. Then the original image features are augmented with domain-oriented aggregated features, which are induced from the knowledge pool based on the similarity between the input image and multi-source domain images. We further design a novel domain code prediction branch to infer this similarity and employ an attention-guided mechanism to dynamically combine the aggregated features with the semantic features. We comprehensively evaluate our DoFE framework on two fundus image segmentation tasks, including the optic cup and disc segmentation and vessel segmentation. Our DoFE framework generates satisfying segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
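The similarity-based aggregation over the Domain Knowledge Pool can be loosely sketched as follows; dot-product similarity with softmax weighting is a simplifying assumption here, not the paper's exact attention-guided mechanism or domain-code branch:

```python
import math

def aggregate_domain_features(feat, knowledge_pool):
    """Combine source-domain prototype vectors with softmax weights given
    by their dot-product similarity to the input feature vector. This is
    an illustrative stand-in for DoFE-style aggregation, not the paper's
    exact formulation."""
    sims = [sum(f * p for f, p in zip(feat, proto)) for proto in knowledge_pool]
    m = max(sims)                            # subtract max for stability
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(feat)
    return [sum(w * proto[i] for w, proto in zip(weights, knowledge_pool))
            for i in range(dim)]

pool = [[1.0, 0.0], [0.0, 1.0]]              # two source-domain prototypes
agg = aggregate_domain_features([4.0, 0.0], pool)
print(agg)  # dominated by the first (more similar) prototype
```

The aggregated vector would then be concatenated with (or added to) the image's own semantic features before the segmentation head, enriching them with source-domain priors.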
26
Bian X, Luo X, Wang C, Liu W, Lin X. Optic disc and optic cup segmentation based on anatomy guided cascade network. Comput Methods Programs Biomed 2020; 197:105717. [PMID: 32957060 DOI: 10.1016/j.cmpb.2020.105717]
Abstract
BACKGROUND AND OBJECTIVE Glaucoma, a worldwide eye disease, may cause irreversible vision damage. If not treated properly at an early stage, glaucoma eventually deteriorates into blindness. Various glaucoma screening methods, e.g. Ultrasound Biomicroscopy (UBM), Optical Coherence Tomography (OCT), and the Heidelberg Retina Tomograph (HRT), are available. However, retinal fundus photography examination, because of its low cost, is one of the most common solutions used to diagnose glaucoma. Clinically, the cup-to-disc ratio is an important indicator in glaucoma diagnosis. Therefore, precise fundus image segmentation to calculate the cup-to-disc ratio is the basis for screening glaucoma. METHODS In this paper, we propose a deep neural network that uses anatomical knowledge to guide the segmentation of fundus images, accurately segmenting the optic cup and the optic disc to calculate the cup-to-disc ratio. Optic disc and optic cup segmentation are typical small-target segmentation problems in biomedical images. We propose an attention-based cascade network to effectively accelerate the convergence of small-target segmentation during training and accurately preserve the detailed contours of small targets. RESULTS Our method, which was validated in the MICCAI REFUGE fundus image segmentation competition, achieves a 93.31% Dice score in optic disc segmentation and an 88.04% Dice score in optic cup segmentation. Moreover, it achieves a high CDR evaluation score, which is useful for glaucoma screening. CONCLUSIONS The proposed method successfully introduces anatomical knowledge into the segmentation task and achieves state-of-the-art performance in fundus image segmentation. It can also be used for both automatic segmentation and semi-automatic segmentation with human interaction.
Affiliation(s)
- Xuesheng Bian
- Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
- Xiongbiao Luo
- Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
- Cheng Wang
- Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
- Weiquan Liu
- Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
- Xiuhong Lin
- Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
27
IOSUDA: an unsupervised domain adaptation with input and output space alignment for joint optic disc and cup segmentation. Appl Intell 2020. [DOI: 10.1007/s10489-020-01956-1]
28
Simultaneous segmentation of the optic disc and fovea in retinal images using evolutionary algorithms. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05060-w]
29
Tian Z, Zheng Y, Li X, Du S, Xu X. Graph convolutional network based optic disc and cup segmentation on fundus images. Biomed Opt Express 2020; 11:3043-3057. [PMID: 32637240 PMCID: PMC7316013 DOI: 10.1364/boe.390056]
Abstract
Calculating the cup-to-disc ratio, combined with other clinical features, is one of the methods for glaucoma screening. In this paper, we propose a graph convolutional network (GCN) based method to implement the optic disc (OD) and optic cup (OC) segmentation task. We first present a multi-scale convolutional neural network (CNN) as the feature extractor to generate feature maps. The GCN takes the feature maps concatenated with the graph nodes as the input for the segmentation task. The experimental results on the REFUGE dataset show that the Jaccard indices (Jacc) of the proposed method on OD and OC are 95.64% and 91.60%, respectively, while the Dice similarity coefficients (DSC) are 97.76% and 95.58%, respectively. The proposed method outperforms the state-of-the-art methods on the REFUGE leaderboard. We also evaluate the proposed method on the Drishti-GS1 dataset. The results show that the proposed method outperforms the state-of-the-art methods.
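The two overlap metrics reported above are related by DSC = 2J / (1 + J) for any single mask pair; dataset-averaged figures, as reported here, satisfy the identity only approximately:

```python
def jaccard_to_dice(j):
    """Dice similarity coefficient implied by a Jaccard index for one
    mask pair: DSC = 2J / (1 + J)."""
    return 2 * j / (1 + j)

# Reported OD Jaccard 95.64% implies DSC ~97.77%, close to the 97.76%
# reported above (averaging across images explains the small gap).
print(round(jaccard_to_dice(0.9564), 4))  # 0.9777
```

The inverse, J = DSC / (2 - DSC), is useful for comparing papers that report only one of the two metrics.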
Affiliation(s)
- Zhiqiang Tian
- School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Yaoyue Zheng
- School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Xiaojian Li
- School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Shaoyi Du
- Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China
- Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, China
30
Stolte S, Fang R. A survey on medical image analysis in diabetic retinopathy. Med Image Anal 2020; 64:101742. [PMID: 32540699 DOI: 10.1016/j.media.2020.101742]
Abstract
Diabetic Retinopathy (DR) represents a highly-prevalent complication of diabetes in which individuals suffer from damage to the blood vessels in the retina. The disease manifests itself through lesion presence, starting with microaneurysms, at the nonproliferative stage before being characterized by neovascularization in the proliferative stage. Retinal specialists strive to detect DR early so that the disease can be treated before substantial, irreversible vision loss occurs. The level of DR severity indicates the extent of treatment necessary - vision loss may be preventable by effective diabetes management in mild (early) stages, rather than subjecting the patient to invasive laser surgery. Using artificial intelligence (AI), highly accurate and efficient systems can be developed to help assist medical professionals in screening and diagnosing DR earlier and without the full resources that are available in specialty clinics. In particular, deep learning facilitates diagnosis earlier and with higher sensitivity and specificity. Such systems make decisions based on minimally handcrafted features and pave the way for personalized therapies. Thus, this survey provides a comprehensive description of the current technology used in each step of DR diagnosis. First, it begins with an introduction to the disease and the current technologies and resources available in this space. It proceeds to discuss the frameworks that different teams have used to detect and classify DR. Ultimately, we conclude that deep learning systems offer revolutionary potential to DR identification and prevention of vision loss.
Affiliation(s)
- Skylar Stolte
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
31
Channel and Spatial Attention Regression Network for Cup-to-Disc Ratio Estimation. Electronics 2020. [DOI: 10.3390/electronics9060909]
Abstract
The cup-to-disc ratio (CDR) is of great importance when assessing structural changes at the optic nerve head (ONH) and diagnosing glaucoma. While most efforts have focused on obtaining the CDR through CNN-based segmentation followed by its calculation, these methods usually only exploit features within the convolution kernel, a local operation that ignores the contribution of rich global features (such as distant pixels) to the current features. In this paper, a new end-to-end channel and spatial attention regression deep learning network is proposed that deduces the CDR directly from a regression perspective and combines the self-attention mechanism with the regression network. Our network consists of four modules: the feature extraction module, which extracts deep features expressing the complicated patterns of the optic disc (OD) and optic cup (OC); the attention module, comprising the channel attention block (CAB) and the spatial attention block (SAB), which improves feature representation by aggregating long-range contextual information; the regression module, which deduces the CDR directly; and the segmentation-auxiliary module, which focuses the model’s attention on the relevant features instead of the background region. In particular, the CAB selects relatively important feature maps along the channel dimension, shifting the emphasis to the OD and OC region, while the SAB learns discriminative feature representations at the pixel level by capturing intra-feature-map relationships. The experimental results on the ORIGA dataset show that our method obtains an absolute CDR error of 0.067 and a Pearson’s correlation coefficient of 0.694 when estimating the CDR, indicating great potential for CDR prediction.
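The two evaluation metrics reported above, absolute CDR error and Pearson's correlation, can be computed as follows; the toy CDR values are illustrative, not from the ORIGA dataset:

```python
import math

def mean_abs_error(pred, true):
    """Mean absolute error between predicted and reference CDR values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def pearson(pred, true):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(true) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    return cov / (sp * st)

# Toy predicted vs. annotated CDRs for four eyes
pred = [0.45, 0.60, 0.72, 0.55]
true = [0.40, 0.65, 0.70, 0.50]
print(round(mean_abs_error(pred, true), 3))
print(round(pearson(pred, true), 3))
```

Reporting both is informative: MAE captures the size of the errors, while Pearson's r captures whether the predictions rank eyes in the same order as the annotations.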
32
Barros DMS, Moura JCC, Freire CR, Taleb AC, Valentim RAM, Morais PSG. Machine learning applied to retinal image processing for glaucoma detection: review and perspective. Biomed Eng Online 2020; 19:20. [PMID: 32293466 PMCID: PMC7160894 DOI: 10.1186/s12938-020-00767-2]
Abstract
INTRODUCTION This is a systematic review of the main algorithms using machine learning (ML) in retinal image processing for glaucoma diagnosis and detection. ML has proven to be a significant tool for the development of computer-aided technology. Furthermore, secondary research has been widely conducted over the years for ophthalmologists. Such aspects indicate the importance of ML in the context of retinal image processing. METHODS The publications that compose this review were gathered from the Scopus, PubMed, IEEEXplore and Science Direct databases. Papers published between 2014 and 2019 were selected. Studies that used the segmented optic disc method were excluded, and only methods applying a classification process were considered. A systematic analysis was performed on these studies, and the results were summarized. DISCUSSION Based on the architectures used for ML in retinal image processing, some studies applied feature extraction and dimensionality reduction to detect and isolate important parts of the analyzed image, while other works utilized a deep convolutional network. Among the evaluated studies, the main difference between the architectures is the number of images required for processing and the high computational cost of deep learning techniques. CONCLUSIONS All the analyzed publications indicated that it is possible to develop an automated system for glaucoma diagnosis. The disease severity and its high occurrence rates justify the research that has been carried out. Recent computational techniques, such as deep learning, have shown promise in fundus imaging. Although such techniques require extensive databases and high computational costs, the studies show that data augmentation and transfer learning have been applied as alternative ways to optimize and reduce network training.
Affiliation(s)
- Daniele M S Barros
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Julio C C Moura
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Cefas R Freire
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Ricardo A M Valentim
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
- Philippi S G Morais
- Laboratory of Technological Innovation in Health, Federal University of Rio Grande do Norte, Natal, Brazil
33
Jiang Y, Duan L, Cheng J, Gu Z, Xia H, Fu H, Li C, Liu J. JointRCNN: A Region-Based Convolutional Neural Network for Optic Disc and Cup Segmentation. IEEE Trans Biomed Eng 2020; 67:335-343. [DOI: 10.1109/tbme.2019.2913211]
34
A region growing and local adaptive thresholding-based optic disc detection. PLoS One 2020; 15:e0227566. [PMID: 31999720 PMCID: PMC6991997 DOI: 10.1371/journal.pone.0227566]
Abstract
Automatic optic disc (OD) localization and segmentation is not a simple process, as OD appearance and size may vary significantly from person to person. This paper presents a novel approach for OD localization and segmentation that is fast as well as robust. In the proposed method, the image is first enhanced by de-hazing and then cropped around the OD region. The cropped image is converted to the HSV domain, and the V channel is used for OD detection. The vessels are extracted from the green channel in the cropped region by a multi-scale line detector and then removed by the Laplace transform. Local adaptive thresholding and region growing are applied for binarization. Furthermore, two region properties, eccentricity and area, are then used to detect the true OD region. Finally, ellipse fitting is used to fill the region. Several datasets are used for testing the proposed method. Test results show that the accuracy and sensitivity of the proposed method are much higher than those of existing state-of-the-art methods.
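The region-growing step can be sketched as a breadth-first flood fill whose acceptance test plays the role of the threshold; growing from the seed's intensity with a fixed tolerance is a simplification of the local adaptive thresholding described above:

```python
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed` by BFS, accepting 4-connected pixels
    whose intensity lies within `tol` of the seed's intensity. A
    simplified stand-in for the paper's adaptive-threshold criterion."""
    h, w = len(img), len(img[0])
    sr, sc = seed
    base = img[sr][sc]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Bright OD-like blob (200s) on a darker background (50s)
img = [[50, 50, 50, 50],
       [50, 200, 210, 50],
       [50, 205, 200, 50],
       [50, 50, 50, 50]]
print(len(region_grow(img, (1, 1))))  # the 4 bright pixels
```

The grown region would then be screened by eccentricity and area, as the abstract describes, to reject vessel fragments and keep the roughly circular OD candidate.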
35. Orlando JI, Fu H, Barbosa Breda J, van Keer K, Bathula DR, Diaz-Pinto A, Fang R, Heng PA, Kim J, Lee J, Lee J, Li X, Liu P, Lu S, Murugesan B, Naranjo V, Phaye SSR, Shankaranarayana SM, Sikka A, Son J, van den Hengel A, Wang S, Wu J, Wu Z, Xu G, Xu Y, Yin P, Li F, Zhang X, Xu Y, Bogunović H. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal 2020; 59:101570. [DOI: 10.1016/j.media.2019.101570]
36. Sengupta S, Singh A, Leopold HA, Gulati T, Lakshminarayanan V. Ophthalmic diagnosis using deep learning with fundus images - A critical review. Artif Intell Med 2019; 102:101758. [PMID: 31980096] [DOI: 10.1016/j.artmed.2019.101758]
Abstract
An overview of the applications of deep learning for ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning for segmentation of optic disk, optic cup, blood vessels as well as detection of lesions are reviewed. Recent deep learning models for classification of diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy are also discussed. Important critical insights and future research directions are given.
Affiliation(s)
- Sourya Sengupta
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Amitojdeep Singh
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Henry A Leopold
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Tanmay Gulati
- Department of Computer Science and Engineering, Manipal Institute of Technology, India
- Vasudevan Lakshminarayanan
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
37. Optic Disc and Cup Segmentation in Retinal Images for Glaucoma Diagnosis by Locally Statistical Active Contour Model with Structure Prior. Comput Math Methods Med 2019; 2019:8973287. [PMID: 31827591] [PMCID: PMC6886352] [DOI: 10.1155/2019/8973287]
Abstract
Accurate optic disc and optic cup segmentation plays an important role in diagnosing glaucoma. However, most existing segmentation approaches suffer from the following limitations. On the one hand, imaging devices or illumination variations always lead to intensity inhomogeneity in the fundus image. On the other hand, the spatial prior knowledge of the optic disc and optic cup, e.g., that the optic cup is always contained inside the optic disc region, is ignored. Therefore, the effectiveness of segmentation approaches is greatly reduced. Different from most previous approaches, we present a novel locally statistical active contour model with structure prior (LSACM-SP) approach to jointly and robustly segment the optic disc and optic cup structures. First, some preprocessing techniques are used to automatically extract the initial contour of the object. Then, we introduce the locally statistical active contour model (LSACM) to optic disc and optic cup segmentation in the presence of intensity inhomogeneity. Finally, taking the specific morphology of the optic disc and optic cup into consideration, a novel structure prior is proposed to guide the model to generate accurate segmentation results. Experimental results demonstrate the advantage and superiority of our approach on two publicly available databases, i.e., DRISHTI-GS and RIM-ONE r2, by comparing with some well-known algorithms.
38.
39. Wang S, Yu L, Yang X, Fu CW, Heng PA. Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation. IEEE Trans Med Imaging 2019; 38:2485-2495. [PMID: 30794170] [DOI: 10.1109/tmi.2019.2899910]
Abstract
Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks demonstrate promising progress in the joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as a backbone. Considering the specific morphology of OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentation. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain shift challenge by encouraging the segmentation in the target domain to be similar to the source ones. Since the whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design the pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving the segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved the first place in the OD and OC segmentation tasks in the MICCAI 2018 Retinal Fundus Glaucoma Challenge.
40. Xu YL, Lu S, Li HX, Li RR. Mixed Maximum Loss Design for Optic Disc and Optic Cup Segmentation with Deep Learning from Imbalanced Samples. Sensors (Basel) 2019; 19:E4401. [PMID: 31614560] [PMCID: PMC6833024] [DOI: 10.3390/s19204401]
Abstract
Glaucoma is a serious eye disease that can cause permanent blindness and is difficult to diagnose early. Optic disc (OD) and optic cup (OC) play a pivotal role in the screening of glaucoma. Therefore, accurate segmentation of OD and OC from fundus images is a key task in the automatic screening of glaucoma. In this paper, we designed a U-shaped convolutional neural network with multi-scale input and multi-kernel modules (MSMKU) for OD and OC segmentation. Such a design gives MSMKU a rich receptive field and is able to effectively represent multi-scale features. In addition, we designed a mixed maximum loss minimization learning strategy (MMLM) for training the proposed MSMKU. This training strategy can adaptively sort the samples by the loss function and re-weight the samples through data enhancement, thereby synchronously improving the prediction performance of all samples. Experiments show that the proposed method achieved state-of-the-art results for OD and OC segmentation on the RIM-ONE-V3 and DRISHTI-GS datasets. At the same time, the proposed method achieved satisfactory glaucoma screening performance on the RIM-ONE-V3 and DRISHTI-GS datasets. On datasets with an imbalanced distribution between typical and rare sample images, the proposed method obtained a higher accuracy than existing deep learning methods.
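The idea behind a mixed maximum loss, keeping hard and rare samples influential during training, can be sketched as a blend of the mean batch loss with the worst-case sample loss. The blend form and the `alpha` weight below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def mixed_maximum_loss(sample_losses, alpha=0.3):
    """Blend the average loss with the maximum per-sample loss so that
    the hardest sample in a batch keeps a direct gradient contribution."""
    losses = np.asarray(sample_losses, dtype=float)
    return (1.0 - alpha) * losses.mean() + alpha * losses.max()
```

With `alpha=0`, this reduces to ordinary mean-loss training; increasing `alpha` shifts emphasis toward the hardest sample.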
Affiliation(s)
- Yong-Li Xu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing 100029, China
- Shuai Lu
- Department of Mathematics, Beijing University of Chemical Technology, Beijing 100029, China
- Han-Xiong Li
- State Key Laboratory of High Performance Complex Manufacturing, Central South University, Changsha 410083, China
- Department of Systems Engineering and Engineering Management, City University of Hong Kong, Hong Kong 999077, China
- Rui-Rui Li
- College of Information Science & Technology, Beijing University of Chemical Technology, Beijing 100029, China
41. Gu Z, Cheng J, Fu H, Zhou K, Hao H, Zhao Y, Zhang T, Gao S, Liu J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation. IEEE Trans Med Imaging 2019; 38:2281-2292. [PMID: 30843824] [DOI: 10.1109/tmi.2019.2903562]
Abstract
Medical image segmentation is an important step in medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, and so on. Previously, U-net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations lead to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation.
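The atrous (dilated) convolutions in CE-Net's dense atrous convolution block enlarge the receptive field without adding parameters by spacing kernel taps apart. A minimal 1-D sketch of the mechanism (the kernel and dilation values are arbitrary examples, not CE-Net's configuration):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution whose taps are spaced `dilation`
    samples apart; the receptive field grows to (k-1)*dilation + 1
    while the parameter count stays at k."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

A 3-tap kernel with dilation 2 covers 5 input samples per output, which is how stacked atrous blocks aggregate wide context cheaply.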
42. Diaz-Pinto A, Colomer A, Naranjo V, Morales S, Xu Y, Frangi AF. Retinal Image Synthesis and Semi-Supervised Learning for Glaucoma Assessment. IEEE Trans Med Imaging 2019; 38:2211-2218. [PMID: 30843823] [DOI: 10.1109/tmi.2019.2903434]
Abstract
Recent works show that generative adversarial networks (GANs) can be successfully applied to image synthesis and semi-supervised learning, where, given a small labeled database and a large unlabeled database, the goal is to train a powerful classifier. In this paper, we trained a retinal image synthesizer and a semi-supervised learning method for automatic glaucoma assessment using an adversarial model on a small glaucoma-labeled database and a large unlabeled database. Various studies have shown that glaucoma can be monitored by analyzing the optic disc and its surroundings, and for that reason, the images used in this paper were automatically cropped around the optic disc. The novelty of this paper is to propose a new retinal image synthesizer and a semi-supervised learning method for glaucoma assessment based on the deep convolutional GANs. In addition, and to the best of our knowledge, this system is trained on an unprecedented number of publicly available images (86926 images). This system, hence, is not only able to generate images synthetically but to provide labels automatically. Synthetic images were qualitatively evaluated using t-SNE plots of features associated with the images and their anatomical consistency was estimated by measuring the proportion of pixels corresponding to the anatomical structures around the optic disc. The resulting image synthesizer is able to generate realistic (cropped) retinal images, and subsequently, the glaucoma classifier is able to classify them into glaucomatous and normal with high accuracy (AUC = 0.9017). The obtained retinal image synthesizer and the glaucoma classifier could then be used to generate an unlimited number of cropped retinal images with glaucoma labels.
43. Liu Q, Hong X, Li S, Chen Z, Zhao G, Zou B. A spatial-aware joint optic disc and cup segmentation method. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.039]
44. Snyder BM, Nam SM, Khunsongkiet P, Ausayakhun S, Leeungurasatien T, Leiter MR, Sevastopolsky A, Joye AS, Berlinberg EJ, Liu Y, Ramirez DA, Moe CA, Ausayakhun S, Stamper RL, Keenan JD. Accuracy of computer-assisted vertical cup-to-disk ratio grading for glaucoma screening. PLoS One 2019; 14:e0220362. [PMID: 31393904] [PMCID: PMC6687168] [DOI: 10.1371/journal.pone.0220362]
Abstract
Purpose Glaucoma screening can be performed by assessing the vertical-cup-to-disk ratio (VCDR) of the optic nerve head from fundus photography, but VCDR grading is inherently subjective. This study investigated whether computer software could improve the accuracy and repeatability of VCDR assessment. Methods In this cross-sectional diagnostic accuracy study, 5 ophthalmologists independently assessed the VCDR from a set of 200 optic disk images, with the median grade used as the reference standard for subsequent analyses. Eight non-ophthalmologists graded each image by two different methods: by visual inspection and with assistance from a custom-made publicly available software program. Agreement with the reference standard grade was assessed for each method by calculating the intraclass correlation coefficient (ICC), and the sensitivity and specificity determined relative to a median ophthalmologist grade of ≥0.7. Results VCDR grades ranged from 0.1 to 0.9 for visual assessment and from 0.1 to 1.0 for software-assisted grading, with a median grade of 0.4 for each. Agreement between each of the 8 graders and the reference standard was higher for visual inspection (median ICC 0.65, interquartile range 0.57 to 0.82) than for software-assisted grading (median ICC 0.59, IQR 0.44 to 0.71; P = 0.02, Wilcoxon signed-rank test). Visual inspection and software assistance had similar sensitivity and specificity for detecting glaucomatous cupping. Conclusion The computer software used in this study did not improve the reproducibility or validity of VCDR grading from fundus photographs compared with simple visual inspection. More clinical experience was correlated with higher agreement with the ophthalmologist VCDR reference standard.
Affiliation(s)
- Blake M. Snyder
- School of Medicine, University of Colorado Denver, Aurora, Colorado, United States of America
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Sang Min Nam
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
- Department of Ophthalmology, CHA Bundang Medical Center, CHA University, Seongnam, Republic of Korea
- Preeyanuch Khunsongkiet
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Sakarin Ausayakhun
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Maxwell R. Leiter
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Artem Sevastopolsky
- Youth Laboratories LLC, Moscow, Russia
- Skolkovo Institute of Science and Technology, Moscow, Russia
- Ashlin S. Joye
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Elyse J. Berlinberg
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Yingna Liu
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- David A. Ramirez
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Caitlin A. Moe
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Somsanguan Ausayakhun
- Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Robert L. Stamper
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
- Jeremy D. Keenan
- Francis I. Proctor Foundation, University of California San Francisco, San Francisco, CA, United States of America
- Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States of America
45. Rong Y, Xiang D, Zhu W, Shi F, Gao E, Fan Z, Chen X. Deriving external forces via convolutional neural networks for biomedical image segmentation. Biomed Opt Express 2019; 10:3800-3814. [PMID: 31452976] [PMCID: PMC6701547] [DOI: 10.1364/boe.10.003800]
Abstract
Active contours, or snakes, are widely applied to biomedical image segmentation. They are curves defined within an image domain that can move to object boundaries under the influence of internal forces and external forces, in which the internal forces are generally computed from the curves themselves and the external forces from image data. Designing external forces properly is a key issue in active contour algorithms, since the external forces play a leading role in the evolution of active contours. One of the most popular external forces for active contour models is gradient vector flow (GVF). However, GVF is sensitive to noise and false edges, which limits its application area. To handle this problem, in this paper, we propose using GVF as a reference to train a convolutional neural network to derive an external force. The derived external force is then integrated into the active contour models for curve evolution. Three clinical applications, segmentation of the optic disk in fundus images, fluid in retinal optical coherence tomography images, and the fetal head in ultrasound images, are employed to evaluate the proposed method. The results show that the proposed method is very promising, since it achieves competitive performance for different tasks compared to the state-of-the-art algorithms.
Affiliation(s)
- Yibiao Rong
- School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Contributed equally to this work
- Dehui Xiang
- School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Contributed equally to this work
- Weifang Zhu
- School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Fei Shi
- School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Zhun Fan
- Key Laboratory of Digital Signal and Image Processing of Guangdong Provincial, College of Engineering, Shantou University, 515063, Shantou, China
- Xinjian Chen
- School of Electrical and Information Engineering, Soochow University, 215006, Suzhou, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215123, Suzhou, China
46. Shankaranarayana SM, Ram K, Mitra K, Sivaprakasam M. Fully Convolutional Networks for Monocular Retinal Depth Estimation and Optic Disc-Cup Segmentation. IEEE J Biomed Health Inform 2019; 23:1417-1426. [DOI: 10.1109/jbhi.2019.2899403]
47. Statistical Edge Detection and Circular Hough Transform for Optic Disk Localization. Appl Sci (Basel) 2019. [DOI: 10.3390/app9020350]
Abstract
Accurate and efficient localization of the optic disk (OD) in retinal images is an essential process for the diagnosis of retinal diseases, such as diabetic retinopathy, papilledema, and glaucoma, in automatic retinal analysis systems. This paper presents an effective and robust framework for automatic detection of the OD. The framework begins with the elimination of pixels below the average brightness level of the retinal images. Next, a method based on the modified robust rank order was used for edge detection. Finally, the circular Hough transform (CHT) was performed on the obtained retinal images for OD localization. Three public datasets were used to evaluate the performance of the proposed method. The optic disks were successfully located with success rates of 100%, 96.92%, and 98.88% for the DRIVE, DIARETDB0, and DIARETDB1 datasets, respectively.
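The circular Hough transform step can be sketched as a voting accumulator over candidate circle centres. This minimal version assumes a single known radius; real pipelines such as the one above sweep a range of radii and pre-filter the edge map.

```python
import numpy as np

def hough_circle_center(edge_mask, radius, n_angles=90):
    """Locate the most likely circle centre of a given radius:
    each edge pixel casts votes for all centres lying `radius`
    away from it, and the accumulator peak is returned."""
    h, w = edge_mask.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for t in thetas:
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)
```

On a synthetic circular edge, the accumulator peak lands at (or within a pixel of) the true centre.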
48. Jiang Y, Xia H, Xu Y, Cheng J, Fu H, Duan L, Meng Z, Liu J. Optic Disc and Cup Segmentation with Blood Vessel Removal from Fundus Images for Glaucoma Detection. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:862-865. [PMID: 30440527] [DOI: 10.1109/embc.2018.8512400]
Abstract
Glaucoma is one of the major causes of blindness. Researchers keep looking for better ways to detect glaucoma in its early stage before it gets worse and cannot be cured. Among existing methods, the vertical cup to disc ratio (CDR) has been found to be effective for glaucoma measurement, which is calculated from the diameters of the optic cup and disc regions. Therefore, in order to achieve a more accurate CDR, a good segmentation of the optic disc and cup regions is quite important. Noting that the shape of the disc and cup regions can be assumed to be an ellipse, in this work we propose to find the minimal bounding boxes for the two regions based on the recent advances of deep learning. Also, considering blood vessels, passing through the disc area in a fundus image, can affect the detection of the bounding boxes, we further propose to remove the blood vessels beforehand in order to further boost the overall performance. Comprehensive experiments show that our proposed method achieves state-of-the-art performance on ORIGA-650 for optic disc and cup segmentation.
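The vertical cup-to-disc ratio described in this abstract can be computed from binary disc and cup segmentation masks as the ratio of their vertical extents. This is a sketch assuming clean, co-registered masks; the paper itself derives the regions from detected bounding ellipses.

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from boolean masks: the ratio of
    the number of rows each structure spans."""
    def vertical_extent(mask):
        rows = np.flatnonzero(mask.any(axis=1))
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else float("nan")
```

A cup spanning 4 rows inside a disc spanning 10 rows yields a CDR of 0.4, below the commonly used glaucoma-suspect threshold of about 0.7 cited in reference 44.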
49. Stevenson CH, Hong SC, Ogbuehi KC. Development of an artificial intelligence system to classify pathology and clinical features on retinal fundus images. Clin Exp Ophthalmol 2018; 47:484-489. [PMID: 30370587] [DOI: 10.1111/ceo.13433]
Abstract
IMPORTANCE Artificial intelligence (AI) algorithms are under development for use in diabetic retinopathy photo screening pathways. To be clinically acceptable, such systems must also be able to classify other fundus abnormalities and clinical features at the point of care. BACKGROUND We aimed to develop an AI system that can detect several fundus pathologies and report relevant clinical features. DESIGN Convolutional neural network training with retrospective data set. PARTICIPANTS Colour fundus photos were obtained from publicly available fundus image databases. METHODS Images were uploaded to a web-based AI platform for training and validation of AI classifiers. Separate classifiers were created for each fundus pathology and clinical feature. MAIN OUTCOME MEASURES Accuracy, sensitivity, specificity and area under receiver operating characteristic curve (AUC) for each classifier. RESULTS We obtained 4435 images from publicly available fundus image databases. AI classifiers were developed for each disease state above. Although statistical performance was limited by the small sample size, average accuracy was 89%, average sensitivity was 75%, average specificity was 89% and average AUC was 0.58. CONCLUSION AND RELEVANCE This study is a proof-of-concept AI system that could be implemented within a diabetic photo-screening pathway. Performance was promising but not yet at the level that would be required for clinical application. We have shown that it is possible for clinicians to develop AI classifiers with no previous programming or AI knowledge, using standard laptop computers.
Affiliation(s)
- Clark H Stevenson
- Dunedin Hospital Eye Department, Dunedin, New Zealand; University of Otago, Dunedin, New Zealand
50.