1. Verma PK, Kaur J. Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique. Journal of Imaging Informatics in Medicine 2024;37:1783-1799. PMID: 38438695; PMCID: PMC11300804; DOI: 10.1007/s10278-024-01010-3.
Abstract
Image segmentation is a crucial task in computer vision and image processing, with numerous segmentation algorithms found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others. In light of this, the widespread popularity of deep learning (DL) and machine learning (ML) has inspired new methods for segmenting images using DL and ML models. We offer a thorough analysis of this recent literature, encompassing the range of ground-breaking initiatives in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models; examine the most popular datasets; and evaluate the reported results.
Affiliation(s)
- Prem Kumari Verma
- Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India.
- Jagdeep Kaur
- Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India.
2. Sebastian A, Elharrouss O, Al-Maadeed S, Almaadeed N. GAN-Based Approach for Diabetic Retinopathy Retinal Vasculature Segmentation. Bioengineering (Basel) 2023;11:4. PMID: 38275572; PMCID: PMC10812988; DOI: 10.3390/bioengineering11010004.
Abstract
Most diabetes patients develop a condition known as diabetic retinopathy after having diabetes for a prolonged period. The ailment damages the blood vessels behind the retina and can progress to vision loss. Hence, doctors advise diabetes patients to screen their retinas regularly. Examining the fundus for this purpose is time-consuming, and there are too few ophthalmologists to check the ever-increasing number of diabetes patients. To address this issue, several computer-aided automated systems are being developed with the help of techniques such as deep learning. Extracting the retinal vasculature is a significant step in developing such systems. This paper presents a GAN-based model to perform retinal vasculature segmentation. The model achieves good results on the ARIA, DRIVE, and HRF datasets.
Affiliation(s)
- Anila Sebastian
- Computer Science and Engineering Department, Qatar University, Doha P.O. Box 2713, Qatar
3. Elaouaber ZA, Feroui A, Lazouni MEA, Messadi M. Blood vessel segmentation using deep learning architectures for aid diagnosis of diabetic retinopathy. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. DOI: 10.1080/21681163.2022.2145999.
Affiliation(s)
- Z.A. Elaouaber
- Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- A. Feroui
- Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- M.E.A. Lazouni
- Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- M. Messadi
- Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
4. An Automated Image Segmentation and Useful Feature Extraction Algorithm for Retinal Blood Vessels in Fundus Images. Electronics 2022. DOI: 10.3390/electronics11091295.
Abstract
The manual segmentation of the blood vessels in retinal images has numerous limitations. It is very time consuming and prone to human error, particularly given the highly tortuous structure of the blood vessels and the vast number of retinal images that need to be analysed. Therefore, an automatic algorithm for segmenting and extracting useful clinical features from the retinal blood vessels is critical to help ophthalmologists and eye specialists diagnose different retinal diseases and assess early treatment. An accurate, rapid, and fully automatic blood vessel segmentation and clinical feature measurement algorithm for retinal fundus images is proposed to improve diagnostic precision and decrease the workload of ophthalmologists. The pipeline of the proposed algorithm is composed of two essential stages: image segmentation and clinical feature extraction. Several comprehensive experiments were carried out to assess the performance of the developed fully automated segmentation algorithm in detecting the retinal blood vessels using two extremely challenging fundus image datasets, DRIVE and HRF. Initially, the accuracy of the proposed algorithm was evaluated in terms of adequately detecting the retinal blood vessels. In these experiments, five quantitative performance measures were calculated to validate the efficiency of the proposed algorithm, namely the Acc., Sen., Spe., PPV, and NPV measures, compared with current state-of-the-art vessel segmentation approaches on the DRIVE dataset. The results showed a significant improvement, achieving an Acc., Sen., Spe., PPV, and NPV of 99.55%, 99.93%, 99.09%, 93.45%, and 98.89%, respectively.
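The five measures reported above all derive from a pixel-wise confusion matrix. A minimal sketch (not the authors' code; the counts are toy values for illustration):

```python
# Acc., Sen., Spe., PPV, NPV from a pixel-wise vessel confusion matrix.
def vessel_metrics(tp, fp, tn, fn):
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall pixel accuracy
    sen = tp / (tp + fn)                   # sensitivity (recall on vessel pixels)
    spe = tn / (tn + fp)                   # specificity (recall on background)
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    return acc, sen, spe, ppv, npv

# Purely illustrative counts for one test image:
acc, sen, spe, ppv, npv = vessel_metrics(tp=900, fp=50, tn=8900, fn=150)
```

Because vessel pixels are a small minority of the image, accuracy alone can look near-perfect even when sensitivity is mediocre, which is why all five measures are reported together.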
5. Valanarasu JMJ, Sindagi VA, Hacihaliloglu I, Patel VM. KiU-Net: Overcomplete Convolutional Architectures for Biomedical Image and Volumetric Segmentation. IEEE Transactions on Medical Imaging 2022;41:965-976. PMID: 34813472; DOI: 10.1109/tmi.2021.3130469.
Abstract
Most methods for medical image segmentation use U-Net or its variants, as they have been successful in most applications. After a detailed analysis of these "traditional" encoder-decoder based approaches, we observed that they perform poorly in detecting smaller structures and are unable to segment boundary regions precisely. This issue can be attributed to the increase in receptive field size as we go deeper into the encoder. The extra focus on learning high-level features causes U-Net based approaches to learn less information about low-level features, which are crucial for detecting small structures. To overcome this issue, we propose using an overcomplete convolutional architecture where we project the input image into a higher dimension such that we constrain the receptive field from increasing in the deep layers of the network. We design a new architecture for image segmentation, KiU-Net, which has two branches: (1) an overcomplete convolutional network, Kite-Net, which learns to capture fine details and accurate edges of the input, and (2) U-Net, which learns high-level features. Furthermore, we also propose KiU-Net 3D, a 3D convolutional architecture for volumetric segmentation. We perform a detailed study of KiU-Net through experiments on five different datasets covering various image modalities. We achieve good performance with the additional benefits of fewer parameters and faster convergence. We also demonstrate that extensions of KiU-Net based on residual blocks and dense blocks result in further performance improvements. Code: https://github.com/jeya-maria-jose/KiU-Net-pytorch.
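The receptive-field argument can be checked with standard receptive-field arithmetic. The sketch below is an illustration of the idea, not the paper's code: it models each stage as a kernel-3 convolution followed by resampling, with stride 2 for a U-Net-style encoder and stride 1/2 for a Kite-Net-style overcomplete encoder.

```python
# Receptive-field growth: rf grows by (k-1)*jump per conv; resampling
# rescales the step ("jump") between adjacent feature-map pixels.
def receptive_field(strides, kernel=3):
    rf, jump = 1, 1.0
    for s in strides:
        rf = rf + (kernel - 1) * jump  # 3x3 conv enlarges the field
        jump *= s                      # downsample (s>1) or upsample (s<1)
    return rf

unet_rf = receptive_field([2, 2, 2])        # three downsampling stages
kite_rf = receptive_field([0.5, 0.5, 0.5])  # three upsampling stages
```

With downsampling the field grows geometrically (15 pixels after three stages), while with upsampling it stays small (4.5 pixels), matching the claim that projecting into a higher dimension constrains the deep layers to fine detail.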
6. Kovács G, Fazekas A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Medical Image Analysis 2021;75:102300. PMID: 34814057; DOI: 10.1016/j.media.2021.102300.
Abstract
In the last 15 years, the segmentation of vessels in retinal images has become an intensively researched problem in medical imaging, with hundreds of algorithms published. One of the de facto benchmarking data sets of vessel segmentation techniques is the DRIVE data set. Since DRIVE contains a predefined split of training and test images, the published performance results of the various segmentation techniques should provide a reliable ranking of the algorithms. Including more than 100 papers in the study, we performed a detailed numerical analysis of the coherence of the published performance scores. We found inconsistencies in the reported scores related to the use of the field of view (FoV), which has a significant impact on the performance scores. We attempted to eliminate the biases using numerical techniques to provide a more realistic picture of the state of the art. Based on the results, we have formulated several findings, most notably: despite the well-defined test set of DRIVE, most rankings in published papers are based on non-comparable figures; in contrast to the near-perfect accuracy scores reported in the literature, the highest accuracy score achieved to date is 0.9582 in the FoV region, which is 1% higher than that of human annotators. The methods we have developed for identifying and eliminating the evaluation biases can be easily applied to other domains where similar problems may arise.
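The FoV effect described above is easy to reproduce. The sketch below (toy arrays, not the paper's code) shows how scoring over the whole image, black border included, inflates accuracy relative to scoring inside the field-of-view mask only:

```python
import numpy as np

# Toy 10x10 "fundus image": a square FoV, one vessel row, a partial detection.
pred = np.zeros((10, 10), bool)
truth = np.zeros((10, 10), bool)
fov = np.zeros((10, 10), bool)
fov[2:8, 2:8] = True   # field of view (border pixels are trivially background)
truth[4, 2:8] = True   # ground-truth vessel inside the FoV
pred[4, 2:6] = True    # detector misses the last two vessel pixels

acc_full = (pred == truth).mean()       # whole image, border included
acc_fov = (pred == truth)[fov].mean()   # restricted to the FoV mask
```

The 64 border pixels are "correct" for free, so `acc_full` (0.98) exceeds `acc_fov` (about 0.944); with real images the border fraction, and hence the bias, depends on the dataset, which is exactly why scores computed with different FoV conventions are not comparable.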
Affiliation(s)
- György Kovács
- Analytical Minds Ltd., Árpád street 5, Beregsurány 4933, Hungary.
- Attila Fazekas
- University of Debrecen, Faculty of Informatics, P.O.BOX 400, Debrecen 4002, Hungary.
7. GCA-Net: global context attention network for intestinal wall vascular segmentation. International Journal of Computer Assisted Radiology and Surgery 2021;17:569-578. PMID: 34606060; DOI: 10.1007/s11548-021-02506-x.
Abstract
PURPOSE Precise segmentation of intestinal wall vessels is vital to colonic perforation prevention. However, interferences such as gastric juice complicate intestinal wall vessel images; in particular, vessels and mucosal folds are difficult to distinguish, which easily leads to mis-segmentation. In addition, insufficient feature extraction of intricate vessel structures may miss the tiny vessels whose rupture poses a risk. To overcome these challenges, an effective network is proposed for segmentation of intestinal wall vessels. METHODS A global context attention network (GCA-Net) that employs a multi-scale fusion attention (MFA) module is proposed to adaptively integrate local and global context information, improving the distinguishability of mucosal folds and vessels and, more importantly, the ability to capture tiny vessels. A parallel decoder is also used to introduce a contour loss function that addresses blurry and noisy blood vessel boundaries. RESULTS Extensive experimental results demonstrate the superiority of the GCA-Net, with an accuracy of 94.84%, specificity of 97.89%, F1-score of 73.80%, AUC of 96.30%, and mean IoU of 76.46% in fivefold cross-validation, exceeding the comparison methods. In addition, the public DRIVE dataset is used to verify the potential of GCA-Net in retinal vessel image segmentation. CONCLUSION A novel network for segmentation of intestinal wall vessels is developed that can suppress interferences in intestinal wall vessel images, improve the discernibility of blood vessels and mucosal folds, enhance vessel boundaries, and capture tiny vessels. Comprehensive experiments prove that the proposed GCA-Net can accurately segment the intestinal wall vessels.
8. Yang L, Wang H, Zeng Q, Liu Y, Bian G. A hybrid deep segmentation network for fundus vessels via deep-learning framework. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.085.
9. Accurate Diagnosis of Diabetic Retinopathy and Glaucoma Using Retinal Fundus Images Based on Hybrid Features and Genetic Algorithm. Applied Sciences (Basel) 2021. DOI: 10.3390/app11136178.
Abstract
Diabetic retinopathy (DR) and glaucoma can both become incurable if they are not detected early enough. Therefore, ophthalmologists worldwide strive to detect them by personally screening retinal fundus images. However, this procedure is not only tedious, subjective, and labor-intensive, but also error-prone. Worse yet, it may not even be attainable in some countries where ophthalmologists are in short supply. A practical solution to this complicated problem is a computer-aided diagnosis (CAD) system, which is the objective of this work. We propose an accurate system to detect either of the two diseases from retinal fundus images. The accuracy stems from two factors. First, we calculate a large set of hybrid features belonging to three groups: first-order statistics (FOS), higher-order statistics (HOS), and histogram of oriented gradients (HOG). These features are then reduced using a genetic algorithm scheme that selects only the most relevant and significant of them. Finally, the selected features are fed to a classifier to detect one of three classes: DR, glaucoma, or normal. Four classifiers are tested for this job: decision tree (DT), naive Bayes (NB), k-nearest neighbor (kNN), and linear discriminant analysis (LDA). The experimental work, conducted on three publicly available datasets (two of them merged into one), shows impressive performance in terms of four standard classification metrics, each computed using k-fold cross-validation for added credibility. The highest accuracy was provided by DT: 96.67% for DR, 100% for glaucoma, and 96.67% for normal.
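A toy sketch of the genetic feature-selection step, not the paper's implementation: individuals are bitmasks over the feature set, and a fitness proxy rewards keeping "informative" features while penalizing mask size. The informative index set and all constants are invented for illustration; a real system would score each mask by classifier accuracy under cross-validation.

```python
import random

random.seed(0)
N_FEATURES = 16
INFORMATIVE = {1, 4, 7, 11}  # invented stand-in for "features that help accuracy"

def fitness(mask):
    hits = sum(mask[i] for i in INFORMATIVE)
    return hits - 0.1 * sum(mask)  # accuracy proxy minus a size penalty

def evolve(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    pop[0] = [1] * N_FEATURES          # seed with the full feature set
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # elitism: top half survives unchanged
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]  # one-point crossover
            flip = random.randrange(N_FEATURES)
            child[flip] ^= 1           # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half survives each generation, the best fitness never decreases, and the size penalty pushes the search toward small masks containing the informative features.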
10. Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Frontiers in Cell and Developmental Biology 2021;9:659941. PMID: 34178986; PMCID: PMC8226261; DOI: 10.3389/fcell.2021.659941.
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have some limitations in A/V classification, especially vessel edge and end errors caused by single-scale processing and the blurred boundaries of arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net) that utilizes vessel distribution and edge information to enhance A/V classification, a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales to improve the feature extraction capability and robustness of the model. The VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets with different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results prove that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. We also test the Kailuan dataset against the other, fused training datasets; the results show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China; Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
11. Efficient BFCN for Automatic Retinal Vessel Segmentation. Journal of Ophthalmology 2021;2020:6439407. PMID: 33489334; PMCID: PMC7803293; DOI: 10.1155/2020/6439407.
Abstract
Retinal vessel segmentation has high value for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNN) lack large receptive fields or rich spatial information and cannot capture global context information over larger areas, so it is difficult to identify the lesion area and segmentation efficiency is poor. This paper presents a butterfly fully convolutional neural network (BFCN). First, in view of the low contrast between blood vessels and the background in retinal blood vessel images, automatic color enhancement (ACE) technology is used to increase the contrast between them. Second, a multiscale information extraction (MSIE) module in the backbone network captures global contextual information over a larger area to reduce the loss of feature information. At the same time, a transfer layer (T_Layer) not only alleviates the vanishing-gradient problem and repairs the information lost in the downsampling process but also obtains rich spatial information. Finally, as a novelty of this paper, the segmented image is postprocessed with Laplacian sharpening to improve the accuracy of vessel segmentation. The method has been verified on the DRIVE, STARE, and CHASE datasets, with accuracies of 0.9627, 0.9735, and 0.9688, respectively.
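A minimal sketch of the Laplacian-sharpening postprocessing step applied to a soft probability map; the kernel and scaling below are the common textbook choices, not necessarily the paper's exact settings, and the input is a toy array:

```python
import numpy as np

# 4-neighbour Laplacian kernel: responds to intensity edges.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], float)

def sharpen(img):
    """Add the Laplacian response back to the image and clamp to [0, 1]."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + 3, j:j + 3] * LAPLACIAN).sum()
    return np.clip(img + out, 0.0, 1.0)

soft = np.zeros((7, 7))
soft[3, :] = 0.6          # a faint vessel row in the probability map
crisp = sharpen(soft)
```

The faint vessel row saturates toward 1 while its immediate background stays at 0, so the vessel/background contrast is increased before the final thresholding.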
12. Samuel PM, Veeramalai T. VSSC Net: Vessel Specific Skip chain Convolutional Network for blood vessel segmentation. Computer Methods and Programs in Biomedicine 2021;198:105769. PMID: 33039919; DOI: 10.1016/j.cmpb.2020.105769.
Abstract
BACKGROUND AND OBJECTIVE Deep learning techniques are instrumental in developing network models that aid in the early diagnosis of life-threatening diseases. To screen and diagnose retinal fundus and coronary blood vessel disorders, the most important step is the proper segmentation of the blood vessels. METHODS This paper aims to segment the blood vessels from both coronary angiograms and retinal fundus images using a single VSSC Net after performing image-specific preprocessing. The VSSC Net uses two vessel extraction layers with added supervision on top of the base VGG-16 network. The vessel extraction layers comprise vessel-specific convolutional blocks to localize the blood vessels, skip chain convolutional layers to enable rich feature propagation, and a unique feature map summation. Supervision is associated with the two vessel extraction layers using separate loss/sigmoid functions. Finally, the weighted fusion of the individual loss/sigmoid functions produces the desired blood vessel probability map, which is then binarized and validated for performance. RESULTS The VSSC Net shows improved accuracy values on the standard retinal and coronary angiogram datasets. The computational time required to segment the blood vessels is 0.2 seconds on a GPU. Moreover, the vessel extraction layer uses only 0.4 million parameters to accurately segment the blood vessels. CONCLUSION The proposed VSSC Net, which segments blood vessels from both retinal fundus images and coronary angiograms, can be used for the early diagnosis of vessel disorders. Moreover, it could aid the physician in analyzing the blood vessel structure of images obtained from multiple imaging sources.
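The final fusion-and-binarization step can be sketched as follows; the side-output maps, fusion weights, and threshold are invented for illustration (in VSSC Net the fusion weights would be learned, not fixed):

```python
import numpy as np

# Two supervised side outputs (already passed through a sigmoid):
side1 = np.array([[0.2, 0.9], [0.4, 0.8]])  # vessel-specific block output
side2 = np.array([[0.1, 0.7], [0.6, 0.9]])  # skip-chain block output
w1, w2 = 0.6, 0.4                           # illustrative fusion weights

prob = w1 * side1 + w2 * side2              # weighted fusion -> probability map
mask = (prob >= 0.5).astype(np.uint8)       # binary vessel segmentation
```

The fused map smooths disagreements between the two branches, and only pixels where the weighted evidence clears the threshold end up in the binary mask.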
Affiliation(s)
- Pearl Mary Samuel
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.
13.
Abstract
Accurate segmentation of retinal blood vessels is a key step in the diagnosis of fundus diseases, among which cataracts, glaucoma, and diabetic retinopathy (DR) are the main causes of blindness. Most segmentation methods based on deep convolutional neural networks can effectively extract features, but convolution and pooling operations also filter out some useful information, and the final segmented retinal vessels suffer from problems such as low classification accuracy. In this paper, we propose a multi-scale residual attention network called MRA-UNet. Multi-scale inputs enable the network to learn features at different scales, which increases its robustness. In the encoding phase, we reduce the negative influence of the background and eliminate noise by using a residual attention module. We use a bottom reconstruction module to aggregate feature information under different receptive fields, so that the model can extract information on vessels of different thicknesses. Finally, a spatial activation module processes the upsampled image to further increase the difference between blood vessels and background, which promotes the recovery of small vessels at the edges. Our method was verified on the DRIVE, CHASE, and STARE datasets. The segmentation accuracy reached 96.98%, 97.58%, and 97.63%; the specificity reached 98.28%, 98.54%, and 98.73%; and the F-measure reached 82.93%, 81.27%, and 84.22%, respectively. We compared the experimental results with state-of-the-art methods such as U-Net, R2U-Net, and AG-UNet in terms of accuracy, sensitivity, specificity, F-measure, and AUC-ROC. In particular, MRA-UNet outperformed U-Net by 1.51%, 3.44%, and 0.49% on the DRIVE, CHASE, and STARE datasets, respectively.
14. Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE Journal of Biomedical and Health Informatics 2020;24:3384-3396. DOI: 10.1109/jbhi.2020.3002985.
15. Musial G, Queener HM, Adhikari S, Mirhajianmoghadam H, Schill AW, Patel NB, Porter J. Automatic Segmentation of Retinal Capillaries in Adaptive Optics Scanning Laser Ophthalmoscope Perfusion Images Using a Convolutional Neural Network. Translational Vision Science & Technology 2020;9:43. PMID: 32855847; PMCID: PMC7424955; DOI: 10.1167/tvst.9.2.43.
Abstract
Purpose Adaptive optics scanning laser ophthalmoscope (AOSLO) capillary perfusion images can possess large variations in contrast, intensity, and background signal, thereby limiting the use of global or adaptive thresholding techniques for automatic segmentation. We sought to develop an automated approach to segment perfused capillaries in AOSLO images. Methods 12,979 image patches were extracted from manually segmented AOSLO montages from 14 eyes and used to train a convolutional neural network (CNN) that classified pixels as capillaries, large vessels, background, or image canvas. 1764 patches were extracted from AOSLO montages of four separate subjects, and were segmented manually by two raters (ground truth) and automatically by the CNN, an Otsu's approach, and a Frangi approach. A modified Dice coefficient was created to account for slight spatial differences between the same manually and CNN-segmented capillaries. Results CNN capillary segmentation had an accuracy (0.94), a Dice coefficient (0.67), and a modified Dice coefficient (0.90) that were significantly higher than other automated approaches (P < 0.05). There were no significant differences in capillary density and mean segment length between manual ground-truth and CNN segmentations (P > 0.05). Conclusions Close agreement between the CNN and manual segmentations enables robust and objective quantification of perfused capillary metrics. The developed CNN is time and computationally efficient, and distinguishes capillaries from areas containing diffuse background signal and larger underlying vessels. Translational Relevance This automatic segmentation algorithm greatly increases the efficiency of quantifying AOSLO capillary perfusion images.
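The paper's exact modified-Dice formula is not given above, so the sketch below shows one plausible tolerance-aware variant under stated assumptions: a pixel counts as overlap if it falls within one pixel of the other mask, which forgives the slight spatial offsets between raters that plain Dice punishes.

```python
import numpy as np

def dilate(mask):
    """8-connected one-pixel binary dilation (no external dependencies)."""
    p = np.pad(mask, 1)
    out = mask.copy()
    h, w = mask.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out

def dice(a, b):
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def tolerant_dice(pred, truth):
    # Credit overlap against the 1-pixel dilation of the other mask.
    inter = (pred & dilate(truth)).sum() + (truth & dilate(pred)).sum()
    return inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), bool); truth[3, 1:6] = True  # capillary segment
pred = np.zeros((8, 8), bool); pred[4, 1:6] = True    # same segment, 1 px off
plain = dice(pred, truth)
tolerant = tolerant_dice(pred, truth)
```

A one-pixel shift of a thin capillary drives plain Dice to 0 even though the segmentation is clinically identical, while the tolerant score stays at 1, which is the motivation for a modified coefficient.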
Affiliation(s)
- Gwen Musial
- Department of Biomedical Engineering, University of Houston, Houston, TX, USA
- Hope M Queener
- College of Optometry, University of Houston, Houston, TX, USA
- Suman Adhikari
- College of Optometry, University of Houston, Houston, TX, USA
- Alexander W Schill
- Department of Biomedical Engineering, University of Houston, Houston, TX, USA; College of Optometry, University of Houston, Houston, TX, USA
- Nimesh B Patel
- College of Optometry, University of Houston, Houston, TX, USA
- Jason Porter
- Department of Biomedical Engineering, University of Houston, Houston, TX, USA; College of Optometry, University of Houston, Houston, TX, USA
16. Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. DOI: 10.3390/sym12071067.
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised, requiring large numbers of retinal images with high-quality pixel-level ground-truth labels. In practice, labeling these retinal images is very costly in both money and human effort. To deal with these problems, we propose a semi-supervised learning method that can be used for blood vessel segmentation with limited labeled data. In this method, we use an improved U-Net deep learning network to segment the blood vessel tree and, on this basis, implement a U-Net-based training dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experimental results demonstrate that the proposed methodology is able to avoid the problems of insufficient hand labels and achieve satisfactory performance.
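The dataset-updating idea can be sketched with a deliberately tiny stand-in model (a 1-D intensity threshold instead of the paper's U-Net, with invented toy data): train on labeled samples, pseudo-label the most confident unlabeled ones, fold them into the training set, and repeat.

```python
def train_threshold(samples):
    """Toy 'model': midpoint between the class means of 1-D intensities."""
    vessels = [x for x, y in samples if y == 1]
    background = [x for x, y in samples if y == 0]
    return (sum(vessels) / len(vessels) + sum(background) / len(background)) / 2

def self_train(labeled, unlabeled, rounds=3, margin=0.2):
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        t = train_threshold(labeled)
        # Pseudo-label only samples far from the decision boundary:
        confident = [x for x in pool if abs(x - t) > margin]
        labeled += [(x, int(x > t)) for x in confident]
        pool = [x for x in pool if abs(x - t) <= margin]
    return train_threshold(labeled)

t = self_train(labeled=[(0.1, 0), (0.9, 1)],
               unlabeled=[0.2, 0.3, 0.7, 0.8, 0.45, 0.55])
```

The confidence margin is what keeps the loop from polluting the training set with wrong pseudo-labels near the boundary; ambiguous samples simply stay in the pool.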
17. Retinal Blood Vessel Segmentation Using Hybrid Features and Multi-Layer Perceptron Neural Networks. Symmetry (Basel) 2020. DOI: 10.3390/sym12060894.
Abstract
Segmentation of retinal blood vessels is the first step for several computer-aided diagnosis (CAD) systems, not only for ocular diseases such as diabetic retinopathy (DR) but also for non-ocular diseases such as hypertension, stroke, and cardiovascular disease. In this paper, a supervised learning-based method using a multi-layer perceptron neural network and a carefully selected vector of features is proposed. In particular, for each pixel of a retinal fundus image, we construct a 24-D feature vector encoding information on the local intensity, morphological transformations, principal moments of phase congruency, Hessian, and difference-of-Gaussian values. A post-processing technique based on mathematical morphology operators is used to optimise the segmentation, and the selected feature vector yields the final blood vessel probability as a binary map image. The proposed method is tested on three well-known datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and CHASE_DB1. The experimental results, both visual and quantitative, testify to the robustness of the proposed method, which achieved accuracy, sensitivity, and specificity of 0.9607, 0.7542, and 0.9843 on DRIVE; 0.9632, 0.7806, and 0.9825 on STARE; and 0.9577, 0.7585, and 0.9846 on CHASE_DB1. Furthermore, the method is superior to seven similar state-of-the-art methods.
Collapse
|
18
|
Abstract
Diabetes can induce diseases including diabetic retinopathy, cataracts, and glaucoma. The blindness caused by these diseases is irreversible. Early analysis of retinal fundus images, including optic disc and optic cup detection and retinal blood vessel segmentation, can effectively identify these diseases. Existing methods lack sufficient discriminative power for the fundus image and are easily affected by pathological regions. This paper proposes a novel multi-path recurrent U-Net architecture for the segmentation of retinal fundus images. The effectiveness of the proposed network structure was demonstrated on two segmentation tasks: optic disc and optic cup segmentation, and retinal vessel segmentation. Our method achieved state-of-the-art results on the Drishti-GS1 dataset. For optic disc segmentation, the accuracy and Dice values reached 0.9967 and 0.9817, respectively; for optic cup segmentation, they reached 0.9950 and 0.8921, respectively. Our proposed method was also verified on the DRIVE retinal blood vessel segmentation dataset and achieved good accuracy.
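The abstract names a multi-path recurrent U-Net without giving architectural details. A common building block in recurrent U-Net variants refines a feature map over a few time steps, y_t = ReLU(conv(x) + conv(y_{t-1})); the single-channel NumPy sketch below shows that idea only, with assumed 3x3 kernels and step count, and is not a reconstruction of this paper's network.

```python
import numpy as np

def conv2d_same(image, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(kh):
        for j in range(kw):
            # accumulate each kernel tap over the shifted image window
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def recurrent_conv_block(x, w_in, w_rec, steps=3):
    """Recurrent refinement: y_t = ReLU(conv(x, w_in) + conv(y_{t-1}, w_rec)).
    The same weights are reused each step, deepening the effective
    receptive field without adding parameters."""
    y = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        y = np.maximum(0.0, conv2d_same(x, w_in) + conv2d_same(y, w_rec))
    return y
```

In a full network such a block would replace the plain convolution pairs at each U-Net level, with multiple encoder paths feeding the shared decoder.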
Collapse
|
19
|
Islam MM, Poly TN, Walther BA, Yang HC, Li YC(J). Artificial Intelligence in Ophthalmology: A Meta-Analysis of Deep Learning Models for Retinal Vessels Segmentation. J Clin Med 2020; 9:E1018. [PMID: 32260311 PMCID: PMC7231106 DOI: 10.3390/jcm9041018] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Revised: 03/27/2020] [Accepted: 03/28/2020] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND AND OBJECTIVE Accurate retinal vessel segmentation is often considered a reliable biomarker for the diagnosis and screening of various diseases, including cardiovascular, diabetic, and ophthalmologic diseases. Recently, deep learning (DL) algorithms have demonstrated high performance in segmenting retinal images, which may enable fast and lifesaving diagnoses. To our knowledge, there is no systematic review of the current work in this research area. Therefore, we performed a systematic review with a meta-analysis of relevant studies to quantify the performance of DL algorithms in retinal vessel segmentation. METHODS A systematic search of EMBASE, PubMed, Google Scholar, Scopus, and Web of Science was conducted for studies published between 1 January 2000 and 15 January 2020. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) procedure. A DL-based study design was mandatory for a study's inclusion. Two authors independently screened all titles and abstracts against predefined inclusion and exclusion criteria. We used the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool to assess the risk of bias and applicability. RESULTS Thirty-one studies were included in the systematic review; however, only 23 studies met the inclusion criteria for the meta-analysis. DL showed high performance on four publicly available databases, achieving an average area under the ROC curve of 0.96, 0.97, 0.96, and 0.94 on the DRIVE, STARE, CHASE_DB1, and HRF databases, respectively. The pooled sensitivity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.77, 0.79, 0.78, and 0.81, respectively. Moreover, the pooled specificity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.97, 0.97, 0.97, and 0.92, respectively. CONCLUSION The findings of our study showed that DL algorithms had high sensitivity and specificity for segmenting retinal vessels from digital fundus images. The future role of DL algorithms in retinal vessel segmentation is promising, especially for countries with limited access to healthcare. More comprehensive studies and global efforts are mandatory to evaluate the cost-effectiveness of DL-based tools for retinal disease screening worldwide.
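As an illustration of how per-database pooled sensitivities such as those reported above can be computed, this sketch applies simple fixed-effect inverse-variance pooling on the logit scale. The actual meta-analysis may use a different (e.g. random-effects) model, and the study counts in the usage note are invented.

```python
import math

def pool_proportions(events_totals):
    """Fixed-effect logit pooling of per-study proportions.
    Input: iterable of (true_positives, total) pairs, one per study."""
    num, den = 0.0, 0.0
    for tp, n in events_totals:
        p = (tp + 0.5) / (n + 1.0)            # continuity correction
        logit = math.log(p / (1.0 - p))
        # approximate variance of the logit-transformed proportion
        var = 1.0 / (tp + 0.5) + 1.0 / (n - tp + 0.5)
        w = 1.0 / var                          # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform
```

For example, three hypothetical studies each reporting sensitivity near 0.78 on 100 vessel pixels would pool to roughly the same value, with larger studies weighted more heavily.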
Collapse
Affiliation(s)
- Md. Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
| | - Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
| | - Bruno Andreas Walther
- Department of Biological Sciences, National Sun Yat-Sen University, Gushan District, Kaohsiung City 804, Taiwan;
| | - Hsuan Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
| | - Yu-Chuan (Jack) Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei 110, Taiwan
- TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei 110, Taiwan
| |
Collapse
|