1. Ghislain F, Beaudelaire ST, Daniel T. An improved semi-supervised segmentation of the retinal vasculature using curvelet-based contrast adjustment and generalized linear model. Heliyon 2024; 10:e38027. [PMID: 39347436 PMCID: PMC11437861 DOI: 10.1016/j.heliyon.2024.e38027]
Abstract
Diagnosis of most ophthalmic conditions, such as diabetic retinopathy, generally relies on an effective analysis of retinal blood vessels. Techniques that depend solely on the visual observation of clinicians can be tedious and prone to numerous errors. In this article, we propose a semi-supervised automated approach for segmenting blood vessels in retinal color images. Our method effectively combines classical filters with a Generalized Linear Model (GLM). We first apply the Curvelet Transform along with the Contrast-Limited Adaptive Histogram Equalization (CLAHE) technique to significantly enhance the contrast of vessels in the retinal image during the preprocessing phase. We then use the Gabor transform to extract features from the enhanced image. For retinal vasculature identification, we use a GLM learning model with a simple identity link function. Binarization is then performed using an automatic optimal threshold based on the maximum Youden index. A morphological cleaning operation is applied to remove isolated or unwanted segments from the final segmented image. The proposed model is evaluated using statistical parameters on images from three publicly available databases. We achieve average accuracies of 0.9593, 0.9553 and 0.9643, with Receiver Operating Characteristic (ROC) analysis yielding Area Under the Curve (AUC) values of 0.9722, 0.9682 and 0.9767 for the CHASE_DB1, STARE and DRIVE databases, respectively. Our results exceed those of several of the best similar approaches published recently on multiple datasets.
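The binarization step described above, an automatic threshold chosen by maximizing the Youden index, can be illustrated with a short sketch. This is not the authors' code; it assumes a flattened vessel-probability map `scores` produced by the GLM and a matching ground-truth mask `labels` from labeled training images, and it uses scikit-learn's ROC utilities.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(labels: np.ndarray, scores: np.ndarray) -> float:
    """Return the threshold that maximizes the Youden index J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(labels.ravel(), scores.ravel())
    j = tpr - fpr                              # Youden index at each candidate threshold
    return float(thresholds[np.argmax(j)])

def binarize(prob_map: np.ndarray, threshold: float) -> np.ndarray:
    """Binarize a vessel-probability map at the chosen threshold."""
    return (prob_map >= threshold).astype(np.uint8)
```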
Affiliation(s)
- Feudjio Ghislain
- Research Unit of Condensed Matter, Electronics and Signal Processing (UR-MACETS). Department of Physics, Faculty of Sciences, University of Dschang, P.O. Box 67, Dschang, Cameroon
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box: 134, Bandjoun, Cameroon
- Saha Tchinda Beaudelaire
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box: 134, Bandjoun, Cameroon
- Tchiotsop Daniel
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box: 134, Bandjoun, Cameroon
2. Ghislain F, Beaudelaire ST, Daniel T. An accurate unsupervised extraction of retinal vasculature using curvelet transform and classical morphological operators. Comput Biol Med 2024; 178:108801. [PMID: 38917533 DOI: 10.1016/j.compbiomed.2024.108801]
Abstract
BACKGROUND Many ophthalmic disorders, such as diabetic retinopathy and hypertension, can be diagnosed early by analyzing changes in the vascular structure of the retina. The accuracy and efficiency of retinal blood vessel segmentation are important parameters that can help the ophthalmologist better characterize the targeted anomalies. METHOD In this work, we propose a new method for accurate unsupervised automatic segmentation of retinal blood vessels based on a simple and adequate combination of classical filters. Initially, the contrast of vessels in the retinal image is significantly improved by combining the Curvelet Transform with the commonly used Contrast-Limited Adaptive Histogram Equalization (CLAHE) technique. Afterwards, a morphological Top-Hat operator is applied to highlight the vascular network. Then, global Otsu thresholding, which minimizes intra-class variance, is applied for vessel detection. Finally, a cleanup operation based on a Matched Filter with the First-Order Derivative of Gaussian, using fixed parameters, removes unwanted or isolated segments. We test the proposed method on images from the two publicly available STARE and DRIVE databases. RESULTS We achieve, in terms of sensitivity, specificity and accuracy, respective average performances of 0.7407, 0.9878 and 0.9667 on the DRIVE database, and 0.7028, 0.9755 and 0.9507 on the STARE database. CONCLUSIONS Compared to some recent similar works, the obtained results are quite promising and can thus contribute to the optimization of automatic tools to aid in the diagnosis of eye disorders.
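The classical unsupervised chain summarized in this abstract (contrast enhancement, top-hat filtering, Otsu thresholding, and a size-based clean-up) can be sketched roughly as follows. This is only an illustration under assumed parameters: the curvelet enhancement and the matched-filter clean-up of the paper are omitted, the green channel is used as the working image, and the kernel sizes and minimum component area are placeholders.

```python
import cv2
import numpy as np

def rough_vessel_mask(bgr_image: np.ndarray, min_area: int = 50) -> np.ndarray:
    green = bgr_image[:, :, 1]                                  # vessels contrast best in the green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    # Black top-hat emphasizes dark, elongated structures (vessels) against the brighter background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
    # Global Otsu threshold separates vessel responses from the background.
    _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Crude clean-up: drop small, isolated connected components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```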
Affiliation(s)
- Feudjio Ghislain
- Unité de Recherche de Matière Condensée, d'Electronique et de Traitements du Signal (URMACETS), Department of Physics, Faculty of Science, University of Dschang, P.O.Box 67, Dschang, Cameroon; Unité de Recherche d'Automatique et d'Informatique Appliquée (URAIA), IUT-FV de Bandjoun, Université de Dschang-Cameroun, B.P. 134, Bandjoun, Cameroon.
- Saha Tchinda Beaudelaire
- Unité de Recherche d'Automatique et d'Informatique Appliquée (URAIA), IUT-FV de Bandjoun, Université de Dschang-Cameroun, B.P. 134, Bandjoun, Cameroon.
- Tchiotsop Daniel
- Unité de Recherche d'Automatique et d'Informatique Appliquée (URAIA), IUT-FV de Bandjoun, Université de Dschang-Cameroun, B.P. 134, Bandjoun, Cameroon.
3. Li J, Gao G, Yang L, Liu Y. A retinal vessel segmentation network with multiple-dimension attention and adaptive feature fusion. Comput Biol Med 2024; 172:108315. [PMID: 38503093 DOI: 10.1016/j.compbiomed.2024.108315]
Abstract
The incidence of blinding eye diseases is highly correlated with changes in retinal morphology, which are clinically detected by segmenting retinal structures in fundus images. However, some existing methods have limitations in accurately segmenting thin vessels. In recent years, deep learning has made great strides in medical image segmentation, but the loss of edge information caused by repeated convolution and pooling limits the final segmentation accuracy. To this end, this paper proposes a pixel-level retinal vessel segmentation network with multiple-dimension attention and adaptive feature fusion. A multiple-dimension attention enhancement (MDAE) block is proposed to acquire more local edge information. Meanwhile, a deep guidance fusion (DGF) block and a cross-pooling semantic enhancement (CPSE) block are proposed to acquire more global context. Further, the predictions of different decoding stages are learned and aggregated by an adaptive weight learner (AWL) unit to obtain the best weights for effective feature fusion. Experimental results on three public fundus image datasets show that the proposed network effectively enhances segmentation performance on retinal blood vessels. In particular, the proposed method achieves AUC values of 98.30%, 98.75%, and 98.71% on the DRIVE, CHASE_DB1, and STARE datasets, respectively, while the F1 score exceeds 83% on all three datasets. The source code of the proposed model is available at https://github.com/gegao310/VesselSeg-Pytorch-master.
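The adaptive weight learner idea, in which predictions from several decoding stages are fused with learned rather than fixed weights, can be sketched in a few lines of PyTorch. This is a hypothetical illustration of the concept, not the released implementation; the class name `AdaptiveFusion` and the softmax-normalized per-stage weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Fuse per-stage probability maps with learnable, softmax-normalized weights."""
    def __init__(self, num_stages: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_stages))     # one learnable weight per decoder stage

    def forward(self, stage_preds):
        # stage_preds: list of (B, 1, H, W) maps already upsampled to the same size.
        w = F.softmax(self.logits, dim=0)
        return sum(w[i] * p for i, p in enumerate(stage_preds))

# Example: fuse = AdaptiveFusion(num_stages=4); out = fuse([p1, p2, p3, p4])
```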
Affiliation(s)
- Jianyong Li
- College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, Henan Province, 450002, China
- Ge Gao
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, Henan Province, 450001, China.
- Lei Yang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, Henan Province, 450001, China.
- Yanhong Liu
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, Henan Province, 450001, China
4. Li C, Li Z, Liu W. TDCAU-Net: retinal vessel segmentation using transformer dilated convolutional attention-based U-Net method. Phys Med Biol 2023; 69:015003. [PMID: 38052089 DOI: 10.1088/1361-6560/ad1273]
Abstract
Retinal vessel segmentation plays a vital role in the medical field, facilitating the identification of numerous chronic conditions from retinal vessel images, including diabetic retinopathy, hypertensive retinopathy, glaucoma, and others. Although the U-Net model has shown promising results in retinal vessel segmentation, it tends to struggle with fine branching and dense vessel segmentation. To further enhance the precision of retinal vessel segmentation, we propose a novel approach called transformer dilated convolution attention U-Net (TDCAU-Net), which builds upon the U-Net architecture with improved Transformer-based dilated convolution attention mechanisms. The proposed model retains the three-layer architecture of the U-Net network. The Transformer component enables the learning of contextual information for each pixel in the image, while the dilated convolution attention prevents information loss. The algorithm efficiently addresses several challenges to optimize blood vessel detection. The process starts with a five-step preprocessing of the images, followed by dividing them into patches. Subsequently, the retinal images are fed into the modified U-Net network introduced in this paper for segmentation. The study employs eye fundus images from the DRIVE and CHASEDB1 databases for both training and testing. Evaluation metrics are used to compare the algorithm's results with state-of-the-art methods. The experimental analysis on both databases demonstrates that the algorithm achieves high sensitivity, specificity, accuracy, and AUC values: 0.8187, 0.9756, 0.9556, and 0.9795, respectively, on DRIVE, and 0.8243, 0.9836, 0.9738, and 0.9878, respectively, on CHASEDB1. These results demonstrate that the proposed approach outperforms state-of-the-art methods on both datasets. The TDCAU-Net model exhibits substantial capability in accurately segmenting fine branching and dense vessels, and its segmentation performance surpasses that of the U-Net algorithm and several mainstream methods.
Affiliation(s)
- Chunyang Li
- School of Electronics and Information Engineering, University of Science and Technology Liaoning, Anshan, People's Republic of China
- Zhigang Li
- School of Electronics and Information Engineering, University of Science and Technology Liaoning, Anshan, People's Republic of China
- Weikang Liu
- School of Electronics and Information Engineering, University of Science and Technology Liaoning, Anshan, People's Republic of China
5. Zhu YF, Xu X, Zhang XD, Jiang MS. CCS-UNet: a cross-channel spatial attention model for accurate retinal vessel segmentation. Biomed Opt Express 2023; 14:4739-4758. [PMID: 37791275 PMCID: PMC10545190 DOI: 10.1364/boe.495766]
Abstract
Precise segmentation of retinal vessels plays an important role in computer-assisted diagnosis. Deep learning models have been applied to retinal vessel segmentation, but their efficacy is limited by the significant scale variation of vascular structures and the intricate background of retinal images. This paper proposes a cross-channel spatial attention U-Net (CCS-UNet) for accurate retinal vessel segmentation. In comparison to other U-Net based models, our model employs a ResNeSt block for the encoder-decoder architecture. The block has a multi-branch structure that enables the model to extract more diverse vascular features, and it facilitates weight distribution across channels through the incorporation of soft attention, which effectively aggregates contextual information in vascular images. Furthermore, we propose an attention mechanism within the skip connection. This mechanism enhances feature integration across various layers, thereby mitigating the degradation of effective information. It helps acquire cross-channel information and enhances the localization of regions of interest, ultimately leading to improved recognition of vascular structures. In addition, a feature fusion module (FFM) is used to provide semantic information for a more refined vascular segmentation map. We evaluated CCS-UNet on five benchmark retinal image datasets: DRIVE, CHASEDB1, STARE, IOSTAR and HRF. Our proposed method exhibits superior segmentation efficacy compared to other state-of-the-art techniques, with global accuracies of 0.9617/0.9806/0.9766/0.9786/0.9834 and AUC values of 0.9863/0.9894/0.9938/0.9902/0.9855 on DRIVE, CHASEDB1, STARE, IOSTAR and HRF, respectively. Ablation studies are also performed to evaluate the relative contributions of different architectural components. The proposed model shows potential as a diagnostic aid for retinal diseases.
Affiliation(s)
- Xue-dian Zhang
- Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Min-shan Jiang
- Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
6. Ni J, Sun H, Xu J, Liu J, Chen Z. A feature aggregation and feature fusion network for retinal vessel segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104829]
7. Yao Z, Luo R, Xing C, Li F, Zhu G, Wang Z, Zhang G. 3D-FVS: construction and application of three-dimensional fundus vascular structure model based on single image features. Eye (Lond) 2023; 37:2505-2510. [PMID: 36522528 PMCID: PMC10397231 DOI: 10.1038/s41433-022-02364-0]
Abstract
BACKGROUND The fundus microvasculature can be visually observed with an ophthalmoscope and has been widely used in clinical practice. Due to the limitations of available equipment and technology, most studies have only utilized the two-dimensional planar features of the fundus microvasculature. METHODS This study proposes a novel method for establishing a three-dimensional fundus vascular structure model and generating hemodynamic characteristics based on a single image. Firstly, the fundus vasculature is segmented by our proposed network framework. Then, the length and width of vascular segments and the relationships among adjacent segments are collected to construct the three-dimensional vascular structure model. Finally, the hemodynamic model is generated based on the vascular structure model, and highly correlated hemodynamic features are selected to diagnose ophthalmic diseases. RESULTS In fundus vascular segmentation, the proposed network framework obtained an Area Under the Curve (AUC) of 98.63% and an accuracy of 97.52%. In diagnosis, the highly correlated features extracted with the proposed method achieved 95% accuracy. CONCLUSIONS This study demonstrated that hemodynamic features filtered by relevance are essential for diagnosing retinal diseases. Additionally, the proposed method also outperformed existing models in retinal vessel segmentation. In conclusion, the proposed method may represent a novel way to diagnose retina-related diseases by analyzing two-dimensional fundus images through the extraction of heterogeneous three-dimensional features.
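The geometric measurements mentioned in the Methods (segment length and width collected from a single segmented image) can be approximated from a 2D vessel mask with a skeleton and a distance transform, as in the sketch below. This is an assumption-laden simplification: the paper's graph construction between adjacent segments and the hemodynamic modelling are not shown, and `pixel_size_um` is a placeholder calibration factor.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def vessel_geometry(mask: np.ndarray, pixel_size_um: float = 1.0):
    """Estimate total centerline length and per-pixel vessel widths from a binary mask."""
    mask = mask.astype(bool)
    skeleton = skeletonize(mask)                   # one-pixel-wide centerlines
    dist = distance_transform_edt(mask)            # distance to the nearest background pixel
    widths = 2.0 * dist[skeleton] * pixel_size_um  # local diameter sampled along the centerline
    length = skeleton.sum() * pixel_size_um        # crude centerline length estimate
    return length, widths
```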
Affiliation(s)
- Zhaomin Yao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, 110016, China
- Renli Luo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, 110167, China
- Chencong Xing
- School of Computer Science and Software Engineering, East China Normal University, Shanghai, 200241, China
- Fei Li
- College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Gancheng Zhu
- College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Zhiguo Wang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, 110167, China.
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, 110016, China.
- Guoxu Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, 110167, China.
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, 110016, China.
8. Raja Sankari VM, Snekhalatha U, Chandrasekaran A, Baskaran P. Automated diagnosis of Retinopathy of prematurity from retinal images of preterm infants using hybrid deep learning techniques. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104883]
9. Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. [PMID: 37146492 DOI: 10.1016/j.compbiomed.2023.106924]
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin vessel breakage and false-positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several specially designed criteria for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
10. Du L, Liu H, Zhang L, Lu Y, Li M, Hu Y, Zhang Y. Deep ensemble learning for accurate retinal vessel segmentation. Comput Biol Med 2023; 158:106829. [PMID: 37054633 DOI: 10.1016/j.compbiomed.2023.106829]
Abstract
Significant progress has been made in deep learning-based retinal vessel segmentation in recent years. However, current methods still suffer from limited performance and robustness. Our work introduces a novel framework for retinal vessel segmentation based on deep ensemble learning. Benchmarking comparisons indicate that our model outperforms existing ones on multiple datasets, demonstrating that it is more effective and robust for retinal vessel segmentation. This shows the capability of our model to capture discriminative feature representations by introducing an ensemble strategy that integrates different base deep learning models, such as the Pyramid Vision Transformer and the FCN-Transformer. We expect the proposed method to benefit and accelerate the development of accurate retinal vessel segmentation in this field.
Affiliation(s)
- Lingling Du
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Hanruo Liu
- The Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lan Zhang
- Department of Cardiovascular, Fourth Affiliated Hospital, Harbin Medical University, Harbin, China
- Yao Lu
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Mengyao Li
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yang Hu
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Yi Zhang
- Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China.
11. Rong Y, Xiong Y, Li C, Chen Y, Wei P, Wei C, Fan Z. Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules. Med Biol Eng Comput 2023:10.1007/s11517-023-02806-1. [PMID: 36899285 DOI: 10.1007/s11517-023-02806-1]
Abstract
Automated and accurate segmentation of retinal vessels in fundus images is an important step for screening and diagnosing various ophthalmologic diseases. However, many factors, including the variation of vessels in color, shape and size, make this task an intricate challenge. Among the most popular methods for vessel segmentation are U-Net based methods. However, in U-Net based methods the size of the convolution kernels is generally fixed, so the receptive field of an individual convolution operation is single, which is not conducive to segmenting retinal vessels of various thicknesses. To overcome this problem, we employed self-calibrated convolutions to replace the traditional convolutions in the U-Net, which enables the U-Net to learn discriminative representations from different receptive fields. Besides, we propose an improved spatial attention module, instead of traditional convolutions, to connect the encoding and decoding parts of the U-Net, which improves its ability to detect thin vessels. The proposed method has been tested on the Digital Retinal Images for Vessel Extraction (DRIVE) database and the Child Heart and Health Study in England database (CHASE DB1). The metrics used to evaluate performance are accuracy (ACC), sensitivity (SE), specificity (SP), F1-score (F1) and the area under the receiver operating characteristic curve (AUC). The ACC, SE, SP, F1 and AUC obtained by the proposed method are 0.9680, 0.8036, 0.9840, 0.8138 and 0.9840 respectively on the DRIVE database, and 0.9756, 0.8118, 0.9867, 0.8068 and 0.9888 respectively on CHASE DB1, which are better than those obtained by the traditional U-Net (0.9646, 0.7895, 0.9814, 0.7963 and 0.9791 respectively on DRIVE, and 0.9733, 0.7817, 0.9862, 0.7870 and 0.9810 respectively on CHASE DB1). The experimental results indicate that the proposed modifications to the U-Net are effective for vessel segmentation.
Affiliation(s)
- YiBiao Rong
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Yu Xiong
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Chong Li
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Ying Chen
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Peiwei Wei
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Department of Microbiology and Immunology, Shantou University Medical College, Guangdong, 515041, China
- Chuliang Wei
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Zhun Fan
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China.
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China.
12. Directionality quantification of in vitro grown dorsal root ganglion neurites using Fast Fourier Transform. J Neurosci Methods 2023; 386:109796. [PMID: 36652975 DOI: 10.1016/j.jneumeth.2023.109796]
Abstract
BACKGROUND The directionality analysis of neurite outgrowths is an important methodology in neuroscience, especially for determining the behavior of neurons grown on silicon substrates. NEW METHOD Here we describe a methodology for quantifying the directionality of neurites based on the Fast Fourier Transform (FFT). We performed an image analysis case study that incorporates several software solutions and provides a rapid and precise technique to determine the directionality of neurites. In order to elicit aligned or unaligned neurite growth patterns, we used adult and newborn dorsal root ganglion (DRG) neurons grown on silicon micro-pillar substrates (MPS) with different pillar widths and spacings. RESULTS Compared to the control glass surfaces, the neonatal and adult N52 and IB4 DRG neurites exhibited regular growth patterns that were more pronounced in MPS regions with a narrow pillar spacing range. The neurites were preferentially oriented along three directional axes at 30°, 90°, and 150°. CONCLUSION The proposed methodology showed that FFT analysis is a reliable and easily reproducible method that can be used to test the growth patterns of DRG neurites grown on different substrates by considering the direction and angle of the neurites as well as the size of the soma.
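The core FFT step, binning the 2D power spectrum of an image by angle so that peaks reveal dominant orientations, can be sketched as follows. This is a generic illustration rather than the authors' exact pipeline; the number of angular bins and the exclusion radius around the DC component are illustrative choices.

```python
import numpy as np

def angular_spectrum(image: np.ndarray, n_bins: int = 180) -> np.ndarray:
    """Distribute 2D FFT power over orientation bins (0-180 degrees)."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    power = np.abs(f) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Orientation of each frequency component; structures aligned at angle a in space
    # concentrate spectral energy perpendicular to a.
    angles = np.degrees(np.arctan2(yy - h / 2, xx - w / 2)) % 180.0
    radius = np.hypot(yy - h / 2, xx - w / 2)
    valid = radius > 2                              # exclude the DC region
    hist, _ = np.histogram(angles[valid], bins=n_bins, range=(0, 180), weights=power[valid])
    return hist / hist.sum()                        # normalized angular energy distribution
```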
13. Rodrigues EO, Rodrigues LO, Machado JHP, Casanova D, Teixeira M, Oliva JT, Bernardes G, Liatsis P. Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation. J Imaging 2022; 8:jimaging8100291. [PMID: 36286385 PMCID: PMC9604711 DOI: 10.3390/jimaging8100291]
Abstract
Retinal vessel analysis is a procedure that can be used to assess risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against the baseline thresholded Frangi filter response, a naive connectivity filter, the naive connectivity filter combined with morphological closing, and current approaches in the literature. The proposal achieved competitive results on a variety of multimodal datasets. It was robust enough to outperform all state-of-the-art approaches in the literature for the OSIRIX angiographic dataset in terms of accuracy, and 4 out of 5 works on the IOSTAR dataset, while also outperforming several works on the DRIVE and STARE datasets and 6 out of 10 on the CHASE-DB dataset. For CHASE-DB, it also outperformed all state-of-the-art unsupervised methods.
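For context, the baseline being improved, a thresholded Frangi vesselness response followed by a connectivity-based clean-up, can be sketched with scikit-image as below. The LS-CF itself is more elaborate; the threshold, the sigma range, and the minimum component size here are illustrative assumptions.

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import remove_small_objects

def frangi_with_connectivity(green_channel: np.ndarray,
                             response_threshold: float = 0.15,
                             min_size: int = 60) -> np.ndarray:
    """Thresholded Frangi vesselness followed by a size-based connectivity clean-up."""
    response = frangi(green_channel.astype(float), sigmas=range(1, 8), black_ridges=True)
    response = (response - response.min()) / (response.max() - response.min() + 1e-12)
    binary = response > response_threshold
    # Keep only connected components large enough to be plausible vessel segments.
    cleaned = remove_small_objects(binary, min_size=min_size, connectivity=2)
    return cleaned.astype(np.uint8)
```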
Affiliation(s)
- Erick O. Rodrigues
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Lucas O. Rodrigues
- Graduate Program of Sciences Applied to Health Products, Universidade Federal Fluminense (UFF), Niteroi 24241-000, RJ, Brazil
- João H. P. Machado
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Dalcimar Casanova
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Marcelo Teixeira
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Jeferson T. Oliva
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Giovani Bernardes
- Institute of Technological Sciences (ICT), Universidade Federal de Itajuba (UNIFEI), Itabira 35903-087, MG, Brazil
- Panos Liatsis
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
14. MHA-Net: A Multibranch Hybrid Attention Network for Medical Image Segmentation. Comput Math Methods Med 2022; 2022:8375981. [PMID: 36245836 PMCID: PMC9560845 DOI: 10.1155/2022/8375981]
Abstract
Robust segmentation of organs from medical images is a key technique in medical image analysis for disease diagnosis. U-Net is a robust structure for medical image segmentation. However, U-Net adopts consecutive downsampling encoders to capture multiscale features, resulting in the loss of contextual information and insufficient recovery of high-level semantic features. In this paper, we present a new multibranch hybrid attention network (MHA-Net) to capture more contextual information and high-level semantic features. The main idea of the proposed MHA-Net is to use a multibranch hybrid attention feature decoder to recover more high-level semantic features. A lightweight pyramid split attention (PSA) module is used to connect the encoder and decoder subnetworks to obtain a richer multiscale feature map. We compare the proposed MHA-Net to state-of-the-art approaches on the DRIVE dataset, a fluoroscopic roentgenographic stereophotogrammetric analysis X-ray dataset, and a polyp dataset. The experimental results on images of different modalities reveal that the proposed MHA-Net provides better segmentation results than other segmentation approaches.
15. Segmentation of retinal blood vessel using generalized extreme value probability distribution function (pdf)-based matched filter approach. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01108-w]
16. Tan Y, Yang KF, Zhao SX, Li YJ. Retinal Vessel Segmentation With Skeletal Prior and Contrastive Loss. IEEE Trans Med Imaging 2022; 41:2238-2251. [PMID: 35320091 DOI: 10.1109/tmi.2022.3161681]
Abstract
The morphology of retinal vessels is closely associated with many kinds of ophthalmic diseases. Although huge progress in retinal vessel segmentation has been achieved with the advancement of deep learning, some challenging issues remain. For example, vessels can be disturbed or covered by other components present in the retina (such as the optic disc or lesions). Moreover, some thin vessels are easily missed by current methods. In addition, existing fundus image datasets are generally small, due to the difficulty of vessel labeling. In this work, a new network called SkelCon is proposed to deal with these problems by introducing a skeletal prior and a contrastive loss. A skeleton fitting module is developed to preserve the morphology of the vessels and improve the completeness and continuity of thin vessels. A contrastive loss is employed to enhance the discrimination between vessels and background. In addition, a new data augmentation method is proposed to enrich the training samples and improve the robustness of the proposed model. Extensive validations were performed on several popular datasets (DRIVE, STARE, CHASE, and HRF), recently developed datasets (UoA-DR, IOSTAR, and RC-SLO), and some challenging clinical images (from the RFMiD and JSIEC39 datasets). In addition, some specially designed metrics for vessel segmentation, including connectivity, overlapping area, consistency of vessel length, revised sensitivity, specificity, and accuracy, were used for quantitative evaluation. The experimental results show that the proposed model achieves state-of-the-art performance and significantly outperforms compared methods when extracting thin vessels in regions of lesions or the optic disc. Source code is available at https://www.github.com/tyb311/SkelCon.
17. Alahmadi MD. Medical Image Segmentation with Learning Semantic and Global Contextual Representation. Diagnostics (Basel) 2022; 12:diagnostics12071548. [PMID: 35885454 PMCID: PMC9319384 DOI: 10.3390/diagnostics12071548]
Abstract
Automatic medical image segmentation is an essential step toward accurate disease diagnosis and the design of follow-up treatment. This assistive method facilitates the cancer detection process and provides a benchmark to highlight the affected area. The U-Net model has become the standard design choice. Although the symmetrical structure of the U-Net model enables this network to encode rich semantic representations, the intrinsic locality of the CNN layers limits its capability to model long-range contextual dependency. On the other hand, sequence-to-sequence Transformer models with a multi-head attention mechanism can effectively model global contextual dependency. However, the lack of low-level information stemming from the Transformer architecture limits its performance in capturing local representations. In this paper, we propose a model with two parallel encoders, where the first path uses a CNN module to capture the local semantic representation and the second path deploys a Transformer module to extract the long-range contextual representation. Next, by adaptively fusing these two feature maps, we encode both representations into a single representative tensor to be further processed by the decoder block. An experimental study demonstrates that our design provides rich and generic representation features which are highly efficient for a fine-grained semantic segmentation task.
Affiliation(s)
- Mohammad D Alahmadi
- Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah 23890, Saudi Arabia
18. Hussain S, Guo F, Li W, Shen Z. DilUnet: A U-net based architecture for blood vessels segmentation. Comput Methods Programs Biomed 2022; 218:106732. [PMID: 35279601 DOI: 10.1016/j.cmpb.2022.106732]
Abstract
BACKGROUND AND OBJECTIVE Retinal image segmentation can help clinicians detect pathological disorders by studying changes in retinal blood vessels. This early detection can help prevent blindness and many other vision impairments. So far, several supervised and unsupervised methods have been proposed for the task of automatic blood vessel segmentation. However, the sensitivity and robustness of these methods can be improved by correctly classifying more vessel pixels. METHOD We propose an automatic retinal blood vessel segmentation method based on the U-net architecture. This end-to-end framework utilizes preprocessing and a data augmentation pipeline for training. The architecture utilizes multiscale input and multioutput modules with improved skip connections and the correct use of dilated convolutions for effective feature extraction. In the multiscale input, the input image is scaled down and concatenated with the output of convolutional blocks at different points in the encoder path to ensure the feature transfer of the original image. The multioutput module obtains upsampled outputs from each decoder block that are combined to obtain the final output. Skip paths connect each encoder block with the corresponding decoder block, and the whole architecture utilizes different dilation rates to improve the overall feature extraction. RESULTS The proposed method achieved accuracies of 0.9680, 0.9694, and 0.9701; sensitivities of 0.8837, 0.8263, and 0.8713; and Intersection over Union (IoU) values of 0.8698, 0.7951, and 0.8184 on three publicly available datasets: DRIVE, STARE, and CHASE, respectively. An ablation study is performed to show the contribution of each proposed module and technique. CONCLUSION The evaluation metrics reveal that the performance of the proposed method is higher than that of the original U-net and other U-net-based architectures, as well as many other state-of-the-art segmentation techniques, and that the proposed method is robust to noise.
Affiliation(s)
- Snawar Hussain
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Fan Guo
- School of Automation, Central South University, Changsha, Hunan 410083, China.
- Weiqing Li
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Ziqi Shen
- School of Automation, Central South University, Changsha, Hunan 410083, China
19. Guo Q, Song H, Fan J, Ai D, Gao Y, Yu X, Yang J. Portal Vein and Hepatic Vein Segmentation in Multi-Phase MR Images Using Flow-Guided Change Detection. IEEE Trans Image Process 2022; 31:2503-2517. [PMID: 35275817 DOI: 10.1109/tip.2022.3157136]
Abstract
Segmenting the portal vein (PV) and hepatic vein (HV) from magnetic resonance imaging (MRI) scans is important for hepatic tumor surgery. Compared with single-phase methods, multi-phase methods have better scalability in distinguishing HV and PV by exploiting multi-phase information. However, these methods only coarsely extract HV and PV from different phase images. In this paper, we propose a unified framework to automatically and robustly segment 3D HV and PV from multi-phase MR images, which considers both the change and the appearance caused by the vascular flow event to improve segmentation performance. Firstly, inspired by change detection, flow-guided change detection (FGCD) is designed to detect the changed voxels related to hepatic venous flow by generating a hepatic venous phase map and clustering it. The FGCD deals uniformly with HV and PV clustering through the proposed shared clustering, allowing the appearance correlated with portal venous flow to be robustly delineated without increasing framework complexity. Then, to refine the vascular segmentation results produced by HV and PV clustering, interclass decision making (IDM) is proposed, combining overlapping-region discrimination and neighborhood direction consistency. Finally, our framework is evaluated on multi-phase clinical MR images from a public dataset (TCGA) and a local hospital dataset. The quantitative and qualitative evaluations show that our framework outperforms existing methods.
20. SERR-U-Net: Squeeze-and-Excitation Residual and Recurrent Block-Based U-Net for Automatic Vessel Segmentation in Retinal Image. Comput Math Methods Med 2021; 2021:5976097. [PMID: 34422093 PMCID: PMC8371614 DOI: 10.1155/2021/5976097]
Abstract
Methods A new SERR-U-Net framework for retinal vessel segmentation is proposed, which leverages Squeeze-and-Excitation (SE), residual modules, and recurrent blocks. First, the convolution layers of the encoder and decoder are modified on the basis of U-Net, and the recurrent block is used to increase the network depth. Second, the residual module is utilized to alleviate the vanishing gradient problem. Finally, to derive more specific vascular features, we employed the SE structure to introduce an attention mechanism into the U-shaped network. In addition, enhanced super-resolution generative adversarial networks (ESRGANs) are deployed to remove noise from the retinal images. Results The effectiveness of this method was tested on two public datasets, DRIVE and STARE. On the DRIVE dataset, the accuracy and AUC (area under the curve) of our method were 0.9552 and 0.9784, respectively, and on the STARE dataset, 0.9796 and 0.9859 were achieved, respectively, demonstrating high accuracy and promise for clinical assistance. Conclusion An improved U-Net network combining SE, ResNet, and recurrent technologies is developed for automatic vessel segmentation from retinal images. This new model improves accuracy compared to learning-based methods, and its robustness in challenging cases such as small blood vessels and vessel intersections is also well demonstrated and validated.
21. Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput Methods Programs Biomed 2021; 205:106081. [PMID: 33882418 DOI: 10.1016/j.cmpb.2021.106081]
Abstract
BACKGROUND AND OBJECTIVE Automatic monitoring of retinal blood vessels proves very useful for the clinical assessment of ocular vascular anomalies or retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images. METHODS The approach consists of a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. At its output, it generates the probability of each pixel of the input patch belonging to the vascular structure. Applying the network to the patches into which a retinal image can be divided yields the pixel-wise probability map of the complete image. This probability map is then binarized with a certain threshold to generate the blood vessel segmentation provided by the method. RESULTS The method has been developed and evaluated on the DRIVE, STARE and CHASE_Db1 databases, which offer a manual segmentation of the vascular tree for each of their images. Using this set of images as ground truth, the accuracy of the vessel segmentations obtained for an operating point proposal (established by a single threshold value for each database) was quantified. The overall performance was measured using the area under the receiver operating characteristic curve. The method demonstrated robustness to the variability of fundus images of diverse origin and was capable of working with the highest level of accuracy over the entire set of possible operating points, compared to the most accurate methods found in the literature. CONCLUSIONS The analysis of results concludes that the proposed method reaches better performance than the other state-of-the-art methods and can be considered the most promising for integration into a real tool for vascular structure segmentation.
Affiliation(s)
- Manuel E Gegundez-Arias
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
- Diego Marin-Santos
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
- Isaac Perez-Borrero
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
- Manuel J Vasallo-Vazquez
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain.
22. Wang B, Wang S, Qiu S, Wei W, Wang H, He H. CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images. IEEE J Biomed Health Inform 2021; 25:1128-1138. [PMID: 32750968 DOI: 10.1109/jbhi.2020.3011178]
Abstract
Blood vessel segmentation in fundus images is a critical procedure in the diagnosis of ophthalmic diseases. Recent deep learning methods achieve high accuracy in vessel segmentation but still face challenges in segmenting the microvasculature and detecting vessel boundaries. This is due to the fact that common Convolutional Neural Networks (CNNs) are unable to preserve rich spatial information and a large receptive field simultaneously. Besides, CNN models for vessel segmentation are usually trained with an equally weighted pixel-level cross-entropy loss, which tends to miss fine vessel structures. In this paper, we propose a novel Context Spatial U-Net (CSU-Net) for blood vessel segmentation. Compared with other U-Net based models, we design a two-channel encoder: a context channel with multi-scale convolutions to capture a larger receptive field and a spatial channel with large kernels to retain spatial information. Also, to combine and strengthen the features extracted from the two paths, we introduce a feature fusion module (FFM) and an attention skip module (ASM). Furthermore, we propose a structure loss, which adds a spatial weight to the cross-entropy loss and guides the network to focus more on thin vessels and boundaries. We evaluated this model on three public datasets: DRIVE, CHASE-DB1 and STARE. The results show that CSU-Net achieves higher segmentation accuracy than the current state-of-the-art methods.
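The structure loss idea, adding a spatial weight to the cross-entropy so that thin vessels and boundaries count more, can be sketched as follows in PyTorch. This is a hypothetical recipe, not the paper's exact formulation: the boundary map is derived here from a morphological gradient of the ground truth, and `boundary_weight` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def structure_weighted_bce(pred_logits: torch.Tensor, target: torch.Tensor,
                           boundary_weight: float = 4.0) -> torch.Tensor:
    """Cross-entropy with extra weight on pixels near vessel boundaries (assumed recipe)."""
    # target: (B, 1, H, W) binary vessel mask; pred_logits: raw network outputs of the same shape.
    dilated = F.max_pool2d(target, kernel_size=3, stride=1, padding=1)
    eroded = -F.max_pool2d(-target, kernel_size=3, stride=1, padding=1)
    boundary = dilated - eroded                     # morphological gradient: 1 on/near boundaries
    weights = 1.0 + boundary_weight * boundary
    return F.binary_cross_entropy_with_logits(pred_logits, target, weight=weights)
```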
23. Wu H, Wang W, Zhong J, Lei B, Wen Z, Qin J. SCS-Net: A Scale and Context Sensitive Network for Retinal Vessel Segmentation. Med Image Anal 2021; 70:102025. [PMID: 33721692 DOI: 10.1016/j.media.2021.102025]
Abstract
Accurately segmenting retinal vessels from retinal images is essential for the detection and diagnosis of many eye diseases. However, it remains a challenging task due to (1) the large variations of scale in the retinal vessels and (2) the complicated anatomical context of retinal vessels, including complex vasculature and morphology, the low contrast between some vessels and the background, and the existence of exudates and hemorrhages. It is difficult for a model to capture representative and distinguishing features for retinal vessels under such large scale and semantic variations, and limited training data make this task even harder. In order to comprehensively tackle these challenges, we propose a novel scale and context sensitive network (SCS-Net) for retinal vessel segmentation. We first propose a scale-aware feature aggregation (SFA) module, aiming at dynamically adjusting the receptive fields to effectively extract multi-scale features. Then, an adaptive feature fusion (AFF) module is designed to guide efficient fusion between adjacent hierarchical features to capture more semantic information. Finally, a multi-level semantic supervision (MSS) module is employed to learn a more distinctive semantic representation for refining the vessel maps. We conduct extensive experiments on six mainstream retinal image databases (DRIVE, CHASEDB1, STARE, IOSTAR, HRF, and LES-AV). The experimental results demonstrate the effectiveness of the proposed SCS-Net, which is capable of achieving better segmentation performance than other state-of-the-art approaches, especially for the challenging cases with large scale variations and complex context environments.
Affiliation(s)
- Huisi Wu
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Wei Wang
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Jiafu Zhong
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Baiying Lei
- School of Biomedical Engineering, Health Science Centers, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen, China, 518060.
- Zhenkun Wen
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China, 518060
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
24. Retinal blood vessels segmentation using classical edge detection filters and the neural network. Inform Med Unlocked 2021. [DOI: 10.1016/j.imu.2021.100521]
25. Rodrigues EO, Conci A, Liatsis P. ELEMENT: Multi-Modal Retinal Vessel Segmentation Based on a Coupled Region Growing and Machine Learning Approach. IEEE J Biomed Health Inform 2020; 24:3507-3519. [PMID: 32750920 DOI: 10.1109/jbhi.2020.2999257]
Abstract
Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
Collapse
|
26
|
Saroj SK, Kumar R, Singh NP. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105490. [PMID: 32504830 DOI: 10.1016/j.cmpb.2020.105490] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 03/20/2020] [Accepted: 03/31/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal pathologies such as glaucoma, obesity, diabetes and hypertension have a severe impact on human life today. Retinal blood vessels carry significant information that is helpful in the detection and treatment of these diseases, so it is essential to segment them. Various matched filter approaches for the segmentation of retinal blood vessels are reported in the literature, but their kernel templates do not match the vessel profile well, resulting in poor performance. To overcome this, a novel matched filter approach based on the Fréchet probability distribution function is proposed. METHODS The image processing operations used in the proposed approach fall into three major stages: preprocessing, Fréchet matched filtering and postprocessing. In preprocessing, principal component analysis (PCA) converts the color image into a grayscale image, after which contrast-limited adaptive histogram equalization (CLAHE) is applied to obtain an enhanced grayscale image. For the Fréchet matched filter, exhaustive experimental tests are conducted to choose optimal values for both the Fréchet function parameters and the matched filter parameters in designing the new filter. In postprocessing, an entropy-based optimal thresholding technique is applied to the matched filter response (MFR) image to obtain a binary image, followed by length filtering and masking to generate a clean and complete vascular tree. RESULTS For evaluation of the proposed approach, quantitative performance metrics such as average specificity, average sensitivity, average accuracy and root mean square deviation (RMSD) are computed. We found an average specificity of 97.24%, average sensitivity of 72.78% and average accuracy of 95.09% for the STARE dataset, and an average specificity of 97.61%, average sensitivity of 73.07% and average accuracy of 95.44% for the DRIVE dataset. Average RMSD values are 0.07 and 0.04 for the STARE and DRIVE databases, respectively. CONCLUSIONS The experimental results show that the proposed approach outperforms recent prominent works reported in the literature. The improved performance is due to the better match between the vessel profile and the Fréchet template.
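The core of such a matched filter is a kernel whose cross-section follows the chosen probability density, rotated over several orientations with the maximum response kept per pixel. A minimal sketch, assuming SciPy's invweibull (Fréchet) distribution supplies the profile; the kernel size, shape and scale values and the number of orientations are illustrative placeholders rather than the optimal parameters found by the authors' exhaustive tests.

import numpy as np
from scipy.ndimage import convolve, rotate
from scipy.stats import invweibull             # Fréchet (inverse Weibull) PDF

def frechet_kernel(size=15, shape=2.5, scale=3.0):
    x = np.arange(size) - size // 2
    profile = invweibull.pdf(np.abs(x) / scale + 1e-6, shape)
    profile -= profile.mean()                   # zero-mean, as matched filters require
    return np.tile(profile, (size, 1))          # constant along the vessel axis

def matched_filter_response(image, n_angles=12):
    base = frechet_kernel()
    responses = [
        convolve(image, rotate(base, angle, reshape=False, order=1))
        for angle in np.linspace(0, 180, n_angles, endpoint=False)
    ]
    return np.max(responses, axis=0)            # best orientation per pixel

if __name__ == "__main__":
    print(matched_filter_response(np.random.rand(64, 64)).shape)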
Collapse
Affiliation(s)
- Sushil Kumar Saroj
- Department of Computer Science and Engineering, MMM University of Technology, Gorakhpur, India.
| | - Rakesh Kumar
- Department of Computer Science and Engineering, MMM University of Technology, Gorakhpur, India.
| | - Nagendra Pratap Singh
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India.
| |
Collapse
|
27
|
Palanivel DA, Natarajan S, Gopalakrishnan S. Retinal vessel segmentation using multifractal characterization. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106439] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
28
|
Khanal A, Estrada R. Dynamic Deep Networks for Retinal Vessel Segmentation. FRONTIERS IN COMPUTER SCIENCE 2020. [DOI: 10.3389/fcomp.2020.00035] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
29
|
Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071067] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised, requiring large numbers of retinal images with high-quality pixel-level ground-truth labels. In practice, labeling these retinal images is very costly in both money and human effort. To deal with these problems, we propose a semi-supervised learning method that can be used for blood vessel segmentation with limited labeled data. In this method, we use an improved U-Net deep learning network to segment the blood vessel tree. On this basis, we implement a U-Net-based training dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experimental results demonstrate that the proposed methodology is able to cope with the problem of insufficient hand-labeled data and achieve satisfactory performance.
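The training-dataset updating strategy described here is essentially self-training: fit on the labelled pool, promote confident predictions on unlabelled data to pseudo-labels, and refit. A minimal sketch, assuming a scikit-learn logistic regression stands in for the improved U-Net so the loop stays self-contained; the confidence threshold and number of rounds are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_lab, y_lab, X_unlab, rounds=3, conf_thr=0.95):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        prob = model.predict_proba(X_unlab)[:, 1]
        confident = (prob > conf_thr) | (prob < 1 - conf_thr)
        if not confident.any():
            break
        # promote confident predictions to pseudo-labels and enlarge the pool
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, (prob[confident] > 0.5).astype(int)])
        X_unlab = X_unlab[~confident]
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    print(self_training(X[:50], y[:50], X[50:]).score(X[:50], y[:50]))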
Collapse
|
30
|
Feng S, Zhuo Z, Pan D, Tian Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.098] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
31
|
Shukla AK, Pandey RK, Pachori RB. A fractional filter based efficient algorithm for retinal blood vessel segmentation. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101883] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
32
|
Cheng YL, Ma MN, Zhang LJ, Jin CJ, Ma L, Zhou Y. Retinal blood vessel segmentation based on Densely Connected U-Net. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2020; 17:3088-3108. [PMID: 32987518 DOI: 10.3934/mbe.2020175] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The segmentation of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper proposes a new architecture of the U-Net network for retinal blood vessel segmentation. Adding dense blocks to the U-Net makes each layer's input the concatenation of all previous layers' outputs, which improves the segmentation accuracy of small blood vessels. The effectiveness of the proposed method has been evaluated on two public datasets (DRIVE and CHASE_DB1). The obtained results (DRIVE: Acc = 0.9559, AUC = 0.9793; CHASE_DB1: Acc = 0.9488, AUC = 0.9785) demonstrate the better performance of the proposed method compared to state-of-the-art methods. The results also show that our method achieves better segmentation of small blood vessels and can be helpful in evaluating related ophthalmic diseases.
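The dense connectivity referred to here is easy to see in isolation: each layer receives the concatenation of every earlier feature map in the block. A minimal PyTorch sketch under stated assumptions; the growth rate, depth and layer composition are illustrative, not the paper's configuration.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth=12, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = in_channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth, 3, padding=1),
            ))
            channels += growth                  # the next layer sees everything so far

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)       # all maps passed on together

if __name__ == "__main__":
    block = DenseBlock(16)
    print(block(torch.randn(1, 16, 32, 32)).shape)  # 1 x 64 x 32 x 32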
Collapse
Affiliation(s)
- Yin Lin Cheng
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
| | - Meng Nan Ma
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
| | - Liang Jun Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
| | - Chen Jin Jin
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510006, China
| | - Li Ma
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510006, China
| | - Yi Zhou
- Department of Medical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510006, China
| |
Collapse
|
33
|
Adapa D, Joseph Raj AN, Alisetti SN, Zhuang Z, K. G, Naik G. A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features. PLoS One 2020; 15:e0229831. [PMID: 32142540 PMCID: PMC7059933 DOI: 10.1371/journal.pone.0229831] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 02/16/2020] [Indexed: 11/18/2022] Open
Abstract
This paper proposes a new supervised method for blood vessel segmentation using Zernike moment-based shape descriptors. The method implements pixel-wise classification by computing an 11-D feature vector comprising both statistical (gray-level) features and shape-based (Zernike moment) features. The feature set also contains optimal coefficients of the Zernike moments, derived on the basis of maximum differentiability between blood vessel and background pixels. Manually selected training points obtained from the training set of the DRIVE dataset, covering all possible manifestations, were used for training the ANN-based binary classifier. The method was evaluated on unknown test samples of the DRIVE and STARE databases and returned accuracies of 0.945 and 0.9486 respectively, outperforming other existing supervised learning methods. Further, the segmented outputs covered thinner blood vessels better than previous methods, aiding in early detection of pathologies.
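Pixel-wise classification with a mixed statistical/Zernike feature vector can be sketched as follows, assuming the mahotas library is available for the Zernike moments; the patch statistics, moment degree and small MLP are stand-ins for the paper's 11-D vector and ANN rather than a reproduction of them.

import mahotas
import numpy as np
from sklearn.neural_network import MLPClassifier

def patch_features(patch, degree=4):
    stats = [patch.mean(), patch.std(), patch.min(), patch.max()]
    zern = mahotas.features.zernike_moments(patch, radius=patch.shape[0] // 2,
                                            degree=degree)
    return np.concatenate([stats, zern])        # gray-level + shape descriptors

def train_pixel_classifier(patches, labels):
    X = np.array([patch_features(p) for p in patches])
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.random((40, 15, 15))          # toy patches centred on pixels
    labels = rng.integers(0, 2, size=40)        # 1 = vessel, 0 = background
    clf = train_pixel_classifier(patches, labels)
    print(clf.predict(np.array([patch_features(patches[0])])))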
Collapse
Affiliation(s)
- Dharmateja Adapa
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Sai Nikhil Alisetti
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Ganesan K.
- TIFAC-CORE, School of Electronics, Vellore Institute of Technology, Vellore, India
| | - Ganesh Naik
- MARCS Institute, Western Sydney University, Australia
| |
Collapse
|
34
|
Strisciuglio N, Azzopardi G, Petkov N. Robust Inhibition-Augmented Operator for Delineation of Curvilinear Structures. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:5852-5866. [PMID: 31247549 DOI: 10.1109/tip.2019.2922096] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Delineation of curvilinear structures in images is an important basic step of several image processing applications, such as segmentation of roads or rivers in aerial images, vessels or staining membranes in medical images, and cracks in pavements and roads, among others. Existing methods suffer from insufficient robustness to noise. In this paper, we propose a novel operator for the detection of curvilinear structures in images, which we demonstrate to be robust to various types of noise and effective in several applications. We call it RUSTICO, which stands for RobUST Inhibition-augmented Curvilinear Operator. It is inspired by the push-pull inhibition in visual cortex and takes as input the responses of two trainable B-COSFIRE filters of opposite polarity. The output of RUSTICO consists of a magnitude map and an orientation map. We carried out experiments on a data set of synthetic stimuli with noise drawn from different distributions, as well as on several benchmark data sets of retinal fundus images, crack pavements, and aerial images and a new data set of rose bushes used for automatic gardening. We evaluated the performance of RUSTICO by a metric that considers the structural properties of line networks (connectivity, area, and length) and demonstrated that RUSTICO outperforms many existing methods with high statistical significance. RUSTICO exhibits high robustness to noise and texture.
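The push-pull mechanism can be illustrated in its simplest form: an excitatory oriented response is suppressed by a fraction of the response of a filter with opposite polarity, then half-wave rectified. In the sketch below, Gabor responses stand in for the two B-COSFIRE filters so the example stays self-contained, and the inhibition factor is an illustrative placeholder.

import numpy as np
from skimage.filters import gabor

def push_pull(image, frequency=0.2, theta=0.0, inhibition=0.8):
    excit, _ = gabor(image, frequency=frequency, theta=theta)        # "push"
    inhib, _ = gabor(1.0 - image, frequency=frequency, theta=theta)  # "pull"
    return np.maximum(excit - inhibition * np.maximum(inhib, 0), 0)

if __name__ == "__main__":
    print(push_pull(np.random.rand(64, 64)).max())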
Collapse
|
35
|
Singh N, Kaur L, Singh K. Segmentation of retinal blood vessels based on feature-oriented dictionary learning and sparse coding using ensemble classification approach. J Med Imaging (Bellingham) 2019; 6:044006. [DOI: 10.1117/1.jmi.6.4.044006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Accepted: 11/04/2019] [Indexed: 11/14/2022] Open
Affiliation(s)
- Navdeep Singh
- Punjabi University, Department of Computer Science and Engineering, Patiala, Punjab
| | - Lakhwinder Kaur
- Punjabi University, Department of Computer Science and Engineering, Patiala, Punjab
| | - Kuldeep Singh
- Malaviya National Institute of Technology, Jaipur, Rajasthan
| |
Collapse
|
36
|
Kassim YM, Maude RJ, Palaniappan K. Sensitivity of Cross-Trained Deep CNNs for Retinal Vessel Extraction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:2736-2739. [PMID: 30440967 DOI: 10.1109/embc.2018.8512764] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automatic segmentation of the vascular network is a critical step in quantitatively characterizing vessel remodeling in retinal images and other tissues. We propose a deep learning architecture consisting of 14 layers to extract blood vessels from fundoscopy images of the popular standard datasets DRIVE and STARE. Experimental results show that our CNN is particularly good at identifying foreground vessel regions. It produces results with sensitivity higher by 10% than other methods when trained on the same dataset, and by more than 1% with cross training (trained on DRIVE, tested on STARE and vice versa). Further, our results reach accuracies above 0.95, comparing well with state-of-the-art algorithms.
Collapse
|
37
|
Javidi M, Harati A, Pourreza H. Retinal image assessment using bi-level adaptive morphological component analysis. Artif Intell Med 2019; 99:101702. [PMID: 31606110 DOI: 10.1016/j.artmed.2019.07.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Revised: 07/25/2019] [Accepted: 07/26/2019] [Indexed: 10/26/2022]
Abstract
The automated analysis of retinal images is a widely researched area which can help to diagnose several diseases like diabetic retinopathy in early stages of the disease. More specifically, separation of vessels and lesions is very critical as features of these structures are directly related to the diagnosis and treatment process of diabetic retinopathy. The complexity of the retinal image contents especially in images with severe diabetic retinopathy makes detection of vascular structure and lesions difficult. In this paper, a novel framework based on morphological component analysis (MCA) is presented which benefits from the adaptive representations obtained via dictionary learning. In the proposed Bi-level Adaptive MCA (BAMCA), MCA is extended to locally deal with sparse representation of the retinal images at patch level whereas the decomposition process occurs globally at the image level. BAMCA method with appropriately offline learnt dictionaries is adopted to work on retinal images with severe diabetic retinopathy in order to simultaneously separate vessels and exudate lesions as diagnostically useful morphological components. To obtain the appropriate dictionaries, K-SVD dictionary learning algorithm is modified to use a gated error which guides the process toward learning the main structures of the retinal images using vessel or lesion maps. Computational efficiency of the proposed framework is also increased significantly through some improvement leading to noticeable reduction in run time. We experimentally show how effective dictionaries can be learnt which help BAMCA to successfully separate exudate and vessel components from retinal images even in severe cases of diabetic retinopathy. In this paper, in addition to visual qualitative assessment, the performance of the proposed method is quantitatively measured in the framework of vessel and exudate segmentation. The reported experimental results on public datasets demonstrate that the obtained components can be used to achieve competitive results with regard to the state-of-the-art vessel and exudate segmentation methods.
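The patch-level sparse-representation ingredient can be illustrated with generic dictionary learning: learn atoms from image patches and code each patch sparsely against them. A minimal scikit-learn sketch under stated assumptions; it uses ordinary mini-batch dictionary learning with OMP coding, not the gated K-SVD variant or the bi-level decomposition proposed in the paper, and the patch size and sparsity level are illustrative.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(image, patch_size=(8, 8), n_atoms=64):
    patches = extract_patches_2d(image, patch_size, max_patches=2000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)          # remove the DC level of each patch
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit_transform(X)               # sparse code of every patch
    return dico.components_, codes

if __name__ == "__main__":
    atoms, codes = learn_patch_dictionary(np.random.rand(64, 64))
    print(atoms.shape, codes.shape)             # (64, 64) and (2000, 64)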
Collapse
Affiliation(s)
- Malihe Javidi
- Computer Engineering Department, Quchan University of Technology, Quchan, Iran.
| | - Ahad Harati
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
| | - HamidReza Pourreza
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
| |
Collapse
|
38
|
DCCMED-Net: Densely connected and concatenated multi Encoder-Decoder CNNs for retinal vessel extraction from fundus images. Med Hypotheses 2019; 134:109426. [PMID: 31622926 DOI: 10.1016/j.mehy.2019.109426] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 10/09/2019] [Indexed: 11/22/2022]
Abstract
Recent studies have shown that convolutional neural networks (CNNs) can be more accurate and efficient, and can be trained at greater depth, if they include direct connections from layers close to the input to layers close to the output in order to transfer activation maps. Building on this observation, this study introduces a new CNN model, the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network. DCCMED contains concatenated multi encoder-decoder CNNs and connects certain layers to the corresponding input of the subsequent encoder-decoder block in a feed-forward fashion, for retinal vessel extraction from fundus images. The DCCMED model has notable strengths such as reducing pixel vanishing and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), Dice and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method. The obtained results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores, and demonstrate that the proposed DCCMED model yields the best performance according to accuracy and AUC scores.
Collapse
|
39
|
Zhang Y, Lian J, Rong L, Jia W, Li C, Zheng Y. Even faster retinal vessel segmentation via accelerated singular value decomposition. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04505-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
40
|
Automatic Retinal Blood Vessel Segmentation Based on Fully Convolutional Neural Networks. Symmetry (Basel) 2019. [DOI: 10.3390/sym11091112] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Automated retinal vessel segmentation technology has become an important tool for disease screening and diagnosis in clinical medicine. However, most available retinal vessel segmentation methods still suffer from poor accuracy and low generalization ability. This is because the symmetrical and asymmetrical patterns between blood vessels are complicated, and the contrast between vessels and background is relatively low due to illumination and pathology. Robust vessel segmentation of the retinal image is essential for improving the diagnosis of diseases such as vein occlusions and diabetic retinopathy, yet automated retinal vessel segmentation remains a challenging task. In this paper, we propose an automatic retinal vessel segmentation framework using deep fully convolutional neural networks (FCN), which integrates novel methods of data preprocessing, data augmentation, and fully convolutional neural networks. It is an end-to-end framework that automatically and efficiently performs retinal vessel segmentation. The framework was evaluated on three publicly available standard datasets, achieving F1 scores of 0.8321, 0.8531, and 0.8243, average accuracies of 0.9706, 0.9777, and 0.9773, and average areas under the Receiver Operating Characteristic (ROC) curve of 0.9880, 0.9923 and 0.9917 on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. The experimental results show that our proposed framework achieves state-of-the-art vessel segmentation performance in all three benchmark tests.
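The preprocessing step such frameworks typically rely on, extracting the green channel and lifting vessel contrast with CLAHE, can be sketched briefly. This reflects common practice rather than the exact pipeline of this paper; the clip limit is an illustrative value.

import numpy as np
from skimage import exposure

def preprocess_fundus(rgb):
    """rgb: H x W x 3 float array in [0, 1]."""
    green = rgb[..., 1]                                    # vessels show best here
    span = green.max() - green.min()
    green = (green - green.min()) / (span + 1e-8)          # min-max normalisation
    return exposure.equalize_adapthist(green, clip_limit=0.01)

if __name__ == "__main__":
    print(preprocess_fundus(np.random.rand(64, 64, 3)).shape)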
Collapse
|
41
|
Tang P, Liang Q, Yan X, Zhang D, Coppola G, Sun W. Multi-proportion channel ensemble model for retinal vessel segmentation. Comput Biol Med 2019; 111:103352. [DOI: 10.1016/j.compbiomed.2019.103352] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 07/07/2019] [Accepted: 07/07/2019] [Indexed: 10/26/2022]
|
42
|
Binary Filter for Fast Vessel Pattern Extraction. Neural Process Lett 2019. [DOI: 10.1007/s11063-018-9866-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
43
|
Wang W, Wang W, Hu Z. Segmenting retinal vessels with revised top-bottom-hat transformation and flattening of minimum circumscribed ellipse. Med Biol Eng Comput 2019; 57:1481-1496. [DOI: 10.1007/s11517-019-01967-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Accepted: 02/23/2019] [Indexed: 11/29/2022]
|
44
|
Hashemzadeh M, Adlpour Azar B. Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods. Artif Intell Med 2019; 95:1-15. [PMID: 30904129 DOI: 10.1016/j.artmed.2019.03.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2018] [Revised: 12/08/2018] [Accepted: 03/01/2019] [Indexed: 11/30/2022]
Abstract
In medicine, retinal vessel analysis of fundus images is a prominent task in the screening and diagnosis of various ophthalmological and cardiovascular diseases. In this research, a method is proposed for extracting the retinal blood vessels that employs a set of effective image features and a combination of supervised and unsupervised machine learning techniques. In addition to the features commonly used for extracting blood vessels, three strong features with a significant influence on the accuracy of vessel extraction are utilized. The selected combination of different types of individually efficient features results in rich local information with better discrimination between vessel and non-vessel pixels. The proposed method first extracts the thick and clear vessels in an unsupervised manner, and then extracts the thin vessels in a supervised way. The goal of combining the supervised and unsupervised methods is to deal with the problem of high intra-class variance of image features calculated from various vessel pixels. The proposed method is evaluated on three publicly available databases, DRIVE, STARE and CHASE_DB1. The obtained results (DRIVE: Acc = 0.9531, AUC = 0.9752; STARE: Acc = 0.9691, AUC = 0.9853; CHASE_DB1: Acc = 0.9623, AUC = 0.9789) demonstrate the better performance of the proposed method compared to state-of-the-art methods.
Collapse
Affiliation(s)
- Mahdi Hashemzadeh
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz-Azarshahr Road, 5375171379, Tabriz, Iran.
| | - Baharak Adlpour Azar
- Department of Computer Engineering, Tabriz Branch, Azad University, Tabriz, Iran.
| |
Collapse
|
45
|
Leopold HA, Orchard J, Zelek JS, Lakshminarayanan V. PixelBNN: Augmenting the PixelCNN with Batch Normalization and the Presentation of a Fast Architecture for Retinal Vessel Segmentation. J Imaging 2019; 5:jimaging5020026. [PMID: 34460474 PMCID: PMC8320904 DOI: 10.3390/jimaging5020026] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 01/05/2019] [Accepted: 01/24/2019] [Indexed: 01/06/2023] Open
Abstract
Analysis of retinal fundus images is essential for eye-care physicians in the diagnosis, care and treatment of patients. Accurate fundus and/or retinal vessel maps give rise to longitudinal studies able to utilize multimedia image registration and disease/condition status measurements, as well as applications in surgery preparation and biometrics. The segmentation of retinal morphology has numerous applications in assessing ophthalmologic and cardiovascular disease pathologies. Computer-aided segmentation of the vasculature has proven to be a challenge, mainly due to inconsistencies such as noise and variations in hue and brightness that can greatly reduce the quality of fundus images. The goal of this work is to collate different key performance indicators (KPIs) and state-of-the-art methods applied to this task, frame computational efficiency–performance trade-offs under varying degrees of information loss using common datasets, and introduce PixelBNN, a highly efficient deep method for automating the segmentation of fundus morphologies. The model was trained, tested and cross tested on the DRIVE, STARE and CHASE_DB1 retinal vessel segmentation datasets. Performance was evaluated using G-mean, Mathews Correlation Coefficient and F1-score, with the main success measure being computation speed. The network was 8.5× faster than the current state-of-the-art at test time and performed comparatively well, considering a 5× to 19× reduction in information from resizing images during preprocessing.
Collapse
Affiliation(s)
- Henry A. Leopold
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Correspondence:
| | - Jeff Orchard
- David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
| | - John S. Zelek
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
| | | |
Collapse
|
46
|
Yang J, Wei J, Shi Y. Accurate ROI localization and hierarchical hyper-sphere model for finger-vein recognition. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.02.098] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
47
|
Fan Z, Lu J, Wei C, Huang H, Cai X, Chen X. A Hierarchical Image Matting Model for Blood Vessel Segmentation in Fundus Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 28:2367-2377. [PMID: 30571623 DOI: 10.1109/tip.2018.2885495] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In this paper, a hierarchical image matting model is proposed to extract blood vessels from fundus images. More specifically, a hierarchical strategy is integrated into the image matting model for blood vessel segmentation. Matting models normally require a user-specified trimap, which separates the input image into three regions: foreground, background and unknown. However, creating a user-specified trimap is laborious for vessel segmentation tasks. In this paper, we propose a method that first generates the trimap automatically by utilizing region features of blood vessels, and then applies a hierarchical image matting model to extract the vessel pixels from the unknown regions. The proposed method has a low computation time and outperforms many other state-of-the-art supervised and unsupervised methods. It achieves vessel segmentation accuracies of 96.0%, 95.7% and 95.1% in average times of 10.72 s, 15.74 s and 50.71 s on images from the three publicly available fundus image datasets DRIVE, STARE, and CHASE_DB1, respectively.
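The automatic trimap idea reduces to a pair of thresholds on a vesselness score: clearly high values become definite foreground, clearly low values become background, and everything in between is left unknown for the matting stage to resolve. A minimal sketch with illustrative thresholds, not the region features used in the paper.

import numpy as np

FOREGROUND, BACKGROUND, UNKNOWN = 1, 0, 2

def auto_trimap(vesselness, high=0.7, low=0.2):
    trimap = np.full(vesselness.shape, UNKNOWN, dtype=np.uint8)
    trimap[vesselness >= high] = FOREGROUND     # confident vessel pixels
    trimap[vesselness <= low] = BACKGROUND      # confident background
    return trimap                               # the rest stays UNKNOWN

if __name__ == "__main__":
    print(auto_trimap(np.random.rand(8, 8)))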
Collapse
|
48
|
Badawi SA, Fraz MM. Optimizing the trainable B-COSFIRE filter for retinal blood vessel segmentation. PeerJ 2018; 6:e5855. [PMID: 30479888 PMCID: PMC6238769 DOI: 10.7717/peerj.5855] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Accepted: 09/28/2018] [Indexed: 11/20/2022] Open
Abstract
Segmentation of the retinal blood vessels using filtering techniques is a widely used step in the development of automated systems for diagnostic retinal image analysis. This paper optimizes blood vessel segmentation by extending the trainable B-COSFIRE filter through the identification of better parameter values. The filter parameters are tuned with an optimization procedure on three public datasets (STARE, DRIVE, and CHASE-DB1). The suggested approach analyzes the selection of thresholding parameters, followed by the application of background-artifact removal techniques. The results are better than those of other state-of-the-art methods used for vessel segmentation. An ANOVA analysis is also used to identify the most significant parameters impacting the performance results (p-value < 0.05). The proposed enhancement improves the vessel segmentation accuracy on DRIVE, STARE and CHASE-DB1 to 95.47%, 95.30% and 95.30%, respectively.
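The thresholding-parameter analysis amounts to sweeping candidate binarisation thresholds over the filter response and keeping the one that scores best against the reference annotation. A minimal sketch with an illustrative grid and accuracy as the metric; the paper's actual selection procedure and criteria may differ.

import numpy as np

def best_threshold(response, truth, grid=np.linspace(0.1, 0.9, 17)):
    best_t, best_acc = None, -1.0
    for t in grid:
        acc = ((response >= t) == truth).mean()   # pixel accuracy at threshold t
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    resp = rng.random((32, 32))
    print(best_threshold(resp, resp > 0.6))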
Collapse
Affiliation(s)
- Sufian A. Badawi
- School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
| | - Muhammad Moazam Fraz
- School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
| |
Collapse
|
49
|
Khan KB, Khaliq AA, Jalil A, Iftikhar MA, Ullah N, Aziz MW, Ullah K, Shahid M. A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0754-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
|
50
|
Yan Z, Yang X, Cheng KT. A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2018; 23:1427-1436. [PMID: 30281503 DOI: 10.1109/jbhi.2018.2872813] [Citation(s) in RCA: 118] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously using a unified pixel-wise loss that treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick and thin vessels (the majority of vessel pixels belong to thick vessels), the pixel-wise loss is dominated by thick vessels, with relatively little influence from thin vessels, often leading to low segmentation accuracy for thin vessels. To address this imbalance, in this paper we segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages: thick vessel segmentation, thin vessel segmentation, and vessel fusion. As better discriminative features can be learned when thick and thin vessels are segmented separately, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying non-vessel pixels and improving the overall consistency of vessel thickness. Experiments on the public datasets DRIVE, STARE, and CHASE_DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms current state-of-the-art vessel segmentation methods.
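The thick/thin decomposition can be mimicked with plain morphology to make the idea concrete: a binary opening keeps the wide vessels, the remainder counts as thin vessels, and a union fuses the two maps. This is a deliberately simple stand-in for the paper's three learned stages; the structuring-element size is an illustrative placeholder.

import numpy as np
from scipy.ndimage import binary_opening

def split_and_fuse(vessel_mask, selem_size=5):
    selem = np.ones((selem_size, selem_size), dtype=bool)
    thick = binary_opening(vessel_mask, structure=selem)   # survives the opening
    thin = vessel_mask & ~thick                            # everything narrower
    return thick, thin, thick | thin                       # trivial fusion by union

if __name__ == "__main__":
    mask = np.zeros((32, 32), dtype=bool)
    mask[10:20, 10:20] = True       # a "thick" blob
    mask[5, :] = True               # a one-pixel "thin" line
    thick, thin, fused = split_and_fuse(mask)
    print(thick.sum(), thin.sum(), fused.sum())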
Collapse
|