1. Liu Q, Zhou F, Shen J, Xu J, Wan C, Xu X, Yan Z, Yao J. A fundus vessel segmentation method based on double skip connections combined with deep supervision. Front Cell Dev Biol 2024; 12:1477819. PMID: 39430046; PMCID: PMC11487527; DOI: 10.3389/fcell.2024.1477819.
Abstract
Background: Fundus vessel segmentation is vital for diagnosing ophthalmic diseases such as central serous chorioretinopathy (CSC), diabetic retinopathy, and glaucoma. Accurate segmentation provides crucial detail on vessel morphology, aiding the early detection and intervention of ophthalmic diseases. However, current algorithms struggle with fine-vessel segmentation and with maintaining sensitivity in complex regions. Imaging variability and poor generalization across multimodal datasets add further challenges, highlighting the need for more advanced algorithms in clinical practice.
Methods: This paper explores a new vessel segmentation method to alleviate these problems. We propose a fundus vessel segmentation model, DS2TUNet, that combines double skip connections, deep supervision, and TransUNet. The original fundus images are first improved through grayscale conversion, normalization, histogram equalization, gamma correction, and other preprocessing techniques. The preprocessed images are then segmented with a U-Net-style architecture to obtain the final vessel information. Specifically, the encoder incorporates ResNetV1 downsampling, dilated-convolution downsampling, and a Transformer to capture both local and global features, strengthening vessel feature extraction. The decoder introduces double skip connections to facilitate upsampling and refine the segmentation outcomes. Finally, a deep supervision module feeds multiple upsampled vessel features from the decoder into the loss function, so the model learns vessel feature representations more effectively and gradient vanishing during training is alleviated.
Results: Extensive experiments on the publicly available multimodal fundus datasets DRIVE, CHASE_DB1, and ROSE-1 demonstrate that DS2TUNet attains F1-scores of 0.8195, 0.8362, and 0.8425, Accuracy of 0.9664, 0.9741, and 0.9557, Sensitivity of 0.8071, 0.8101, and 0.8586, and Specificity of 0.9823, 0.9869, and 0.9713, respectively. The model also exhibits excellent test performance on the clinical fundus dataset CSC, with an F1-score of 0.7757, Accuracy of 0.9688, Sensitivity of 0.8141, and Specificity of 0.9801 using weights trained on CHASE_DB1. These results validate that the proposed method performs well in fundus vessel segmentation and can aid clinicians in the further diagnosis and treatment of fundus diseases.
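The preprocessing chain named here (grayscale conversion, normalization, histogram equalization, gamma correction) is a standard one; a minimal sketch of such a pipeline with OpenCV is given below. The function name and parameter values are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr_image, gamma=1.2, clip_limit=2.0, tile=(8, 8)):
    """Illustrative chain: grayscale -> min-max normalization -> CLAHE
    histogram equalization -> gamma correction (parameters are assumed)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Stretch intensities to the full 8-bit range.
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    eq = clahe.apply(norm)
    # Gamma correction via a lookup table.
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(eq, lut)
```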
Affiliation(s)
- Qingyou Liu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Fen Zhou: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianguo Xu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xiangzhong Xu: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Jin Yao: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
2. Liang J, Jiang Y, Yan H. Skip connection information enhancement network for retinal vessel segmentation. Med Biol Eng Comput 2024; 62:3163-3178. PMID: 38789838; DOI: 10.1007/s11517-024-03108-w.
Abstract
Many major retinal diseases manifest as lesions in the fundus of the eye, so extracting blood vessels from retinal fundus images is essential to assist doctors. Some existing methods do not fully extract the detailed features of retinal images or lose information, making it difficult to accurately segment the capillaries located at the edges of the images. In this paper, we propose a multi-scale retinal vessel segmentation network (SCIE_Net) based on skip connection information enhancement. Firstly, the network processes retinal images at multiple scales so that features at different scales are captured. Secondly, a feature aggregation module is proposed to aggregate the rich information of the shallow network. Further, a skip connection information enhancement module is proposed to account for both the detailed features of the shallow layers and the advanced features of the deeper network, avoiding incomplete information interaction between the layers of the network. Finally, SCIE_Net achieves better vessel segmentation performance on the publicly available retinal image standard datasets DRIVE, CHASE_DB1, and STARE.
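As a rough illustration of the role a skip-connection enhancement module plays, the PyTorch sketch below fuses a shallow encoder feature map with an upsampled deeper decoder feature map. The module name and layer choices are assumptions; the actual SCIE_Net design is richer than this.

```python
import torch
import torch.nn as nn

class SkipFusion(nn.Module):
    """Fuse a detail-rich encoder map with a semantic decoder map before
    up-sampling (generic stand-in, not the authors' exact module)."""
    def __init__(self, enc_ch, dec_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(enc_ch + dec_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_feat, dec_feat):
        # Resize the deep feature map to the shallow map's spatial size.
        dec_up = nn.functional.interpolate(
            dec_feat, size=enc_feat.shape[2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([enc_feat, dec_up], dim=1))
```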
Affiliation(s)
- Jing Liang: Sichuan Vocational College of Information Technology, No. 265 Xuefu Road, Guangyuan, 628040, Sichuan, China; College of Computer Science and Engineering, Northwest Normal University, No. 967 Anning East Road, Lanzhou, 730070, Gansu, China
- Yun Jiang: College of Computer Science and Engineering, Northwest Normal University, No. 967 Anning East Road, Lanzhou, 730070, Gansu, China
- Hao Yan: MianYang Polytechnic, No. 32, Section 1, Xianren Road, Mianyang, 621000, Sichuan, China
3. Verma PK, Kaur J. Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique. J Imaging Inform Med 2024; 37:1783-1799. PMID: 38438695; PMCID: PMC11300804; DOI: 10.1007/s10278-024-01010-3.
Abstract
Image segmentation is a crucial task in computer vision and image processing, and numerous segmentation algorithms can be found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others. The widespread popularity of deep learning (DL) and machine learning (ML) has inspired new methods for segmenting images with DL and ML models. We offer a thorough analysis of this recent literature, covering the range of ground-breaking work in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models, review the most popular datasets, and evaluate the reported results.
Affiliation(s)
- Prem Kumari Verma: Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India
- Jagdeep Kaur: Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India
4. Ebrahimi B, Le D, Abtahi M, Dadzie AK, Rossi A, Rahimi M, Son T, Ostmo S, Campbell JP, Paul Chan RV, Yao X. Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity. J Biomed Opt 2024; 29:076001. PMID: 38912212; PMCID: PMC11188587; DOI: 10.1117/1.jbo.29.7.076001.
Abstract
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capability.
Aim: To assess the spectral effectiveness of color fundus photography for deep learning classification of ROP.
Approach: A convolutional neural network end-to-end classifier was used for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance with individual-color-channel inputs (red, green, and blue) and with multi-color-channel fusion architectures (early fusion, intermediate fusion, and late fusion) was quantitatively compared.
Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, 85.50% specificity). Among the fusion options, the early-fusion and intermediate-fusion architectures performed almost the same as the green/red channel inputs and outperformed the late-fusion architecture.
Conclusions: Classification of ROP stages can be achieved effectively using either the green or the red image alone. This finding enables the exclusion of blue images, which are acknowledged to carry an increased susceptibility to light toxicity.
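A minimal sketch of how single-channel and early-fusion inputs could be prepared from an RGB fundus photo is shown below; the function and its channel-ordering convention are assumptions for illustration, not the authors' data pipeline.

```python
import numpy as np

def make_input(rgb, mode="green"):
    """Build classifier inputs from an (H, W, 3) RGB fundus photo.
    'red'/'green'/'blue' replicate a single spectral channel; 'early_fusion'
    keeps all three channels stacked. Channel order is an assumption."""
    channels = {"red": 0, "green": 1, "blue": 2}
    if mode in channels:
        c = rgb[..., channels[mode]]
        return np.repeat(c[..., None], 3, axis=-1)  # keep a 3-channel shape
    if mode == "early_fusion":
        return rgb
    raise ValueError(f"unknown mode: {mode}")
```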
Affiliation(s)
- Behrouz Ebrahimi: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- David Le: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mansour Abtahi: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Albert K. Dadzie: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Alfa Rossi: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Mojtaba Rahimi: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Taeyoon Son: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States
- Susan Ostmo: Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- J. Peter Campbell: Oregon Health and Science University, Casey Eye Institute, Department of Ophthalmology, Portland, Oregon, United States
- R. V. Paul Chan: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States; University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
- Xincheng Yao: University of Illinois, Chicago, Department of Biomedical Engineering, Chicago, Illinois, United States; University of Illinois Chicago, Department of Ophthalmology and Visual Sciences, Chicago, Illinois, United States
5. Li J, Ma Q, Yao M, Jiang Q, Wang Z, Yan B. Segmentation of retinal microaneurysms in fluorescein fundus angiography images by a novel three-step model. Front Med (Lausanne) 2024; 11:1372091. PMID: 38962734; PMCID: PMC11220251; DOI: 10.3389/fmed.2024.1372091.
Abstract
Introduction: Microaneurysms serve as early signs of diabetic retinopathy, and their accurate detection is critical for effective treatment. Because of their low contrast and similarity to retinal vessels, distinguishing microaneurysms from background noise and retinal vessels in fluorescein fundus angiography (FFA) images poses a significant challenge.
Methods: We present a model for automatic detection of microaneurysms. FFA images were pre-processed using top-hat transformation, gray-stretching, and Gaussian filtering to eliminate noise. Candidate microaneurysms were coarsely segmented using an improved matched-filter algorithm, and real microaneurysms were then segmented by a morphological strategy. To evaluate segmentation performance, the proposed model was compared against other models, including Otsu's method, region growing, global thresholding, matched filtering, fuzzy c-means, and k-means, using both self-constructed and publicly available datasets. Performance metrics such as accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union were calculated.
Results: The proposed model outperforms the other models in terms of accuracy, sensitivity, specificity, positive predictive value, and intersection-over-union, and its segmentation results closely align with the benchmark standard. The model demonstrates significant advantages for microaneurysm segmentation in FFA images and holds promise for clinical application in the diagnosis of diabetic retinopathy.
Conclusion: The proposed model offers a robust and accurate approach to microaneurysm detection, outperforming existing methods and demonstrating potential for clinical application in the effective treatment of diabetic retinopathy.
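The pre-processing steps named here (top-hat transformation, gray-stretching, Gaussian filtering) can be sketched with OpenCV as below; the structuring-element size, percentile limits, and sigma are illustrative assumptions rather than the paper's parameters.

```python
import cv2
import numpy as np

def preprocess_ffa(gray, kernel_size=15, sigma=1.0):
    """Noise-suppression sketch: white top-hat lifts small bright structures,
    gray-stretching expands the dynamic range, Gaussian smoothing suppresses
    residual noise (all parameter values are assumptions)."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se)
    # Linear gray-stretching between robust percentiles.
    lo, hi = np.percentile(tophat, (1, 99))
    stretched = np.clip((tophat - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
    return cv2.GaussianBlur(stretched.astype(np.uint8), (0, 0), sigma)
```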
Affiliation(s)
- Jing Li: Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China; College of Information Science, Shanghai Ocean University, Shanghai, China
- Qian Ma: Department of Ophthalmology, General Hospital of Ningxia Medical University, Ningxia, China
- Mudi Yao: Department of Ophthalmology and Optometry, The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qin Jiang: Department of Ophthalmology and Optometry, The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- Zhenhua Wang: College of Information Science, Shanghai Ocean University, Shanghai, China
- Biao Yan: Eye Institute and Department of Ophthalmology, Eye and ENT Hospital, State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
6. Yang Y, Yue S, Quan H. CS-UNet: Cross-scale U-Net with Semantic-position dependencies for retinal vessel segmentation. Network (Bristol, England) 2024; 35:134-153. PMID: 38050997; DOI: 10.1080/0954898x.2023.2288858.
Abstract
Accurate retinal vessel segmentation is a prerequisite for the early recognition and treatment of retina-related diseases. However, segmenting retinal vessels is still challenging due to the intricate vessel tree in fundus images, which contains a significant number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de-facto standard and has achieved considerable success. However, U-Net is a pure convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first design a Semantic-position Dependencies Aggregator (SPDA) and incorporate it into each layer of the encoder to better focus on global contextual information by integrating semantic and positional relationships. To endow the model with the capability of cross-scale interaction, a Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared to most existing state-of-the-art methods, CS-UNet demonstrated better performance.
Affiliation(s)
- Ying Yang: College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Shengbin Yue: College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China; Yunnan Provincial Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, Yunnan, China
- Haiyan Quan: College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
7. Jiang M, Zhu Y, Zhang X. CoVi-Net: A hybrid convolutional and vision transformer neural network for retinal vessel segmentation. Comput Biol Med 2024; 170:108047. PMID: 38295476; DOI: 10.1016/j.compbiomed.2024.108047.
Abstract
Retinal vessel segmentation plays a crucial role in the diagnosis and treatment of ocular pathologies. Current methods have limitations in feature fusion and face challenges in simultaneously capturing global and local features from fundus images. To address these issues, this study introduces a hybrid network named CoVi-Net, which combines convolutional neural networks and vision transformer. In our proposed model, we have integrated a novel module for local and global feature aggregation (LGFA). This module facilitates remote information interaction while retaining the capability to effectively gather local information. In addition, we introduce a bidirectional weighted feature fusion module (BWF). Recognizing the variations in semantic information across layers, we allocate adjustable weights to different feature layers for adaptive feature fusion. BWF employs a bidirectional fusion strategy to mitigate the decay of effective information. We also incorporate horizontal and vertical connections to enhance feature fusion and utilization across various scales, thereby improving the segmentation of multiscale vessel images. Furthermore, we introduce an adaptive lateral feature fusion (ALFF) module that refines the final vessel segmentation map by enriching it with more semantic information from the network. In the evaluation of our model, we employed three well-established retinal image databases (DRIVE, CHASEDB1, and STARE). Our experimental results demonstrate that CoVi-Net outperforms other state-of-the-art techniques, achieving a global accuracy of 0.9698, 0.9756, and 0.9761 and an area under the curve of 0.9880, 0.9903, and 0.9915 on DRIVE, CHASEDB1, and STARE, respectively. We conducted ablation studies to assess the individual effectiveness of the three modules. In addition, we examined the adaptability of our CoVi-Net model for segmenting lesion images. Our experiments indicate that our proposed model holds promise in aiding the diagnosis of retinal vascular disorders.
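The adaptive weighting idea behind the BWF module can be illustrated with a small PyTorch sketch that fuses same-shape feature maps with learnable, normalized weights; the bidirectional wiring of the actual module is not reproduced, and the class name is an assumption.

```python
import torch
import torch.nn as nn

class WeightedFeatureFusion(nn.Module):
    """Fuse feature maps from different layers with learnable weights,
    illustrating adaptive fusion only (not CoVi-Net's exact BWF design)."""
    def __init__(self, num_inputs):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))

    def forward(self, feats):                    # list of same-shape tensors
        w = torch.softmax(self.weights, dim=0)   # normalized, adjustable weights
        return sum(wi * f for wi, f in zip(w, feats))
```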
Affiliation(s)
- Minshan Jiang: Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yongfei Zhu: Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuedian Zhang: Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
8. Guo H, Meng J, Zhao Y, Zhang H, Dai C. High-precision retinal blood vessel segmentation based on a multi-stage and dual-channel deep learning network. Phys Med Biol 2024; 69:045007. PMID: 38198716; DOI: 10.1088/1361-6560/ad1cf6.
Abstract
Objective: The high-precision segmentation of retinal vessels in fundus images is important for the early diagnosis of ophthalmic diseases. However, extracting microvessels is challenging because of their low contrast and high structural complexity. Although some works have improved segmentation of thin vessels, they have only succeeded in recognizing small vessels with relatively high contrast.
Approach: We develop a deep learning (DL) framework with a multi-stage and dual-channel network model (MSDC_NET) to further improve thin-vessel segmentation under low contrast. Specifically, an adaptive image enhancement strategy combining multiple preprocessing steps with the DL method is first proposed to raise the contrast of thin vessels; then, a two-channel model with multi-scale perception is developed to implement whole-vessel and thin-vessel segmentation; and finally, a series of post-processing operations is designed to extract more small vessels from the predicted maps of the thin-vessel channel.
Main results: Experiments on DRIVE, STARE, and CHASE_DB1 demonstrate the superiority of the proposed MSDC_NET in extracting thin vessels from fundus images, and quantitative evaluations on several parameters based on the advanced ground truth further verify the advantages of the proposed DL model. Compared with the previous multi-branch method, specificity and F1-score are improved by about 2.18%, 0.68%, 1.73% and 2.91%, 0.24%, 8.38% on the three datasets, respectively.
Significance: This work may provide richer information to ophthalmologists for the diagnosis and treatment of vascular-related ophthalmic diseases.
Affiliation(s)
- Hui Guo: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Jing Meng: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Yongfu Zhao: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Hongdong Zhang: School of Computer, Qufu Normal University, 276826 Rizhao, People's Republic of China
- Cuixia Dai: College of Science, Shanghai Institute of Technology, 201418 Shanghai, People's Republic of China
9. Ma Z, Li X. An improved supervised and attention mechanism-based U-Net algorithm for retinal vessel segmentation. Comput Biol Med 2024; 168:107770. PMID: 38056215; DOI: 10.1016/j.compbiomed.2023.107770.
Abstract
The segmentation of retinal blood vessels is crucial for automatically diagnosing ophthalmic diseases such as diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. To improve the accuracy of vessel segmentation and better extract information about small vessels and edges, we introduce a U-Net algorithm with a supervised attention mechanism for retinal vessel segmentation. We achieve this by introducing a decoder fusion module (DFM) in the encoding part, effectively combining different convolutional blocks to extract features comprehensively. Additionally, in the decoding part of U-Net, we propose a context squeeze and excitation (CSE) decoding module to enhance important contextual feature information and the detection of tiny blood vessels. For the final output, we introduce a supervised fusion mechanism (SFM) that combines multiple branches from shallow to deep layers, effectively fusing multi-scale features, capturing information from different levels, and fully integrating low-level and high-level features to improve segmentation performance. Our experimental results on the public DRIVE, STARE, and CHASE_DB1 datasets demonstrate the excellent performance of the proposed network.
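The CSE decoding module builds on channel squeeze-and-excitation; a generic SE block in PyTorch is sketched below for orientation. It is not the authors' exact module, and the reduction ratio is an assumption.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Generic channel squeeze-and-excitation block (illustrative only)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                 # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                           # excite: reweight channels
```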
Affiliation(s)
- Zhendi Ma: School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China
- Xiaobo Li: School of Computer Science and Technology, Zhejiang Normal University, Jinhua 321004, China
10. Mahapatra S, Agrawal S, Mishro PK, Panda R, Dora L, Pachori RB. A Review on Retinal Blood Vessel Enhancement and Segmentation Techniques for Color Fundus Photography. Crit Rev Biomed Eng 2024; 52:41-69. PMID: 37938183; DOI: 10.1615/critrevbiomedeng.2023049348.
Abstract
The retinal image is a trusted modality in biomedical image-based diagnosis of many ophthalmologic and cardiovascular diseases. Periodic examination of the retina can help in spotting these abnormalities at an early stage. However, to deal with today's large population, computerized retinal image analysis is preferred over manual inspection. Precise extraction of the retinal vessels is the first and decisive step for clinical applications. Every year, many more articles describing new algorithms for this problem are added to the literature, yet most review articles are restricted to a fairly small number of approaches, assessment indices, and databases. In this context, a comprehensive review of different vessel extraction methods is inevitable. It includes the development of a first-hand classification of these methods. A bibliometric analysis of these articles is also presented. The benefits and drawbacks of the most commonly used techniques are summarized, and the primary challenges, as well as the scope of possible changes, are discussed. To make a fair comparison, numerous assessment indices are considered. The findings of this survey could provide a new path for researchers for further work in this domain.
Affiliation(s)
- Sakambhari Mahapatra: Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Sanjay Agrawal: Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Pranaba K Mishro: Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Rutuparna Panda: Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Lingraj Dora: Department of Electrical and Electronics Engineering, Veer Surendra Sai University of Technology, Burla, India
- Ram Bilas Pachori: Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, India
11. Li S, Gao X, Xie Z. Underwater Structured Light Stripe Center Extraction with Normalized Grayscale Gravity Method. Sensors (Basel) 2023; 23:9839. PMID: 38139687; PMCID: PMC10747204; DOI: 10.3390/s23249839.
Abstract
The non-uniform reflectance characteristics of object surfaces and underwater environmental disturbances during underwater laser measurements can have a great impact on laser stripe center extraction. We therefore propose a normalized grayscale gravity method to address this problem. First, we build an underwater structured light dataset covering different illuminations, turbidity levels, and reflective surfaces of the underwater object, and compare several state-of-the-art semantic segmentation models, including Deeplabv3, Deeplabv3plus, MobilenetV3, Pspnet, and FCNnet. Based on our comparison, we recommend PSPnet for the specific task of underwater structured light stripe segmentation. Second, to accurately extract the centerline of the segmented light stripe, the gray-level values are normalized to eliminate the influence of noise and stripe-edge information on the centroids, and the weights of the cross-sectional extremes are increased to improve convergence and robustness. Finally, the subpixel structured-light center points of the image are obtained by bilinear interpolation to improve the image resolution and extraction accuracy. The experimental results show that the proposed method can effectively eliminate noise interference while exhibiting good robustness and self-adaptability.
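The core of the grayscale gravity step, an intensity-weighted sub-pixel centroid computed along each stripe cross-section after per-column normalization, can be sketched in NumPy as below; the weighting exponent is an assumption standing in for the paper's exact weighting of the cross-sectional extremes.

```python
import numpy as np

def stripe_centers_gray_gravity(stripe, power=2.0):
    """Sub-pixel stripe centers via a grayscale-gravity (intensity-weighted
    centroid) computed down each image column (the exponent is assumed)."""
    img = stripe.astype(np.float64)
    rows = np.arange(img.shape[0])[:, None]
    col_max = img.max(axis=0, keepdims=True)
    norm = img / np.maximum(col_max, 1e-9)      # per-column normalization
    w = norm ** power                           # emphasize the stripe peak
    denom = w.sum(axis=0)
    centers = (w * rows).sum(axis=0) / np.maximum(denom, 1e-9)
    return centers                              # one sub-pixel row per column
```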
Affiliation(s)
- Shuaishuai Li: College of Engineering, Ocean University of China, Qingdao 266100, China; Key Laboratory of Ocean Engineering of Shandong Province, Qingdao 266100, China
- Xiang Gao: Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zexiao Xie: College of Engineering, Ocean University of China, Qingdao 266100, China; Key Laboratory of Ocean Engineering of Shandong Province, Qingdao 266100, China
12. Ye Z, Liu Y, Jing T, He Z, Zhou L. A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation. Sensors (Basel) 2023; 23:8899. PMID: 37960597; PMCID: PMC10650600; DOI: 10.3390/s23218899.
Abstract
Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of deep learning methods have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny vessels and low-contrast vessels are hard to detect because spatial detail is lost through consecutive down-sampling operations and multi-level features are inadequately fused by vanilla skip connections. To address these issues and enhance the segmentation precision of retinal vessels, we propose a novel high-resolution network with strip attention. Instead of a U-Net-shaped architecture, the proposed network follows an HRNet-shaped architecture as the basic network, learning high-resolution representations throughout the training process. In addition, a strip attention module comprising a horizontal attention mechanism and a vertical attention mechanism is designed to obtain long-range dependencies in the horizontal and vertical directions by calculating the similarity between each pixel and all pixels in the same row and the same column, respectively. For effective multi-layer feature fusion, we incorporate the strip attention module into the basic network to dynamically guide adjacent hierarchical features. Experimental results on the DRIVE and STARE datasets show that the proposed method can extract more tiny and low-contrast vessels than existing mainstream methods, achieving accuracies of 96.16% and 97.08% and sensitivities of 82.68% and 89.36%, respectively. The proposed method has the potential to aid the analysis of fundus images.
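The horizontal half of such a strip attention module (each pixel attending to all pixels in its row) can be sketched in PyTorch as below; the vertical branch is symmetric over columns. Projection sizes and the residual form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HorizontalStripAttention(nn.Module):
    """Self-attention restricted to pixels in the same row (sketch only)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                       # x: (N, C, H, W)
        q = self.query(x).permute(0, 2, 3, 1)   # (N, H, W, C')
        k = self.key(x).permute(0, 2, 1, 3)     # (N, H, C', W)
        v = self.value(x).permute(0, 2, 3, 1)   # (N, H, W, C)
        attn = torch.softmax(q @ k, dim=-1)     # (N, H, W, W): row-wise similarity
        out = (attn @ v).permute(0, 3, 1, 2)    # back to (N, C, H, W)
        return out + x                          # residual connection
```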
Affiliation(s)
- Zhipin Ye: Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Yingqian Liu: Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Teng Jing: Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Zhaoming He: Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79411, USA
- Ling Zhou: Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
13. Zhu YF, Xu X, Zhang XD, Jiang MS. CCS-UNet: a cross-channel spatial attention model for accurate retinal vessel segmentation. Biomed Opt Express 2023; 14:4739-4758. PMID: 37791275; PMCID: PMC10545190; DOI: 10.1364/boe.495766.
Abstract
Precise segmentation of retinal vessels plays an important role in computer-assisted diagnosis. Deep learning models have been applied to retinal vessel segmentation, but their efficacy is limited by the significant scale variation of vascular structures and the intricate background of retinal images. This paper proposes a cross-channel spatial attention U-Net (CCS-UNet) for accurate retinal vessel segmentation. In comparison to other U-Net-based models, our model employs a ResNeSt block for the encoder-decoder architecture. The block has a multi-branch structure that enables the model to extract more diverse vascular features, and it facilitates weight distribution across channels through the incorporation of soft attention, which effectively aggregates contextual information in vascular images. Furthermore, we propose an attention mechanism within the skip connection. This mechanism enhances feature integration across layers, thereby mitigating the degradation of effective information. It helps acquire cross-channel information and improves the localization of regions of interest, ultimately leading to better recognition of vascular structures. In addition, a feature fusion module (FFM) is used to provide semantic information for a more refined vascular segmentation map. We evaluated CCS-UNet on five benchmark retinal image datasets: DRIVE, CHASEDB1, STARE, IOSTAR, and HRF. Our proposed method exhibits superior segmentation efficacy compared to other state-of-the-art techniques, with a global accuracy of 0.9617/0.9806/0.9766/0.9786/0.9834 and an AUC of 0.9863/0.9894/0.9938/0.9902/0.9855 on DRIVE, CHASEDB1, STARE, IOSTAR, and HRF, respectively. Ablation studies are also performed to evaluate the relative contributions of the different architectural components. Our proposed model has potential as a diagnostic aid for retinal diseases.
Affiliation(s)
- Xue-dian Zhang: Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Min-shan Jiang: Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
14. Zhou W, Bai W, Ji J, Yi Y, Zhang N, Cui W. Dual-path multi-scale context dense aggregation network for retinal vessel segmentation. Comput Biol Med 2023; 164:107269. PMID: 37562323; DOI: 10.1016/j.compbiomed.2023.107269.
Abstract
There has been steady progress in the field of deep learning-based blood vessel segmentation. However, several challenging issues still limit its progress, including inadequate sample sizes, the neglect of contextual information, and the loss of microvascular details. To address these limitations, we propose a dual-path deep learning framework for blood vessel segmentation. In our framework, the fundus images are divided into concentric patches with different scales to alleviate the overfitting problem. Then, a Multi-scale Context Dense Aggregation Network (MCDAU-Net) is proposed to accurately extract the blood vessel boundaries from these patches. In MCDAU-Net, a Cascaded Dilated Spatial Pyramid Pooling (CDSPP) module is designed and incorporated into intermediate layers of the model, enhancing the receptive field and producing feature maps enriched with contextual information. To improve segmentation performance for low-contrast vessels, we propose an InceptionConv (IConv) module, which can explore deeper semantic features and suppress the propagation of non-vessel information. Furthermore, we design a Multi-scale Adaptive Feature Aggregation (MAFA) module to fuse multi-scale features by assigning adaptive weight coefficients to different feature maps through skip connections. Finally, to exploit complementary contextual information and enhance the continuity of microvascular structures, a fusion module is designed to combine the segmentation results obtained from patches of different sizes, achieving fine microvascular segmentation performance. To assess the effectiveness of our approach, we conducted evaluations on three widely used public datasets: DRIVE, CHASE-DB1, and STARE. Our findings reveal a remarkable advancement over current state-of-the-art (SOTA) techniques, with the mean Se and F1 scores improving by 7.9% and 4.7%, respectively. The code is available at https://github.com/bai101315/MCDAU-Net.
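A plain (non-cascaded) dilated spatial pyramid pooling block, sketched below in PyTorch, illustrates the receptive-field idea behind CDSPP; the dilation rates and fusion are assumptions, and the cascaded wiring of the actual module is not reproduced.

```python
import torch
import torch.nn as nn

class DilatedSpatialPyramidPooling(nn.Module):
    """Parallel atrous convolutions at several rates, fused by a 1x1 conv
    (generic illustration of the idea behind CDSPP)."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]  # multi-rate context
        return self.fuse(torch.cat(feats, dim=1))
```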
Affiliation(s)
- Wei Zhou: College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Weiqi Bai: College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jianhang Ji: College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yugen Yi: School of Software, Jiangxi Normal University, Nanchang, China
- Ningyi Zhang: School of Software, Jiangxi Normal University, Nanchang, China
- Wei Cui: Institute for Infocomm Research, The Agency for Science, Technology and Research (A*STAR), Singapore
15. Li Y, Zhang Y, Liu JY, Wang K, Zhang K, Zhang GS, Liao XF, Yang G. Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation. IEEE Trans Cybern 2023; 53:5826-5839. PMID: 35984806; DOI: 10.1109/tcyb.2022.3194099.
Abstract
Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features and fail to capture the global and local characterizations in fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to solve these limitations. First, the GT is developed to integrate the global information in the retinal image, which effectively captures the long-distance dependence between pixels, alleviating the discontinuity of blood vessels in the segmentation results. Second, DLA, which is constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information, consolidating the edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales in the deep learning framework, which can mitigate the attenuation of valid information in the process of feature fusion. We verified GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of the proposed GT-DLA-dsHFF. Implementation codes will be available on https://github.com/YangLibuaa/GT-DLA-dsHFF.
16. Huang Y, Yang R, Geng X, Li Z, Wu Y. Two Filters for Acquiring the Profiles from Images Obtained from Weak-Light Background, Fluorescence Microscope, Transmission Electron Microscope, and Near-Infrared Camera. Sensors (Basel) 2023; 23:6207. PMID: 37448056; DOI: 10.3390/s23136207.
Abstract
Extracting the profiles of images is critical because it provides a simplified description and draws attention to particular areas of the images. In our work, we designed two filters via the exponential and hypotenuse functions for profile extraction. Their ability to extract profiles from images obtained under weak-light conditions and from fluorescence microscopes, transmission electron microscopes, and near-infrared cameras is proven. Moreover, they can be used to extract nesting structures in the images. Furthermore, their performance on images degraded by Gaussian noise is evaluated. We used Gaussian white noise with a mean value of 0.9 to create very noisy images, and these filters remain effective for extracting edge morphology in such images. For comparison, we processed the same noisy images with several well-known filters, including a Gabor wavelet filter, a watershed-based filter, and a matched filter, whose profile-extraction performance is either merely comparable or ineffective when dealing with extensively noisy images. Our filters have shown potential for use in pattern recognition and object tracking.
Affiliation(s)
- Yinghui Huang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Ruoxi Yang: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
- Xin Geng: College of Building Environment Engineering, Zhengzhou University of Light Industry, Zhengzhou 450000, China
- Zongan Li: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
- Ye Wu: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
17. Yang R, Chen L, Zhang L, Li Z, Lin Y, Wu Y. Image Enhancement via Special Functions and Its Application for Near Infrared Imaging. Glob Chall 2023; 7:2200179. PMID: 37483414; PMCID: PMC10362124; DOI: 10.1002/gch2.202200179.
Abstract
Image enhancement is important because it can be used to highlight the area of interest in an image. This article designs four filters via special functions for image enhancement. Firstly, a filter based on the exponential function is designed: when the value of the progression is even, the edge feature can be extracted, and when it is odd, sharp contrast can be obtained. Secondly, a filter is built using the hyperbolic cosine and its inverse function, from which a printmaking-like feature can be extracted. Thirdly, a filter is made via the hyperbolic secant function and its inverse, which leads to the extraction of image edges; as the progression value increases, a marginal effect appears, the brightness decreases, and ripple morphology can be observed. Fourthly, a filter is constructed through the hyperbolic sine function and its inverse, from which marginal features can be extracted. Furthermore, these filters remain useful for extracting marginal features even when a high noise density of 0.9 is added to the original images, and they are useful for highlighting images acquired from near-infrared imaging.
Affiliation(s)
- Ruoxi Yang: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
- Long Chen: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
- Ling Zhang: College of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Zongan Li: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
- Yingcheng Lin: College of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Ye Wu: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
18. Li L, Liu H, Li Q, Tian Z, Li Y, Geng W, Wang S. Near-Infrared Blood Vessel Image Segmentation Using Background Subtraction and Improved Mathematical Morphology. Bioengineering (Basel) 2023; 10:726. PMID: 37370657; DOI: 10.3390/bioengineering10060726.
Abstract
The precise display of blood vessel information is crucial for doctors, not only for facilitating intravenous injections but also for the diagnosis and analysis of diseases. Infrared cameras can currently be used to capture images of superficial blood vessels, but their imaging quality often suffers from noise, breaks, and uneven vascular information. To overcome these problems, this paper proposes an image segmentation algorithm based on background subtraction and improved mathematical morphology. The algorithm regards the image as a superposition of blood vessels onto a background, removes noise by calculating the size of connected domains, achieves uniform blood vessel width, and smooths edges so that they reflect the actual state of the vessels. The algorithm is evaluated both subjectively and objectively to provide a basis for vascular image quality assessment. Extensive experimental results demonstrate that the proposed method can effectively extract accurate and clear vascular information.
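A minimal sketch of this style of pipeline, background estimation by morphology, thresholding, and removal of small connected domains, is given below with OpenCV; the black-hat step assumes vessels appear darker than the background, and all parameter values are illustrative.

```python
import cv2
import numpy as np

def segment_nir_vessels(gray, bg_kernel=31, min_area=200):
    """Background subtraction plus small-component removal (sketch only;
    kernel size, Otsu thresholding, and the area limit are assumptions)."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (bg_kernel, bg_kernel))
    # Black-hat = closing minus image: lifts dark, vessel-like structures.
    vessels = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)
    _, binary = cv2.threshold(vessels, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Drop connected domains smaller than min_area (treated as noise).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    clean = np.zeros_like(binary)
    for i in range(1, n):                       # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    return clean
```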
Affiliation(s)
- Ling Li: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Haoting Liu: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Qing Li: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhen Tian: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yajie Li: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Wenjia Geng: Department of Traditional Chinese Medicine, Peking University People's Hospital, Beijing 100044, China
- Song Wang: Department of Nephrology, Peking University Third Hospital, Beijing 100191, China
19. Sun Y, Li X, Liu Y, Yuan Z, Wang J, Shi C. A lightweight dual-path cascaded network for vessel segmentation in fundus image. Math Biosci Eng 2023; 20:10790-10814. PMID: 37322961; DOI: 10.3934/mbe.2023479.
Abstract
Automatic and fast segmentation of retinal vessels in fundus images is a prerequisite for the clinical diagnosis of ophthalmic diseases; however, high model complexity and low segmentation accuracy still limit its application. This paper proposes a lightweight dual-path cascaded network (LDPC-Net) for automatic and fast vessel segmentation. We designed a dual-path cascaded network via two U-shaped structures. Firstly, we employed a structured discarding (SD) convolution module to alleviate the over-fitting problem in both the encoder and decoder. Secondly, we introduced the depthwise separable convolution (DSC) technique to reduce the number of model parameters. Thirdly, a residual atrous spatial pyramid pooling (ResASPP) module is constructed in the connection layer to aggregate multi-scale information effectively. Finally, we performed comparative experiments on three public datasets. The experimental results show that the proposed method achieved superior performance in terms of accuracy, connectivity, and parameter count, indicating that it can serve as a promising lightweight assistive tool for ophthalmic diseases.
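The depthwise separable convolution (DSC) building block referred to here is standard; a generic PyTorch version is sketched below. Layer ordering and activation are assumptions, not LDPC-Net's exact block.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by pointwise 1x1 conv, the usual
    parameter-saving substitute for a full 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))
```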
Affiliation(s)
- Yanxia Sun: Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Xiang Li: Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China
- Yuechang Liu: Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Zhongzheng Yuan: Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Jinke Wang: Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Changfa Shi: Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
20. Du L, Liu H, Zhang L, Lu Y, Li M, Hu Y, Zhang Y. Deep ensemble learning for accurate retinal vessel segmentation. Comput Biol Med 2023; 158:106829. PMID: 37054633; DOI: 10.1016/j.compbiomed.2023.106829.
Abstract
Significant progress has been made in deep learning-based retinal vessel segmentation in recent years. However, current methods still suffer from limited performance, and the robustness of existing models leaves room for improvement. Our work introduces a novel framework for retinal vessel segmentation based on deep ensemble learning. Benchmarking comparisons indicate that our model outperforms existing ones on multiple datasets, demonstrating that it is more effective and robust for retinal vessel segmentation. By introducing an ensemble strategy that integrates different base deep learning models, such as the pyramid vision transformer and FCN-Transformer, the model captures discriminative feature representations. We expect the proposed method to benefit and accelerate the development of accurate retinal vessel segmentation in this field.
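As a sketch of the general ensemble idea, the snippet below averages the sigmoid probability maps of several base segmentation networks; plain (weighted) averaging is only one possible strategy and is not claimed to be the paper's exact scheme.

```python
import torch

def ensemble_vessel_mask(models, image, weights=None, threshold=0.5):
    """Fuse per-model vessel probabilities by (weighted) averaging.
    `models` is any iterable of segmentation networks returning logits of
    shape (N, 1, H, W); names and the fusion rule are assumptions."""
    probs = [torch.sigmoid(m(image)) for m in models]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    fused = sum(w * p for w, p in zip(weights, probs))
    return (fused > threshold).float()          # binary vessel mask
```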
Affiliation(s)
- Lingling Du: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Hanruo Liu: The Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lan Zhang: Department of Cardiovascular, Fourth Affiliated Hospital, Harbin Medical University, Harbin, China
- Yao Lu: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Mengyao Li: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yang Hu: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Yi Zhang: Department of Ophthalmology, The First Affiliated Hospital of Harbin Medical University, Harbin, China
21. Kv R, Prasad K, Peralam Yegneswaran P. Segmentation and Classification Approaches of Clinically Relevant Curvilinear Structures: A Review. J Med Syst 2023; 47:40. PMID: 36971852; PMCID: PMC10042761; DOI: 10.1007/s10916-023-01927-2.
Abstract
The detection of curvilinear structures in microscopic images, which helps clinicians make an unambiguous diagnosis, is assuming paramount importance in recent clinical practice. The appearance and size of dermatophytic hyphae, keratitic fungi, and corneal and retinal vessels vary widely, making their automated detection cumbersome. Automated deep learning methods, endowed with superior self-learning capacity, have superseded traditional machine learning methods, especially for complex images with challenging backgrounds. Their ability to learn features automatically from large input data, with better generalization and recognition capability and without human interference or excessive pre-processing, is highly beneficial in this context. Researchers have made varied attempts to overcome challenges such as thin vessels, bifurcations, and obstructive lesions in retinal vessel detection, as revealed through several publications reviewed here. Diabetic neuropathic complications such as tortuosity and changes in the density and angles of corneal fibers have also been successfully addressed in many of the reviewed publications. Since artifacts complicate the images and affect the quality of analysis, methods addressing these challenges are described as well. Traditional and deep learning methods adapted and published between 2015 and 2021, covering retinal vessels, corneal nerves, and filamentous fungi, are summarized in this review. We find several novel and meritorious ideas and techniques being put to use for retinal vessel segmentation and classification which, by way of cross-domain adaptation, can also be utilized for corneal nerves and filamentous fungi, with suitable adaptations to the challenges to be addressed.
Affiliation(s)
- Rajitha Kv: Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Keerthana Prasad: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Prakash Peralam Yegneswaran: Department of Microbiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
22
|
Computational intelligence in eye disease diagnosis: a comparative study. Med Biol Eng Comput 2023; 61:593-615. [PMID: 36595155 DOI: 10.1007/s11517-022-02737-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 12/09/2022] [Indexed: 01/04/2023]
Abstract
In recent years, eye disorders have become an important health issue among older people. Generally, individuals with eye diseases are unaware of the gradual progression of symptoms. Therefore, routine eye examinations are required for early diagnosis. Usually, eye disorders are identified by an ophthalmologist via a slit-lamp examination. Slit-lamp interpretations are inadequate due to differences in the analytical skills of ophthalmologists, inconsistency in eye disorder analysis, and record maintenance issues. Therefore, digital eye images and computational intelligence (CI)-based approaches are preferred as assistive methods for eye disease diagnosis. A comparative study of CI-based decision support models for eye disorder diagnosis is presented in this paper. The CI-based decision support systems used for diagnosing eye abnormalities were grouped into anterior and retinal eye abnormality diagnostic systems, and the numerous algorithms used for diagnosing these abnormalities are also briefly described. Various eye imaging modalities, pre-processing methods such as reflection removal and contrast enhancement, region of interest segmentation methods, and public eye image databases used for CI-based eye disease diagnosis system development are also discussed in this paper. In this comparative study, the reliability of various CI-based systems used for anterior eye and retinal disorder diagnosis was compared based on precision, sensitivity, and specificity in eye disease diagnosis. The outcomes of the comparative analysis indicate that the CI-based anterior and retinal disease diagnosis systems attained significant prediction accuracy. Hence, these CI-based diagnosis systems can be used in clinics to reduce the burden on physicians, minimize fatigue-related misdetection, and support precise clinical decisions.
Collapse
|
23
|
Challoob M, Gao Y, Busch A, Nikzad M. Separable Paravector Orientation Tensors for Enhancing Retinal Vessels. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:880-893. [PMID: 36331638 DOI: 10.1109/tmi.2022.3219436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Robust detection of retinal vessels remains an unsolved research problem, particularly in handling the intrinsic real-world challenges of highly imbalanced contrast between thick and thin vessels, inhomogeneous background regions, uneven illumination, and the complex geometries of crossings/bifurcations. This paper presents a new separable paravector orientation tensor that addresses these difficulties by characterizing the enhancement of retinal vessels as dependent on a nonlinear scale representation, invariant to changes in contrast and lighting, responsive to symmetric patterns, and fitted with elliptical cross-sections. The proposed method is built on projecting vessels as a 3D paravector-valued function rotated in an alpha-quarter domain, providing geometrical, structural, symmetric, and energetic features. We introduce an innovative symmetrical inhibitory scheme that incorporates paravector features to produce a set of directional, contrast-independent, elongated patterns reconstructing the vessel tree in orientation tensors. By fitting constrained elliptical volumes via eigensystem analysis, the final vessel tree is produced with a strong and uniform response preserving various vessel features. The validation of the proposed method on clinically relevant retinal images, with high-quality results, shows its excellent performance compared with state-of-the-art benchmarks and the second human observer.
Collapse
|
24
|
Sun Y, Li Y, Zhang F, Zhao H, Liu H, Wang N, Li H. A deep network using coarse clinical prior for myopic maculopathy grading. Comput Biol Med 2023; 154:106556. [PMID: 36682177 DOI: 10.1016/j.compbiomed.2023.106556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Revised: 12/19/2022] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Pathological myopia (PM) is a globally prevalent eye disease and one of the main causes of blindness. In long-term clinical observation, myopic maculopathy is a main criterion for diagnosing PM severity. Grading of myopic maculopathy can provide a prediction of the severity and progression of PM, so that treatment can be performed and myopia blindness prevented in time. In this paper, we propose a feature fusion framework that utilizes the tessellated fundus and the brightest region in fundus images as prior knowledge. The proposed framework consists of a prior knowledge extraction module and a feature fusion module. The prior knowledge extraction module uses traditional image processing methods to extract the prior knowledge, which indicates coarse lesion positions in fundus images. Furthermore, the priors, namely the tessellated fundus and the brightest region in fundus images, are integrated into the deep learning network as global and local constraints, respectively, by the feature fusion module. In addition, a rank loss is designed to increase the continuity of the classification score. We collected a private color fundus dataset from Beijing TongRen Hospital containing 714 clinical images. The dataset contains all 5 grades of myopic maculopathy, labeled by experienced ophthalmologists. Our framework achieves a five-grade accuracy of 0.8921 on our private dataset. The Pathological Myopia (PALM) dataset is used for comparison with other related algorithms. Our framework is trained with 400 images and achieves an AUC of 0.9981 for two-class grading. The results show that our framework can achieve good performance for myopic maculopathy grading.
Collapse
Affiliation(s)
- Yun Sun
- Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
| | - Yu Li
- Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
| | - Fengju Zhang
- Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
| | - He Zhao
- Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China.
| | - Hanruo Liu
- Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China; Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
| | - Ningli Wang
- Beijing Tongren Hospital, Capital Medical University, No. 2, Chongwenmennei Street, Beijing, 100730, China
| | - Huiqi Li
- Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China.
| |
Collapse
|
25
|
GDF-Net: A multi-task symmetrical network for retinal vessel segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
|
26
|
Hu D, Pan L, Chen X, Xiao S, Wu Q. A novel vessel segmentation algorithm for pathological en-face images based on matched filter. Phys Med Biol 2023; 68. [PMID: 36745931 DOI: 10.1088/1361-6560/acb98a] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 02/06/2023] [Indexed: 02/08/2023]
Abstract
The vascular information in fundus images can provide an important basis for detection and prediction of retina-related diseases. However, the presence of lesions such as choroidal neovascularization can seriously interfere with normal vascular areas in optical coherence tomography (OCT) fundus images. In this paper, a novel method is proposed for detecting blood vessels in pathological OCT fundus images. First of all, an automatic localization and filling method is used in the preprocessing step to reduce pathological interference. Afterwards, for vessel extraction, a pore ablation method based on a capillary bundle model is applied. The ablation method processes the image after matched-filter feature extraction, which can eliminate the interference caused by diseased blood vessels to a great extent. At the end of the proposed method, morphological operations are used to obtain the main vascular features. Experimental results on the dataset show that the proposed method achieves DICE, PRECISION, and TPR of 0.88 ± 0.03, 0.79 ± 0.05, and 0.66 ± 0.04, respectively. Effective extraction of vascular information from OCT fundus images is of great significance for the diagnosis and treatment of retina-related diseases.
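For readers unfamiliar with matched filtering, the sketch below builds a small bank of oriented kernels with a Gaussian cross-section and keeps the maximum response per pixel, assuming vessels appear as dark ridges on a brighter background. The kernel size, sigma, and number of orientations are illustrative assumptions; the pore-ablation step of the paper is not reproduced.

```python
# Hypothetical sketch of a single-scale oriented Gaussian matched filter bank.
import numpy as np
import cv2

def gaussian_matched_kernel(sigma=1.5, length=9, angle_deg=0.0):
    """Oriented, zero-mean kernel with a Gaussian cross-section (dark vessels)."""
    half = int(3 * sigma)
    xs = np.arange(-half, half + 1)
    profile = -np.exp(-(xs ** 2) / (2 * sigma ** 2))   # dark-ridge profile
    kernel = np.tile(profile, (length, 1)).astype(np.float32)
    kernel -= kernel.mean()                            # zero response on flat areas
    centre = (kernel.shape[1] / 2.0, kernel.shape[0] / 2.0)
    rot = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    return cv2.warpAffine(kernel, rot, (kernel.shape[1], kernel.shape[0]))

def matched_filter_response(gray, n_angles=12):
    """Maximum filter response over a set of orientations."""
    gray = gray.astype(np.float32)
    responses = [cv2.filter2D(gray, -1, gaussian_matched_kernel(angle_deg=a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(np.stack(responses), axis=0)
```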
Collapse
Affiliation(s)
- Derong Hu
- School of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Lingjiao Pan
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, Suzhou, People's Republic of China
| | - Shuyan Xiao
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Quanyu Wu
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| |
Collapse
|
27
|
Wang J, Zhou L, Yuan Z, Wang H, Shi C. MIC-Net: multi-scale integrated context network for automatic retinal vessel segmentation in fundus image. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:6912-6931. [PMID: 37161134 DOI: 10.3934/mbe.2023298] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
PURPOSE Accurate retinal vessel segmentation is of great value in the auxiliary screening of various diseases. However, due to the low contrast between the ends of vessel branches and the background, and the variable morphology of the optic disc and cup in retinal images, high-precision retinal blood vessel segmentation still faces difficulties. METHOD This paper proposes a multi-scale integrated context network, MIC-Net, which fully fuses the encoder-decoder features and extracts multi-scale information. First, a hybrid stride sampling (HSS) block was designed in the encoder to minimize the loss of helpful information caused by the downsampling operation. Second, a dense hybrid dilated convolution (DHDC) block was employed in the connection layer; while preserving feature resolution, it can perceive richer contextual information. Third, a squeeze-and-excitation block with residual connections (SERC) was introduced in the decoder to adjust channel attention adaptively. Finally, we utilized a multi-layer feature fusion mechanism in the skip connections, which enables the network to consider both low-level details and high-level semantic information. RESULTS We evaluated the proposed method on three public datasets, DRIVE, STARE and CHASE. The area under the receiver operating characteristic curve (AUC) and the accuracy (Acc) reached 98.62%/97.02%, 98.60%/97.76% and 98.73%/97.38%, respectively. CONCLUSIONS Experimental results show that the proposed method obtains segmentation performance comparable with state-of-the-art (SOTA) methods. In particular, the proposed method can effectively reduce small blood vessel segmentation errors, proving it a promising tool for auxiliary diagnosis of ophthalmic diseases.
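The dilated-convolution connection layer and the squeeze-and-excitation decoder block described above can be sketched as follows in PyTorch. Channel counts, dilation rates, and the reduction ratio are illustrative assumptions and do not reproduce MIC-Net's exact design.

```python
# Hypothetical sketch: stacked dilated convolutions and an SE block with residual.
import torch
import torch.nn as nn

class HybridDilatedConv(nn.Module):
    """Resolution-preserving stack of 3x3 convolutions with growing dilation."""
    def __init__(self, channels, dilations=(1, 2, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                          nn.BatchNorm2d(channels),
                          nn.ReLU(inplace=True))
            for d in dilations])

    def forward(self, x):
        for branch in self.branches:
            x = branch(x)
        return x

class SEResidual(nn.Module):
    """Squeeze-and-excitation channel re-weighting with a residual connection."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(inplace=True),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x + x * w

x = torch.randn(1, 32, 64, 64)
print(SEResidual(32)(HybridDilatedConv(32)(x)).shape)   # torch.Size([1, 32, 64, 64])
```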
Collapse
Affiliation(s)
- Jinke Wang
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Lubiao Zhou
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Zhongzheng Yuan
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
| | - Haiying Wang
- School of Automation, Harbin University of Science and Technology, Harbin 150080, China
| | - Changfa Shi
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
| |
Collapse
|
28
|
Retinal blood vessel segmentation by using the MS-LSDNet network and geometric skeleton reconnection method. Comput Biol Med 2023; 153:106416. [PMID: 36586230 DOI: 10.1016/j.compbiomed.2022.106416] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 11/21/2022] [Accepted: 12/04/2022] [Indexed: 12/29/2022]
Abstract
Automatic retinal blood vessel segmentation is a key link in the diagnosis of ophthalmic diseases. Recent deep learning methods have achieved high accuracy in vessel segmentation but still face challenges in maintaining vascular structural connectivity. Therefore, this paper proposes a novel retinal blood vessel segmentation strategy that includes three stages: vessel structure detection, vessel branch extraction and broken vessel segment reconnection. First, we propose a multiscale linear structure detection network (MS-LSDNet), which improves the detection ability of fine blood vessels by learning the types of rich hierarchical features. In addition, to maintain the connectivity of the vascular structure in the process of binarization of the vascular probability map, an adaptive hysteresis threshold method for vascular extraction is proposed. Finally, we propose a vascular tree structure reconstruction algorithm based on a geometric skeleton to connect the broken vessel segments. Experimental results on three publicly available datasets show that compared with current state-of-the-art algorithms, our strategy effectively maintains the connectivity of retinal vascular tree structure.
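The hysteresis step above can be sketched with scikit-image; the paper's adaptive threshold selection is not specified here, so the low and high thresholds are taken from percentiles of the probability map as a stand-in.

```python
# Hypothetical sketch: hysteresis binarization of a vessel probability map.
import numpy as np
from skimage.filters import apply_hysteresis_threshold

def hysteresis_binarize(prob_map, low_pct=80, high_pct=95):
    low, high = np.percentile(prob_map, [low_pct, high_pct])
    # Pixels above `high` seed the vessel tree; pixels above `low` are kept
    # only when connected to a seed, which helps preserve thin-branch continuity.
    return apply_hysteresis_threshold(prob_map, low, high)

mask = hysteresis_binarize(np.random.default_rng(0).random((64, 64)))
print(mask.dtype, mask.sum())
```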
Collapse
|
29
|
Deep learning-based hemorrhage detection for diabetic retinopathy screening. Sci Rep 2023; 13:1479. [PMID: 36707608 PMCID: PMC9883230 DOI: 10.1038/s41598-023-28680-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 01/23/2023] [Indexed: 01/29/2023] Open
Abstract
Diabetic retinopathy is a retinal complication that causes visual impairment. Hemorrhage is one of the pathological symptoms of diabetic retinopathy that emerges during disease development. Therefore, hemorrhage detection reveals the presence of diabetic retinopathy in the early phase. Diagnosing the disease in its initial stage is crucial for adopting proper treatment so that the repercussions can be prevented. An automatic deep learning-based hemorrhage detection method is proposed that can be used as a second interpreter for ophthalmologists to reduce the time and complexity of conventional screening methods. The quality of the images was enhanced, and the prospective hemorrhage locations were estimated in the preprocessing stage. Modified gamma correction adaptively illuminates fundus images by using gradient information to address the nonuniform brightness levels of images. The algorithm estimated the locations of potential candidates by using a Gaussian matched filter, entropy thresholding, and mathematical morphology. The required objects were segmented using the regional diversity at the estimated locations. A novel hemorrhage network is propounded for hemorrhage classification and compared with renowned deep models. Two datasets were used to benchmark the model's performance using sensitivity, specificity, precision, and accuracy metrics. Despite being the shallowest network, the proposed network achieved results competitive with LeNet-5, AlexNet, ResNet50, and VGG-16. The hemorrhage network was assessed using training time and classification accuracy through synthetic experimentation. Results showed promising accuracy in the classification stage while significantly reducing training time. The research concluded that increasing deep network layers does not guarantee good results but rather increases training time. A suitable deep model architecture and appropriate parameters are critical for obtaining excellent outcomes.
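A brightness-adaptive gamma correction in the spirit of the preprocessing step above can be sketched as follows. The gradient-guided modification is paper-specific, so the gamma value here is derived from the mean intensity only, as a labeled stand-in, and the file name is a placeholder.

```python
# Hypothetical sketch: mean-intensity-driven adaptive gamma correction.
import numpy as np
import cv2

def adaptive_gamma(gray_u8):
    norm = gray_u8.astype(np.float32) / 255.0
    # Dark images get gamma < 1 (brightened); bright images get gamma > 1 (darkened).
    gamma = np.log(0.5) / np.log(norm.mean() + 1e-6)
    return (np.power(norm, gamma) * 255).astype(np.uint8)

img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
if img is not None:
    cv2.imwrite("fundus_gamma.png", adaptive_gamma(img))
```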
Collapse
|
30
|
Xing W, Li G, He C, Huang Q, Cui X, Li Q, Li W, Chen J, Ta D. Automatic detection of A-line in lung ultrasound images using deep learning and image processing. Med Phys 2023; 50:330-343. [PMID: 35950481 DOI: 10.1002/mp.15908] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 06/29/2022] [Accepted: 07/30/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Auxiliary diagnosis and monitoring of lung diseases based on lung ultrasound (LUS) images is an important area of clinical research. The A-line is one of the most common indicators in LUS that can support the assessment of lung diseases. Traditional A-line detection relies mainly on experienced clinicians, which is inefficient and cannot meet the needs of regions with limited medical resources. Therefore, automatic detection of A-lines in LUS images is important. PURPOSE To address the disadvantages of traditional A-line detection methods, achieve automatic and accurate detection, and provide theoretical support for clinical application, we propose a novel A-line detection method for LUS images with different probe types. METHODS First, an improved Faster R-CNN model with a localization-box selection strategy was designed to accurately locate the pleural line. Then, the LUS image below the pleural line was segmented for independent analysis, excluding the influence of other similar structures. Next, image-processing methods based on total variation, matched filtering, and gray difference were applied to achieve automatic A-line detection. Finally, a "depth" index was designed to verify accuracy by judging whether the automatic measurement results fell within ±5% of the corresponding manual results. In the experiments, 3000 convex-array LUS images were used for training and validating the improved pleural line localization model by five-fold cross-validation. A further 850 convex-array LUS images and 1080 linear-array LUS images were used to test the trained pleural line localization model and the proposed image-processing-based A-line detection method. Accuracy analysis, error statistics, and the Hausdorff distance were employed to evaluate the experimental results. RESULTS After 100 epochs, the mean loss of the training and validation sets of the improved Faster R-CNN model reached 0.6540 and 0.7882, with a validation accuracy of 98.70%. The trained pleural line localization model was applied to the convex and linear probe testing sets and reached accuracies of 97.88% and 97.11%, respectively, which were 3.83% and 8.70% higher than the original Faster R-CNN model. The accuracy, sensitivity, and specificity of A-line detection reached 95.41%, 0.9244, and 0.9875 for the convex probe and 94.63%, 0.9230, and 0.9766 for the linear probe, respectively. Compared to the experienced clinicians' results, the mean value and p value of the depth error were 1.5342 ± 1.2097 and 0.9021, respectively, and the Hausdorff distance was 5.7305 ± 1.8311. In addition, the accumulated accuracy of the two-stage experiment (pleural line localization and A-line detection) was calculated as the final accuracy of the whole A-line detection system: 93.39% and 91.90% for the convex and linear probes, respectively, which is higher than previous methods. CONCLUSIONS The proposed method, combining image processing and deep learning, can automatically and accurately detect A-lines in LUS images with different probe types, which has important application value for clinical diagnosis.
Collapse
Affiliation(s)
- Wenyu Xing
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Human Phenome Institute, Fudan University, Shanghai, China
| | - Guannan Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
| | - Chao He
- Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai, China
| | - Qiming Huang
- School of Advanced Computing and Artificial Intelligence, Xi'an Jiaotong-liverpool University, Suzhou, China
| | - Xulei Cui
- Department of Anesthesiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
| | - Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
| | - Wenfang Li
- Department of Emergency and Critical Care, Changzheng Hospital, Naval Medical University, Shanghai, China
| | - Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China; Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, Shanghai, China
| | - Dean Ta
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai, China
| |
Collapse
|
31
|
Liu Y, Shen J, Yang L, Yu H, Bian G. Wave-Net: A lightweight deep network for retinal vessel segmentation from fundus images. Comput Biol Med 2023; 152:106341. [PMID: 36463794 DOI: 10.1016/j.compbiomed.2022.106341] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Revised: 10/25/2022] [Accepted: 11/16/2022] [Indexed: 11/26/2022]
Abstract
Accurate segmentation of retinal vessels from fundus images is fundamental for the diagnosis of numerous eye diseases, and an automated vessel segmentation method can effectively help clinicians make an accurate diagnosis and provide appropriate treatment schemes. It is important to note that both thick and thin vessels play a key role in disease judgement. Because of complex factors such as the presence of various lesions, image noise, complex backgrounds, and poor contrast in fundus images, precise segmentation of thin vessels remains a great challenge. Recently, owing to its capability for contextual feature representation learning, deep learning has shown remarkable segmentation performance on retinal vessels. However, it still has shortcomings for high-precision retinal vessel extraction due to factors such as the semantic information loss caused by pooling operations and the limited receptive field. To address these problems, this paper proposes a new lightweight segmentation network for precise retinal vessel segmentation, called Wave-Net on account of its overall shape. To alleviate the influence of semantic information loss on thin vessels and to acquire more context about micro structures and details, a detail enhancement and denoising block (DED) is proposed to improve the segmentation precision of thin vessels; it replaces the simple skip connections of the original U-Net and also alleviates the semantic gap problem. Furthermore, to address the limited receptive field and enable multi-scale vessel detection, a multi-scale feature fusion block (MFF) is proposed to fuse cross-scale contexts, achieving higher segmentation accuracy and effective processing of local feature maps. Experiments indicate that the proposed Wave-Net achieves excellent performance on retinal vessel segmentation while maintaining a lightweight network design compared with other advanced segmentation methods, and it also shows a better segmentation ability for thin vessels.
Collapse
Affiliation(s)
- Yanhong Liu
- School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China
| | - Ji Shen
- School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China
| | - Lei Yang
- School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China.
| | - Hongnian Yu
- School of Electrical and Information Engineering, Zhengzhou University, 450001, China; The Built Environment, Edinburgh Napier University, Edinburgh EH10 5DT, UK
| | - Guibin Bian
- School of Electrical and Information Engineering, Zhengzhou University, 450001, China; The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
| |
Collapse
|
32
|
Kumar KS, Singh NP. An efficient registration-based approach for retinal blood vessel segmentation using generalized Pareto and fatigue pdf. Med Eng Phys 2022; 110:103936. [PMID: 36529622 DOI: 10.1016/j.medengphy.2022.103936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 11/05/2022] [Accepted: 12/01/2022] [Indexed: 12/12/2022]
Abstract
Segmentation of retinal blood vessels (RBV) from retinal images and registration of the segmented RBV structure are implemented to identify changes in vessel structure, which ophthalmologists use in the diagnosis of illnesses such as glaucoma, diabetes, and hypertension. The retinal blood vessels supply blood to the inner retinal neurons; they are located mainly in the inner retina but may extend partly into the ganglion cell layer, and such network changes have not been reliably identified with past methods. Accurate classification of RBV and registration of the segmented blood vessels are challenging tasks against the low-intensity background of retinal images. We therefore propose a novel approach in which RBV are segmented with a matched filter based on the generalized Pareto probability distribution function (pdf), and a feature-based registration of the segmented retinal blood vessels is performed with Binary Robust Invariant Scalable Keypoints (BRISK). BRISK provides a predefined sampling pattern compared with the pdf and is implemented for interest point detection and matching to capture changes in vessel structure. The proposed approach contains three levels: pre-processing; a matched filter with the generalized Pareto pdf for the source image together with a novel fatigue pdf for the target image; and the BRISK framework used for registration of the segmented source and target retinal images. The performance of the implemented system is estimated experimentally by the average accuracy, normalized cross-correlation (NCC), and computation time on the segmented source and target images. The NCC is the main element providing statistical information about retinal image segmentation. The proposed generalized Pareto pdf approach has an average accuracy of 95.21%, the NCC of the image pairs is 93%, and the average accuracy of registration of the segmented source and target images is 98.51%. The average computation time of the proposed approach is around 1.4 s, determined under the boundary conditions of the pdf.
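The BRISK-based registration stage can be sketched with OpenCV as below; the file names are placeholders, and the generalized Pareto/fatigue matched-filter segmentation stage is not reproduced.

```python
# Hypothetical sketch: BRISK keypoint detection and matching between two
# segmented-vessel images, as a building block for feature-based registration.
import cv2

src = cv2.imread("vessels_source.png", cv2.IMREAD_GRAYSCALE)  # placeholder
dst = cv2.imread("vessels_target.png", cv2.IMREAD_GRAYSCALE)  # placeholder
if src is None or dst is None:
    raise SystemExit("segmented source/target images not found")

brisk = cv2.BRISK_create()
kp1, des1 = brisk.detectAndCompute(src, None)
kp2, des2 = brisk.detectAndCompute(dst, None)

# BRISK yields binary descriptors, so Hamming distance is the natural metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")
```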
Collapse
Affiliation(s)
- K Susheel Kumar
- GITAM University, Bengaluru, 561203, India; National Institute of Technology Hamirpur, Himachal Pradesh 177005, India.
| | | |
Collapse
|
33
|
RADCU-Net: residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01715-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
34
|
Comparing Conventional and Deep Feature Models for Classifying Fundus Photography of Hemorrhages. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:7387174. [DOI: 10.1155/2022/7387174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 03/27/2022] [Accepted: 04/08/2022] [Indexed: 11/20/2022]
Abstract
Diabetic retinopathy is an eye-related pathology that creates abnormalities and causes visual impairment; proper treatment requires identifying these irregularities. This research uses a hemorrhage detection method and compares the classification of conventional and deep features. In particular, the method identifies hemorrhages connected with blood vessels or residing at the retinal border, which has been reported to be challenging. Initially, adaptive brightness adjustment and contrast enhancement rectify degraded images. Prospective locations of hemorrhages are estimated by a Gaussian matched filter, entropy thresholding, and morphological operations. Hemorrhages are segmented by a novel technique based on the regional variance of intensities. Features are then extracted by conventional methods and deep models for training support vector machines, and the results are evaluated. Evaluation metrics for each model are promising, but the findings suggest that deep models are comparatively more effective than conventional features.
Collapse
|
35
|
Fu J, Cao L, Wei S, Xu M, Song Y, Li H, You Y. A GAN-based deep enhancer for quality enhancement of retinal images photographed by a handheld fundus camera. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2022; 2:100077. [PMID: 37846289 PMCID: PMC10577846 DOI: 10.1016/j.aopr.2022.100077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Revised: 08/05/2022] [Accepted: 08/12/2022] [Indexed: 10/18/2023]
Abstract
Objective Due to limited imaging conditions, the quality of fundus images is often unsatisfactory, especially for images photographed by handheld fundus cameras. Here, we have developed an automated method based on combining two mirror-symmetric generative adversarial networks (GANs) for image enhancement. Methods A total of 1047 retinal images were included. The raw images were enhanced by a GAN-based deep enhancer and by another method based on luminosity and contrast adjustment. All raw images and enhanced images were anonymously assessed and classified into six quality levels by three experienced ophthalmologists. The quality classification and quality change of the images were compared. In addition, detailed image-reading results for the number of dubiously pathological fundi were also compared. Results After GAN enhancement, 42.9% of images increased in quality, 37.5% remained stable, and 19.6% decreased. After excluding the images at the highest level (level 0) before enhancement, a large proportion (75.6%) of images showed an increase in quality classification, and only a minority (9.3%) showed a decrease. The GAN-based method was superior for quality improvement over the luminosity and contrast adjustment method (P<0.001). In terms of image-reading results, the consistency rate fluctuated from 86.6% to 95.6%, and for the specific disease subtypes, both the discrepancy number and the discrepancy rate were less than 15 and 15%, respectively, for the two ophthalmologists. Conclusions Learning the style of high-quality retinal images with the proposed deep enhancer may be an effective way to improve the quality of retinal images photographed by handheld fundus cameras.
Collapse
Affiliation(s)
- Junxia Fu
- Beijing Aier Intech Eye Hospital, Beijing, China
- Aier Eye Hospital Group, Hunan, China
- Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
| | - Lvchen Cao
- School of Artificial Intelligence, Henan University, Zhengzhou, China
| | - Shihui Wei
- Department of Ophthalmology, The Chinese People's Liberation Army General Hospital, Beijing, China
| | - Ming Xu
- Aier Eye Hospital Group, Hunan, China
| | - Yali Song
- Aier Eye Hospital Group, Hunan, China
| | - Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Yuxia You
- Beijing Aier Intech Eye Hospital, Beijing, China
- Aier Eye Hospital Group, Hunan, China
| |
Collapse
|
36
|
Ren K, Chang L, Wan M, Gu G, Chen Q. An improved U-net based retinal vessel image segmentation method. Heliyon 2022; 8:e11187. [PMID: 36311363 PMCID: PMC9614856 DOI: 10.1016/j.heliyon.2022.e11187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 08/04/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022] Open
Abstract
Diabetic retinopathy is not just the most common complication of diabetes but also the leading cause of adult blindness. Currently, doctors determine the cause of diabetic retinopathy primarily by examining fundus images. Large-scale manual screening is difficult to achieve for retinal health screening. In this paper, we propose an improved U-net network for segmenting retinal vessels. Firstly, due to the scarcity of retinal data, pre-processing of the raw data is required. The data are processed by grayscale transformation, normalization, CLAHE, and gamma transformation. Data augmentation can prevent overfitting in the training process. Secondly, the basic U-net network structure is built, and a Bi-FPN network is fused into the U-net. Datasets from a public challenge are used to evaluate the performance of the proposed method, which detects vessels with SP of 0.8604, SE of 0.9767, ACC of 0.9651, and AUC of 0.9787.
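The preprocessing chain mentioned above (grayscale conversion, normalization, CLAHE, gamma transform) can be sketched with OpenCV; the clip limit, tile size, gamma value, and file names are illustrative assumptions.

```python
# Hypothetical sketch of the fundus preprocessing pipeline.
import numpy as np
import cv2

def preprocess_fundus(bgr, gamma=1.2):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(norm)                              # local contrast enhancement
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], np.uint8)
    return cv2.LUT(eq, lut)                             # gamma transform

img = cv2.imread("fundus.png")                          # placeholder file name
if img is not None:
    cv2.imwrite("fundus_pre.png", preprocess_fundus(img))
```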
Collapse
|
37
|
Deep Learning Technology Applied to Medical Image Tissue Classification. Diagnostics (Basel) 2022; 12:diagnostics12102430. [PMID: 36292119 PMCID: PMC9600639 DOI: 10.3390/diagnostics12102430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 09/28/2022] [Accepted: 09/28/2022] [Indexed: 11/17/2022] Open
Abstract
Medical image classification is a novel technology that presents a new challenge. It is essential that pathological images are automatically and correctly classified to enable doctors to provide precise treatment. Convolutional neural networks, which may have dozens or hundreds of layers, have demonstrated their effectiveness in classifying images in deep learning by modeling the relationships among different neural network features. Convolutional layers consisting of small kernels take weights as input and guide them through an activation function as output. The main advantage of using convolutional neural networks (CNNs) instead of traditional neural networks is that they reduce the model parameters for greater accuracy. However, many studies have simply focused on finding the best CNN model and classification results from a single medical image classification task. Therefore, we applied common deep learning network models in an attempt to identify the best model framework by training and validating different model parameters to classify medical images. After conducting experiments on six publicly available databases of pathological images, including colorectal cancer tissue, chest X-rays, common skin lesions, diabetic retinopathy, pediatric chest X-ray, and breast ultrasound image datasets, we were able to confirm that the recognition accuracy of the Inception V3 method was significantly better than that of other existing deep learning models.
Collapse
|
38
|
Panda NR, Sahoo AK. A Detailed Systematic Review on Retinal Image Segmentation Methods. J Digit Imaging 2022; 35:1250-1270. [PMID: 35508746 PMCID: PMC9582172 DOI: 10.1007/s10278-022-00640-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2021] [Revised: 04/12/2022] [Accepted: 04/14/2022] [Indexed: 11/27/2022] Open
Abstract
The separation of blood vessels in the retina is a major aspect of detecting ailments and is carried out by segregating the retinal blood vessels from the fundus images. Moreover, it helps to provide earlier therapy for deadly diseases and prevent further impacts due to diabetes and hypertension. Many reviews already exist for this problem, but those reviews have presented the analysis of a single framework. Hence, this review of retinal segmentation reveals distinct methodologies with diverse frameworks that are utilized for blood vessel separation. The novelty of this review lies in finding the best neural network model by comparing efficiency. For that, machine learning (ML) and deep learning (DL) approaches were compared, and the best model is reported. Moreover, different datasets were used to segment the retinal blood vessels. The execution of each approach is compared based on performance metrics such as sensitivity, specificity, and accuracy using publicly accessible datasets like STARE, DRIVE, ROSE, REFUGE, and CHASE. This article discloses the implementation capacity of the distinct techniques implemented for each segmentation method. Finally, the finest accuracy of 98% and sensitivity of 96% were achieved by the convolutional neural network with ranking support vector machine (CNN-rSVM) technique, which used public datasets to verify its efficiency. Hence, the overall review in this article points toward earlier diagnosis of diseases to deliver earlier therapy.
Collapse
Affiliation(s)
- Nihar Ranjan Panda
- Department of Electronics and Communication Engineering, Silicon Institute of Technology, Bhubaneswar, Orissa, 751024, India.
| | - Ajit Kumar Sahoo
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India
| |
Collapse
|
39
|
Segmentation of retinal blood vessel using generalized extreme value probability distribution function(pdf)-based matched filter approach. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01108-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
40
|
Tan Y, Yang KF, Zhao SX, Li YJ. Retinal Vessel Segmentation With Skeletal Prior and Contrastive Loss. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2238-2251. [PMID: 35320091 DOI: 10.1109/tmi.2022.3161681] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The morphology of retinal vessels is closely associated with many kinds of ophthalmic diseases. Although huge progress in retinal vessel segmentation has been achieved with the advancement of deep learning, some challenging issues remain. For example, vessels can be disturbed or covered by other components presented in the retina (such as optic disc or lesions). Moreover, some thin vessels are also easily missed by current methods. In addition, existing fundus image datasets are generally tiny, due to the difficulty of vessel labeling. In this work, a new network called SkelCon is proposed to deal with these problems by introducing skeletal prior and contrastive loss. A skeleton fitting module is developed to preserve the morphology of the vessels and improve the completeness and continuity of thin vessels. A contrastive loss is employed to enhance the discrimination between vessels and background. In addition, a new data augmentation method is proposed to enrich the training samples and improve the robustness of the proposed model. Extensive validations were performed on several popular datasets (DRIVE, STARE, CHASE, and HRF), recently developed datasets (UoA-DR, IOSTAR, and RC-SLO), and some challenging clinical images (from RFMiD and JSIEC39 datasets). In addition, some specially designed metrics for vessel segmentation, including connectivity, overlapping area, consistency of vessel length, revised sensitivity, specificity, and accuracy were used for quantitative evaluation. The experimental results show that, the proposed model achieves state-of-the-art performance and significantly outperforms compared methods when extracting thin vessels in the regions of lesions or optic disc. Source code is available at https://www.github.com/tyb311/SkelCon.
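A vessel skeleton of the kind used as a prior above can be obtained with scikit-image as in the minimal sketch below; the SkelCon fitting module and contrastive loss are more involved and are not reproduced here.

```python
# Hypothetical sketch: extracting a one-pixel-wide skeleton from a binary vessel mask.
import numpy as np
from skimage.morphology import skeletonize

def vessel_skeleton(binary_mask):
    """binary_mask: 2-D array with vessels labeled 1 and background 0."""
    return skeletonize(binary_mask.astype(bool)).astype(np.uint8)

toy = np.zeros((32, 32), np.uint8)
toy[14:18, 4:28] = 1                 # a thick horizontal "vessel"
print(vessel_skeleton(toy).sum())    # roughly the vessel length in pixels
```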
Collapse
|
41
|
Sindhusaranya B, Geetha M, Rajesh T, Kavitha M. Hybrid algorithm for retinal blood vessel segmentation using different pattern recognition techniques. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-221137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Blood vessel segmentation of the retina has become a necessary step in automatic disease identification and treatment planning in the field of ophthalmology. To identify disease properly, both thick and thin blood vessels should be distinguished clearly. Diagnosis would be simpler and easier only when the blood vessels are segmented accurately. Existing blood vessel segmentation methods do not cope well with the poor accuracy and low generalization problems caused by the complex blood vessel structure of the retina. In this study, a hybrid algorithm using binarization is proposed, exclusively for segmenting the vessels from a retinal image, to enhance the exactness and specificity of image segmentation. The proposed algorithm combines the advantages of pattern recognition techniques such as the matched filter (MF), the matched filter with first-order derivative of Gaussian (MF-FDOG), and the multi-scale line detector (MSLD), and is developed as a hybrid algorithm. The algorithm is validated with the openly accessible DRIVE dataset. Using Python with OpenCV, the algorithm attained an accuracy of 0.9602, a sensitivity of 0.6246, and a specificity of 0.9815 on the dataset. Simulation outcomes prove that the proposed hybrid algorithm segments the blood vessels of the retina accurately compared with existing methodologies.
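The MF-FDOG pairing named above can be sketched as below: the matched filter responds to the Gaussian-like vessel cross-section, while its first-order-derivative-of-Gaussian companion is used to raise the binarization threshold near step edges and lower it on true vessels. Sigma, kernel length, and the smoothing window are illustrative assumptions, and only a single orientation is shown.

```python
# Hypothetical sketch of MF and FDOG kernel profiles and their responses.
import numpy as np
import cv2

def mf_fdog_kernels(sigma=1.0, length=9):
    half = int(3 * sigma)
    xs = np.arange(-half, half + 1, dtype=np.float32)
    g = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    mf_profile = -g + g.mean()                 # zero-mean Gaussian (dark vessel)
    fdog_profile = -xs * g / (sigma ** 2)      # first derivative of Gaussian
    return (np.tile(mf_profile, (length, 1)),
            np.tile(fdog_profile, (length, 1)))

def mf_fdog_responses(gray, sigma=1.0):
    mf, fdog = mf_fdog_kernels(sigma)
    h = cv2.filter2D(gray.astype(np.float32), -1, mf)
    d = np.abs(cv2.filter2D(gray.astype(np.float32), -1, fdog))
    d_mean = cv2.blur(d, (15, 15))             # local mean of the FDOG response
    # In MF-FDOG, the threshold on h is increased where d_mean is large
    # (step edges) and decreased where it is small (true vessels).
    return h, d_mean
```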
Collapse
Affiliation(s)
- B. Sindhusaranya
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
| | - M.R. Geetha
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
| | - T. Rajesh
- Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, India
| | - M.R. Kavitha
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
| |
Collapse
|
42
|
Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1975-1989. [PMID: 35167444 DOI: 10.1109/tmi.2022.3151666] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Retinal vessel segmentation with deep learning technology is a crucial auxiliary method for clinicians to diagnose fundus diseases. However, the deep learning approaches inevitably lose the edge information, which contains spatial features of vessels while performing down-sampling, leading to the limited segmentation performance of fine blood vessels. Furthermore, the existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in the inefficient capture of the channel characterization. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve the edge of vessels in down-sampling. Secondly, we investigate a dynamic-channel graph convolutional network to map the image channels to the topological space and synthesize the features of each channel on the topological map, which solves the limitation of insufficient channel information utilization. Finally, we study an edge enhancement block, aiming to fuse the edge and spatial features in the dual encoder, which is beneficial to improve the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves more remarkable segmentation results against the other state-of-the-art methods, indicating its potential clinical application.
Collapse
|
43
|
Dong F, Wu D, Guo C, Zhang S, Yang B, Gong X. CRAUNet: A cascaded residual attention U-Net for retinal vessel segmentation. Comput Biol Med 2022; 147:105651. [DOI: 10.1016/j.compbiomed.2022.105651] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 05/15/2022] [Accepted: 05/16/2022] [Indexed: 11/25/2022]
|
44
|
Su Y, Cheng J, Cao G, Liu H. How to design a deep neural network for retinal vessel segmentation: an empirical study. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103761] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
45
|
Gao J, Huang Q, Gao Z, Chen S. Image Segmentation of Retinal Blood Vessels Based on Dual-Attention Multiscale Feature Fusion. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:8111883. [PMID: 35844462 PMCID: PMC9279073 DOI: 10.1155/2022/8111883] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 06/07/2022] [Accepted: 06/09/2022] [Indexed: 11/26/2022]
Abstract
Aiming at the problem of insufficient detail in retinal blood vessel segmentation by current research methods, this paper proposes a multiscale feature fusion residual network based on dual attention. Specifically, a feature fusion residual module with adaptive calibration of weight features is designed, which avoids gradient dispersion and network degradation while effectively extracting image details. The SA module and ECA module are used repeatedly in the backbone feature extraction network to adaptively select the focus position and generate more discriminative feature representations; at the same time, information from different levels of the network is fused, making use of both long-range and short-range features. This method aggregates low-level and high-level feature information, which effectively improves segmentation performance. The experimental results show that the method achieves classification accuracies of 0.9795 and 0.9785 on the STARE and DRIVE datasets, respectively, and its classification performance is better than that of current mainstream methods.
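An ECA-style channel attention block, one of the attention mechanisms named above, can be sketched in PyTorch as follows; the 1-D kernel size is an illustrative assumption.

```python
# Hypothetical sketch of efficient channel attention (ECA).
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)            # squeeze to (B, 1, C)
        y = self.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * y                              # channel re-weighting

print(ECA()(torch.randn(2, 16, 32, 32)).shape)    # torch.Size([2, 16, 32, 32])
```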
Collapse
Affiliation(s)
- Jixun Gao
- School of Computer, Henan University of Engineering, Zhengzhou 451191, China
| | - Quanzhen Huang
- School of Electrical Information Engineering, Henan University of Engineering, Zhengzhou 451191, China
| | - Zhendong Gao
- School of Electrical Information Engineering, Henan University of Engineering, Zhengzhou 451191, China
| | - Suxia Chen
- School of Computer, Henan University of Engineering, Zhengzhou 451191, China
| |
Collapse
|
46
|
Menolotto M, Giardini ME. The Use of Datasets of Bad Quality Images to Define Fundus Image Quality. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:504-507. [PMID: 36086638 DOI: 10.1109/embc48229.2022.9871614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Screening programs for sight-threatening diseases rely on the grading of a large number of digital retinal images. As automatic image grading technology evolves, there emerges a need to provide a rigorous definition of image quality with reference to the grading task. In this work, on two subsets of the CORD database of clinically gradable and matching non-gradable digital retinal images, a feature set based on statistical and task-specific morphological features has been identified. A machine learning technique has then been demonstrated to classify the images by their clinical gradability, offering a proxy for a rigorous definition of image quality.
Collapse
|
47
|
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? LIFE (BASEL, SWITZERLAND) 2022; 12:life12070973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. As all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel in the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
Collapse
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh; (M.I.A.K.); (M.T.H.)
| | - Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh;
| | - Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan;
| | - Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic;
| |
Collapse
|
48
|
Analysis of Vessel Segmentation Based on Various Enhancement Techniques for Improvement of Vessel Intensity Profile. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7086632. [PMID: 35800676 PMCID: PMC9256369 DOI: 10.1155/2022/7086632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 05/31/2022] [Accepted: 06/07/2022] [Indexed: 11/27/2022]
Abstract
It is vital to develop an appropriate prediction model that is carefully linked to measurable events such as clinical parameters and patient outcomes in order to analyze the severity of disease. Timely identification of retinal diseases is becoming more vital for preventing blindness among the young and adults. Investigation of blood vessels delivers preliminary information on the existence and treatment of glaucoma, retinopathy, and so on. During the analysis of diabetic retinopathy, one of the essential steps is to extract the retinal blood vessels accurately. This study presents an improved Gabor filter combined with various enhancement approaches. Enhancing certain features of degraded images can simplify image interpretation both for a human observer and for machine recognition. Thus, in this work, enhancement approaches such as gamma correction adaptively with distributed weight (GCADW), joint equalization of histogram (JEH), the homomorphic filter, the unsharp masking filter, the adaptive unsharp masking filter, and a particle swarm optimization (PSO)-based unsharp masking filter are taken into consideration. In this paper, an effort has been made to improve the performance of the Gabor filter by combining it with different enhancement methods to enhance the detection of blood vessels. The performance of all the suggested approaches is assessed on publicly available databases such as DRIVE and CHASE_DB1. The results of all the integrated enhancement techniques are analyzed, discussed, and compared. The best result is delivered by the PSO-based unsharp masking filter combined with the Gabor filter, with an accuracy of 0.9593 for the DRIVE database and 0.9685 for the CHASE_DB1 database. The results illustrate the robustness of the recommended model in automatic blood vessel segmentation, which makes it a possible clinical decision support tool in diabetic retinopathy diagnosis.
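An oriented Gabor filter bank of the kind combined with the enhancement methods above can be sketched with OpenCV; the kernel size, sigma, wavelength, and aspect ratio are illustrative assumptions rather than the paper's tuned values.

```python
# Hypothetical sketch: maximum response over an oriented Gabor filter bank.
import numpy as np
import cv2

def gabor_vessel_response(gray, n_angles=12, ksize=15, sigma=3.0,
                          lambd=8.0, aspect=0.5):
    gray = gray.astype(np.float32)
    best = np.full_like(gray, -np.inf)
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, aspect, 0)
        kern -= kern.mean()                       # suppress flat-background response
        best = np.maximum(best, cv2.filter2D(gray, -1, kern))
    return best
```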
Collapse
|
49
|
Multifilters-Based Unsupervised Method for Retinal Blood Vessel Segmentation. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136393] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Fundus imaging is one of the crucial methods that help ophthalmologists diagnose various eye diseases in modern medicine. An accurate vessel segmentation method can be a convenient tool to foresee and analyze fatal diseases, including hypertension and diabetes, which damage the retinal vessels' appearance. This work suggests an unsupervised approach for vessel segmentation from retinal images. The proposed method includes multiple steps. Firstly, the green channel is extracted from the colored retinal image and preprocessed utilizing contrast limited histogram equalization as well as fuzzy histogram-based equalization for contrast enhancement. To remove geometrical structures (macula, optic disk) and noise, top-hat morphological operations are used. A matched filter and a Gabor wavelet filter are applied to the resulting enhanced image, and the outputs from both are added to extract vessel pixels. The resulting image, with the blood vessels now noticeable, is binarized using the human visual system (HVS) approach. A final image of the segmented blood vessels is obtained by applying post-processing. The suggested method is assessed on two public datasets (DRIVE and STARE) and shows comparable results with regard to sensitivity, specificity, and accuracy. The results we achieved for sensitivity, specificity, and accuracy on the DRIVE database are 0.7271, 0.9798, and 0.9573, and on the STARE database they are 0.7164, 0.9760, and 0.9560, respectively, in less than 3.17 s on average per image.
Collapse
|
50
|
Ni J, Wu J, Elazab A, Tong J, Chen Z. DNL-Net: deformed non-local neural network for blood vessel segmentation. BMC Med Imaging 2022; 22:109. [PMID: 35668351 PMCID: PMC9169317 DOI: 10.1186/s12880-022-00836-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 05/31/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The non-local module has primarily been used in the literature to capture long-range dependencies. However, it suffers from prohibitive computational complexity and lacks interactions among positions across the channels. METHODS We present a deformed non-local neural network (DNL-Net) for medical image segmentation, which has two prominent components: a deformed non-local module (DNL) and multi-scale feature fusion. The former optimizes the structure of the non-local block (NL) and hence significantly reduces the problems of excessive computation and memory usage. The latter is derived from attention mechanisms to fuse the features of different levels and improve the ability to exchange information across channels. In addition, we introduce a residual squeeze-and-excitation pyramid pooling (RSEP) module, similar to spatial pyramid pooling, to effectively resample the features at different scales and improve the network receptive field. RESULTS The proposed method achieved 96.63% and 92.93% for the Dice coefficient and mean intersection over union, respectively, on the intracranial blood vessel dataset. DNL-Net also attained 86.64%, 96.10%, and 98.37% for sensitivity, accuracy, and area under the receiver operating characteristic curve, respectively, on the DRIVE dataset. CONCLUSIONS The overall performance of DNL-Net outperforms other current state-of-the-art vessel segmentation methods, which indicates that the proposed network is more suitable for blood vessel segmentation and is of great clinical significance.
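For reference, a basic embedded-Gaussian non-local block, the building block that DNL-Net reorganizes, can be sketched in PyTorch as below; the deformed variant and the RSEP module described above are not reproduced, and the channel reduction factor is an illustrative assumption.

```python
# Hypothetical sketch of a standard non-local (self-attention) block.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise position affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

print(NonLocalBlock(16)(torch.randn(1, 16, 16, 16)).shape)
```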
Collapse
Affiliation(s)
- Jiajia Ni
- College of Internet of Things Engineering, HoHai University, Changzhou, China
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Jianhuang Wu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
| | - Ahmed Elazab
- School of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Computer Science Department, Misr Higher Institute for Commerce and Computers, Mansoura, Egypt
| | - Jing Tong
- College of Internet of Things Engineering, HoHai University, Changzhou, China
| | - Zhengming Chen
- College of Internet of Things Engineering, HoHai University, Changzhou, China
| |
Collapse
|