1
Zhang X, Zhu Q, Hu T, Guo S, Bian G, Dong W, Hong R, Lin XL, Wu P, Zhou M, Yan Q, Mohi-Ud-Din G, Ai C, Li Z. Joint high-resolution feature learning and vessel-shape aware convolutions for efficient vessel segmentation. Comput Biol Med 2025; 191:109982. [PMID: 40253922] [DOI: 10.1016/j.compbiomed.2025.109982] [Received: 03/15/2024] [Revised: 02/28/2025] [Accepted: 03/03/2025] [Indexed: 04/22/2025]
Abstract
Clear imagery of retinal vessels is one of the critical pieces of evidence in the diagnosis and evaluation of specific diseases, given the vessels' sophisticated hierarchical topology and dense capillaries. In this work, we propose a new topology- and shape-aware model, the Multi-branch Vessel-shaped Convolution Network (MVCN), to adaptively learn high-resolution representations from retinal vessel imagery and thereby capture high-quality topology and shape information. Our pipeline involves two steps. The first is a Multiple High-resolution Ensemble Module (MHEM) that enhances the high-resolution characteristics of retinal vessel imagery by fusing its scale-invariant hierarchical topology. The second is a novel vessel-shaped convolution that captures the retinal vessel topology and separates it from unrelated fundus structures. Moreover, MVCN performs this separation via dynamic multi-sub-label generation using epistemic uncertainty, rather than manually splitting raw labels into definitive and uncertain vessels. Compared with existing methods, our method achieves state-of-the-art AUC values of 98.31%, 98.80%, 98.83%, and 98.65%, and state-of-the-art accuracies of 95.83%, 96.82%, 97.09%, and 96.66% on the DRIVE, CHASE_DB1, STARE, and HRF datasets, respectively. We also employ correctness, completeness, and quality metrics to evaluate skeletal similarity; on these metrics our method roughly doubles the scores of previous methods, demonstrating its effectiveness.
Affiliation(s)
- Xiang Zhang
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Qiang Zhu
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Tao Hu
- Northwestern Polytechnical University, China
- Song Guo
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Genqing Bian
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Wei Dong
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Rao Hong
- School of Software, Nanchang University, Nanchang, China
- Xia Ling Lin
- School of Software, Nanchang University, Nanchang, China
- Peng Wu
- School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Meili Zhou
- Shaanxi Provincial Key Lab of Big Data of Energy and Intelligence Processing, School of Physics and Electronic Information, Yan'an University, Yan'an, China
- Qingsen Yan
- School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an, China
- Chen Ai
- School of Software, Nanchang University, Nanchang, China
- Zhou Li
- Department of Basic Education and Research, Jiangxi Police College, Nanchang, China
2
Kankrale R, Kokare M. Artificial intelligence in retinal image analysis for hypertensive retinopathy diagnosis: a comprehensive review and perspective. Vis Comput Ind Biomed Art 2025; 8:11. [PMID: 40307650] [PMCID: PMC12044089] [DOI: 10.1186/s42492-025-00194-x] [Received: 07/30/2024] [Accepted: 03/27/2025] [Indexed: 05/02/2025]
Abstract
Hypertensive retinopathy (HR) occurs when the retinal vessels, which supply the photosensitive layer at the back of the eye, are injured owing to high blood pressure. Artificial intelligence (AI) in retinal image analysis (RIA) for HR diagnosis involves the use of advanced computational algorithms and machine learning (ML) strategies to automatically recognize and evaluate signs of HR in retinal images. This review aims to advance the field of HR diagnosis by investigating the latest ML and deep learning techniques and highlighting their efficacy and capability for early diagnosis and intervention. By analyzing recent advancements and emerging trends, this study seeks to inspire further innovation in automated RIA. In this context, AI shows significant potential for enhancing the accuracy, effectiveness, and consistency of HR diagnoses, eventually leading to better clinical results by enabling earlier intervention and precise management of the condition. Overall, the integration of AI into RIA represents a considerable step forward in the early identification and treatment of HR, offering substantial benefits to both healthcare providers and patients.
Affiliation(s)
- Rajendra Kankrale
- Department of Computer Science and Engineering, Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra 431606, India.
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, Maharashtra 431606, India
3
Yu J, Chen C, Lu M, Fang X, Li J, Zhu M, Li N, Yuan X, Han Y, Wang L, Lu J, Shao C, Bian Y. Computed tomography-based fully automated artificial intelligence model to predict extrapancreatic perineural invasion in pancreatic ductal adenocarcinoma. Int J Surg 2024; 110:7656-7670. [PMID: 39806736] [PMCID: PMC11634086] [DOI: 10.1097/js9.0000000000001604] [Received: 03/11/2024] [Accepted: 04/29/2024] [Indexed: 01/16/2025]
Abstract
BACKGROUND: Extrapancreatic perineural invasion (EPNI) increases the risk of postoperative recurrence in pancreatic ductal adenocarcinoma (PDAC). This study aimed to develop and validate a computed tomography (CT)-based, fully automated preoperative artificial intelligence (AI) model to predict EPNI in patients with PDAC.
METHODS: The authors retrospectively enrolled 1065 patients from two Shanghai hospitals between June 2014 and April 2023. Patients were split into training (n=497), internal validation (n=212), internal test (n=180), and external test (n=176) sets. The AI model used perivascular space and tumor contact for EPNI detection. The authors evaluated the AI model's performance based on its discrimination. Kaplan-Meier curves, log-rank tests, and Cox regression were used for survival analysis.
RESULTS: The AI model demonstrated superior diagnostic performance for EPNI with 1-pixel expansion. The areas under the curve in the training, validation, internal test, and external test sets were 0.87, 0.88, 0.82, and 0.83, respectively. The log-rank test revealed significantly longer survival in the AI-predicted EPNI-negative group than in the AI-predicted EPNI-positive group in the training, validation, and internal test sets (P<0.05). Moreover, the AI model exhibited exceptional prognostic stratification in early PDAC and improved assessment of the effectiveness of neoadjuvant therapy.
CONCLUSION: The AI model presents a robust modality for EPNI diagnosis, risk stratification, and neoadjuvant treatment guidance in PDAC, and can be applied to guide personalized precision therapy.
Affiliation(s)
- Jieyu Yu
- Department of Radiology, Changhai Hospital
- Mingzhi Lu
- Department of Radiation Oncology, Changhai Hospital
- Xu Fang
- Department of Radiology, Changhai Hospital
- Jing Li
- Department of Radiology, Changhai Hospital
- Na Li
- Department of Radiology, Changhai Hospital
- Yaxing Han
- Department of Radiology, No. 411 Hospital, Shanghai, China
- Li Wang
- Department of Radiology, Changhai Hospital
- Yun Bian
- Department of Radiology, Changhai Hospital
4
Lv N, Xu L, Chen Y, Sun W, Tian J, Zhang S. TCDDU-Net: combining transformer and convolutional dual-path decoding U-Net for retinal vessel segmentation. Sci Rep 2024; 14:25978. [PMID: 39472606] [PMCID: PMC11522399] [DOI: 10.1038/s41598-024-77464-w] [Received: 03/06/2024] [Accepted: 10/22/2024] [Indexed: 11/02/2024]
Abstract
Accurate segmentation of retinal blood vessels is crucial for enhancing diagnostic efficiency and preventing disease progression. However, the small size and complex structure of retinal blood vessels, coupled with low contrast in the corresponding fundus images, pose significant challenges for this task. We propose a novel approach for retinal vessel segmentation that combines a transformer and a convolutional dual-path decoding U-Net (TCDDU-Net). We propose the selective dense connection swin transformer block, which converts the input feature map into patches, introduces MLPs to generate probabilities, and performs selective fusion at different stages. This structure forms a dense connection framework, enabling the capture of long-distance dependencies and effective fusion of features across different stages. The subsequent stage involves the design of the background decoder, which uses deformable convolution to learn the background information of retinal vessels by treating it as a segmentation object. This is then combined with the foreground decoder to form a dual-path decoding U-Net. Finally, the foreground segmentation results and the processed background segmentation results are fused to obtain the final retinal vessel segmentation map. To evaluate the effectiveness of our method, we performed experiments on the DRIVE, STARE, and CHASE datasets for retinal vessel segmentation. Experimental results show that the segmentation accuracies of our algorithm are 96.98%, 97.40%, and 97.23%, and the AUC values are 98.68%, 98.56%, and 98.50%, respectively. In addition, we evaluated our method using the F1 score, specificity, and sensitivity metrics. A comparative analysis shows that the proposed TCDDU-Net effectively improves retinal vessel segmentation performance and achieves impressive results on multiple datasets compared with existing methods.
Affiliation(s)
- Nianzu Lv
- College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China
- Li Xu
- College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China
- Yuling Chen
- School of Information Engineering, Mianyang Teachers' College, No. 166 Mianxing West Road, High Tech Zone, Mianyang, 621000, Sichuan, China
- Wei Sun
- CISDI Engineering Co., Ltd., Chongqing, 401120, China
- Jiya Tian
- College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China
- Shuping Zhang
- College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China
5
Xu H, Wu Y. G2ViT: Graph Neural Network-Guided Vision Transformer Enhanced Network for retinal vessel and coronary angiograph segmentation. Neural Netw 2024; 176:106356. [PMID: 38723311] [DOI: 10.1016/j.neunet.2024.106356] [Received: 10/11/2023] [Revised: 04/26/2024] [Accepted: 04/29/2024] [Indexed: 06/17/2024]
Abstract
Blood vessel segmentation is a crucial stage in extracting the morphological characteristics of vessels for the clinical diagnosis of fundus and coronary artery disease. However, traditional convolutional neural networks (CNNs) are confined to learning local vessel features, making it challenging to capture graph-structural information and to perceive the global context of vessels. Therefore, we propose a novel graph neural network-guided vision transformer enhanced network (G2ViT) for vessel segmentation. G2ViT skillfully orchestrates the convolutional neural network, graph neural network, and vision transformer to enhance comprehension of the entire graphical structure of blood vessels. To achieve deeper insights into the global graph structure and higher-level global context cognizance, we investigate a graph neural network-guided vision transformer module. This module constructs a graph-structured representation in an unprecedented manner, using the high-level features extracted by CNNs for graph reasoning. To increase the receptive field while ensuring minimal loss of edge information, G2ViT introduces a multi-scale edge feature attention module (MEFA), leveraging dilated convolutions with different dilation rates and the Sobel edge detection algorithm to obtain multi-scale edge information of vessels. To avoid critical information loss during upsampling and downsampling, we design a multi-level feature fusion module (MLF2) to fuse complementary information between coarse and fine features. Experiments on retinal vessel datasets (DRIVE, STARE, CHASE_DB1, and HRF) and coronary angiography datasets (DCA1 and CHUAC) indicate that G2ViT excels in robustness, generality, and applicability. Furthermore, it has acceptable inference time and computational complexity and presents a new solution for blood vessel segmentation.
Affiliation(s)
- Hao Xu
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China; College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yun Wu
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China; College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
6
Matloob Abbasi M, Iqbal S, Aurangzeb K, Alhussein M, Khan TM. LMBiS-Net: A lightweight bidirectional skip connection based multipath CNN for retinal blood vessel segmentation. Sci Rep 2024; 14:15219. [PMID: 38956117] [PMCID: PMC11219784] [DOI: 10.1038/s41598-024-63496-9] [Received: 10/31/2023] [Accepted: 05/29/2024] [Indexed: 07/04/2024]
Abstract
Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. In addition, we have optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap. This optimisation significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy (acc) scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting both robustness and generalisability across various retinal image datasets. This combination of qualities makes LMBiS-Net a promising tool for various clinical applications.
Affiliation(s)
- Mufassir Matloob Abbasi
- Department of Electrical Engineering, Abasyn University Islamabad Campus (AUIC), Islamabad, 44000, Pakistan
- Shahzaib Iqbal
- Department of Electrical Engineering, Abasyn University Islamabad Campus (AUIC), Islamabad, 44000, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, P. O. Box 51178, 11543, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, P. O. Box 51178, 11543, Saudi Arabia
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
7
Li C, Mao Y, Liang S, Li J, Wang Y, Guo Y. Deep causal learning for pancreatic cancer segmentation in CT sequences. Neural Netw 2024; 175:106294. [PMID: 38657562] [DOI: 10.1016/j.neunet.2024.106294] [Received: 08/04/2023] [Revised: 03/19/2024] [Accepted: 04/05/2024] [Indexed: 04/26/2024]
Abstract
Segmenting the irregular pancreas and inconspicuous tumor simultaneously is an essential but challenging step in diagnosing pancreatic cancer. Current deep-learning (DL) methods usually segment the pancreas or tumor independently using mixed image features, which are disrupted by surrounding complex and low-contrast background tissues. Here, we propose a deep causal learning framework named CausegNet for pancreas and tumor co-segmentation in 3D CT sequences. Specifically, a causality-aware module and a counterfactual loss are employed to enhance the DL network's comprehension of the anatomical causal relationship between the foreground elements (pancreas and tumor) and the background. By integrating causality into CausegNet, the network focuses solely on extracting intrinsic foreground causal features while effectively learning the potential causality between the pancreas and the tumor. Then, based on the extracted causal features, CausegNet applies counterfactual inference to significantly reduce background interference and sequentially search for the pancreas and tumor in the foreground. Consequently, our approach can handle the deformable pancreas and obscure tumors, resulting in superior co-segmentation performance on both public and real clinical datasets, achieving the highest pancreas/tumor Dice coefficients of 86.67%/84.28%. The visualized features and anti-noise experiments further demonstrate the causal interpretability and stability of our method. Furthermore, compared to experienced clinicians, our approach improves the accuracy and sensitivity of the downstream pancreatic cancer risk assessment task by 12.50% and 50.00%, respectively, indicating promising clinical applications.
Affiliation(s)
- Chengkang Li
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yishen Mao
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Shuyu Liang
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Ji Li
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yuanyuan Wang
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yi Guo
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
8
Iqbal S, Khan TM, Naqvi SS, Naveed A, Usman M, Khan HA, Razzak I. LDMRes-Net: A Lightweight Neural Network for Efficient Medical Image Segmentation on IoT and Edge Devices. IEEE J Biomed Health Inform 2024; 28:3860-3871. [PMID: 37938951] [DOI: 10.1109/jbhi.2023.3331278] [Indexed: 11/10/2023]
Abstract
In this study, we propose LDMRes-Net, a lightweight dual multiscale residual block-based convolutional neural network tailored for medical image segmentation on IoT and edge platforms. Conventional U-Net-based models face challenges in meeting the speed and efficiency demands of real-time clinical applications, such as disease monitoring, radiation therapy, and image-guided surgery; LDMRes-Net is specifically designed to overcome these difficulties, with a remarkably low number of learnable parameters (0.072 M) that makes it highly suitable for resource-constrained devices. The model's key innovation lies in its dual multiscale residual block architecture, which enables the extraction of refined features on multiple scales, enhancing overall segmentation performance. To further optimize efficiency, the number of filters is carefully selected to prevent overlap, reduce training time, and improve computational efficiency. The study includes comprehensive evaluations, focusing on the segmentation of retinal vessels and hard exudates, which is crucial for diagnosis and treatment in ophthalmology. The results demonstrate the robustness, generalizability, and high segmentation accuracy of LDMRes-Net, positioning it as an efficient tool for accurate and rapid medical image segmentation in diverse clinical applications, particularly on IoT and edge platforms. Such advances hold significant promise for improving healthcare outcomes and enabling real-time medical image analysis in resource-limited settings.
9
Jiang Y, Chen J, Yan W, Zhang Z, Qiao H, Wang M. MAG-Net: Multi-fusion network with grouped attention for retinal vessel segmentation. Math Biosci Eng 2024; 21:1938-1958. [PMID: 38454669] [DOI: 10.3934/mbe.2024086] [Indexed: 03/09/2024]
Abstract
Retinal vessel segmentation plays a vital role in the clinical diagnosis of ophthalmic diseases. Despite convolutional neural networks (CNNs) excelling in this task, challenges persist, such as restricted receptive fields and information loss from downsampling. To address these issues, we propose a new multi-fusion network with grouped attention (MAG-Net). First, we introduce a hybrid convolutional fusion module instead of the original encoding block to learn more feature information by expanding the receptive field. Additionally, the grouped attention enhancement module uses high-level features to guide low-level features and facilitates detailed information transmission through skip connections. Finally, the multi-scale feature fusion module aggregates features at different scales, effectively reducing information loss during decoder upsampling. To evaluate the performance of MAG-Net, we conducted experiments on three widely used retinal datasets: DRIVE, CHASE and STARE. The results demonstrate remarkable segmentation accuracy, specificity and Dice coefficients. Specifically, MAG-Net achieved segmentation accuracy values of 0.9708, 0.9773 and 0.9743, specificity values of 0.9836, 0.9875 and 0.9906, and Dice coefficients of 0.8576, 0.8069 and 0.8228, respectively. The experimental results demonstrate that our method outperforms existing segmentation methods.
Affiliation(s)
- Yun Jiang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Jie Chen
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Wei Yan
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Zequn Zhang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Hao Qiao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Meiqi Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
10
Abdushkour H, Soomro TA, Ali A, Ali Jandan F, Jelinek H, Memon F, Althobiani F, Mohammed Ghonaim S, Irfan M. Enhancing fine retinal vessel segmentation: Morphological reconstruction and double thresholds filtering strategy. PLoS One 2023; 18:e0288792. [PMID: 37467245] [DOI: 10.1371/journal.pone.0288792] [Received: 04/11/2023] [Accepted: 07/05/2023] [Indexed: 07/21/2023]
Abstract
Eye diseases such as diabetic retinopathy are progressive, producing various changes in the retinal vessels that make the disease difficult to analyze for future treatment. Many computerized algorithms have been implemented for retinal vessel segmentation, but tiny vessels are dropped, degrading overall performance. This work introduces new image-processing techniques, including enhancement filters, coherence filters, and binary thresholding, to handle the various problems of color retinal fundus images and achieve a well-segmented vessel image, and the proposed algorithm improves on existing work. Our technique incorporates morphological operations to address the central light-reflex issue. Additionally, to resolve insufficient and varying contrast, it employs homomorphic methods and Wiener filtering. Coherence filters address the coherence of the retinal vessels, and a double-thresholding technique is then applied with image reconstruction to achieve a correctly segmented vessel image. Evaluated on the STARE and DRIVE datasets, our technique achieves an accuracy of about 0.96 and a sensitivity of 0.81. This performance demonstrates the method's capability to support ophthalmologists in diagnosing ocular abnormalities and recommending further treatment.
Affiliation(s)
- Hesham Abdushkour
- Nautical Science Department, Faculty of Maritime Studies, King Abdul Aziz University, Jeddah, Saudi Arabia
- Toufique A Soomro
- Department of Electronic Engineering, Quaid-e-Awam University of Engineering, Science and Technology, Larkana Campus, Sukkur, Pakistan
- Ahmed Ali
- Electrical Engineering Department, Sukkur IBA University, Sukkur, Pakistan
- Fayyaz Ali Jandan
- Electrical Engineering Department, Quaid-e-Awam University of Engineering, Science and Technology, Larkana Campus, Sukkur, Pakistan
- Herbert Jelinek
- Health Engineering Innovation Center and Biotechnology Center, Khalifa University, Abu Dhabi, UAE
- Farida Memon
- Department of Electronic Engineering, Mehran University, Jamshoro, Pakistan
- Faisal Althobiani
- Marine Engineering Department, Faculty of Maritime Studies, King Abdul Aziz University, Jeddah, Saudi Arabia
- Saleh Mohammed Ghonaim
- Marine Engineering Department, Faculty of Maritime Studies, King Abdul Aziz University, Jeddah, Saudi Arabia
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran, Saudi Arabia