1.
Bhimavarapu U. Retina Blood Vessels Segmentation and Classification with the Multi-featured Approach. Journal of Imaging Informatics in Medicine 2024. [PMID: 39117940] [DOI: 10.1007/s10278-024-01219-2]
Abstract
Segmenting retinal blood vessels poses a significant challenge due to the irregularities inherent in small vessels. The complexity arises from the intricate task of effectively merging features at multiple levels, coupled with potential spatial information loss during successive down-sampling steps. This particularly affects the identification of small and faintly contrasting vessels. To address these challenges, we present a model tailored for automated arterial and venous (A/V) classification, complementing blood vessel segmentation. This paper presents an advanced methodology for segmenting and classifying retinal vessels using a series of sophisticated pre-processing and feature extraction techniques. The ensemble filter approach, incorporating bilateral and Laplacian edge detectors, enhances image contrast and preserves edges. The proposed algorithm further refines the image by generating an orientation map. During the vessel extraction step, a fully convolutional network processes the input image to create a detailed vessel map, enhanced by attention operations that improve the model's perception and resilience. The encoder extracts semantic features, while the attention module refines the blood vessel depiction, resulting in highly accurate segmentation outcomes. The model was verified on the STARE dataset, which includes 400 images; the DRIVE dataset with 40 images; the HRF dataset with 45 images; and the INSPIRE-AVR dataset containing 40 images. The proposed model demonstrated superior performance across all datasets, achieving an accuracy of 97.5% on the DRIVE dataset, 99.25% on STARE, 98.33% on INSPIRE-AVR, and 98.67% on HRF. These results highlight the method's effectiveness in accurately segmenting and classifying retinal vessels.
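The ensemble pre-processing above combines a bilateral filter with a Laplacian edge detector. As a minimal illustration of the Laplacian half of that pairing (the bilateral filter and orientation map are omitted, and this is a generic sketch rather than the paper's exact pipeline), a 3x3 Laplacian can be computed and subtracted from the image to sharpen vessel edges:

```python
# Generic Laplacian edge enhancement on a 2D grayscale image (list of lists).
# This illustrates the edge-preservation idea only; it is not the authors'
# implementation, which also uses a bilateral filter and an orientation map.

def laplacian(img):
    """Apply a 3x3 Laplacian kernel, treating out-of-bounds pixels as zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = 0.0
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    neighbours += img[ny][nx]
            out[y][x] = neighbours - 4.0 * img[y][x]
    return out

def sharpen(img, weight=1.0):
    """Subtract the Laplacian to emphasise edges (unsharp-style enhancement)."""
    lap = laplacian(img)
    return [[img[y][x] - weight * lap[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# A flat region has zero Laplacian response; an isolated bright pixel is boosted.
flat = [[5.0] * 3 for _ in range(3)]
spot = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
print(laplacian(flat)[1][1])  # interior of a flat region -> 0.0
print(sharpen(spot)[1][1])    # bright spot gets brighter -> 5.0
```

The Laplacian responds only where intensity changes, so subtracting it boosts thin, low-contrast structures such as small vessels while leaving uniform background untouched.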
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India.
2.
Shi D, He S, Yang J, Zheng Y, He M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. Ophthalmology Science 2024; 4:100363. [PMID: 37868792] [PMCID: PMC10585631] [DOI: 10.1016/j.xops.2023.100363]
Abstract
Purpose To perform one-shot retinal artery and vein segmentation with cross-modality artery-vein (AV) soft-label pretraining. Design Cross-sectional study. Subjects The study included 6479 color fundus photography (CFP) and arterial-venous fundus fluorescein angiography (FFA) pairs from 1964 participants for pretraining, and 6 AV segmentation data sets with various image sources (RITE, HRF, LES-AV, AV-WIDE, PortableAV, and DRSplusAV) for one-shot finetuning and testing. Methods We structurally matched the arterial and venous phases of FFA with CFP; the AV soft labels were automatically generated from the fluorescein intensity difference between the arterial- and venous-phase FFA images, and the soft labels were then used to train a generative adversarial network to generate AV soft segmentations from CFP images. We then finetuned the pretrained model to perform AV segmentation using only one image from each of the AV segmentation data sets and tested it on the remainder. To investigate the effect and reliability of one-shot finetuning, we conducted experiments without finetuning and by iteratively finetuning the pretrained model on a different single image for each data set under the same experimental setting, testing the models on the remaining images. Main Outcome Measures The AV segmentation was assessed by area under the receiver operating characteristic curve (AUC), accuracy, Dice score, sensitivity, and specificity. Results After the FFA-AV soft-label pretraining, our method required only one exemplar image from each camera or modality and achieved performance similar to full-data training, with AUC ranging from 0.901 to 0.971, accuracy from 0.959 to 0.980, Dice score from 0.585 to 0.773, sensitivity from 0.574 to 0.763, and specificity from 0.981 to 0.991. Compared with no finetuning, the segmentation performance improved after one-shot finetuning.
When finetuned on different images in each data set, the standard deviation of the segmentation results across models ranged from 0.001 to 0.10. Conclusions This study presents the first one-shot approach to retinal artery and vein segmentation. The proposed labeling method is time-saving and efficient, demonstrating a promising direction for retinal-vessel segmentation and enabling the potential for widespread application. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
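The soft labels above exploit the fact that fluorescein reaches arteries before veins, so arteries are brighter in the arterial-phase image and veins in the venous-phase image. A hypothetical sketch of that idea follows; the authors' exact normalisation is not given in the abstract, so the linear mapping below is an assumption:

```python
# Hypothetical per-pixel AV soft label from the arterial/venous-phase
# intensity difference. The linear rescaling is an illustrative assumption,
# not the paper's published formula.

def av_soft_label(arterial, venous):
    """Map an (arterial, venous) intensity pair in [0, 1] to a soft label in
    [0, 1], where 1.0 = certainly artery and 0.0 = certainly vein."""
    diff = arterial - venous      # in [-1, 1]
    return (diff + 1.0) / 2.0     # rescale to [0, 1]

print(av_soft_label(0.9, 0.1))  # bright in arterial phase -> ~0.9 (artery-like)
print(av_soft_label(0.1, 0.9))  # bright in venous phase  -> ~0.1 (vein-like)
print(av_soft_label(0.5, 0.5))  # equal intensities       -> 0.5 (uncertain)
```

A soft target like this, rather than a hard artery/vein decision, is what lets the generative network be trained without manual pixel annotations.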
Affiliation(s)
- Danli Shi
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jiancheng Yang
- Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Mingguang He
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
3.
Hu J, Qiu L, Wang H, Zhang J. Semi-supervised point consistency network for retinal artery/vein classification. Comput Biol Med 2024; 168:107633. [PMID: 37992471] [DOI: 10.1016/j.compbiomed.2023.107633]
Abstract
Recent deep learning methods based on convolutional neural networks (CNNs) have advanced medical image analysis and expedited automatic retinal artery/vein (A/V) classification. However, these CNN-based approaches face two challenges: (1) specific tubular structures and subtle variations in appearance, contrast, and geometry, which tend to be ignored as network depth increases; and (2) limited well-labeled data for supervised segmentation of retinal vessels, which may hinder the effectiveness of deep learning methods. To address these issues, we propose a novel semi-supervised point consistency network (SPC-Net) for retinal A/V classification. SPC-Net consists of an A/V classification (AVC) module and a multi-class point consistency (MPC) module. The AVC module adopts an encoder-decoder segmentation network to generate the A/V prediction probability map for supervised learning. The MPC module introduces point set representations to adaptively generate point set classification maps of the arteriovenous skeleton, whose prediction flexibility and consistency (i.e., point consistency) effectively alleviate arteriovenous confusion. In addition, we propose a consistency regularization between the predicted A/V classification probability maps and the point set representation maps for unlabeled data, exploring the inherent segmentation perturbation of the point consistency and reducing the need for annotated data. We validate our method on two typical public datasets (DRIVE, HRF) and a private dataset (TR280) with different resolutions. Extensive qualitative and quantitative experimental results demonstrate the effectiveness of the proposed method in both supervised and semi-supervised settings.
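The consistency regularization described above penalises disagreement between two predictions of the same unlabeled image. A minimal sketch of one plausible form, mean squared disagreement between the dense probability map and the point-set prediction sampled at the same pixels, is shown below; the paper's exact loss is not specified in the abstract:

```python
# Illustrative consistency-regularisation term for unlabeled images:
# the mean squared disagreement between two per-pixel probability lists.
# This MSE form is an assumption made for illustration, not the paper's loss.

def consistency_loss(dense_probs, point_probs):
    """Mean squared disagreement between two per-pixel probability lists."""
    assert len(dense_probs) == len(point_probs)
    n = len(dense_probs)
    return sum((d - p) ** 2 for d, p in zip(dense_probs, point_probs)) / n

# Identical predictions incur no penalty; disagreement is penalised.
print(consistency_loss([0.2, 0.8, 0.5], [0.2, 0.8, 0.5]))  # -> 0.0
print(consistency_loss([1.0, 0.0], [0.0, 1.0]))            # -> 1.0
```

Because the term needs no ground-truth labels, it can be minimised on unlabeled images, which is what reduces the annotation requirement.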
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Linwei Qiu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, 100083, China.
4.
Yi J, Chen C. Multi-Task Segmentation and Classification Network for Artery/Vein Classification in Retina Fundus. Entropy (Basel) 2023; 25:1148. [PMID: 37628178] [PMCID: PMC10453284] [DOI: 10.3390/e25081148]
Abstract
Automatic classification of arteries and veins (A/V) in fundus images has gained considerable attention from researchers due to its potential to detect vascular abnormalities and facilitate the diagnosis of some systemic diseases. However, the variability in vessel structures and the marginal distinction between arteries and veins pose challenges to accurate A/V classification. This paper proposes a novel Multi-task Segmentation and Classification Network (MSC-Net) that utilizes the vessel features extracted by a specific module to improve A/V classification and alleviate the aforementioned limitations. The proposed method introduces three modules to enhance the performance of A/V classification: a Multi-scale Vessel Extraction (MVE) module, which distinguishes between vessel pixels and background using the semantics of vessels, a Multi-structure A/V Extraction (MAE) module that classifies arteries and veins by combining the original image with the vessel features produced by the MVE module, and a Multi-source Feature Integration (MFI) module that merges the outputs from the former two modules to obtain the final A/V classification results. Extensive empirical experiments verify the high performance of the proposed MSC-Net for retinal A/V classification over state-of-the-art methods on several public datasets.
Affiliation(s)
- Chouyu Chen
- Department of Computer Science and Technology, Beijing University of Civil Engineering and Architecture, Beijing 100044, China;
5.
End-to-End Automatic Classification of Retinal Vessel Based on Generative Adversarial Networks with Improved U-Net. Diagnostics (Basel) 2023; 13:1148. [PMID: 36980456] [PMCID: PMC10047448] [DOI: 10.3390/diagnostics13061148]
Abstract
The retinal vessels are the only vessels in the human body that can be observed directly by non-invasive imaging techniques. Retinal vessel morphology and structure are important objects of concern for physicians in the early diagnosis and treatment of related diseases. The classification of retinal vessels has important guiding significance in the basic stage of diagnostic treatment. This paper proposes a novel method based on generative adversarial networks with an improved U-Net, which achieves synchronous automatic segmentation and classification of blood vessels in a single end-to-end network. The proposed method avoids the classification stage's dependence on prior segmentation results, and it accurately classifies arteries and veins while also identifying arteriovenous crossings. The validity of the proposed method is evaluated on the RITE dataset: the accuracy of comprehensive image classification reaches 96.87%, and the sensitivity and specificity of arteriovenous classification reach 91.78% and 97.25%, respectively. The results verify the effectiveness of the proposed method and show competitive classification performance.
6.
Toptaş B, Hanbay D. Separation of arteries and veins in retinal fundus images with a new CNN architecture. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2151066]
Affiliation(s)
- Buket Toptaş
- Computer Engineering Department, Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Davut Hanbay
- Computer Engineering Department, Engineering Faculty, Inonu University, Malatya, Turkey
7.
Hu J, Wang H, Wu G, Cao Z, Mou L, Zhao Y, Zhang J. Multi-scale Interactive Network with Artery/Vein Discriminator for Retinal Vessel Classification. IEEE J Biomed Health Inform 2022; 26:3896-3905. [PMID: 35394918] [DOI: 10.1109/jbhi.2022.3165867]
Abstract
Automatic classification of retinal arteries and veins plays an important role in assisting clinicians to diagnose cardiovascular and eye-related diseases. However, due to the high degree of anatomical variation across the population and the presence of inconsistent labels arising from the subjective judgment of annotators in available training data, most existing methods suffer from blood vessel discontinuity and arteriovenous confusion, so the artery/vein (A/V) classification task still faces great challenges. In this work, we propose a multi-scale interactive network with an A/V discriminator for retinal artery and vein recognition, which reduces arteriovenous confusion and alleviates the disturbance of noisy labels. A multi-scale interaction (MI) module is designed in the encoder to realize cross-space multi-scale feature interaction in fundus images, effectively integrating high-level and low-level context information. In particular, we design an ingenious A/V discriminator (AVD) that utilizes the independent and shared information between arteries and veins, combined with a topology loss, to further strengthen the model's ability to resolve arteriovenous confusion. In addition, we adopt a sample re-weighting (SW) strategy to effectively alleviate the disturbance from data labeling errors. The proposed model is verified on three publicly available fundus image datasets (AV-DRIVE, HRF, LES-AV) and a private dataset, achieving accuracies of 97.47%, 96.91%, 97.79%, and 98.18%, respectively. Extensive experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art methods for A/V classification. To address the problem of training data scarcity, we publicly release 100 fundus images with A/V annotations to promote relevant research in the community.
8.
TW-GAN: Topology and width aware GAN for retinal artery/vein classification. Med Image Anal 2022; 77:102340. [DOI: 10.1016/j.media.2021.102340]
9.
Karlsson RA, Hardarson SH. Artery vein classification in fundus images using serially connected U-Nets. Computer Methods and Programs in Biomedicine 2022; 216:106650. [PMID: 35139461] [DOI: 10.1016/j.cmpb.2022.106650]
Abstract
BACKGROUND AND OBJECTIVE Retinal vessels provide valuable information when diagnosing or monitoring various diseases affecting the retina and disorders affecting the cardiovascular or central nervous systems. Automated retinal vessel segmentation can assist clinicians and researchers when interpreting retinal images. As there are differences in both the structure and function of retinal arteries and veins, separating these two vessel types is essential. As manual segmentation of retinal images is impractical, an accurate automated method is required. METHODS In this paper, we propose a convolutional neural network based on serially connected U-nets that simultaneously segment the retinal vessels and classify them as arteries or veins. Detailed ablation experiments are performed to understand how the major components contribute to the overall system's performance. The proposed method is trained and tested on the public DRIVE and HRF datasets and a proprietary dataset. RESULTS The proposed convolutional neural network achieves an F1 score of 0.829 for vessel segmentation on the DRIVE dataset and an F1 score of 0.814 on the HRF dataset, consistent with the state-of-the-art methods on the former and outperforming the state-of-the-art on the latter. On the task of classifying the vessels into arteries and veins, the method achieves an F1 score of 0.952 for the DRIVE dataset exceeding the state-of-the-art performance. On the HRF dataset, the method achieves an F1 score of 0.966, which is consistent with the state-of-the-art. CONCLUSIONS The proposed method demonstrates competitive performance on both vessel segmentation and artery vein classification compared with state-of-the-art methods. The method outperforms human experts on the DRIVE dataset when classifying retinal images into arteries, veins, and background simultaneously. 
The method segments the vasculature on the proprietary dataset and classifies the retinal vessels accurately, even on challenging pathological images. The ablation experiments, which utilize repeated runs for each configuration, provide statistical evidence for the appropriateness of the proposed solution. Connecting several simple U-nets significantly improved artery-vein classification performance. The proposed way of serially connecting base networks is not limited to the proposed base network or to segmenting the retinal vessels and could be applied to other tasks.
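The segmentation and classification results above are reported as F1 scores. For reference, a minimal sketch of how a pixel-wise F1 score is computed from binary predictions and ground truth:

```python
# Pixel-wise F1 score for a binary segmentation, the metric reported above.
# Standard definition; not specific to this paper's implementation.

def f1_score(pred, truth):
    """F1 = harmonic mean of precision and recall over binary pixel labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 3 true positives, 1 false positive, 1 false negative:
pred  = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
print(f1_score(pred, truth))  # precision 0.75, recall 0.75 -> 0.75
```

Because F1 ignores true negatives, it is a more informative metric than accuracy for retinal vessels, which occupy only a small fraction of the image.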
Affiliation(s)
- Robert Arnar Karlsson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland; Faculty of Electrical and Computer Engineering at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
- Sveinn Hakon Hardarson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
10.
Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022; 26:3272-3283. [PMID: 35349464] [DOI: 10.1109/jbhi.2022.3163352]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
11.
Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif Intell Med 2021; 118:102116. [PMID: 34412839] [DOI: 10.1016/j.artmed.2021.102116]
Abstract
BACKGROUND AND OBJECTIVES The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approached these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation, because the classification results depend heavily on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS We propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration handles vessel crossings intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS The ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in artery/vein classification while significantly improving the vascular segmentation. CONCLUSIONS The proposed multi-segmentation method detects more vessels and better segments the different structures, while achieving competitive classification performance. In these terms, our approach outperforms various reference works. Moreover, in contrast with previous approaches, the proposed method directly detects vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
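The three-segmentation decomposition described above lets a crossing pixel belong to both the artery mask and the vein mask, so neither vessel is interrupted where they cross. A minimal sketch of that labelling scheme (the label names below are illustrative, not the paper's):

```python
# Decompose an exclusive per-pixel labelling into three overlapping binary
# masks (artery, vein, whole vessel tree), so that a crossing pixel is
# positive in both the artery and the vein mask. Label names are illustrative.

def decompose(labels):
    """labels: per-pixel strings from {'bg', 'artery', 'vein', 'crossing'}.
    Returns three parallel binary masks: artery, vein, vessel tree."""
    artery = [1 if lab in ('artery', 'crossing') else 0 for lab in labels]
    vein   = [1 if lab in ('vein', 'crossing') else 0 for lab in labels]
    vessel = [1 if lab != 'bg' else 0 for lab in labels]
    return artery, vein, vessel

row = ['bg', 'artery', 'crossing', 'vein', 'bg']
artery, vein, vessel = decompose(row)
print(artery)  # [0, 1, 1, 0, 0] - the artery continues through the crossing
print(vein)    # [0, 0, 1, 1, 0] - the vein continues through the crossing
print(vessel)  # [0, 1, 1, 1, 0]
```

With an exclusive three-class labelling, a crossing pixel would have to be assigned to exactly one vessel, breaking the other; the overlapping masks avoid that forced choice.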