1. Bai S, Deng Z, Yang J, Gong Z, Gao W, Shao L, Li F, Wei W, Ma L. FTSNet: Fundus Tumor Segmentation Network on Multiple Scales Guided by Classification Results and Prompts. Bioengineering (Basel). 2024;11:950. PMID: 39329692; PMCID: PMC11429472; DOI: 10.3390/bioengineering11090950.
Abstract
The segmentation of fundus tumors is critical for ophthalmic diagnosis and treatment, yet it presents unique challenges due to the variability in lesion size and shape. Our study introduces the Fundus Tumor Segmentation Network (FTSNet), a novel segmentation network that addresses these challenges by leveraging classification results and prompt learning. Our key innovations are the multiscale feature extractor and the dynamic prompt head. The multiscale feature extractor draws feature information from the original image across disparate scales, which is essential for capturing the subtle details and patterns embedded in the image at multiple levels of granularity. Meanwhile, the dynamic prompt head generates a bespoke segmentation head for each image, tailoring the segmentation process to that image's distinctive attributes. We also present the Fundus Tumor Segmentation (FTS) dataset, comprising 254 pairs of fundus images with tumor lesions and reference segmentations. Experiments demonstrate FTSNet's superior performance over existing methods, achieving a mean Intersection over Union (mIoU) of 0.8254 and mean Dice (mDice) of 0.9042. The results highlight the potential of our approach in advancing the accuracy and efficiency of fundus tumor segmentation.
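The mIoU and mDice figures above follow the standard overlap definitions. A minimal sketch of how such metrics are computed over a set of binary masks (the function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def miou_mdice(preds, targets):
    """Mean IoU and mean Dice over paired binary segmentation masks."""
    ious, dices = [], []
    for p, t in zip(preds, targets):
        p, t = p.astype(bool), t.astype(bool)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        total = p.sum() + t.sum()
        ious.append(inter / union if union else 1.0)       # IoU  = |P∩T| / |P∪T|
        dices.append(2 * inter / total if total else 1.0)  # Dice = 2|P∩T| / (|P|+|T|)
    return float(np.mean(ious)), float(np.mean(dices))
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the reported 0.9042 mDice exceeding the 0.8254 mIoU.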
Affiliation(s)
- Shurui Bai
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Zhuo Deng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Jingyan Yang
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Zheng Gong
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Weihao Gao
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Lei Shao
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Fang Li
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Wenbin Wei
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Lan Ma
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
2. Bhimavarapu U. Retina Blood Vessels Segmentation and Classification with the Multi-featured Approach. Journal of Imaging Informatics in Medicine. 2024. PMID: 39117940; DOI: 10.1007/s10278-024-01219-2.
Abstract
Segmenting retinal blood vessels poses a significant challenge due to the irregularities inherent in small vessels. The complexity arises from the intricate task of effectively merging features at multiple levels, coupled with potential spatial information loss during successive down-sampling steps; this particularly affects the identification of small, faintly contrasting vessels. To address these challenges, we present a model tailored for automated arterial and venous (A/V) classification, complementing blood vessel segmentation. This paper presents an advanced methodology for segmenting and classifying retinal vessels using a series of pre-processing and feature extraction techniques. The ensemble filter approach, incorporating bilateral and Laplacian edge detectors, enhances image contrast and preserves edges. The proposed algorithm further refines the image by generating an orientation map. During the vessel extraction step, a fully convolutional network processes the input image to create a detailed vessel map, enhanced by attention operations that improve modeling perception and resilience. The encoder extracts semantic features, while the attention module refines the blood vessel depiction, resulting in highly accurate segmentation outcomes. The model was verified on the STARE dataset, which includes 400 images; the DRIVE dataset with 40 images; the HRF dataset with 45 images; and the INSPIRE-AVR dataset containing 40 images. The proposed model demonstrated superior performance across all datasets, achieving an accuracy of 97.5% on the DRIVE dataset, 99.25% on the STARE dataset, 98.33% on the INSPIRE-AVR dataset, and 98.67% on the HRF dataset. These results highlight the method's effectiveness in accurately segmenting and classifying retinal vessels.
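The pre-processing ensemble pairs a bilateral filter with Laplacian edge detection. As a rough illustration of the Laplacian half of that idea (a sketch only, not the paper's pipeline; the 4-neighbour kernel and the sharpening weight are conventional choices, not taken from the paper):

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian of a 2D image with edge-replicated borders."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4 * p[1:-1, 1:-1])

def laplacian_sharpen(img, weight=0.5):
    """Boost edges by subtracting a fraction of the Laplacian response."""
    return np.asarray(img, dtype=float) - weight * laplacian(img)
```

In the paper's ensemble, a bilateral filter would additionally smooth noise while preserving edges before or alongside this kind of edge enhancement.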
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India.
3. Wang Y, Li H. A Novel Single-Sample Retinal Vessel Segmentation Method Based on Grey Relational Analysis. Sensors (Basel). 2024;24:4326. PMID: 39001106; PMCID: PMC11244310; DOI: 10.3390/s24134326.
Abstract
Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Because retinal vessel samples are limited and labeled samples are scarce, and because grey theory excels at problems of "few data, poor information", this paper proposes a novel grey relational-based method for retinal vessel segmentation. Firstly, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Secondly, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated with multiple measurement metrics on the publicly available DRIVE, STARE, and HRF datasets. Experimental analysis showed average accuracy and specificity of 96.03% and 98.51% on DRIVE, and 95.46% and 97.85% on STARE. Precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high performance. The proposed method outperforms current mainstream methods.
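Grey relational analysis scores each element by its closeness to a reference sequence. A toy sketch of the relational-coefficient idea behind a TS-GRA-style threshold (the function names, reference value, and threshold are illustrative; rho = 0.5 is the conventional distinguishing coefficient):

```python
import numpy as np

def grey_relational_coeff(image, reference, rho=0.5):
    """Grey relational coefficient of each pixel w.r.t. a reference value:
    (d_min + rho * d_max) / (d + rho * d_max), where d = |pixel - reference|."""
    d = np.abs(np.asarray(image, dtype=float) - reference)
    d_min, d_max = d.min(), d.max()
    return (d_min + rho * d_max) / (d + rho * d_max)

def segment_by_grade(image, reference, threshold=0.6):
    """Keep pixels whose relational grade to the reference meets a threshold."""
    return grey_relational_coeff(image, reference) >= threshold
```

Pixels close to the reference intensity receive a grade near 1 and are retained; distant pixels decay toward rho / (1 + rho), which is what makes a fixed threshold usable.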
Affiliation(s)
- Yating Wang
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Hongjun Li
- School of Information Science and Technology, Nantong University, Nantong 226019, China
4. Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous. 2024;10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND: Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY: In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION: As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA.
5. Mahapatra S, Agrawal S, Mishro PK, Panda R, Dora L, Pachori RB. A Review on Retinal Blood Vessel Enhancement and Segmentation Techniques for Color Fundus Photography. Crit Rev Biomed Eng. 2024;52:41-69. PMID: 37938183; DOI: 10.1615/critrevbiomedeng.2023049348.
Abstract
The retinal image is a trusted modality in biomedical image-based diagnosis of many ophthalmologic and cardiovascular diseases. Periodic examination of the retina can help in spotting these abnormalities at an early stage. However, to deal with today's large population, computerized retinal image analysis is preferred over manual inspection. The precise extraction of the retinal vessels is the first and decisive step for clinical applications. Every year, many more articles are added to the literature describing new algorithms for the problem at hand. Most existing review articles, however, are restricted to a fairly small number of approaches, assessment indices, and databases. In this context, a comprehensive review of different vessel extraction methods is needed. This review develops a first-hand classification of these methods, and a bibliometric analysis of the surveyed articles is also presented. The benefits and drawbacks of the most commonly used techniques are summarized. The primary challenges, as well as the scope of possible changes, are discussed. To enable a fair comparison, numerous assessment indices are considered. The findings of this survey could provide a new path for researchers for further work in this domain.
Affiliation(s)
- Sakambhari Mahapatra
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Sanjay Agrawal
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Pranaba K Mishro
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Rutuparna Panda
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Lingraj Dora
- Department of Electrical and Electronics Engineering, Veer Surendra Sai University of Technology, Burla, India
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, India
6. Ye Z, Liu Y, Jing T, He Z, Zhou L. A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation. Sensors (Basel). 2023;23:8899. PMID: 37960597; PMCID: PMC10650600; DOI: 10.3390/s23218899.
Abstract
Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of deep learning methods have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny and low-contrast vessels are hard to detect due to the loss of spatial detail caused by consecutive down-sampling operations and the inadequate fusion of multi-level features through vanilla skip connections. To address these issues and enhance the segmentation precision of retinal vessels, we propose a novel high-resolution network with strip attention. Instead of a U-Net-shaped architecture, the proposed network follows an HRNet-shaped architecture as the basic network, learning high-resolution representations throughout the training process. In addition, a strip attention module comprising a horizontal attention mechanism and a vertical attention mechanism is designed to obtain long-range dependencies in the horizontal and vertical directions by calculating the similarity between each pixel and all pixels in the same row and the same column, respectively. For effective multi-layer feature fusion, we incorporate the strip attention module into the basic network to dynamically guide adjacent hierarchical features. Experimental results on the DRIVE and STARE datasets show that the proposed method extracts more tiny and low-contrast vessels than existing mainstream methods, achieving accuracies of 96.16% and 97.08% and sensitivities of 82.68% and 89.36%, respectively. The proposed method has the potential to aid in the analysis of fundus images.
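The strip attention idea, each pixel attending to all pixels in its row and, separately, its column, can be sketched for a single-channel feature map as follows (a numpy illustration of the mechanism only; the authors' module operates on multi-channel feature tensors with learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def strip_attention(feat):
    """Toy strip attention on an (h, w) map: row-wise and column-wise
    similarity-weighted aggregation, added back as a residual."""
    h, w = feat.shape
    # Horizontal: each pixel attends over its own row.
    row_att = softmax(feat[:, :, None] * feat[:, None, :], axis=-1)  # (h, w, w)
    horiz = np.einsum('hij,hj->hi', row_att, feat)
    # Vertical: each pixel attends over its own column.
    ft = feat.T                                                       # (w, h)
    col_att = softmax(ft[:, :, None] * ft[:, None, :], axis=-1)       # (w, h, h)
    vert = np.einsum('wij,wj->wi', col_att, ft).T
    return feat + horiz + vert  # residual connection
```

Compared with full self-attention over all h*w pixels, the row and column strips reduce the attention cost from O((hw)^2) to O(hw(h+w)) while still capturing long-range dependencies along both axes.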
Affiliation(s)
- Zhipin Ye
- Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Yingqian Liu
- Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Teng Jing
- Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Zhaoming He
- Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79411, USA
- Ling Zhou
- Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
7. Li R, Li Z, Fan H, Teng S, Cao X. MCFSA-Net: A multi-scale channel fusion and spatial activation network for retinal vessel segmentation. J Biophotonics. 2023;16:e202200295. PMID: 36413066; DOI: 10.1002/jbio.202200295.
Abstract
As the only vascular tissue that can be directly viewed in vivo, retinal vessels are medically important in assisting the diagnosis of ocular and cardiovascular diseases. They appear with varied morphologies and uneven thickness in fundus images. Therefore, single-scale segmentation methods may fail to capture abundant morphological features, degrading vessel segmentation, especially for tiny vessels. To alleviate this issue, we propose a multi-scale channel fusion and spatial activation network (MCFSA-Net) for retinal vessel segmentation with emphasis on tiny vessels. Specifically, the Hybrid Convolution-DropBlock (HC-Drop) module is first used to extract deep vessel features and construct multi-scale feature maps by progressive down-sampling. Then, the Channel Cooperative Attention Fusion (CCAF) module is designed to handle vessels of different morphologies in a multi-scale manner. Finally, the Global Spatial Activation (GSA) module is introduced to aggregate global feature information, improving attention on tiny vessels in the spatial domain and enabling their effective segmentation. Experiments are carried out on three datasets: DRIVE, CHASE_DB1, and STARE. Our retinal vessel segmentation method achieves accuracies of 96.95%, 97.57%, and 97.83%, and F1 scores of 82.67%, 81.82%, and 82.95% on these datasets, respectively. Qualitative and quantitative analysis shows that the proposed method outperforms current advanced vessel segmentation methods, especially for tiny vessels.
Affiliation(s)
- Rui Li
- College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao, China
- Zuoyong Li
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China
- Haoyi Fan
- School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, China
- Shenghua Teng
- College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao, China
- Xinrong Cao
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China
- Fuzhou Digital Healthcare Industry Technology Innovation Center, Minjiang University, Fuzhou, China
8. Upadhyay K, Agrawal M, Vashist P. Characteristic patch-based deep and handcrafted feature learning for red lesion segmentation in fundus images. Biomed Signal Process Control. 2023. DOI: 10.1016/j.bspc.2022.104123.
9. Zhang F, Zheng Y, Wu J, Yang X, Che X. Multi-rater label fusion based on an information bottleneck for fundus image segmentation. Biomed Signal Process Control. 2023. DOI: 10.1016/j.bspc.2022.104108.
10. Guo S. CSGNet: Cascade semantic guided net for retinal vessel segmentation. Biomed Signal Process Control. 2022. DOI: 10.1016/j.bspc.2022.103930.
11. Mahapatra S, Agrawal S, Mishro PK, Pachori RB. A novel framework for retinal vessel segmentation using optimal improved Frangi filter and adaptive weighted spatial FCM. Comput Biol Med. 2022;147:105770. DOI: 10.1016/j.compbiomed.2022.105770.
12. Guo S. LightEyes: A Lightweight Fundus Segmentation Network for Mobile Edge Computing. Sensors (Basel). 2022;22:3112. PMID: 35590802; PMCID: PMC9104959; DOI: 10.3390/s22093112.
Abstract
The fundus is the only internal structure of the human body that can be observed without trauma. By analyzing color fundus images, the diagnostic basis for various diseases can be obtained. Recently, fundus image segmentation has witnessed vast progress with the development of deep learning. However, the improvement in segmentation accuracy comes with increased model complexity; as a result, these models show low inference speeds and high memory usage when deployed to mobile edge devices. To promote the deployment of deep fundus segmentation models on mobile devices, we aim to design a lightweight fundus segmentation network. Our observation comes from the fact that high-resolution representations can boost the segmentation of tiny fundus structures, and that the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network to learn high-resolution representations, so that the spatial relationship between feature maps is always retained. Meanwhile, since high-resolution features mean high memory usage, each layer uses at most 16 convolutional filters to reduce memory usage and training difficulty. LightEyes has been verified on three fundus segmentation tasks (hard exudate, microaneurysm, and vessel) on five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and speed compared with state-of-the-art fundus segmentation models, running at 1.6 images/s on a Cambricon-1A and 51.3 images/s on a GPU with only 36k parameters.
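The cap of at most 16 filters per layer is what keeps the parameter budget so small. A quick back-of-the-envelope helper (illustrative, not from the paper) shows why:

```python
def conv2d_params(in_ch, out_ch, k=3, bias=True):
    """Parameter count of a 2D convolution layer:
    out_ch * in_ch * k * k weights, plus out_ch biases if used."""
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

# A 16-in, 16-out 3x3 convolution costs only 2320 parameters, so a
# stack of roughly fifteen such layers lands near the 36k total above.
```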
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China