1
Xu H, Shao X, Fang D, Huang F. A hybrid neural network approach for classifying diabetic retinopathy subtypes. Front Med (Lausanne) 2024;10:1293019. [PMID: 38239623; PMCID: PMC10794511; DOI: 10.3389/fmed.2023.1293019]
Abstract
OBJECTIVE Diabetic retinopathy is a prevalent complication of diabetes that, if not predicted and treated promptly, can lead to blindness, so timely prediction of its severity is crucial. This paper proposes a hybrid neural network model for accurately and swiftly predicting the degree of diabetic retinopathy. METHODS This study aims to enhance prediction accuracy with a hybrid neural network model combining EfficientNet and Swin Transformer. The methodology includes: (1) combining local and global features to accurately capture lesion characteristics by leveraging the complementary strengths of the Swin Transformer and EfficientNet models; (2) improving prediction accuracy through a comprehensive analysis of the model's training details and data augmentation techniques such as Gaussian blur; (3) validating the effectiveness and utility of the proposed hybrid model for diabetic retinopathy detection through extensive experimental evaluations and comparisons with other deep learning models. RESULTS The hybrid model was trained and tested on the large-scale real-world APTOS 2019 Blindness Detection dataset. The experimental results show that the hybrid model achieves the best results on all metrics: sensitivity of 0.95, specificity of 0.98, accuracy of 0.97, and AUC of 0.97, a significant improvement over current mainstream methods. In addition, the model provides interpretable network details through class activation maps, which visualize the retinal regions driving each prediction and help physicians make more accurate diagnosis and treatment decisions.
The proposed model's higher accuracy in detecting and diagnosing diabetic retinopathy is crucial for the treatment and rehabilitation of diabetic patients. CONCLUSION The hybrid neural network model based on EfficientNet and Swin Transformer contributes significantly to the prediction of diabetic retinopathy. By combining local and global features, it achieves improved prediction accuracy, and its validity and utility are verified through experimental evaluations. This research provides robust support for the early diagnosis and treatment of diabetic patients.
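The class activation maps this abstract mentions are, in their standard form, a weighted sum of the final convolutional layer's feature maps. A minimal NumPy sketch of that idea (not the paper's code; the toy feature maps and classifier weights below are invented for illustration):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Standard CAM: weight each feature map by the classifier weight of
    the predicted class, sum, keep positive evidence, rescale to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)
    span = cam.max() - cam.min()
    return (cam - cam.min()) / span if span > 0 else np.zeros_like(cam)

# Toy example: 3 feature maps of size 4x4 from a hypothetical last conv layer.
gen = np.random.default_rng(0)
maps = gen.random((3, 4, 4))
weights = np.array([0.5, 0.2, 0.3])
cam = class_activation_map(maps, weights)
```

Upsampling such a map and overlaying it on the fundus image highlights the regions that drove the prediction, which is the interpretability feature the abstract describes.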
Affiliation(s)
- Huanqing Xu: The School of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, China
- Xian Shao: NHC Key Laboratory of Hormones and Development, Chu Hsien-I Memorial Hospital and Tianjin Institute of Endocrinology, Tianjin Medical University, Tianjin, China
- Dandan Fang: The School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Fangliang Huang: The School of Medical Information Engineering, Anhui University of Chinese Medicine, Hefei, China
2
Jiang K, Gong T, Quan L. A medical unsupervised domain adaptation framework based on Fourier transform image translation and multi-model ensemble self-training strategy. Int J Comput Assist Radiol Surg 2023;18:1885-1894. [PMID: 37010674; DOI: 10.1007/s11548-023-02867-5]
Abstract
PURPOSE Well-established segmentation models suffer performance degradation when deployed on data with heterogeneous features, especially in medical image analysis. Although researchers have proposed many approaches to this problem in recent years, most are feature-adaptation-based adversarial networks, and problems such as training instability often arise during adversarial training. To address this challenge and improve robustness to data with different distributions, we propose a novel unsupervised domain adaptation (UDA) framework for cross-domain medical image segmentation. METHODS Our approach integrates Fourier transform-guided image translation and multi-model ensemble self-training into a unified framework. First, after a Fourier transform, the amplitude spectrum of the source image is replaced with that of the target image, and the image is reconstructed by the inverse Fourier transform. Second, we augment the target dataset with the synthetic cross-domain images, performing supervised learning with the original source labels while regularizing by entropy minimization on predictions for unlabeled target data. We train several segmentation networks with different hyperparameters simultaneously; pseudo-labels are generated by averaging their outputs and comparing against a confidence threshold, and pseudo-label quality is gradually improved through multiple rounds of self-training. RESULTS We applied our framework to two liver CT datasets in bidirectional adaptation experiments. In both experiments, compared with the segmentation network without domain alignment, the Dice similarity coefficient (DSC) increased by nearly 34% and the average symmetric surface distance (ASSD) decreased by about 10. The DSC values also improved by 10.8% and 6.7%, respectively, compared with the existing model.
CONCLUSION We propose a Fourier transform-based UDA framework; the experimental results and comparisons demonstrate that it effectively diminishes the performance degradation caused by domain shift and performs best on cross-domain segmentation tasks. Our multi-model ensemble self-training strategy also improves the robustness of the segmentation system.
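The amplitude-swap step in the METHODS can be sketched with NumPy's FFT routines. This is a hedged reconstruction in the spirit of Fourier-based image translation, not the authors' implementation; in particular, swapping only a centred low-frequency band and the band-size parameter `beta` are assumptions:

```python
import numpy as np

def fourier_amplitude_swap(source, target, beta=0.1):
    """Translate `source` toward `target` appearance: replace the
    low-frequency amplitude of the source spectrum with the target's,
    keep the source phase, and reconstruct with the inverse FFT."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp, phase = np.abs(fs), np.angle(fs)
    h, w = source.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Swap only the centred low-frequency band of the amplitude spectrum.
    amp[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        np.abs(ft)[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    mixed = np.fft.ifftshift(amp * np.exp(1j * phase))
    return np.real(np.fft.ifft2(mixed))

src = np.random.default_rng(1).random((32, 32))
tgt = np.random.default_rng(2).random((32, 32))
translated = fourier_amplitude_swap(src, tgt)
```

The result keeps the source image's structure (phase) while adopting the target domain's coarse intensity statistics (low-frequency amplitude), which is what makes the synthetic images useful for self-training.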
Affiliation(s)
- Kaida Jiang: College of Information Science and Technology, Donghua University, Shanghai, China
- Tao Gong: College of Information Science and Technology, Donghua University, Shanghai, China
- Li Quan: College of Information Science and Technology, Donghua University, Shanghai, China
3
Ong J, Waisberg E, Masalkhi M, Kamran SA, Lowry K, Sarker P, Zaman N, Paladugu P, Tavakkoli A, Lee AG. Artificial Intelligence Frameworks to Detect and Investigate the Pathophysiology of Spaceflight Associated Neuro-Ocular Syndrome (SANS). Brain Sci 2023;13:1148. [PMID: 37626504; PMCID: PMC10452366; DOI: 10.3390/brainsci13081148]
Abstract
Spaceflight associated neuro-ocular syndrome (SANS) is a unique phenomenon observed in astronauts who have undergone long-duration spaceflight (LDSF). The syndrome is characterized by distinct imaging and clinical findings including optic disc edema, hyperopic refractive shift, posterior globe flattening, and choroidal folds. SANS poses a major barrier to planetary spaceflight, such as a mission to Mars, and has been designated by the National Aeronautics and Space Administration (NASA) as a high risk based on its likelihood of occurrence and its potential severity for human health and mission performance. Despite this, the underlying etiology of SANS is not well understood. Current ophthalmic imaging onboard the International Space Station (ISS) has provided further insights into SANS, but the spaceflight environment presents unique challenges and limitations to further understanding this microgravity-induced phenomenon. The advent of artificial intelligence (AI) has revolutionized imaging in ophthalmology, particularly for detection and monitoring. In this manuscript, we describe the currently hypothesized pathophysiology of SANS and the medical diagnostic limitations during spaceflight that hinder understanding of its pathogenesis. We then introduce and describe various AI frameworks that can be applied to ophthalmic imaging onboard the ISS to further understand SANS, including supervised/unsupervised learning, generative adversarial networks, and transfer learning. We conclude by describing current research in this area, with the goal of enabling deeper insights into SANS and safer spaceflight for future missions.
Affiliation(s)
- Joshua Ong: Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI 48105, USA
- Mouayad Masalkhi: University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Sharif Amit Kamran: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Prithul Sarker: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Nasif Zaman: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Phani Paladugu: Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA; Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Alireza Tavakkoli: Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Andrew G. Lee: Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA; Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX 77030, USA; The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX 77030, USA; Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY 10065, USA; Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX 77555, USA; University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Texas A&M College of Medicine, Bryan, TX 77030, USA; Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA 50010, USA
4
Ma Y, Qin Y, Liang C, Li X, Li M, Wang R, Yu J, Xu X, Lv S, Luo H, Jiang Y. Visual Cascaded-Progressive Convolutional Neural Network (C-PCNN) for Diagnosis of Meniscus Injury. Diagnostics (Basel) 2023;13:2049. [PMID: 37370944; PMCID: PMC10297643; DOI: 10.3390/diagnostics13122049]
Abstract
OBJECTIVE The objective of this study is to develop a novel automatic convolutional neural network (CNN) that aids in the diagnosis of meniscus injury while enabling visualization of lesion characteristics, improving accuracy and reducing diagnosis times. METHODS We present a cascaded-progressive convolutional neural network (C-PCNN) method for diagnosing meniscus injuries from magnetic resonance imaging (MRI). A total of 1396 images collected in the hospital were used, with 5-fold cross-validation for training and testing. Using intraoperative arthroscopic diagnosis and MRI diagnosis as reference standards, the C-PCNN was evaluated by accuracy, sensitivity, specificity, and receiver operating characteristic (ROC) analysis. At the same time, the diagnostic accuracy of doctors assisted by the C-PCNN was evaluated: the accuracy of an attending physician and a chief physician, each aided by the C-PCNN, was compared to assess clinical significance. RESULTS The C-PCNN showed 85.6% accuracy in identifying anterior horn injury and 92% accuracy in identifying posterior horn injury. Its average accuracy was 89.8%, with AUC = 0.86. The diagnostic accuracy of the attending physician aided by the C-PCNN was comparable to that of the chief physician. CONCLUSION The C-PCNN-based MRI technique for diagnosing knee meniscus injuries has significant practical value in clinical practice. With its high accuracy, it can help physicians increase the speed and accuracy of diagnosis and decrease the number of incorrect diagnoses.
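The accuracy, sensitivity, and specificity figures reported in abstracts like this one follow the standard confusion-matrix definitions. A small self-contained sketch (not the study's evaluation code; the label vectors below are toy data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)       # injured, predicted injured
    tn = np.sum(~y_true & ~y_pred)     # healthy, predicted healthy
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy

# tp=2, fn=1, tn=2, fp=0 for these toy labels.
sens, spec, acc = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```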
Affiliation(s)
- Yingkai Ma: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Yong Qin: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Chen Liang: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Xiang Li: Department of Control Science and Engineering, Harbin Institute of Technology, Haerbin 150001, China
- Minglei Li: Department of Control Science and Engineering, Harbin Institute of Technology, Haerbin 150001, China
- Ren Wang: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Jinping Yu: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Xiangning Xu: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Songcen Lv: Second Affiliated Hospital of Harbin Medical University, Harbin Medical University, Haerbin 150001, China
- Hao Luo: Department of Control Science and Engineering, Harbin Institute of Technology, Haerbin 150001, China
- Yuchen Jiang: Department of Control Science and Engineering, Harbin Institute of Technology, Haerbin 150001, China
5
Zhang H, Ni W, Luo Y, Feng Y, Song R, Wang X. TUnet-LBF: Retinal fundus image fine segmentation model based on transformer Unet network and LBF. Comput Biol Med 2023;159:106937. [PMID: 37084640; DOI: 10.1016/j.compbiomed.2023.106937]
Abstract
Segmentation of retinal fundus images is a crucial part of medical diagnosis, yet automatic extraction of blood vessels from low-quality retinal images remains a challenging problem. In this paper, we propose TUnet-LBF, a novel two-stage model combining Transformer Unet (TUnet) and a local binary energy function (LBF) model for coarse-to-fine segmentation of retinal vessels. In the coarse segmentation stage, TUnet captures the global topological information of the blood vessels; the network outputs an initial contour and probability maps, which are passed to the fine segmentation stage as prior information. In the fine segmentation stage, an energy-modulated LBF model recovers the local detail of the blood vessels. The proposed model reaches accuracies (Acc) of 0.9650, 0.9681, and 0.9708 on the public datasets DRIVE, STARE, and CHASE_DB1, respectively. The experimental results demonstrate the effectiveness of each component of the proposed model.
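The hand-off from the coarse stage to the fine stage (turning a probability map into an initial contour) can be illustrated with a minimal NumPy sketch. The 0.5 threshold and the 4-neighbour boundary test are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def initial_contour(prob_map, thresh=0.5):
    """Threshold a coarse-stage probability map and return the boundary
    pixels of the resulting mask, as a stand-in for the prior contour
    handed to the fine (LBF) stage."""
    mask = prob_map >= thresh
    # A pixel is interior if all four 4-neighbours are also foreground.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior   # foreground pixels on the mask boundary

prob = np.zeros((6, 6))
prob[2:5, 2:5] = 0.9          # a 3x3 foreground block
contour = initial_contour(prob)
```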
Affiliation(s)
- Hanyu Zhang: School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China; College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China
- Weihan Ni: School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China
- Yi Luo: College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China
- Yining Feng: School of Geography, Liaoning Normal University, Dalian City, 116029, China
- Ruoxi Song: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China
- Xianghai Wang: School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China
6
Philippi D, Rothaus K, Castelli M. A vision transformer architecture for the automated segmentation of retinal lesions in spectral domain optical coherence tomography images. Sci Rep 2023;13:517. [PMID: 36627357; PMCID: PMC9832034; DOI: 10.1038/s41598-023-27616-1]
Abstract
Neovascular age-related macular degeneration (nAMD) is one of the major causes of irreversible blindness and is characterized by accumulations of different lesions inside the retina. AMD biomarkers enable experts to grade the AMD and could be used for therapy prognosis and individualized treatment decisions. In particular, intra-retinal fluid (IRF), sub-retinal fluid (SRF), and pigment epithelium detachment (PED) are prominent biomarkers for grading neovascular AMD. Spectral-domain optical coherence tomography (SD-OCT) revolutionized early nAMD diagnosis by providing cross-sectional images of the retina. Automatic segmentation and quantification of IRF, SRF, and PED in SD-OCT images can be extremely useful for clinical decision-making. Despite the excellent performance of convolutional neural network (CNN)-based methods, the task still presents challenges due to substantial variation in the location, size, shape, and texture of the lesions. This work adopts a transformer-based method to automatically segment retinal lesions from SD-OCT images and evaluates its performance qualitatively and quantitatively against CNN-based methods. The method combines the efficient long-range feature extraction and aggregation capabilities of Vision Transformers with the data-efficient training of CNNs. The proposed method was tested on a private dataset of 3842 two-dimensional SD-OCT retina images, manually labeled by experts of the Franziskus Eye-Center, Muenster. While one of the competitors achieves a better Dice score, the proposed method is significantly less computationally expensive. Future research will therefore focus on the proposed network's architecture, to increase its segmentation performance while maintaining its computational efficiency.
Affiliation(s)
- Daniel Philippi: NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal
- Kai Rothaus: Department of Ophthalmology, St. Franziskus Hospital, 48145 Muenster, Germany
- Mauro Castelli: NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, 1070-312 Lisbon, Portugal; School of Economics and Business, University of Ljubljana, Ljubljana, Slovenia
7
Huang Y, Yang J, Sun Q, Ma S, Yuan Y, Tan W, Cao P, Feng C. Vessel filtering and segmentation of coronary CT angiographic images. Int J Comput Assist Radiol Surg 2022;17:1879-1890. [PMID: 35764765; DOI: 10.1007/s11548-022-02655-7]
Abstract
PURPOSE Coronary artery segmentation in coronary computed tomography angiography (CTA) images plays a crucial role in diagnosing cardiovascular diseases. However, due to the complexity of coronary CTA images and of the coronary structure itself, it is difficult to segment coronary arteries accurately and efficiently from large numbers of coronary CTA images automatically. METHOD In this study, an automatic method based on a symmetrical radiation filter (SRF) and D-means clustering is presented. The SRF, applied on the three orthogonal planes, filters suspicious vessel tissue according to the gradient changes at vascular boundaries, segmenting coronary arteries accurately at reduced computational cost. Additionally, D-means local clustering is embedded into the vessel segmentation to eliminate the impact of noise in coronary CTA images. RESULTS The results of the proposed method were compared against manual delineations in 210 coronary CTA data sets. The average values of true positive, false positive, Jaccard measure, and Dice coefficient were [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively. Moreover, comparisons on both the delineated and public data sets showed that the proposed method outperforms related methods. CONCLUSION The experimental results indicate that the proposed method performs complete, robust, and accurate segmentation of coronary arteries at low computational cost, without requiring extensive training data, and can meet the needs of clinical applications.
Affiliation(s)
- Yan Huang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Qi Sun: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuang Ma: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yuliang Yuan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Wenjun Tan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Peng Cao: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Chaolu Feng: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
8
Deng X, Ye J. A retinal blood vessel segmentation based on improved D-MNet and pulse-coupled neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103467]
9
GCA-Net: global context attention network for intestinal wall vascular segmentation. Int J Comput Assist Radiol Surg 2021;17:569-578. [PMID: 34606060; DOI: 10.1007/s11548-021-02506-x]
Abstract
PURPOSE Precise segmentation of intestinal wall vessels is vital to preventing colonic perforation. However, intestinal wall vessel images contain interference such as gastric juice, and vessels are difficult to distinguish from mucosal folds, which easily leads to mis-segmentation. In addition, insufficient feature extraction from intricate vessel structures may miss tiny vessels at risk of rupture. To overcome these challenges, an effective network for segmenting intestinal wall vessels is proposed. METHODS A global context attention network (GCA-Net) employing a multi-scale fusion attention (MFA) module is proposed to adaptively integrate local and global context information, improving the distinguishability of mucosal folds and vessels and, more importantly, the ability to capture tiny vessels. A parallel decoder is also used to introduce a contour loss function that addresses blurry and noisy blood vessel boundaries. RESULTS Extensive experimental results demonstrate the superiority of GCA-Net, with an accuracy of 94.84%, specificity of 97.89%, F1-score of 73.80%, AUC of 96.30%, and MeanIOU of 76.46% in fivefold cross-validation, exceeding the comparison methods. In addition, the public dataset DRIVE is used to verify the potential of GCA-Net for retinal vessel image segmentation. CONCLUSION A novel network for segmenting intestinal wall vessels is developed that suppresses interference in intestinal wall vessel images, improves the discernibility of blood vessels and mucosal folds, enhances vessel boundaries, and captures tiny vessels. Comprehensive experiments prove that the proposed GCA-Net can accurately segment intestinal wall vessels.
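As a toy illustration of the global-context idea (not GCA-Net's actual MFA module, whose design is not reproduced here), channel attention driven by globally pooled context can be sketched as:

```python
import numpy as np

def global_context_channel_attention(feat):
    """Reweight feature channels by a softmax over their global averages,
    so channels whose global context is strong are emphasised."""
    context = feat.mean(axis=(1, 2))             # global average per channel
    weights = np.exp(context - context.max())
    weights /= weights.sum()                     # softmax attention weights
    return feat * weights[:, None, None]         # broadcast over H, W

feat = np.random.default_rng(3).random((4, 8, 8))   # (channels, H, W)
out = global_context_channel_attention(feat)
```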