1. Chen S, Fan J, Ding Y, Geng H, Ai D, Xiao D, Song H, Wang Y, Yang J. PEA-Net: A progressive edge information aggregation network for vessel segmentation. Comput Biol Med 2024; 169:107766. PMID: 38150885. DOI: 10.1016/j.compbiomed.2023.107766.
Abstract
Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains a challenge due to issues such as uneven contrast and background noise. Existing methods primarily focus on segmenting individual pixels and often fail to consider vessel features and morphology. As a result, these methods often produce fragmented results and misidentify vessel-like background noise, leading to missing and outlier points in the overall segmentation. To address these issues, this paper proposes a novel approach called the progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise. This is achieved by combining vessel morphological features obtained from different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity. This module complements the decoding path by combining features from previous iterations and the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with nonsalient information provided by the PCF. This integration enhances both edge and segmentation information. The features in the skip connection and decoding path are iteratively updated to progressively aggregate fine structure information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that the proposed PEA-Net model and strategy achieve optimal performance in both pixel-level and topology-level metrics.
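As a rough sketch of the multi-receptive-field idea behind the DRE (an illustration only, not the authors' implementation; the function names and window sizes are assumptions), two box filters of different sizes can be applied to the same signal and stacked channel-wise, so that fine structure and broader context are preserved side by side:

```python
import numpy as np

def box_filter(signal, size):
    """Average-pool a 1D signal with a sliding window of odd length `size`
    (same output length, edge padding)."""
    pad = size // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(size) / size
    return np.convolve(padded, kernel, mode="valid")[:len(signal)]

def dual_receptive_features(signal, small=3, large=7):
    """Stack responses from a small and a large receptive field as two channels."""
    return np.stack([box_filter(signal, small), box_filter(signal, large)])
```

The larger window suppresses isolated, noise-like responses more aggressively, while the smaller one retains fine vessel-like detail; PEA-Net combines such multi-scale responses with learned convolutions rather than fixed box filters.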
Affiliation(s)
- Sigeng Chen
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yang Ding
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Haixiao Geng
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Deqiang Xiao
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
2. Wang G, Zhou P, Gao H, Qin Z, Wang S, Sun J, Yu H. Coronary vessel segmentation in coronary angiography with a multi-scale U-shaped transformer incorporating boundary aggregation and topology preservation. Phys Med Biol 2024; 69:025012. PMID: 38200403. DOI: 10.1088/1361-6560/ad0b63.
Abstract
Coronary vessel segmentation plays a pivotal role in automating the auxiliary diagnosis of coronary heart disease. The continuity and boundary accuracy of the segmented vessels directly affect subsequent processing. Notably, during segmentation, vessels with severe stenosis can easily cause boundary errors and breakage, resulting in isolated islands. To address these issues, we propose a novel multi-scale U-shaped transformer with boundary aggregation and topology preservation (UT-BTNet) for coronary vessel segmentation in coronary angiography. Specifically, considering the characteristics of coronary vessels, we first develop UT-BTNet for coronary vessel segmentation, which combines the advantages of convolutional neural networks (CNNs) and transformers and can effectively extract the local and global features of angiographic images. Secondly, we innovatively employ boundary loss and topological loss in two stages, in addition to the traditional losses. In the first stage, boundary loss is adopted, which has the effect of boundary aggregation. In the second stage, after the network converges, topological loss is applied to preserve the topology of the vessels. In the experiments, in addition to the two metrics of Dice and intersection over union (IoU), we propose two further metrics, boundary intersection over union (BIoU) and Betti error, to evaluate the boundary accuracy and continuity of the segmentation results. The results show a Dice of 0.9291, an IoU of 0.8687, a BIoU of 0.5094, and a Betti error of 0.3400. Compared with other state-of-the-art methods, UT-BTNet achieves better segmentation results while ensuring the continuity and boundary accuracy of the vessels, indicating its potential clinical value.
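The metrics named in this abstract can be made concrete for binary masks. The following is an illustrative NumPy sketch, not the paper's code (BIoU is omitted because the abstract does not give its exact boundary definition): Dice and IoU measure pixel overlap, while the Betti-0 error compares counts of connected foreground components, which is what captures breakage into "isolated islands":

```python
import numpy as np

def dice_iou(pred, gt):
    """Pixel-level overlap metrics on two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

def betti0(mask):
    """Count 4-connected foreground components (0th Betti number)
    with an iterative flood fill."""
    mask = np.asarray(mask).astype(bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

def betti0_error(pred, gt):
    """Absolute difference in component counts: a broken vessel raises this."""
    return abs(betti0(pred) - betti0(gt))
```

For example, a prediction that splits one continuous vessel into two fragments can still score a high Dice while incurring a Betti-0 error of 1, which is why the authors report topology-level metrics alongside pixel-level ones.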
Affiliation(s)
- Guangpu Wang
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Peng Zhou
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, People's Republic of China
- Hui Gao
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Zewei Qin
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Shuo Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, People's Republic of China
- Jinglai Sun
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Hui Yu
- Department of Biomedical Engineering, School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, People's Republic of China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, People's Republic of China
3. Han T, Ai D, Li X, Fan J, Song H, Wang Y, Yang J. Coronary artery stenosis detection via proposal-shifted spatial-temporal transformer in X-ray angiography. Comput Biol Med 2023; 153:106546. PMID: 36641935. DOI: 10.1016/j.compbiomed.2023.106546.
Abstract
Accurate detection of coronary artery stenosis in X-ray angiography (XRA) images is crucial for the diagnosis and treatment of coronary artery disease. However, stenosis detection remains a challenging task due to complicated vascular structures, poor imaging quality, and fickle lesions. While devoted to accurate stenosis detection, most methods exploit the spatio-temporal information of XRA sequences inefficiently, leading to limited performance on the task. To overcome this problem, we propose a new stenosis detection framework based on a Transformer-based module that aggregates proposal-level spatio-temporal features. In the module, a proposal-shifted spatio-temporal tokenization (PSSTT) scheme is devised to gather spatio-temporal region-of-interest (RoI) features for obtaining visual tokens within a local window. A Transformer-based feature aggregation (TFA) network then takes the tokens as input to enhance the RoI features by learning the long-range spatio-temporal context for the final stenosis prediction. The effectiveness of our method was validated through qualitative and quantitative experiments on 233 coronary artery XRA sequences. Our method achieves a high F1 score of 90.88%, outperforming 15 other state-of-the-art detection methods, which demonstrates that it can perform accurate stenosis detection from XRA images owing to its strong ability to aggregate spatio-temporal features.
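The detection F1 score reported above is conventionally computed by matching predicted boxes to ground-truth boxes at an IoU threshold. The abstract does not specify the matching rule, so the greedy scheme, the 0.5 threshold, and the function names below are assumptions for illustration only:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_f1(preds, gts, thr=0.5):
    """Greedy one-to-one matching at IoU >= thr; returns precision, recall, F1."""
    matched, tp = set(), 0
    for p in preds:
        for k, g in enumerate(gts):
            if k not in matched and box_iou(p, g) >= thr:
                matched.add(k)
                tp += 1
                break
    fp = len(preds) - tp
    fn = len(gts) - tp
    prec = tp / (tp + fp) if preds else 0.0
    rec = tp / (tp + fn) if gts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

Under this convention, an unmatched prediction counts as a false positive and an unmatched ground-truth stenosis as a false negative, so the reported 90.88% F1 balances both spurious and missed detections.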
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Xinyu Li
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China