Zhang D, Fan X, Kang X, Tian S, Xiao G, Yu L, Wu W. Class key feature extraction and fusion for 2D medical image segmentation. Med Phys 2024;51:1263-1276. [PMID: 37552522 DOI: 10.1002/mp.16636]
[Received: 11/13/2022] [Revised: 06/28/2023] [Accepted: 07/07/2023] [Indexed: 08/09/2023]
Abstract
BACKGROUND
Size variation, complex semantic environments, and high inter-class similarity in medical images often prevent deep learning models from achieving good segmentation performance.
PURPOSE
To overcome these problems and improve model segmentation performance and generalizability.
METHODS
We propose the key class feature reconstruction module (KCRM), which ranks channel weights and, for each class, selects the key features (KFs) that contribute most to the segmentation results. The KCRM also reconstructs all local features to establish dependencies between local features and the KFs. In addition, we propose the spatial gating module (SGM), which employs the KFs to generate two spatial maps that suppress irrelevant regions, strengthening the model's ability to locate semantic objects. Finally, we enable the model to adapt to size variation by diversifying the receptive field.
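As a rough illustration of the channel-weight ranking and spatial gating described above (a minimal sketch, not the authors' implementation: the function names, the use of global average pooling for channel weights, and the sigmoid gate are all assumptions), key-feature selection per class could look like this:

```python
import numpy as np

def select_key_features(feats, k):
    """Rank channels by a global-average-pooled weight and keep the top-k.

    feats: (C, H, W) feature map for one class branch.
    Returns the (k, H, W) key features and their channel indices.
    """
    weights = feats.mean(axis=(1, 2))        # one scalar weight per channel
    top = np.argsort(weights)[::-1][:k]      # indices of the k largest weights
    return feats[top], top

def spatial_gate(key_feats):
    """Collapse key features into a spatial map in [0, 1] that
    suppresses low-activation (irrelevant) regions."""
    m = key_feats.mean(axis=0)               # (H, W) mean over key channels
    return 1.0 / (1.0 + np.exp(-m))          # sigmoid gate

# Usage: gate the full feature map with the key-feature spatial map.
feats = np.random.randn(8, 4, 4)
key, idx = select_key_features(feats, k=3)
gated = feats * spatial_gate(key)            # broadcast gate over all channels
```

In a real network the channel weights would typically be learned (e.g., by an attention branch) rather than taken directly from pooled activations, and the gating would be applied inside the SGM; this sketch only shows the top-k selection and spatial suppression pattern.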
RESULTS
We integrate these modules into the class key feature extraction and fusion network (CKFFNet) and validate its performance on three public medical datasets: CHAOS, UW-Madison, and ISIC2017. The experimental results show that our method achieves better segmentation accuracy and generalizability than mainstream methods.
CONCLUSION
Through quantitative and qualitative experiments, the proposed modules improve segmentation results and enhance model generalizability, making the approach suitable for practical application and extension.