1
Li Y, Huang XT, Feng YB, Fan QR, Wang DW, Lv FJ, He XQ, Li Q. Value of CT-Based Deep Learning Model in Differentiating Benign and Malignant Solid Pulmonary Nodules ≤ 8 mm. Acad Radiol 2024:S1076-6332(24)00305-2. PMID: 38806374. DOI: 10.1016/j.acra.2024.05.021.
Abstract
RATIONALE AND OBJECTIVES: We examined the effectiveness of computed tomography (CT)-based deep learning (DL) models in differentiating benign from malignant solid pulmonary nodules (SPNs) ≤ 8 mm.
MATERIALS AND METHODS: The study patients (n = 719) were divided into internal training, internal validation, and external validation cohorts; all had small SPNs and had undergone preoperative chest CT and surgical resection. We developed five DL models incorporating features of the nodule and five different peri-nodular regions with the Multiscale Dual Attention Network (MDANet) to differentiate benign and malignant SPNs. We selected the best-performing model, which was then compared to four conventional algorithms (VGG19, ResNet50, ResNeXt50, and DenseNet121). Furthermore, another five DL models were constructed using MDANet to distinguish benign tumors from inflammatory nodules, and the best-performing one was selected.
RESULTS: Model 4, which incorporated the nodule and a 15 mm peri-nodular region, best differentiated benign and malignant SPNs, with an area under the curve (AUC), accuracy, recall, precision, and F1-score of 0.730, 0.724, 0.711, 0.705, and 0.707, respectively, in the external validation cohort. Model 4 also performed better than the four conventional algorithms. Model 8, which incorporated the nodule and a 10 mm peri-nodular region, was the best model for distinguishing benign tumors from inflammatory nodules, with an AUC, accuracy, recall, precision, and F1-score of 0.871, 0.938, 0.863, 0.904, and 0.882, respectively, in the external validation cohort.
CONCLUSION: CT-based DL models built with MDANet can accurately discriminate small benign from malignant SPNs, and benign tumors from inflammatory nodules.
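The peri-nodular models above pair each nodule with a fixed surrounding margin (e.g., 10 or 15 mm). A minimal sketch of such a crop, assuming a known nodule center and voxel spacing; `crop_with_margin` is a hypothetical helper, not the authors' preprocessing code:

```python
import numpy as np

def crop_with_margin(volume, center, nodule_radius_mm, margin_mm, spacing_mm):
    """Crop a patch covering the nodule plus a peri-nodular margin.

    volume: 3-D array (z, y, x); center: voxel indices of the nodule;
    spacing_mm: per-axis voxel size in mm. The crop is clipped at the
    volume borders.
    """
    half_mm = nodule_radius_mm + margin_mm
    half_vox = np.ceil(half_mm / np.asarray(spacing_mm)).astype(int)
    lo = np.maximum(np.asarray(center) - half_vox, 0)
    hi = np.minimum(np.asarray(center) + half_vox + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

vol = np.zeros((64, 64, 64))
# 4 mm nodule radius + 15 mm margin at 1 mm isotropic spacing
patch = crop_with_margin(vol, (32, 32, 32), 4.0, 15.0, (1.0, 1.0, 1.0))
print(patch.shape)  # (39, 39, 39): 19 voxels each side of the center
```

Near the volume border the patch simply shrinks rather than being padded; a real pipeline would typically pad or resample to a fixed input size.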
Affiliation(s)
- Yuan Li
- Department of Thoracic Surgery, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (Y.L.); Department of Thoracic Surgery, National Cancer Center/ National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China (Y.L.)
- Xing-Tao Huang
- Department of Radiology, the Fifth People's Hospital of Chongqing, No. 24 Renji Road, Nan'an District, Chongqing, China (X.T.H.)
- Yi-Bo Feng
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Qian-Rui Fan
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Da-Wei Wang
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Fa-Jin Lv
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.)
- Xiao-Qun He
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.)
- Qi Li
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.)
2
Wang KN, Li SX, Bu Z, Zhao FX, Zhou GQ, Zhou SJ, Chen Y. SBCNet: Scale and Boundary Context Attention Dual-Branch Network for Liver Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2854-2865. PMID: 38427554. DOI: 10.1109/jbhi.2024.3370864.
Abstract
Automated segmentation of liver tumors in CT scans is pivotal for diagnosing and treating liver cancer, offering a valuable alternative to labor-intensive manual processes and supporting accurate and reliable clinical assessment. However, the inherent variability of liver tumors, coupled with the blurred boundaries they present in imaging, is a substantial obstacle to their precise segmentation. In this paper, we propose a novel dual-branch liver tumor segmentation model, SBCNet, to address these challenges effectively. Specifically, our proposed method introduces a contextual encoding module, which better identifies tumor variability using an advanced multi-scale adaptive kernel. Moreover, a boundary enhancement module is designed for the counterpart branch to sharpen boundary perception by incorporating contour learning with the Sobel operator. Finally, we propose a hybrid multi-task loss function that jointly accounts for tumor scale and boundary features, fostering interaction between the tasks of the two branches and further improving tumor segmentation. Experimental validation on the publicly available LiTS dataset demonstrates the practical efficacy of each module, with SBCNet yielding competitive results compared to other state-of-the-art methods for liver tumor segmentation.
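The boundary branch described above supervises contour learning with Sobel-derived edges. A minimal numpy sketch of turning a binary segmentation mask into a boundary target via the Sobel operator (illustrative only; SBCNet's actual boundary module is a learned network branch):

```python
import numpy as np

def sobel_boundary(mask):
    """Boundary map of a binary mask via 3x3 Sobel kernels."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    m = mask.astype(float)
    pad = np.pad(m, 1)                  # zero-pad so output keeps the input shape
    gx = np.zeros_like(m)
    gy = np.zeros_like(m)
    h, w = m.shape
    for i in range(3):                  # correlate with the two kernels
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return (np.hypot(gx, gy) > 0).astype(np.uint8)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1                      # a 4x4 square "tumor"
edge = sobel_boundary(mask)
print(edge[2, 2], edge[3, 3], edge[0, 0])  # 1 0 0: fires only on the contour
```

The gradient magnitude is zero in homogeneous regions (inside and outside the mask) and nonzero exactly along the contour, which is what makes it usable as a boundary supervision signal.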
3
Ao Y, Shi W, Ji B, Miao Y, He W, Jiang Z. MS-TCNet: An effective Transformer-CNN combined network using multi-scale feature learning for 3D medical image segmentation. Comput Biol Med 2024; 170:108057. PMID: 38301516. DOI: 10.1016/j.compbiomed.2024.108057.
Abstract
Medical image segmentation is a fundamental research problem in the field of medical image processing. Recently, Transformers have achieved highly competitive performance in computer vision, and many methods combining Transformers with convolutional neural networks (CNNs) have emerged for segmenting medical images. However, these methods cannot effectively capture the multi-scale features in medical images, even though the texture and contextual information embedded in those features is extremely beneficial for segmentation. To alleviate this limitation, we propose MS-TCNet, a novel Transformer-CNN combined network using multi-scale feature learning for three-dimensional (3D) medical image segmentation. The proposed model uses a shunted Transformer and a CNN to construct an encoder and pyramid decoder, allowing feature learning at six different scale levels, and captures multi-scale features with refinement at each level. Additionally, we propose a novel lightweight multi-scale feature fusion (MSFF) module that fully fuses the different-scale semantic features generated by the pyramid decoder for each segmentation class, resulting in more accurate segmentation output. We conducted experiments on three widely used 3D medical image segmentation datasets. The experimental results indicated that our method outperformed state-of-the-art medical image segmentation methods, suggesting its effectiveness, robustness, and superiority. Meanwhile, our model has fewer parameters and lower computational complexity than conventional 3D segmentation networks. The results confirmed that the model is capable of effective multi-scale feature learning and that the learned multi-scale features are useful for improving segmentation performance. We open-sourced our code, which can be found at https://github.com/AustinYuAo/MS-TCNet.
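The core idea of fusing decoder features from several scales can be shown with a toy example: upsample each scale to the finest resolution and combine. This is only a fixed-weight stand-in; the actual MSFF module learns its fusion from data:

```python
import numpy as np

def fuse_multiscale(features, weights):
    """Fuse per-scale 2-D feature maps by nearest-neighbour upsampling
    each one to the finest resolution and taking a weighted sum."""
    target = max(f.shape[0] for f in features)   # finest spatial size
    fused = np.zeros((target, target))
    for f, w in zip(features, weights):
        factor = target // f.shape[0]            # integer upsampling factor
        up = np.repeat(np.repeat(f, factor, axis=0), factor, axis=1)
        fused += w * up
    return fused

coarse = np.ones((4, 4))          # low-resolution, high-level features
fine = np.full((8, 8), 2.0)       # high-resolution, low-level features
out = fuse_multiscale([coarse, fine], [0.5, 0.5])
print(out.shape, out[0, 0])       # (8, 8) 1.5
```

In a real network the fixed weights would be replaced by learned per-class convolutions, and the upsampling by interpolation or transposed convolution.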
Affiliation(s)
- Yu Ao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
- Weili Shi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Bai Ji
- Department of Hepatobiliary and Pancreatic Surgery, The First Hospital of Jilin University, Changchun, 130061, China
- Yu Miao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Wei He
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
- Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, 528437, China
4
Wang J, Peng Y, Jing S, Han L, Li T, Luo J. A deep-learning approach for segmentation of liver tumors in magnetic resonance imaging using UNet++. BMC Cancer 2023; 23:1060. PMID: 37923988. PMCID: PMC10623778. DOI: 10.1186/s12885-023-11432-x.
Abstract
OBJECTIVE: Radiomic and deep learning studies based on magnetic resonance imaging (MRI) of liver tumors are gradually increasing, and manual segmentation of normal hepatic tissue and tumors has limitations.
METHODS: 105 patients diagnosed with hepatocellular carcinoma between Jan 2015 and Dec 2020 were retrospectively studied. The patients were divided into three sets: training (n = 83), validation (n = 11), and internal testing (n = 11). Additionally, 9 cases from the Cancer Imaging Archive were included as the external test set. Expert radiologists manually delineated all images using the arterial phase and T2WI sequences. Liver tumors and liver segments were segmented automatically with deep learning: a preliminary liver segmentation was performed using the UNet++ network, and the resulting liver mask was fed back into the UNet++ network as input to segment liver tumors. A threshold value was applied to reduce the false positivity rate in the liver tumor segmentation. To evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC), average false positivity rate (AFPR), and delineation time.
RESULTS: The average DSC of the liver was 0.91 and 0.92 in the validation and internal testing sets, respectively. In the validation set, manual and automatic delineation took 182.9 and 2.2 s, respectively; on average, manual and automatic delineation took 169.8 and 1.7 s. The average DSC of liver tumors was 0.612 and 0.687 in the validation and internal testing sets, respectively. The average times for manual and automatic delineation and the AFPR were 47.4 s, 2.9 s, and 1.4 in the internal testing set, and 29.5 s, 4.2 s, and 1.6 in the external test set.
CONCLUSION: UNet++ can automatically segment normal hepatic tissue and liver tumors from MR images. It provides a methodological basis for automated segmentation of liver tumors, improves delineation efficiency, and meets the data-extraction requirements of further radiomics and deep learning analyses.
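The Dice similarity coefficient reported throughout these studies measures volumetric overlap between a predicted mask and the reference delineation. A minimal sketch of the standard formula, 2|A ∩ B| / (|A| + |B|):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    eps guards against division by zero when both masks are empty.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1          # 4 reference pixels
pred = np.zeros((4, 4), dtype=np.uint8)
pred[2:4, 1:3] = 1        # 4 predicted pixels, 2 of them overlapping
print(round(dice(pred, gt), 3))  # 0.5
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why liver masks (DSC ≈ 0.9) score far higher here than the smaller, harder tumor masks (DSC ≈ 0.6-0.7).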
Affiliation(s)
- Jing Wang
- Department of General Medicine, The First Medical Center of Chinese PLA General Hospital, Beijing, 100039, China
- Yanyang Peng
- Department of Radiology, First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- Shi Jing
- Department of Oncology, Huaihe Hospital, Henan University, Kaifeng, 475000, China
- Lujun Han
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510030, China
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China
- Tian Li
- School of Basic Medicine, Fourth Military Medical University, Xi'an, 710032, China
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China
- Junpeng Luo
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China
- Academy for Advanced Interdisciplinary Studies, Henan University, Zhengzhou, 450046, China