1.
Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2024. PMID: 38985185; DOI: 10.1007/s00330-024-10907-0.
Abstract
OBJECTIVES: The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study compared deep learning methods for pulmonary nodule detection and segmentation to identify methodological gaps and biases in the existing literature.
METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the authors searched PubMed, Embase, Web of Science Core Collection, and the Cochrane Library up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging, were used to assess the risk of bias. Model performance, data sources, and task-focus information were extracted and analyzed.
RESULTS: After screening, nine studies met the inclusion criteria. They were published between 2019 and 2023 and predominantly used public datasets, most commonly the Lung Image Database Consortium Image Collection and Image Database Resource Initiative (LIDC-IDRI) and Lung Nodule Analysis 2016. The studies focused on detection, segmentation, and related tasks, primarily using convolutional neural networks. Performance was evaluated with multiple metrics, including sensitivity and the Dice coefficient.
CONCLUSIONS: This review highlights the potential of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, external test datasets, and balancing model complexity against efficiency in future research.
CLINICAL RELEVANCE STATEMENT: Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility.
KEY POINTS: Deep learning shows potential in the detection and segmentation of pulmonary nodules. Methodological gaps and biases are present in the existing literature. Factors such as external validation and transparency affect clinical application.
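The two headline metrics of this review, sensitivity and the Dice coefficient, are simple to compute; a minimal sketch (function names are illustrative, not from any reviewed paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

def sensitivity(tp: int, fn: int) -> float:
    """Detection sensitivity (recall): TP / (TP + FN)."""
    return tp / (tp + fn)
```

For masks with half their positives overlapping, the Dice score is 0.5; a perfect match approaches 1.0.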
Affiliation(s)
- Chuan Gao, Linyu Wu, Wei Wu, Yichao Huang, Xinyue Wang, Zhichao Sun, Maosheng Xu, Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
2.
Su Y, Xia X, Sun R, Yuan J, Hua Q, Han B, Gong J, Nie S. Res-TransNet: a hybrid deep learning network for predicting pathological subtypes of lung adenocarcinoma in CT images. J Imaging Inform Med 2024. PMID: 38861071; DOI: 10.1007/s10278-024-01149-z.
Abstract
This study aimed to develop a CT-based hybrid deep learning network that predicts pathological subtypes of early-stage lung adenocarcinoma by integrating a residual network (ResNet) with a Vision Transformer (ViT). A total of 1411 pathologically confirmed ground-glass nodules (GGNs) retrospectively collected from two centers served as internal and external validation sets for model development. 3D ResNet and ViT frameworks were investigated to classify three subtypes of lung adenocarcinoma: invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma, and adenocarcinoma in situ. To further improve performance, four Res-TransNet models were proposed that integrate ResNet and ViT with different ensemble learning strategies. Two classification tasks were designed: predicting IAC versus non-IAC (Task 1) and classifying all three subtypes (Task 2). For Task 1, the optimal Res-TransNet model yielded areas under the receiver operating characteristic curve (AUC) of 0.986 and 0.933 on the internal and external validation sets, significantly higher than those of the standalone ResNet and ViT models (p < 0.05). For Task 2, the optimal fusion model achieved an accuracy of 68.3% and a weighted F1 score of 66.1% on the external validation set. The results demonstrate that Res-TransNet can significantly increase classification performance relative to the two base models and has the potential to assist radiologists in precision diagnosis.
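The abstract does not specify the ensemble strategies, but one common way to fuse two classifiers, probability-level soft voting, can be sketched as follows (a hypothetical illustration, not the Res-TransNet implementation):

```python
import numpy as np

def soft_vote(resnet_probs: np.ndarray, vit_probs: np.ndarray,
              weight: float = 0.5) -> np.ndarray:
    """Weighted average of per-class probabilities from two models,
    then argmax over classes. Shapes: (n_samples, n_classes)."""
    fused = weight * resnet_probs + (1.0 - weight) * vit_probs
    return fused.argmax(axis=1)
```

Other strategies (stacking, learned gating) replace the fixed `weight` with a trainable combiner.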
Affiliation(s)
- Yue Su, Rong Sun, Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Xianwu Xia, Jianjun Yuan, Qianjin Hua: Department of Oncology Intervention, Municipal Hospital Affiliated of Taizhou University, Taizhou, Zhejiang 318000, China
- Baosan Han: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; Department of Breast Surgery, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200092, China
- Jing Gong: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
3.
Pan Z, Hu G, Zhu Z, Tan W, Han W, Zhou Z, Song W, Yu Y, Song L, Jin Z. Predicting invasiveness of lung adenocarcinoma at chest CT with deep learning ternary classification models. Radiology 2024; 311:e232057. PMID: 38591974; DOI: 10.1148/radiol.232057.
Abstract
Background: Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification.
Purpose: To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs.
Materials and Methods: In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter dataset of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications) that integrates two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external dataset of pGGNs imaged between December 2019 and January 2021. Diagnostic performance, including accuracy, specificity, and sensitivity, was assessed, and the χ2 test was used to compare model performance in subgroups stratified by clinical confounders.
Results: A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal test (n = 966) sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (p < .001). For classifying minimally invasive adenocarcinoma, accuracy was 85% versus 79%, sensitivity 75% versus 63%, and specificity 89% versus 85% for the model with adjudication (model 6) versus the model without it (model 3). Model 6 also showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%).
Conclusion: Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Sohn and Fields in this issue.
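The adjudication idea, binary "specialist" models sequentially resolving a ternary model's discordant calls, can be illustrated with a toy rule set. The labels and rules below are hypothetical and only sketch the concept, not the study's actual logic:

```python
def adjudicate(ternary: str, binary_invasive: bool, binary_preinvasive: bool) -> str:
    """Toy sequential adjudication over three classes:
    'pre' (preinvasive), 'mia' (minimally invasive), 'iac' (invasive).
    Binary specialist verdicts override the ternary call when they disagree."""
    if binary_invasive and ternary != "iac":
        return "iac"            # invasive-vs-rest specialist overrides
    if binary_preinvasive and ternary == "iac":
        return "mia"            # conflicting evidence resolves to the middle class
    return ternary
```

When all three models agree, the ternary prediction passes through unchanged.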
Affiliation(s)
- Zhengsong Pan, Ge Hu, Zhenchen Zhu, Weixiong Tan, Wei Han, Zhen Zhou, Wei Song, Yizhou Yu, Lan Song, Zhengyu Jin
- From the Department of Radiology (Z.P., Z. Zhu, W.S., L.S., Z.J.), Medical Research Center (G.H.), State Key Laboratory of Complex Severe and Rare Disease (G.H.), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuaifuyuan, Dongcheng District, Beijing 100730, China; 4 + 4 Medical Doctor Program (Z.P., Z. Zhu), Department of Epidemiology and Health Statistics (W.H.), Institute of Basic Medicine Sciences (W.H.), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Deepwise AI Laboratory, Beijing Deepwise & League of PhD Technology, Beijing, China (W.T., Z. Zhou, Y.Y.); and Department of Computer Science, The University of Hong Kong, Hong Kong, China (Y.Y.)
4.
Qi K, Wang K, Wang X, Zhang YD, Lin G, Zhang X, Liu H, Huang W, Wu J, Zhao K, Liu J, Li J, Zhang X. Lung-PNet: an automated deep learning model for the diagnosis of invasive adenocarcinoma in pure ground-glass nodules on chest CT. AJR Am J Roentgenol 2024; 222:e2329674. PMID: 37493322; DOI: 10.2214/ajr.23.29674.
Abstract
BACKGROUND: Pure ground-glass nodules (pGGNs) on chest CT representing invasive adenocarcinoma (IAC) warrant lobectomy with lymph node resection. For pGGNs representing other entities, close follow-up or sublobar resection without node dissection may be appropriate.
OBJECTIVE: To develop and validate an automated deep learning model for differentiating pGGNs on chest CT representing IAC from those representing atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), or minimally invasive adenocarcinoma (MIA).
METHODS: This retrospective study included 402 patients (283 women, 119 men; mean age, 53.2 years) with a total of 448 pGGNs on noncontrast chest CT that were resected from January 2019 to June 2022 and histologically diagnosed as AAH (n = 29), AIS (n = 83), MIA (n = 235), or IAC (n = 101). Lung-PNet, a 3D deep learning model, was developed for automatic segmentation and classification (probability of IAC vs other entities) of pGGNs on CT. Nodules resected from January 2019 to December 2021 were randomly allocated to training (n = 327) and internal test (n = 82) sets; nodules resected from January 2022 to June 2022 formed a holdout test set (n = 39). Segmentation was assessed with Dice coefficients against radiologists' manual segmentations. Classification was assessed by ROC AUC and precision-recall AUC (PR AUC) and compared with four readers (three radiologists, one surgeon). The code used is publicly available (https://github.com/XiaodongZhang-PKUFH/Lung-PNet.git).
RESULTS: In the holdout test set, Dice coefficients for segmentation of IACs and of other lesions were 0.860 and 0.838, and ROC AUC and PR AUC for classification as IAC were 0.911 and 0.842. At a threshold probability of 50.0% or greater for prediction of IAC, Lung-PNet had sensitivity, specificity, accuracy, and F1 score of 50.0%, 92.0%, 76.9%, and 60.9% in the holdout test set. Accuracy and F1 score (p values vs Lung-PNet) for individual readers were as follows: reader 1, 51.3% (p = .02) and 48.6% (p = .008); reader 2, 79.5% (p = .75) and 75.0% (p = .10); reader 3, 66.7% (p = .35) and 68.3% (p < .001); reader 4, 71.8% (p = .48) and 42.1% (p = .18).
CONCLUSION: Lung-PNet had robust performance for segmenting and classifying (IAC vs other entities) pGGNs on chest CT.
CLINICAL IMPACT: This automated deep learning tool may help guide selection of surgical strategies for pGGN management.
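The operating-point metrics reported above all follow from a confusion matrix at the chosen 50% cutoff; a minimal sketch (illustrative, not the study's evaluation code):

```python
def binary_metrics(probs, labels, threshold=0.5):
    """Sensitivity, specificity, accuracy, and F1 at a fixed probability cutoff.
    `probs` are predicted probabilities of the positive class; `labels` are bools."""
    preds = [p >= threshold for p in probs]
    tp = sum(p and y for p, y in zip(preds, labels))
    tn = sum((not p) and (not y) for p, y in zip(preds, labels))
    fp = sum(p and (not y) for p, y in zip(preds, labels))
    fn = sum((not p) and y for p, y in zip(preds, labels))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(labels)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sens, spec, acc, f1
```

Moving the threshold trades sensitivity against specificity along the ROC curve.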
Affiliation(s)
- Kang Qi, Gang Lin, Xining Zhang, Haibo Liu, Weiming Huang, Jian Li: Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Kexin Wang: School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Xiaoying Wang, Jingyun Wu, Kai Zhao, Jing Liu, Xiaodong Zhang: Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
- Yu-Dong Zhang: Department of Radiology, First Affiliated Hospital of Nanjing Medical University, Nanjing, China
5.
Li R, Zhou L, Wang Y, Shan F, Chen X, Liu L. A graph neural network model for the diagnosis of lung adenocarcinoma based on multimodal features and an edge-generation network. Quant Imaging Med Surg 2023; 13:5333-5348. PMID: 37581061; PMCID: PMC10423350; DOI: 10.21037/qims-23-2.
Abstract
Background: Lung cancer is a highly lethal global disease, and early screening considerably improves the 5-year survival rate. Multimodal features in early screening imaging are an important part of predicting lung adenocarcinoma, and a diagnostic model based on multimodal features is a clear clinical need. Through practice and investigation, we found that graph neural networks (GNNs) are excellent platforms for multimodal feature fusion and that missing data can be completed using an edge-generation network. We therefore propose a new lung adenocarcinoma multiclassification model based on multimodal features and an edge-generation network.
Methods: The dataset of 338 cases was divided into training and test sets at an 80%/20% ratio through 5-fold cross-validation, with the same class distribution in both sets. First, regions of interest (ROIs) cropped from computed tomography (CT) images were fed separately into convolutional neural networks (CNNs) and radiomics processing platforms, and the two outputs were passed to a graph embedding representation network to obtain fused feature vectors. A graph database was then built from the clinical and semantic features and supplemented by the edge-generation network, with the fused feature vectors used as node inputs. This makes clear where information transmission in the GNN takes place and improves the interpretability of the model. Finally, the nodes were classified using GNNs.
Results: On our dataset, the proposed method achieved superior results compared with traditional methods and was comparable with state-of-the-art methods for lung nodule classification: accuracy (ACC) of 66.26% (±4.46%), area under the curve (AUC) of 75.86% (±1.79%), F1-score of 64.00% (±3.65%), and Matthews correlation coefficient (MCC) of 48.40% (±5.07%). The model with the edge-generation network consistently outperformed the model without it.
Conclusions: The experiments demonstrate that, with appropriate data construction methods, GNNs can outperform traditional image processing methods in CT-based medical image classification. Our model is also more interpretable, as it employs subjective clinical and semantic features as the data construction approach, helping doctors better leverage human-computer interaction.
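MCC, the least familiar of the metrics reported here, summarizes the whole confusion matrix in a single value in [-1, 1]. A minimal sketch of the binary form (the multiclass generalization used for this three-class problem builds on the same idea):

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from a binary confusion matrix.
    +1 is perfect prediction, 0 is chance level, -1 is total disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # define as 0 when any marginal is empty
```

Unlike accuracy, MCC stays near zero for a classifier that ignores a minority class, which is why it is often preferred on imbalanced medical datasets.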
Affiliation(s)
- Ruihao Li, Xinrong Chen: Academy for Engineering & Technology, Fudan University, Shanghai, China
- Lingxiao Zhou: Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
- Yunpeng Wang: Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Fei Shan: Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Lei Liu: Academy for Engineering & Technology, Fudan University, Shanghai, China; Intelligent Medicine Institute, Fudan University, Shanghai, China; Shanghai Institute of Stem Cell Research and Clinical Translation, Shanghai, China
6.
Özdemir Ö, Sönmez EB. Attention mechanism and mixup data augmentation for classification of COVID-19 computed tomography images. J King Saud Univ Comput Inf Sci 2022; 34:6199-6207. PMID: 38620953; PMCID: PMC8280602; DOI: 10.1016/j.jksuci.2021.07.005.
Abstract
Coronavirus disease is spreading quickly all over the world, and the emergency is still not under control. Recent achievements of deep learning suggest using deep convolutional neural networks to implement a computer-aided diagnostic system for automatic classification of COVID-19 CT images. In this paper, we propose a feature-wise attention layer to enhance the discriminative features obtained by convolutional networks, and we further improve the network's performance with the mixup data augmentation technique. This work compares the proposed attention-based model against stacked attention networks, and traditional versus mixup data augmentation. The feature-wise attention extension not only outperforms the stacked attention variants but also achieves remarkable improvements over the baseline convolutional neural networks: a ResNet50 architecture extended with a feature-wise attention layer obtained a 95.57% accuracy score, which, to the best of our knowledge, sets the state of the art on the challenging COVID-CT dataset.
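Mixup itself is simple: each synthetic training example is a convex combination of two real examples and of their one-hot labels, with the mixing weight drawn from a Beta(α, α) distribution. A minimal sketch (illustrative, not the authors' implementation):

```python
import random

def mixup(x1, y1, x2, y2, alpha: float = 0.2):
    """Mixup augmentation: blend two samples and their one-hot labels
    with a weight lam ~ Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Small α concentrates lam near 0 or 1 (mostly-original samples); α = 1 makes the blend uniform.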
Affiliation(s)
- Özgür Özdemir
- Computer Engineering Department, Istanbul Bilgi University, Turkey
7.
Astaraki M, Smedby Ö, Wang C. Prior-aware autoencoders for lung pathology segmentation. Med Image Anal 2022; 80:102491. DOI: 10.1016/j.media.2022.102491.
8.
Zheng B, Yang D, Zhu Y, Liu Y, Hu J, Bai C. 3D gray density coding feature for benign-malignant pulmonary nodule classification on chest CT. Med Phys 2021; 48:7826-7836. PMID: 34655238; DOI: 10.1002/mp.15298.
Abstract
PURPOSE: Early detection is key to reducing lung cancer-related deaths, and computer-aided detection systems (CADs) can help radiologists make an early diagnosis. In this paper, we propose a novel 3D gray density coding (3D GDC) feature and fuse it with extracted geometric features; the fused feature and a random forest are used for benign-malignant pulmonary nodule classification on chest CT.
METHODS: First, a dictionary model is created to acquire a codebook, which is used to obtain feature descriptors and comprises a 3D block database (BD) and clustering centers of a distance matrix. The 3D BD is balanced and randomly sampled from benign and malignant pulmonary nodules in the training data; the clustering centers are obtained by clustering the matrix of pairwise distances between blocks in the 3D BD. A feature descriptor is then obtained by coding each pulmonary nodule with the codebook, and the 3D GDC feature is the histogram of that descriptor. Second, geometric features are extracted and fused with the 3D GDC feature. Finally, a random forest classifies nodules as benign or malignant using the fused feature.
RESULTS: We verified the effectiveness of our method on the public LIDC-IDRI dataset and the private ZSHD dataset. On LIDC-IDRI, compared with other state-of-the-art methods, we achieved satisfactory results of 93.17 ± 1.94% accuracy and 97.53 ± 1.62% AUC. The private ZSHD dataset contains 238 lung nodules from 203 patients; on it, our method achieved 90.0% accuracy and 93.15% AUC.
CONCLUSIONS: Our method can provide doctors with more accurate benign-malignant classification results for auxiliary diagnosis, and it is more interpretable than 3D CNN methods, offering doctors more supporting information.
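The coding step is a bag-of-words construction: each 3D block descriptor is assigned to its nearest codeword, and the nodule is represented by the histogram of assignments. A minimal sketch of that step (illustrative names and 2D toy descriptors, not the paper's code):

```python
import numpy as np

def gdc_histogram(block_descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Assign each block descriptor to its nearest codeword (Euclidean
    distance) and return the normalized histogram of assignments."""
    # pairwise distances, shape (n_blocks, n_codewords)
    d = np.linalg.norm(block_descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram (concatenated with geometric features) is what the random forest consumes.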
Affiliation(s)
- BingBing Zheng: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Dawei Yang: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, China
- Yu Zhu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Yatong Liu: School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Jie Hu: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China
- Chunxue Bai: Department of Pulmonary Medicine, Shanghai Respiratory Research Institute, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, China
|
9
|
Gong J, Liu J, Li H, Zhu H, Wang T, Hu T, Li M, Xia X, Hu X, Peng W, Wang S, Tong T, Gu Y. Deep Learning-Based Stage-Wise Risk Stratification for Early Lung Adenocarcinoma in CT Images: A Multi-Center Study. Cancers (Basel) 2021; 13:cancers13133300. [PMID: 34209366 PMCID: PMC8269183 DOI: 10.3390/cancers13133300] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/28/2021] [Accepted: 06/28/2021] [Indexed: 12/21/2022] Open
Abstract
Simple Summary: Prediction of the malignancy and invasiveness of ground-glass nodules (GGNs) from computed tomography images is a crucial task for radiologists in the risk stratification of early-stage lung adenocarcinoma. To address this challenge, a two-stage deep neural network (DNN) was developed based on images collected from four centers. A multi-reader multi-case observer study was conducted to evaluate the model's capability. The performance of our model was comparable to, or even better than, that of senior radiologists, with average area under the curve values of 0.76 and 0.95 for the two tasks, respectively. The findings suggest (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance compared with senior radiologists, and (3) low image resolution reduced the model's performance in predicting the risks of GGNs.
Abstract: This study aims to develop a deep neural network (DNN)-based two-stage risk stratification model for early lung adenocarcinomas in CT images and to investigate its performance compared with practicing radiologists. A total of 2393 GGNs were retrospectively collected from 2105 patients in four centers. All pathologic results of the GGNs were obtained from surgically resected specimens. A two-stage deep neural network was developed based on a 3D residual network and an atrous convolution module to diagnose benign and malignant GGNs (Task 1) and to classify malignant GGNs as invasive adenocarcinoma (IA) or non-IA (Task 2). A multi-reader multi-case observer study with six board-certified radiologists (average experience 11 years, range 2-28 years) was conducted to evaluate the model's capability. The DNN yielded area under the receiver operating characteristic curve (AUC) values of 0.76 ± 0.03 (95% confidence interval (CI): 0.69-0.82) and 0.96 ± 0.02 (95% CI: 0.92-0.98) for Task 1 and Task 2, equivalent to or higher than radiologists in the senior group, whose average AUC values were 0.76 and 0.95, respectively (p > 0.05). With the CT slice thickness increasing from 1.15 ± 0.36 mm to 1.73 ± 0.64 mm, DNN performance decreased by 0.08 and 0.22 for the two tasks. The results demonstrated (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution decreased model performance in predicting the risks of GGNs. Once tested prospectively in clinical practice, the DNN could have the potential to assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
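The two-stage design described in this abstract is a cascade: a first classifier separates benign from malignant GGNs, and a second classifier, trained only on malignant cases, separates IA from non-IA. The sketch below shows that control flow only; the paper's 3D residual network with atrous convolutions is replaced here by logistic regression on synthetic features, and all names and data are illustrative.

```python
# Hedged sketch of a two-stage (cascade) risk-stratification classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.random((200, 16))              # synthetic nodule feature vectors
y_malig = rng.integers(0, 2, 200)      # Task 1 labels: 0 = benign, 1 = malignant
y_ia = rng.integers(0, 2, 200)         # Task 2 labels: 0 = non-IA, 1 = IA

# Stage 1 trains on all nodules; stage 2 trains only on the malignant subset,
# mirroring the task split in the abstract.
stage1 = LogisticRegression().fit(X, y_malig)
malig = y_malig == 1
stage2 = LogisticRegression().fit(X[malig], y_ia[malig])

def stratify(x):
    """Route a single feature vector through the cascade."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return "benign"
    return "IA" if stage2.predict(x)[0] == 1 else "non-IA"

print(stratify(X[0]))
```

One consequence of this design, visible even in the sketch, is that Task 2 errors compound on Task 1 errors: a nodule misclassified as benign never reaches the invasiveness stage, which is one reason the two tasks are evaluated with separate AUCs.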
Affiliation(s)
- Jing Gong: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Jiyu Liu: Department of Radiology, Shanghai Pulmonary Hospital, 507 Zheng Min Road, Shanghai 200433, China
- Haiming Li: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Hui Zhu: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Tingting Wang: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Tingdan Hu: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Menglei Li: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Xianwu Xia: Department of Radiology, Municipal Hospital Affiliated to Taizhou University, Taizhou 318000, China
- Xianfang Hu: Department of Radiology, Huzhou Central Hospital Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou 313000, China
- Weijun Peng: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Shengping Wang: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Tong Tong: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Correspondence: (S.W.); (T.T.); (Y.G.); Tel.: +86-13818521975 (S.W.); +86-18017312912 (T.T.); +86-18017312040 (Y.G.)
|