1
Li Y, Huang XT, Feng YB, Fan QR, Wang DW, Lv FJ, He XQ, Li Q. Value of CT-Based Deep Learning Model in Differentiating Benign and Malignant Solid Pulmonary Nodules ≤ 8 mm. Acad Radiol 2024:S1076-6332(24)00305-2. [PMID: 38806374] [DOI: 10.1016/j.acra.2024.05.021]
Abstract
RATIONALE AND OBJECTIVES We examined the effectiveness of computed tomography (CT)-based deep learning (DL) models in differentiating benign and malignant solid pulmonary nodules (SPNs) ≤ 8 mm. MATERIALS AND METHODS The study patients (n = 719) were divided into internal training, internal validation, and external validation cohorts; all had small SPNs and had undergone preoperative chest CT and surgical resection. We developed five DL models incorporating the nodule and five different peri-nodular regions with the Multiscale Dual Attention Network (MDANet) to differentiate benign and malignant SPNs, and selected the best-performing model, which was then compared with four conventional algorithms (VGG19, ResNet50, ResNeXt50, and DenseNet121). Another five DL models were then constructed with MDANet to distinguish benign tumors from inflammatory nodules, and the best-performing one was selected. RESULTS Model 4, which incorporated the nodule and a 15 mm peri-nodular region, best differentiated benign and malignant SPNs, with an area under the curve (AUC), accuracy, recall, precision, and F1-score of 0.730, 0.724, 0.711, 0.705, and 0.707 in the external validation cohort; it also outperformed the four conventional algorithms. Model 8, which incorporated the nodule and a 10 mm peri-nodular region, was the best model for distinguishing benign tumors from inflammatory nodules, with an AUC, accuracy, recall, precision, and F1-score of 0.871, 0.938, 0.863, 0.904, and 0.882 in the external validation cohort. CONCLUSION CT-based DL models built with MDANet can accurately discriminate between small benign and malignant SPNs and between benign tumors and inflammatory nodules.
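For readers who want to reproduce the peri-nodular input described above, the sketch below shows one way to build a "nodule + 15 mm" region from a segmentation mask using a physical-distance transform. It is an illustrative sketch only, not the authors' MDANet pipeline; the volume, mask, and voxel spacing are placeholder assumptions.

```python
# Illustrative sketch (not the authors' MDANet code): build a "nodule + 15 mm"
# peri-nodular region from a nodule mask. Volume, mask and spacing are placeholders.
import numpy as np
from scipy import ndimage

volume = np.random.randn(64, 128, 128)               # placeholder CT volume
nodule_mask = np.zeros(volume.shape, dtype=bool)
nodule_mask[30:34, 60:66, 60:66] = True              # placeholder small solid nodule

spacing_mm = (1.0, 0.7, 0.7)                         # (z, y, x) voxel spacing in mm
margin_mm = 15.0

# Physical distance (mm) from each voxel to the nodule, honoring voxel spacing;
# keeping voxels within `margin_mm` gives the nodule plus its 15 mm surroundings.
dist_mm = ndimage.distance_transform_edt(~nodule_mask, sampling=spacing_mm)
perinodular_mask = dist_mm <= margin_mm

roi = np.where(perinodular_mask, volume, volume.min())   # suppress everything else
print(int(nodule_mask.sum()), int(perinodular_mask.sum()), roi.shape)
```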
Affiliation(s)
- Yuan Li
- Department of Thoracic Surgery, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (Y.L.); Department of Thoracic Surgery, National Cancer Center/ National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China (Y.L.)
- Xing-Tao Huang
- Department of Radiology, the Fifth People's Hospital of Chongqing, No. 24 Renji Road, Nan'an District, Chongqing, China (X.T.H.)
- Yi-Bo Feng
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Qian-Rui Fan
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Da-Wei Wang
- Institute of Research, Infervision Medical Technology Co., Ltd, 25F Building E, Yuanyang International Center, Chaoyang District, Beijing, China (Y.B.F., Q.R.F., D.W.W.)
- Fa-Jin Lv
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.)
- Xiao-Qun He
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.)
- Qi Li
- Department of Radiology, the First Affiliated Hospital of Chongqing Medical University, No.1 Youyi Road, Yuzhong District, Chongqing, China (F.J.L., X.Q.H., Q.L.).
2
Zhang X, Liu B, Liu K, Wang L. The diagnosis performance of convolutional neural network in the detection of pulmonary nodules: a systematic review and meta-analysis. Acta Radiol 2023;64:2987-2998. [PMID: 37743663] [DOI: 10.1177/02841851231201514]
Abstract
BACKGROUND Pulmonary nodules are an early imaging indication of lung cancer, and their early detection can improve the prognosis of lung cancer. The convolutional neural network (CNN), a machine learning approach applied to computed tomography (CT) imaging data, improves diagnostic accuracy, but reported results have been inconsistent. PURPOSE To evaluate the diagnostic performance of CNNs in assisting the detection of pulmonary nodules on CT images. MATERIAL AND METHODS The PubMed, Cochrane Library, Web of Science, Elsevier, CNKI, and Wanfang databases were systematically searched for studies published before 30 April 2023. Two reviewers screened and checked the full text of potentially eligible articles. The reference standard was joint diagnosis by experienced physicians. Pooled sensitivity, specificity, and the area under the summary receiver operating characteristic curve (AUC) were calculated with a random-effects model, and meta-regression analysis was performed to explore potential sources of heterogeneity. RESULTS Twenty-six studies were included in this meta-analysis, involving 2,391,702 regions of interest consisting of small segmented image patches. The pooled sensitivity and specificity of the CNN models in detecting pulmonary nodules were 0.93 and 0.95, respectively, the pooled diagnostic odds ratio was 291, and the AUC was 0.98. There was heterogeneity in sensitivity and specificity among the studies; data sources, preprocessing methods, reconstruction slice thickness, population source, and locality might contribute to this heterogeneity. CONCLUSION The CNN model can be a valuable diagnostic tool with high accuracy in detecting pulmonary nodules.
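As a quick plausibility check on the numbers above (not the random-effects pooling itself), the diagnostic odds ratio implied by a sensitivity/specificity pair can be computed directly; because each metric is pooled separately, the reported DOR of 291 need not equal the value implied by the pooled sensitivity and specificity.

```python
# Illustrative arithmetic only: DOR implied by a given sensitivity/specificity.
# The meta-analysis pools each metric with a random-effects model, so its pooled
# DOR (291) is estimated separately and need not match this back-of-the-envelope value.
sens, spec = 0.93, 0.95
positive_lr = sens / (1 - spec)          # likelihood ratio of a positive result
negative_lr = (1 - sens) / spec          # likelihood ratio of a negative result
dor = positive_lr / negative_lr          # = (sens/(1-sens)) * (spec/(1-spec))
print(f"implied DOR ~ {dor:.0f}")        # ~ 252
```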
Affiliation(s)
- Xinyue Zhang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
- Bo Liu
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
- Kefu Liu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Gusu School, Nanjing Medical University, Suzhou, China
- Lina Wang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
3
Zhang X, Dong X, Saripan MIB, Du D, Wu Y, Wang Z, Cao Z, Wen D, Liu Y, Marhaban MH. Deep learning PET/CT-based radiomics integrates clinical data: A feasibility study to distinguish between tuberculosis nodules and lung cancer. Thorac Cancer 2023. [PMID: 37183577] [DOI: 10.1111/1759-7714.14924]
Abstract
BACKGROUND Radiomic diagnosis models generally consider only a single dimension of information, which limits their diagnostic accuracy and reliability. Integrating multiple dimensions of information into a deep learning model has the potential to improve its diagnostic capability. The purpose of this study was to evaluate the performance of deep learning models in distinguishing tuberculosis (TB) nodules from lung cancer (LC) based on deep learning features, radiomic features, and clinical information. METHODS Positron emission tomography (PET) and computed tomography (CT) image data from 97 patients with LC and 77 patients with TB nodules were collected. One hundred radiomic features were extracted from both PET and CT imaging using the pyradiomics platform, and 2048 deep learning features were obtained with a residual neural network. Four models were built: a traditional machine learning model with radiomic features as input (traditional radiomics), a deep learning model with image features as input (deep convolutional neural network [DCNN]), a deep learning model with radiomic and deep learning features as inputs (radiomics-DCNN), and a deep learning model with radiomic features, deep learning features, and clinical information as inputs (integrated model). The models were evaluated using area under the curve (AUC), sensitivity, accuracy, specificity, and F1-score. RESULTS In classifying TB nodules and LC, the integrated model achieved an AUC of 0.84 (0.82-0.88), sensitivity of 0.85 (0.80-0.88), and specificity of 0.84 (0.83-0.87), outperforming the other models. CONCLUSION The integrated model was the best classification model for the diagnosis of TB nodules and solid LC.
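A minimal sketch of the feature-fusion idea described in this abstract, assuming pyradiomics for the handcrafted features and a torchvision ResNet-50 backbone for the 2048-dimensional deep features; the file paths, input patch, clinical vector, and fusion step are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): fuse handcrafted radiomic
# features, 2048-D deep features from a ResNet-50 backbone, and clinical data.
# File paths, the input patch and the clinical vector are placeholder assumptions.
import numpy as np
import torch
import torchvision.models as models
from radiomics import featureextractor

# 1) Handcrafted radiomic features from an image + nodule mask (e.g. NIfTI files).
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute("ct_image.nii.gz", "nodule_mask.nii.gz")
radiomic_vec = np.array(
    [v for k, v in features.items() if not k.startswith("diagnostics")],
    dtype=float,
)

# 2) 2048-D deep features from a ResNet-50 with the final fully connected layer removed.
resnet = models.resnet50(weights="DEFAULT")           # pretrained backbone
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
backbone.eval()
patch = torch.rand(1, 3, 224, 224)                    # placeholder preprocessed nodule patch
with torch.no_grad():
    deep_vec = backbone(patch).flatten(1).squeeze(0).numpy()   # shape (2048,)

# 3) Concatenate radiomic, deep and clinical features for a downstream classifier.
clinical_vec = np.array([63.0, 1.0])                  # e.g. age, sex (illustrative)
fused = np.concatenate([radiomic_vec, deep_vec, clinical_vec])
print(fused.shape)
```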
Affiliation(s)
- Xiaolei Zhang
- Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Xianling Dong
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Hebei International Research Center of Medical Engineering and Hebei Provincial Key Laboratory of Nerve Injury and Repair, Chengde Medical University, Chengde, Hebei, China
- Dongyang Du
- School of Biomedical Engineering and Guangdong Province Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Yanjun Wu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Zhongxiao Wang
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Zhendong Cao
- Department of Radiology, the Affiliated Hospital of Chengde Medical University, Chengde, China
- Dong Wen
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
- Yanli Liu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
4
Computed Tomography Imaging Features of Lung Cancer under Artificial Intelligence Algorithm and Its Correlation with Pathology. Contrast Media Mol Imaging 2023. [DOI: 10.1155/2023/9303688]
Abstract
This study investigated the relationship between the detection performance of an artificial intelligence (AI) algorithm and pathology in chest computed tomography (CT) images. A new pulmonary nodule (PN) detection algorithm was designed and developed based on a three-dimensional (3D) connected-domain algorithm. An appropriate grayscale threshold was selected, the CT images were converted into black-and-white (binary) images, and images without useful information were removed. The remaining lung images were then stacked into a 3D binary pixel matrix, connected components were labeled and counted, and the size, character, and location of each PN could be measured and determined. PN cases from a self-built database of patients who had undergone chest multislice spiral CT were retrospectively reviewed, and 150 cases were randomly selected using SPSS 22.0. The images were processed with the algorithm and compared with the PNs detected by radiologists, and the detection results were tallied. There were 560 PNs in total, 312 malignant and 248 benign. The algorithm detected 498 nodules, of which 478 were detected accurately, for a sensitivity of 95.98%; the radiologists detected 424 nodules, of which 364 were accurate, for a sensitivity of 85.85%. Compared with the radiologists, the algorithm detected solid and ground-glass nodules more accurately, and its detection of pleura-attached, peripheral, central, and hilar nodules was also more accurate, with statistically significant differences. The malignancy, size, character, and location of different nodules could be accurately determined from CT images with this algorithm, providing important support for pathological research on lung cancer and more accurate prediction of the future development of PNs in patients.
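For illustration, the connected-domain (connected-component) step described above can be approximated with SciPy; the grayscale threshold and the random placeholder volume below are assumptions, not the paper's implementation.

```python
# Illustrative sketch of 3D connected-domain labeling (not the paper's algorithm):
# threshold a CT volume into a binary matrix, label 3D connected components,
# and report the size and centroid of the largest candidates.
import numpy as np
from scipy import ndimage

volume_hu = np.random.randint(-1000, 400, size=(64, 256, 256))  # placeholder volume (HU)

binary = volume_hu > -300                      # grayscale threshold -> black-and-white volume
labels, n_components = ndimage.label(binary)   # 3D connected-component labeling

sizes = ndimage.sum(binary, labels, index=range(1, n_components + 1))
centroids = ndimage.center_of_mass(binary, labels, range(1, n_components + 1))

for idx in np.argsort(sizes)[::-1][:5]:        # five largest components
    z, y, x = centroids[idx]
    print(f"label {idx + 1}: {int(sizes[idx])} voxels, "
          f"centroid (z, y, x) = ({z:.1f}, {y:.1f}, {x:.1f})")
```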
5
Deep Learning-Based Image Conversion Improves the Reproducibility of Computed Tomography Radiomics Features: A Phantom Study. Invest Radiol 2022;57:308-317. [PMID: 34839305] [DOI: 10.1097/rli.0000000000000839]
Abstract
OBJECTIVES This study aimed to evaluate the usefulness of deep learning-based image conversion for improving the reproducibility of computed tomography (CT) radiomics features. MATERIALS AND METHODS The study was conducted using an abdominal phantom with liver nodules. We developed an image conversion algorithm using a residual feature aggregation network to reproduce radiomics features across CT images acquired with various CT protocols and reconstruction kernels. External validation was performed using images from different scanners, covering 8 different protocols. To evaluate the variability of radiomics features, regions of interest (ROIs) were drawn targeting the liver parenchyma, vessels, paraspinal area, and liver nodules, and 18 first-order, 68 second-order, and 688 wavelet radiomics features were extracted. Measurement variability was assessed with the concordance correlation coefficient (CCC) relative to the ground-truth image. RESULTS In the ROI-based analysis, the CCC improved in 83.3% of comparisons (80/96; 4 ROIs with 3 categories of radiomics features and 8 protocols) for synthetic images compared with the original images, and 56 CCC pairs showed a significant increase after image synthesis. In the radiomics feature-based analysis, 62.0% (3838 of 6192; 774 radiomics features with 8 protocols) of features showed an increased CCC after image synthesis, with a significant increase in 26.9% (1663 of 6192). In particular, first-order features (79.9%, 115/144) showed greater improvement in reproducibility than second-order (59.9%, 326/544) or wavelet features (61.7%, 3397/5504). CONCLUSIONS A deep learning model for image conversion can improve the reproducibility of radiomics features across various CT protocols, reconstruction kernels, and CT scanners.
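The concordance correlation coefficient used above can be computed directly from paired measurements of a feature on the ground-truth and converted images; the sketch below uses made-up values and is not the study's code.

```python
# Hedged sketch: Lin's concordance correlation coefficient (CCC), the
# reproducibility metric used above, for one radiomics feature measured on a
# ground-truth image and on a converted image. Input values are illustrative.
import numpy as np

def concordance_ccc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

feature_ground_truth = np.array([1.02, 0.98, 1.10, 0.95, 1.05])
feature_synthetic    = np.array([1.00, 0.97, 1.08, 0.96, 1.04])
print(f"CCC = {concordance_ccc(feature_ground_truth, feature_synthetic):.3f}")
```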
6
Hamdeh A, Househ M, Abd-alrazaq A, Muchori G, Al-saadi A, Alzubaidi M. Artificial Intelligence and the diagnosis of lung cancer in early stage: scoping review (Preprint). [DOI: 10.2196/preprints.38773]
Abstract
BACKGROUND
Lung cancer is considered the most fatal of all diagnosable cancers, in part because of the difficulty of detecting it at an early stage. Moreover, approximately one in five individuals who develop lung cancer die as a result of misdiagnosis. Machine learning (ML) and deep learning (DL) are considered promising solutions for the detection of lung cancer through developments in radiology.
OBJECTIVE
The purpose of this paper is to review how AI can assist in identifying and diagnosing lung cancer at an early stage.
METHODS
The PRISMA guidelines were followed, and studies were retrieved from four databases: Google Scholar, PubMed, EMBASE, and the Institute of Electrical and Electronics Engineers (IEEE). Two screening phases were used to identify relevant literature: the first was reading the title and abstract, and the second was reading the full text. Both steps were conducted independently by three reviewers. Finally, the three authors used a narrative synthesis to present the data.
RESULTS
Overall, 543 potentially relevant studies were retrieved from the four databases, and after screening, 26 articles that met the inclusion criteria were included in this scoping review. Several articles used private data, including patient data, while others used public sources; 15 articles (58%) used data from the UCI repository. CT images were used in 9 studies: plain CT was mentioned in 5 articles (19%), CT with PET was used in two studies (7.7%), and FDG with CT in two articles (7.7%). Two articles (7.7%) used demographic data such as age, sex, and educational background.
CONCLUSIONS
This scoping review illustrates recent studies that use AI models to diagnose lung cancer. The literature currently relies on private and public databases and compares models with physicians or with other machine learning technologies. Additional studies should be conducted to explore the efficacy of these technologies in clinical settings.
7
Propofol Anesthesia Depth Monitoring Based on Self-Attention and Residual Structure Convolutional Neural Network. Comput Math Methods Med 2022;2022:8501948. [PMID: 35132332] [PMCID: PMC8817884] [DOI: 10.1155/2022/8501948]
Abstract
Methods We compared nine index values and selected CNN+EEG, which correlates well with the bispectral index (BIS), as the anesthesia-state observation index for identifying the model parameters, and established a model based on self-attention and a dual residual-structure convolutional neural network. Data from 93 groups of patients were randomly divided into three parts (training, validation, and test sets), and the best and worst BIS prediction results were compared. Result In the best case, the model's accuracy in predicting BIS on the test set showed an overall upward trend, eventually exceeding 90%, while the overall error gradually decreased and eventually approached zero. In the worst case, the accuracy in predicting BIS on the test set still showed an overall upward trend and remained relatively stable without major fluctuations, with a final accuracy above 70%. Conclusion Prediction of the BIS index with the CNN-based deep learning method shows good statistical results.
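As a rough illustration of the two architectural ingredients named in the abstract (self-attention and residual convolution), the sketch below regresses a single BIS-like value from an EEG segment; the layer sizes, input length, and pooling choice are assumptions, not the paper's network.

```python
# Hedged sketch (not the paper's architecture): 1-D residual convolution blocks
# followed by multi-head self-attention, regressing a BIS-style depth-of-anesthesia
# value from an EEG-like sequence. All sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # residual (skip) connection

class AttnResNetBIS(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=7, stride=2, padding=3)
        self.res1, self.res2 = ResidualBlock1d(channels), ResidualBlock1d(channels)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)
        self.head = nn.Linear(channels, 1)   # regress a single BIS-like value

    def forward(self, eeg):                  # eeg: (batch, 1, samples)
        x = self.res2(self.res1(self.stem(eeg)))
        x = x.transpose(1, 2)                # (batch, time, channels) for attention
        x, _ = self.attn(x, x, x)            # self-attention over the time axis
        return self.head(x.mean(dim=1))      # pool over time -> (batch, 1)

model = AttnResNetBIS()
print(model(torch.randn(2, 1, 1000)).shape)  # torch.Size([2, 1])
```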
8
Computergestützte Detektion solider Lungenherde [Computer-assisted detection of solid pulmonary lesions]. Rofo Fortschr Rontg 2021. [DOI: 10.1055/a-1556-5015]