1
Zarei F, Jannatdoust P, Malekpour S, Razaghi M, Chatterjee S, Varadhan Chatterjee V, Abbasi A, Haghighi RR. Quantitative analysis of lung lesions using unenhanced chest computed tomography images. Clin Respir J 2024; 18:e13759. PMID: 38714529; PMCID: PMC11076304; DOI: 10.1111/crj.13759.
Abstract
INTRODUCTION Chest radiographs and computed tomography (CT) scans can incidentally reveal pulmonary nodules. Malignant and benign pulmonary nodules can be difficult to distinguish without specific imaging features, such as calcification, necrosis, and contrast enhancement. However, these lesions may exhibit different image texture characteristics that cannot be assessed visually. Thus, a computer-assisted quantitative method such as histogram analysis (HA) of Hounsfield unit (HU) values can improve diagnostic accuracy, reducing the need for invasive biopsy. METHODS In this exploratory control study, nonenhanced chest CT images of 20 patients with benign (10) and cancerous (10) lesions were selected retrospectively. The appearances of benign and malignant lesions were very similar on chest CT images, and only the pathology report was used to discriminate them. A free-hand region of interest (ROI) was drawn inside the lesion on all slices of each lesion. The mean, minimum, maximum, and standard deviation of HU values were recorded and used for HA. RESULTS HA showed that most malignant lesions had a mean HU value between 30 and 50, a maximum HU less than 150, and a minimum HU between -30 and 20. Lesions outside these ranges were mostly benign. CONCLUSION Quantitative CT analysis may differentiate malignant from benign lesions without specific malignancy patterns on unenhanced chest CT images.
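The histogram-analysis rule reported above can be sketched in a few lines. This is a minimal illustration, not the study's code: the function names and the sample ROI values are made up, and only the published HU thresholds are taken from the abstract.

```python
import numpy as np

def summarize_roi(hu_values):
    """Summary statistics used for histogram analysis (HA) of an ROI."""
    hu = np.asarray(hu_values, dtype=float)
    return {
        "mean": float(hu.mean()),
        "min": float(hu.min()),
        "max": float(hu.max()),
        "std": float(hu.std(ddof=1)),
    }

def flag_suspicious(stats):
    """Apply the ranges the study reports for mostly-malignant lesions:
    mean HU in [30, 50], max HU < 150, min HU in [-30, 20]."""
    return bool(30 <= stats["mean"] <= 50
                and stats["max"] < 150
                and -30 <= stats["min"] <= 20)

# Hypothetical HU samples from one free-hand ROI.
roi = [35, 42, 38, -10, 15, 48, 120, 33]
print(flag_suspicious(summarize_roi(roi)))  # True for this sample
```

Lesions falling outside all three ranges would be flagged as mostly benign under the same rule.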
Affiliation(s)
- Fariba Zarei
- Medical Imaging Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
- Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Siamak Malekpour
- Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Mahshad Razaghi
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Sabyasachi Chatterjee
- Ongil (or retired scientist from Indian Institute of Astrophysics, Bengaluru), Salem, India
- Amirbahador Abbasi
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
2
Bao Z, Du J, Zheng Y, Guo Q, Ji R. Deep learning or radiomics based on CT for predicting the response of gastric cancer to neoadjuvant chemotherapy: a meta-analysis and systematic review. Front Oncol 2024; 14:1363812. PMID: 38601765; PMCID: PMC11004479; DOI: 10.3389/fonc.2024.1363812.
Abstract
Background Artificial intelligence (AI) models, clinical models (CM), and integrated models (IM) are used to evaluate the response to neoadjuvant chemotherapy (NACT) in patients diagnosed with gastric cancer (GC). Objective To assess the diagnostic performance of the AI model and to compare the accuracy of AI, CM, and IM through a comprehensive summary of head-to-head comparative studies. Methods PubMed, Web of Science, Cochrane Library, and Embase were systematically searched until September 5, 2023, to compile English-language studies without regional restrictions. The quality of the included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) criteria. Forest plots were used to illustrate the findings of diagnostic accuracy, and hierarchical summary receiver operating characteristic curves were generated to estimate sensitivity (SEN) and specificity (SPE). Meta-regression was applied to analyze heterogeneity across the studies. To assess the presence of publication bias, Deeks' funnel plot and an asymmetry test were employed. Results A total of 9 studies, comprising 3313 patients, were included for the AI model, with 7 head-to-head comparative studies involving 2699 patients. Across the 9 studies, the pooled SEN for the AI model was 0.75 (95% confidence interval (CI): 0.66, 0.82) and the SPE was 0.77 (95% CI: 0.69, 0.84). Meta-regression revealed that the cut-off value, the approach to predicting response, and the gold standard might be sources of heterogeneity. In the head-to-head comparative studies, the pooled SEN for AI was 0.77 (95% CI: 0.69, 0.84) with SPE at 0.79 (95% CI: 0.70, 0.85). For CM, the pooled SEN was 0.67 (95% CI: 0.57, 0.77) with SPE at 0.59 (95% CI: 0.54, 0.64), while for IM, the pooled SEN was 0.83 (95% CI: 0.79, 0.86) with SPE at 0.69 (95% CI: 0.56, 0.79).
Notably, there were no statistically significant pairwise differences, except that the IM exhibited higher SEN than the AI model while maintaining a similar level of SPE. In the receiver operating characteristic analysis subgroup, the CT-based deep learning (DL) subgroup, and the National Comprehensive Cancer Network (NCCN) guideline subgroup, the AI model exhibited higher SEN but lower SPE than the IM. Conversely, in the training cohort subgroup and the internal validation cohort subgroup, the AI model demonstrated lower SEN but higher SPE than the IM. The subgroup analysis underscored that factors such as the number of cohorts, cohort type, cut-off value, approach to predicting response, and choice of gold standard could affect the reliability and robustness of the results. Conclusion AI has demonstrated its viability as a tool for predicting the response of GC patients to NACT. Furthermore, the CT-based DL model was sensitive in extracting tumor features and predicting the response. The results of the subgroup analysis supported these conclusions. Large-scale, rigorously designed diagnostic accuracy studies and head-to-head comparative studies are anticipated. Systematic review registration PROSPERO, CRD42022377030.
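The per-study sensitivity and specificity that such a meta-analysis pools come straight from each study's 2x2 table. A minimal sketch with made-up counts (the review itself pools them with a hierarchical bivariate model, which this does not reproduce):

```python
def sen_spe(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical study: 100 responders and 100 non-responders.
sen, spe = sen_spe(tp=75, fn=25, tn=77, fp=23)
print(round(sen, 2), round(spe, 2))  # 0.75 0.77, matching the pooled point estimates
```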
Affiliation(s)
- Zhixian Bao
- Department of Gastroenterology, the First Hospital of Lanzhou University, Lanzhou, China
- Department of Gastroenterology, Xi’an NO.1 Hospital, Xi’an, Shaanxi, China
- Jie Du
- Department of Social Medicine and Health Management, School of Public Health, Lanzhou University, Lanzhou, China
- Ya Zheng
- Department of Gastroenterology, the First Hospital of Lanzhou University, Lanzhou, China
- Gansu Province Clinical Research Center for Digestive Diseases, The First Hospital of Lanzhou University, Lanzhou, China
- Qinghong Guo
- Department of Gastroenterology, the First Hospital of Lanzhou University, Lanzhou, China
- Gansu Province Clinical Research Center for Digestive Diseases, The First Hospital of Lanzhou University, Lanzhou, China
- Rui Ji
- Department of Gastroenterology, the First Hospital of Lanzhou University, Lanzhou, China
- Gansu Province Clinical Research Center for Digestive Diseases, The First Hospital of Lanzhou University, Lanzhou, China
3
Çalışkan M, Tazaki K. AI/ML advances in non-small cell lung cancer biomarker discovery. Front Oncol 2023; 13:1260374. PMID: 38148837; PMCID: PMC10750392; DOI: 10.3389/fonc.2023.1260374.
Abstract
Lung cancer is the leading cause of cancer deaths among both men and women, representing approximately 25% of cancer fatalities each year. The treatment landscape for non-small cell lung cancer (NSCLC) is rapidly evolving due to the progress made in biomarker-driven targeted therapies. While advancements in targeted treatments have improved survival rates for NSCLC patients with actionable biomarkers, long-term survival remains low, with an overall 5-year relative survival rate below 20%. Artificial intelligence/machine learning (AI/ML) algorithms have shown promise in biomarker discovery, yet NSCLC-specific studies capturing the clinical challenges targeted and emerging patterns identified using AI/ML approaches are lacking. Here, we employed a text-mining approach and identified 215 studies that reported potential biomarkers of NSCLC using AI/ML algorithms. We catalogued these studies with respect to BEST (Biomarkers, EndpointS, and other Tools) biomarker sub-types and summarized emerging patterns and trends in AI/ML-driven NSCLC biomarker discovery. We anticipate that our comprehensive review will contribute to the current understanding of AI/ML advances in NSCLC biomarker research and provide an important catalogue that may facilitate clinical adoption of AI/ML-derived biomarkers.
Affiliation(s)
- Minal Çalışkan
- Translational Science Department, Precision Medicine Function, Daiichi Sankyo, Inc., Basking Ridge, NJ, United States
- Koichi Tazaki
- Translational Science Department I, Precision Medicine Function, Daiichi Sankyo, Tokyo, Japan
4
Li R, Zhou L, Wang Y, Shan F, Chen X, Liu L. A graph neural network model for the diagnosis of lung adenocarcinoma based on multimodal features and an edge-generation network. Quant Imaging Med Surg 2023; 13:5333-5348. PMID: 37581061; PMCID: PMC10423350; DOI: 10.21037/qims-23-2.
Abstract
Background Lung cancer is a highly lethal disease worldwide, and early screening considerably improves the 5-year survival rate. Multimodal features in early screening imaging are an important part of prediction for lung adenocarcinoma, and establishing a model for adenocarcinoma diagnosis based on multimodal features is a clear clinical need. Through our practice and investigation, we found that graph neural networks (GNNs) are excellent platforms for multimodal feature fusion and that missing data can be completed using an edge-generation network. Therefore, we propose a new lung adenocarcinoma multiclassification model based on multimodal features and an edge-generation network. Methods The dataset of 338 cases was divided into training and test sets at a ratio of 80% to 20% through 5-fold cross-validation, with the same distribution in both sets. First, the regions of interest (ROIs) cropped from computed tomography (CT) images were separately fed into convolutional neural networks (CNNs) and radiomics processing platforms. The results of the 2 parts were then input into a graph embedding representation network to obtain the fused feature vectors. Subsequently, a graph database based on the clinical and semantic features was established and the data were supplemented by an edge-generation network, with the fused feature vectors used as the input of the nodes. This enabled us to understand clearly where the information transmission of the GNN takes place and improved the interpretability of the model. Finally, the nodes were classified using GNNs. Results On our dataset, the proposed method achieved superior results compared to traditional methods and was comparable with state-of-the-art methods for lung nodule classification.
The results of our method are as follows: accuracy (ACC) = 66.26% (±4.46%), area under the curve (AUC) = 75.86% (±1.79%), F1-score = 64.00% (±3.65%), and Matthews correlation coefficient (MCC) = 48.40% (±5.07%). The model with the edge-generation network consistently outperformed the model without it in all aspects. Conclusions The experiments demonstrate that, with appropriate data-construction methods, GNNs can outperform traditional image processing methods in the field of CT-based medical image classification. Additionally, our model has higher interpretability, as it employs subjective clinical and semantic features in the data construction approach. This will help doctors better leverage human-computer interactions.
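Of the metrics reported above, the Matthews correlation coefficient is the least familiar; for reference, a binary-case sketch (the paper's task is multiclass, where libraries such as scikit-learn generalize this formula, so this is illustrative only):

```python
import math

def mcc(tp, tn, fp, fn):
    """Binary Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion-matrix counts.
print(mcc(tp=45, tn=40, fp=10, fn=5))
```

Unlike accuracy, MCC stays near zero for a classifier that ignores a minority class, which is why it is often reported alongside ACC and F1.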
Affiliation(s)
- Ruihao Li
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Lingxiao Zhou
- Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
- Yunpeng Wang
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Fei Shan
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Xinrong Chen
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Lei Liu
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Intelligent Medicine Institute, Fudan University, Shanghai, China
- Shanghai Institute of Stem Cell Research and Clinical Translation, Shanghai, China
5
Zhou J, Hu B, Feng W, Zhang Z, Fu X, Shao H, Wang H, Jin L, Ai S, Ji Y. An ensemble deep learning model for risk stratification of invasive lung adenocarcinoma using thin-slice CT. NPJ Digit Med 2023; 6:119. PMID: 37407729; DOI: 10.1038/s41746-023-00866-z.
Abstract
Lung cancer screening using computed tomography (CT) has increased the detection rate of small pulmonary nodules and early-stage lung adenocarcinoma. Accurate assessment of nodule histology from CT scans with advanced deep learning algorithms would be clinically meaningful. However, recent studies have mainly focused on predicting benign and malignant nodules, and models for the risk stratification of invasive adenocarcinoma are lacking. We propose an ensemble multi-view 3D convolutional neural network (EMV-3D-CNN) model to study the risk stratification of lung adenocarcinoma. We include 1075 lung nodules (≥4 mm and ≤30 mm) with preoperative thin-section CT scans and definite pathology confirmed by surgery. Our model achieves state-of-the-art performance, with 91.3% and 92.9% AUC for the diagnosis of benign/malignant and pre-invasive/invasive nodules, respectively. Importantly, our model outperforms senior doctors in risk stratification of invasive adenocarcinoma (Grades 1, 2, and 3) with 77.6% accuracy. It provides detailed predictive histological information for the surgical management of pulmonary nodules. Finally, for user-friendly access, the proposed model is implemented as a web-based system ( https://seeyourlung.com.cn ).
Affiliation(s)
- Jing Zhou
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Bin Hu
- Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Wei Feng
- Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Zhang Zhang
- Department of Thoracic Surgery, Changsha Central Hospital, Changsha, China
- Xiaotong Fu
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Handie Shao
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Hansheng Wang
- Guanghua School of Management, Peking University, Beijing, China
- Longyu Jin
- Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Siyuan Ai
- Department of Thoracic Surgery, Beijing LIANGXIANG Hospital, Beijing, China
- Ying Ji
- Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
6
Wang F, Wang CL, Yi YQ, Zhang T, Zhong Y, Zhu JJ, Li H, Yang G, Yu TF, Xu H, Yuan M. Comparison and fusion prediction model for lung adenocarcinoma with micropapillary and solid pattern using clinicoradiographic, radiomics and deep learning features. Sci Rep 2023; 13:9302. PMID: 37291251; PMCID: PMC10250309; DOI: 10.1038/s41598-023-36409-5.
Abstract
To investigate whether a combination of the deep learning score (DL-score) and radiomics can improve preoperative diagnosis of micropapillary/solid (MPP/SOL) patterns in lung adenocarcinoma (ADC), a retrospective cohort of 514 pathologically confirmed lung ADCs in 512 patients after surgery was enrolled. The clinicoradiographic model (model 1) and the radiomics model (model 2) were developed with logistic regression. The deep learning model (model 3) was constructed based on the DL-score. The combined model (model 4) was based on the DL-score, the radiomics score (R-score), and clinicoradiographic variables. The performance of these models was evaluated with the area under the receiver operating characteristic curve (AUC) and compared using DeLong's test internally and externally. A prediction nomogram was plotted, and clinical utility was depicted with a decision curve. Models 1, 2, 3, and 4 achieved AUCs of 0.848, 0.896, 0.906, and 0.921 in the internal validation set and 0.700, 0.801, 0.730, and 0.827 in the external validation set, respectively. Differences between models were statistically significant in internal validation (model 4 vs model 3, P = 0.016; model 4 vs model 1, P = 0.009) and in external validation (model 4 vs model 2, P = 0.036; model 4 vs model 3, P = 0.047; model 4 vs model 1, P = 0.016). Decision curve analysis (DCA) demonstrated that model 4 would be more beneficial than model 1 and model 3 in predicting lung ADC with MPP/SOL structure, but comparable with model 2. The combined model can improve preoperative diagnosis of the MPP/SOL pattern in lung ADC in clinical practice.
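The AUCs compared above (via DeLong's test) have a useful probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A brute-force sketch with made-up scores (DeLong's variance estimate for comparing two AUCs is not reproduced here):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs ranked correctly, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for 4 MPP/SOL-positive and 4 negative cases.
print(auc([0.9, 0.8, 0.6, 0.4], [0.5, 0.3, 0.2, 0.1]))  # 0.9375
```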
Affiliation(s)
- Fen Wang
- Department of Medical Imaging Center, The Affiliated Huaian NO.1 People's Hospital of Nanjing Medical University, No. 1 West Huanghe Road, Huaian, 223300, China
- Cheng-Long Wang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Yin-Qiao Yi
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Teng Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, 300 GuangZhou Road, Nanjing, 210029, China
- Yan Zhong
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, 300 GuangZhou Road, Nanjing, 210029, China
- Jia-Jia Zhu
- Department of Radiology, Jiangsu Province Official Hospital, Nanjing, 210024, China
- Hai Li
- Department of Pathology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China
- Guang Yang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Tong-Fu Yu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, 300 GuangZhou Road, Nanjing, 210029, China
- Hai Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, 300 GuangZhou Road, Nanjing, 210029, China
- Mei Yuan
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, 300 GuangZhou Road, Nanjing, 210029, China
7
Jan YT, Tsai PS, Huang WH, Chou LY, Huang SC, Wang JZ, Lu PH, Lin DC, Yen CS, Teng JP, Mok GSP, Shih CT, Wu TH. Machine learning combined with radiomics and deep learning features extracted from CT images: a novel AI model to distinguish benign from malignant ovarian tumors. Insights Imaging 2023; 14:68. PMID: 37093321; PMCID: PMC10126170; DOI: 10.1186/s13244-023-01412-x.
Abstract
BACKGROUND To develop an artificial intelligence (AI) model with radiomics and deep learning (DL) features extracted from CT images to distinguish benign from malignant ovarian tumors. METHODS We enrolled 149 patients with pathologically confirmed ovarian tumors. A total of 185 tumors were included and divided into training and testing sets in a 7:3 ratio. All tumors were manually segmented from preoperative contrast-enhanced CT images. CT image features were extracted using radiomics and DL. Five models with different combinations of feature sets were built. Benign and malignant tumors were classified using machine learning (ML) classifiers. Model performance was compared with that of five radiologists on the testing set. RESULTS Among the five models, the best-performing model was the ensemble model combining the radiomics, DL, and clinical feature sets. The model achieved an accuracy of 82%, a specificity of 89%, and a sensitivity of 68%. Compared with the junior radiologists' averaged results, the model had higher accuracy (82% vs 66%) and specificity (89% vs 65%) with comparable sensitivity (68% vs 67%). With the assistance of the model, the junior radiologists achieved higher average accuracy (81% vs 66%), specificity (80% vs 65%), and sensitivity (82% vs 67%), approaching the performance of senior radiologists. CONCLUSIONS We developed a CT-based AI model that can differentiate benign and malignant ovarian tumors with high accuracy and specificity. The model significantly improved the performance of less-experienced radiologists in ovarian tumor assessment and may guide gynecologists in providing better therapeutic strategies for these patients.
Affiliation(s)
- Ya-Ting Jan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Pei-Shan Tsai
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Wen-Hui Huang
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Ling-Ying Chou
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Shih-Chieh Huang
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Jing-Zhe Wang
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Pei-Hsuan Lu
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
- Dao-Chen Lin
- Division of Endocrine and Metabolism, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chun-Sheng Yen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Ju-Ping Teng
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Cheng-Ting Shih
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 404, Taiwan
- Tung-Hsin Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
8
Chang CC, Tang EK, Wei YF, Lin CY, Wu FZ, Wu MT, Liu YS, Yen YT, Ma MC, Tseng YL. Clinical radiomics-based machine learning versus three-dimension convolutional neural network analysis for differentiation of thymic epithelial tumors from other prevascular mediastinal tumors on chest computed tomography scan. Front Oncol 2023; 13:1105100. PMID: 37143945; PMCID: PMC10151670; DOI: 10.3389/fonc.2023.1105100.
Abstract
Purpose To compare the diagnostic performance of radiomic analysis with a machine learning (ML) model and that of a convolutional neural network (CNN) in differentiating thymic epithelial tumors (TETs) from other prevascular mediastinal tumors (PMTs). Methods A retrospective study was performed in patients with PMTs who underwent surgical resection or biopsy in National Cheng Kung University Hospital, Tainan, Taiwan; E-Da Hospital, Kaohsiung, Taiwan; and Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan between January 2010 and December 2019. Clinical data including age, sex, myasthenia gravis (MG) symptoms, and pathologic diagnosis were collected. The datasets were divided into unenhanced computed tomography (UECT) and contrast-enhanced computed tomography (CECT) sets for analysis and modelling. A radiomics model and a 3D CNN model were used to differentiate TETs from non-TET PMTs (including cyst, malignant germ cell tumor, lymphoma, and teratoma). The macro F1-score and receiver operating characteristic (ROC) analysis were used to evaluate the prediction models. Results In the UECT dataset, there were 297 patients with TETs and 79 patients with other PMTs. The radiomics ML model using LightGBM with Extra Trees (macro F1-score = 83.95%, ROC-AUC = 0.9117) performed better than the 3D CNN model (macro F1-score = 75.54%, ROC-AUC = 0.9015). In the CECT dataset, there were 296 patients with TETs and 77 patients with other PMTs. The radiomics ML model using LightGBM with Extra Trees (macro F1-score = 85.65%, ROC-AUC = 0.9464) again performed better than the 3D CNN model (macro F1-score = 81.01%, ROC-AUC = 0.9275). Conclusion Our study revealed that an individualized prediction model integrating clinical information and radiomic features using machine learning demonstrated better predictive performance in differentiating TETs from other PMTs on chest CT than a 3D CNN model.
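The macro F1-score used above is the unweighted mean of per-class F1 scores, so the small non-TET class counts as much as the large TET class. A sketch with hypothetical counts (not the study's actual confusion matrices):

```python
def macro_f1(per_class_counts):
    """Macro F1 from a list of (tp, fp, fn) tuples, one tuple per class."""
    f1s = []
    for tp, fp, fn in per_class_counts:
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical two-class result: TET class (tp=280, fp=20, fn=17)
# and non-TET class (tp=59, fp=17, fn=20).
print(macro_f1([(280, 20, 17), (59, 17, 20)]))
```

Because each class contributes equally, a model that neglects the rarer non-TET classes is penalized more by macro F1 than by overall accuracy.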
Affiliation(s)
- Chao-Chun Chang
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- En-Kuei Tang
- Division of Thoracic Surgery, Department of Surgery, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Yu-Feng Wei
- School of Medicine for International Students, College of Medicine, I-Shou University, Kaohsiung, Taiwan
- Division of Chest Medicine, Department of Internal Medicine, E-Da Cancer Hospital, Kaohsiung, Taiwan
- Chia-Ying Lin
- Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Fu-Zong Wu
- Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Faculty of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Education, National Sun Yat-sen University, Kaohsiung, Taiwan
- Ming-Ting Wu
- Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yi-Sheng Liu
- Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Yi-Ting Yen
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Division of Trauma and Acute Care Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Mi-Chia Ma
- Department of Statistics and Institute of Data Science, National Cheng Kung University, Tainan, Taiwan
- Yau-Lin Tseng
- Division of Thoracic Surgery, Department of Surgery, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
9
Qiao J, Fan Y, Zhang M, Fang K, Li D, Wang Z. Ensemble framework based on attributes and deep features for benign-malignant classification of lung nodule. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104217.
10
Ge G, Zhang J. Feature selection methods and predictive models in CT lung cancer radiomics. J Appl Clin Med Phys 2023; 24:e13869. PMID: 36527376; PMCID: PMC9860004; DOI: 10.1002/acm2.13869.
Abstract
Radiomics is a technique that extracts quantitative features from medical images using data-characterization algorithms. Radiomic features can be used to identify tissue characteristics and radiologic phenotypes that are not observable by clinicians. A typical workflow for a radiomics study includes cohort selection, radiomic feature extraction, feature and predictive model selection, and model training and validation. While there has been increasing attention given to radiomic feature extraction, standardization, and reproducibility, there is currently a lack of rigorous evaluation of feature selection methods and predictive models. Herein, we review the published radiomics investigations in CT lung cancer and provide an overview of the commonly used radiomic feature selection methods and predictive models. We also compare the limitations of various methods in clinical applications and present sources of uncertainty associated with those methods. This review is expected to help raise awareness of the impact of radiomic feature and model selection methods on the integrity of radiomics studies.
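One of the simpler feature selection families such reviews cover is univariate filtering. A minimal NumPy sketch that ranks features by absolute Pearson correlation with the outcome and keeps the top k (illustrative only; the function name and synthetic data are invented here, and wrapper or embedded methods such as LASSO work quite differently):

```python
import numpy as np

def top_k_by_correlation(X, y, k):
    """Rank columns of X (n_samples x n_features) by |Pearson r| with y
    and return the indices of the k strongest features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    return np.argsort(r)[::-1][:k]

# Synthetic example: feature 3 is a copy of the label, so it should rank first.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 50).astype(float)
X = rng.normal(size=(50, 10))
X[:, 3] = y
print(top_k_by_correlation(X, y, k=3))
```

Filters like this are fast and model-agnostic but ignore feature interactions, which is one of the trade-offs such a review weighs against wrapper and embedded approaches.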
Affiliation(s)
- Gary Ge
- Department of Radiology, University of Kentucky, Lexington, Kentucky, USA
- Jie Zhang
- Department of Radiology, University of Kentucky, Lexington, Kentucky, USA
11
Ma M, Xu S, Han B, He H, Ma X, Chen C. A retrospective diagnostic test study on circulating tumor cells and artificial intelligence imaging in patients with lung adenocarcinoma. Ann Transl Med 2022; 10:1339. [PMID: 36660706 PMCID: PMC9843428 DOI: 10.21037/atm-22-5668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 12/12/2022] [Indexed: 12/28/2022]
Abstract
Background Either tumor volume or folate-receptor-positive circulating tumor cells (FR+CTC) has been proven effective in predicting tumor cell invasion. However, the use of FR+CTC together with artificial intelligence (AI) tumor volume to differentiate between pathological subtypes of lung adenocarcinoma (LUAD) has yet to be documented. Therefore, this study aimed to evaluate the accuracy of FR+CTC and AI tumor volume for classifying the invasiveness of LUAD. Methods A total of 226 patients diagnosed with LUAD were enrolled. The inclusion criteria were: (I) FR+CTC detection and AI imaging before anticancer therapy, and (II) a definite histopathologic diagnosis, the gold standard for diagnosing LUAD and its subtypes. The CytoploRare® Detection Kit was used to quantify FR+CTC, and the AI-assisted diagnosis system ScrynPro was used to measure tumor volume. The clinical data were used to construct univariate and multivariate logistic regression models. A nomogram was drawn based on the multivariate logistic regression model. Validity was evaluated with the calibration curve and the Hosmer-Lemeshow goodness-of-fit test. Results The mean age of the 146 patients retrospectively analyzed (96 males, 49 females, and 1 with sex missing) was 56.6 years. In the cohort, 41 and 105 patients were assigned to the adenocarcinoma in situ (AIS) + minimally invasive adenocarcinoma (MIA) group and the invasive pulmonary adenocarcinoma (IPA) group, respectively. There was no significant difference in sex distribution or smoking history between the two groups (P=0.155 and P=0.442, respectively). In univariate analysis, nodule type, maximum density, tumor volume, and FR+CTC level were significantly associated with the invasiveness of LUAD (P<0.05). The multivariate analysis showed significant differences in FR+CTC and AI tumor volume (P<0.001). The areas under the curve (AUCs) of FR+CTC and AI tumor volume in diagnosing tumor invasiveness were 0.659 and 0.698, respectively.
A predictive model combining FR+CTC with AI tumor volume showed a sensitivity of 86.89%, a specificity of 70.94%, and an AUC of 0.841. The nomogram agreed well with actual observations, and the Hosmer-Lemeshow test yielded a non-significant result, indicating adequate goodness of fit. Conclusions FR+CTC and AI tumor volume are independent indicators of the invasiveness of LUAD, and a nomogram based on them can be used for the preoperative screening of patients.
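The core result above — two individually modest predictors (AUCs 0.659 and 0.698) yielding a clearly higher AUC (0.841) when combined in one logistic model — is easy to reproduce in principle on synthetic data. This sketch is illustrative only (not the study's data); `ctc` and `vol` are stand-ins for FR+CTC level and AI tumor volume.

```python
# Two noisy markers of the same underlying class: each alone gives a
# modest AUC, while a multivariate logistic model combining them gives
# a higher AUC, because their noise is independent.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 2, n)            # invasive (1) vs non-invasive (0)
ctc = y + rng.normal(0, 1.6, n)      # noisy marker 1 (stand-in for FR+CTC)
vol = y + rng.normal(0, 1.6, n)      # noisy marker 2 (stand-in for AI volume)

def auc(features):
    model = LogisticRegression().fit(features, y)
    return roc_auc_score(y, model.predict_proba(features)[:, 1])

auc_ctc = auc(ctc.reshape(-1, 1))
auc_vol = auc(vol.reshape(-1, 1))
auc_both = auc(np.column_stack([ctc, vol]))
print(round(auc_ctc, 3), round(auc_vol, 3), round(auc_both, 3))
```

The nomogram in the paper is essentially a graphical rendering of such a multivariate logistic model's coefficients.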
Affiliation(s)
- Minjie Ma
- Department of Thoracic Surgery, The First Hospital of Lanzhou University, Lanzhou, China
- Shangqing Xu
- Skills Training Center, The First Clinical Medical College of Lanzhou University, Lanzhou, China
- Biao Han
- Department of Thoracic Surgery, The First Hospital of Lanzhou University, Lanzhou, China
- Hua He
- The First Clinical Medical College of Lanzhou University, Lanzhou, China
- Xiang Ma
- The First Clinical Medical College of Lanzhou University, Lanzhou, China
- Chang Chen
- Department of Thoracic Surgery, The First Hospital of Lanzhou University, Lanzhou, China; The International Science and Technology Cooperation Base for Development and Application of Key Technologies in Thoracic Surgery, Lanzhou, China
12
Lv Y, Wei Y, Xu K, Zhang X, Hua R, Huang J, Li M, Tang C, Yang L, Liu B, Yuan Y, Li S, Gao Y, Zhang X, Wu Y, Han Y, Shang Z, Yu H, Zhan Y, Shi F, Ye B. 3D deep learning versus the current methods for predicting tumor invasiveness of lung adenocarcinoma based on high-resolution computed tomography images. Front Oncol 2022; 12:995870. [PMID: 36338695 PMCID: PMC9634256 DOI: 10.3389/fonc.2022.995870] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Accepted: 09/30/2022] [Indexed: 11/22/2022] Open
Abstract
Background Different pathological subtypes of lung adenocarcinoma lead to different treatment decisions and prognoses, so it is clinically important to distinguish invasive lung adenocarcinoma from preinvasive adenocarcinoma (adenocarcinoma in situ and minimally invasive adenocarcinoma). This study investigates the performance of a deep learning approach based on high-resolution computed tomography (HRCT) images in classifying tumor invasiveness and compares it with currently available approaches. Methods We used a deep learning approach based on 3D convolutional networks to automatically predict the invasiveness of pulmonary nodules. A total of 901 early-stage non-small cell lung cancer patients who underwent surgical treatment at Shanghai Chest Hospital between November 2015 and March 2017 were retrospectively included and randomly assigned to a training set (n=814) or testing set 1 (n=87). We subsequently included 116 patients who underwent surgical treatment and intraoperative frozen section between April 2019 and January 2020 to form testing set 2. We compared the performance of our deep learning approach in predicting tumor invasiveness with that of intraoperative frozen section analysis and human experts (radiologists and surgeons). Results The deep learning approach yielded an area under the receiver operating characteristic curve (AUC) of 0.946 for distinguishing preinvasive from invasive lung adenocarcinoma in testing set 1, significantly higher than the AUCs of the human experts (P<0.05). In testing set 2, it distinguished invasive from preinvasive adenocarcinoma with an AUC of 0.862, higher than that of frozen section analysis (0.755, P=0.043), senior thoracic surgeons (0.720, P=0.006), radiologists (0.766, P>0.05), and junior thoracic surgeons (0.768, P>0.05).
Conclusions We developed a deep learning model that achieved comparable performance to intraoperative frozen section analysis in determining tumor invasiveness. The proposed method may contribute to clinical decisions related to the extent of surgical resection.
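The building block of such a 3D convolutional network can be shown in miniature without a deep learning framework: convolve a nodule volume with 3D kernels, apply a nonlinearity, pool globally, and feed a logistic output unit. This is only a structural sketch with random (untrained) kernels on dummy data; the paper's actual model stacks many such layers and trains them end to end.

```python
# One 3D "convolutional stage" in miniature: 3D convolution -> ReLU ->
# global average pooling -> sigmoid output P(invasive). Kernels here are
# random; a real network learns them from labeled HRCT volumes.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))    # dummy HRCT nodule crop
kernels = rng.normal(size=(8, 3, 3, 3))   # 8 random 3x3x3 filters

feature_maps = np.stack([convolve(volume, k, mode="constant") for k in kernels])
activated = np.maximum(feature_maps, 0.0)   # ReLU nonlinearity
pooled = activated.mean(axis=(1, 2, 3))     # global average pooling -> shape (8,)
weights, bias = rng.normal(size=8), 0.0     # logistic output unit
p_invasive = 1.0 / (1.0 + np.exp(-(pooled @ weights + bias)))
print(pooled.shape)                          # (8,)
print(0.0 <= p_invasive <= 1.0)
```

The 3D (rather than slice-wise 2D) convolution is what lets the model use the through-plane structure of the nodule that frozen section and readers assess differently.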
Affiliation(s)
- Yilv Lv
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Kuan Xu
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaobin Zhang
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Rong Hua
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jia Huang
- Department of Oncologic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Min Li
- Department of Radiology, Shanghai Municipal Hospital of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Cui Tang
- Department of Radiology, Yangpu Hospital, Tongji University, Shanghai, China
- Long Yang
- Department of Thoracic Surgery, Affiliated Hospital of Gansu Medical College, Pingliang, China
- Bingchun Liu
- Department of Thoracic Surgery, Weifang People’s Hospital, Weifang, China
- Yonggang Yuan
- Department of Thoracic Surgery, Qilu Hospital of Shandong University, Qingdao, China
- Siwen Li
- Department of Thoracic Surgery, Qingyuan People’s Hospital, Guangzhou Medical University, Guangzhou, China
- Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xianjie Zhang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yifan Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuchen Han
- Department of Pathology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zhanxian Shang
- Department of Pathology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Hong Yu
- Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Bo Ye
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- *Correspondence: Bo Ye, ; Feng Shi,
13
Huang H, Zheng D, Chen H, Wang Y, Chen C, Xu L, Li G, Wang Y, He X, Li W. Fusion of CT images and clinical variables based on deep learning for predicting invasiveness risk of stage I lung adenocarcinoma. Med Phys 2022; 49:6384-6394. [PMID: 35938604 DOI: 10.1002/mp.15903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 04/01/2022] [Accepted: 07/26/2022] [Indexed: 11/08/2022] Open
Abstract
PURPOSE To develop a novel multimodal data fusion model, incorporating computed tomography (CT) images and clinical variables via deep learning, to predict the invasiveness risk of stage I lung adenocarcinoma manifesting as ground-glass nodules (GGNs), and to compare its diagnostic performance with that of radiologists. METHODS A total of 1946 patients with solitary, histopathologically confirmed GGNs of maximum diameter less than 3 cm were retrospectively enrolled. The training dataset of 1704 GGNs was augmented by resampling, scaling, random cropping, and similar transforms to generate new training data. A multimodal data fusion model based on a residual learning architecture and two multilayer perceptrons with attention mechanisms was built, combining CT images with patient general data and serum tumor markers. Distance-based confidence scores (DCSs) were calculated and compared among multimodal data models with different combinations. An observer study was conducted, and the prediction performance of the fusion algorithms was compared with that of two radiologists on an independent testing dataset of 242 GGNs. RESULTS Of all GGNs, 606 were confirmed as invasive adenocarcinoma (IA) and 1340 as non-IA. The proposed multimodal data fusion model combining CT images, patient general data, and serum tumor markers achieved the highest accuracy (88.5%), area under the ROC curve (AUC) (0.957), F1 (81.5%), weighted F1 (81.9%), and Matthews correlation coefficient (MCC) (73.2%) for classifying IA versus non-IA GGNs, exceeding even the senior radiologist's performance (accuracy, 86.1%). In addition, the DCSs suggested that the CT images had a quantitatively stronger influence (0.9540) than the general data (0.6726) or the tumor markers (0.6971).
CONCLUSION This study demonstrated the feasibility of integrating different types of data, including CT images and clinical variables, and the multimodal data fusion model yielded higher performance for distinguishing IA from non-IA GGNs.
Affiliation(s)
- Haozhe Huang
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Dezhong Zheng
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Science, 500 Yutian Road, Hongkou District, Shanghai, 200083, China; University of Chinese Academy of Sciences, 19 Yuquan Road, Shijingshan District, Beijing, 100049, China
- Hong Chen
- Department of Medical Imaging, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, 600 South Wanping Road, Xuhui District, Shanghai, 200030, China
- Ying Wang
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Chao Chen
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Lichao Xu
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Guodong Li
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Yaohui Wang
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Xinhong He
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Wentao Li
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
14
Xiang Y, Dong X, Zeng C, Liu J, Liu H, Hu X, Feng J, Du S, Wang J, Han Y, Luo Q, Chen S, Li Y. Clinical Variables, Deep Learning and Radiomics Features Help Predict the Prognosis of Adult Anti-N-methyl-D-aspartate Receptor Encephalitis Early: A Two-Center Study in Southwest China. Front Immunol 2022; 13:913703. [PMID: 35720336 PMCID: PMC9199424 DOI: 10.3389/fimmu.2022.913703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 04/26/2022] [Indexed: 11/17/2022] Open
Abstract
Objective To develop a fusion model combining clinical variables, deep learning (DL), and radiomics features to predict functional outcomes early in adult patients with anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis in Southwest China. Methods From January 2012, a two-center study of anti-NMDAR encephalitis was initiated to collect clinical and MRI data from acute patients in Southwest China. Two experienced neurologists independently assessed each patient's prognosis at 24 months based on the modified Rankin Scale (mRS) (good outcome defined as mRS 0–2; poor outcome defined as mRS 3–6). Risk factors influencing the prognosis of patients with acute anti-NMDAR encephalitis were investigated using the clinical data. Five DL and radiomics models, trained on the four MRI sequences (T1-weighted imaging, T2-weighted imaging, fluid-attenuated inversion recovery imaging, and diffusion-weighted imaging) singly or combined, together with a clinical model, were developed to predict the prognosis of anti-NMDAR encephalitis. A fusion model combining the clinical model and the two machine learning-based models was built. The performances of the fusion, clinical, DL-based, and radiomics-based models were compared using the area under the receiver operating characteristic curve (AUC) and accuracy, and assessed by paired t-tests (P < 0.05 considered significant). Results The fusion model achieved the best predictive performance in the internal test dataset, with an AUC of 0.963 (95% CI: 0.874–0.999), and performed equally well in the external validation dataset, with an AUC of 0.927 (95% CI: 0.688–0.975). The radiomics_combined model (AUC: 0.889; accuracy: 0.857) provided predictive performance significantly superior to the DL_combined (AUC: 0.845; accuracy: 0.857) and clinical models (AUC: 0.840; accuracy: 0.905), whereas the clinical model showed significantly higher accuracy.
Compared with all single-sequence models, the DL_combined model and the radiomics_combined model had significantly greater AUCs and accuracies. Conclusions The fusion model combining clinical variables and machine learning-based models may have early predictive value for poor outcomes associated with anti-NMDAR encephalitis.
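One common way to build the kind of fusion model described above is stacking: each base model (clinical, radiomics, DL) produces out-of-fold prediction scores, and a simple meta-model is trained on those scores. This sketch uses synthetic data and two stand-in base models; the names are illustrative only, not the study's implementation.

```python
# Stacking sketch: out-of-fold probabilities from base models become the
# inputs of a logistic meta-model. Out-of-fold prediction prevents the
# meta-model from seeing scores produced on a base model's own training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=300, n_features=20, random_state=2)
base_models = {
    "clinical_standin": LogisticRegression(max_iter=1000),
    "radiomics_standin": RandomForestClassifier(n_estimators=100, random_state=2),
}
scores = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models.values()
])                                    # shape (n_samples, n_base_models)
meta = LogisticRegression().fit(scores, y)
fused = meta.predict_proba(scores)[:, 1]
print(scores.shape, fused.shape)
```

The meta-model's coefficients also give a crude read on how much each modality contributes to the fused prediction.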
Affiliation(s)
- Yayun Xiang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaoxuan Dong
- College of Computer and Information Science, Chongqing, China
- Chun Zeng
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Junhang Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Hanjing Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaofei Hu
- Department of Neurology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Jinzhou Feng
- Department of Neurology, First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Silin Du
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Jingjie Wang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Yongliang Han
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Qi Luo
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
15
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is at the forefront, with a long tradition of digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which degrades as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion covers key applications of ML in radiation oncology workflows as well as vendors' perspectives on the clinical implementation of ML.
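One concrete, model-agnostic interpretability tool of the kind such reviews discuss is permutation feature importance: shuffle one feature at a time and measure how much the trained model's score drops. The example below is a generic scikit-learn sketch on synthetic data, not code from the paper.

```python
# Permutation feature importance: a feature matters if shuffling its
# values degrades the model's score. With shuffle=False and
# n_informative=2, the informative features are columns 0 and 1, so they
# should receive the largest importance scores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=6, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean.round(3))
```

Because it only needs predictions, this technique applies equally to dose-prediction or segmentation-QA models, which is part of its appeal for clinical quality assurance.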
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
16
Li M, Gong J, Bao Y, Huang D, Peng J, Tong T. Special issue "The advance of solid tumor research in China": Prognosis prediction for stage II colorectal cancer by fusing CT radiomics and deep-learning features of primary lesions and peripheral lymph nodes. Int J Cancer 2022; 152:31-41. [PMID: 35484979 DOI: 10.1002/ijc.34053] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 04/10/2022] [Accepted: 04/21/2022] [Indexed: 11/11/2022]
Abstract
Currently, prognosis assessment of stage II colorectal cancer (CRC) remains a difficult clinical problem; therefore, more accurate prognostic predictors must be developed. In this study, we developed a prognostic prediction model for stage II CRC by fusing radiomics and deep-learning (DL) features of primary lesions and peripheral lymph nodes (LNs) in computed tomography (CT) scans. First, two CT radiomics models were built using primary-lesion and LN image features. Subsequently, an information fusion method was used to build a fusion radiomics model combining the tumor and LN image features. Furthermore, a transfer learning method was applied to build a deep convolutional neural network (CNN) model. Finally, the prediction scores generated by the radiomics and CNN models were fused to improve prognosis prediction performance. The areas under the curve (AUCs) for disease-free survival (DFS) and overall survival (OS) prediction improved to 0.76±0.08 and 0.91±0.05, respectively, with the fusion model, significantly higher than the AUCs of the models using individual CT radiomics or deep image features. In survival analysis, the DFS and OS fusion models yielded concordance index (C-index) values of 0.73 and 0.90, respectively. The combined model thus exhibited good predictive efficacy and could be used for accurate prognosis assessment in stage II CRC patients, to screen out high-risk patients with poor prognoses, and to assist timely clinical treatment decisions toward precision medicine.
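The concordance index (C-index) reported above has a simple pairwise definition: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient with the shorter survival. A minimal, dependency-free sketch (synthetic toy data, not the study's):

```python
def concordance_index(times, events, risk):
    """C-index: fraction of comparable pairs whose risk ordering matches
    their survival ordering; risk ties count as half-concordant."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            # pair (i, j) is comparable only if patient i's event was
            # observed (not censored) before patient j's observed time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

times = [5, 8, 12, 20]          # months to event or censoring
events = [1, 1, 1, 0]           # 1 = event observed, 0 = censored
risk = [0.9, 0.7, 0.4, 0.1]     # model risk scores (higher = worse)
print(concordance_index(times, events, risk))  # -> 1.0 (perfectly concordant)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the study's OS value of 0.90 indicates strong risk discrimination.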
Affiliation(s)
- Menglei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
- Yichao Bao
- Department of Colorectal Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
- Dan Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
- Junjie Peng
- Department of Colorectal Surgery, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
- Tong Tong
- Department of Radiology, Fudan University Shanghai Cancer Center, Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, P R China
17
Alqahtani A. Application of Artificial Intelligence in Discovery and Development of Anticancer and Antidiabetic Therapeutic Agents. Evid Based Complement Alternat Med 2022; 2022:6201067. [PMID: 35509623 PMCID: PMC9060979 DOI: 10.1155/2022/6201067] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 03/17/2022] [Accepted: 04/05/2022] [Indexed: 11/18/2022]
Abstract
Spectacular developments in molecular and cellular biology have led to important discoveries in cancer research. Cancer is one of the major causes of morbidity and mortality globally, and diabetes is among the leading chronic disorders. Artificial intelligence (AI) has been regarded as the engine of the fourth industrial revolution. The greatest hurdles in drug discovery and development are the time and expenditure required to sustain the drug research pipeline. AI can explore and generate large amounts of data and convert them into useful knowledge; because of this, the world's largest drug companies have already begun to use AI in their drug development research. AI now holds considerable potential for the rapid discovery and development of new anticancer drugs. Clinical studies, electronic medical records, high-resolution medical imaging, and genomic assessments are just a few of the resources that could aid drug development. Large datasets are available to researchers in the pharmaceutical and medical fields and can be analyzed by advanced AI systems. This review examined how computational biology and AI technologies may be utilized in cancer precision drug development by combining knowledge of cancer medicines, drug resistance, and structural biology, and also offered a realistic assessment of the potential of AI in understanding and managing diabetes.
Affiliation(s)
- Amal Alqahtani
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31541, Saudi Arabia
- Department of Basic Sciences, Deanship of Preparatory Year and Supporting Studies, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 34212, Saudi Arabia
18
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:cancers14071840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Pulmonary nodules can be a sign of bronchogenic carcinoma; detecting them early can limit disease progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation and in pulmonary nodule segmentation and classification using computed tomography (CT) scans, as published in the last two decades, as well as the limitations and future prospects of the field. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma; their early detection facilitates early treatment, which saves lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and to be simpler to apply and use. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Correspondence:
19
Qin C, Hu W, Wang X, Ma X. Application of Artificial Intelligence in Diagnosis of Craniopharyngioma. Front Neurol 2022; 12:752119. [PMID: 35069406 PMCID: PMC8770750 DOI: 10.3389/fneur.2021.752119] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 11/12/2021] [Indexed: 12/24/2022] Open
Abstract
Craniopharyngioma is a congenital brain tumor whose clinical characteristics include hypothalamic-pituitary dysfunction, increased intracranial pressure, and visual field disorders, among other injuries. Its clinical diagnosis mainly depends on radiological examinations (such as computed tomography and magnetic resonance imaging). However, manually assessing numerous radiological images is a challenging task, and the doctor's experience has a great influence on the diagnostic result. The development of artificial intelligence has brought about a great transformation in the clinical diagnosis of craniopharyngioma. This study reviewed the application of artificial intelligence technology in the clinical diagnosis of craniopharyngioma, covering differential classification, prediction of tissue invasion and gene mutation, prognosis prediction, and more. Based on this review, technical routes for intelligent diagnosis based on traditional machine learning models and deep learning models are further proposed. Additionally, regarding the limitations and possibilities of artificial intelligence in craniopharyngioma diagnosis, this study discusses issues requiring attention in future research, including few-shot learning, imbalanced data sets, semi-supervised models, and multi-omics fusion.
Affiliation(s)
- Caijie Qin
- Institute of Information Engineering, Sanming University, Sanming, China
- Wenxing Hu
- University of New South Wales, Sydney, NSW, Australia
- Xinsheng Wang
- School of Information Science and Engineering, Harbin Institute of Technology at Weihai, Weihai, China
- Xibo Ma
- CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
20
Qiu Z, Wu Q, Wang S, Chen Z, Lin F, Zhou Y, Jin J, Xian J, Tian J, Li W. Development of a deep learning-based method to diagnose pulmonary ground-glass nodules by sequential computed tomography imaging. Thorac Cancer 2022; 13:602-612. [PMID: 34994091 PMCID: PMC8841714 DOI: 10.1111/1759-7714.14305] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 12/17/2021] [Accepted: 12/20/2021] [Indexed: 02/05/2023] Open
Abstract
Background Early identification of the malignant propensity of pulmonary ground-glass nodules (GGNs) can relieve the pressure of lesion tracking and enable personalized treatment. The purpose of this study was to develop a deep learning-based method using sequential computed tomography (CT) imaging for diagnosing pulmonary GGNs. Methods This diagnostic study retrospectively enrolled 762 patients with GGNs from West China Hospital of Sichuan University between July 2009 and March 2019. All patients underwent surgical resection and at least two consecutive time-point CT scans. We developed a deep learning-based method to identify GGNs using sequential CT imaging on a training set of 1524 CT sections from 508 patients and then evaluated it on 256 patients in the testing set. Afterwards, an observer study was conducted to compare the diagnostic performance of the deep learning model and two trained radiologists in the testing set. We further performed stratified analyses to assess the impact of histological type, nodule size, time interval between the two CTs, and GGN component. Receiver operating characteristic (ROC) analysis was used to assess the performance of all models. Results The deep learning model that used integrated DL-features from initial and follow-up CT images yielded the best diagnostic performance, with an area under the curve of 0.841. The observer study showed that the accuracies of the deep learning model, the junior radiologist, and the senior radiologist were 77.17%, 66.89%, and 77.03%, respectively. Stratified analyses showed that the deep learning model and the radiologists performed better in the subgroup of nodules larger than 10 mm. With a longer time interval between the two CTs, the deep learning model yielded higher diagnostic accuracy, whereas no consistent pattern emerged for the radiologists. The density of the nodule components did not affect the performance of the deep learning model, whereas the radiologists were affected by the nodule component. Conclusions Deep learning can achieve diagnostic performance on par with or better than radiologists in identifying pulmonary GGNs.
Affiliation(s)
- Zhixin Qiu
- Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Qingxia Wu
- College of Medicine and Biomedical Information Engineering, Northeastern University, Shenyang, China
- Shuo Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China
- Zhixia Chen
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Feng Lin
- Department of Thoracic Surgery, West China Hospital, Sichuan University, Chengdu, China
- Yuyan Zhou
- Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Jing Jin
- Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Jinghong Xian
- Department of Clinical Research, West China Hospital, Sichuan University, Chengdu, China
- Jie Tian
- College of Medicine and Biomedical Information Engineering, Northeastern University, Shenyang, China
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China
- Weimin Li
- Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
21
Chen H, Liu J, Lu L, Wang T, Xu X, Chu A, Peng W, Gong J, Tang W, Gu Y. Volumetric segmentation of ground glass nodule based on 3D attentional cascaded residual U-net and conditional random field. Med Phys 2021; 49:1097-1107. [PMID: 34951492 DOI: 10.1002/mp.15423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 12/08/2021] [Accepted: 12/10/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Ground glass nodule (GGN) segmentation is an important and challenging task in diagnosing early-stage lung adenocarcinomas. Manual delineation of 3D GGNs in computed tomography (CT) images is a subjective, laborious, and tedious task with poor repeatability. PURPOSE To reduce the annotation burden and improve segmentation performance, this study proposes a 3D deep learning-based volumetric segmentation model to segment GGNs in CT images. METHODS A total of 379 GGNs were retrospectively collected from a public database, Shanghai Pulmonary Hospital (SHPH), and Fudan University Shanghai Cancer Center (FUSCC). First, a series of image pre-processing techniques, involving image resampling, intensity normalization, 3D nodule patch cropping, and data augmentation, was applied to the CT scans to generate the input images for the deep learning model. Then, a 3D attentional cascaded residual U-Net (ACRU-Net) was proposed to build the segmentation model using a residual network and an atrous spatial pyramid pooling module. To improve performance, a voxel-based conditional random field (CRF) method was used to refine the segmentation results. Finally, a balanced cross-entropy and Dice combined loss function was applied to train the segmentation model. RESULTS Tested on the SHPH and FUSCC datasets, the proposed method achieved Dice coefficients of 0.721±0.167 and 0.733±0.100, respectively, higher than those of a 3D residual U-Net and of ACRU-Net without CRF optimization. CONCLUSIONS The results demonstrate that combining 3D ACRU-Net with CRF effectively improves GGN segmentation performance. The proposed segmentation model may provide a potential tool to help radiologists in the segmentation and diagnosis of 3D GGNs.
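The "balanced cross-entropy and Dice combined loss" mentioned in this abstract can be sketched in a few lines. This is a minimal, illustrative pure-Python version, not the authors' implementation: the function names, the equal 0.5 weighting, and the smoothing term `eps` are assumptions.

```python
import math

def soft_dice(probs, target, eps=1e-7):
    """Soft Dice overlap between predicted probabilities and a 0/1 mask."""
    inter = sum(p * t for p, t in zip(probs, target))
    return (2.0 * inter + eps) / (sum(probs) + sum(target) + eps)

def combined_loss(probs, target, alpha=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and Dice loss (1 - Dice).

    `probs` are predicted foreground probabilities per voxel, `target`
    the ground-truth binary mask; `alpha` balances the two terms.
    """
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(probs, target)) / len(target)
    return alpha * bce + (1 - alpha) * (1.0 - soft_dice(probs, target, eps))
```

In real segmentation training both terms are computed on the network's probability maps so the loss stays differentiable; combining them is popular because cross-entropy gives dense per-voxel gradients while the Dice term counteracts the strong foreground/background class imbalance typical of small nodules.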
Affiliation(s)
- Hui Chen
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Jiyu Liu
- Department of Radiology, Shanghai Pulmonary Hospital, Shanghai, 200433, China
- Liangjian Lu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Ting Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Xiaomin Xu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Aina Chu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
22
Sun K, Chen S, Zhao J, Wang B, Yang Y, Wang Y, Wu C, Sun X. Convolutional Neural Network-Based Diagnostic Model for a Solid, Indeterminate Solitary Pulmonary Nodule or Mass on Computed Tomography. Front Oncol 2021; 11:792062. [PMID: 34993146 PMCID: PMC8724915 DOI: 10.3389/fonc.2021.792062] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2021] [Accepted: 11/19/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE To establish a non-invasive diagnostic model based on convolutional neural networks (CNNs) to distinguish benign from malignant lesions manifesting as a solid, indeterminate solitary pulmonary nodule (SPN) or mass (SPM) on computed tomography (CT). METHOD A total of 459 patients with solid indeterminate SPNs/SPMs on CT were ultimately included in this retrospective study and assigned to the training (n=366), validation (n=46), and test (n=47) sets. Histopathologic analysis was available for each patient. An end-to-end CNN model was proposed to predict the natural history of solid indeterminate SPNs/SPMs on CT. Receiver operating characteristic (ROC) curves were plotted to evaluate the predictive performance of the proposed CNN model. The accuracy, sensitivity, and specificity of diagnoses made by radiologists alone were compared with those made by radiologists aided by the CNN model, to assess its clinical utility. RESULTS For the CNN model, the AUC was 0.91 (95% confidence interval [CI]: 0.83-0.99) in the test set. The diagnostic accuracy of radiologists with the CNN model was significantly higher than that without it (89% vs. 66%, P<0.01; 87% vs. 61%, P<0.01; 85% vs. 66%, P=0.03, in the training, validation, and test sets, respectively). In addition, while sensitivity increased slightly, specificity improved significantly, by an average of 42% (from 43%, 33%, and 42% to 82%, 78%, and 84% in the three sets, respectively; P<0.01 for all). CONCLUSION The CNN model could be a valuable tool for non-invasively differentiating benign from malignant lesions manifesting as solid, indeterminate SPNs/SPMs on CT.
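Several abstracts in this listing report AUC values from ROC analysis. As a reminder of what that number means, the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counting half), which can be computed directly from the pairwise ordering. This is an illustrative sketch, unrelated to any code used in the cited studies:

```python
def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity: the fraction
    of positive/negative pairs the classifier orders correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfectly separated scores give an AUC of 1.0, random scoring gives about 0.5, so figures such as the 0.91 reported above mean the model ranks roughly nine in ten positive/negative pairs correctly.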
Affiliation(s)
- Ke Sun
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Shouyu Chen
- Department of Computer Science and Technology, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Jiabi Zhao
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Bin Wang
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Yang Yang
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Yin Wang
- Department of Computer Science and Technology, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Chunyan Wu
- Department of Pathology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Xiwen Sun
- Department of Radiology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
23
Castillo T. JM, Arif M, Starmans MPA, Niessen WJ, Bangma CH, Schoots IG, Veenland JF. Classification of Clinically Significant Prostate Cancer on Multi-Parametric MRI: A Validation Study Comparing Deep Learning and Radiomics. Cancers (Basel) 2021; 14:12. [PMID: 35008177 PMCID: PMC8749796 DOI: 10.3390/cancers14010012] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 12/01/2021] [Accepted: 12/03/2021] [Indexed: 12/16/2022] Open
Abstract
The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods, evaluation on various external data sets is crucial. While deep-learning and radiomics approaches have been compared on a single-center data set, a comparison of both approaches on data sets from different centers and different scanners has been lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, one a publicly available patient cohort (n = 195 patients) and the other containing data from patients from two hospitals (n = 79 patients). Using multiparametric MRI (mpMRI), the radiologist tumor delineations and pathology reports were collected for all patients. During training, one of our patient cohorts (n = 271 patients) was used for both deep-learning- and radiomics-model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performance of the models was assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91, and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73, and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, proved the more accurate tool for significant-PCa classification in the three unseen test sets when compared to a fully automated deep-learning model.
Affiliation(s)
- Jose M. Castillo T.
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Muhammad Arif
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Martijn P. A. Starmans
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Wiro J. Niessen
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Faculty of Applied Sciences, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
- Chris H. Bangma
- Department of Urology, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Ivo G. Schoots
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Jifke F. Veenland
- Department of Radiology and Nuclear Medicine, Erasmus MC, 3015 GD Rotterdam, The Netherlands
- Department of Medical Informatics, Erasmus MC, 3015 GD Rotterdam, The Netherlands
24
Wang J, Yuan C, Han C, Wen Y, Lu H, Liu C, She Y, Deng J, Li B, Qian D, Chen C. IMAL-Net: Interpretable multi-task attention learning network for invasive lung adenocarcinoma screening in CT images. Med Phys 2021; 48:7913-7929. [PMID: 34674280 DOI: 10.1002/mp.15293] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Revised: 08/26/2021] [Accepted: 09/29/2021] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Feature maps created from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, require quantitative and objective interpretability rather than just visualization. In this paper, we propose a novel interpretable multi-task attention learning network, named IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which takes advantage of a segmentation prior to assist interpretable classification. METHODS Two sub-ResNets are first integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Then, numerous radiomic features from the segmentation results are concatenated with high-level semantic features from the classification subnetwork via fully connected (FC) layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (named FSM) is designed to quantify the crucial radiomic features that most affect the prediction for each sample, and thus it can provide clinically applicable interpretability for the prediction result. RESULTS Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net can achieve an AUC score of 93.8% ± 1.1% and a recall score of 93.8% ± 2.8% for identification of invasive lung adenocarcinoma. CONCLUSIONS Fusing semantic features and radiomic features achieves clear improvements in the invasiveness classification task. Moreover, by learning more fine-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but can also be used for both visual explanation and objective analysis of the classification results.
Affiliation(s)
- Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Cheng Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Can Han
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yaofeng Wen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Chen Liu
- Department of Radiology, Southwest Hospital, Third Military University (Army Medical University), Chongqing, China
- Yunlang She
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Jiajun Deng
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chang Chen
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
25
Shi L, Zhao J, Peng X, Wang Y, Liu L, Sheng M. CT-based radiomics for differentiating invasive adenocarcinomas from indolent lung adenocarcinomas appearing as ground-glass nodules: A systematic review. Eur J Radiol 2021; 144:109956. [PMID: 34563797 DOI: 10.1016/j.ejrad.2021.109956] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 11/26/2022]
Abstract
PURPOSE To provide an overview of the available studies investigating the use of computed tomography (CT) radiomics features for differentiating invasive adenocarcinomas (IAC) from indolent lung adenocarcinomas presenting as ground-glass nodules (GGNs), to identify the biases of these studies, and to propose directions for future research. METHOD PubMed, Embase, and the Web of Science Core Collection were searched for relevant studies. Studies differentiating IAC from indolent lung adenocarcinomas appearing as GGNs based on CT radiomics features were included. Basic information, patient information, CT-scanner information, technique information, and performance information were extracted for each included study. The quality of each study was assessed using the Radiomics Quality Score (RQS) and the Prediction model Risk of Bias Assessment Tool (PROBAST). RESULTS Twenty-eight studies were included, with patient numbers ranging from 34 to 794. All were retrospective. Patients in three studies came from multiple centers. Most studies segmented regions of interest manually. Pyradiomics and AK software were the tools most frequently used for feature extraction. The number of radiomics features extracted varied from 7 to 10329. Logistic regression was the most frequently chosen model. Entropy was identified as a radiomics signature in seven studies. The AUC of the included studies ranged from 0.77 to 0.98 across 15 validation sets. The percentage RQS ranged from 3% to 50%. According to PROBAST, the overall risk of bias (ROB) was high in 89.3% (25/28) of the included studies, unclear in 7.1% (2/28), and low in 3.6% (1/28). All studies were rated low concern regarding the applicability of the primary studies to the review question. CONCLUSION CT radiomics-based models are promising and encouraging for differentiating IAC from indolent lung adenocarcinomas, though they require methodological rigor. Well-designed studies are necessary to demonstrate their validity, and standardization of methods and results can promote their use in daily clinical practice.
Affiliation(s)
- Lili Shi
- Medical School, Nantong University, Nantong, China
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Jinli Zhao
- Department of Radiology, Affiliated Hospital of Nantong University, Nantong, China
- Xueqing Peng
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Yunpeng Wang
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Lei Liu
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- School of Basic Medical Sciences, and Academy of Engineering and Technology, Fudan University, Shanghai, China
- Meihong Sheng
- Department of Radiology, The Second Affiliated Hospital of Nantong University and Nantong First People's Hospital, Nantong, China
26
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022] Open
Abstract
The increasing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands ever more of the doctor's time and attention, which has encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification, and quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting, and classifying individual constituent elements of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper consists primarily in its unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures, and specifically constructed DL architecture combinations, highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research, with the aim of increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
27
Gong J, Liu J, Li H, Zhu H, Wang T, Hu T, Li M, Xia X, Hu X, Peng W, Wang S, Tong T, Gu Y. Deep Learning-Based Stage-Wise Risk Stratification for Early Lung Adenocarcinoma in CT Images: A Multi-Center Study. Cancers (Basel) 2021; 13:cancers13133300. [PMID: 34209366 PMCID: PMC8269183 DOI: 10.3390/cancers13133300] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/28/2021] [Accepted: 06/28/2021] [Indexed: 12/21/2022] Open
Abstract
Simple Summary Prediction of the malignancy and invasiveness of ground glass nodules (GGNs) from computed tomography images is a crucial task for radiologists in risk stratification of early-stage lung adenocarcinoma. In order to solve this challenge, a two-stage deep neural network (DNN) was developed based on the images collected from four centers. A multi-reader multi-case observer study was conducted to evaluate the model capability. The performance of our model was comparable or even more accurate than that of senior radiologists, with average area under the curve values of 0.76 and 0.95 for two tasks, respectively. Findings suggest (1) a positive trend between the diagnostic performance and radiologist’s experience, (2) DNN yielded equivalent or even higher performance in comparison with senior radiologists, and (3) low image resolution reduced the model performance in predicting the risks of GGNs. Abstract This study aims to develop a deep neural network (DNN)-based two-stage risk stratification model for early lung adenocarcinomas in CT images, and investigate the performance compared with practicing radiologists. A total of 2393 GGNs were retrospectively collected from 2105 patients in four centers. All the pathologic results of GGNs were obtained from surgically resected specimens. A two-stage deep neural network was developed based on the 3D residual network and atrous convolution module to diagnose benign and malignant GGNs (Task1) and classify between invasive adenocarcinoma (IA) and non-IA for these malignant GGNs (Task2). A multi-reader multi-case observer study with six board-certified radiologists’ (average experience 11 years, range 2–28 years) participation was conducted to evaluate the model capability. 
The DNN yielded area under the receiver operating characteristic curve (AUC) values of 0.76 ± 0.03 (95% confidence interval (CI): 0.69–0.82) and 0.96 ± 0.02 (95% CI: 0.92–0.98) for Task 1 and Task 2, which were equivalent to or higher than those of the senior radiologist group, whose average AUC values were 0.76 and 0.95, respectively (p > 0.05). As the CT slice thickness increased from 1.15 ± 0.36 mm to 1.73 ± 0.64 mm, DNN performance decreased by 0.08 and 0.22 on the two tasks. The results demonstrated (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution decreased model performance in predicting the risks of GGNs. Once tested prospectively in clinical practice, the DNN could have the potential to assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
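The AUC-with-confidence-interval figures reported above can be reproduced in spirit with a small bootstrap sketch. This is not the authors' evaluation code; the data below are hypothetical, and scikit-learn's `roc_auc_score` is assumed available:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Point AUC plus a bootstrap percentile confidence interval."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)       # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                      # AUC needs both classes present
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return point, (lo, hi)

# toy example: 0 = benign GGN, 1 = malignant GGN
y = [0, 0, 0, 0, 1, 1, 1, 1]
p = [0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9]
auc, (lo, hi) = auc_with_ci(y, p)
```

On real data one would bootstrap at the patient level, as the study splits its data, rather than at the nodule level.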
Affiliation(s)
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Jiyu Liu
- Department of Radiology, Shanghai Pulmonary Hospital, 507 Zheng Min Road, Shanghai 200433, China
- Haiming Li
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Hui Zhu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Tingting Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Tingdan Hu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Menglei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Xianwu Xia
- Department of Radiology, Municipal Hospital Affiliated to Taizhou University, Taizhou 318000, China
- Xianfang Hu
- Department of Radiology, Huzhou Central Hospital, Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou 313000, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Shengping Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Correspondence: (S.W.); (T.T.); (Y.G.); Tel.: +86-13818521975 (S.W.); +86-18017312912 (T.T.); +86-18017312040 (Y.G.)
- Tong Tong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
|
28
|
On the performance of lung nodule detection, segmentation and classification. Comput Med Imaging Graph 2021; 89:101886. [PMID: 33706112 DOI: 10.1016/j.compmedimag.2021.101886] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 01/11/2021] [Accepted: 02/02/2021] [Indexed: 01/10/2023]
Abstract
Computed tomography (CT) screening is an effective way to detect lung cancer early and thereby improve the survival rate of this deadly disease. For more than two decades, image-processing techniques such as nodule detection, segmentation, and classification have been extensively studied to assist physicians in identifying nodules from hundreds of CT slices, to measure the shapes and HU distributions of nodules automatically, and to assess their malignancy. Thanks to parallel computation, multi-layer convolutions, nonlinear pooling operations, and big-data learning strategies, recent deep-learning algorithms have shown great progress in lung nodule screening and computer-assisted diagnosis (CADx) applications due to their high sensitivity and low false-positive rates. This paper presents a survey of state-of-the-art deep-learning-based lung nodule screening and analysis techniques, focusing on their performance and clinical applications, with the aim of helping the reader better understand the current performance, limitations, and future trends of lung nodule analysis.
|
29
|
Wu G, Jochems A, Refaee T, Ibrahim A, Yan C, Sanduleanu S, Woodruff HC, Lambin P. Structural and functional radiomics for lung cancer. Eur J Nucl Med Mol Imaging 2021; 48:3961-3974. [PMID: 33693966 PMCID: PMC8484174 DOI: 10.1007/s00259-021-05242-1] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/03/2021] [Indexed: 12/19/2022]
Abstract
INTRODUCTION Lung cancer ranks second in new cancer cases and first in cancer-related deaths worldwide. Precision medicine is altering treatment approaches and improving outcomes in this patient population. Radiological images are a powerful non-invasive tool in the screening and diagnosis of early-stage lung cancer, as well as in treatment-strategy support, prognosis assessment, and follow-up for advanced-stage lung cancer. Recently, radiological features have evolved from solely semantic features to include (handcrafted and deep) radiomic features. Radiomics entails the extraction and analysis of quantitative features from medical images using mathematical and machine learning methods to explore possible ties with biology and clinical outcomes. METHODS Here, we outline the latest applications of both structural and functional radiomics in the detection, diagnosis, and prediction of pathology, gene mutation, treatment strategy, follow-up, treatment response, and prognosis in lung cancer. CONCLUSION The major drawbacks of radiomics are the lack of large, high-quality datasets, the lack of methodological standardization, the black-box nature of deep learning, and limited reproducibility. Addressing these limitations is a prerequisite for the clinical implementation of radiomics. Future directions include safer and more efficient model-training modes, merging multi-modality images, and combining multiple disciplines or multi-omics data to form "Medomics."
Affiliation(s)
- Guangyao Wu
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Arthur Jochems
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Turkey Refaee
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, Jazan University, Jazan, Saudi Arabia
- Abdalla Ibrahim
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, Hospital Center Universitaire De Liege, Liege, Belgium
- Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, Aachen, Germany
- Chenggong Yan
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Sebastian Sanduleanu
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University Medical Centre+, 6229, Maastricht, The Netherlands
- Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
|
30
|
Hu X, Gong J, Zhou W, Li H, Wang S, Wei M, Peng W, Gu Y. Computer-aided diagnosis of ground glass pulmonary nodule by fusing deep learning and radiomics features. Phys Med Biol 2021; 66:065015. [PMID: 33596552 DOI: 10.1088/1361-6560/abe735] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
OBJECTIVES This study aims to develop a computer-aided diagnosis (CADx) scheme to classify benign and malignant ground glass nodules (GGNs), and to fuse deep learning and radiomics imaging features to improve classification performance. METHODS We retrospectively collected 513 surgically and histopathologically confirmed GGNs from two centers; 100 were benign and 413 were malignant, and all malignant tumors were stage I lung adenocarcinoma. To segment GGNs, we trained a 3D U-Net built on a deep convolutional neural network with a residual architecture. Then, starting from the pre-trained U-Net, we used a transfer learning approach to build a deep neural network (DNN) that classifies GGNs as benign or malignant. Using the GGN segmentation results generated by the 3D U-Net, we also developed a CT radiomics model through a series of image-processing steps: radiomics feature extraction, feature selection, the synthetic minority over-sampling technique, and support vector machine classifier training/testing. Finally, we applied an information fusion method to combine the prediction scores generated by the DNN-based CADx model and the CT radiomics model. To evaluate the proposed model, we conducted a comparison experiment on an independent testing dataset. RESULTS Compared with the DNN model and the radiomics model alone, the fusion model yielded a significantly higher area under the receiver operating characteristic curve (AUC) of 0.73 ± 0.06 (P < 0.01). The fusion model achieved an accuracy of 75.6%, an F1 score of 84.6%, a weighted average F1 score of 70.3%, and a Matthews correlation coefficient of 43.6%, each higher than the DNN and radiomics models individually.
CONCLUSIONS Our experimental results demonstrated that (1) a CADx scheme is feasible for the diagnosis of early-stage lung adenocarcinoma, (2) deep image features and radiomics features provide complementary information for classifying benign and malignant GGNs, and (3) transfer learning is an effective way to build a DNN model from a limited dataset. Thus, to build a robust image-analysis-based CADx model, one can combine different types of image features to decode the imaging phenotypes of GGNs.
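The late-fusion step this study describes, combining the DNN's and the radiomics model's prediction scores, can be sketched as a weighted average whose weight is tuned on a validation set. This is an illustrative sketch, not the paper's published fusion method; all data and the weight grid below are hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_scores(p_dnn, p_rad, w):
    """Late fusion: weighted average of the two models' malignancy scores."""
    return w * np.asarray(p_dnn, dtype=float) + (1.0 - w) * np.asarray(p_rad, dtype=float)

def best_fusion_weight(y_val, p_dnn, p_rad, grid=np.linspace(0.0, 1.0, 21)):
    """Choose the fusion weight that maximizes AUC on held-out validation data."""
    return max(grid, key=lambda w: roc_auc_score(y_val, fuse_scores(p_dnn, p_rad, w)))

# toy validation data (0 = benign GGN, 1 = malignant GGN)
y_val = [0, 0, 1, 1, 0, 1]
p_dnn = [0.2, 0.4, 0.9, 0.7, 0.3, 0.8]   # hypothetical DNN scores
p_rad = [0.3, 0.6, 0.5, 0.8, 0.1, 0.9]   # hypothetical radiomics scores
w = best_fusion_weight(y_val, p_dnn, p_rad)
fused = fuse_scores(p_dnn, p_rad, w)
```

The appeal of score-level fusion is that the two models stay independent; only their calibrated outputs need to be combined.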
Affiliation(s)
- Xianfang Hu
- Department of Radiology, Huzhou Central Hospital, Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou, Zhejiang, 313000, People's Republic of China
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Zhou
- Department of Radiology, Huzhou Central Hospital, Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou, Zhejiang, 313000, People's Republic of China
- Haiming Li
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shengping Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Meng Wei
- Medical Imaging Center, The First Affiliated Hospital of Wannan Medical College, No. 2 Zheshan West Road, Wuhu, Anhui, 241001, People's Republic of China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
|
31
|
Ashraf SF, Yin K, Meng CX, Wang Q, Wang Q, Pu J, Dhupar R. Predicting benign, preinvasive, and invasive lung nodules on computed tomography scans using machine learning. J Thorac Cardiovasc Surg 2021; 163:1496-1505.e10. [PMID: 33726909 DOI: 10.1016/j.jtcvs.2021.02.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/31/2020] [Revised: 01/28/2021] [Accepted: 02/02/2021] [Indexed: 12/17/2022]
Abstract
OBJECTIVE The study objective was to investigate whether machine learning algorithms can predict, from computed tomography images alone, whether a lung nodule is benign, adenocarcinoma, or a preinvasive adenocarcinoma subtype. METHODS A dataset of chest computed tomography scans containing lung nodules, together with their pathologic diagnoses, was collected from several sources. The dataset was split randomly at the patient level into training (70%), internal validation (15%), and independent test (15%) sets. Two machine learning algorithms were developed, trained, and validated: the first used a support vector machine model, and the second used deep learning technology, a convolutional neural network. Receiver operating characteristic analysis was used to evaluate classification performance on the test dataset. RESULTS The support vector machine/convolutional neural network-based models classified nodules into 6 categories, yielding areas under the curve of 0.59/0.65 for atypical adenomatous hyperplasia versus adenocarcinoma in situ, 0.87/0.86 for minimally invasive adenocarcinoma versus invasive adenocarcinoma, 0.76/0.72 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma, 0.89/0.87 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma + invasive adenocarcinoma, and 0.93/0.92 for atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma. Classifying benign versus atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma resulted in a micro-average area under the curve of 0.93/0.94 for the support vector machine/convolutional neural network models, respectively. The convolutional neural network-based methods had higher sensitivities than the support vector machine-based methods, but lower specificities and accuracies.
CONCLUSIONS The machine learning algorithms demonstrated reasonable performance in differentiating benign versus preinvasive versus invasive adenocarcinoma from computed tomography images alone, although prediction accuracy varied across subtypes. This holds the potential for improved diagnostic capabilities through less-invasive means.
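The micro-average AUC used for the three-way benign versus preinvasive versus invasive comparison can be sketched with scikit-learn: binarize the labels one-vs-rest and pool every (label, score) pair into a single ROC curve. This is a generic sketch of the metric, not the study's code, and the toy data are hypothetical:

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

def micro_average_auc(y_true, prob, classes):
    """Micro-averaged AUC for a multi-class problem: binarize the labels
    one-vs-rest and pool all (label, score) pairs into one ROC curve."""
    y_bin = label_binarize(y_true, classes=classes)
    return roc_auc_score(y_bin.ravel(), np.asarray(prob, dtype=float).ravel())

# toy 3-class example: 0 = benign, 1 = preinvasive, 2 = invasive
y = [0, 1, 2, 0, 2]
prob = [[0.8, 0.1, 0.1],
        [0.2, 0.6, 0.2],
        [0.1, 0.2, 0.7],
        [0.7, 0.2, 0.1],
        [0.2, 0.3, 0.5]]
auc = micro_average_auc(y, prob, classes=[0, 1, 2])
```

Micro-averaging weights every class-score pair equally, so frequent classes dominate; a macro average would instead average the per-class AUCs.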
Affiliation(s)
- Syed Faaz Ashraf
- Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa
- Ke Yin
- Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Qi Wang
- Department of Radiology, The Fourth Hospital of Hebei Medical University, Hebei, China
- Qiong Wang
- Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jiantao Pu
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pa
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pa
- Rajeev Dhupar
- Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa
- VA Pittsburgh Healthcare System, Pittsburgh, Pa
|
32
|
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117 DOI: 10.1016/j.media.2021.101985] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 12/04/2020] [Accepted: 01/26/2021] [Indexed: 12/27/2022]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how doctors are trained, mimic their diagnostic patterns, or focus on the features or areas to which they pay particular attention. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the kinds of medical domain knowledge that have been utilized and their corresponding integration methods. We also discuss current challenges and directions for future research.
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
|
33
|
Yin P, Mao N, Chen H, Sun C, Wang S, Liu X, Hong N. Machine and Deep Learning Based Radiomics Models for Preoperative Prediction of Benign and Malignant Sacral Tumors. Front Oncol 2020; 10:564725. [PMID: 33178593 PMCID: PMC7596901 DOI: 10.3389/fonc.2020.564725] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Accepted: 09/21/2020] [Indexed: 12/23/2022] Open
Abstract
Purpose To assess the performance of a deep neural network (DNN) and machine learning based radiomics on 3D computed tomography (CT) and clinical characteristics in predicting benign versus malignant sacral tumors. Materials and methods This single-center retrospective analysis included 459 patients with pathologically proven sacral tumors. After semi-automatic segmentation, 1,316 hand-crafted radiomics features were extracted for each patient. All models were built on the training set (321 patients) and tested on the validation set (138 patients). A DNN model and four machine learning classifiers (logistic regression [LR], random forest [RF], support vector machine [SVM], and k-nearest neighbor [KNN]) based on CT features and clinical characteristics were built. The area under the receiver operating characteristic curve (AUC) and accuracy (ACC) were used to evaluate the models. Results In total, 459 patients (255 males, 204 females; mean age 42.1 ± 17.8 years, range 4–82 years) were enrolled, including 206 benign and 253 malignant tumors. Sex, age, and tumor size differed significantly between benign and malignant tumors (χ² for sex = 10.854, Z for age = −6.616, Z for size = 2.843; P < 0.05). The radscore, sex, and age were important indicators for differentiating benign from malignant sacral tumors (odds ratios 2.492, 2.236, and 1.037, respectively; P < 0.01). Among the four clinical-radiomics models (RMs), the clinical-LR model performed best on the validation set (AUC = 0.84, ACC = 0.81). The clinical-DNN model also achieved high performance on the validation set (AUC = 0.83, ACC = 0.76) in identifying benign and malignant sacral tumors. Conclusions Both the clinical-LR and clinical-DNN models could substantially assist radiologists in the clinical diagnosis of sacral tumors.
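The four-classifier comparison in this study (LR, RF, SVM, KNN, each scored by AUC and ACC on a held-out set) follows a standard scikit-learn pattern. The sketch below uses simulated features, not the study's 1,316 radiomics features or its clinical data, and is only an illustration of the evaluation loop:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# stand-in data: 459 "patients" with a small simulated feature matrix
X, y = make_classification(n_samples=459, n_features=20, n_informative=8,
                           random_state=0)
# mirror the study's 321-patient training / 138-patient validation split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=138,
                                          random_state=0, stratify=y)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "KNN": KNeighborsClassifier(),
}
results = {}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]      # probability of the positive class
    results[name] = {"AUC": roc_auc_score(y_te, p),
                     "ACC": accuracy_score(y_te, clf.predict(X_te))}
```

In practice one would add feature selection and cross-validation before trusting the comparison, as the study's semi-automatic radiomics pipeline does.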
Affiliation(s)
- Ping Yin
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
- Hao Chen
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Chao Sun
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Sicong Wang
- Pharmaceutical Diagnostics, GE Healthcare, Shanghai, China
- Xia Liu
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Nan Hong
- Department of Radiology, Peking University People's Hospital, Beijing, China
|
34
|
Qi L, Lu W, Wu N, Wang J. Persistent pulmonary subsolid nodules with a solid component smaller than 6 mm: what do we know? J Thorac Dis 2020; 12:4584-4587. [PMID: 32944381 PMCID: PMC7475582 DOI: 10.21037/jtd-20-1972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Linlin Qi
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wenwen Lu
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Ning Wu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianwei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
|
35
|
Liang G, Fan W, Luo H, Zhu X. The emerging roles of artificial intelligence in cancer drug development and precision therapy. Biomed Pharmacother 2020; 128:110255. [DOI: 10.1016/j.biopha.2020.110255] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2020] [Revised: 04/22/2020] [Accepted: 05/10/2020] [Indexed: 12/12/2022] Open
|