1
Wang TW, Hong JS, Chiu HY, Chao HS, Chen YM, Wu YT. Standalone deep learning versus experts for diagnosis lung cancer on chest computed tomography: a systematic review. Eur Radiol 2024. [PMID: 38777902] [DOI: 10.1007/s00330-024-10804-6]
Abstract
PURPOSE To compare the diagnostic performance of standalone deep learning (DL) algorithms and human experts in lung cancer detection on chest computed tomography (CT) scans. MATERIALS AND METHODS This study searched PubMed, Embase, and Web of Science from inception until November 2023. We focused on adult lung cancer patients and compared the efficacy of DL algorithms and expert radiologists in disease diagnosis on CT scans. Quality assessment was performed using QUADAS-2, QUADAS-C, and CLAIM. Bivariate random-effects and subgroup analyses were performed for tasks (malignancy classification vs invasiveness classification), imaging modalities (CT vs low-dose CT [LDCT] vs high-resolution CT), study region, software used, and publication year. RESULTS We included 20 studies on various aspects of lung cancer diagnosis on CT scans. Quantitatively, DL algorithms exhibited higher sensitivity (82%) and specificity (75%) than human experts (sensitivity 81%, specificity 69%); the difference in specificity was statistically significant, whereas the difference in sensitivity was not. The DL algorithms' performance varied across imaging modalities and tasks, demonstrating the need for tailored optimization of DL algorithms. Notably, DL algorithms matched experts in sensitivity on standard CT while surpassing them in specificity, but showed higher sensitivity with lower specificity on LDCT scans. CONCLUSION DL algorithms demonstrated improved accuracy over human readers in malignancy and invasiveness classification on CT scans. However, their performance varies by imaging modality, underlining the importance of continued research to fully assess DL algorithms' diagnostic effectiveness in lung cancer. CLINICAL RELEVANCE STATEMENT DL algorithms have the potential to refine lung cancer diagnosis on CT, matching human sensitivity and surpassing it in specificity. These findings call for further DL optimization across imaging modalities, aiming to advance clinical diagnostics and patient outcomes. KEY POINTS Lung cancer diagnosis by CT is challenging and can be improved with AI integration. DL shows higher accuracy in lung cancer detection on CT than human experts. Enhanced DL accuracy could lead to improved lung cancer diagnosis and outcomes.
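The pooled sensitivity and specificity above come from a bivariate random-effects meta-analysis. As a rough sketch of the pooling idea only — a simplified univariate DerSimonian-Laird estimate on the logit scale, not the full bivariate model, with an illustrative function name and toy counts:

```python
import math

def pool_logit_dl(successes, totals):
    """Pool per-study proportions (e.g., sensitivities) on the logit scale
    with a DerSimonian-Laird random-effects estimate. Simplified univariate
    stand-in for the bivariate model used in such reviews."""
    y, v = [], []
    for s, n in zip(successes, totals):
        p = (s + 0.5) / (n + 1.0)          # 0.5 correction guards 0%/100% studies
        y.append(math.log(p / (1 - p)))     # logit-transformed proportion
        v.append(1.0 / (s + 0.5) + 1.0 / (n - s + 0.5))  # approx. logit variance
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # DerSimonian-Laird between-study variance tau^2
    if len(y) < 2:
        tau2 = 0.0
    else:
        q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)
    wstar = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    return 1.0 / (1.0 + math.exp(-mu))      # back-transform to a proportion
```

The pooled value always lands inside the range of the individual study proportions; the bivariate model additionally models the correlation between sensitivity and specificity.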
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Hwa-Yen Chiu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Yuh-Min Chen
- School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
2
Pan Z, Hu G, Zhu Z, Tan W, Han W, Zhou Z, Song W, Yu Y, Song L, Jin Z. Predicting Invasiveness of Lung Adenocarcinoma at Chest CT with Deep Learning Ternary Classification Models. Radiology 2024; 311:e232057. [PMID: 38591974] [DOI: 10.1148/radiol.232057]
Abstract
Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), which integrates two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance, including accuracy, specificity, and sensitivity, was assessed. The χ2 test was used to compare model performance in subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal test (n = 966) sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). For classifying minimally invasive adenocarcinoma, accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. See also the editorial by Sohn and Fields in this issue.
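One way the sequential adjudication described above could be wired is sketched below. This is a hypothetical illustration only — the threshold, class names, and fallback rule are assumptions, not the paper's exact logic: binary "specialist" models overrule the ternary model's top call only when they are confident, mimicking a multireader panel resolving discordance.

```python
def adjudicate(p_ternary, p_invasive, p_preinvasive, tau=0.7):
    """Resolve one nodule's class among three invasiveness categories.

    p_ternary:     dict mapping class name -> probability (ternary model)
    p_invasive:    P(invasive) from a binary invasive-vs-rest model
    p_preinvasive: P(preinvasive) from a binary preinvasive-vs-rest model
    tau:           confidence threshold for a binary model to overrule
    """
    top = max(p_ternary, key=p_ternary.get)  # ternary model's initial call
    # Sequential adjudication: a confident binary specialist that disagrees
    # with the ternary call wins; otherwise keep the ternary argmax.
    if p_invasive >= tau and top != "invasive":
        return "invasive"
    if p_preinvasive >= tau and top != "preinvasive":
        return "preinvasive"
    return top
```

With this rule, a ternary call of "minimally_invasive" is overturned when the invasive-vs-rest model is confident, but kept when both binary models are uncertain.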
Affiliation(s)
- Zhengsong Pan
- From the Department of Radiology (Z.P., Z. Zhu, W.S., L.S., Z.J.), Medical Research Center (G.H.), State Key Laboratory of Complex Severe and Rare Disease (G.H.), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuaifuyuan, Dongcheng District, Beijing 100730, China; 4 + 4 Medical Doctor Program (Z.P., Z. Zhu), Department of Epidemiology and Health Statistics (W.H.), Institute of Basic Medicine Sciences (W.H.), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Deepwise AI Laboratory, Beijing Deepwise & League of PhD Technology, Beijing, China (W.T., Z. Zhou, Y.Y.); and Department of Computer Science, The University of Hong Kong, Hong Kong, China (Y.Y.)
- Ge Hu
- Zhenchen Zhu
- Weixiong Tan
- Wei Han
- Zhen Zhou
- Wei Song
- Yizhou Yu
- Lan Song
- Zhengyu Jin
3
Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023; 9:2158-2189. [PMID: 38133073] [PMCID: PMC10748093] [DOI: 10.3390/tomography9060169]
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and subject to artifacts and noise, which compromise image quality and accuracy. Deep learning offers a promising way to address these challenges and improve CT image reconstruction. This review therefore aims to determine which techniques are used for 3D deep learning in CT reconstruction and to identify the accessible training and validation datasets. The search was performed on five databases. After a careful assessment of each record against the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning approaches for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
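At the core of the 3D CNNs this review surveys is the 3D convolution, which slides a kernel through a CT volume. A minimal educational sketch of that single operation (naive loops over a valid-mode output; real reconstruction networks use learned kernels and optimized GPU implementations):

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Valid-mode 3D convolution of one volume with one kernel --
    the core operation a 3D CNN layer applies many times in parallel."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.empty((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Elementwise product of the kernel with one sub-volume
                out[i, j, k] = np.sum(volume[i:i + d, j:j + h, k:k + w] * kernel)
    return out
```

A 3x3x3 averaging kernel over a constant volume returns that constant, which is a quick sanity check on the indexing.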
Affiliation(s)
- Hameedur Rahman
- Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan
- Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq
- Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi
- Department of Computer Science, Faculty of Computing & AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan
- Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim
- Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
4
Zhao ZR, Yu YH, Lin ZC, Ma DH, Lin YB, Hu J, Luo QQ, Li GF, Chen C, Yang YL, Yang JC, Lin YB, Long H. Invasiveness assessment by artificial intelligence against intraoperative frozen section for pulmonary nodules ≤ 3 cm. J Cancer Res Clin Oncol 2023; 149:7759-7765. [PMID: 37016100] [DOI: 10.1007/s00432-023-04713-2]
Abstract
PURPOSE To investigate the performance of an artificial intelligence (AI) algorithm for assessing the malignancy and invasiveness of pulmonary nodules in a multicenter cohort. METHODS A previously developed deep learning system based on a 3D convolutional neural network was used to predict tumor malignancy and invasiveness. A dataset of pulmonary nodules measuring no more than 3 cm was assembled with CT images and pathologic information. Receiver operating characteristic (ROC) curve analysis was used to evaluate the performance of the system. RESULTS A total of 466 resected pulmonary nodules were included in this study. The areas under the curve (AUCs) of the deep learning system for predicting malignancy, compared against pathological reports, were 0.80, 0.80, and 0.75 for all, subcentimeter, and solid nodules, respectively. Additionally, the AUC for AI-assisted prediction of invasive adenocarcinoma (IA) among subsolid lesions (n = 184) was 0.88. Most malignancies larger than 1 cm that the AI system misdiagnosed as benign (26/250, 10.4%) presented as solid nodules (19/26, 73.1%) on CT. In an exploratory analysis of nodules that underwent intraoperative pathologic examination, the concordance rate between the AI model and frozen section examination in identifying IA was 0.69, with a sensitivity of 0.50 and a specificity of 0.97. CONCLUSION The deep learning system can discriminate malignant disease in pulmonary nodules measuring no more than 3 cm. The AI model has a high positive predictive value for invasive adenocarcinoma relative to intraoperative frozen section examination, which may help determine an individualized surgical strategy.
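The concordance, sensitivity, and specificity reported against frozen section reduce to simple confusion-matrix arithmetic. A minimal sketch (the inputs below are toy values, not the study's 466 nodules; `agreement_stats` is an illustrative helper):

```python
def agreement_stats(ai_calls, frozen_calls):
    """ai_calls / frozen_calls: parallel lists of booleans
    (True = invasive adenocarcinoma). Returns the concordance rate plus
    the AI's sensitivity and specificity against the frozen-section result."""
    tp = sum(a and f for a, f in zip(ai_calls, frozen_calls))
    tn = sum((not a) and (not f) for a, f in zip(ai_calls, frozen_calls))
    fp = sum(a and (not f) for a, f in zip(ai_calls, frozen_calls))
    fn = sum((not a) and f for a, f in zip(ai_calls, frozen_calls))
    n = len(ai_calls)
    return {
        "concordance": (tp + tn) / n,                              # agreement rate
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }
```

The study's figures (concordance 0.69, sensitivity 0.50, specificity 0.97) would fall out of exactly this computation applied to its AI-vs-frozen-section table.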
Affiliation(s)
- Ze-Rui Zhao
- State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Zhi-Chao Lin
- Department of Thoracic Surgery, Jiangmen Central Hospital, Jiangmen, China
- De-Hua Ma
- Department of Thoracic Surgery, Taizhou Hospital, Taizhou, China
- Yao-Bin Lin
- State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Jian Hu
- Department of Thoracic Surgery, School of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Qing-Quan Luo
- Shanghai Lung Cancer Center, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Gao-Feng Li
- Department of Thoracic Surgery, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Chun Chen
- Department of Thoracic Surgery, Fujian Medical University Union Hospital, Fuzhou, China
- Yu-Lun Yang
- Department of Thoracic Surgery, Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Jian-Cheng Yang
- Dianei Technology, Shanghai, China
- Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, People's Republic of China
- EPFL, Lausanne, Switzerland
- Yong-Bin Lin
- State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Hao Long
- State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
5
Zhou J, Hu B, Feng W, Zhang Z, Fu X, Shao H, Wang H, Jin L, Ai S, Ji Y. An ensemble deep learning model for risk stratification of invasive lung adenocarcinoma using thin-slice CT. NPJ Digit Med 2023; 6:119. [PMID: 37407729] [DOI: 10.1038/s41746-023-00866-z]
Abstract
Lung cancer screening using computed tomography (CT) has increased the detection rate of small pulmonary nodules and early-stage lung adenocarcinoma. Accurate assessment of nodule histology from CT scans with advanced deep learning algorithms would be clinically meaningful. However, recent studies have mainly focused on predicting benign versus malignant nodules, and models for risk stratification of invasive adenocarcinoma are lacking. We propose an ensemble multi-view 3D convolutional neural network (EMV-3D-CNN) model to study the risk stratification of lung adenocarcinoma. We include 1075 lung nodules (≤30 mm and ≥4 mm) with preoperative thin-section CT scans and definite pathology confirmed by surgery. Our model achieves state-of-the-art performance, with 91.3% and 92.9% AUC for diagnosing benign/malignant and pre-invasive/invasive nodules, respectively. Importantly, our model outperforms senior doctors in risk stratification of invasive adenocarcinoma (77.6% accuracy across Grades 1, 2, and 3). It provides detailed predictive histological information for the surgical management of pulmonary nodules. Finally, for user-friendly access, the proposed model is implemented as a web-based system ( https://seeyourlung.com.cn ).
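The "ensemble multi-view" idea can be illustrated with a plain soft-voting step: average the per-view class probabilities, then take the argmax. This is a hedged sketch of the fusion concept only; the actual EMV-3D-CNN may weight or learn the combination rather than averaging uniformly.

```python
def ensemble_predict(view_probs):
    """view_probs: list of per-view probability vectors over the classes
    (e.g., benign, pre-invasive, invasive grades). Soft voting averages the
    vectors; the argmax of the average is the ensemble's final class."""
    n_views = len(view_probs)
    n_cls = len(view_probs[0])
    avg = [sum(p[c] for p in view_probs) / n_views for c in range(n_cls)]
    return avg.index(max(avg)), avg
```

Averaging tends to cancel view-specific mistakes: a class must look probable from several views to win.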
Affiliation(s)
- Jing Zhou
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Bin Hu
- Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Wei Feng
- Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Zhang Zhang
- Department of Thoracic Surgery, Changsha Central Hospital, Changsha, China
- Xiaotong Fu
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Handie Shao
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Hansheng Wang
- Guanghua School of Management, Peking University, Beijing, China
- Longyu Jin
- Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Siyuan Ai
- Department of Thoracic Surgery, Beijing LIANGXIANG Hospital, Beijing, China
- Ying Ji
- Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
6
Ke X, Hu W, Su X, Huang F, Lai Q. Potential of artificial intelligence based on chest computed tomography to predict the nature of part-solid nodules. Clin Respir J 2023; 17:320-328. [PMID: 36740215] [PMCID: PMC10113279] [DOI: 10.1111/crj.13597]
Abstract
BACKGROUND The potential of artificial intelligence (AI) to predict the nature of part-solid nodules based on chest computed tomography (CT) is still under exploration. OBJECTIVE To determine the potential of AI to predict the nature of part-solid nodules. METHODS Two hundred twenty-three patients with 241 part-solid nodules diagnosed by chest CT were retrospectively collected; the nodules were divided into a benign group (104) and a malignant group (137). The intraclass correlation coefficient (ICC) was used to assess agreement in predicting malignancy, and predictive effectiveness was compared between AI and senior radiologists. The parameters measured by AI and the size of solid components measured by senior radiologists were compared between the two groups. Receiver operating characteristic (ROC) curves were used to calculate the Youden index for each quantitative parameter that differed significantly between the two groups. Binary logistic regression was performed on the significant indicators to identify predictors of malignancy. RESULTS AI was in moderate agreement with senior radiologists (ICC = 0.686). The sensitivity, specificity, and accuracy of AI and senior radiologists were close (p > 0.05). The longest diameter, volume, mean CT attenuation value, and largest diameter of solid components differed significantly between the benign and malignant groups (p < 0.001). Logistic regression analysis showed that the longest diameter, mean CT attenuation value, and largest diameter of solid components were indicators of malignant part-solid nodules, with thresholds of 9.45 mm, 425.0 HU, and 3.45 mm, respectively. CONCLUSION Quantitative parameters measured by AI can provide value for predicting malignant part-solid nodules and for clinical management.
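The cutoffs reported above (9.45 mm, 425.0 HU, 3.45 mm) are Youden-index thresholds: the point on each parameter's ROC curve maximizing sensitivity + specificity - 1. A small sketch of that sweep, using toy data and an illustrative helper name (not the study's software), with "value at or above the cutoff" treated as a malignant call:

```python
def youden_threshold(values, labels):
    """Return (best_cutoff, best_J) where J = sensitivity + specificity - 1,
    sweeping every observed value as a candidate cutoff.
    labels: 1 = malignant, 0 = benign."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_t = -1.0, None
    for t in sorted(set(values)):
        tp = sum(v >= t and l for v, l in zip(values, labels))   # malignant, called malignant
        tn = sum(v < t and not l for v, l in zip(values, labels))  # benign, called benign
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

On perfectly separated toy data the sweep finds the cutoff that splits the groups exactly, with J = 1.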
Affiliation(s)
- Xiaoting Ke, Weiyi Hu, Xianyan Su, Fang Huang, Qingquan Lai
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
7
Characterization of different reconstruction techniques on computer-aided system for detection of pulmonary nodules in lung from low-dose CT protocol. J Radiat Res Appl Sci 2022. [DOI: 10.1016/j.jrras.2022.06.003]
8
Huang H, Zheng D, Chen H, Wang Y, Chen C, Xu L, Li G, Wang Y, He X, Li W. Fusion of CT images and clinical variables based on deep learning for predicting invasiveness risk of stage I lung adenocarcinoma. Med Phys 2022; 49:6384-6394. [PMID: 35938604] [DOI: 10.1002/mp.15903]
Abstract
PURPOSE To develop a novel multimodal data fusion model based on deep learning that incorporates computed tomography (CT) images and clinical variables to predict the invasiveness risk of stage I lung adenocarcinoma manifesting as ground-glass nodules (GGNs), and to compare its diagnostic performance with that of radiologists. METHODS A total of 1946 patients with solitary and histopathologically confirmed GGNs with maximum diameter less than 3 cm were retrospectively enrolled. The training dataset of 1704 GGNs was augmented by resampling, scaling, random cropping, etc., to generate new training data. A multimodal data fusion model was built from a residual learning architecture and two multilayer perceptrons with an attention mechanism, combining CT images with patient general data and serum tumor markers. Distance-based confidence scores (DCS) were calculated and compared among multimodal data models with different combinations. An observer study was conducted, and the prediction performance of the fusion algorithms was compared with that of two radiologists on an independent testing dataset of 242 GGNs. RESULTS Among all GGNs, 606 were confirmed as invasive adenocarcinoma (IA) and 1340 as non-IA. The proposed multimodal data fusion model combining CT images, patient general data, and serum tumor markers achieved the highest accuracy (88.5%), area under the ROC curve (AUC) (0.957), F1 (81.5%), weighted F1 (81.9%), and Matthews correlation coefficient (MCC) (73.2%) for classifying IA versus non-IA GGNs, better even than the senior radiologist's performance (accuracy, 86.1%). In addition, the DCSs for multimodal data suggested that CT images had a stronger quantitative influence (0.9540) than general data (0.6726) or tumor markers (0.6971). CONCLUSION This study demonstrated the feasibility of integrating different types of data, including CT images and clinical variables; the multimodal data fusion model yielded higher performance for distinguishing IA from non-IA GGNs.
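The attention-based fusion described above can be sketched as weighting each modality's feature vector by a normalized score before concatenation. Everything here (function names, fixed scores) is a hypothetical illustration: in the paper the multilayer perceptrons learn the attention, whereas this sketch takes the scores as given.

```python
import math

def softmax(xs):
    """Numerically stable softmax turning raw scores into weights summing to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(image_feat, general_feat, marker_feat, scores):
    """Late fusion: scale each modality's feature vector by its
    softmax-normalized attention score, then concatenate."""
    w = softmax(scores)
    fused = []
    for wi, feat in zip(w, [image_feat, general_feat, marker_feat]):
        fused.extend(wi * x for x in feat)
    return fused
```

With equal scores every modality contributes equally; raising one score (e.g., for the CT image branch, consistent with its larger reported DCS) shifts the fused vector toward that modality.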
Affiliation(s)
- Haozhe Huang, Ying Wang, Chao Chen, Lichao Xu, Guodong Li, Yaohui Wang, Xinhong He, Wentao Li
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, 130 Dongan Road, Xuhui District, Shanghai, 200032, China
- Dezhong Zheng
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yutian Road, Hongkou District, Shanghai, 200083, China; University of Chinese Academy of Sciences, 19 Yuquan Road, Shijingshan District, Beijing, 100049, China
- Hong Chen
- Department of Medical Imaging, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, 600 South Wanping Road, Xuhui District, Shanghai, 200030, China
| |
Collapse
|
9
|
苏 志, 毛 文, 李 斌, 郑 智, 杨 博, 任 美, 宋 铁, 冯 海, 孟 于. [Clinical Study of Artificial Intelligence-assisted Diagnosis System in Predicting the Invasive Subtypes of Early-stage Lung Adenocarcinoma Appearing as Pulmonary Nodules]. ZHONGGUO FEI AI ZA ZHI = CHINESE JOURNAL OF LUNG CANCER 2022; 25:245-252. [PMID: 35477188 PMCID: PMC9051300 DOI: 10.3779/j.issn.1009-3419.2022.102.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 03/21/2022] [Accepted: 03/30/2022] [Indexed: 11/05/2022]
Abstract
BACKGROUND Lung cancer currently has the highest mortality of any cancer both in China and worldwide. The detection of lung nodules is a key step in reducing lung cancer mortality. Artificial intelligence-assisted diagnosis systems represent the state of the art in nodule detection, benign-malignant differentiation, and diagnosis of invasive subtypes; however, validation with clinical data is necessary before further application. Therefore, the aim of this study was to evaluate the effectiveness of an artificial intelligence-assisted diagnosis system in predicting the invasive subtypes of early-stage lung adenocarcinoma appearing as pulmonary nodules. METHODS Clinical data of 223 patients with early-stage lung adenocarcinoma appearing as pulmonary nodules, admitted to the Lanzhou University Second Hospital from January 1st, 2016 to December 31st, 2021, were retrospectively analyzed. Patients were divided into an invasive adenocarcinoma group (n=170) and a non-invasive adenocarcinoma group (n=53), and the non-invasive adenocarcinoma group was subdivided into a minimally invasive adenocarcinoma group (n=31) and a preinvasive lesions group (n=22). The malignant probability and imaging characteristics of each group were compared to analyze their predictive ability for the invasive subtypes of early-stage lung adenocarcinoma. The concordance between the system's qualitative diagnosis of invasive subtypes and postoperative pathology was then analyzed. RESULTS Across the invasive subtypes of early-stage lung adenocarcinoma, the mean CT value of pulmonary nodules (P<0.001), diameter (P<0.001), volume (P<0.001), malignant probability (P<0.001), pleural retraction sign (P<0.001), lobulation (P<0.001), and spiculation (P<0.001) differed significantly.
It was also found that as invasiveness increased across subtypes of early-stage lung adenocarcinoma, the proportion of dominant signs in each group gradually increased. For the binary classification task, the sensitivity, specificity, and area under the curve (AUC) of the artificial intelligence-assisted diagnosis system for the qualitative diagnosis of invasive subtypes were 81.76%, 92.45%, and 0.871, respectively. For the three-class task, the accuracy, recall, F1 score, and AUC were 83.86%, 85.03%, 76.46%, and 0.879, respectively. CONCLUSIONS The artificial intelligence-assisted diagnosis system can predict the invasive subtypes of early-stage lung adenocarcinoma appearing as pulmonary nodules and shows meaningful predictive value. With algorithm optimization and better data, it may provide guidance for individualized treatment of patients.
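The evaluation metrics reported above (sensitivity, specificity, AUC) can be computed from binary labels and predicted scores with standard definitions; a minimal sketch, using the rank (Mann-Whitney) formulation of the AUC:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP),
    for binary ground truth and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive is scored above a
    random negative (ties count as half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

These definitions match the metrics named in the abstract; the study's exact thresholding of the AI system's malignant-probability output is not specified.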
Collapse
Affiliation(s)
- 志鹏 苏
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 文杰 毛
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 斌 李
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 智中 郑
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 博 杨
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 美玉 任
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 铁牛 宋
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 海明 冯
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| | - 于琪 孟
- Department of Thoracic Surgery, Lanzhou University Second Hospital, Lanzhou University Second Clinical Medical College, Lanzhou 730030, China
| |
Collapse
|
10
|
Qin C, Hu W, Wang X, Ma X. Application of Artificial Intelligence in Diagnosis of Craniopharyngioma. Front Neurol 2022; 12:752119. [PMID: 35069406 PMCID: PMC8770750 DOI: 10.3389/fneur.2021.752119] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 11/12/2021] [Indexed: 12/24/2022] Open
Abstract
Craniopharyngioma is a congenital brain tumor clinically characterized by hypothalamic-pituitary dysfunction, increased intracranial pressure, and visual field disorders, among other injuries. Its clinical diagnosis mainly depends on radiological examinations (such as computed tomography and magnetic resonance imaging). However, manually assessing numerous radiological images is a challenging task, and the diagnostic result depends heavily on the physician's experience. The development of artificial intelligence has brought about a great transformation in the clinical diagnosis of craniopharyngioma. This study reviewed the application of artificial intelligence technology in the clinical diagnosis of craniopharyngioma, covering differential classification, prediction of tissue invasion and gene mutation, prognosis prediction, and related tasks. Based on this review, technical routes for intelligent diagnosis based on traditional machine learning models and deep learning models were further proposed. Additionally, regarding the limitations and prospects of artificial intelligence in craniopharyngioma diagnosis, this study discussed directions requiring attention in future research, including few-shot learning, imbalanced datasets, semi-supervised models, and multi-omics fusion.
Collapse
Affiliation(s)
- Caijie Qin
- Institute of Information Engineering, Sanming University, Sanming, China
| | - Wenxing Hu
- University of New South Wales, Sydney, NSW, Australia
| | - Xinsheng Wang
- School of Information Science and Engineering, Harbin Institute of Technology at Weihai, Weihai, China
| | - Xibo Ma
- CBSR & NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
11
|
Cheng X, Wen H, You H, Hua L, Xiaohua W, Qiuting C, Jiabao L. Recognition of Peripheral Lung Cancer and Focal Pneumonia on Chest Computed Tomography Images Based on Convolutional Neural Network. Technol Cancer Res Treat 2022; 21:15330338221085375. [PMID: 35293240 PMCID: PMC8935416 DOI: 10.1177/15330338221085375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Introduction: Chest computed tomography (CT) is important for the early screening of lung diseases and for clinical diagnosis, particularly during the COVID-19 pandemic. We propose a method for classifying peripheral lung cancer and focal pneumonia on chest CT images and evaluate five window settings to study their effect on the artificial intelligence results. Methods: CT images from 357 patients with peripheral lung cancer presenting as a solitary solid nodule or focal pneumonia presenting as a solitary consolidation were retrospectively collected. We segmented and aligned the lung parenchyma using morphological methods and cropped this region with the minimum 3D bounding box. Using these 3D cropped volumes of all cases, we designed a 3D neural network to classify them into the two categories. We also compared the classification results of three physicians with different experience levels on the same dataset. Results: We conducted experiments using five window settings. After cropping and alignment based on an automatic preprocessing procedure, our neural network achieved an average classification accuracy of 91.596% under 5-fold cross-validation in the full window, with an area under the curve (AUC) of 0.946. The classification accuracy and AUC were 90.48% and 0.957 for the junior physician, 94.96% and 0.989 for the intermediate physician, and 96.92% and 0.980 for the senior physician, respectively. After removing erroneous predictions, the accuracy improved significantly, reaching 98.79% in the self-defined window2. Conclusion: In separating peripheral lung cancer from focal pneumonia in chest CT data, the proposed neural network achieved an accuracy competitive with that of a junior physician. In a data ablation study, the proposed 3D CNN achieved slightly higher accuracy than senior physicians on the same subset. The self-defined window2 was the best for data training and evaluation.
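A CT window setting maps raw Hounsfield units (HU) into a fixed input range via a center and width; this is what distinguishes the "full window" from the study's custom windows before the CNN sees the data. A minimal sketch; the center/width values below are a typical lung window, not the paper's "window2", whose exact definition is not given here:

```python
def apply_window(hu_values, center, width):
    """Clip HU values to [center - width/2, center + width/2] and
    rescale linearly to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in hu_values]

# Typical lung window (assumption): center -600 HU, width 1500 HU.
pixels = apply_window([-1000, -600, 0, 400], center=-600, width=1500)
```

Changing the window changes which tissue contrasts survive rescaling, which is why the authors observed different model accuracies across the five settings.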
Collapse
Affiliation(s)
- Xiaoyue Cheng
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - He Wen
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Hao You
- Key laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Li Hua
- Key laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Wu Xiaohua
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Cao Qiuting
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Liu Jiabao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
12
|
Abstract
PURPOSE OF REVIEW In this article, we focus on the role of artificial intelligence in the management of lung cancer. We summarize commonly used algorithms and the current applications and challenges of artificial intelligence in lung cancer. RECENT FINDINGS Feature engineering for tabular data and computer vision for image data are the approaches most commonly used in lung cancer research. The use of artificial intelligence in lung cancer has extended across the entire clinical pathway, including screening, diagnosis, and treatment. Lung cancer screening mainly focuses on two aspects: identifying high-risk populations and the automatic detection of lung nodules. Artificial intelligence diagnosis of lung cancer covers imaging, pathological, and genetic diagnosis. The artificial intelligence clinical decision-support system is the main application in lung cancer treatment. Current challenges center on the interpretability of artificial intelligence models and limited annotated datasets; recent advances in explainable machine learning, transfer learning, and federated learning may address these problems. SUMMARY Artificial intelligence shows great potential in many aspects of the management of lung cancer, especially in screening and diagnosis. Future studies on interpretability and privacy are needed for further application of artificial intelligence in lung cancer.
Collapse
Affiliation(s)
- Kai Zhang
- Department of Thoracic Surgery, Peking University People's Hospital, Beijing, China
| | | |
Collapse
|
13
|
Chen H, Liu J, Lu L, Wang T, Xu X, Chu A, Peng W, Gong J, Tang W, Gu Y. Volumetric segmentation of ground glass nodule based on 3D attentional cascaded residual U-net and conditional random field. Med Phys 2021; 49:1097-1107. [PMID: 34951492 DOI: 10.1002/mp.15423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 12/08/2021] [Accepted: 12/10/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Ground glass nodule (GGN) segmentation is an important and challenging task in diagnosing early-stage lung adenocarcinomas. Manual delineation of 3D GGNs in computed tomography (CT) images is a subjective, laborious, and tedious task with poor repeatability. PURPOSE To reduce the annotation burden and improve segmentation performance, this study proposes a 3D deep learning-based volumetric segmentation model to segment GGNs in CT images. METHODS A total of 379 GGNs were retrospectively collected from a public database, Shanghai Pulmonary Hospital (SHPH), and Fudan University Shanghai Cancer Center (FUSCC). First, a series of image pre-processing techniques, including image resampling, intensity normalization, 3D nodule patch cropping, and data augmentation, were applied to the CT scans to generate input images for the deep learning model. Then, a 3D attentional cascaded residual network (ACRU-Net) was proposed, building the segmentation model on a residual network and an atrous spatial pyramid pooling module. To improve performance, a voxel-based conditional random field (CRF) method was used to optimize the segmentation results. Finally, a balanced cross-entropy and Dice combined loss function was applied to train the segmentation model. RESULTS On the SHPH and FUSCC datasets, the proposed method achieved Dice coefficients of 0.721±0.167 and 0.733±0.100, respectively, higher than those of 3D residual U-Net and of ACRU-Net without CRF optimization. CONCLUSIONS The results demonstrated that combining 3D ACRU-Net and CRF effectively improved GGN segmentation performance. The proposed segmentation model may provide a potential tool to help radiologists in the segmentation and diagnosis of 3D GGNs. This article is protected by copyright. All rights reserved.
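The combined loss this abstract names (cross-entropy plus a Dice term) can be sketched per voxel in plain Python; a real implementation would be tensorized, and the balancing weight `w` and the handling of class imbalance inside the cross-entropy are assumptions here, not the authors' exact formulation.

```python
import math

def dice_coefficient(pred, target, eps=1e-7):
    """Soft Dice over predicted probabilities and binary target voxels."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def combined_loss(pred, target, w=0.5):
    """Weighted sum of mean binary cross-entropy and (1 - Dice).
    `pred` holds probabilities in (0, 1); `target` holds 0/1 labels."""
    bce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
               for p, t in zip(pred, target)) / len(pred)
    return w * bce + (1 - w) * (1 - dice_coefficient(pred, target))
```

Combining the two terms is a common remedy for foreground/background imbalance in nodule segmentation: cross-entropy gives smooth per-voxel gradients while the Dice term directly targets the overlap metric being reported.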
Collapse
Affiliation(s)
- Hui Chen
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
| | - Jiyu Liu
- Department of Radiology, Shanghai Pulmonary Hospital, Shanghai, 200433, China
| | - Liangjian Lu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
| | - Ting Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Xiaomin Xu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
| | - Aina Chu
- Department of Radiology, Medical Community of Linhai First People's Hospital, Linhai, Zhejiang, 317000, China
| | - Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| | - Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
| |
Collapse
|
14
|
Wang J, Yuan C, Han C, Wen Y, Lu H, Liu C, She Y, Deng J, Li B, Qian D, Chen C. IMAL-Net: Interpretable multi-task attention learning network for invasive lung adenocarcinoma screening in CT images. Med Phys 2021; 48:7913-7929. [PMID: 34674280 DOI: 10.1002/mp.15293] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Revised: 08/26/2021] [Accepted: 09/29/2021] [Indexed: 12/17/2022] Open
Abstract
PURPOSE Feature maps from deep convolutional neural networks (DCNNs) have been widely used for visual explanation of DCNN-based classification tasks. However, many clinical applications, such as benign-malignant classification of lung nodules, require quantitative and objective interpretability rather than visualization alone. In this paper, we propose a novel interpretable multi-task attention learning network, IMAL-Net, for early invasive adenocarcinoma screening in chest computed tomography images, which uses a segmentation prior to assist interpretable classification. METHODS First, two sub-ResNets are integrated via a prior-attention mechanism for simultaneous nodule segmentation and invasiveness classification. Then, numerous radiomic features from the segmentation results are concatenated with high-level semantic features from the classification subnetwork via fully connected (FC) layers to achieve superior performance. Meanwhile, an end-to-end feature selection mechanism (FSM) is designed to quantify the radiomic features that most affect the prediction for each sample, providing clinically applicable interpretability for the prediction result. RESULTS Nodule samples from a total of 1626 patients were collected from two grade-A hospitals for large-scale verification. Five-fold cross-validation demonstrated that the proposed IMAL-Net achieves an AUC of 93.8% ± 1.1% and a recall of 93.8% ± 2.8% for identification of invasive lung adenocarcinoma. CONCLUSIONS Fusing semantic and radiomic features yields clear improvements in the invasiveness classification task. Moreover, by learning finer-grained semantic features and highlighting the most important radiomic features, the proposed attention and FSM mechanisms not only further improve performance but also support both visual explanation and objective analysis of the classification results.
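The fusion idea in this abstract (radiomic features from the segmentation output concatenated with semantic features, filtered by a feature selection mechanism) can be illustrated with a simple per-feature gate. This is a stand-in sketch for the paper's FSM; all names, the threshold, and the gating form are hypothetical.

```python
def select_and_fuse(radiomic, semantic, gates, threshold=0.5):
    """Keep only radiomic features whose learned gate value exceeds the
    threshold, then concatenate with the semantic feature vector to form
    the input of the final classifier layers."""
    selected = [f for f, g in zip(radiomic, gates) if g > threshold]
    return selected + list(semantic)

# Toy example: two of three radiomic features survive the gate.
fused = select_and_fuse([1.2, 0.3, 4.1], [0.7, 0.9], gates=[0.9, 0.2, 0.8])
```

The surviving gate values themselves are what give the per-sample interpretability the abstract describes: they quantify which radiomic features drove each prediction.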
Collapse
Affiliation(s)
- Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Cheng Yuan
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Can Han
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yaofeng Wen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Chen Liu
- Department of Radiology, Southwest Hospital, Third Military University (Army Medical University), Chongqing, China
| | - Yunlang She
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Jiajun Deng
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| | - Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai, China
| | - Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Chang Chen
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
| |
Collapse
|
15
|
Gong J, Liu J, Li H, Zhu H, Wang T, Hu T, Li M, Xia X, Hu X, Peng W, Wang S, Tong T, Gu Y. Deep Learning-Based Stage-Wise Risk Stratification for Early Lung Adenocarcinoma in CT Images: A Multi-Center Study. Cancers (Basel) 2021; 13:cancers13133300. [PMID: 34209366 PMCID: PMC8269183 DOI: 10.3390/cancers13133300] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/28/2021] [Accepted: 06/28/2021] [Indexed: 12/21/2022] Open
Abstract
Simple Summary: Prediction of the malignancy and invasiveness of ground glass nodules (GGNs) from computed tomography images is a crucial task for radiologists in the risk stratification of early-stage lung adenocarcinoma. To address this challenge, a two-stage deep neural network (DNN) was developed from images collected at four centers, and a multi-reader multi-case observer study was conducted to evaluate the model. The model's performance was comparable to or more accurate than that of senior radiologists, with average area under the curve values of 0.76 and 0.95 for the two tasks, respectively. Findings suggest (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution reduced the model's performance in predicting the risks of GGNs.
Abstract: This study aims to develop a DNN-based two-stage risk stratification model for early lung adenocarcinomas in CT images and to compare its performance with that of practicing radiologists. A total of 2393 GGNs were retrospectively collected from 2105 patients in four centers. All pathologic results were obtained from surgically resected specimens. A two-stage deep neural network, based on a 3D residual network and an atrous convolution module, was developed to diagnose benign versus malignant GGNs (Task1) and to classify malignant GGNs as invasive adenocarcinoma (IA) or non-IA (Task2). A multi-reader multi-case observer study with six board-certified radiologists (average experience 11 years, range 2-28 years) was conducted to evaluate the model. The DNN yielded area under the receiver operating characteristic curve (AUC) values of 0.76 ± 0.03 (95% confidence interval (CI): 0.69, 0.82) and 0.96 ± 0.02 (95% CI: 0.92, 0.98) for Task1 and Task2, equivalent to or higher than the senior radiologist group's average AUC values of 0.76 and 0.95, respectively (p > 0.05). With CT slice thickness increasing from 1.15 mm ± 0.36 to 1.73 mm ± 0.64, DNN performance decreased by 0.08 and 0.22 for the two tasks. The results demonstrated (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution decreased the model's performance in predicting the risks of GGNs. Once tested prospectively in clinical practice, the DNN could have the potential to assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
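The two-stage design (Task1 benign/malignant, then Task2 IA/non-IA on the malignant subset) amounts to a cascade at inference time. A minimal sketch; the thresholds and the lambda classifier stubs are illustrative placeholders, not the paper's trained networks.

```python
def two_stage_risk(nodule, task1, task2, t1=0.5, t2=0.5):
    """Return 'benign', 'non-IA', or 'IA' for one nodule.
    task1/task2 are callables returning malignancy and invasiveness
    probabilities; only nodules passing Task1 reach Task2."""
    if task1(nodule) < t1:
        return "benign"
    return "IA" if task2(nodule) >= t2 else "non-IA"

# Toy stand-ins for the two trained networks:
label = two_stage_risk({"score": 0.8},
                       task1=lambda n: n["score"],
                       task2=lambda n: 1 - n["score"])
```

A cascade like this matches the reported per-task AUCs: each stage is evaluated on its own binary problem rather than as a single three-way classifier.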
Collapse
Affiliation(s)
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Jiyu Liu
- Department of Radiology, Shanghai Pulmonary Hospital, 507 Zheng Min Road, Shanghai 200433, China;
| | - Haiming Li
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Hui Zhu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Tingting Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Tingdan Hu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Menglei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Xianwu Xia
- Department of Radiology, Municipal Hospital Affiliated to Taizhou University, Taizhou 318000, China;
| | - Xianfang Hu
- Department of Radiology, Huzhou Central Hospital Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou 313000, China;
| | - Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
| | - Shengping Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Correspondence: (S.W.); (T.T.); (Y.G.); Tel.: +86-13818521975 (S.W); +86-18017312912 (T.T.); +86-18017312040 (Y.G.)
| | - Tong Tong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Correspondence: (S.W.); (T.T.); (Y.G.); Tel.: +86-13818521975 (S.W); +86-18017312912 (T.T.); +86-18017312040 (Y.G.)
| | - Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, China; (J.G.); (H.L.); (H.Z.); (T.W.); (T.H.); (M.L.); (W.P.)
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Correspondence: (S.W.); (T.T.); (Y.G.); Tel.: +86-13818521975 (S.W); +86-18017312912 (T.T.); +86-18017312040 (Y.G.)
| |
Collapse
|
16
|
Zhang T, Wang Y, Sun Y, Yuan M, Zhong Y, Li H, Yu T, Wang J. High-resolution CT image analysis based on 3D convolutional neural network can enhance the classification performance of radiologists in classifying pulmonary non-solid nodules. Eur J Radiol 2021; 141:109810. [PMID: 34102564 DOI: 10.1016/j.ejrad.2021.109810] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 05/19/2021] [Accepted: 05/28/2021] [Indexed: 11/19/2022]
Abstract
OBJECTIVE To investigate whether a 3D convolutional neural network (CNN) can enhance radiologists' performance in classifying pulmonary non-solid nodules (NSNs). MATERIALS AND METHODS Data of patients with solitary NSNs pathologically diagnosed as adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IAC) after surgical resection were analyzed retrospectively. Ultimately, 532 patients at our institution were included: 427 cases (144 AIS, 167 MIA, 116 IAC) were assigned to the training dataset and 105 cases (36 AIS, 41 MIA, and 28 IAC) to the validation dataset. For external validation, 177 patients (60 AIS, 69 MIA, and 48 IAC) from another hospital were assigned to the testing dataset. The clinical and morphological characteristics of NSNs were used to establish the radiologists' model. The trained 3D CNN classification model was used to identify NSN types automatically. The two models and a CNN + radiologists' model were evaluated and compared via receiver operating characteristic (ROC) analysis and the integrated discrimination improvement (IDI) index. The Akaike information criterion (AIC) was calculated to find the best-fit model. RESULTS In the external testing dataset, the radiologists' model showed inferior classification performance to the CNN model both in discriminating AIS from MIA-IAC and AIS-MIA from IAC (area under the ROC curve (Az), 0.693 vs 0.820, P = 0.011; 0.746 vs 0.833, P = 0.026, respectively). However, combining with the CNN significantly enhanced the radiologists' classification performance, with higher Az values than the CNN model alone (Az, 0.893 vs 0.820, P < 0.001; 0.906 vs 0.833, P < 0.001, respectively). The IDI index further confirmed the CNN's contribution to radiologists in classifying NSNs (IDI = 25.8% (18.3-46.1%), P < 0.001; IDI = 30.1% (26.1-45.2%), P < 0.001, respectively). The CNN + radiologists' model also provided the best fit over the radiologists' model and the CNN model alone (AIC value 63.3% vs 29.5% and 49.5%, P < 0.001; 69.2% vs 34.9% and 53.6%, P < 0.001, respectively). CONCLUSION The CNN successfully classified NSNs based on CT images, and its classification performance was superior to the radiologists' model. Moreover, the radiologists' classification performance was significantly enhanced when combined with the CNN.
Affiliation(s)
- Teng Zhang, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Yida Wang, Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China.
- Yingli Sun, Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China.
- Mei Yuan, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Yan Zhong, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Hai Li, Department of Pathology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Tongfu Yu, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Jie Wang, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
17
Jiang B, Zhang Y, Zhang L, de Bock GH, Vliegenthart R, Xie X. Human-recognizable CT image features of subsolid lung nodules associated with diagnosis and classification by convolutional neural networks. Eur Radiol 2021; 31:7303-7315. [PMID: 33847813] [DOI: 10.1007/s00330-021-07901-1]
Abstract
OBJECTIVES The interpretability of convolutional neural networks (CNNs) for classifying subsolid nodules (SSNs) is insufficient for clinicians. Our purpose was to develop CNN models to classify SSNs on CT images and to investigate image features associated with the CNN classification. METHODS CT images containing SSNs with a diameter of ≤ 3 cm were retrospectively collected. We trained and validated CNNs by a 5-fold cross-validation method for classifying SSNs into three categories (benign and preinvasive lesions [PL], minimally invasive adenocarcinoma [MIA], and invasive adenocarcinoma [IA]) that were histologically confirmed or followed up for 6.4 years. The mechanism of CNNs on human-recognizable CT image features was investigated and visualized by gradient-weighted class activation map (Grad-CAM), separated activation channels and areas, and DeepDream algorithm. RESULTS The accuracy was 93% for classifying 586 SSNs from 569 patients into three categories (346 benign and PL, 144 MIA, and 96 IA in 5-fold cross-validation). The Grad-CAM successfully located the entire region of image features that determined the final classification. Activated areas in the benign and PL group were primarily smooth margins (p < 0.001) and ground-glass components (p = 0.033), whereas in the IA group, the activated areas were mainly part-solid (p < 0.001) and solid components (p < 0.001), lobulated shapes (p < 0.001), and air bronchograms (p < 0.001). However, the activated areas for MIA were variable. The DeepDream algorithm showed the image features in a human-recognizable pattern that the CNN learned from a training dataset. CONCLUSION This study provides medical evidence to interpret the mechanism of CNNs that helps support the clinical application of artificial intelligence. KEY POINTS • CNN achieved high accuracy (93%) in classifying subsolid nodules on CT images into three categories: benign and preinvasive lesions, MIA, and IA. 
• The gradient-weighted class activation map (Grad-CAM) located the entire region of image features that determined the final classification, and the visualization of the separated activated areas was consistent with radiologists' expertise for diagnosing subsolid nodules. • DeepDream showed the image features that CNN learned from a training dataset in a human-recognizable pattern.
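Grad-CAM, as used above, weights each convolutional channel by the spatial mean of the class-score gradient over that channel, sums the weighted feature maps, and applies a ReLU. A minimal NumPy sketch of that core computation follows; the array shapes and normalization are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from a conv layer's activations and
    the gradients of the target class score w.r.t. those activations.

    activations: (C, H, W) feature maps of the chosen conv layer
    gradients:   (C, H, W) d(class score)/d(activations)
    Returns an (H, W) non-negative map scaled to [0, 1].
    """
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                  # normalize
    return cam
```

In practice the resulting map is upsampled to the input resolution and overlaid on the CT slice, which is what locates the image regions driving the classification.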
Affiliation(s)
- Beibei Jiang, Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai, 200080, China
- Yaping Zhang, Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai, 200080, China
- Lu Zhang, Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai, 200080, China
- Geertruida H de Bock, Department of Epidemiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Rozemarijn Vliegenthart, Department of Radiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Xueqian Xie, Radiology Department, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd.100, Shanghai, 200080, China.
18
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217] [PMCID: PMC8027892] [DOI: 10.1038/s41746-021-00438-z]
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
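Random-effects pooling of the kind used in such meta-analyses can be sketched with the DerSimonian-Laird estimator. The univariate version below is a simplified illustration (diagnostic-accuracy reviews, including the one heading this page, typically pool sensitivity and specificity jointly with a bivariate model); effect sizes would usually be logit-transformed proportions.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling.

    effects: per-study effect estimates (e.g. logit sensitivities)
    variances: their within-study variances
    Returns (pooled_effect, tau2), tau2 being between-study variance.
    """
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                         # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled mean
    q = np.sum(w * (effects - fixed) ** 2)      # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # truncate at zero
    w_star = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    return pooled, tau2
```

When the studies agree (Q at or below its degrees of freedom), tau2 is zero and the estimate collapses to the fixed-effect mean.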
Affiliation(s)
- Ravi Aggarwal, Institute of Global Health Innovation, Imperial College London, London, UK
- Guy Martin, Institute of Global Health Innovation, Imperial College London, London, UK
- Daniel S W Ting, Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Dominic King, Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian, Institute of Global Health Innovation, Imperial College London, London, UK.
- Ara Darzi, Institute of Global Health Innovation, Imperial College London, London, UK
19
3D CNN with Visual Insights for Early Detection of Lung Cancer Using Gradient-Weighted Class Activation. J Healthc Eng 2021; 2021:6695518. [PMID: 33777347] [PMCID: PMC7979307] [DOI: 10.1155/2021/6695518]
Abstract
The 3D convolutional neural network is able to make use of the full nonlinear 3D context information for lung nodule detection from DICOM (Digital Imaging and Communications in Medicine) images, and gradient-weighted class activation has been shown to be useful for tailoring classification tasks, localizing fine-grained features, and visually explaining the network's internal workings. Gradient-weighted class activation plays a crucial role for clinicians and radiologists in terms of trusting and adopting the model: practitioners rely not only on high precision but also on a model that can earn radiologists' confidence. In this paper, we explored lung nodule classification using an improvised 3D AlexNet with a lightweight architecture. Our network employed a full multiview network strategy. We conducted binary classification (benign vs malignant) on computed tomography (CT) images from the LUNA 16 database, which is derived from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The reported results were obtained through 10-fold cross-validation. Experimental results showed that the proposed lightweight architecture achieved a classification accuracy of 97.17% on the LUNA 16 dataset, outperforming existing classification algorithms, including on low-dose CT scan images.
20
Zhang X, Li H, Wang C, Cheng W, Zhu Y, Li D, Jing H, Li S, Hou J, Li J, Li Y, Zhao Y, Mo H, Pang D. Evaluating the Accuracy of Breast Cancer and Molecular Subtype Diagnosis by Ultrasound Image Deep Learning Model. Front Oncol 2021; 11:623506. [PMID: 33747937] [PMCID: PMC7973262] [DOI: 10.3389/fonc.2021.623506]
Abstract
Background: Breast ultrasound is the first choice for breast tumor diagnosis in China, but the Breast Imaging Reporting and Data System (BI-RADS) categorization routinely used in the clinic often leads to unnecessary biopsy. Radiologists cannot predict molecular subtypes, which carry important pathological information that can guide clinical treatment. Materials and Methods: This retrospective study collected breast ultrasound images from two hospitals and formed training, test, and external test sets after strict selection, which included 2,822, 707, and 210 ultrasound images, respectively. An optimized deep learning model (DLM) was constructed with the training set, and the performance was verified in both the test set and the external test set. Diagnostic results were compared with the BI-RADS categorization determined by radiologists. We divided breast cancer into different molecular subtypes according to hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) expression. The ability to predict molecular subtypes using the DLM was confirmed in the test set. Results: In the test set, with pathological results as the gold standard, the accuracy, sensitivity, and specificity were 85.6%, 98.7%, and 63.1%, respectively, according to the BI-RADS categorization. The same set achieved an accuracy, sensitivity, and specificity of 89.7%, 91.3%, and 86.9%, respectively, when using the DLM. For the test set, the area under the curve (AUC) was 0.96. For the external test set, the AUC was 0.90. The diagnostic accuracy was 92.86% with the DLM in BI-RADS 4a patients. Approximately 70.76% of the cases were judged as benign tumors. Unnecessary biopsy was theoretically reduced by 67.86%. However, the false negative rate was 10.4%. A good prediction effect was shown for the molecular subtypes of breast cancer with the DLM. The AUCs were 0.864, 0.811, and 0.837 for the triple-negative subtype, HER2 (+) subtype, and HR (+) subtype predictions, respectively. Conclusion: This study showed that the DLM was highly accurate in recognizing breast tumors from ultrasound images. Thus, the DLM can greatly reduce the incidence of unnecessary biopsy, especially for patients with BI-RADS 4a. In addition, the predictive ability of this model for molecular subtypes was satisfactory, which has clear clinical application value.
Affiliation(s)
- Xianyu Zhang, Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Hui Li, Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Chaoyun Wang, Harbin Engineering University Automation College, Harbin, China
- Wen Cheng, Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Yuntao Zhu, Harbin Engineering University Automation College, Harbin, China
- Dapeng Li, Department of Epidemiology, Harbin Medical University, Harbin, China
- Hui Jing, Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Shu Li, Prenatal Diagnosis Center, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Jiahui Hou, Department of Ultrasound, Harbin Medical University Cancer Hospital, Harbin, China
- Jiaying Li, Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Yingpu Li, Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
- Yashuang Zhao, Department of Epidemiology, Harbin Medical University, Harbin, China
- Hongwei Mo, Harbin Engineering University Automation College, Harbin, China
- Da Pang, Department of Breast Surgery, Harbin Medical University Cancer Hospital, Harbin, China
21
Hu X, Gong J, Zhou W, Li H, Wang S, Wei M, Peng W, Gu Y. Computer-aided diagnosis of ground glass pulmonary nodule by fusing deep learning and radiomics features. Phys Med Biol 2021; 66:065015. [PMID: 33596552] [DOI: 10.1088/1361-6560/abe735]
Abstract
OBJECTIVES This study aims to develop a computer-aided diagnosis (CADx) scheme to classify between benign and malignant ground glass nodules (GGNs), and to fuse deep learning and radiomics imaging features to improve classification performance. METHODS We first retrospectively collected 513 surgery histopathology-confirmed GGNs from two centers. Among these GGNs, 100 were benign and 413 were malignant. All malignant tumors were stage I lung adenocarcinoma. To segment GGNs, we applied a deep convolutional neural network with a residual architecture to train and build a 3D U-Net. Then, based on the pre-trained U-Net, we used a transfer learning approach to build a deep neural network (DNN) to classify between benign and malignant GGNs. With the GGN segmentation results generated by the 3D U-Net, we also developed a CT radiomics model by adopting a series of image processing techniques, i.e., radiomics feature extraction, feature selection, the synthetic minority over-sampling technique, and support vector machine classifier training/testing. Finally, we applied an information fusion method to fuse the prediction scores generated by the DNN-based CADx model and the CT-radiomics-based model. To evaluate the proposed model's performance, we conducted a comparison experiment on an independent testing dataset. RESULTS Compared with the DNN model and the radiomics model, our fusion model yielded a significantly higher area under the receiver operating characteristic curve (AUC) of 0.73 ± 0.06 (P < 0.01). The fusion model achieved an accuracy of 75.6%, F1 score of 84.6%, weighted average F1 score of 70.3%, and Matthews correlation coefficient of 43.6%, all higher than those of the DNN model and the radiomics model individually. CONCLUSIONS Our experimental results demonstrated that (1) applying a CADx scheme is feasible for diagnosing early-stage lung adenocarcinoma, (2) deep image features and radiomics features provide complementary information for classifying benign and malignant GGNs, and (3) transfer learning is an effective way to build a DNN model with a limited dataset. Thus, to build a robust image-analysis-based CADx model, one can combine different types of image features to decode the imaging phenotypes of GGNs.
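The abstract does not specify the fusion rule used to combine the two prediction scores, so the sketch below uses a simple convex combination of the two models' probabilities, one common choice for score-level information fusion; the weight `alpha` is an illustrative assumption.

```python
def fuse_scores(p_dnn, p_radiomics, alpha=0.5):
    """Fuse two classifiers' malignancy probabilities by a convex
    combination. alpha weights the deep-learning score; 1 - alpha
    weights the radiomics score. This is one common fusion rule,
    not necessarily the one used in the cited study.
    """
    return alpha * p_dnn + (1 - alpha) * p_radiomics
```

In practice `alpha` would be tuned on a validation set, and the fused score thresholded to produce the benign/malignant call.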
Affiliation(s)
- Xianfang Hu, Department of Radiology, Huzhou Central Hospital, Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou, Zhejiang, 313000, People's Republic of China
- Jing Gong, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Zhou, Department of Radiology, Huzhou Central Hospital, Affiliated Central Hospital of Huzhou University, 1558 Sanhuan North Road, Huzhou, Zhejiang, 313000, People's Republic of China
- Haiming Li, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shengping Wang, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Meng Wei, Medical Imaging Center, The First Affiliated Hospital of Wannan Medical College, No. 2 Zheshan West Road, Wuhu, Anhui, 241001, People's Republic of China
- Weijun Peng, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Yajia Gu, Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
22
Yu Y, Wang N, Huang N, Liu X, Zheng Y, Fu Y, Li X, Wu H, Xu J, Cheng J. Determining the invasiveness of ground-glass nodules using a 3D multi-task network. Eur Radiol 2021; 31:7162-7171. [PMID: 33665717] [DOI: 10.1007/s00330-021-07794-0]
Abstract
OBJECTIVES The aim of this study was to determine the invasiveness of ground-glass nodules (GGNs) using a 3D multi-task deep learning network. METHODS We propose a novel architecture based on 3D multi-task learning to determine the invasiveness of GGNs. In total, 770 patients with 909 GGNs who underwent lung CT scans were enrolled. The patients were divided into the training (n = 626) and test sets (n = 144). In the test set, invasiveness was classified using deep learning into three categories: atypical adenomatous hyperplasia (AAH) and adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive pulmonary adenocarcinoma (IA). Furthermore, binary classifications (AAH/AIS/MIA vs. IA) were made by two thoracic radiologists and compared with the deep learning results. RESULTS In the three-category classification task, the sensitivity, specificity, and accuracy were 65.41%, 82.21%, and 64.9%, respectively. In the binary classification task, the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) values were 69.57%, 95.24%, 87.42%, and 0.89, respectively. In the visual assessment of GGN invasiveness in the binary classification task by the two thoracic radiologists, the sensitivity, specificity, and accuracy of the senior and junior radiologists were 58.93%, 90.51%, and 81.35% and 76.79%, 55.47%, and 61.66%, respectively. CONCLUSIONS The proposed multi-task deep learning model achieved good classification results in determining the invasiveness of GGNs. This model may help to select patients with invasive lesions who need surgery and to choose the proper surgical methods. KEY POINTS • The proposed multi-task model has achieved good classification results for the invasiveness of GGNs. • The proposed network includes a classification and segmentation branch to learn global and regional features, respectively.
• The multi-task model could assist doctors in selecting patients with invasive lesions who need surgery and choosing appropriate surgical methods.
Affiliation(s)
- Ye Yu, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Na Wang, SenseTime Research, Shanghai, 200233, China
- Ning Huang, SenseTime Research, Shanghai, 200233, China
- Yuanjie Zheng, School of Information Science and Engineering at Shandong Normal University, Jinan, 250358, China
- Yicheng Fu, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Xiaoqian Li, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Huawei Wu, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Jianrong Xu, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China.
- Jiejun Cheng, Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China.
23
Wang D, Zhang T, Li M, Bueno R, Jayender J. 3D deep learning based classification of pulmonary ground glass opacity nodules with automatic segmentation. Comput Med Imaging Graph 2021; 88:101814. [PMID: 33486368] [PMCID: PMC8111799] [DOI: 10.1016/j.compmedimag.2020.101814]
Abstract
Classifying ground-glass lung nodules (GGNs) into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) on diagnostic CT images is important to evaluate the therapy options for lung cancer patients. In this paper, we propose a joint deep learning model where the segmentation can better facilitate the classification of pulmonary GGNs. Based on our observation that masking the nodule to train the model results in better lesion classification, we propose to build a cascade architecture with both segmentation and classification networks. The segmentation model works as a trainable preprocessing module to provide the classification-guided 'attention' weight map to the raw CT data to achieve better diagnosis performance. We evaluate our proposed model and compare with other baseline models for 4 clinically significant nodule classification tasks, defined by a combination of pathology types, using 4 classification metrics: Accuracy, Average F1 Score, Matthews Correlation Coefficient (MCC), and Area Under the Receiver Operating Characteristic Curve (AUC). Experimental results show that the proposed method outperforms other baseline models on all the diagnostic classification tasks.
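One way to picture the cascade described above is to apply the segmentation network's output as a soft attention map over the raw CT volume before classification, so the classifier emphasizes the lesion while retaining some background context. The weighting below is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

def apply_attention(ct_volume, seg_prob):
    """Weight a raw CT volume by a segmentation network's nodule
    probability map before feeding it to a classifier.

    ct_volume: 3D array of CT intensities
    seg_prob:  3D array in [0, 1], same shape, nodule probability
    The 0.5 floor keeps some surrounding context visible; the exact
    weighting in the cited paper is learned end-to-end and may differ.
    """
    assert ct_volume.shape == seg_prob.shape
    return ct_volume * (0.5 + seg_prob)
```

Because the segmentation module is trainable, gradients from the classification loss can also flow back through the attention map, which is the sense in which segmentation "facilitates" classification in the cascade.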
Affiliation(s)
- Duo Wang, Department of Automation, Tsinghua University, Beijing 100084, China; Department of Radiology, Brigham and Women's Hospital, Boston 02115, USA.
- Tao Zhang, Department of Automation, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.
- Ming Li, Department of Radiology, Huadong Hospital affiliated to Fudan University, Shanghai 200040, China.
- Raphael Bueno, Department of Thoracic Surgery, Brigham and Women's Hospital, Boston 02115, USA; Harvard Medical School, Boston 02115, USA.
- Jagadeesan Jayender, Department of Radiology, Brigham and Women's Hospital, Boston 02115, USA; Harvard Medical School, Boston 02115, USA.
24
Ashraf SF, Yin K, Meng CX, Wang Q, Wang Q, Pu J, Dhupar R. Predicting benign, preinvasive, and invasive lung nodules on computed tomography scans using machine learning. J Thorac Cardiovasc Surg 2021; 163:1496-1505.e10. [PMID: 33726909] [DOI: 10.1016/j.jtcvs.2021.02.010]
Abstract
OBJECTIVE The study objective was to investigate if machine learning algorithms can predict whether a lung nodule is benign, adenocarcinoma, or its preinvasive subtype from computed tomography images alone. METHODS A dataset of chest computed tomography scans containing lung nodules was collected with their pathologic diagnosis from several sources. The dataset was split randomly into training (70%), internal validation (15%), and independent test sets (15%) at the patient level. Two machine learning algorithms were developed, trained, and validated. The first algorithm used the support vector machine model, and the second used deep learning technology: a convolutional neural network. Receiver operating characteristic analysis was used to evaluate the performance of the classification on the test dataset. RESULTS The support vector machine/convolutional neural network-based models classified nodules into 6 categories resulting in an area under the curve of 0.59/0.65 when differentiating atypical adenomatous hyperplasia versus adenocarcinoma in situ, 0.87/0.86 with minimally invasive adenocarcinoma versus invasive adenocarcinoma, 0.76/0.72 atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma, 0.89/0.87 atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma + invasive adenocarcinoma, and 0.93/0.92 atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma. Classifying benign versus atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma resulted in a micro-average area under the curve of 0.93/0.94 for the support vector machine/convolutional neural network models, respectively. The convolutional neural network-based methods had higher sensitivities than the support vector machine-based methods but lower specificities and accuracies. 
CONCLUSIONS The machine learning algorithms demonstrated reasonable performance in differentiating benign versus preinvasive versus invasive adenocarcinoma from computed tomography images alone. However, the prediction accuracy varies across its subtypes. This holds the potential for improved diagnostic capabilities with less-invasive means.
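The AUCs that these classification studies report can be computed directly from classifier scores via the Mann-Whitney statistic: the AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative case, with ties counting half. A minimal sketch:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the scaled Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as half a win. O(n*m); rank-based versions are
    faster for large samples.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the 0.59 to 0.94 values above should be read.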
Affiliation(s)
- Syed Faaz Ashraf, Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa
- Ke Yin, Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Qi Wang, Department of Radiology, The Fourth Hospital of Hebei Medical University, Hebei, China
- Qiong Wang, Department of Radiology, The Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jiantao Pu, Department of Radiology, University of Pittsburgh, Pittsburgh, Pa; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pa
- Rajeev Dhupar, Department of Cardiothoracic Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pa; VA Pittsburgh Healthcare System, Pittsburgh, Pa.
25
Li C, Jiang C, Gong J, Wu X, Luo Y, Sun G. A CT-based logistic regression model to predict spread through air space in lung adenocarcinoma. Quant Imaging Med Surg 2020; 10:1984-1993. [PMID: 33014730] [DOI: 10.21037/qims-20-724]
Abstract
Background Spread through air space (STAS) is a novel invasive pattern of lung adenocarcinoma and a risk factor for recurrence and worse prognosis. This study aimed to develop and validate a computed tomography (CT)-based logistic regression model to predict STAS in lung adenocarcinoma. Methods This retrospective study was approved by the institutional review boards of two centers and included 578 patients (462 from center I and 116 from center II) with pathologically confirmed lung adenocarcinoma. STAS was identified in 90 center I patients (19.5%) and 28 center II patients (24.1%). The maximum diameter, nodule area, and area of solid components in part-solid nodules were measured. Twenty-one semantic characteristics were assessed. Univariate analysis was used to select CT characteristics associated with STAS in the patient cohort of center I. Multivariable logistic regression was used to develop a model based on the CT characteristics with statistical significance. The model was validated in the validation cohort and then tested in the external test cohort (patients from center II). The diagnostic performance of the model was measured by the area under the curve (AUC) of the receiver operating characteristic (ROC). Results On univariate analysis, age and 11 CT characteristics, including the maximum diameter of the tumor, the maximum area of the tumor, the area and ratio of the solid component, nodule type, pleural thickening, pleural retraction, mediastinal lymph node enlargement, vascular cluster sign, lobulation, and spiculation, were found to be significantly associated with STAS. The optimal logistic regression model included age, maximum diameter, and ratio of solid component, with odds ratio (OR) values of 0.967 (95% CI: 0.944-0.988), 1.027 (95% CI: 1.008-1.046), and 5.14 (95% CI: 2.180-13.321), respectively. This model achieved an AUC of 0.801 (95% CI: 0.709-0.892) and 0.692 (95% CI: 0.518-0.866) in the validation cohort and the external test cohort, respectively. The difference was not statistically significant (P=0.280). Conclusions A CT-based logistic regression machine learning model could preoperatively predict STAS in lung adenocarcinoma with good diagnostic performance and could be supplementary to routine CT interpretation.
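The reported odds ratios map back to logistic coefficients via beta = ln(OR), so a prediction from such a model can be sketched as below. The intercept was not reported in the abstract, so its value here is a placeholder assumption, and the resulting probabilities are illustrative only.

```python
import math

def predict_stas(age_years, max_diameter_mm, solid_ratio, intercept=-2.0):
    """Predicted STAS probability from a logistic model rebuilt from
    the reported odds ratios (coefficient = ln(OR)).

    Assumptions: the intercept is a placeholder (not reported in the
    abstract), and the covariate units/scaling mirror the paper's,
    which is not guaranteed.
    """
    betas = {
        "age": math.log(0.967),          # OR 0.967 per year
        "diameter": math.log(1.027),     # OR 1.027 per mm
        "solid_ratio": math.log(5.14),   # OR 5.14 per unit ratio
    }
    logit = (intercept
             + betas["age"] * age_years
             + betas["diameter"] * max_diameter_mm
             + betas["solid_ratio"] * solid_ratio)
    return 1.0 / (1.0 + math.exp(-logit))  # logistic link
```

The signs follow the reported ORs: probability falls with age (OR below 1) and rises with diameter and solid-component ratio (ORs above 1).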
Affiliation(s)
- Chuanjun Li
- Department of Radiology, Pingshan District People's Hospital of Shenzhen, Shenzhen, China
- Changsi Jiang
- Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Jingshan Gong
- Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Xiaotao Wu
- Department of Radiology, Pingshan District People's Hospital of Shenzhen, Shenzhen, China
- Yan Luo
- Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Guopin Sun
- Department of Radiology, Pingshan District People's Hospital of Shenzhen, Shenzhen, China
26
Khemasuwan D, Sorensen JS, Colt HG. Artificial intelligence in pulmonary medicine: computer vision, predictive model and COVID-19. Eur Respir Rev 2020; 29:29/157/200181. [PMID: 33004526 PMCID: PMC7537944 DOI: 10.1183/16000617.0181-2020] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Accepted: 08/20/2020] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is transforming healthcare delivery. The digital revolution in medicine and healthcare information is prompting a staggering growth of data intertwined with elements from many digital sources such as genomics, medical imaging and electronic health records. Such massive growth has sparked the development of an increasing number of AI-based applications that can be deployed in clinical practice. Pulmonary specialists who are familiar with the principles of AI and its applications will be empowered and prepared to seize future practice and research opportunities. The goal of this review is to provide pulmonary specialists and other readers with information pertinent to the use of AI in pulmonary medicine. First, we describe the concept of AI and some of the requisites of machine learning and deep learning. Next, we review some of the literature relevant to the use of computer vision in medical imaging, predictive modelling with machine learning, and the use of AI for battling the novel severe acute respiratory syndrome-coronavirus-2 pandemic. We close our review with a discussion of limitations and challenges pertaining to the further incorporation of AI into clinical pulmonary practice. Artificial intelligence (AI) is changing the landscape in medicine. AI-based applications will empower pulmonary specialists to seize modern practice and research opportunities. Data-driven precision medicine is already here. https://bit.ly/324tl2m
Affiliation(s)
- Danai Khemasuwan
- Division of Pulmonary and Critical Care Medicine, Virginia Commonwealth University, Richmond, VA, USA
- Henri G Colt
- Division of Pulmonary and Critical Care Medicine, University of California Irvine, Irvine, CA, USA
27
Ni Y, Yang Y, Zheng D, Xie Z, Huang H, Wang W. The Invasiveness Classification of Ground-Glass Nodules Using 3D Attention Network and HRCT. J Digit Imaging 2020; 33:1144-1154. [PMID: 32705434 PMCID: PMC7649842 DOI: 10.1007/s10278-020-00355-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Early-stage lung cancer often appears as ground-glass nodules (GGNs). The diagnosis of a GGN as a preinvasive lesion (PIL) or invasive adenocarcinoma (IA) is very important for further treatment planning. This paper proposes an automatic GGN invasiveness classification algorithm for adenocarcinoma. 1431 clinical cases with a total of 1624 GGNs (3-30 mm) were collected from Shanghai Cancer Center for the study. The data are in high-resolution computed tomography (HRCT) format. First, an automatic GGN detector composed of a 3D U-Net and a 3D multi-receptive field (multi-RF) network detects the locations of GGNs. Then, a deep 3D convolutional neural network (3D-CNN) called Attention-v1 is used to identify the GGNs' invasiveness. An attention mechanism was introduced into the 3D-CNN. This paper conducted a comparison experiment on the performance of Attention-v1, ResNet, and a random forest algorithm; ResNet is one of the most advanced convolutional neural network structures. The competition performance metric (CPM) of the automatic GGN detector reached 0.896. The accuracy, sensitivity, specificity, and area under the curve (AUC) value of the Attention-v1 structure are 85.2%, 83.7%, 86.3%, and 92.6%, respectively. The algorithm proposed in this paper outperforms ResNet and random forest in sensitivity, accuracy, and AUC value. The deep 3D-CNN's classification result is better than that of the traditional machine learning method. The attention mechanism improves the 3D-CNN's performance compared with the residual block. The automatic GGN detector combined with Attention-v1 can be used to construct a GGN invasiveness classification algorithm to help patients and doctors in treatment.
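For readers unfamiliar with the competition performance metric (CPM) quoted above: it is the mean sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan on the detector's FROC curve. A minimal sketch, with a made-up FROC curve for illustration (the study's detector reached a CPM of 0.896):

```python
import numpy as np

# The seven reference false-positive-per-scan rates used by the CPM.
FP_RATES = [1 / 8, 1 / 4, 1 / 2, 1, 2, 4, 8]

def cpm(froc_fp, froc_sens):
    """Mean sensitivity at the seven reference FP rates, linearly
    interpolated from (FP-per-scan, sensitivity) FROC points.
    froc_fp must be increasing, as np.interp requires."""
    return float(np.mean(np.interp(FP_RATES, froc_fp, froc_sens)))

# Hypothetical FROC operating points of a detector (not from the paper).
fp = [0.125, 0.25, 0.5, 1, 2, 4, 8]
sens = [0.70, 0.78, 0.85, 0.89, 0.92, 0.94, 0.95]
score = cpm(fp, sens)  # mean of the seven sensitivities here: 0.861
```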
Affiliation(s)
- Yangfan Ni
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Science, Shanghai, 200083, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Yuanyuan Yang
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Science, Shanghai, 200083, China
- Dezhong Zheng
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Science, Shanghai, 200083, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Zhe Xie
- Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Science, Shanghai, 200083, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Haozhe Huang
- Department of Interventional Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Weidong Wang
- The General Hospital of the People's Liberation Army, No. 28 Fuxing Road, Haidian District, Beijing, 100039, China
28
Qi L, Lu W, Wu N, Wang J. Persistent pulmonary subsolid nodules with a solid component smaller than 6 mm: what do we know? J Thorac Dis 2020; 12:4584-4587. [PMID: 32944381 PMCID: PMC7475582 DOI: 10.21037/jtd-20-1972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Linlin Qi
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wenwen Lu
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
- Ning Wu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianwei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
29
Wang F, Liu X, Yuan N, Qian B, Ruan L, Yin C, Jin C. Study on automatic detection and classification of breast nodule using deep convolutional neural network system. J Thorac Dis 2020; 12:4690-4701. [PMID: 33145042 PMCID: PMC7578508 DOI: 10.21037/jtd-19-3013] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Background Conventional manual ultrasound scanning and human diagnosis of breast lesions are considered operator-dependent, relatively slow, and error-prone. In this study, we used an Automated Breast Ultrasound (ABUS) machine for scanning and deep convolutional neural network (CNN) technology, a kind of deep learning (DL) algorithm, for the detection and classification of breast nodules, aiming to achieve automatic and accurate diagnosis of breast nodules. Methods Two hundred and ninety-three lesions from 194 patients with definite pathological diagnoses (117 benign and 176 malignant) were recruited as the case group. Another 70 patients without breast diseases were enrolled as the control group. All breast scans were carried out by an ABUS machine and then randomly divided into a training set, a verification set, and a test set, with a proportion of 7:1:2. In the training set, we constructed a detection model with a three-dimensional U-shaped convolutional neural network (3D U-Net) architecture to segment the nodules from background breast images. Residual blocks, attention connections, and hard mining were used to optimize the model, while random cropping, flipping, and rotation were used for data augmentation. In the test phase, the current model was compared with those in previously reported studies. In the verification set, the effectiveness of the detection model was evaluated. In the classification phase, multiple convolutional layers and fully connected layers were applied to set up a classification model, aiming to identify whether the nodule was malignant. Results Our detection model yielded a sensitivity of 91% and 1.92 false positives per automatically scanned volume.
Conclusions A deep CNN combined with ABUS may be a promising tool for easy detection and accurate diagnosis of breast nodules. The classification model achieved a sensitivity of 87.0%, a specificity of 88.0%, and an accuracy of 87.5%.
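The two detection figures quoted above (lesion-level sensitivity and false positives per scan) are simple ratios. A minimal sketch; the counts below are hypothetical, chosen only so that they reproduce the reported 91% sensitivity and 1.92 FPs per scan:

```python
def detection_metrics(true_pos, total_lesions, false_pos, n_scans):
    """Lesion-level sensitivity and false positives per scanned volume."""
    sensitivity = true_pos / total_lesions
    fp_per_scan = false_pos / n_scans
    return sensitivity, fp_per_scan

# Hypothetical counts (not from the paper), matching the reported metrics.
sens, fp = detection_metrics(true_pos=267, total_lesions=293,
                             false_pos=113, n_scans=59)
```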
Affiliation(s)
- Feiqian Wang
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Xiaotong Liu
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Na Yuan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Buyue Qian
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Litao Ruan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Changchang Yin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Ciping Jin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
30
Ohno Y, Aoyagi K, Yaguchi A, Seki S, Ueno Y, Kishida Y, Takenaka D, Yoshikawa T. Differentiation of Benign from Malignant Pulmonary Nodules by Using a Convolutional Neural Network to Determine Volume Change at Chest CT. Radiology 2020; 296:432-443. [DOI: 10.1148/radiol.2020191740] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
31
Zhou XL, Wang EG, Lin Q, Dong GP, Wu W, Huang K, Lai C, Yu G, Zhou HC, Ma XH, Jia X, Shi L, Zheng YS, Liu LX, Ha D, Ni H, Yang J, Fu JF. Diagnostic performance of convolutional neural network-based Tanner-Whitehouse 3 bone age assessment system. Quant Imaging Med Surg 2020; 10:657-667. [PMID: 32269926 DOI: 10.21037/qims.2020.02.20] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
Background Bone age can reflect the true growth and development status of a child; thus, it plays a critical role in evaluating growth and endocrine disorders. This study established and validated an optimized Tanner-Whitehouse 3 artificial intelligence (TW3-AI) bone age assessment (BAA) system based on a convolutional neural network (CNN). Methods A data set of 9,059 clinical radiographs of the left hand was obtained from the picture archiving and communication systems (PACS) between January 2012 and December 2016. Among these, 8,005/9,059 (88%) samples were treated as the training set for model implementation, 804/9,059 (9%) samples as the validation set for parameter optimization, and the remaining 250/9,059 (3%) samples were used to verify the accuracy and reliability of the model compared to that of 4 experienced endocrinologists and 2 experienced radiologists. The overall variation of the TW3 radius, ulna, and short bones (RUS) score and the TW3 carpal bone score, as well as of each bone (13 RUS + 7 carpal), between reviewers and the AI was compared by Bland-Altman charts and the kappa test, respectively. Furthermore, the time consumption of the model and the reviewers was also compared. Results The performance of the TW3-AI model was highly consistent with the reviewers' overall estimation, with a root mean square (RMS) of 0.50 years. The accuracy of the BAA of the TW3-AI model was better than that of the reviewers. Further analysis revealed that human interpretations of the capitate, hamate, and first distal and fifth middle phalanges in males, and of the capitate, trapezoid, and third and fifth middle phalanges in females, were the most inconsistent. The average image processing time was 1.5±0.2 s for the TW3-AI model, which was significantly shorter than manual interpretation. Conclusions The diagnostic performance of CNN-based TW3 BAA was accurate and time-saving, with better stability than diagnoses made by experienced experts.
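The two agreement measures used above are straightforward to compute. A minimal sketch of the RMS difference and Bland-Altman 95% limits of agreement between model and reviewer bone-age estimates; the sample ages below are synthetic:

```python
import numpy as np

def rms_difference(model_ages, reviewer_ages):
    """Root mean square of the model-minus-reviewer differences (years)."""
    d = np.asarray(model_ages, dtype=float) - np.asarray(reviewer_ages, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def bland_altman_limits(model_ages, reviewer_ages):
    """Bland-Altman 95% limits of agreement: bias +/- 1.96 * SD of differences."""
    d = np.asarray(model_ages, dtype=float) - np.asarray(reviewer_ages, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return float(bias - half_width), float(bias + half_width)

# Synthetic paired estimates (years), for illustration only.
model = [10.2, 8.9, 12.1, 7.4, 11.0]
reviewer = [10.0, 9.3, 11.8, 7.9, 10.6]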
Affiliation(s)
- Xue-Lian Zhou
- The Children's Hospital, Zhejiang University School of Medicine, Division of Endocrinology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Er-Gang Wang
- Center for Genomics and Computational Biology, Duke University, Durham, NC, USA
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Qiang Lin
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Guan-Ping Dong
- The Children's Hospital, Zhejiang University School of Medicine, Division of Endocrinology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Wei Wu
- The Children's Hospital, Zhejiang University School of Medicine, Division of Endocrinology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Ke Huang
- The Children's Hospital, Zhejiang University School of Medicine, Division of Endocrinology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Can Lai
- The Children's Hospital, Zhejiang University School of Medicine, Division of Radiology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Gang Yu
- The Children's Hospital, Zhejiang University School of Medicine, Division of Information Science, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Hai-Chun Zhou
- The Children's Hospital, Zhejiang University School of Medicine, Division of Radiology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Xiao-Hui Ma
- The Children's Hospital, Zhejiang University School of Medicine, Division of Radiology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Xuan Jia
- The Children's Hospital, Zhejiang University School of Medicine, Division of Radiology, National Clinical Research Center for Child Health, Hangzhou 310052, China
- Lei Shi
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Yong-Sheng Zheng
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Lan-Xuan Liu
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Da Ha
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Hao Ni
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Jun Yang
- Hangzhou YITU Healthcare Technology Co., Ltd, Hangzhou 310012, China
- Jun-Fen Fu
- The Children's Hospital, Zhejiang University School of Medicine, Division of Endocrinology, National Clinical Research Center for Child Health, Hangzhou 310052, China
32
Xia X, Gong J, Hao W, Yang T, Lin Y, Wang S, Peng W. Comparison and Fusion of Deep Learning and Radiomics Features of Ground-Glass Nodules to Predict the Invasiveness Risk of Stage-I Lung Adenocarcinomas in CT Scan. Front Oncol 2020; 10:418. [PMID: 32296645 PMCID: PMC7136522 DOI: 10.3389/fonc.2020.00418] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Accepted: 03/10/2020] [Indexed: 01/15/2023] Open
Abstract
For stage-I lung adenocarcinoma, the 5-year disease-free survival (DFS) rate of non-invasive adenocarcinoma (non-IA) differs from that of invasive adenocarcinoma (IA). This study aims to develop CT image-based artificial intelligence (AI) schemes to classify between non-IA and IA nodules, and to incorporate deep learning (DL) and radiomics features to improve the classification performance. We collect 373 surgically and pathologically confirmed ground-glass nodules (GGNs) from 323 patients in two centers, involving 205 non-IA (including 107 adenocarcinoma in situ and 98 minimally invasive adenocarcinoma) and 168 IA. We first propose a recurrent residual convolutional neural network based on U-Net to segment the GGNs. Then, we build two schemes to classify between non-IA and IA, namely a DL scheme and a radiomics scheme. Third, to improve the classification performance, we fuse the prediction scores of the two schemes by applying an information fusion method. Finally, we conduct an observer study to compare our scheme's performance with that of two radiologists by testing on an independent dataset. Compared with the DL scheme and the radiomics scheme (areas under the receiver operating characteristic curve (AUC): 0.83 ± 0.05 and 0.87 ± 0.04), our new fusion scheme (AUC: 0.90 ± 0.03) significantly improves the risk classification performance (p < 0.05). In a comparison with two radiologists, our new model yields a higher accuracy of 80.3%. The kappa value for inter-radiologist agreement is 0.6. This demonstrates that applying AI methods is an effective way to improve the invasiveness risk prediction performance for GGNs. In the future, fusion of DL and radiomics features may have the potential to handle classification tasks with limited datasets in medical imaging.
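The score-fusion step above can be illustrated with a simple sketch. The abstract does not specify the fusion rule, so the weighted average below is an assumption, as are the weight and the synthetic labels and scores; the AUC helper is a standard Mann-Whitney rank formulation, not code from the study:

```python
import numpy as np

def rank_auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic (binary labels, no ties)."""
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(order), dtype=float)
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fuse(dl_scores, radiomics_scores, w=0.5):
    """Assumed fusion rule: weighted average of the two schemes' scores."""
    return w * np.asarray(dl_scores) + (1 - w) * np.asarray(radiomics_scores)
```

With synthetic scores, one can check that fusing two complementary score vectors can yield a higher AUC than either alone, which is the effect the study reports (0.83/0.87 fused to 0.90).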
Affiliation(s)
- Xianwu Xia
- Department of Radiology, Municipal Hospital Affiliated to Medical School of Taizhou University, Taizhou, China
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Wen Hao
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Ting Yang
- Department of Radiology, Municipal Hospital Affiliated to Medical School of Taizhou University, Taizhou, China
- Yeqing Lin
- Department of Radiology, Municipal Hospital Affiliated to Medical School of Taizhou University, Taizhou, China
- Shengping Wang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
33
Wang J, Chen X, Lu H, Zhang L, Pan J, Bao Y, Su J, Qian D. Feature-shared adaptive-boost deep learning for invasiveness classification of pulmonary subsolid nodules in CT images. Med Phys 2020; 47:1738-1749. [PMID: 32020649 DOI: 10.1002/mp.14068] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Revised: 01/08/2020] [Accepted: 01/22/2020] [Indexed: 12/30/2022] Open
Abstract
PURPOSE In clinical practice, invasiveness is an important reference indicator for differentiating the malignant degree of subsolid pulmonary nodules. These nodules can be classified as atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IAC). The automatic determination of a nodule's invasiveness based on chest CT scans can guide treatment planning. However, it is challenging, owing to the insufficiency of training data and their interclass similarity and intraclass variation. To address these challenges, we propose a two-stage deep learning strategy for this task: prior-feature learning followed by adaptive-boost deep learning. METHODS The adaptive-boost deep learning is proposed to train a strong classifier for invasiveness classification of subsolid nodules in chest CT images, using multiple 3D convolutional neural network (CNN)-based weak classifiers. Because ensembles of multiple deep 3D CNN models have a huge number of parameters and require large computing resources along with more training and testing time, the prior-feature learning is proposed to reduce the computations by sharing the CNN layers between all weak classifiers. Using this strategy, all weak classifiers can be integrated into a single network. RESULTS Tenfold cross validation of binary classification was conducted on a total of 1357 nodules, including 765 noninvasive (AAH and AIS) and 592 invasive nodules (MIA and IAC). Ablation experimental results indicated that the proposed binary classifier achieved an accuracy of 73.4% ± 1.4 with an AUC of 81.3% ± 2.2. These results are superior compared to those achieved by three experienced chest imaging specialists who achieved an accuracy of 69.1%, 69.3%, and 67.9%, respectively. About 200 additional nodules were also collected. These nodules covered 50 cases for each category (AAH, AIS, MIA, and IAC, respectively).
Both binary and multiple classifications were performed on these data, and the results demonstrated that the proposed method achieves better performance than nonensemble deep learning methods. CONCLUSIONS The proposed adaptive-boost deep learning can significantly improve the performance of invasiveness classification of pulmonary subsolid nodules in CT images, while the prior-feature learning significantly reduces the total size of the deep models. The promising results on clinical data show that the trained models could be used as an effective lung cancer screening tool in hospitals. Moreover, the proposed strategy can be easily extended to other similar classification tasks in 3D medical images.
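The "adaptive-boost" combination of weak classifiers can be sketched with the classic AdaBoost weighting rule: each weak learner receives a weight alpha = 0.5·ln((1 − err)/err) from its error rate, and the ensemble takes the sign of the alpha-weighted vote. This is a minimal generic sketch, not the paper's shared-layer CNN ensemble; the fixed ±1 prediction vectors stand in for the weak CNN classifiers:

```python
import numpy as np

def adaboost_alphas(error_rates):
    """AdaBoost learner weights from weighted error rates (0 < err < 0.5
    gives positive weight; smaller error gives larger weight)."""
    err = np.asarray(error_rates, dtype=float)
    return 0.5 * np.log((1.0 - err) / err)

def ensemble_predict(weak_preds, alphas):
    """Sign of the alpha-weighted vote.
    weak_preds: (n_learners, n_samples) array of +/-1 predictions."""
    votes = np.asarray(alphas)[:, None] * np.asarray(weak_preds, dtype=float)
    return np.sign(votes.sum(axis=0))
```

The prior-feature idea in the paper then amounts to computing the weak learners' inputs once through shared layers, so the ensemble costs little more than a single network.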
Affiliation(s)
- Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Xiaorong Chen
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Lichi Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianfeng Pan
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Yong Bao
- Changzhou Industrial Technology Research Institute of Zhejiang University, Changzhou, 213022, China
- Jiner Su
- Medical Imaging Department, Jinhua Municipal Central Hospital, Jinhua, 321001, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
34
CT-based deep learning model to differentiate invasive pulmonary adenocarcinomas appearing as subsolid nodules among surgical candidates: comparison of the diagnostic performance with a size-based logistic model and radiologists. Eur Radiol 2020; 30:3295-3305. [PMID: 32055949 DOI: 10.1007/s00330-019-06628-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 11/09/2019] [Accepted: 12/13/2019] [Indexed: 12/17/2022]
Abstract
OBJECTIVES To evaluate the deep learning models for differentiating invasive pulmonary adenocarcinomas (IACs) among subsolid nodules (SSNs) considered for resection in a retrospective diagnostic cohort in comparison with a size-based logistic model and expert radiologists. METHODS This study included 525 patients (309 women; median, 62 years) to develop models, and an independent cohort of 101 patients (57 women; median, 66 years) was used for validation. A size-based logistic model and deep learning models using 2.5-dimension (2.5D) and three-dimension (3D) CT images were developed to discriminate IAC from less invasive pathologies. Overall performance, discrimination, and calibration were assessed. Diagnostic performances of the three thoracic radiologists were compared with those of the deep learning model. RESULTS The overall performances of the deep learning models (Brier score, 0.122 for the 2.5D DenseNet and 0.121 for the 3D DenseNet) were superior to those of the size-based logistic model (Brier score, 0.198). The area under the receiver operating characteristic curve (AUC) of the 2.5D DenseNet (0.921) was significantly higher than that of the 3D DenseNet (0.835; p = 0.037) and the size-based logistic model (0.836; p = 0.009). At equally high sensitivities of 90%, the 2.5D DenseNet showed significantly higher specificity (88.2%; all p < 0.05) and positive predictive value (97.4%; all p < 0.05) than other models. Model calibration was poor for all models (all p < 0.05). The 2.5D DenseNet had a comparable performance with the radiologists (AUC, 0.848-0.910). CONCLUSION The 2.5D DenseNet model could be used as a highly sensitive and specific diagnostic tool to differentiate IACs among SSNs for surgical candidates. 
KEY POINTS • The deep learning model developed using 2.5D DenseNet showed higher overall performance and discrimination than the size-based logistic model for the differentiation of invasive adenocarcinomas among subsolid nodules for surgical candidates. • The 2.5D DenseNet demonstrated a thoracic radiologist-level diagnostic performance and had higher specificity (88.2%) at equal sensitivities (90%) than the size-based logistic model (specificity, 52.9%). • The 2.5D DenseNet could be used to reduce potential overtreatment for the indolent subsolid nodules or to select candidates for sublobar resection instead of the standard lobectomy.
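The Brier score used above to compare overall model performance is simply the mean squared difference between the predicted probability and the 0/1 outcome (lower is better). A minimal sketch with synthetic predictions:

```python
import numpy as np

def brier_score(labels, probs):
    """Mean squared error between 0/1 outcomes and predicted
    probabilities; 0 is perfect, 0.25 matches always predicting 0.5."""
    labels = np.asarray(labels, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return float(np.mean((probs - labels) ** 2))
```

On this scale, the reported values (0.121-0.122 for the DenseNets vs 0.198 for the size-based model) mean the deep models' probabilities sat much closer to the true outcomes.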
35
Gong J, Liu J, Hao W, Nie S, Zheng B, Wang S, Peng W. A deep residual learning network for predicting lung adenocarcinoma manifesting as ground-glass nodule on CT images. Eur Radiol 2019; 30:1847-1855. [PMID: 31811427 DOI: 10.1007/s00330-019-06533-w] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 09/10/2019] [Accepted: 10/18/2019] [Indexed: 12/19/2022]
Abstract
OBJECTIVE To develop a deep learning-based artificial intelligence (AI) scheme for predicting the likelihood of a ground-glass nodule (GGN) detected on CT images being invasive adenocarcinoma (IA), and to compare the accuracy of this AI scheme with that of two radiologists. METHODS First, we retrospectively collected 828 histopathologically confirmed GGNs of 644 patients from two centers. Among them, 209 GGNs are confirmed IA and 619 are non-IA, including 409 adenocarcinomas in situ and 210 minimally invasive adenocarcinomas. Second, we applied a series of preprocessing techniques, such as image resampling, rescaling and cropping, and data augmentation, to process the original CT images and generate new training and testing images. Third, we built an AI scheme based on a deep convolutional neural network by using a residual learning architecture and batch normalization technique. Finally, we conducted an observer study and compared the prediction performance of the AI scheme with that of two radiologists using an independent dataset with 102 GGNs. RESULTS The new AI scheme yielded an area under the receiver operating characteristic curve (AUC) of 0.92 ± 0.03 in classifying between IA and non-IA GGNs, which is equivalent to the senior radiologist's performance (AUC 0.92 ± 0.03) and higher than that of the junior radiologist (AUC 0.90 ± 0.03). The kappa value of the two sets of subjective prediction scores generated by the two radiologists is 0.6. CONCLUSIONS The study result demonstrates that an AI scheme can improve the performance in predicting IA, which can help the development of a more effective personalized cancer treatment paradigm. KEY POINTS • The feasibility of using a deep learning method to predict the likelihood of a ground-glass nodule being invasive adenocarcinoma. • A residual learning-based CNN model improves the performance in classifying between IA and non-IA nodules.
• Artificial intelligence (AI) scheme yields higher performance than radiologists in predicting invasive adenocarcinoma.
36
The Performance of Deep Learning Algorithms on Automatic Pulmonary Nodule Detection and Classification Tested on Different Datasets That Are Not Derived from LIDC-IDRI: A Systematic Review. Diagnostics (Basel) 2019; 9:diagnostics9040207. [PMID: 31795409 PMCID: PMC6963966 DOI: 10.3390/diagnostics9040207] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2019] [Revised: 11/25/2019] [Accepted: 11/28/2019] [Indexed: 12/27/2022] Open
Abstract
The aim of this study was to systematically review the performance of deep learning technology in detecting and classifying pulmonary nodules on computed tomography (CT) scans that were not from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. Furthermore, we explored the difference in performance when the deep learning technology was applied to test datasets different from the training datasets. Only peer-reviewed, original research articles utilizing deep learning technology were included in this study, and only results from testing on datasets other than the LIDC-IDRI were included. We searched a total of six databases: EMBASE, PubMed, Cochrane Library, the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Scopus, and Web of Science. This resulted in 1782 studies after duplicates were removed, and a total of 26 studies were included in this systematic review. Three studies explored the performance of pulmonary nodule detection only, 16 studies explored the performance of pulmonary nodule classification only, and 7 studies reported both pulmonary nodule detection and classification. Three different deep learning architectures were mentioned among the included studies: convolutional neural network (CNN), massive training artificial neural network (MTANN), and deep stacked denoising autoencoder extreme learning machine (SDAE-ELM). The studies reached classification accuracies of 68–99.6% and detection accuracies of 80.6–94%. The performance of deep learning technology in studies using different test and training datasets was comparable to that in studies using the same type of test and training datasets. In conclusion, deep learning was able to achieve high levels of accuracy, sensitivity, and/or specificity in detecting and/or classifying nodules when applied to pulmonary CT scans not from the LIDC-IDRI database.
37
Qi L, Lu W, Yang L, Tang W, Zhao S, Huang Y, Wu N, Wang J. Qualitative and quantitative imaging features of pulmonary subsolid nodules: differentiating invasive adenocarcinoma from minimally invasive adenocarcinoma and preinvasive lesions. J Thorac Dis 2019; 11:4835-4846. [PMID: 31903274] [DOI: 10.21037/jtd.2019.11.35]
Abstract
Background: To explore the role of qualitative and quantitative imaging features of pulmonary subsolid nodules (SSNs) in differentiating invasive adenocarcinoma (IAC) from minimally invasive adenocarcinoma (MIA) and preinvasive lesions. Methods: We reviewed the clinical records of our institute from October 2010 to December 2015 and included 316 resected SSNs from 287 patients: 260 pure ground-glass nodules, 47 part-solid nodules with solid components ≤5 mm, and 9 ground-glass nodules (GGNs) with cystic airspaces. Apart from the nine GGNs with cystic airspaces, the remaining 307 SSNs were divided according to the pathologic review results into two groups: A, comprising atypical adenomatous hyperplasia (AAH) (n=15), adenocarcinoma in situ (AIS) (n=56), and MIA (n=41); and B, comprising 195 IACs. Univariate and binary logistic regression analyses were conducted to identify independent risk factors for IAC. Results: Univariate analysis showed significant differences between groups regarding patient age, mean diameter, mean and relative computed tomography (CT) values, volume, mass (all P<0.001), and morphological features including lobulated sign (P<0.001), spiculated sign (P=0.028), vacuole sign/air bronchogram (P<0.001), and pleural retraction (P=0.017). Binary logistic regression and receiver operating characteristic analysis indicated the SSN mass as the only independent risk factor for IAC (odds ratio, 1.007; P<0.001), with an optimal cutoff value of 283.2 mg [area under curve (AUC): 0.859; sensitivity: 68.7%; specificity: 92.9%]. Among lepidic, acinar, and papillary adenocarcinomas, we found significant differences for the vacuole sign/air bronchogram (P=0.032) and the mean and relative CT values (P<0.001). All nine GGNs with cystic airspaces were IACs. Conclusions: The SSN mass, with an optimal cutoff value of 283.2 mg, may reliably differentiate IAC from MIA and preinvasive lesions.
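The "optimal cutoff" reported above comes from ROC analysis. A common way to pick such a cutoff is to maximize the Youden index (sensitivity + specificity - 1) over candidate thresholds. A minimal sketch of that idea follows; the nodule-mass values below are made up for illustration and are not the study's data.

```python
# Sketch: choosing an optimal cutoff on a continuous feature (e.g., SSN mass)
# by maximizing the Youden index J = sensitivity + specificity - 1.
# The values below are hypothetical, not the study's measurements.

def best_cutoff(scores_pos, scores_neg):
    """Scan candidate thresholds; return (cutoff, sensitivity, specificity)."""
    best = (None, 0.0, 0.0, -1.0)  # (cutoff, sens, spec, Youden J)
    for t in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)  # true positives
        spec = sum(s < t for s in scores_neg) / len(scores_neg)   # true negatives
        j = sens + spec - 1
        if j > best[3]:
            best = (t, sens, spec, j)
    return best[:3]

# Hypothetical nodule masses (mg): invasive adenocarcinomas vs. less invasive lesions
iac = [310, 450, 520, 280, 610, 390, 275, 700]
non_iac = [120, 90, 200, 150, 260, 110, 180, 240]
cutoff, sens, spec = best_cutoff(iac, non_iac)
```

In this toy data the two groups separate perfectly, so the chosen cutoff achieves sensitivity and specificity of 1.0; real data, as in the study (68.7% / 92.9%), involve a trade-off between the two.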
Affiliation(s)
- Linlin Qi: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Wenwen Lu: Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing 100191, China
- Lin Yang: Department of Diagnostic Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Wei Tang: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shijun Zhao: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yao Huang: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ning Wu: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China; PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianwei Wang: Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
38
Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, Mahendiran T, Moraes G, Shamdas M, Kern C, Ledsam JR, Schmid MK, Balaskas K, Topol EJ, Bachmann LM, Keane PA, Denniston AK. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019; 1:e271-e297. [PMID: 33323251] [DOI: 10.1016/s2589-7500(19)30123-2]
Abstract
BACKGROUND Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. METHODS In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. FINDINGS Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. 
Comparison of the performance between deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals. INTERPRETATION Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology. FUNDING None.
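The sensitivities and specificities above are derived from per-study 2x2 contingency tables. The review pooled them with a hierarchical bivariate model, which properly weights studies and models between-study variation; as a much simpler illustration of where the numbers come from, the sketch below computes per-study sensitivity/specificity and a naive pooled estimate by summing counts (hypothetical tables, not the review's data).

```python
# Simplified sketch: sensitivity/specificity from 2x2 contingency tables
# (tp, fp, fn, tn), plus a naive pooled estimate obtained by summing counts.
# NOTE: the review used a hierarchical bivariate model, not count-summing;
# this is only an illustration of the underlying quantities.

def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 tables for three studies
studies = [(80, 10, 20, 90), (45, 5, 5, 45), (60, 15, 10, 115)]

per_study = [sens_spec(*s) for s in studies]
tp, fp, fn, tn = (sum(col) for col in zip(*studies))
pooled_sens, pooled_spec = sens_spec(tp, fp, fn, tn)
```

Count-summing ignores between-study heterogeneity, which is exactly why meta-analyses of diagnostic accuracy prefer hierarchical models.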
Affiliation(s)
- Xiaoxuan Liu: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK; Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Health Data Research UK, London, UK
- Livia Faes: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland
- Aditya U Kale: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Siegfried K Wagner: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Dun Jack Fu: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Alice Bruynseels: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Thushika Mahendiran: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Gabriella Moraes: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Mohith Shamdas: Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Christoph Kern: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; University Eye Hospital, Ludwig Maximilian University of Munich, Munich, Germany
- Martin K Schmid: Eye Clinic, Cantonal Hospital of Lucerne, Lucerne, Switzerland
- Konstantinos Balaskas: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Eric J Topol: Scripps Research Translational Institute, La Jolla, California
- Pearse A Keane: NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Health Data Research UK, London, UK
- Alastair K Denniston: Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK; Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, UK; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK; Health Data Research UK, London, UK
39
Gong J, Liu J, Hao W, Nie S, Wang S, Peng W. Computer-aided diagnosis of ground-glass opacity pulmonary nodules using radiomic features analysis. Phys Med Biol 2019; 64:135015. [PMID: 31167172] [DOI: 10.1088/1361-6560/ab2757]
Abstract
This study aimed to develop a CT-based radiomic features analysis approach for the diagnosis of ground-glass opacity (GGO) pulmonary nodules, and to assess whether computer-aided diagnosis (CADx) performance changes when classifying benign nodules against malignant nodules of different histopathological subtypes, namely adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC). The study involved 182 histopathology-confirmed GGO nodules collected from two cancer centers: 59 benign, 50 AIS, 32 MIA, and 41 IAC nodules. Four training/testing data sets were assembled based on histopathological subtype: (1) all nodules, (2) benign and AIS nodules, (3) benign and MIA nodules, and (4) benign and IAC nodules. We first segmented the pulmonary nodules depicted in CT images using a 3D region-growing and geodesic active contour level set algorithm, then computed 1117 quantitative imaging features from the 3D segmented nodules. After normalizing the radiomic features, we applied a leave-one-out cross-validation (LOOCV) method to build models combining Relief feature selection, the synthetic minority oversampling technique (SMOTE), and three machine-learning classifiers: a support vector machine, logistic regression, and Gaussian naïve Bayes. When the four data sets were separately used to train and test the three classifiers, the average areas under the receiver operating characteristic curve (AUC) were 0.75, 0.55, 0.77, and 0.93, respectively. When tested on an independent data set, our scheme yielded higher accuracy than two radiologists (61.3% versus 53.1% for radiologist 1 and 56.3% for radiologist 2).
This study demonstrates (1) the feasibility of using a CT-based radiomic features analysis approach to distinguish between benign and malignant GGO nodules, (2) the higher performance of the CADx scheme in diagnosing GGO nodules compared with radiologists, and (3) a consistently positive trend between classification performance and the invasive grade of GGO nodules. Thus, to improve CADx performance in diagnosing GGO nodules, one should assemble an optimal training data set containing more nodules associated with non-invasive lung adenocarcinoma (i.e., AIS and MIA).
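LOOCV, as used above, holds out each sample once while the model trains on all the others, which suits small radiomics cohorts. The sketch below shows only that cross-validation scaffolding; the study's actual pipeline (Relief feature selection, SMOTE, SVM/logistic/naive-Bayes classifiers) is replaced here by a deliberately trivial nearest-centroid rule on a single hypothetical feature.

```python
# Sketch of leave-one-out cross-validation (LOOCV): each sample is held out
# once, the model trains on the rest, and accuracy is averaged over folds.
# A trivial 1-D nearest-centroid classifier stands in for the study's
# Relief + SMOTE + SVM/logistic/naive-Bayes pipeline; data are hypothetical.

def nearest_centroid_predict(train, test_x):
    """train: list of (feature, label); predict the label whose class mean is closest."""
    means = {}
    for label in {l for _, l in train}:
        vals = [x for x, l in train if l == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda l: abs(means[l] - test_x))

# Hypothetical 1-D radiomic feature values, benign (0) vs. malignant (1)
data = [(1.0, 0), (1.2, 0), (0.9, 0), (2.8, 1), (3.1, 1), (2.9, 1)]

correct = 0
for i, (x, y) in enumerate(data):
    train = data[:i] + data[i + 1:]          # leave sample i out
    if nearest_centroid_predict(train, x) == y:
        correct += 1
loocv_accuracy = correct / len(data)
```

Because the held-out sample never appears in its own training fold, LOOCV gives a nearly unbiased (if high-variance) estimate of out-of-sample accuracy.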
Affiliation(s)
- Jing Gong: Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai 200032, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China. Jing Gong and Jiyu Liu contributed equally to this work.
40
Liu C, Chen S, Yang Y, Shao D, Peng W, Wang Y, Chen Y, Wang Y. The value of the computer-aided diagnosis system for thyroid lesions based on computed tomography images. Quant Imaging Med Surg 2019; 9:642-653. [PMID: 31143655] [DOI: 10.21037/qims.2019.04.01]
Abstract
Background: Thyroid nodules are found on palpation in 4-7% of the asymptomatic population and in 50% of cases at autopsy, yet only a small proportion are malignant. The major challenge is the differential diagnosis of benign versus malignant thyroid nodules, so we aimed to develop a computer-aided diagnostic method for thyroid lesions based on computed tomography (CT) images. Methods: We retrospectively collected 52 benign and 46 malignant thyroid nodules from CT examinations of 90 patients, together with the pathology findings and radiology diagnoses. First-order statistics and gray-level co-occurrence matrix features were extracted from the thyroid CT images, and these texture features were used to assess the malignancy risk of the nodules. Several classification algorithms, including support vector machine, linear discriminant analysis, random forest, and bootstrap aggregating, were applied for prediction, and leave-one-out cross-validation was used to evaluate thyroid cancer recognition performance. Results: In thyroid cancer identification based on CT images, the system using 17 texture features and a support vector machine performed well: the accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value were 0.8673, 0.9105, 0.9130, 0.8269, 0.8235, and 0.9146, respectively. Conclusions: The proposed computer-aided diagnosis system provides a good assessment of the malignancy risk of thyroid nodules, which may help radiologists improve the accuracy and efficiency of thyroid diagnosis.
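The gray-level co-occurrence matrix (GLCM) features mentioned above count how often pairs of gray levels occur at adjacent pixel positions; texture descriptors are then computed from the normalized matrix. A minimal sketch for horizontal neighbors follows, with two classic Haralick-style features; the tiny image is illustrative only, and real pipelines would normally use a library such as scikit-image and multiple offsets/angles.

```python
# Sketch: gray-level co-occurrence matrix (GLCM) for horizontally adjacent
# pixels, plus two Haralick-style texture features (contrast, homogeneity).
# The 4x4 image and feature choice are illustrative, not the study's setup.

def glcm_horizontal(img, levels):
    """Normalized co-occurrence frequencies for pixel pairs (i, j) -> (i, j+1)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontal neighbor pairs
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    # Weights co-occurrences by squared gray-level difference: high for sharp edges.
    return sum(p[i][j] * (i - j) ** 2 for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    # Rewards co-occurrences near the diagonal: high for smooth regions.
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(len(p)) for j in range(len(p)))

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
P = glcm_horizontal(image, levels=4)
features = {"contrast": contrast(P), "homogeneity": homogeneity(P)}
```

Feature vectors like this (the study used 17 texture features) are what the SVM and other classifiers consume.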
Affiliation(s)
- Chenbin Liu: College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China; Radiation Oncology, Chinese Academy of Medical Science (CAMS) Shenzhen Cancer Hospital, Shenzhen 518117, China
- Shanshan Chen: College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
- Yunze Yang: Biodesign Institute, Arizona State University, Tempe, AZ, USA
- Dangdang Shao: Biodesign Institute, Arizona State University, Tempe, AZ, USA
- Wenxian Peng: College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China; Department of Radiology, Hangzhou Medical College, Hangzhou 310053, China
- Yan Wang: Biodesign Institute, Arizona State University, Tempe, AZ, USA
- Yihong Chen: Department of Radiology, Hangzhou Medical College, Hangzhou 310053, China
- Yuenan Wang: Radiation Oncology, Chinese Academy of Medical Science (CAMS) Shenzhen Cancer Hospital, Shenzhen 518117, China
41
Wang S, Zhang R, Deng Y, Chen K, Xiao D, Peng P, Jiang T. Discrimination of smoking status by MRI based on deep learning method. Quant Imaging Med Surg 2018; 8:1113-1120. [PMID: 30701165] [DOI: 10.21037/qims.2018.12.04]
Abstract
Background: This study aimed to assess the feasibility of deep learning-based magnetic resonance imaging (MRI) in the prediction of smoking status. Methods: Head MRI 3D-T1WI images of 127 subjects (61 smokers and 66 non-smokers) were collected, with 176 image slices obtained for each subject. Subjects were 23-45 years old, and the smokers had at least 5 years of smoking history. Approximately 25% of the subjects were randomly selected as the test set (15 smokers and 16 non-smokers), and the remaining subjects formed the training set. Two deep learning models were developed: a deep 3D convolutional neural network (Conv3D) and a convolutional neural network combined with a recurrent neural network using a long short-term memory architecture (ConvLSTM). Results: In the prediction of smoking status, the Conv3D model achieved an accuracy of 80.6% (25/31), a sensitivity of 80.0%, and a specificity of 81.3%, while the ConvLSTM model achieved an accuracy of 93.5% (29/31), a sensitivity of 93.33%, and a specificity of 93.75%. The accuracy obtained by these methods was significantly higher than that (<70%) obtained with support vector machine (SVM) methods. Conclusions: Deep learning applied to MRI can accurately predict smoking status. Studies with larger sample sizes are needed to improve accuracy and to predict the level of nicotine dependence.
Affiliation(s)
- Shuangkun Wang: Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China
- Dan Xiao: Tobacco Medicine and Tobacco Cessation Center, China-Japan Friendship Hospital, Beijing 100029, China; WHO Collaborating Center for Tobacco Cessation and Respiratory Diseases Prevention, China-Japan Friendship Hospital, Beijing 100029, China
- Peng Peng: Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China
- Tao Jiang: Department of Radiology, Beijing Chaoyang Hospital, Capital Medical University, Beijing 10020, China